Syntax and robot behavior

An excerpt from "Flesh and Machines" by Rodney Brooks
"Dances with Machines
"What separates people from animals is syntax and technology. Many
species of animals have a host of alert calls. For vervet monkeys one call means there is a bird of prey in the sky. Another means there is a snake on the ground. All members of the species agree on the mapping between particular sounds and these primitive meanings. But no vervet monkey can ever express to another "Hey, remember that snake we saw three days ago? There's one down here that looks just like it." That requires syntax. Vervet monkeys do not have it."
In a previous thread, "Where is behavior AI now", we discussed time-domain-based signals on simple inputs and outputs.
Isn't what Brooks is saying above that animals are not able to put such time-weighted concepts (i.e., the snake we saw three days ago) into their communications? Isn't this parallel to the state information discussion - to say that animals have a very limited ability to remember state?
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.

I think the difference is that humans have the ability to manipulate private state that is independent of the environment to an extent that is far beyond all other animals. Before we can develop language to talk about what happened yesterday, we first have to remember what happened yesterday, or what happened 10 minutes ago. By this, I mean we have the power to call up memories of the past. When we do that, all that I think is happening is that our brain is partially activating old states, created from experience.
So, when sensory data flows in, it's decoded through a large parallel network which specifies how the current sensory signals are different from other sensory signals as well as decoded all the way to the correct actions to take in response to this current sensory environment.
If we see/sense 100 things that are around us, it's because there are 100 different parts of the network activating at the same time in response to this current sensory environment. I know there is a computer in front of me only because parts of my brain that represent that idea have been activated in response to this visual data. But at the same time, many other lower level parts of the brain are being activated by the vision data - the parts that detect simple edges, and areas of color, and shapes. It's all this combined that creates our full experience of seeing a computer.
All that "state" is activated directly by our current and recent past sensory inputs and all that state is also driving our behavior.
But, humans also have the ability to activate some of that network state, independent of the current sensory inputs. We can create a memory of something that happened in the past by making part of that network activate again. I can close my eyes, and still "think" about looking at the computer. This memory is very weak and poor compared to the sensation of actually seeing a computer because only a very small part of my brain is being put back into that "seeing a computer" state when I have the memory.
If I remember seeing a snake yesterday, it's because my brain has sections which are able to disconnect from the current sensory experience. It's hard for example, for me to look at my computer screen, and have a past memory of seeing a snake at the same time. I almost have to close my eyes or at least, concentrate to block the sensory data in order to allow me to have a past memory of seeing a snake. This is because the current sensory data is trying to force the brain into the configuration of looking at a computer monitor.
Nonetheless, humans have a lot of power to do things like close our eyes, and make our mind drift back to partial (very partial) recreations of past experiences.
Our behavior, however, is a function of the entire state of the brain. So, when parts of our brains are recreating state from past experience, our behavior can also be a function of that part of our brain state, instead of being a function of only the brain state created from current sensory experience. In other words, we can produce behaviors that are a function of our memories. We can say something like, "hey, that snake is like the one I saw yesterday". That's because when we first saw the new snake, a small part of the brain switched back to a state that represented what happened yesterday. But not all of it switched back (not very much of it at all really), which is why we can for the most part not be confused about what is happening now and what is a memory. We only get confused about that if we cut off our sensory inputs so that the memories are all we have to react to - like what happens when we sleep and dream.
I think animals have nearly as much state information in their brain as we do. It's just that most of their state is always directly driven by the current sensory inputs. Most of our state works that way as well. It's how we know where we are and what's going on around us. But because we have this percentage of the brain that flaps in the wind and can flip back to old states, it allows us to react now as if we were reacting to something that happened last week.
So I don't think we have that much more state, or that we can react in ways all that much more complex than the ways many higher animals can react to their internal state. I just think that for some reason, some sections of our brain have a bit more freedom to disconnect from current sensory inputs and switch to active configurations which represent states that were active in the past. We constantly have memories of past events which allow us to act in complex ways that many animals don't seem to have. Most of them seem to be far more forced to react to only what is happening around them instead of having a brain that can switch to a past experience (aka daydream).
A dog, for example, shows clear signs that its behavior is based on past experience. Dogs run to the door to be let out because they know that door is how they are let out. But this doesn't seem to happen because they can recall a past memory of being let out that door. It seems to happen simply by conditioning. I don't see any real signs of dogs daydreaming. They only seem to react directly to what's happening around them (except when they are sleeping - they do seem to have dreams in that case, but that's easy to explain if the body cuts off the sensory signals and lets the brain free-wheel).
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

Yes to all you've said, except I interpret it differently.
I describe it by saying that we have an additional sense, a "sense of thought". That is, we can perceive our own thoughts at a sensory level, and process them through the lower brain's sensor fusion circuits without losing track of the fact that what we sensed actually originated internally - just as we don't lose the fact that what we just saw was a sight, not a sound.
I think this idea can be used as a basis for explaining most of what we observe as consciousness, dreaming (including day-dreaming), etc, and probably very many dysfunctions also, such as autism, bipolar disorders, etc.
I mention autism because if the sensory origin of each percept is lost during sensor fusion, then the ability to distinguish internal from external senses is lost, and internal disturbances can create instability in the apparent "real world", leading to profound disorientation. I would expect behaviours on the order of those encountered in autism to be the result.
Clifford Heath.

Yes, I've used that exact idea many times in trying to explain (and understand) what thought is. I strongly try to argue the point that we sense our thoughts just like we sense the external world.

But here's where it gets interesting.
Just where in the brain does "sensing" start and stop, and where does it turn into something else? And what else does it turn into after it stops being sensing? I think the entire path from sensor to effector is doing "sensing". I even argue that every neuron in the brain is acting as a sensor. But instead of sensing light, or heat, or pressure, most are sensing temporal patterns of neural activity in other neurons. Most of the neurons in our brain are in fact "brain activity sensors". With a head full of brain activity sensors, is it surprising that we can sense our own thoughts? :)
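Just to make that "brain activity sensor" idea concrete, here's a tiny toy sketch in Python of a node that fires only when it sees a particular temporal pattern in the spikes of two other nodes. The node names, the A-before-B pattern, and the 5ms window are all made up for the example - this isn't a claim about how real neurons compute.

# Toy "brain activity sensor": a node that fires when it detects a particular
# temporal pattern (here, input A spiking shortly before input B) in the spike
# trains of two other nodes.

class TemporalPatternSensor:
    def __init__(self, window=0.005):
        self.window = window          # seconds: how close A->B must be
        self.last_a = None            # time of the most recent spike on input A

    def on_spike(self, source, t):
        """Feed spikes from upstream nodes; return True when the pattern fires."""
        if source == "A":
            self.last_a = t
            return False
        if source == "B" and self.last_a is not None:
            return 0.0 < t - self.last_a <= self.window
        return False

sensor = TemporalPatternSensor()
events = [("A", 0.010), ("B", 0.013), ("B", 0.050), ("A", 0.051)]
for src, t in events:
    if sensor.on_spike(src, t):
        print("pattern detected at t =", t)   # fires once, at t = 0.013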
I used to think of my brain the same way I thought about a piece of electronic equipment like an audio amplifier connected to a microphone and speaker. The sound that it senses exists at the microphone, and is recreated at the speaker, but in the middle there was just "magic" that represented the sound using electrons. The sound didn't exist inside the machine. The amplifier "sensed" the sound only at the microphone. That was where sensing happened in a system like that.
Likewise, I felt that I sensed with my eyes and ears. I felt that the processing that then happened inside the brain, was all invisible to me - it was all just part of the magic of the subconscious. When I see a dog, I was seeing it with my eyes. When I hear a dog, I was hearing it with my ears (as we normally talk about these things).
But, after more thought on this problem, I realized that can't be how it works at all. This is because as the data from the eyes is processed through higher levels of the brain, the neural circuits are responding to higher level abstractions - to higher level meaning in the data. The cones in the eyes only see brightness, or lightness, at one spot. Further along, we have neurons that "see" center surround features. Further along, there are neurons that "see" edges. Way further along, there are collections of neurons that "see" the dog. It's not the eyes, or the first N layers of processing, that "sees" the dog; it's only the higher level circuits that can see a dog. And they "see" it by becoming active when the sensory data contains the correct type of dog pattern. The dog detection circuits in our brain are not seeing the light, they are seeing the "dog" in the firing patterns of other neurons.
We see how this can turn off and on when given optical illusions that are so distorted they are hard for our brain to find patterns in - such as the classic dalmatian dog in the middle of a picture full of black and white shadows. The first time you see the picture, you don't see the dog. But at some point, you find enough clues, and suddenly the dog jumps out at you. The picture before we saw the dog is what the lower level detection hardware was able to make out (mostly just odd meaningless white and black spots). But then suddenly, the "dog" detector found enough patterns to work with, and started to fire. Everything we understand about what we see, and hear, and feel, is due to the fact that there are neurons firing in our brain that represent that understanding. It's all just brain activity.
In other words, what this implies, is that our conscious awareness, aka, what we are constantly aware of, is all the neural activity happening in our brain. It's not the microphone, or the eyes, that's doing the real sensing work at all, it's a head full of neural activity detectors that allow us to have this complete understanding of what's happening around us.
But, where does this job of sensing stop, and the job of acting begin? Is one part of the brain used for sensing, where all our awareness is generated? The only real option would be the sensory cortex vs the motor cortex. But I strongly suspect the motor cortex is nothing more than a sensory cortex, wired up to sense our own behavior - to sense the outputs of the brain. So if we have conscious awareness of all the "magic" happening in the sensory cortex, we should have conscious awareness of all that is happening in the motor cortex as well. In other words, the entire information processing path, from sensor, to effector, is our conscious awareness - none of it is hidden to us.
So, this implies, that the processing happening in the brain is not hidden magic like what happens inside some piece of electronics. This implies that everything we are aware of, is what is happening in our brain, and that nothing else is happening in there (at least not in the path that connects sensors to effectors and forms this major feedback loop through the environment). What we are not aware of, is the low level chemical and biological processes at work which are busy re-wiring our brain - adjusting weights, adding neurons, etc. The data flowing in the brain, is what we are aware of.
So, when I look around the office and see all the stuff, it's not the office I'm sensing as much as it's the brain activity that I'm sensing. The stuff in my office is what caused these patterns of neural activity to form in my brain, but what I'm aware of is the neural activity, not the office itself. So like looking at a TV screen and seeing people, but knowing that in fact all I'm seeing is flashing red, blue, and green dots on the screen, I now look around the office and know that what I'm seeing is not an office full of stuff, but just the flashing of billions of neurons.
So with all that as background, let me repeat what you wrote above:

I don't think we have "lower level" sensory fusion circuits. I think the whole thing is sensory fusion circuits. The fusion happens at all levels. When a center surround detector activates because of the correct pattern of light levels in a small collections of cones, it is doing fusion. It's the same fusion that happens at all levels. The neurons are detecting temporal patterns of activity in other neurons, and in doing so, they "fuse" the information in those other neurons, into a new piece of information. A dog detector neuron fuses activity from other lower level detectors, which had previously fused activity from detectors below that.
We see the dog, because the dog neuron (or neurons) activate. We see his spots at the same time, because there are "spot" detectors activating at the same time. We see 3 of his legs, and 1 ear, because 3 "dog leg" detectors, and 1 dog ear detector, have also activated. The entire sensation of seeing a dog, in a particular position, is the sum total of the activation of all these detectors at once.
But what happens if later, the "dog" detectors activate, but there are no "dog leg" detectors active, or "dog ear" detectors active, or "dog spot" detectors active? This is the sensation we have of "a thought about a dog". What the thought is "about" is controlled simply by which high level detectors have been activated.
How do we tell a memory of a dog from the sensation of seeing a real dog? Simply by the fact that all the lower level detectors are not active at the same time. We no doubt have many different types of "dog" concepts all associated with different detection circuits in the cortex. The type of "dog" thought we are having is created by different combinations of these detectors.
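To make the percept-vs-thought distinction concrete, here's a toy Python sketch of that idea. The detector names and the rule for calling something a percept are invented purely for illustration, not a model of actual cortical wiring.

# A percept of a dog lights up detectors at every level; a mere thought or
# memory of a dog activates only the high-level "dog" node, so the lower
# detectors stay silent and we can tell the difference.

LOW  = {"edge", "spot", "patch_of_fur"}
MID  = {"dog_leg", "dog_ear", "dog_spots"}
HIGH = {"dog"}

def classify(active):
    """Guess whether a set of active detectors is a percept or just a thought."""
    if HIGH & active and MID & active and LOW & active:
        return "percept: seeing an actual dog"
    if HIGH & active:
        return "thought/memory: 'dog' node active without its low-level support"
    return "no dog-related state"

seeing   = LOW | MID | HIGH          # bottom-up activation from the eyes
thinking = {"dog"}                   # top node re-activated on its own
print(classify(seeing))
print(classify(thinking))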
So, I agree with your idea that we are sensing our own internal thoughts. But I don't believe it works because there is something feeding brain activity back to our lower level sensory circuits. I think it's just a natural process of what the entire brain is doing. It's translating sensory data into effector data, and all the middle terms of this translation are what make up our "awareness". There are "dog" signals in the middle of this translation simply because it was helpful for the brain to create these signals on its way to creating the signal that controls our arms and legs.

Yes, as a matter of fact, I've used the position that we are sensing our thoughts to explain in a mechanical way both what consciousness is, and why there is this pervasive myth in all cultures that the mind is separate from the body. I think it's the only correct way to understand our thoughts.

I don't understand autism enough to know much, but I've wondered about these sorts of things as well. A model which correctly explains brain function should also be able to explain all the brain dysfunctions. The dysfunctions should act as strong clues about the structure of the brain - if we only knew how to read the clues.
If a lack of connection to reality is a symptom though, the issue could be explained by a brain which is configured to be less sensitive to current sensory inputs, and more free to flap in the wind and generate its own memories and thoughts - more like a person constantly daydreaming and only slightly connected to reality.

--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

Yes... but that tends to imply a single amorphous massively-connected network, and that's clearly not what brains are. They do exist in clumps, layers, clusters of neurons having both local and distant communication channels, but local circuits predominate. Current theory is that there is a form of voting going on in sub-oscillatory circuits based on particular distances (loop-lengths, which relate to time-delays) being typical of each individual site. I.e. a certain cortical surface may be populated by neurons having most synapses almost exactly 3.2mm away, where other areas of the brain have other distances. The brain decides what it's sensing by the circuits forming reinforcement patterns that cause a particular pattern in that area, like when a player at Risk fights back and forth and eventually wins a continent.

Yes, possibly, but not all of the brain is contributing to all of the activity - the brain is not amorphous. The morphology has been extensively studied and related to particular sensations and activities, and there are clearly zones that correlate between individuals. There is also strong evidence that core structures from the cerebellum up relate to evolutionary phases - we have a reptilian brain, a mammalian brain wrapped around that, and a human brain in the morphology and function of the cortex. Many of the functions e.g. of the mammalian brain (mothering, emotion, etc) can be observed in all mammals, but not in reptiles... so these functions are clearly arbitrated within these structures, not the older ones. Though all parts of the brain might be involved, the functions only emerge when these higher structures are intact.
In regard to sensory fusion, it's very interesting to read studies on synaesthesia. In this condition, senses cross over - so a taste might be sensed as a texture ("there are too many points on the chicken stew"), or a sight might evoke a colour. By mapping the dimensions of the senses reported, it's possible to learn about the kinds of sensory processing occurring on individual senses before the sensations are fused. I figured out some of this stuff after reading "The Man Who Tasted Shapes" - look it up.
However, the upshot is that though we form a particular concept like "dog" at some level, the sensory origins are not lost in the fusion process. That means that each concept is associated with the activities (or at least the level of involvement) of the individual senses involved, so we can normally tell what part of our thought is imaginary. My theory is that we can inject hypothetical sensations in to the middling stages of sensory fusion, while retaining the ability to determine that the outcome of fusion has been affected by the injection. It's this ability to distinguish internal sensations from external ones that we call consciousness.
The evolutionary driver for it is that being able to conjure up hypothetical situations improves our ability to predict. We can run "what if" scenarios in more complex ways than lower animals. A hunting cat might have learnt that to stalk prey from behind cover is better, but it still has to look here and there to decide where the cover is best - that's a "what-if" scenario. Humans are just better at it. Prediction is the core of intelligence. It's also the core of our enjoyment of company, humour and music. We have a biological imperative to improve our ability to predict, in order to eat instead of being eaten.
BTW, the study of information theory and in particular data compression is absolutely key to understanding what's going on and how to reproduce it artificially. Compression is the art of removing whatever is predictable in an information stream. I've wondered if it would be feasible to build a type of robotic cerebellum using a small micro with large data compression tables to create learning behaviour like our muscle memory. Like Asimo's taught strides, but learnt instead.
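To show what I mean by compression being the removal of whatever is predictable, here's a toy adaptive predictor in Python that guesses each symbol from the one before it and only needs to keep the symbols it failed to guess. The stream and the order-1 scheme are arbitrary - just the simplest possible illustration, not a serious compressor.

# Minimal sketch of "compression = removing whatever is predictable": an
# order-1 adaptive predictor that guesses each next symbol from the previous
# one and only has to transmit the misses.
from collections import defaultdict, Counter

def predictive_code(stream):
    successors = defaultdict(Counter)   # counts of what followed each symbol
    prev, residual = None, []
    for sym in stream:
        guess = None
        if prev is not None and successors[prev]:
            guess = successors[prev].most_common(1)[0][0]
        if sym != guess:
            residual.append(sym)        # only unpredicted symbols need sending
        if prev is not None:
            successors[prev][sym] += 1
        prev = sym
    return residual

stream = list("abababababcababab")
residual = predictive_code(stream)
print(len(stream), "symbols in,", len(residual), "unpredicted symbols out")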
> Is

No. But remember - the cortex is only the couple of millimetres of the outer skin of the brain. Most of the kilojoules are burnt there, but the rest of the brain volume is also active, it's not just a backplane.

Well that's demonstrably wrong (reflexes and muscle memory aren't subject to direct perception and control), but I see where you're trying to go with it.

That theory offers no explanation of synaesthesia, for example.

That's true - but there is still a stage where a sound, or a sight, is still an isolated phenomenon, not yet joined into a single concept. That's a lower level, and AFAIK, not one into which we can inject a hypothetical sensation. At some point, the sensations are related to each other, unified into a single concept, and it's likely there that we can inject our "sense of thought". The circularity of this causes pulsing oscillations of activity which we identify as thought and can measure on EEGs - they're brainwaves.

I think you contradicted yourself pretty thoroughly there. It might not be accurate to talk of "levels" of processing, since there is a clear circularity, but there is clearly localization of certain *kinds* of processing. Not all areas are accessible to direct perception.
Clifford Heath.
Clifford Heath wrote:

This is very likely where Brooks' idea of the subsumption architecture originally came from. Newly-evolved areas subsuming functions of ancient structures.

That sounds like one of Oliver Sacks' essays. You might also check out V.S. Ramachandran's book, A Brief Tour of Human Consciousness, 2003, which also talks about synesthesia. His evidence indicates that synesthesia may be due to incoming sensory fibers which inadvertently migrate from their proper termination areas of [usually temporal] cortex into nearby areas, which process a different sensory modality.
http://en.wikipedia.org/wiki/Vilayanur_S._Ramachandran
"... Ramachandran suggested that synesthesia may arise from a similar cross-activation between brain regions. However, rather than being within a single sensory stream, this form of cross-activtion would occur between sensory streams, and is thought to be due to genetic differences, rather than neural re-organization..."

This is no doubt based on the fact there is topographic [spatial] mapping from senses to cortex in all modalities, and all later-on processing uses these maps as the "primary" reference point.

"taught strides" ??

Curt's comments seem to imply there is no such thing as subconscious processing, whereas it's commonly accepted that maybe 99% [whatever] of brain processing occurs subconsciously, below our awareness level.

It's just an issue of what parts create the "subconscious". That's a big part of the point I was getting at in the post. Whereas I had assumed for a long time that a lot of the neural signals in the neocortex were part of the "magic subconscious" that made everything work, I now believe it's more likely that the firing of the neurons, and the signals the spike codes represent, are the consciousness we are aware of. The underlying mechanisms that regulate when the neurons fire, and which rewire or adjust the connection strengths between the neurons, are the subconscious process we are not aware of. That is, when a synaptic strength changes, we are not consciously aware of that change. But when the neuron ends up firing later, when it wouldn't have otherwise, which then triggers a chain of activity in other neurons - that's what we are consciously aware of.
But notice I'm not talking about the entire brain and every neuron that fires. I'm only talking about those neurons (mostly in the neocortex) which are part of that direct signal path that connects sensory inputs to effector outputs and all the loops in that path.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

Oh yes, those direct paths. What do you suppose the neurons stuck in indirect paths are up to?
-- Joe Legris

Well, I think the neocortex is first and foremost a reinforcement learning machine. So in addition to the cortex, which is the reaction machine that is trained through experience, there must be critic hardware - the hard-coded circuits needed to generate the reward and punishment signals which shape the behavior of the neocortex. So that's part of the hardware I would expect to find in the midbrain for example. It's support circuitry for the cortex.
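As a rough sketch of what I mean by a hard-wired critic shaping a learner, here's a toy Python example. The states, actions, critic rule, and learning rate are all invented for illustration and aren't meant to model any actual brain circuit.

# The critic is a fixed function that turns outcomes into reward; the learner
# just strengthens whichever response the reward follows.
import random

def critic(state, action):
    """Hard-coded 'midbrain' rule: reward approaching food, reward withdrawing from pain."""
    return 1.0 if (state, action) in {("food_seen", "approach"),
                                      ("pain", "withdraw")} else -1.0

prefs = {}                                   # learned action preferences per state

def act(state, actions, epsilon=0.1):
    if random.random() < epsilon or state not in prefs:
        return random.choice(actions)
    return max(actions, key=lambda a: prefs[state].get(a, 0.0))

def learn(state, action, reward, lr=0.2):
    table = prefs.setdefault(state, {})
    old = table.get(action, 0.0)
    table[action] = old + lr * (reward - old)

actions = ["approach", "withdraw"]
for _ in range(500):
    state = random.choice(["food_seen", "pain"])
    a = act(state, actions)
    learn(state, a, critic(state, a))
print(prefs)   # preferences drift toward the critic-approved action in each state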
Second, I suspect the reinforcement learning cortex was added on top of a system of hard-coded instinctual behaviors which was a much older part of the brain before it developed these strong general learning abilities. So some large section of the lower brain is most likely hard-coded instincts which the learning part has the power to override. This is very important in most animals, which must be able to survive on their own within minutes of birth, but has been nearly completely replaced by learned skills in humans, who have the luxury of a very long learning period (years) before they have to develop all the skills for survival.
Next, much of the lower brain is no doubt there to help regulate important body functions, such as keeping the heart beating, controlling breathing to some extent, helping to control and regulate the digestion process, helping to regulate various chemical levels in the body, etc. - all that stuff that was needed to keep a complex organism alive before it developed these strong general learning skills.
The cerebellum is another major chunk of brain which is not a direct part of the main reinforcement learning system that I think creates our conscious awareness and intelligence. It seems to be some type of output motor processor which acts much like PID controllers act in our robots to help make the body parts respond proportionally to what the signals represent instead of having to build in the physics and reaction characteristics of each body part into the higher level controls.
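For reference, since I'm comparing the cerebellum to the PID controllers we put in robots, this is the standard discrete PID loop. The gains and the one-line "plant" are arbitrary toy values, not a model of anything biological.

# Standard discrete PID controller driving a trivially simple 1-D "joint".

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1)
position, dt = 0.0, 0.05
for _ in range(200):                     # drive the joint toward a setpoint of 1.0
    command = pid.update(setpoint=1.0, measured=position, dt=dt)
    position += command * dt             # toy plant: position integrates the command
print(round(position, 3))                # has moved most of the way to the 1.0 setpoint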
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

This is kind of the cortex = tabula rasa argument, but might not be so cut and dried as you indicate, since animals such as horses, which can get up and run and follow their mothers within a few minutes of birth, also have neocortex. Hard to imagine the cortex is doing nothing for the colt in early life, because it hasn't learned anything as yet.
Rather than contending all of this "instinctual" colt-early-life stuff takes place below the cortical level, and that the cortex is mainly just for reinforcement learning, it's much more likely the cortex has added many advanced processing capabilities on top of the older areas, but also the ability to modify those capabilities to a much greater extent than can happen in sub-cortical levels. IOW, the "general" functions of the 30+ cortical visual areas, as well as their interconnections with the rest of the brain, are actually determined in the genome, rather than learned after birth.

Yeah, that's the stuff that more research needs to answer.
Of course, the neocortex has been receiving signals and "learning" for a long time before birth, so it's not exactly "blank" when the horse is born.

It's possible. Research is needed to answer these questions.
Much of the bulk configuration of the neocortex is clearly predefined in the genome. But how much of the interconnections that develop happen after the eyes start sending data to the cortex? I don't know when the eyes form and first start to send signals, but I know it's long before birth. If we could just cut up, dissect, and stick probes in a few thousand human babies we could answer more of these questions. :)
Without knowing the answer, we still have the issue of what we can do as programmers and robot builders to push the technology forward while we wait for the neuroscience to figure out more about real brains.
I believe there is still much to be learned about generic signal processing and learning systems - the type of system you can feed any sensory signal to, and it will figure out on its own what to do with the sensory information. I happen to believe the neocortex and most of the supporting structure is just such a module in the brain, but whether I'm right or wrong about that is not all that important. What's important is whether there are better generic learning algorithms still possible to be developed. I think there are (and I think it's the key to creating human-like behavior in robots), so that's what I'm exploring.
If you think the cortex is instead a lot of custom-designed circuits, each for dealing with its own special type of sensory data, then you can go about trying to duplicate such algorithms in code for processing data such as video. It's wise for people to be working on both approaches to see what we can find.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /

It's what the neocortex is and that's mostly what I was making reference to even if I didn't make that clear. The lower, and older brain is mostly not part of the path from sensor to effector output.

The neocortex is mapped rather well into many different areas (I think Dan likes to quote something like 21 visual areas?). But it's not mapped by its physical structure as the rest of the brain is - it's mapped only by the nature of the signals being carried in each section - just as we would have to map computer memory. The neocortex, like computer memory, is a single amorphous massively-connected network - well, not exactly, because there are distinct pathways between different cortical sections, but the entire neocortex is made up of the same fundamental hardware structure, just like computer memory is all made up of the same fundamental hardware structure grouped into modules with pathways between them.

There are many current theories but few actual answers. :)

Are you talking about the diameters of the interconnects for superficial pyramidal neurons? Yes, it's clear that evolution has tuned their design to optimize the structure in different sections of the neocortex, and from species to species they are different. But still, the entire neocortex is basically the same structure - micro columns combined to form macro columns combined to form various cortical regions.

Yeah, the cortex definitely forms networks which seem to want to lock into different stable states - Walter Freeman's work seems to come up a lot when people talk about the connection to chaos theory and strange attractors in how the brain's state seems to lock onto various stable points.
The basic behavior of the brain tending to lock into a stable state is easily demonstrated with optical illusions that are poised between two stable interpretations, such as the goblet that looks like two faces. We can make our brain jump from one to the other, but once it's changed, it wants to lock onto the new interpretation and block the other.
This tendency is also easily understood by looking at the cortex as a pattern recognizer which uses feedback to lower levels to improve the accuracy of the interpretation. Once the higher level circuits see a "dog" in the picture, that information seems to be fed back to lower levels to allow us to see something like a dog-ear which we would never have interpreted as a dog-ear had we not first recognized the larger image to be a dog. Once you add feedback of that type, the system will naturally want to "lock" into a stable configuration (the one created by "I see a dog and lots of dog parts").
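Here's a toy sketch of how that kind of feedback can make an interpretation lock in: weak "dog ear" evidence can't cross threshold on its own, but once the higher-level "dog" unit activates, its feedback drags the part detectors over threshold and the whole configuration becomes stable. All the numbers here are arbitrary, chosen only to show the effect.

# Iterative settling with top-down feedback from the "dog" unit to its parts.

def settle(ear_evidence, leg_evidence, steps=10, threshold=0.5, feedback=0.4):
    dog = 0.0
    for _ in range(steps):
        ear = ear_evidence + feedback * dog      # bottom-up evidence + top-down bias
        leg = leg_evidence + feedback * dog
        parts_active = (ear > threshold) + (leg > threshold)
        dog = 1.0 if parts_active >= 1 else 0.0  # "dog" fires if any part is seen
    return dog, ear > threshold, leg > threshold

# Weak ear evidence alone is sub-threshold, but a clear leg drags the rest up:
print(settle(ear_evidence=0.3, leg_evidence=0.7))   # -> (1.0, True, True)
print(settle(ear_evidence=0.3, leg_evidence=0.3))   # -> (0.0, False, False)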
But still, it's not the fact that the brain has locked into a stable configuration that means "dog+dog parts". It's the actual neurons that fire which seem to represent what we are sensing. When the brain is locked into the "dog" pattern there will be a different set of neurons activated than when it's locked into a "cat" pattern.

Yes exactly. There are zones that represent different colors, and zones for faces, and zones for sounds. Each neuron or small cluster of neurons (like a cortical micro column) seems to have a precise "meaning" to our conscious awareness. We only sense that "meaning" when those neurons are active. Our total conscious awareness at any one time is then, most likely, simply a function of which cortical columns are currently active.
At the same time, these neurons can be activated artificially and in doing so, people report they are able to sense some conscious awareness connected to the stimulation. I don't know how extensive this type of testing has been to try and get a better map between activity and conscious awareness. I suspect it's considered too dangerous to do much of with humans so I suspect the testing has been limited mostly to cases where brain surgery was needed for other reasons.

Right, I'm only really interested in the neocortex. The rest for the most part doesn't seem to be very relevant to AI - it has more to do with keeping us alive than in making us intelligent.

I've not seen much on that.

I'll keep it in mind.
One of the biggest mysteries of the brain is trying to understand why each of our sensory experiences seem to have such a unique, and different sensation. Why does red look red to us for example? The neurons that define redness for us are not any different than the neurons that define blue for us. So why do we end up with such different perceptions of these signals?
The answer I like is that our perception of all the senses is simply created by how they all relate to one another in the brain - in how they are associated and in the behaviors they tend to create in us. This is a fairly weak answer, but it's the only one that makes sense to me.
This would imply that sounds should always sound like sounds, and not invoke a color sensation. The only way I believe that could happen is if the color sensation was formed by mostly processing visual data in the correct way, but then having a cross connect from the auditory data into the color sections that could also, at times, stimulate a similar pattern. In other words, a slight, but not strong, fusion of sensory data, where small amounts of visual data might be leaking into the auditory sections, or small amounts of auditory data were leaking into the visual processing areas. But this is just another wild guess.

I don't really follow your logic there. Are you saying the "imaginary" thoughts are the ones which don't have a sensory origin?

Yeah, that's basically the same thing I'm saying anyway. The tricky part is when you switch from words like "we can inject" to "the brain is structured like xyz..." - it gets very confusing to understand what the "we" is you are making reference to and what it means "to inject". I however make the same mistake as well and don't always have a good translation into pure terms of "the brain is structured ...".

Yeah, that's clear. I've seen others try to justify our ability to talk to ourselves as a natural evolution of language to allow us to have private thoughts that others can't hear. But that doesn't really answer why we have the ability to have non-linguistic thoughts, which probably showed up before language.
Understanding the power it gives is fairly easy. But understanding the evolutionary path the brain must have undergone to get from what it was before to what it is now is not so easy.

Yeah, I understand the connection to information theory. I've been looking at ways to use it to build networks for a long time. I'm reading "Spikes : Exploring the Neural Code", by Fred Rieke currently and just hit the chapter where they jump deep into information theory as a tool for understanding neural spike trains.

The gray matter, which is those millimeters of outer skin, is where all the neocortex neurons are. The white matter, which is most of the mass inside, is just backplane.
Most of the rest of the brain is not part of the direct pathways from sensors to effectors and not part of our voluntary control system. The lower brain seems to be mostly stuff to keep us alive. The exception is parts of the midbrain which seem to be there to support the operation of the cortex.

Right. Where I wrote "brain" substitute "neocortex".

Sure it does. As explained above.

Very true. There are large sections of cortex which are fed only by a single modality. But at some point, the regions all merge.

Right. That's totally consistent with how I look at it.

But that's not quite how I look at it.
It's just invalid in my view to think that sensory fusion happens after the visual cortex for example. The entire visual cortex is doing sensory fusion. The eye is not one sensor. It's millions of them. And the data from each of these individual sensors must be fused in the same basic ways sound and vision data must be fused. The N layers of visual cortex are simply fusing individual light sensors into higher level visual concepts.
Likewise, we don't have one sensor signal coming from the ear, we have a huge bundle of sensory signals coming from each ear which must be fused to create higher level auditory information.
At some point, these different modalities start to fuse, but it just seems wrong to think that there was no fusion happening in the visual cortex, and that some magical "fusion" process then happens when data from the ears first mixes with high level fused data from the eyes.
Dan and I get into these sorts of debates all the time in c.a.p. He tends to lean towards the belief that each of the distinct regions of the cortex that have been mapped out is a special design created by a long process of evolution to do each special type of data processing task. So each of the 21 visual areas is a different circuit, doing a different job. To duplicate this in our software, we would need to write different software for processing each type of data. Fusing left and right eye data would be a totally different algorithm than fusing ear and eye data.
I on the other hand lean towards far greater reductionism and believe the entire cortex is basically performing the same algorithm and that each section, has only had that algorithm tuned to best fit the nature of the data it is processing. I believe one algorithm can basically perform all data fusion. I believe that it will be found by following ideas just like you suggested above, with the data compression, and information theory based ideas.
However, in terms of the "injection point", it seems to me that when I have memories or thoughts, they all seem directly associated with sensory data. I have visual memories, or sound memories, or smell memories, etc. But yet, as you say, it can't be happening at a very low level, because the memories are always such a pale echo of the real thing. If they happened at a low level, I would expect them to be more life-like. So either the effect is happening higher in the chain, or else it works as some partial reactivation starting lower. This is the stuff that would be so easy to figure out if we only had high quality brain activity scanners (down to the neuron level). Hopefully someone figures out how to do that without hurting the brain some day.

Yeah, that seems to be somewhat logical.

I think it's valid to talk about levels. But it only goes so far since there seem to be so many feedback loops in the process as well.
But by "entire brain" I was really referring to just the neocortex and being sloppy in my description, not all areas of the entire brain.
Have you read Jeff Hawkins' book, "On Intelligence"? He too is big on the belief that the neocortex is the "interesting" part of the brain from an AI perspective and seems to believe it's basically one algorithm at work. He sees it as a data processing problem of extracting invariant representations - which is basically consistent with various information theory ways of looking at what the cortex is doing.
The part I think he's missed, and one I stress, is the reinforcement learning that is needed as well. Basic data extraction or compression ideas can lead us to circuits that fuse data and extract the essence of the information in the data - removing duplicate information that flows in through different sensory channels (be it from cone to cone in the eye, or from eye to ear). This is what would allow us to turn hard-to-interpret data, like a million pixels of visual data, into signals that tell us there's a cat in the environment. But knowing there's a "cat" in the environment doesn't answer the question about how we (or a robot we want to build) should react to the cat. Should we chase it as food, or run away, or just ignore it, or what? The basic fusion algorithms are important in telling us as much as possible about the state of the environment in a form that's easy to use and as compact as possible (aka compressed), but you have to add to that reinforcement learning, for the system to learn how it needs to react to the current state of the environment to reach its goals (get food, don't let your body be harmed, reproduce, etc).
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

Ok, that makes more sense now.
> The lower, and older brain is mostly not

Hmmm. I'd be very surprised if that's true. It seems you're of the belief that humans are rational beings - we think before we act - rather than rationalizing beings, which seems a better description. Try this - ride a bicycle, then cross your arms and hold the opposite handlebars. You'll soon learn how effective our conscious is(n't). Try to make yourself feel grief, or joy, or any other emotion... rabbits and mice feel most of these things; they're in the mammalian brain.
My point is that most of our activity and being is arbitrated by parts of the brain accessible only indirectly, though we may have awareness of their functioning.

Yes, but a computer memory can, though structurally amorphous, contain a story book. The book and the story are not less structured and real for being contained in an amorphous mechanism. You might as well say that all of the universe is amorphous because it's made up of quarks. What you're really saying is that there's a level of organization that you can't observe in the material structure with the tools you have - not that there isn't any organization. So I maintain that "amorphous" is the wrong term - the functional morphology is invisible but present in the arrangement of active synapses.

Sorry, the terminology from the book I read on this has faded, and I can't even remember the title.

Stable *oscillatory* patterns. The time constants in these oscillatory circuits go a long way to explain the sensory process - they form auto-correlators that progressively firm up a sensation in the process of representing it as a particular pattern. It's this idea of temporal correlation that I think is missing from traditional approaches to robotics and AI.

I think you're wrong to use the term intelligence only for the higher functions. A robot that had the same "stay alive" characteristics would still be an impressive achievement and be considered intelligent, though perhaps only in the way that an animal is.

It's not that black and white. I see a dog and perhaps I imagine it pissing on my leg, but though the idea of the dog is real (reflects a real present dog), I know that my leg isn't wet. I don't try to kick the dog because it's pissing on me. The emotion of revulsion is activated nonetheless, and I probably don't feel like throwing a stick. IOW the conscious mind knows it has meddled with its own perceptions, and can still tell which bits are real. The mammalian brain still responds to having been pissed on, and generates the appropriate emotion.
If I had lost the distinction of having meddled in reality, I might just actually kick the dog. Such behaviour is deemed psychotic. The difference isn't that psychotic individuals generate more unreality than others, but that they can't tell they're doing it.
The philosophical question of "why did I imagine that", and "what is this 'I' that 'decided' to imagine that" is where the arguments start. My view on that is the same as yours, I believe. The book is in the computer, and so is the story. It might look like there can't be a book or a story inside a bit of silicon, but it's there, it's *all* there.
The idea of the "individual" and all the legal responsibilities that go with that is a way of identifying a *process* which is complex beyond prediction, so we hold the *process* itself responsible for its behaviour. We don't hold the molecules of a murderer's body responsible - even though they fully embody the process.

Either that, or your "dog leg" neurons and your "red" neurons activate to represent a symbol which is defined as a linguistic element without the need to have a word associated. IOW don't get hung up on the idea of linguistics needing words. Language manipulates symbols, whether or not they have words.

I did a review of the state of data compression a year or so back, and there are some very good ideas in things as common as ZIP. IMO it would just take an appropriate way of representing the temporality of a stream of behaviour/sensation and they could be applied to the creation of muscle memory, so a robot could learn how to walk - unlike the recorded step-sequences that allowed Asimo to stand, walk and dance. Clearly they encoded the required behaviour, and maybe even compressed the encoding, but dynamic compression and matching of recent event sequences is what's needed for learning at this level. To my mind, the temporal content in the data stream is the part we haven't thought about enough.
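Here's a toy sketch of what I mean by matching recent event sequences to create something like muscle memory: store what followed each short context of actions, and when the same context recurs, replay the remembered continuation. The gait steps and the context length are invented for the example; a real system would work on compressed sensation/action streams.

# Remember short action sequences keyed by the context that preceded them, and
# replay a remembered continuation when the same context recurs.
from collections import defaultdict

class SequenceMemory:
    def __init__(self, context_len=3):
        self.context_len = context_len
        self.chunks = defaultdict(list)      # recent context -> actions that followed it
        self.history = []

    def record(self, action):
        ctx = tuple(self.history[-self.context_len:])
        if len(ctx) == self.context_len:
            self.chunks[ctx].append(action)
        self.history.append(action)

    def suggest(self):
        """If the current context was seen before, replay what followed it."""
        ctx = tuple(self.history[-self.context_len:])
        options = self.chunks.get(ctx)
        return options[-1] if options else None

mem = SequenceMemory()
for step in ["shift_weight", "lift_left", "swing", "plant_left"] * 3:
    mem.record(step)
print(mem.suggest())    # having seen the gait cycle, it predicts the next step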

Sorry for the typo, I meant to say synaesthesia. But you've answered my objection by clarifying that your use of "amorphous" was structural not functional.

Yes, I agree. When I say "sensory fusion", I restrict it to mean the area where processed data from the different senses gets fused. That's not to deny that fusion occurs elsewhere. I was responding to your apparent denial of the functional morphology, of layers of processing, where in fact you were just pointing out that there's little structural morphology.
While that's true, I hope I've explained why I think it's beside the point.

I'm with you on that. It seems unlikely that there's enough information in the genome to describe the entire schematic :-).

Which is exactly why I call it a sense of thought. It's at the level where we're collating the various senses, but here we're sensing our own thoughts, and the data has been through a similar amount of pre-processing.

No, sounds like a good one, I'll look it up.

That's just data compression under another name, as you and I both pointed out.

I don't agree here. We correlate all concurrent events and the associated emotions & thought patterns in the process of encoding/compressing. Learning is implicit in these quality associations in the sensory data stream. If that cat scratched me last time I tried to pat it, I have a negative association. Because I can do partial retrieval ("that's the same cat"), I can recall the qualitative data that was associated previously.

Can't agree here. Data compression requires memory, and memories encode learning.
Clifford Heath.

No, I believe we are emotional beings. Way down at the bottom of the post I'll explain.

That's a good one!

You're using terms differently from how I'm using them. Let me ignore this for now because what I have to say below about reinforcement learning touches on that.

And again, I'll cover emotions below with reinforcement.

You are the one that used it. I actually am not really familiar with the term and I just copied your usage as best as I could guess what you were getting at.

For sure. Which means it's not invisible at all. We know the function is encoded in the synapse and we have no problem seeing the wiring. We just don't know what it means.
I'm a strict materialist or physicalist. I think talk about "function" as being separate from structure is only a language convention we use. Function itself can't exist separate from the structure any more than the human soul can live on after death (which is where I think all this talk about function not being physical all came from).
We often talk, and think, of computer memory devices as having a uniform physical layout and of the data stored in the memory as having some logical or intangible existence separate from the physical memory. But in fact, the data stored in the memory is just as physical as rocks and makes the physical structure of the memory anything but uniform. It exists as piles of electrons in typical dynamic computer memory for example, and electrons are very physical. The fact that we can't see them with our eyes only helps to further the invalid concept that data is not physical.
And likewise, our neocortex stores its knowledge in its wiring. It's a learning machine which gets wired through experience. The wiring is constantly changing, just like our memory is constantly changing. So even though I think the neocortex is basically one type of circuit duplicated over and over, its training has transformed each circuit into a very specific circuit that performs one very specific task.

Ok, that's possible. I've not seen work that describes it that way but I can relate.

Yeah, that was a big insight I finally grasped about 5 years ago because of debates here on Usenet in the AI groups. I had spent many years working with fairly traditional (non-temporal) types of neural networks trained by reinforcement, and though I understood the temporal issue was there, I always figured we would solve it in our hardware just like we normally solved those problems. That is, we would create a non-temporal mapping function, and then use memory to act as delay devices to create the temporal aspects. Technically that's valid, but for practical reasons, I later discovered it was just the wrong approach.
I worked with mostly binary networks (aka networks with binary signals with N binary inputs that would produce N binary outputs once every clock cycle).
Then when I started to grasp the temporal problem, I started to look at the advantage of working with asynchronous pulse signals (like the brain uses), and realized that processing nodes that dealt with these signals had a great advantage. With the old nodes, I created a spatial function, and added extra hardware to make those spatial functions perform temporal pattern matching. But with pulse signals, you could create gates that performed temporal functions, and totally ignore the spatial aspect of the problem. In fact, there is no spatial aspect of the problem that needs to be solved - we are only trained to think in those terms so that we can write down our ideas on a sheet of paper. It was in fact our long history of using written language that biased our tools and techniques so much towards the spatial.
For the past many years now, I've only been looking at designs that use async pulse signals and which mostly, perform all their actions based on pulse spacing.
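As a trivial sketch of what I mean by a node that works purely on pulse spacing rather than any spatial code: it emits an output pulse whenever the interval between successive input pulses drops below some threshold. The times and the threshold are arbitrary; the point is only that the computation is done entirely in the timing.

# Node that fires when the incoming inter-pulse interval gets short enough,
# i.e. when the input "rate" rises, with no spatial pattern involved at all.

def interval_detector(spike_times, max_interval=0.02):
    out = []
    for earlier, later in zip(spike_times, spike_times[1:]):
        if later - earlier <= max_interval:
            out.append(later)            # output pulse timed off the input pulses
    return out

inputs = [0.00, 0.05, 0.06, 0.07, 0.20, 0.21]
print(interval_detector(inputs))         # -> [0.06, 0.07, 0.21]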

Well, if you knew me better, you would know that I'm the one that makes the argument that rocks are both conscious and intelligent. So I have no problem extending the idea of "intelligence" down to levels far below what most people would. :) But my use of intelligence above was limited to a different type of use. I think most of what people see in human behavior (and some animals) comes from us being strong reinforcement learning machines - and the lack of that very specific feature is what prevents most people from believing a machine could ever be the same as a person (that is, most of the people that have a hard time accepting that idea).

The emotion stuff I'll get to below, but to remove your use of "knows" in the above, I would simply say that the reason we don't kick the "imaginary" dog is BECAUSE our leg isn't wet (not because we "know" it's "not real").
Not only is the leg not wet, but a lot of the other current sensory perceptions don't match with our perception of the dog pissing on us as well. And the reason we can tell it's not "real" as you say, is because all these other things don't match.
I think the memory is in fact very real - just as real as when we see a real dog piss on our leg. I think it's represented in the brain in the exact same way as it's represented when a real dog is pissing on our leg. There's no "memory that it came from our thoughts" that allows us to tell the difference. It's only the fact that it's inconsistent with the rest of the data currently in our brain. So how do we know, when there is conflicting data, which is "real"? We can tell because there is far more "real" data than imaginary data active in a normal brain. So, I have data that tells me my leg is dry from feel, I see the ground, and there are no dog parts to be seen. My motion detectors didn't tell me something just moved by my feet. I'm in a shopping mall, which is a place that I wouldn't have expected to see a dog. All this other data fits, and is consistent with each other, and is consistent with the idea that there is no dog. But yet other data in my brain tells me there is a dog. So which do I trust as "real" and which do I trust as the "memory" or "imaginary thought"? The one with the most consistent data, of course.
And what happens when we sleep and our real sensory flow is cut off and the only circuits being activated are the ones that are free to do so independent of current sensory inputs? We start to think that our dreams are real. Only when we wake up, and get a sudden influx of data which is inconsistent with the "dreams", do the "dreams" stop looking "real".
If we had some memory of the fact that the thoughts were injected (your idea in my words), then how do you explain the fact that dreams seem so real to people? And why do they sudden revert from "real" to "dream" when we wake up? Where as my idea, that these memories are in fact exactly as real as the real thing, but we can spot when they are "invalid" only by comparing it to what else is going on. (I can explain what I mean by "comparing it" if you care as well).
Also, my view of dreams and memories gives us more power to understand mental illnesses like schizophrenia and their associated hallucinations. If too much of the brain activates with a "memory" (instead of in direct response to current sensory data) it becomes impossible for us to tell which is real, and which is the dream. The only think that keeps us in touch with reality, is the fact that most the brain in a normal human, is being activated in direct response to current sensory inputs. If too much of the brain reverts to a old states (memories), then we loose track of what is reality, and what is the dream.
The dreams all become consistent with one another (when we dream a dog peeing on our leg our leg feels wet) simply because all these sensations are connected to each other though feedback loops and they have been trained to form consistent patterns. So when we have a thought of a dog peeing, it's biasing our leg to switch to the "I feel the warm wet pee on my leg", but it's only the current flow of real sensory data that's keeping that sensation from forming in our brain about our leg.
But if the sensory data starts to loose the battle, and too much of the mind switches to a dream-like state even when we are awake and functioning, then we loose touch of reality and start to believe that the these other false brain states are "real".

And my theory is that the simple fact that they generated too much "unreality" is WHY they can't tell they did it. :)

Your "book" vs "store" stuff is confusing me a bit.
I don't mix process and function and physical reality. There is _only_ physical reality. Process is just a story we create to describe physical reality. We can create language to describe how a computer works, and we can label that language as being the "process" of the computer. But the computer is the computer, is the computer. It's not the process. It's the computer.
And likewise, I am a physical hunk of matter. That's all I am. I am not a process. I am not a mind. I am a brain. When I do things, it's my body doing things. When I talk about what my mind is doing, or what is "in my mind", I'm really just talking about what the brain is doing.

I don't like to talk that way. I hold the body responsible for the crime. Not the process. To talk as if the process is responsible for the crime and not the body is just left over bull shit about the soul being separate from the body which I don't believe.
If a machine runs out of control and kills someone, do we hold the blueprint for the machine responsible for the action and not the machine itself? That would be silly - blaming the death on a piece of paper. That's what I think you are saying when you talk about "us" being the process.
I agree that nearly everyone on the planet, even all those that are strong materialist who don't believe there is a soul separate from the body, like to talk the way you are talking, but it's deceptive in my view.
In one way, I hold that type of talk valid - but confusing. That is, our own view of our self exists as kind of a blueprint in our own brain. It's like a camera taking a picture of itself and using that picture as its only way to understand itself. Likewise, our view of "self" is limited to what our brain can represent about ourself, so it's valid in that sense for me to say that my view of myself is limited to my understanding of myself.
But, like with the camera, we would never say the data encoded on the picture was the "real" camera. We would say the physical camera is the camera.
Likewise, I say that I am a physical body, nothing else, nothing more.

Ok, more semantics. I like to say that pulses are symbols and that the brain is a symbol processor just because it's processing pulses. This riles people who like to reserve the idea of "symbols" for something closer to the concepts represented by our natural language words. These are also the people that seem to see our language processing skills as something truly unique to humans and nonexistent in lower animals. I don't see it like that at all. Words are just temporal patterns to us like everything else we deal with. The brain represents them all, be it a cat, or the word "cat", in the language of temporal pulse signals. Nowhere do I believe there is any significant difference between what the neocortex is doing when it's processing and producing natural language behavior, and what it's doing when it's processing, and producing, all other sensory information.

Though many concepts like this go through my mind as well, finding a working implementation eludes me. It's driving me crazy. :)
[snip]

Ok, here's where I answer the stuff I didn't answer above about emotion etc.
What you talk about is in fact reinforcement learning.
You wrote above:

Why is a cat scratch a "negative" association? That's what you fail to answer. How do you formally define what should be a positive association, and a negative association? How is this formal definition of "value" defined in the hardware? How is it implemented? The answer is that it's implemented as a reinforcement learning machine. That's what I mean by saying we need to build a reinforcement learning machine. You must build a machine that is able to make associations of value.
So, how do we build a robot that would register a cat scratch as a negative association? You start by building custom sensors, and hard-wired processing, which is able to detect pain and pleasure (technically the wrong words but I'll use them anyway). We build hardware that knows what sensory conditions we want the machine to register as "bad" (aka pain) and what sensory conditions to register as "good" (aka pleasure, or rewards, or reinforcers). This hardware is called the critic in reinforcement learning terminology (reinforcement learning is a specific sub-field of machine learning in AI BTW, and I'm not making reference to the psychology field of behaviorism - though they are closely connected).
The learning machine must then receive sensory data, produce outputs, and receive reward signals from the critic. To the learning machine, the critic can be thought of as just another part of the environment, but in an actual robot, the critic hardware is something we, as the creator, would design and build. The critic hardware is what gives the learning machine its high level goal, or purpose, in life. All its morals, and behaviors, and drives, are derived from the goals built into the critic.
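Here's a minimal skeleton of that split, just to pin the idea down. The sensor names, the reward formula, the stub environment, and the placeholder learner are all illustrative assumptions, not a real design; the point is only that the learner never sees "good" or "bad" directly - it only sees the scalar reward the designer-built critic emits.

import random

def critic(sensors):
    """Designer-built value judgment: harm is punishment, food is reward."""
    return sensors.get("food_intake", 0.0) - sensors.get("skin_damage", 0.0)

class TinyLearner:
    """Placeholder learner: acts at random, just to show where the reward signal flows."""
    def act(self, observation):
        return random.choice(["approach", "retreat"])
    def learn(self, observation, action, reward):
        pass   # a real learner would update its value estimates here

def step_world(action):
    """Stub environment: approaching the cat risks a scratch (illustrative only)."""
    scratched = (action == "approach" and random.random() < 0.5)
    sensors = {"skin_damage": 1.0 if scratched else 0.0, "food_intake": 0.0}
    return ("near cat" if action == "approach" else "far from cat"), sensors

learner, observation = TinyLearner(), "far from cat"
for _ in range(5):
    action = learner.act(observation)
    observation, sensors = step_world(action)
    reward = critic(sensors)                  # the only "goal" signal the learner ever gets
    learner.learn(observation, action, reward)
    print(action, reward)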
The only goal of the reinforcement learning machine is to maximize TOTAL LONG TERM reward. (not to maximize only current reward). This means it must constantly estimate potential future rewards, and make constant trade off decisions about whether a bird in the hand is worth more, or less, than two in the bush. That is, given a choice of one behavior with a quick reward, or another behavior, with a larger, but more risky long term reward, which behavior is the one expected to produce (on average) the most total reward per time. The behavior with the best total reward per time, is the one the machine should select.
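The "bird in the hand" trade-off falls straight out of discounting future reward. A tiny worked example with made-up numbers: a sure reward of 1 right now, versus a reward of 2 that is five steps away and only 60% likely, discounted at 0.9 per step.

gamma = 0.9
value_now   = 1.0                           # sure thing, this moment
value_later = 0.6 * (gamma ** 5) * 2.0      # bigger, but delayed and risky
print(value_now, round(value_later, 3))     # 1.0 vs ~0.709 -> take the bird in the hand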
The entire purpose of a reinforcement learning algorithm is to predict these values (based on data collected through past experience), and to produce behaviors based on their expected value.
This is NOT the same problem, as the data compression issue or the general issue of prediction.
A machine can analyze sensory data, and find ways to compress it, and ways to predict what will happen next in the world (if we stand here we are likely to be scratched by the cat), but making that prediction doesn't tell the machine what to do. Maybe it "likes" being scratched, so the best answer is to do nothing and hope the cat does scratch us as predicted. Or maybe a cat scratch is bad, so we should take actions to prevent that from happening. How does the human brain "know" that a cat scratch is bad? It "knows" it because there are special hard-wired circuits in the brain (not in the neocortex, but in the midbrain) that can sense when harm is done to the body in various ways, and will in turn send a "punishment" signal to the learning brain (the neocortex), so that it can form a negative association with whatever sensory conditions preceded this punishment (the vision and sounds and smells of a cat scratching our leg).
Along with this power, the learning system must use its power of prediction to later predict that just the sight of a cat is at least slightly bad, because once we see a cat, the probability that we will get scratched just went up, and the prediction system should be able to predict that - leading to just the sight of a "cat" having a low value associated with it. If every time we visit a given place, cats show up, then the low value of the cat will train the sight of that place (the "cat building"), so that just seeing the building will produce a punishment (training us not to go near that building if there are better options to choose from).
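That backward spread of value - scratch, to sight-of-cat, to sight-of-building - is exactly what a temporal-difference style value update produces. Here's a toy sketch; the two states, the numbers, and the update rule (plain TD(0)) are illustrative assumptions, not a claim about how neurons actually wire it.

alpha, gamma = 0.5, 0.9
V = {"see building": 0.0, "see cat": 0.0}          # learned value estimates

def td_update(state, reward, next_value):
    V[state] += alpha * (reward + gamma * next_value - V[state])

for _ in range(20):                                # repeated visits to the cat building
    td_update("see cat", reward=-1.0, next_value=0.0)               # the scratch: punishment from the critic
    td_update("see building", reward=0.0, next_value=V["see cat"])  # the building merely predicts the cat

print({s: round(v, 2) for s, v in V.items()})
# Both end up negative: the cat near -1, the building a discounted step behind it.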
All these "values" that the reinforcement learning machine is associating with all sensory conditions, as well as all behaviors it produces, is the source of our emotions. This is what makes us love some things, and hate or fear others. If the reinforcement value prediction system predicts high future rewards for some stimulate (a hot babe), it's what makes us "like" that sensation, or that object, and it's what makes us increase the odds of using the behaviors that produce that stimulate condition. If our prediction system predicts very low future rewards, that's what makes us dislike the stimulus and it's what makes us stop producing a behavior that creates that stimulus condition.
The reason we are emotional machines, is because we are reinforcement learning machines. That's where our emotions come from. If you want to build an emotional robot, you have to build a robot with a reinforcement learning engine driving its behavior.
I could talk more, but this post has gone on too long already. If you want me to talk more about reinforcement learning, and how it's different from supervised learning for example, and why I think it's the only type of learning that explains human intelligent behavior (or why it's easy to know this is the answer, but not know how to code it), I can do more of that as well.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

Ok. Amorphous means simply lacking in structure. The lack of apparent physical structure in the cortex belies the deep functional structure, as I think we've both agreed. I originally thought you were implying that all of the cortex processes all of the data, which would have been a silly idea and demonstrably wrong.

I'm with you on the materialism, but I think you confound the ideas. A computer memory cell isn't structurally different for storing a one instead of a zero, it's functionally different. A RAM chip is structurally simple - so repetitive - but the data in it may be functionally rich - e.g. a story.
One human body isn't structurally different from another, but they sure are functionally different.

No, but the structure can exist without the function. The RAM chip has the same structure when it's not powered. It's not just the arrangement, but the way the arrangement can change, the way it's *developing*, that constitutes the process (I'll explain my use of "process" below).

The opposite approach, as I'd have it. Time and change is the key, not static modeling. We respond to stimuli precisely and only because they represent change.

Ok, I can dig that, not sure if I agree though. The generation of "what-if" scenarios in the brain makes use of the existing sensory circuitry to process hypothetical events. When the hypothesized sensations are injected into the sensory fusion chain, the change in the outcome is processed differently than if it had occurred in the absence of the hypothesis, i.e., in the real world. To observe this change is the purpose of hypothesizing.
That's what I mean when I refer to tagging - we can experiment with how we might respond if a particular event would occur. The re-use of the sensory chain has side-effects (like changing our emotional state) even though at a conscious level we know it wasn't real.

Because of the side-effects of the re-use of the sensory processing chain, some hypotheses are unsafe to inject while the motor functions are active. Nevertheless there's still an advantage in exploring our likely response to them; it serves as training for situations that have not yet occurred. So we can dream of falling off a cliff, and train a withdrawal response, without having to actually go near a cliff. So there's an evolutionary advantage to dreaming.
A waking hypothesis results in a mental change that must be separated from the base state in order to determine its outcome. As a result, it's somewhat diluted - dreams can be more intense.

It's more of the structure/function stuff I wrote about above. The book is "real" while existing only in a transitory physical state, only in a temporary arrangement. More below where I explain "process".

The idea of process is deeper than you give it credit for. The average duration for which any given atom is part of a human's body is around six months. That is, on average, every atom gets replaced every six months. So what is the human? They have a clearly identifiable likeness and individuality with a much longer time constant. Clearly the person is a *process* that exists in an organization of those atoms, where the process maintains the organization even as the atoms themselves change.
You say that "process" is "just" a story (at least I think you meant to type that), but that seems like an attempt to deny its reality and significance. It doesn't consist of atoms, but it exists through atoms. Its existence isn't fundamentally different from the existence of one of those atoms themselves, which each exist as an arrangement of lower particles.
Of course the process is a physical phenomenon. But it's a clearly identifiable phenomenon having distinct characteristics and duration, and as such shouldn't necessarily be treated differently from the physical particles of which it consists. It doesn't become metaphysical by being treated as a reality; it's a reality within a reality.

The body exists only because a process is maintaining it. They're the same thing.

No, we hold the designer responsible, because we expect that machines will only be built when the bounds of their behaviour can be completely predicted by the designer - that's an expectation we have of designers.
If we ever change that expectation, and allow designers to create intelligent robots, we might blame the designer and wish we hadn't allowed it, but it'll be the robot that gets destroyed. We'll hold the robot responsible, of course. But if the parts are useful, we'll re-use them, they won't be tarnished with the stigma of what they did when they participated in the whole thing.
My point is that blame attaches to the thing whose behaviour cannot have been predicted.

No, it's descriptive. I'm not using the word metaphysically. You'll probably argue that the description is not the reality, and I agree, but I don't think it hurts, so I'm going to keep doing it. :-).

Whoops, all too self-referential for me. I'm not going there; it's like an aircraft turning such sharp corners that it flies up its own tailpipe and vanishes :-). Not what I intended in my use of the word "process" at all.

I think we have to invert the priority of time intervals and datum. The datum is in fact the change, but the retrieval is based on the rate of change (or time between events) more than the amount of change (or the type of event).

I think you're wrong, and it's exactly the same problem. Long term prediction requires us to test hypotheses, which we can do using our normal apparatus. The predictions are based on previous learned responses. We are sentient precisely because we can do this to a much higher degree than other animals.
One other comment:
The trouble with your critic is that you assume values. "Long term reward" measured as what? We have *multiple* biological drivers that compete to be complied with, and sometimes they are at odds with each other. These drivers are the values which our critic uses, though not always consistently.

You're using the term emotion unconventionally here. Reptiles are reinforcement learning engines, but they do not have emotions, whereas mammals do. Emotions drive behaviour which has consequences advantageous for the social or family group, but not directly for the individual.

It's been good - but it has to be truncated sometime or it'll never end :-)
Clifford Heath.

This is how I always thought about it until I actually thought about it. :)

You seem to be using "structurally different" in an odd way. If I'm a 6'1" male and my wife is a 5'7" female, are you telling me our bodies are structurally the same because we are both humans? We of course share many structural similarities, but we are not structurally the same unless all the structures are the same.
Not only is the structure from the outside very different, but our brains are structurally very different as well - which is why she doesn't act like me, and I don't act like her - we have different programming which is 100% represented in the structural differences of our bodies. That's how I use the term "structurally different".

Meaning a structure can exist which has no function? Or just that a structure can exist which doesn't have a specific function (a car does not function as a hammer)?

Not at all. It's _VERY_ different when it's powered. It has piles of extra electrons that aren't there for long after the power is removed. It's the physical structure that determines the memory's function.
If you load Microsoft Word into the memory of a computer, this is not just a "logical" process (how we tend to think of that). The machine has physically been reconfigured into a word processor by the physical actions of its parts.
A computer is just as physical as a clock with gears. It works for the same reason a clock with gears works. The atoms and electrons push each other around in physical ways constrained by the physical structure of the machines.
Computers are mechanical devices just like a mechanical clock is mechanical.
The brain is likewise a mechanical device just like a clock. It's just got much smaller parts that operate in more interesting ways than clock gears. Humans are robots.

Yes, process is nothing more than the documentation of physical motion.

Yeah, but we are built to respond to a lack of change as well. (hit the button if the light hasn't changed in 10 seconds). So I think a "change" centric view might be a bit limiting. In order for a machine to perform a task like that, it must have a physical clock which is constantly changing, so internally, it's all about time based change, but externally, we are not limited to responding to only sensory changes.

I think you are getting yourself in trouble by not sticking to one abstraction level when you talk about this. You say "at a conscious level we know it wasn't real". But what does it mean at the hardware level for the machine to consciously know something? What physically happens in the hardware to indicate we know something?
You talk about the brain injecting a hypothetical event into the sensory circuits. But what hardware in the brain is generating the event signal to be injected? Are you proposing special brain modules which generate the hypothetical event signals? How does that module know what type of signal to inject and when to inject it?
And if we understand the world in terms of our senses, how is it we are not fooled by the injection of a replacement signal? Why doesn't the brain react to the signal exactly as it would if it had come from the eyes?
What general structure are you suggesting, at the hardware level, to explain these things?
I think the only difference in our views is that I have a pure mechanical answer to what's going on, and you have a half mechanical and half subjective answer to how it works (you keep saying we "know" it's not real but don't explain in mechanical terms how the brain is structured to allow that to happen).

Hey, that's interesting. It's the first time I've seen an idea that actually comes up with what I consider a valid justification for dreaming while we sleep. (Many people have suggested that sleep is needed to make the brain work correctly - to reset itself or something like that - and that in our AI we might have to duplicate the function - I've never believed that.) I believe you are correct and that the dreams we have when we sleep actually do condition the brain to avoid predicted dangers - or seek out predicted rewards (we had a dream that "told us" to look under the rock for the gold).
I think however the main advantage to all this, as you have said, is that we can do this "dreaming" while we are awake (we can run scenarios and test options). The fact that they happen, and can be of use as we are falling asleep, is just a side effect of the same useful brain feature. (we sleep simply because we live on a planet that's dark half the time and our body is optimized for collecting food in daylight so we need to conserve our energy at night so we have it available in the day when it will do us the most good for food collecting).

But how is the hardware structured to separate the "base state" from the injected state?

The "human" is the current structure of our body.
The same problem happens with many effects in nature like waves, or clouds.
The physical material that makes up a wave changes just like the material that makes a human changes.

Well, the problem, is that the human you are talking about, is only in your mind. The human I talk about is the physical human. I don't see any advantage to talking about my body existing in your mind. I talk about my body as my body. Do you see the difference?
I can make a pile of 7 rocks. Every day, I can replace one of the rocks with a new rock. Every week, the pile is totally different from the previous week - 7 new rocks every week. But, when I talk to other humans, I can talk about "my pile of rocks", and we can all know exactly what we are talking about - it's the stupid pile of rocks that Curt has been keeping over in the corner for the past 10 years.
I use the same words to describe the rock pile today, that I did 10 years ago. It's still, "my rock pile". But yet, the rock pile itself is not the same over that entire period. It's physically very different. It's made of very different rocks, in a very different configuration.
So what is hard to understand about this rock pile? There is the physical rock pile, and there are the words that we use to describe it, and there are physical structures in my brain that gives me the ability to talk about my rock pile, and there's the physical structures in my wife's brain that lets her talk about my stupid rock pile.
Atoms are not stable either. They constantly reconfigure themselves from nanosecond to nanosecond. But they maintain enough consistency to allow us to talk about them as being a "thing".
The point I'm making here is that the best way to understand it all is to use the physical layer as the lowest level abstraction of what "exists". The concept of "process" is a creation of our brain, which we use to help understand the physical world - just like we use language in general as a tool to understand the physical world. But the physical world exists on its own, independent of the fact that we are trying to talk about it, and define it as a process.

I think so, but you might be using the word story in a different way than I do. It can be very hard to find a common ground to talk about concepts like "process" and "function" and "information" and "knowledge" when we try to talk about these ideas independent of the observer. All these concepts are easy to understand when there's a human in the picture acting as the observer. But to talk about brain function, we are forced to remove the human observer from the picture, and suddenly the meaning of all these words becomes problematic. They were never intended to be used that way.

How can something exist in this universe and not be made of atoms or other sub atomic particles?

The atoms are all that exists. If there is something else besides the atoms, tell me what it is?
This talk you are using about something being the same, but not the same, at the same time, is just confusion in our language left over from the mind body confusion - the body soul confusion. This is where all these ideas of process came from and where they got messed up.
If you are going to reject the idea of a soul being separate from the body, you also need to fix all the other words in our language that are based on the assumption of the human soul being separate from the body.
The words software and hardware for example are just an extension of the soul body belief. Software is never "soft". It's always hardware. But since our entire language is based on the idea that it's the soul that is "seeing" and "understanding" the world instead of the body doing the "seeing", we developed this large and complex language based on the same split.
I believe that what you are trying to argue here (without even realizing why you are doing it), is that the split is real and valid. And that the mistake I'm making, is to deny that there is a split. I say there is only structure, but you say there is both structure, and process. And if I deny process as anything other than the motion of physical material, I'm leaving "something" important out. Yes, I'm leaving out the concept of a soul.
It's the same confusion of mind body. We don't have a mind and a body. We just have a body. The reason we were taught to use the word mind separate from the word body is because the language was designed by people that believed (knew for a fact) that the soul was separate from the body.
But if you believe they were wrong, and that body is just a body which moves according to the laws of physics, you need to find a new understanding of what all these other words really mean.

The body is just the body. It's not a body in a reality. If we choose to talk about a small part of the universe called one human body, that's something that is happening in the brain of the person doing the talking. Concepts are physical. They exist as physical brain structures in the person that understands, and uses those concepts. We have this abstract concept of what a human body is that we make use of when we talk about human bodies. But this abstract concept of "human body" is something that exists in the person doing the talking, not in the human body. Of course this is hard to separate when it's a human body talking about a human body. It would be easier to understand if we had intelligent robots with the same mental powers so then we could talk about the concept of a human body existing in the brain of the robot when it was talking, and thinking about, human bodies.

I thought it was the laws of physics that held the body together? What is this magic "glue" called process that holds the body together?

So, when I kill someone, it's God we hold responsible and not me? Or maybe Darwin?
You seem to have a very human centric view of reality - that only "humans" can be held responsible. (this is the typical view most people have and the one I constantly have to try to beat out of people for them to correctly understand a true materialistic view of reality).
It again, goes back, to the belief that we are "special" because we have a soul. That only a soul can be held responsible for an action, because it's the soul that is the cause of the action. Rocks and machines can't be held responsible because they don't have a soul. We must always track the cause back to a soul in order to find out who is "responsible" for the action. (you can't be a "who" unless you are a soul).
This is the mind-first view of reality that existed in our culture for thousands of years. But it's the view that's being demolished, slowly, piece by piece, by Darwin's Dangerous Idea (I put it in caps because it's the title of Dennett's book on this topic). Dennett talks about Darwin's ideas of evolution being a universal acid which is slowly eating through all our established beliefs and views - that it has extended far beyond the reach of biological evolution. And he's quite right.
Man's entire view of the world was based on an error made long ago - that the soul is separate from the body, and that only a soul has the power to act with purpose. Therefore, only a soul can be held responsible. And above the soul of man was the super mind of God, the soul and the mind that is the cause of everything. This is a nice idea, everything seemed to fit in a nice order, but one which we now understand to be nothing but hogwash, a view no more useful than the flat earth perspective.
This nice little mind-first view of reality all started to fall apart when Darwin figured out that you don't need a soul to explain the creation, and the design, of life on earth. You need nothing more than simple physics and the laws of motion. Of course, more than 50% of the people here in the US still don't understand this, and are still clinging to their mind-first view of reality, but that's another problem.

Designers are already creating intelligent robots. :)

Right, of course. The truth is that there is a causality chain that creates all physical actions in the universe. It's a causality chain that acts according to the laws of physics. In order for us to control the environment we live in, we learn what the causality chain is, and we manipulate parts of the chain to get the desired effects we want. We move the light switch in order to turn off the light because we understand the switch is part of the causality chain which is creating the light.
If someone is trying to kill us with a gun, we manipulate different parts of the physical causality chain to try and prevent this action - we might take the gun away, or we might try to block the path of the bullet, or we might try to manipulate the brain of the person, by talking to them, to prevent it from causing the finger to pull the trigger. In all cases, we are attempting to use our knowledge about the causality chain to make a better future for ourselves.
The concept of "holding responsible" is just one way we talk about how we manipulate a causality chain that has a human brain as a major component in the chain.

Ah, not really. We don't blame the weather on the water that falls out of the sky because we could not predict the behavior of the water molecules do we?
We assign "blame" to souls. That's how the word is used in English. It comes from a day when people believed that it took a soul to animate matter. The idea that matter could animate itself, was mostly unthinkable in those days - just as the idea that matter could self organize to form life was unthinkable.
We are slowly extending the use of the word "blame" beyond where it started, but the progress is slow. Since souls are nothing but a special type of machine, where should we draw the line on the correct use of the word "blame"? What type of robot would it be valid to "blame" for its actions, instead of blaming the designer? :)

Yes, there's no problem in being descriptive as long as you always understand where the line is. When we use these common English words to talk about AI, the line gets very confusing simply because most common English words were defined back in the day when everyone believed in the soul. Even though some of us have totally rejected that old belief, we still use the old language with all its old issues simply because our brain has been conditioned to think in those terms.

Yeah, that's the problem that happens. The language we use was built to talk about something where there was a clean split - the body and soul - the observer and the observed. Once you realize the two are one and the same, things turn back on themselves and vanish up their own tailpipe as you say. It gets confusing.

Right, my pulse sorting nodes perform all their actions as a function of the time spacing between pulses. The behavior of these nodes is not a function of a static datum, but of a measure of time. They act only when they receive a pulse, but how they react is not a function of the value of the pulse (a pulse has no value); the action is based on how much time has passed since the last pulse event happened. Everything in that type of network is time based. It's a temporal network.

Yes, that power you speak of is important in humans, I agree with that. But the question we must find an answer to is what do we need to build in order to duplicate that power? How do we build a robot that acts like a human?
The lowest level hardware is what we have to build, and understand, first. What does that hardware need to be? We need to reduce all these problems of making a robot act like a human to the simplest answer we can find. The answer I've found - the only one I've ever found that all the other problems can be reduced to - is a reinforcement learning machine.
Other people working on AI for example have tried to reduce the problem of AI to a knowledge storage problem. Others have tried to reduce it to a logic problem. What are you suggesting we reduce it to? A machine which makes hypotheses? How does that work exactly?

Some common measure of reward defined internally in the machine.

That is just the nature of reinforcement learning. In the end, the machine must pick one, and only one, behavior at each moment of its existence. It can't move the arm up, move it down, and hold it still, all at the same time. The laws of physics dictate that the arm must be in one place at a time.
When there are conflicting values, the machine must convert those values to some common currency and compare them. It must produce an answer as to which value the machine will select as the "better" value. This is the nature of reinforcement learning. The conversion to a common currency can't be avoided. It happens by default.
In the brain, the neurons end up being rewired as a result of rewards and punishments. The conflicting rewards and predictions of rewards acting on the brain still ends up finding a single answer as to how to rewire itself. The probability of some past behavior being repeated in the future has to either go up, go down, or stay the same. However that probability changes is the decision the system has made when it weighed the conflicting values against each other.
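A toy sketch of what "one common currency" amounts to in code: every candidate behavior is collapsed to a single scalar estimate, and exactly one gets picked. The behavior names, the numbers, and the epsilon-greedy choice rule are illustrative assumptions, not a claim about the brain's actual mechanism.

import random

estimated_value = {"move arm up": 0.4, "move arm down": -0.2, "hold arm still": 0.1}

def select_behavior(values, epsilon=0.1):
    if random.random() < epsilon:              # occasionally explore
        return random.choice(list(values))
    return max(values, key=values.get)         # otherwise, the single best-valued behavior wins

print(select_behavior(estimated_value))        # usually "move arm up"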

Yes, I tend to do that. But I think it's the correct usage in this context.

And how do you know that they don't have emotions, or that they are reinforcement learning machines? Is that based on simple empirical evidence, or on the social convention of the use of the word "emotions"?

Well, I get into these long debates in comp.ai.philosophy all the time. It's very useful to get a strong grasp on all these concepts if we are going to try and build a machine that can act just like a human. I learn a little something new every time I argue these points.
If you want to continue to hash out these ideas we could move the thread there, since most of this is not very connected with practical information on how to program robots - even though the entire reason I work on hashing out these ideas is so I can one day build a very interesting robot.
Or, we can drift more towards talking about how one might actually program any of this in a robot in which case it might not be too off topic for this group. I'm game to talk about it at any level.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

Very interesting and pertinent post. Let me just take a small portion for initial reply, and then maybe I will do later posts on more sections.
I've said it before, that I am greatly influenced by Julian Jaynes' thinking, as recorded in "The Origin of Consciousness in the Breakdown of the Bicameral Mind". I'll be referring to him in my response to this first section above.
Yes, we can remember the past, and we can call up the past disconnected from the present senses, but we also have a very unique ability to call up "memories" of the future. By that I mean we can look at a situation and play "what if" with it. We can make a premise: okay, if I do this, then this will happen. In my mind "this will happen" can actually be a little movie of what happens in the future. Then I will change the premise and wonder, okay if I do this instead, then what will happen, and another little movie of consequences plays, and so on, till I hit one whose results I find acceptable.
Where is this stage upon which our little internal plays are produced? Weirder yet, Who is that character we see playing our part in the play?
Jaynes says this is our internal mind space, and it is big enough to hold a universe inside, because when we think about the universe, that's where the thing we're thinking about is: inside our internal mind space. Also we have a player to put on this stage that is us, which Jaynes calls the "Analog I".
Now Jaynes' ideas are very rich and complex, but I'll stop at this one, because it is an extension of what you've suggested, that humans can visualize things separate from their current state, and they do this by reactivating past state.
It's more than that. We can also imagine a play with objects from past state in a visualized future scenario inside our minds.

Just to recap: it is current and past sensory inputs, but also a predicted future sensory expectation, that drives our behavior as well.
Proof? Easy. If you flinch when you see something come at you, you are not reacting to past or current sensory input alone (i.e. you feel no pain yet) but also you are reacting to a future expectation.
Now, what do we do with this conclusion, because many animals flinch. If you've ever ridden a horse you know how often they make future predictions on limited input data, jumping from perceived predators, whether there or not.
Can horses visualize the future? I'm surprised to hear myself suggest: Yes, I guess they must. What does that say about their ability to manipulate mind state?
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.

I'm not familiar with that work but the title does sound familiar. From what you write I think I would find it interesting. But I've already got a stack of about 4 books on consciousness I've not gotten to. :)

Yes, for sure.

Yes, that sounds consistent with my views.

I don't think we really need that. It's just automatic. When you replay a past experience, it's always from your own perspective. At best, we might be replaying a past experience where we witnessed another person, and are playing it with the idea of it being "us" in place of the other person.

Yes. But it's also possible to explain how we have this power.
We don't have to assume evolution created this complex "replay" or "visualization" hardware for us to explain the power. Instead, we can explain it in terms of what our normal perception system already has to do. It has to classify sensory data into invariant representations in order to drive our behavior. In other words, when we see a cat, we might see it from a million different partial angles, but the brain must still recognize all these different data patterns as being a cat. It must even classify sensory data patterns it's never seen before as "cat". We can explain how it does this, in terms of simple statistical correlations, and temporal predictions.
When a simple solid-color object (like a circular disk) moves across our visual field, we can make predictions about what sensory data we expect to see in the future. After it's traveled half way across our visual field (from left to right), we can predict how the visual field is going to look in the next few frames. We expect to see a disk, a little further to the right, and we can predict exactly when, and where, we expect to see it next. We can make a temporal prediction about what data we expect to see next.
Because there are huge amounts of temporal prediction like this possible in all our sensory data, our pattern matching (aka classification) circuits end up using this to improve the accuracy of their classifications. If we see a dog spinning around, our circuits (because they have seen dogs, and things spinning, many times in the past) can predict that even though we only see one eye now, we will soon see the second eye as the head continues to turn. This gets wired into the decoding circuits as a bias in the prediction system. The circuits that detect motion are wired to make the lower level circuits biased to detect what the system most likely expects to happen next. By having these feedback prediction circuits, the speed and accuracy of the pattern matching is greatly increased. Instead of seeing a dog ear, and having to guess if it's a fuzzy rat, or a funny shaped leaf, the brain can instantly classify it as a dog ear simply because we had already seen the rest of the dog, and our "dog-ear" circuit was primed to be the first choice answer when something close to a dog ear showed up in about the right place in our visual field. All our perception circuits get cross wired this way based on how well each one predicts the other in a temporal manner.
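A toy sketch of that priming effect, just to show the shape of the idea. The labels, the bottom-up evidence numbers, and the feedback biases are made-up illustrative assumptions: the same ambiguous evidence gets classified differently once the predictive feedback has primed one answer.

def classify(evidence, priming):
    """Multiply bottom-up evidence by the top-down prediction bias and take the winner."""
    scores = {label: evidence[label] * priming.get(label, 1.0) for label in evidence}
    return max(scores, key=scores.get)

ambiguous_patch = {"dog ear": 0.35, "fuzzy rat": 0.33, "odd leaf": 0.32}   # nearly a three-way tie

no_context   = {}                                                   # no prediction active
just_saw_dog = {"dog ear": 3.0, "fuzzy rat": 0.5, "odd leaf": 0.5}  # feedback: a dog is already in view

print(classify(ambiguous_patch, no_context))     # "dog ear", but only by a hair
print(classify(ambiguous_patch, just_saw_dog))   # "dog ear", instantly and by a wide margin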
When you take this same hardware and turn off the flow of sensory data and just let the built in temporal predictive feedback loops activate it based on what it expects should happen next, it ends up playing back a movie for you. It produces a constant running stream of predictions based on what it thought just happened last. Each event it "dreams up" keeps triggering the next most likely event to follow it.
This can be used to explain why we have "what if" hardware. Our perception hardware is simply built to work that way. Put it into some starting position, and let it run, and it will predict what will happen next. Feed it random noise, and it will still extract the best prediction it can from what it's hearing (the true basis of that movie about hearing ghosts in the white noise of TVs).
This is why we dream. Cut off the sensory data, and the what-if perception hardware just starts making a stream of predictions. Feed it random noise, and it still predicts what it thinks is most likely to be "happening" in that data.
All this perception hardware is tied into our actions as well. If we produce an action of reaching out, the prediction hardware will predict that it should see our arm reach out. If we reach out and knock something over, our what-if hardware predicts it should fall over. So it's not just watching a movie, it's like playing a video game using our own what-if hardware - which was all created simply for the purpose of pattern recognition to drive behavior selection, not for the purpose of playing what-if games. That power probably showed up later in our evolutionary history.
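The "cut off the senses and let it free-run" idea can be sketched the same way. The transition table below is a made-up illustrative stand-in for learned temporal predictions; the only point is that when each prediction is fed back in as if it had been observed, the loop plays out a little movie on its own.

predicts_next = {
    "dog turns its head": "second eye comes into view",
    "second eye comes into view": "dog barks",
    "dog barks": "dog runs off",
    "dog runs off": "dog turns its head",      # loops back around, as dreams tend to
}

def dream(start, steps=6):
    state, movie = start, [start]
    for _ in range(steps):
        state = predicts_next.get(state, state)   # the prediction becomes the next "input"
        movie.append(state)
    return movie

print(dream("dog turns its head"))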
One place (other than our dreams) that we see this hardware in action, is when you listen to a CD and there's a silent pause between songs. If you have listened to the same CD many times, right before the next song starts, you will "hear" the beginning of the song. This advance prediction of what is coming next, is driving all our perception. We just get to hear it at work in this case because of the silence.

That's right.

The CD example above is a good one.

But that reaction is better explained by the prediction that conditioning (aka reinforcement learning) creates than the simple sensory prediction of the CD example.

I see at least three basic systems at work here.
They include sensory prediction which is an unsupervised learning system which gathers data simply from repetitive exposure to similar sequences of sensory events.
Then there is the reinforcement learning to direct the outputs (behaviors) - which, when implemented correctly, ends up back-propagating predictions about future rewards, which ends up creating behaviors that look "predictive" in nature (duck to prevent being hit).
Horses no doubt have these first two.
The third feature, which is the ability to use the predictive perception hardware as a what-if tool while at the same time receiving a normal sensory stream (aka what we might call tuning out our environment and daydreaming) is something humans seem to have in ways that I'm not aware of in animals (at least not at the dog or horse level) (though it would be hard without high quality brain scanners to verify this).
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

Interesting comment. How do you know the dog doesn't recall a past memory of being let out?

Not so easy to explain, I think. The following calls for some conjecture on your part.
Do you think a dog "sees" in a similar manner to how *WE* see, ie by perceiving some sort of "image" with its brain [however this happens, which is still a great mystery], or is the dog's perception of its visual environment totally different from ours? Whatever that might be.
Now, if a dog [and other higher mammals, for that matter] "see" and perceive in a similar manner to how we do, then it stands to reason that they also compute a form of internal "mental imagery" similar to how we do. In which case, when they are dreaming, they probably also "see" little pictures in their sleeping/dreaming minds, just like we do.
Some people contend that animals other than humans are fundamentally different, in their perceptions/etc, but I seriously doubt it. If you've ever been with a dog on a mountain trail, and seen it bounding over rocks and logs, and standing atop a peak and craning its neck to peer down a 2000' rock face, you'll swear the dog perceives a visual image very similar to ours. How else could it do these things?
