Syntax and robot behavior

An excerpt from "Flesh and Machines" by Rodney Brooks

"Dances with Machines

"What separates people from animals is syntax and technology. Many species of animals have a host of alert calls. For vervet monkeys one call means there is a bird of prey in the sky. Another means there is a snake on the ground. All members of the species agree on the mapping between particular sounds and these primitive meanings. But no vervet monkey can ever express to another "Hey, remember that snake we saw three days ago? There's one down here that looks just like it." That requires syntax. Vervet monkeys do not have it."

In a previous thread, "Where is behavior AI now", we discussed time-domain-based signals on simple inputs and outputs.

Isn't Brooks saying above that animals are not able to put such time-weighted concepts (i.e., the snake we saw three days ago) into their communications? Isn't this parallel to the state-information discussion, i.e., that animals have only a very limited ability to remember state?

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.


I think the difference is that humans have the ability to manipulate private state that is independent of the environment to an extent that is far beyond all other animals. Before we can develop language to talk about what happened yesterday, we first have to remember what happened yesterday, or what happened 10 minutes ago. By this, I mean we have the power to call up memories of the past. When we do that, all that I think is happening is that our brain is partially activating old states, created from experience.

So, when sensory data flows in, it's decoded through a large parallel network which specifies how the current sensory signals differ from other sensory signals, and is decoded all the way down to the correct actions to take in response to the current sensory environment.

If we see/sense 100 things around us, it's because 100 different parts of the network are activating at the same time in response to the current sensory environment. I know there is a computer in front of me only because the parts of my brain that represent that idea have been activated in response to this visual data. But at the same time, many other lower-level parts of the brain are being activated by the vision data - the parts that detect simple edges, areas of color, and shapes. It's all of this combined that creates our full experience of seeing a computer.

All that "state" is activated directly by our current and recent past sensory inputs and all that state is also driving our behavior.

But humans also have the ability to activate some of that network state independent of the current sensory inputs. We can create a memory of something that happened in the past by making part of that network activate again. I can close my eyes and still "think" about looking at the computer. This memory is weak and poor compared to the sensation of actually seeing a computer, because only a very small part of my brain is being put back into that "seeing a computer" state when I have the memory.

If I remember seeing a snake yesterday, it's because my brain has sections which are able to disconnect from the current sensory experience. It's hard, for example, for me to look at my computer screen and have a past memory of seeing a snake at the same time. I almost have to close my eyes, or at least concentrate to block the sensory data, in order to have a past memory of seeing a snake. This is because the current sensory data is trying to force the brain into the configuration of looking at a computer monitor.

Nonetheless, humans have a lot of power to do things like close our eyes and let our minds drift back to partial (very partial) recreations of past experiences.
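This picture of a memory as a partial reactivation of an old network state is, incidentally, very close to how attractor networks behave. As a toy illustration (not a claim about how the brain actually does it - the sizes, random seed, and patterns are arbitrary), a Hopfield-style network can settle back into a stored state from only a weak fragment of it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few binary "experience" patterns (+1/-1) in a Hopfield-style net.
N = 64
patterns = rng.choice([-1, 1], size=(3, N))
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

# "Remembering": reactivate only a small fragment of one stored state.
cue = patterns[0].astype(float)
cue[16:] = 0.0                      # most of the original state is missing

state = cue
for _ in range(10):                 # let the network settle
    state = np.sign(W @ state)
    state[state == 0] = 1.0         # break ties

overlap = float((state == patterns[0]).mean())
print(f"overlap with stored experience: {overlap:.2f}")
```

Even though most of the cue is zeroed out, the dynamics pull the network back toward the full stored pattern - a "very partial" cue recreating a complete past state.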

Our behavior, however, is a function of the entire state of the brain. So, when parts of our brain are recreating state from past experience, our behavior can also be a function of that part of our brain state, instead of being a function only of brain state created from current sensory experience. In other words, we can produce behaviors that are a function of our memories. We can say something like, "Hey, that snake is like the one I saw yesterday." That's because when we first saw the new snake, a small part of the brain switched back to a state that represented what happened yesterday. But not all of it switched back (not very much of it at all, really), which is why we can for the most part not be confused about what is happening now and what is a memory. We only get confused about that when our sensory inputs are cut off, so that the memories are all we have to react to - like what happens when we sleep and dream.

I think animals have nearly as much state information in their brains as we do. It's just that most of their state is always directly driven by the current sensory inputs. Most of our state works that way as well. It's how we know where we are and what's going on around us. But because we have this percentage of the brain that flaps in the wind and can flip back to old states, it allows us to react now as if we were reacting to something that happened last week.

So I don't think we have that much more state, or that we can react in ways all that much more complex than the ways many higher animals can react to their internal state. I just think that, for some reason, some sections of our brain have a bit more freedom to disconnect from current sensory inputs and switch to active configurations which represent states that were active in the past. We constantly have memories of past events which allow us to act in complex ways that many animals don't seem capable of. Most of them seem far more forced to react only to what is happening around them, instead of having a brain that can switch to a past experience (aka daydream).

A dog, for example, shows clear signs that its behavior is based on past experience. It runs to the door to be let out because it knows that door is how it gets let out. But this doesn't seem to happen because the dog can recall a past memory of being let out that door. It seems to happen simply by conditioning. I don't see any real signs of dogs daydreaming. They only seem to react directly to what's happening around them (except when they are sleeping - they do seem to have dreams in that case, but that's easy to explain if the body cuts off the sensory signals and lets the brain free-wheel).

-- Curt Welch

Yes to all you've said, except I interpret it differently.

I describe it by saying that we have an additional sense, a "sense of thought". That is, we can perceive our own thoughts at a sensory level, and process them through the lower brain's sensor-fusion circuits, without losing track of the fact that what we sensed actually originated internally - just as we don't lose track of the fact that what we just saw was a sight, not a sound.

I think this idea can be used as a basis for explaining most of what we observe as consciousness, dreaming (including day-dreaming), etc., and probably very many dysfunctions also, such as autism, bipolar disorders, etc.

I mention autism because if the sensory origin of each percept is lost during sensor fusion, then the ability to distinguish internal from external senses is lost, and internal disturbances can create instability in the apparent "real world", leading to profound disorientation. I would expect behaviours on the order of those encountered in autism to be the result.

Clifford Heath.


Calling this syntax and technology is one way to say it. Most people would probably say something more like animals don't have a good ability to manipulate symbols. Daniel Dennett, the philosopher, says that at best chimps and apes have the ability to use "proto-symbols".

Analogous to the snake thing, a while back I invented a similar scenario regarding two chimps. Imagine one chimp trying to explain to another chimp that he had just eaten a grub behind that bush over there. With no language, how does the chimp go about explaining "grub", "behind that bush", "before" (i.e., past tense), and so on? Past tense alone is a killer for a chimp. Explain "yesterday".

BTW, as you read Brooks' book, ask yourself if it is really announcing the death-knell of reactive robotics, as I suggested in a prior post. Towards the end, he asks why AI hasn't succeeded better, and finally comes to the conclusion that #5 = there is some "missing stuff". So, now he's on to the "Living Machines" project.

EEHH! Wrong answer. It's actually #2 = not enough complexity in our current AI systems. [Note: I read the book 3 years ago, and may have the numbers wrong.]

Personally, I think we need to think more about information- and signal-processing machines rather than state machines per se, although the two might ultimately be congruent. Animals do have a very good ability to remember things, but they don't have 2 things that humans have: (1) the ability to manipulate symbols [as indicated above], and (2) the ability to work through complicated temporal sequences. Just try to get a chimp to write a short computer program [forget about the billion monkeys typing on keyboards and writing Shakespeare by random chance]. Just try to teach a dog to unwrap himself when his leash is wrapped around a tree. A trivial sequential task, but no dog ever did it [that I know of].

One problem with Brooks' simple reactive machines, and that I've also been dealing with lately myself, is the problem of **getting stuck on local maxima**.

This is the problem with simple reactive bots, for which "the environment is its own best representation" [Brooks famous punch line], and which don't have specific memory/tracking/representational systems [and I mean "specific" here], and which your own basic FSMs will not deal with very well.

This does go back to the idea of "remembering state". The typical FSM, once in a state, does not have any knowledge of how it got into that particular state, since most FSMs are big loop systems with multiple possible trajectory paths. My walking-machine controllers are the ultimate example of this. They just repeat the same leg sequences over and over. When one is in a particular state, there is no memory of its past history prior to getting into that state. It's just gone through a transition from one state to the next, is all.
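This property is easy to see in code. A minimal gait FSM (the state names are invented for illustration) carries nothing but its current state; once it is in a given state, nothing records whether it arrived there on the first lap of the loop or the hundredth:

```python
# A toy gait controller: each state knows only its successor, nothing else.
GAIT = {
    "lift_left":   "swing_left",
    "swing_left":  "lift_right",
    "lift_right":  "swing_right",
    "swing_right": "lift_left",
}

def step(state):
    """Advance one tick. Only `state` is carried forward --
    the machine keeps no record of the path that led here."""
    return GAIT[state]

s = "lift_left"
for _ in range(6):      # run the loop for a while
    s = step(s)
print(s)                # whatever state we land in, the history is gone
```

Adding any notion of "how did I get here" means adding extra variables on top of the FSM - which is exactly the kind of bolt-on discussed below.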

Also, in past weeks, I have been playing both with photovore-sensor behaviors and, this past week, with sonar "echo-vore" behavior. By this last, I mean I'm using 2 sonars differentially to keep a bot aligned with a wall or other surface, just like using 2 photocells to track a light source.

The problem with both of these situations is that the sensor systems can easily get locked onto false maxima, by turning so far that they lock onto a new surface or light source different from the original. This is the downside to using the environment as its own best representation, and not having some sort of historical or internal representation of what has been happening. That's the downside to too-simple BBR. As robot programmers, we have all invented ways to get past this problem by adding rules, etc. - i.e., something on top of simple reactive BBR. Rules that kick in, in certain situations, based upon the recent history of events.
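Both halves of this - the bare reactive rule and a history-based rule layered on top of it - might be sketched as follows (the sensor model, window size, and oscillation test are all invented for illustration):

```python
def steer(left, right, k=0.5):
    """Plain differential ('photovore') rule: turn toward the stronger signal."""
    return k * (right - left)       # positive = turn right

class TrackerWithEscape:
    """Reactive steering plus one rule based on recent history:
    if the turn command keeps flipping sign, we have probably locked
    onto a false maximum (a new wall or light), so trigger an escape."""

    def __init__(self):
        self.recent = []            # short history the pure reactive bot lacks

    def update(self, left, right):
        turn = steer(left, right)
        self.recent.append(turn)
        self.recent = self.recent[-5:]          # keep a 5-sample window
        signs = [t > 0 for t in self.recent]
        if len(signs) == 5 and signs.count(True) in (2, 3):
            return "escape"         # oscillating back and forth: bail out
        return turn

# Oscillating readings (locked between two surfaces) trigger the escape rule;
# consistent readings just produce an ordinary steering command.
bot = TrackerWithEscape()
for l, r in [(1, 2), (2, 1), (1, 2), (2, 1), (1, 2)]:
    out = bot.update(l, r)
print(out)
```

The `steer` function alone is pure "the environment is its own best representation"; the `recent` list is the minimal internal representation of what has been happening.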

-- dan michaels

Very interesting and pertinent post. Let me just take a small portion for an initial reply, and then maybe I will do later posts on other sections.

I've said before that I am greatly influenced by Julian Jaynes' thinking, as recorded in "The Origin of Consciousness in the Breakdown of the Bicameral Mind". I'll be referring to him in my response to this first section above.

Yes, we can remember the past, and we can call up the past disconnected from the present senses, but we also have a very unique ability to call up "memories" of the future. By that I mean we can look at a situation and play "what if" with it. We can make a premise: okay, if I do this, then this will happen. In my mind, "this will happen" can actually be a little movie of what happens in the future. Then I will change the premise and wonder, okay, if I do this instead, then what will happen, and another little movie of consequences plays, and so on, till I hit one whose results I find acceptable.
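This premise-and-movie loop is essentially what a model-based planner does: roll a forward model ahead under each candidate action, and keep the premise whose imagined outcome looks best. A toy sketch (the "physics" and the scoring here are made up purely for illustration):

```python
def forward_model(state, action):
    """Imagined physics: predict the next state from the current one.
    A stand-in for whatever learned model plays the 'little movie'."""
    return state + action

def imagine(state, action, horizon=3):
    """Play the movie forward a few steps and return the final state."""
    for _ in range(horizon):
        state = forward_model(state, action)
    return state

def choose(state, actions, goal):
    # Try each premise, watch its movie, pick the outcome nearest the goal.
    return min(actions, key=lambda a: abs(imagine(state, a) - goal))

print(choose(0, [-1, 0, 1, 2], goal=5))
```

Nothing here ever acts in the world until the imagined consequences have been compared - the "memories of the future" do the work before the behavior is emitted.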

Where is this stage upon which our little internal plays are produced? Weirder yet, Who is that character we see playing our part in the play?

Jaynes says this is our internal mind-space, and it is big enough to hold a universe inside, because when we think about the universe, that's where the thing we're thinking about is: inside our internal mind-space. We also have a player to put on this stage that is us, which Jaynes calls the "Analog I".

Now, Jaynes' ideas are very rich and complex, but I'll stop at this one, because it is an extension of what you've suggested: that humans can visualize things separate from their current state, and they do this by reactivating past state.

It's more than that. We can also imagine a play with objects from past state in a visualized future scenario inside our minds.

Just to recap: it is current and past sensory inputs, but also a predicted future sensory expectation, that drives our behavior.

Proof? Easy. If you flinch when you see something coming at you, you are not reacting to past or current sensory input alone (i.e., you feel no pain yet); you are also reacting to a future expectation.

Now, what do we do with this conclusion? Because many animals flinch. If you've ever ridden a horse, you know how often they make future predictions on limited input data, jumping from perceived predators, whether they are there or not.

Can horses visualize the future? I'm surprised to hear myself suggest: yes, I guess they must. What does that say about their ability to manipulate mind state?

-- Randy M. Dumse


Yes, I've used that exact idea many times in trying to explain (and understand) what thought is. I strongly try to argue the point that we sense our thoughts just like we sense the external world.

But here's where it gets interesting.

Just where in the brain does "sensing" start and stop, and where does it turn into something else? And what else does it turn into after it stops being sensing? I think the entire path from sensor to effector is doing "sensing". I even argue that every neuron in the brain is acting as a sensor. But instead of sensing light, or heat, or pressure, most are sensing temporal patterns of neural activity in other neurons. Most of the neurons in our brain are, in fact, "brain activity sensors". With a head full of brain activity sensors, is it surprising that we can sense our own thoughts? :)

I used to think of my brain the same way I thought about a piece of electronic equipment, like an audio amplifier connected to a microphone and speaker. The sound that it senses exists at the microphone and is recreated at the speaker, but in the middle there was just "magic" that represented the sound using electrons. The sound didn't exist inside the machine. The amplifier "sensed" the sound only at the microphone. That was where sensing happened in a system like that.

Likewise, I felt that I sensed with my eyes and ears. I felt that the processing that then happened inside the brain was all invisible to me - it was all just part of the magic of the subconscious. When I saw a dog, I was seeing it with my eyes. When I heard a dog, I was hearing it with my ears (as we normally talk about these things).

But, after more thought on this problem, I realized that can't be how it works at all. This is because as the data from the eyes is processed through higher levels of the brain, the neural circuits are responding to higher-level abstractions - to higher-level meaning in the data. The cones in the eyes only see brightness, or lightness, at one spot. Further along, we have neurons that "see" center-surround features. Further along, there are neurons that "see" edges. Way further along, there are collections of neurons that "see" the dog. It's not the eyes, or the first N layers of processing, that "sees" the dog; it's only the higher-level circuits that can see a dog. And they "see" it by becoming active when the sensory data contains the correct type of dog pattern. The dog-detection circuits in our brain are not seeing the light; they are seeing the "dog" in the firing patterns of other neurons.

We see how this can turn off and on when given optical illusions that are so distorted that they are hard for our brain to find patterns in - such as the classic dalmatian dog in the middle of a picture full of black and white shadows. The first time you see the picture, you don't see the dog. But at some point, you find enough clues, and suddenly the dog jumps out at you. The picture before we saw the dog is what the lower-level detection hardware was able to make out (mostly just odd, meaningless white and black spots). But then suddenly the "dog" detector found enough patterns to work with, and started to fire. Everything we understand about what we see, and hear, and feel, is due to the fact that there are neurons firing in our brain that represent that understanding. It's all just brain activity.
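This chain of detectors - edges feeding shapes feeding a "dog" unit that only ever sees the firing of the layer below it - can be caricatured in a few lines (the features, thresholds, and test pattern here are invented, not a model of real cortex):

```python
# Each layer "sees" only the firing pattern of the layer below it.

def edge_layer(pixels):
    # Crude edge detector: fires where adjacent brightness differs sharply.
    return [abs(a - b) > 0.5 for a, b in zip(pixels, pixels[1:])]

def shape_layer(edges):
    # Fires when enough edges are present at once.
    return sum(edges) >= 3

def dog_detector(shape_active, spot_active):
    # The "dog" unit fuses lower detectors; it never sees light at all.
    return shape_active and spot_active

pixels = [0.0, 1.0, 0.0, 1.0, 0.1, 0.9]   # raw brightness at each spot
edges = edge_layer(pixels)
print(dog_detector(shape_layer(edges), spot_active=True))
```

Note that `dog_detector` takes booleans, not pixels: like the high-level circuits described above, it sees "dog" only in the activity of the units beneath it.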

In other words, what this implies, is that our conscious awareness, aka, what we are constantly aware of, is all the neural activity happening in our brain. It's not the microphone, or the eyes, that's doing the real sensing work at all, it's a head full of neural activity detectors that allow us to have this complete understanding of what's happening around us.

But where does the job of sensing stop, and the job of acting begin? Is one part of the brain used for sensing, where all our awareness is generated? The only real option would be the sensory cortex vs. the motor cortex. But I strongly suspect the motor cortex is nothing more than a sensory cortex wired up to sense our own behavior - to sense the outputs of the brain. So if we have conscious awareness of all the "magic" happening in the sensory cortex, we should have conscious awareness of all that is happening in the motor cortex as well. In other words, the entire information-processing path, from sensor to effector, is our conscious awareness - none of it is hidden from us.

So this implies that the processing happening in the brain is not hidden magic like what happens inside some piece of electronics. It implies that everything we are aware of is what is happening in our brain, and that nothing else is happening in there (at least not in the path that connects sensors to effectors and forms this major feedback loop through the environment). What we are not aware of is the low-level chemical and biological processes at work which are busy re-wiring our brain - adjusting weights, adding neurons, etc. The data flowing in the brain is what we are aware of.

So, when I look around the office and see all the stuff, it's not the office I'm sensing as much as it's the brain activity that I'm sensing. The stuff in my office is what caused these patterns of neural activity to form in my brain, but what I'm aware of is the neural activity, not the office itself. It's like looking at a TV screen and seeing people, while knowing that in fact all I'm seeing is flashing red, blue, and green dots on the screen; I now look around the office and know that what I'm seeing is not an office full of stuff, but just the flashing of billions of neurons.

So with all that as background, let me repeat what you wrote above:

I don't think we have "lower level" sensory-fusion circuits. I think the whole thing is sensory-fusion circuits. The fusion happens at all levels. When a center-surround detector activates because of the correct pattern of light levels in a small collection of cones, it is doing fusion. It's the same fusion that happens at all levels. The neurons are detecting temporal patterns of activity in other neurons, and in doing so, they "fuse" the information in those other neurons into a new piece of information. A dog-detector neuron fuses activity from other lower-level detectors, which had previously fused activity from the detectors below them.

We see the dog because the dog neuron (or neurons) activates. We see his spots at the same time because there are "spot" detectors activating at the same time. We see 3 of his legs, and 1 ear, because 3 "dog leg" detectors and 1 "dog ear" detector have also activated. The entire sensation of seeing a dog, in a particular position, is the sum total of the activation of all these detectors at once.

But what happens if later, the "dog" detectors activate, but there are no "dog leg" detectors active, or "dog ear" detectors active, or "dog spot" detectors active? This is the sensation we have of "a thought about a dog". What the thought is "about" is controlled simply by which high level detectors have been activated.

How do we tell a memory of a dog from the sensation of seeing a real dog? Simply by the fact that all the lower-level detectors are not active at the same time. We no doubt have many different types of "dog" concepts, all associated with different detection circuits in the cortex. The type of "dog" thought we are having is created by different combinations of these detectors.
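That distinction - a full percept when all levels fire, a mere thought when only the high-level unit does - is easy to state as code (the detector names are illustrative, not anatomical):

```python
def experience(active):
    """Classify a brain state by which detector levels are firing.
    `active` is the set of currently firing detector names."""
    high = "dog" in active
    low = {"dog_leg", "dog_ear", "dog_spot"} & active
    if high and low:
        return "seeing a dog"            # high + low levels: a percept
    if high:
        return "thinking about a dog"    # high level alone: a memory/thought
    return "no dog involved"

print(experience({"dog", "dog_leg", "dog_spot"}))
print(experience({"dog"}))
```

The same "dog" unit fires in both cases; only the accompanying low-level activity tells perception apart from memory.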

So, I agree with your idea that we are sensing our own internal thoughts. But I don't believe it works because there is something feeding brain activity back to our lower-level sensory circuits. I think it's just a natural consequence of what the entire brain is doing. It's translating sensory data into effector data, and all the middle terms of this translation are what make up our "awareness". There are "dog" signals in the middle of this translation simply because it was helpful for the brain to create these signals on its way to creating the signals that control our arms and legs.

Yes, as a matter of fact, I've used the position that we are sensing our thoughts to explain in a mechanical way both what consciousness is, and why there is this pervasive myth in all cultures that the mind is separate from the body. I think it's the only correct way to understand our thoughts.

I don't understand autism well enough to know much, but I've wondered about these sorts of things as well. A model which correctly explains brain function should also be able to explain all the brain dysfunctions. The brain dysfunctions should act as strong clues about the structure of the brain - if we only knew how to read the clues.

If a lack of connection to reality is a symptom, though, the issue could be explained by a brain which is configured to be less sensitive to current sensory inputs, and more free to flap in the wind and generate its own memories and thoughts - more like a person constantly daydreaming and only slightly connected to reality.

-- Curt Welch

Yes... but that tends to imply a single amorphous massively-connected network, and that's clearly not what brains are. They exist in clumps, layers, and clusters of neurons having both local and distant communication channels, but local circuits predominate. Current theory is that there is a form of voting going on in sub-oscillatory circuits based on particular distances (loop-lengths, which relate to time-delays) being typical of each individual site. I.e., a certain cortical surface may be populated by neurons having most synapses almost exactly 3.2mm away, where other areas of the brain have other distances. The brain decides what it's sensing by the circuits forming reinforcement patterns that cause a particular pattern in that area, like when a player at Risk fights back and forth and eventually wins a continent.

Yes, possibly, but not all of the brain is contributing to all of the activity - the brain is not amorphous. The morphology has been extensively studied and related to particular sensations and activities, and there are clearly zones that correlate between individuals. There is also strong evidence that core structures from the cerebellum up relate to evolutionary phases - we have a reptilian brain, a mammalian brain wrapped around that, and a human brain in the morphology and function of the cortex. Many of the functions, e.g., of the mammalian brain (mothering, emotion, etc.) can be observed in all mammals, but not in reptiles... so these functions are clearly arbitrated within these structures, not the older ones. Though all parts of the brain might be involved, the functions only emerge when these higher structures are intact.

In regard to sensory fusion, it's very interesting to read studies on synaesthesia. In this condition, senses cross over - so a taste might be sensed as a texture ("there are too many points on the chicken stew"), or a sight might evoke a colour. By mapping the dimensions of the senses reported, it's possible to learn about the kinds of sensory processing occurring on individual senses before the sensations are fused. I figured out some of this stuff after reading "The Man Who Tasted Shapes" - look it up.

However, the upshot is that though we form a particular concept like "dog" at some level, the sensory origins are not lost in the fusion process. That means each concept is associated with the activities (or at least the level of involvement) of the individual senses involved, so we can normally tell what part of our thought is imaginary. My theory is that we can inject hypothetical sensations into the middling stages of sensory fusion, while retaining the ability to determine that the outcome of the fusion has been affected by the injection. It's this ability to distinguish internal sensations from external ones that we call consciousness.

The evolutionary driver for it is that being able to conjure up hypothetical situations improves our ability to predict. We can run "what if" scenarios in more complex ways than lower animals. A hunting cat might have learnt that to stalk prey from behind cover is better, but it still has to look here and there to decide where the cover is best - that's a "what-if" scenario. Humans are just better at it. Prediction is the core of intelligence. It's also the core of our enjoyment of company, humour and music. We have a biological imperative to improve our ability to predict, in order to eat instead of being eaten.

BTW, the study of information theory and in particular data compression is absolutely key to understanding what's going on and how to reproduce it artificially. Compression is the art of removing whatever is predictable in an information stream. I've wondered if it would be feasible to build a type of robotic cerebellum using a small micro with large data compression tables to create learning behaviour like our muscle memory. Like Asimo's taught strides, but learnt instead.
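The compression-is-prediction connection can be shown with a toy table-based sequence predictor, loosely in the spirit of the "compression tables" suggested above (the order, the gait string, and the API are all invented for illustration):

```python
from collections import defaultdict, Counter

def learn(sequence, order=2):
    """Build a context -> next-step frequency table (the 'muscle memory')."""
    table = defaultdict(Counter)
    for i in range(len(sequence) - order):
        ctx = tuple(sequence[i:i + order])
        table[ctx][sequence[i + order]] += 1
    return table

def predict(table, ctx):
    """Predict the next motor command from recent context. Whatever is
    predictable need not be stored or transmitted -- that is compression."""
    return table[tuple(ctx)].most_common(1)[0][0]

stride = list("LRLRLRLRLR")      # a taught, repetitive gait
table = learn(stride)
print(predict(table, ["L", "R"]))
```

The same table that compresses the gait (by predicting it) can also reproduce it - which is the sense in which a learned predictor could stand in for a taught stride.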

No. But remember - the cortex is only the couple of millimetres of the outer skin of the brain. Most of the kilojoules are burnt there, but the rest of the brain volume is also active, it's not just a backplane.

Well that's demonstrably wrong (reflexes and muscle memory aren't subject to direct perception and control), but I see where you're trying to go with it.

That theory offers no explanation of synaesthesia, for example.

That's true - but there is still a stage where a sound, or a sight, is still an isolated phenomenon, not yet joined into a single concept. That's a lower level, and AFAIK, not one into which we can inject a hypothetical sensation. At some point the sensations are related to each other, unified into a single concept, and it's likely there that we can inject our "sense of thought". The circularity of this causes pulsing oscillations of activity which we identify as thought and can measure on EEGs - they're brainwaves.

I think you contradicted yourself pretty thoroughly there. It might not be accurate to talk of "levels" of processing, since there is a clear circularity, but there is clearly localization of certain *kinds* of processing. Not all areas are accessible to direct perception.

Clifford Heath.


This is very likely where Brooks' idea of the subsumption architecture originally came from. Newly-evolved areas subsuming functions of ancient structures.

That sounds like one of Oliver Sacks' essays. You might also check out V.S. Ramachandran's book, A Brief Tour of Human Consciousness, 2003, which also talks about synesthesia. His evidence indicates that synesthesia may be due to incoming sensory fibers which inadvertently migrate from their proper termination areas of [usually temporal] cortex into nearby areas which process a different sensory modality.

"... Ramachandran suggested that synesthesia may arise from a similar cross-activation between brain regions. However, rather than being within a single sensory stream, this form of cross-activation would occur between sensory streams, and is thought to be due to genetic differences, rather than neural re-organization..."

This is no doubt based on the fact there is topographic [spatial] mapping from senses to cortex in all modalities, and all later-on processing uses these maps as the "primary" reference point.

"taught strides" ??

Curt's comments seem to imply there is no such thing as subconscious processing, whereas it's commonly accepted that maybe 99% [whatever] of brain processing occurs subconsciously, below our awareness level.

-- dan michaels

Interesting comment. How do you know the dog doesn't recall a past memory of being let out?

Not so easy to explain, I think. The following calls for some conjecture on your part.

Do you think a dog "sees" in a similar manner to how *WE* see, i.e., by perceiving some sort of "image" with its brain [however that happens, which is still a great mystery], or is the dog's perception of its visual environment totally different from ours? Whatever that might be.

Now, if a dog [and other higher mammals, for that matter] "sees" and perceives in a similar manner to how we do, then it stands to reason that it also computes a form of internal "mental imagery" similar to ours. In which case, when dogs are dreaming, they probably also "see" little pictures in their sleeping/dreaming minds, just like we do.

Some people contend that animals other than humans are fundamentally different, in their perceptions/etc, but I seriously doubt it. If you've ever been with a dog on a mountain trail, and seen it bounding over rocks and logs, and standing atop a peak and craning its neck to peer down a 2000' rock face, you'll swear the dog perceives a visual image very similar to ours. How else could it do these things?

-- dan michaels

It's just an issue of what parts create the "subconscious". That's a big part of the point I was getting at in the post. Whereas I had assumed for a long time that a lot of the neural signals in the neocortex were part of the "magic subconscious" that made everything work, I now believe it's more likely that the firing of the neurons, and the signals the spike codes represent, are the consciousness we are aware of. The underlying mechanisms that regulate when the neurons fire, and which rewire or adjust the connection strengths between the neurons, are the subconscious process we are not aware of. That is, when a synaptic strength changes, we are not consciously aware of that change. But when the neuron ends up firing later, when it wouldn't have otherwise, which then triggers a chain of activity in other neurons - that's what we are consciously aware of.

But notice I'm not talking about the entire brain and every neuron that fires. I'm only talking about those neurons (mostly in the neocortex) which are part of that direct signal path that connects sensory inputs to effector outputs, and all the loops in that path.

Reply to
Curt Welch

Oh yes, those direct paths. What do you suppose the neurons stuck in indirect paths are up to?

-- Joe Legris

Reply to
J.A. Legris

That's what the neocortex is, and it's mostly what I was making reference to even if I didn't make that clear. The lower, older brain is mostly not part of the path from sensor to effector output.

The neocortex is mapped rather well into many different areas (I think Dan likes to quote something like 21 visual areas?). But it's not mapped by its physical structure as the rest of the brain is - it's mapped only by the nature of the signals being carried in each section - just as we would have to map computer memory. The neocortex, like computer memory, is a single amorphous massively-connected network - well, not exactly, because there are distinct pathways between different cortical sections - but the entire neocortex is made up of the same fundamental hardware structure, just like computer memory is all made up of the same fundamental hardware structure, grouped into modules with pathways between them.

There are many current theories but few actual answers. :)

Are you talking about the diameters of the interconnects for superficial pyramidal neurons? Yes, it's clear that evolution has tuned their design to optimize the structure in different sections of the neocortex, and from species to species they are different. But still, the entire neocortex is basically the same structure - micro columns combined to form macro columns combined to form various cortical regions.

Yeah, the cortex definitely forms networks which seem to want to lock into different stable states - Walter Freeman's work seems to come up a lot when people talk about the connection to chaos theory and strange attractors in how the brain's state seems to lock onto various stable points.

The basic behavior of the brain tending to lock into a stable state is easily demonstrated with optical illusions that are poised between two stable interpretations, such as the goblet that looks like two faces. We can make our brain jump from one to the other, but once it's changed, it wants to lock onto the new interpretation and block the other.

This tendency is also easily understood by looking at the cortex as a pattern recognizer which uses feedback to lower levels to improve the accuracy of the interpretation. Once the higher level circuits see a "dog" in the picture, that information seems to be fed back to lower levels to allow us to see something like a dog-ear which we would never have interpreted as a dog-ear had we not first recognized the larger image to be a dog. Once you add feedback of that type, the system will naturally want to "lock" into a stable configuration (the one created by "I see a dog and lots of dog parts").

But still, it's not the fact that the brain has locked into a stable configuration that means "dog+dog parts". It's the actual neurons that fire which seem to represent what we are sensing. When the brain is locked into the "dog" pattern there will be a different set of neurons activated than when it's locked into a "cat" pattern.
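The lock-in behavior can be shown with a toy winner-take-all loop. This is only a minimal sketch with invented gain values, not a model anyone in the thread proposed: two high-level interpretations (say, "faces" vs. "goblet") each re-excite themselves via feedback and inhibit the rival, so whichever starts with slightly more bottom-up evidence captures the whole network.

```python
def settle(a, b, steps=50, inhibition=1.2, feedback=0.3):
    """Iterate two competing interpretation units until they lock.

    a, b: initial bottom-up evidence for each interpretation (0..1).
    Each step, a unit re-excites itself (top-down feedback) and
    inhibits its rival; activity is clipped to the [0, 1] range.
    """
    for _ in range(steps):
        a, b = (min(1.0, max(0.0, a + feedback * a - inhibition * b)),
                min(1.0, max(0.0, b + feedback * b - inhibition * a)))
    return a, b

print(settle(0.52, 0.48))  # a slight edge for the first unit wins outright
```

Even a 52/48 split in the starting evidence ends with one unit fully active and the other silenced, which is the "lock onto one interpretation and block the other" behavior in miniature.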

Yes exactly. There are zones that represent different colors, and zones for faces, and zones for sounds. Each neuron or small cluster of neurons (like a cortical micro column) seems to have a precise "meaning" to our conscious awareness. We only sense that "meaning" when those neurons are active. Our total conscious awareness at any one time is then, most likely, simply a function of which cortical columns are currently active.

At the same time, these neurons can be activated artificially and in doing so, people report they are able to sense some conscious awareness connected to the stimulation. I don't know how extensive this type of testing has been to try and get a better map between activity and conscious awareness. I suspect it's considered too dangerous to do much of with humans so I suspect the testing has been limited mostly to cases where brain surgery was needed for other reasons.

Right, I'm only really interested in the neocortex. The rest for the most part doesn't seem to be very relevant to AI - it has more to do with keeping us alive than in making us intelligent.

I've not seen much on that.

I'll keep it in mind.

One of the biggest mysteries of the brain is trying to understand why each of our sensory experiences seem to have such a unique, and different sensation. Why does red look red to us for example? The neurons that define redness for us are not any different than the neurons that define blue for us. So why do we end up with such different perceptions of these signals?

The answer I like is that our perception of all the senses is simply created by how they all relate to one another in the brain - in how they are associated and in the behaviors they tend to create in us. This is a fairly weak answer, but it's the only one that makes sense to me.

This would imply that sounds should always sound like sounds, and not invoke a color sensation. The only way I believe that could happen, is if the color sensation was formed by mostly processing visual data in the correct way, but then having a cross connect from the auditory data into the color sections that could also, at times, stimulate a similar pattern. In other words, a slight, but not strong, fusion of sensory data, where small amounts of visual data might be leaking into the auditory sections, or small amounts of auditory data was leaking into the visual processing areas. But this is just another wild guess.

I don't really follow your logic there. Are you saying the "imaginary" thoughts are the ones which don't have a sensory origin?

Yeah, that's basically the same thing I'm saying anyway. The tricky part is when you switch from words like "we can inject" to "the brain is structured like xyz..." - it gets very confusing to understand what the "we" is you are making reference to and what it means "to inject". I however make the same mistake as well and don't always have a good translation to pure terms of "the brain is structured ...".

Yeah, that's clear. I've seen others try to justify our ability to talk to ourselves as a natural evolution of language to allow us to have private thoughts that others can't hear. But that doesn't really answer why we have the ability to have non-linguistic thoughts, which probably showed up before language.

Understanding the power it gives is fairly easy. But understanding the evolutionary path the brain must have undergone to get there - what it was before, and after - is not so easy.

Yeah, I understand the connection to information theory. I've been looking at ways to use it to build networks for a long time. I'm reading "Spikes : Exploring the Neural Code", by Fred Rieke currently and just hit the chapter where they jump deep into information theory as a tool for understanding neural spike trains.

The gray matter, which is those few millimeters of outer skin, is where all the neocortex neurons are. The white matter, which is most of the mass inside, is just backplane.

Most of the rest of the brain is not part of the direct pathways from sensors to effectors and not part of our voluntary control system. The lower brain seems to be mostly stuff to keep us alive. The exception is parts of the midbrain which seem to be there to support the operation of the cortex.

Right. Where I wrote "brain" substitute "neocortex".

Sure it does. As explained above.

Very true. There are large sections of cortex which are fed only by a single modality. But at some point, the regions all merge.

Right. That's totally consistent with how I look at it.

But that's not quite how I look at it.

It's just invalid in my view to think that sensory fusion happens after the visual cortex for example. The entire visual cortex is doing sensory fusion. The eye is not one sensor. It's millions of them. And the data from each of these individual sensors must be fused in the same basic ways sound and vision data must be fused. The N layers of visual cortex are simply fusing individual light sensors into higher level visual concepts.

Likewise, we don't have one sensor signal coming from the ear, we have a huge bundle of sensory signals coming from each ear which must be fused to create higher level auditory information.

At some point, these different modalities start to fuse, but it just seems wrong to think that there was no fusion happening in the visual cortex, and that some magical "fusion" process then happens when data from the ears first mixes with high level fused data from the eyes.

Dan and I get into these sorts of debates all the time in c.a.p. He tends to lean towards the belief that each of the distinct regions of the cortex that have been mapped out is a special design created by a long process of evolution to do one special type of data processing task. So each of the 21 visual areas is a different circuit, doing a different job. To duplicate this in our software, we would need to write different software for processing each type of data. Fusing left and right eye data would be a totally different algorithm than fusing ear and eye data.

I on the other hand lean towards far greater reductionism and believe the entire cortex is basically performing the same algorithm, and that each section has only had that algorithm tuned to best fit the nature of the data it is processing. I believe one algorithm can basically perform all data fusion. I believe that it will be found by following ideas just like you suggested above, with the data compression and information theory based ideas.

However, in terms of the "injection point", it seems to me that when I have memories or thoughts, they all seem directly associated with sensory data. I have visual memories, or sound memories, or smell memories, etc. But yet, as you say, it can't be happening at a very low level, because the memories are always such a pale echo of the real thing. If they happened at a low level, I would expect them to be more life-like. So either the effect is happening higher in the chain, or else it works as some partial reactivation starting lower. This is the stuff that would be so easy to figure out if we only had high quality brain activity scanners (down to the neuron level). Hopefully someone figures out how to do that without hurting the brain some day.

Yeah, that seems to be somewhat logical.

I think it's valid to talk about levels. But it only goes so far since there seem to be so many feedback loops in the process as well.

But by "entire brain" I was really referring to just the neocortex and being sloppy in my description, not all areas of the entire brain.

Have you read Jeff Hawkins' book, "On Intelligence"? He too is big on the belief that the neocortex is the "interesting" part of the brain from an AI perspective and seems to believe it's basically one algorithm at work. He sees it as a data processing problem of extracting invariant representations - which is basically consistent with various information theory ways of looking at what the cortex is doing.

The part I think he's missed, and the one I stress, is the reinforcement learning that is needed as well. Basic data extraction or compression ideas can lead us to circuits that fuse data and extract the essence of the information in the data - removing duplicate information that flows in through different sensory channels (be it from cone to cone in the eye, or from eye to ear). This is what would allow us to turn hard-to-interpret data like a million pixels of visual data, into signals that tell us there's a cat in the environment. But knowing there's a "cat" in the environment doesn't answer the question about how we (or a robot we want to build) should react to the cat. Should we chase it as food, or run away, or just ignore it, or what? The basic fusion algorithms are important in telling us as much as possible about the state of the environment in a form that's easy to use and as compact as possible (aka compressed), but you have to add to that reinforcement learning, for the system to learn how it needs to react to the current state of the environment to reach its goals (get food, don't let your body be harmed, reproduce, etc).
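To make that last step concrete, here is a minimal tabular value learner layered on top of an already-compressed state ("there's a cat"). The actions and reward numbers are invented purely for illustration; this is a standard one-step reinforcement learning sketch, not anyone's actual brain model:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# Assumed setup: perception has already reduced raw input to the
# state "cat"; these candidate reactions and payoffs are made up.
ACTIONS = ["chase", "flee", "ignore"]
REWARD = {"chase": 1.0, "flee": -0.2, "ignore": 0.0}

q = {a: 0.0 for a in ACTIONS}   # learned value estimate per action
alpha, epsilon = 0.1, 0.2       # learning rate, exploration rate

for _ in range(2000):
    if random.random() < epsilon:
        a = random.choice(ACTIONS)   # occasionally explore
    else:
        a = max(q, key=q.get)        # otherwise exploit best estimate
    q[a] += alpha * (REWARD[a] - q[a])   # move estimate toward reward

print(max(q, key=q.get))
```

The compression side tells the system *what* it is looking at; only the reward signal tells it *what to do about it*, which is the distinction being argued here.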

Reply to
Curt Welch

It's hard to justify of course. But it's based on a few things I've noticed about dogs (I've owned or lived with many over the years but don't claim to be an expert in any sense). For one, they seem to learn nothing quickly. They are far more creatures of habit than humans are. If you try to change their routine, like letting them out a new door instead of the old door, they keep going back to the old door. It takes many exposures to a new routine before you see obvious signs of them learning the new routine.

Second, they show little ability to plan - which I think we do a lot of with the help of our memory ability. For example, I used to play with my dog by throwing a tennis ball in the house which the dog would fetch. But, when it rolled under the couch, the dog couldn't fit under the couch to get it. But the dog was small enough to run behind the couch and get the ball. However, trying to teach that dog that trick was close to impossible. It would see the ball by looking under the couch, and simply wanted to go straight to the ball. If I led the dog around to the side of the couch, and it saw the ball behind the couch, it would instantly run for it and get it.

But no matter how many times I did this with the dog, it would never seem to make the connection that when the ball was behind the couch, it should give up trying to go straight for it, and instead, run around it.

This I believe shows that the dog can learn to react to what it sees in front of it, and what happened in the past few seconds, but has a much harder time learning a longer and more complex sequence of behaviors (running around).

I suspect humans use their memory to help solve, and learn the solution to, these longer term problems. Like the dog, our ability to directly learn has a very limited temporal range. So, when we first solve, or are shown, the solution to a long problem, we don't instantly learn to run around the couch either. Instead, the next time we see a similar problem, instead of just automatically knowing we need to run around, a memory of the past event pops into our head. We have a memory of seeing the ball behind the couch and then getting it. This memory then acts to guide our current behavior to head away from the ball (to get behind the couch).

After multiple times of using our memory to guide us to a solution, the behavior becomes automatic. We see the ball roll under the couch and we don't sit there having memories that then trigger us to act, we just instantly head off in the direction needed.

So the ability of our mind to call up memories of similar past events seems to act as a bridge, to allow us to see the path to longer term problems than the brain can easily learn automatically on its own. This seems to me to be our strength in planning and reasoning solutions to new problems, based on memories of past experience.

The dogs I've owned never seemed to have this ability, and solving a problem as simple as running behind a couch to get a ball when you can't go over it seems beyond them. You must instead train them to do it one small step at a time, with many repetitions.

Some animals, such as some birds, I believe have shown unexpected skills at reasoning out a multiple step solution to a food problem. I wonder if they don't in fact have some human like memory skills?

I think they see just like us. They of course don't think of it as being an "image" but neither do kids. We just see things around us and know how to react to them.

Yes, I believe that's true. My dog certainly makes woofing-like noises and moves her feet in her sleep at times, which makes me think she is having a dream very similar to how we do. We say she must be chasing bunnies when we see her doing that.

I doubt that as well.

My point is that I can see the ball roll behind the couch, and have a mental image pop into my head of me running behind the couch from the last time I did a task like that. Once I have that "memory" I then start to act out the solution which came to me in my "vision". I suspect dogs don't have the same ability to have memories pop into their head in the middle of running around a forest. I think their head is consumed with 99.9% of what they are currently seeing and that's about all it's consumed with. When they run to the top of a rock and look around, it's not because they had a thought of rabbits just before that and reacted by running to the rock to look for rabbits. Instead, they saw the rock, and reacted to the rock by running to the top of it because they have learned from experience that running to a high place is a good thing to do (because in the past it had led to good things like rabbits). So where rabbits might pop into our head, and then the sight of the rock, combined with the thought of rabbits, might make us run to the top of the rock, I suspect the dog didn't have a thought of rabbits, and just reacted to the rock directly.

When I walk down a trail, I might just as likely be thinking about an AI problem, or what I might be doing later that day. Even though I do a lot of that with language (which we don't expect a dog to do), there is much I might think about like that which is not language related at all. I might be getting thirsty and my mind might pop up an image of that water fountain I saw at the trail head. This is what I suspect doesn't happen in a dog's head - at least not anywhere near the extent it can happen in ours.

Reply to
Curt Welch

Ok, that makes more sense now.

Hmmm. I'd be very surprised if that's true. It seems you're of the belief that humans are rational beings - we think before we act - rather than rationalizing beings, which seems a better description. Try this - ride a bicycle, then cross your arms and hold the opposite handlebars. You'll soon learn how effective our conscious is(n't). Try to make yourself feel grief, or joy, or any other emotion... rabbits and mice feel most of these things; they're in the mammalian brain.

My point is that most of our activity and being is arbitrated by parts of the brain accessible only indirectly, though we may have awareness of their functioning.

Yes, but a computer memory can, though structurally amorphous, contain a story book. The book and the story are not less structured and real for being contained in an amorphous mechanism. You might as well say that all of the universe is amorphous because it's made up of quarks. What you're really saying is that there's a level of organization that you can't observe in the material structure with the tools you have - not that there isn't any organization. So I maintain that "amorphous" is the wrong term - the functional morphology is invisible but present in the arrangement of active synapses.

Sorry, the terminology from the book I read on this has faded, and I can't even remember the title.

Stable *oscillatory* patterns. The time constants in these oscillatory circuits go a long way to explain the sensory process - they form auto-correlators that progressively firm up a sensation in the process of representing it as a particular pattern. It's this idea of temporal correlation that I think is missing from traditional approaches to robotics and AI.

I think you're wrong to use the term intelligence only for the higher functions. A robot that had the same "stay alive" characteristics would still be an impressive achievement and be considered intelligent, though perhaps only in the way that an animal is.

It's not that black and white. I see a dog and perhaps I imagine it pissing on my leg, but though the idea of the dog is real (reflects a real present dog), I know that my leg isn't wet. I don't try to kick the dog because it's pissing on me. The emotion of revulsion is activated nonetheless, and I probably don't feel like throwing a stick. IOW the conscious mind knows it has meddled with its own perceptions, and can still tell which bits are real. The mammalian brain still responds to having been pissed on, and generates the appropriate emotion.

If I had lost the distinction of having meddled in reality, I might just actually kick the dog. Such behaviour is deemed psychotic. The difference isn't that psychotic individuals generate more unreality than others, but that they can't tell they're doing it.

The philosophical question of "why did I imagine that", and "what is this 'I' that 'decided' to imagine that" is where the arguments start. My view on that is the same as yours, I believe. The book is in the computer, and so is the story. It might look like there can't be a book or a story inside a bit of silicon, but it's there, it's *all* there.

The idea of the "individual" and all the legal responsibilities that go with that is a way of identifying a *process* which is complex beyond prediction, so we hold the *process* itself responsible for its behaviour. We don't hold the molecules of a murderer's body responsible - even though they fully embody the process.

Either that, or your "dog leg" neurons and your "red" neurons activate to represent a symbol which is defined as a linguistic element without the need to have a word associated. IOW don't get hung up on the idea of linguistics needing words. Language manipulates symbols, whether or not they have words.

I did a review of the state of data compression a year or so back, and there are some very good ideas in things as common as ZIP. IMO it would just take an appropriate way of representing the temporality of a stream of behaviour/sensation and they could be applied to the creation of muscle memory, so a robot could learn how to walk - unlike the recorded step-sequences that allowed Asimo to stand, walk and dance. Clearly they encoded the required behaviour, and maybe even compressed the encoding, but dynamic compression and matching of recent event sequences is what's needed for learning at this level. To my mind, the temporal content in the data stream is the part we haven't thought about enough.
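One concrete way common compressors can be turned into a similarity measure for event sequences is the normalized compression distance: if two streams share structure, compressing them together costs little more than compressing one alone. A small sketch using zlib (the "gait" strings are invented stand-ins for behavior/sensation streams):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: ~0 for similar streams,
    approaching 1 for unrelated ones."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

walk  = b"step-left step-right " * 40
walk2 = b"step-left step-right " * 38 + b"stumble step-right "
wave  = b"arm-up arm-down " * 40

# A slightly perturbed walking sequence scores much closer to the
# original walk than an unrelated arm-waving sequence does.
print(ncd(walk, walk2), ncd(walk, wave))
```

This is only a sketch of the principle, but it shows how an off-the-shelf compressor already does a crude form of "matching of recent event sequences" with no hand-built feature design.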

Sorry for the typo, I meant to say synaesthesia. But you've answered my objection by clarifying that your use of "amorphous" was structural not functional.

Yes, I agree. When I say "sensory fusion", I restrict it to mean the area where processed data from the different senses gets fused. That's not to deny that fusion occurs elsewhere. I was responding to your apparent denial of the functional morphology, of layers of processing, where in fact you were just pointing out that there's little structural morphology.

While that's true, I hope I've explained why I think it's beside the point.

I'm with you on that. It seems unlikely that there's enough information in the genome to describe the entire schematic :-).

Which is exactly why I call it a sense of thought. It's at the level where we're collating the various senses, but here we're sensing our own thoughts, and the data has been through a similar amount of pre-processing.

No, sounds like a good one, I'll look it up.

That's just data compression under another name, as you and I both pointed out.

I don't agree here. We correlate all concurrent events and the associated emotions & thought patterns in the process of encoding/compressing. Learning is implicit in these quality associations in the sensory data stream. If that cat scratched me last time I tried to pat it, I have a negative association. Because I can do partial retrieval ("that's the same cat"), I can recall the qualitative data that was associated previously.

Can't agree here. Data compression requires memory, and memories encode learning.

Clifford Heath.

Reply to
Clifford Heath

Well, I think the neocortex is first and foremost a reinforcement learning machine. So in addition to the cortex, which is the reaction machine that is trained through experience, there must be critic hardware, which is the hard-coded circuits needed to generate the reward and punishment signals which shape the behavior of the neocortex. So that's part of the hardware I would expect to find in the midbrain for example. It's support circuits for the cortex.

Second, I suspect the reinforcement learning cortex was added on top of a system of hard-coded instinctual behaviors which was a much older part of the brain before it developed these strong general learning abilities. So some large section of the lower brain is most likely hard coded instincts which the learning part has the power to override. This is very important in most animals which must be able to survive on their own within minutes of birth, but has been nearly completely replaced by learned skills in humans who have the luxury of a very long learning period (years) before they have to develop all the skills for survival.

Next, much of the lower brain is no doubt there to help regulate important body functions, such as keeping the heart beating, controlling breathing to some extent, helping to control and regulate the digestion process, helping to regulate various chemical levels in the body, etc. - all that stuff that was needed to keep a complex organism alive before it developed these strong general learning skills.

The cerebellum is another major chunk of brain which is not a direct part of the main reinforcement learning system that I think creates our conscious awareness and intelligence. It seems to be some type of output motor processor which acts much like PID controllers act in our robots, to help make the body parts respond proportionally to what the signals represent instead of having to build the physics and reaction characteristics of each body part into the higher level controls.
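For anyone unfamiliar with the analogy, here is the standard PID idea in miniature: the controller turns "where I want the joint to be" into a force, so the higher level only has to issue position targets. The gains and the unit-mass joint are invented for illustration; this is textbook control, not a cerebellum model:

```python
def pid_step(error, state, kp=2.0, ki=0.5, kd=1.0, dt=0.01):
    """One PID update: combine present error (P), accumulated
    error (I), and rate of change of error (D) into an output."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Drive a simulated unit-mass joint from position 0 toward 1.0.
pos, vel = 0.0, 0.0
state = (0.0, 1.0)  # start prev_error at the initial error (no D kick)
for _ in range(2000):
    force, state = pid_step(1.0 - pos, state)
    vel += force * 0.01   # F = ma with m = 1
    pos += vel * 0.01
print(round(pos, 2))  # settles near the 1.0 target
```

The higher level never needed to know the joint's mass or dynamics - the feedback loop absorbs them, which is the role being suggested for the cerebellum here.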

Reply to
Curt Welch

No, I believe we are emotional beings. Way down at the bottom of the post I'll explain.

That's a good one!

You're using terms differently from how I'm using them. Let me ignore this for now because what I have to say below about reinforcement learning touches on that.

And again, I'll cover emotions below with reinforcement.

You are the one that used it. I actually am not really familiar with the term and I just copied your usage as best as I could guess what you were getting at.

For sure. Which means it's not invisible at all. We know the function is encoded in the synapse and we have no problem seeing the wiring. We just don't know what it means.

I'm a strict materialist or physicalist. I think talk about "function" as being separate from structure is only a language convention we use. Function itself can't exist separate from the structure any more than the human soul can live on after death (which is where I think all this talk about function not being physical came from).

We often talk, and think, of computer memory devices as having a uniform physical layout, and of the data stored in the memory as having some logical or intangible existence separate from the physical memory. But in fact, the data stored in the memory is just as physical as rocks and makes the physical structure of the memory anything but uniform. It exists as piles of electrons in typical dynamic computer memory for example, and electrons are very physical. The fact that we can't see them with our eyes only helps to further the invalid concept that data is somehow not physical.

And likewise, our neocortex stores its knowledge in its wiring. It's a learning machine which gets wired through experience. The wiring is constantly changing, just like our memory is constantly changing. So even though I think the neocortex is basically one type of circuit duplicated over and over, its training has transformed each circuit into a very specific circuit that performs one very specific task.

Ok, that's possible. I've not seen work that describes it that way but I can relate.

Yeah, that was a big insight I finally grasped about 5 years ago because of debates here on Usenet in the AI groups. I had spent many years working with fairly traditional (non-temporal) types of neural networks trained by reinforcement, and though I understood the temporal issue was there, I always figured we would solve it in our hardware just like we normally solved those problems. That is, we would create a non-temporal mapping function, and then use memory to act as delay devices to create the temporal aspects. Technically that's valid, but for practical reasons, I later discovered it was just the wrong approach.

I worked with mostly binary networks (aka networks with binary signals with N binary inputs that would produce N binary outputs once every clock cycle).

Then when I started to grasp the temporal problem, I started to look at the advantage of working with asynchronous pulse signals (like the brain uses), and realized that processing nodes that dealt with these signals had a great advantage. With the old nodes, I created a spatial function, and added extra hardware to make those spatial functions perform temporal pattern matching. But with pulse signals, you could create gates that performed temporal functions, and totally ignore the spatial aspect of the problem. In fact, there is no spatial aspect of the problem that needs to be solved - we only are trained to think in those terms so that we can write down our ideas on a sheet of paper - it was in fact our long history of using written language that biased our tools and techniques so much towards the spatial.

For the past many years now, I've only been looking at designs that use async pulse signals and which mostly, perform all their actions based on pulse spacing.
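A toy example of what "acting on pulse spacing" can mean: a leaky integrator whose output depends only on how closely its input pulses arrive in time, not on any spatial input pattern. The leak rate and threshold here are invented; this is a generic integrate-and-fire sketch, not the actual design being described:

```python
def lif(spike_times, leak=0.5, threshold=1.8):
    """Leaky integrate-and-fire node: return output spike times.

    Each input pulse adds a fixed kick; the potential decays by
    `leak` per unit time between pulses, so only closely spaced
    pulses push the node over threshold quickly.
    """
    v, last_t, out = 0.0, 0.0, []
    for t in spike_times:
        v *= leak ** (t - last_t)   # decay since the previous pulse
        v += 1.0                    # fixed kick per input pulse
        last_t = t
        if v >= threshold:
            out.append(t)
            v = 0.0                 # reset after an output spike
    return out

print(lif([0, 1, 2, 3]))        # widely spaced: charge mostly leaks away
print(lif([0, 0.1, 0.2, 0.3]))  # tight burst: crosses threshold quickly
```

The same four pulses produce different output purely as a function of their spacing, which is the temporal-rather-than-spatial point being made.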

Well, if you knew me better, you would know that I'm the one that makes the argument that rocks are both conscious and intelligent. So I have no problem extending the idea of "intelligence" down to levels far below what most people would. :) But my use of intelligence above was limited to a different type of use. I think most of what people see in human behavior (and some animals) comes from us being strong reinforcement learning machines - and the lack of that very specific feature is what prevents most people from believing a machine could ever be the same as a person (that is, most of the people that have a hard time accepting that idea).

The emotion stuff I'll get to below, but to remove your use of "knows" in the above, I would simply say that the reason we don't kick the "imaginary" dog is BECAUSE our leg isn't wet (not because we "know" it's "not real").

Not only is the leg not wet, but a lot of the other current sensory perceptions don't match with our perception of the dog pissing on us as well. And the reason we can tell it's not "real" as you say, is because all these other things don't match.

I think the memory is in fact very real - just as real as when we see a real dog piss on our leg. I think it's represented in the brain in the exact same way as it's represented when a real dog is pissing on our leg. There's no "memory that it came from our thoughts" that allows us to tell the difference. It's only the fact that it's inconsistent with the rest of the data currently in our brain. So how do we know, when there is conflicting data, which is "real"? We can tell because there is far more "real" data than imaginary data active in a normal brain. So, I have data that tells me my leg is dry from feel, I see the ground, and there's no dog parts to be seen. My motion detectors didn't tell me something just moved by my feet. I'm in a shopping mall, which is a place that I wouldn't have expected to see a dog. All this other data fits, and is consistent with each other, and is consistent with the idea that there is no dog. But yet other data in my brain tells me there is a dog. So which do I trust as "real" and which do I trust as the "memory" or "imaginary thought"? The one with the most consistent data of course.

And what happens when we sleep and our real sensory flow is cut off and the only circuits being activated are the ones that are free to do so independent of our thoughts? We start to think that our dreams are real. Only when we wake up, and get a sudden influx of data which is inconsistent with the "dreams", do the "dreams" stop looking "real".

If we had some memory of the fact that the thoughts were injected (your idea in my words), then how do you explain the fact that dreams seem so real to people? And why do they suddenly revert from "real" to "dream" when we wake up? Whereas my idea is that these memories are in fact exactly as real as the real thing, but we can spot when they are "invalid" only by comparing them to what else is going on. (I can explain what I mean by "comparing" if you care as well).

Also, my view of dreams and memories gives us more power to understand mental illnesses like schizophrenia and their associated hallucinations. If too much of the brain activates with a "memory" (instead of in direct response to current sensory data) it becomes impossible for us to tell which is real, and which is the dream. The only thing that keeps us in touch with reality is the fact that most of the brain in a normal human is being activated in direct response to current sensory inputs. If too much of the brain reverts to old states (memories), then we lose track of what is reality, and what is the dream.

The dreams all become consistent with one another (when we dream a dog peeing on our leg, our leg feels wet) simply because all these sensations are connected to each other through feedback loops and they have been trained to form consistent patterns. So when we have a thought of a dog peeing, it's biasing our leg circuits to switch to the "I feel the warm wet pee on my leg" state, but it's only the current flow of real sensory data that's keeping that sensation from forming in our brain about our leg.

But if the sensory data starts to lose the battle, and too much of the mind switches to a dream-like state even when we are awake and functioning, then we lose touch with reality and start to believe that these other false brain states are "real".

And my theory is that the simple fact that they generated too much "unreality" is WHY they can't tell they did it. :)

Your "book" vs "store" stuff is confusing me a bit.

I don't mix process and function and physical reality. There is _only_ physical reality. Process is just a story we create to describe physical reality. We can create language to describe how a computer works, and we can label that language as being the "process" of the computer. But the computer is the computer, is the computer. It's not the process. It's the computer.

And likewise, I am a physical hunk of matter. That's all I am. I am not a process. I am not a mind. I am a brain. When I do things, it's my body doing things. When I talk about what my mind is doing, or what is "in my mind", I'm really just talking about what the brain is doing.

I don't like to talk that way. I hold the body responsible for the crime. Not the process. To talk as if the process is responsible for the crime and not the body is just leftover bullshit about the soul being separate from the body, which I don't believe.

If a machine runs out of control and kills someone, do we hold the blueprint for the machine responsible for the action and not the machine itself? That would be silly - blaming the death on a piece of paper. That's what I think you are saying when you talk about "us" being the process.

I agree that nearly everyone on the planet, even all those that are strong materialists who don't believe there is a soul separate from the body, likes to talk the way you are talking, but it's deceptive in my view.

In one way, I hold that type of talk valid - but confusing. That is, our own view of our self exists as kind of a blueprint in our own brain. It's like a camera taking a picture of itself and using that picture as its only way to understand itself. Likewise, our view of "self" is limited to what our brain can represent about ourselves, so it's valid in that sense for me to say that my view of myself is limited to my understanding of myself.

But, like with the camera, we would never say the data encoded on the picture was the "real" camera. We would say the physical camera is the camera.

Likewise, I say that I am a physical body, nothing else, nothing more.

Ok, more semantics. I like to say that pulses are symbols and that the brain is a symbol processor just because it's processing pulses. This gets to people that like to reserve the idea of "symbols" for something closer to the concepts represented by our natural language words. These are also the people that seem to see our language processing skills as something truly unique to humans and nonexistent in lower animals. I don't see it like that at all. Words are just temporal patterns to us like everything else we deal with. The brain represents them all, be it a cat, or the word "cat", in the language of temporal pulse signals. Nowhere do I believe there is any significant difference in what the neocortex is doing when it's processing and producing natural language behavior, and when it's processing, and producing, all other sensory information.

Though many concepts like this go through my mind as well, finding a working implementation eludes me. It's driving me crazy. :)


Ok, here's where I answer the stuff I didn't answer above about emotion etc.

What you talk about is in fact reinforcement learning.

You wrote above:

Why is a cat scratch a "negative" association? That's what you fail to answer. How do you formally define what should be a positive association, and a negative association? How is this formal definition of "value" defined in the hardware? How is it implemented? The answer is that it's implemented as a reinforcement learning machine. That's what I mean by saying we need to build a reinforcement learning machine. You must build a machine that is able to make associations of value.

So, how do we build a robot that would register a cat scratch as a negative association? You start by building custom sensors, and hard-wired processing, which is able to detect pain and pleasure (technically the wrong words but I'll use them anyway). We build hardware that knows what sensory conditions we want to be registered by the machine as "bad" (aka pain) and what sensory conditions to be registered by the machine as "good" (aka pleasure, or rewards, or reinforcers). This hardware is called the critic in reinforcement learning terminology (reinforcement learning is a specific sub-field of machine learning in AI BTW, and I'm not making reference to the psychology field of behaviorism - though they are closely connected).

The learning machine must then receive sensory data, produce outputs, and receive reward signals from the critic. To the learning machine, the critic can be thought of as just another part of the environment, but in an actual robot, the critic hardware is something we, as the creator, would design and build. The critic hardware is what gives the learning machine its high level goal, or purpose, in life. All its morals, and behaviors, and drives, are derived from the goals built into the critic.
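The critic/learner split described here can be written down as a program skeleton. All the names, sensory conditions, and reward values below are my own invented examples, not a real robot API:

```python
def critic(sensors):
    """Hard-wired 'pain/pleasure' circuit the robot's builder designs.
    It maps raw sensory conditions to a single reward number."""
    reward = 0.0
    if sensors.get("skin_damage"):       # e.g. a cat scratch -> punishment
        reward -= 1.0
    if sensors.get("battery_charging"):  # e.g. finding the charger -> reward
        reward += 1.0
    return reward

class TallyLearner:
    """Stand-in learner: just tallies reward; a real one would adapt its behavior."""
    def __init__(self):
        self.total = 0.0
    def act(self, sensors):
        return "idle"
    def learn(self, sensors, action, reward):
        self.total += reward

def run_step(learner, sensors):
    """One tick of the loop: sense, act, then get judged by the critic."""
    action = learner.act(sensors)
    reward = critic(sensors)
    learner.learn(sensors, action, reward)
    return action, reward

learner = TallyLearner()
run_step(learner, {"skin_damage": True})
print(learner.total)  # -1.0: the scratch registered as "bad"
```

Note that the learner never sees *why* skin damage is bad - the value judgment lives entirely in the fixed critic, exactly as described above.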

The only goal of the reinforcement learning machine is to maximize TOTAL LONG TERM reward. (not to maximize only current reward). This means it must constantly estimate potential future rewards, and make constant trade off decisions about whether a bird in the hand is worth more, or less, than two in the bush. That is, given a choice of one behavior with a quick reward, or another behavior, with a larger, but more risky long term reward, which behavior is the one expected to produce (on average) the most total reward per time. The behavior with the best total reward per time, is the one the machine should select.

The entire purpose of a reinforcement learning algorithm is to predict these values (based on data collected through past experience), and produce behaviors based on their expected value.
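A standard textbook way to do this value prediction is tabular Q-learning. Here's a toy sketch of the bird-in-the-hand trade-off from above - the world, the rewards, and the discount factor are made-up illustration values:

```python
GAMMA, ALPHA = 0.9, 0.5   # discount and learning rate (arbitrary choices)

# deterministic toy world: state -> action -> (next_state, reward, done)
WORLD = {
    "start": {"grab_bird": ("end", 1.0, True),          # one bird, right now
              "walk_to_bush": ("at_bush", 0.0, False)}, # nothing yet...
    "at_bush": {"grab_two_birds": ("end", 2.0, True)},  # ...two birds, one step later
}

Q = {(s, a): 0.0 for s, acts in WORLD.items() for a in acts}

for _ in range(200):  # repeated experience: sweep every state/action pair
    for s in WORLD:
        for a in WORLD[s]:
            s2, r, done = WORLD[s][a]
            future = 0.0 if done else max(Q[(s2, a2)] for a2 in WORLD[s2])
            # nudge the estimate toward reward + discounted future value
            Q[(s, a)] += ALPHA * (r + GAMMA * future - Q[(s, a)])

best = max(WORLD["start"], key=lambda a: Q[("start", a)])
print(best, round(Q[("start", "walk_to_bush")], 2))  # walk_to_bush 1.8
```

With GAMMA near 1, the delayed larger reward (0.9 * 2 = 1.8) beats the immediate bird in the hand (1.0); shrink GAMMA and the preference flips. That's the trade-off decision in miniature.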

This is NOT the same problem, as the data compression issue or the general issue of prediction.

A machine can analyze sensory data, and find ways to compress it, and ways to predict what will happen next in the world (if we stand here we are likely to be scratched by the cat), but making that prediction doesn't tell the machine what to do. Maybe it "likes" being scratched, so the best answer is to do nothing and hope the cat does scratch us as predicted. Or maybe a cat scratch is bad, so we should take actions to prevent that from happening. How does the human brain "know" that a cat scratch is bad? It "knows" it, because there are special hard wired circuits in the brain (not in the neocortex, but in the midbrain) that can sense when harm is done to the body in various ways, and will in turn send a "punishment" signal to the learning brain (the neocortex), so that it can form a negative association with whatever sensory conditions preceded this punishment (the vision and sounds and smells of a cat scratching our leg).

Along with this power, the learning system must use its power of prediction to later predict that just the sight of a cat is at least slightly bad, because once we see a cat, the probability that we will get scratched just went up, and the prediction system should be able to predict that - leading to just the sight of a "cat" having a low value associated with it. If every time we visit a given place, cats show up, then the low value of the cat will train the sight of this building, so that just seeing the building will produce a punishment (training us not to go near that building if there are better options to choose from).
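The way value leaks backward from the scratch to the mere sight of the cat is what textbook temporal-difference (TD) learning does. A minimal sketch - the learning rate, discount, and the three-state episode are all invented for illustration:

```python
ALPHA, GAMMA = 0.2, 0.9   # learning rate and discount (arbitrary)

V = {"see_cat": 0.0, "scratched": 0.0, "after": 0.0}

# each episode: see the cat (no pain yet), then get scratched (reward -1)
episode = [("see_cat", 0.0, "scratched"), ("scratched", -1.0, "after")]

for _ in range(500):  # repeated experience with this cat
    for state, reward, nxt in episode:
        # TD(0) update: nudge V[state] toward reward + GAMMA * V[next_state]
        V[state] += ALPHA * (reward + GAMMA * V[nxt] - V[state])

print(round(V["see_cat"], 2), round(V["scratched"], 2))  # -0.9 -1.0
```

No punishment ever arrives while merely *seeing* the cat, yet its value ends up negative purely because it reliably predicts the scratch - which is the secondary-reinforcement effect described above.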

All these "values" that the reinforcement learning machine is associating with all sensory conditions, as well as all behaviors it produces, is the source of our emotions. This is what makes us love some things, and hate or fear others. If the reinforcement value prediction system predicts high future rewards for some stimulate (a hot babe), it's what makes us "like" that sensation, or that object, and it's what makes us increase the odds of using the behaviors that produce that stimulate condition. If our prediction system predicts very low future rewards, that's what makes us dislike the stimulus and it's what makes us stop producing a behavior that creates that stimulus condition.

The reason we are emotional machines, is because we are reinforcement learning machines. That's where our emotions come from. If you want to build an emotional robot, you have to build a robot with a reinforcement learning engine driving its behavior.

I could talk more, but this post has gone on too long already. If you want me to talk more about reinforcement learning, and how it's different from supervised learning for example, and why I think it's the only type of learning that explains human intelligent behavior (or why it's easy to know this is the answer, but not know how to code it), I can do more of that as well.

Reply to
Curt Welch

I'm not familiar with that work but the title does sound familiar. From what you write I think I would find it interesting. But I've already got a stack of about 4 books on consciousness I've not gotten to. :)

Yes, for sure.

Yes, that sounds consistent with my views.

I don't think we really need that. It's just automatic. When you replay a past experience, it's always from your own perspective. At best, we might be replaying a past experience where we witnessed another person, and are playing it with the idea of it being "us" in place of the other person.

Yes. But it's also possible to explain how we have this power.

We don't have to assume evolution created this complex "replay" or "visualization" hardware for us to explain the power. Instead, we can explain it in terms of what our normal perception system already has to do. It has to classify sensory data into invariant representations in order to drive our behavior. In other words, when we see a cat, we might see it from a million different partial angles, but the brain must still recognize all these different data patterns as being a cat. It must even classify sensory data patterns it's never seen before as "cat". We can explain how it does this, in terms of simple statistical correlations, and temporal predictions.

When a simple solid color object (like a circular disk) moves across our visual field, we can make predictions about what sensory data we expect to see in the future. After it's traveled halfway across our visual field (from left to right), we can predict how the visual field is going to look in the next few frames. We expect to see a disk, a little further to the right, and we can predict exactly when, and where, we expect to see it next. We can make a temporal prediction about what data we expect to see next.
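At its simplest, the disk prediction is just constant-velocity extrapolation (a trivial sketch; the positions are made up):

```python
def predict_next(p_prev, p_curr):
    """Constant-velocity guess: next = current + (current - previous)."""
    return p_curr + (p_curr - p_prev)

# disk seen at x=10, then x=12 -> expect to see it at x=14 in the next frame
print(predict_next(10, 12))  # 14
```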

Because there are huge amounts of temporal prediction like this possible in all our sensory data, our pattern matching (aka classification circuits) end up using this to improve the accuracy of their classifications. If we see a dog spinning around, our circuits (because they have seen dogs, and things spinning, many times in the past) can predict that even though we only see one eye now, we will soon see the second eye as the head continues to turn. This gets wired into the decoding circuits, as a bias in the prediction system. The circuits that detect motion are wired to make the lower level circuits biased to detect what the system most likely expects to happen next. By having these feedback prediction circuits, the speed and accuracy of the pattern matching is greatly increased. Instead of seeing a dog ear, and having to guess if it's a fuzzy rat, or funny shaped leaf, the brain can instantly classify it as a dog ear simply because we had already seen the rest of the dog, and our "dog-ear" circuit was primed to be the first choice answer when something close to a dog ear showed up in about the right place in our visual field. All our perception circuits get cross wired this way based on how well each one predicts the other in a temporal manner.
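This top-down priming can be caricatured in a few lines. Entirely my own toy construction - the labels, the match scores, and the priming weights are invented numbers:

```python
# bottom-up match scores for an ambiguous patch in the visual field:
# on its own it looks equally like a dog ear or a leaf
bottom_up = {"dog_ear": 0.5, "fuzzy_rat": 0.3, "leaf": 0.5}

# top-down priming: how strongly recent context predicts each label
priming = {
    "just_saw_dog_body": {"dog_ear": 0.4},
    "in_a_forest": {"leaf": 0.4},
}

def classify(scores, context):
    """Add the context's prediction bias, then take the best primed match."""
    primed = {label: s + priming[context].get(label, 0.0)
              for label, s in scores.items()}
    return max(primed, key=primed.get)

print(classify(bottom_up, "just_saw_dog_body"))  # dog_ear
print(classify(bottom_up, "in_a_forest"))        # leaf
```

The same ambiguous data gets two different instant classifications, decided entirely by what the rest of the scene predicted should be there.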

When you take this same hardware and turn off the flow of sensory data and just let the built in temporal predictive feedback loops activate it based on what it expects should happen next, it ends up playing back a movie for you. It produces a constant running stream of predictions based on what it thought just happened last. Each event it "dreams up" keeps triggering the next most likely event to follow it.

This can be used to explain why we have "what if" hardware. Our perception hardware is simply built to work that way. Put it into some starting position, and let it run, and it will predict what will happen next. Feed it random noise, and it will still extract the best prediction it can from what it's hearing (the true basis of that movie about hearing ghosts in the white noise of TVs).

This is why we dream. Cut off the sensory data, and the what-if perception hardware just starts making a stream of predictions. Feed it random noise, and it still predicts what it thinks is most likely to be "happening" in that data.

All this perception hardware is tied into our actions as well. If we produce an action of reaching out, the prediction hardware will predict that it should see our arm reach out. If we reach out and knock something over, our what-if hardware predicts it should fall over. So it's not just watching a movie, it's like playing a video game using our own what-if hardware - which was all created simply for the purpose of pattern recognition to drive behavior selection, not for the purpose of playing what-if games. That power probably showed up later in our evolutionary history.

One place (other than our dreams) that we see this hardware in action, is when you listen to a CD and there's a silent pause between songs. If you have listened to the same CD many times, right before the next song starts, you will "hear" the beginning of the song. This advance prediction of what is coming next, is driving all our perception. We just get to hear it at work in this case because of the silence.
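The CD effect is essentially "what followed this last time becomes the prediction". A toy sequence-memory sketch (the song names are invented):

```python
from collections import Counter, defaultdict

follows = defaultdict(Counter)  # "what came next, each time we heard X"

album = ["song_a", "song_b", "song_c"]
for _ in range(20):  # many listens of the same album
    for prev, nxt in zip(album, album[1:]):
        follows[prev][nxt] += 1

# in the silent gap after song_a, the most common follower IS the prediction
prediction = follows["song_a"].most_common(1)[0][0]
print(prediction)  # song_b
```

During the silence there is no bottom-up data at all, so this stored prediction is the only thing active - which is why we "hear" the next song before it starts.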

That's right.

The CD example above is a good one.

But that reaction is better explained by the prediction that conditioning (aka reinforcement learning) creates than the simple sensory prediction of the CD example.

I see at least three basic systems at work here.

They include sensory prediction which is an unsupervised learning system which gathers data simply from repetitive exposure to similar sequences of sensory events.

Then there is the reinforcement learning to direct the outputs (behaviors)

- which, when implemented correctly, ends up back-propagating predictions about future rewards, which ends up creating behaviors that look "predictive" in nature (duck to prevent being hit).

Horses no doubt have these first two.

The third feature, which is the ability to use the predictive perception hardware as a what-if tool while at the same time receiving a normal sensory stream (aka what we might call tuning out our environment and daydreaming) is something humans seem to have in ways that I'm not aware of in animals (at least not at the dog or horse level) (though it would be hard without high quality brain scanners to verify this).

Reply to
Curt Welch

Ok. Amorphous means simply lacking in structure. The lack of appearance of physical structures in the cortex belies the deep functional structures, as I think we've both agreed. I originally thought you were implying that all of the cortex processes all of the data, which would have been a silly idea and demonstrably wrong.

I'm with you on the materialism, but I think you confound the ideas. A computer memory cell isn't structurally different for storing a one instead of a zero, it's functionally different. A RAM chip is structurally simple - so repetitive - but the data in it may be functionally rich - e.g. a story.

One human body isn't structurally different from another, but they sure are functionally different.

No, but the structure can exist without the function. The RAM chip has the same structure when it's not powered. It's not just the arrangement, but the way the arrangement can change, the way it's *developing*, that constitutes the process (I'll explain my use of "process" below).

The opposite approach, as I'd have it. Time and change is the key, not static modeling. We respond to stimuli precisely and only because they represent change.

Ok, I can dig that, not sure if I agree though. The generation of "what-if" scenarios in the brain makes use of the existing sensory circuitry to process hypothetical events. When the hypothesized sensations are injected into the sensory fusion chain, the change in the outcome is processed differently than if it had occurred in the absence of the hypothesis, i.e., in the real world. To observe this change is the purpose of hypothesizing.

That's what I mean when I refer to tagging - we can experiment with how we might respond if a particular event would occur. The re-use of the sensory chain has side-effects (like changing our emotional state) even though at a conscious level we know it wasn't real.

Because of the side-effects of the re-use of the sensory processing chain, some hypotheses are unsafe to inject while the motor functions are active. Nevertheless there's still an advantage in exploring our likely response to them; it serves as training for situations that have not yet occurred. So we can dream of falling off a cliff, and train a withdrawal response, without having to actually go near a cliff. So there's an evolutionary advantage to dreaming.

A waking hypothesis results in a mental change that must be separated from the base state in order to determine its outcome. As a result, it's somewhat diluted - dreams can be more intense.

It's more of the structure/function stuff I wrote about above. The book is "real" while existing only in a transitory physical state, only in a temporary arrangement. More below where I explain "process".

The idea of process is deeper than you give it credit for. The average duration for which any given atom is part of a human's body is around six months. That is, on average, every atom gets replaced every six months. So what is the human? They have a clearly identifiable likeness and individuality with a much longer time constant. Clearly the person is a *process* that exists in an organization of those atoms, where the process maintains the organization even as the atoms themselves change.

You say that "process" is "just" a story (at least I think you meant to type that), but that seems like an attempt to deny its reality and significance. It doesn't consist of atoms, but it exists through atoms. Its existence isn't fundamentally different from the existence of one of those atoms themselves, which each exist as an arrangement of lower particles.

Of course the process is a physical phenomenon. But it's a clearly identifiable phenomenon having distinct characteristics and duration, and as such shouldn't necessarily be treated differently from the physical particles in which it consists. It doesn't become meta-physical by being treated as a reality; it's a reality within a reality.

The body exists only because a process is maintaining it. They're the same thing.

No, we hold the designer responsible, because we expect that the only machines built will be ones whose behaviour can be completely predicted by their designer - that's an expectation we have of designers.

If we ever change that expectation, and allow designers to create intelligent robots, we might blame the designer and wish we hadn't allowed it, but it'll be the robot that gets destroyed. We'll hold the robot responsible, of course. But if the parts are useful, we'll re-use them, they won't be tarnished with the stigma of what they did when they participated in the whole thing.

My point is that blame attaches to the thing whose behaviour cannot have been predicted.

No, it's descriptive. I'm not using the word metaphysically. You'll probably argue that the description is not the reality, and I agree, but I don't think it hurts, so I'm going to keep doing it. :-).

Whoops, all too self-referential for me. I'm not going there; it's like an aircraft turning such sharp corners that it flies up its own tailpipe and vanishes :-). Not what I intended in my use of the word "process" at all.

I think we have to invert the priority of time intervals and datum. The datum is in fact the change, but the retrieval is based on the rate of change (or time between events) more than the amount of change (or the type of event).

I think you're wrong, and it's exactly the same problem. Long term prediction requires us to test hypotheses, which we can do using our normal apparatus. The predictions are based on previous learned responses. We are sentient precisely because we can do this to a much higher degree than other animals.

One other comment:

The trouble with your critic is that you assume values. "Long term reward" measured as what? We have *multiple* biological drivers that compete to be complied with, and sometimes they are at odds with each other. These drivers are the values which our critic uses, though not always consistently.

You're using the term emotion unconventionally here. Reptiles are reinforcement learning engines, but they do not have emotions, whereas mammals do. Emotions drive behaviour which has consequences advantageous for the social or family group, but not directly for the individual.

It's been good - but it has to be truncated sometime or it'll never end :-)

Clifford Heath.

Reply to
Clifford Heath

This is kind of the cortex = tabula rasa argument, but might not be so cut and dried as you indicate, since animals such as horses, which can get up and run and follow their mothers within a few minutes of birth, also have neocortex. Hard to imagine the cortex is doing nothing for the colt in early life, because it hasn't learned anything as yet.

Rather than contending all of this "instinctual" colt-early-life stuff takes place below the cortical level, and that the cortex is mainly just for reinforcement learning, it's much more likely the cortex has added many advanced processing capabilities on top of the older areas, but also the ability to modify those capabilities to a much greater extent than can happen in sub-cortical levels. IOW, the "general" functions of the 30+ cortical visual areas, as well as their interconnections with the rest of the brain, are actually determined in the genome, rather than learned after birth.

Reply to
dan michaels
