Where is behavior AI now?

snipped-for-privacy@media.mit.edu wrote:


[...]
Before we worry about what it is that distinguishes us from animals, wouldn't it be a reasonable goal to find out first what it is that distinguishes animals from everything else?
-- Joe Legris
J.A. Legris wrote:

Seems to me, that's pretty much what subsumption and BBR are all about. How do you tell a rock from a subsumption robot? The rock just sits there doing nothing [although, as I recall, Curt has some strong opinions about "rock intelligence" ;-)], while the sub.s-bot both reacts to and acts upon its environment.
To me [but not to Marvin, given his past comments], the sub.s-bot is basically a good start at what distinguishes animals from everything else. Autonomy, sensing, and action. However, as this thread is all about, the BBR/sub.s approach looks to be basically stalled, and there doesn't seem to be much [or enough] effort at adding all those levels in-between the 2 ends Marvin just talked about. I think Brooks' book Flesh and Machines was really the death-knell of sub.s, and his final conclusion - that there is some "missing stuff" - was totally wrong, as I mentioned in an earlier post.
I do distinguish between human and animal, in that humans have "another" level on top that adds high-level symbolic processing to what the animals have - roughly parallel to the distinction between higher-order and primary consciousness that Dennett and Edelman talk about. But you have to start adding those in-between levels to bridge the gap down to subsumption. Marvin's high-level commonsense reasoning systems can only work on a proper lower-level foundation platform that can function successfully in the real world.
For my part, I'm pursuing [and have been for a while] the idea of adding those in-between levels from the bottom up, which is the same way organisms evolved intelligence. I just don't think you can do this from the top down. For one thing, there is too much of an unexplained gap between the highest symbolic levels in humans and the primary-consciousness levels in the other animals. IOW, you don't start with language first, and then extend that to build the communications-interaction capability of a monkey. You do it the other way around. My $0.02.
dan michaels wrote:

But it's not a rock. It's a rock lobster!
(I guess only B-52 fans will get this one...)
-- Gordon
Gordon McComb wrote:

Southern California beach humor?
http://www.lyrics007.com/The%20B-52's%20Lyrics/Rock%20Lobster%20Lyrics.html
dan michaels wrote:

The band is from Athens, GA, actually. Am I showing my age here?
http://en.wikipedia.org/wiki/The_B-52%27s http://en.wikipedia.org/wiki/Rock_Lobster_%28B-52%27s_song%29
If you can create an AI system that can make sense of Fred Schneider's lyrics, then you can create anything!
-- Gordon
dan michaels wrote:

[...]

But what are the leads into these "in-between levels"?
What are the DNA, protein, and control-loop equivalents for the evolution of intelligent neural systems?
I suspect that high-level mimicry of human intelligence may turn out to be as unreal as the high-level mimicry seen in an animatronic figure. It looks real, with lots of clever programming, but lacks something fundamental needed to duplicate human or even animal intelligence - all of the intelligence being a product of the programmer being part of the developmental loop, and forever requiring the programmer in that loop.
-- JC
JGCASEY wrote:

The implication in the previous messages was that BBR is essentially the lowest level, and what needs filling in are the levels in-between that and the symbolic levels at the top end. Much higher than DNA and proteins.
There are 2 strains of people on this thread. The guys who came over from c.a.p., with their grandiose ideas about solutions to general AI [hello, Curt :)], and the guys who are currently building robots which are essentially at the BBR level. A wide gulf there. I put myself in the 2nd group. I'm not expecting to add some kind of GOFAI-type processor - or direct link to CYC, or unstructured neural net, etc - to my little robot, and expect much. And neither are Randy or David, I would imagine.
Rather, what we need to look at is the "next" step up from the bottom. Better sensors and perceptual processing, plus memory and learning, and simple goal-planning [cf. chap. 6 of Arkin's book]. Our robots are little better than blind, deaf, and dumb, like Tommy of The Who [now Gordon's got me doing it too :)].
The first thing is much better sensors and perceptual processing, so the robot isn't stuck forever in Helen Keller's internal world [see Curt and Joe's comments]. But this is such a radical step up in CPU requirements that it's a problem right now too. Most of our little bots don't have much more power than PICs or AVRs, and maybe a DSP chip here or there.
I realize these sorts of comments about stupid little robots always let Marvin down, but that's where OUR [this forum's] world currently is, I think.

dan michaels wrote:

I am so pleased to see Marvin grace us with his presence again; I would hate to think we were letting him down with our interests. Personally, I think I'm on to something enabling and transcendental concerning another level of understanding above the plateau BBR has come to rest upon. But I might be self-aggrandizing.
Perhaps I should ask Marvin himself. Would the separation of layered behaviors from emergent intelligence - with the realization that the behaviors themselves are devoid of intelligence, and that it is rather the sequencing of behaviors that is the source of the intelligence - not be a significant clarifying point for AI? Or, in your opinion, is this all beneath a professional level of curiosity or research?
Randy www.newmicros.com

I've not read enough about BBR to know all the typical details yet, but from the little I've picked up so far, I think it's not quite low enough. The problem is that we (as humans with brains) tend to try and classify behavior into buckets. We look at an animal and say things like, "oh, that's wall following behavior", or "that's goal seeking", or "that's light avoidance". The lowest level at which we informally try to understand behavior is the highest level at which our brain is able to detect recurring patterns.
When we try to build robots, we tend to think first at similar levels. How, for example, might we create wall following behavior, or obstacle avoidance, in a robot? And I suspect (but don't know, because of my lack of reading on the subject) a lot of BBR work has systems for creating behaviors at similar levels. For example, our robot hits a wall and keeps spinning its wheels, and we decide we need to add wall avoidance, which combines some combination of sensory triggers and behaviors to either prevent it from hitting the wall in the first place (turn before getting too close), or adjust after a collision (back up and turn). But for the most part, only one behavior is active at a time, and there is some logic for selecting which behavior to use in the current situation, which might be a priority-interrupt type of logic (drive straight is the default lowest-priority behavior until it's interrupted by a higher-priority sensory condition for turning - like there is a wall near us).
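To make that concrete, here's a minimal sketch of that kind of priority-interrupt arbitration (in Python; the sensor names and motor commands are made up for illustration, not from any particular BBR system):

def bumped(sensors):            # highest priority trigger: bump switch closed
    return sensors['bumper']

def wall_near(sensors):         # middle priority: rangefinder reads close
    return sensors['range'] < 10

def always(sensors):            # default trigger: always fires
    return True

def back_up_and_turn(motors):
    motors.drive(-1, -1)
    motors.turn(90)

def turn_away(motors):
    motors.turn(45)

def drive_straight(motors):
    motors.drive(1, 1)

# Ordered highest priority first; the first trigger that fires wins.
BEHAVIORS = [(bumped, back_up_and_turn),
             (wall_near, turn_away),
             (always, drive_straight)]

def control_step(sensors, motors):
    for trigger, behavior in BEHAVIORS:
        if trigger(sensors):
            behavior(motors)
            return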
BBR (as far as I can tell) is the attempt to move to using simpler behaviors with simpler triggers, so that the system will produce the correct combination of behaviors to fit a wide range of environmental conditions - so that new sequences, never thought about by the designer, will emerge and just naturally perform a useful function.
I believe what people end up coding by hand in these systems is typically still not low enough, however. The problem needs to be factored down even further, to even simpler behaviors. The reason it is not, I believe, is simply that it's too hard for us to understand how to do this by hand. It moves the complexity of the behavior down into a set of more complex trigger logic. And understanding how to code the complex logic to trigger micro-behaviors is something that's not at all intuitive to a human programmer. It's just too hard for us to understand.
But, for the same reason BBR seems to have the advantage of being more adaptive to different environments (environments the programmer might not have thought about), I think factoring the problem down to even simpler behaviors is likely to only improve the adaptability and the number of emergent behaviors that spontaneously arise from the machine.
The level of behavior I'm talking about is more like "turn right wheel 1 deg clockwise". This is consistent with the level at which I was trying to attack the problem with my pulse sorting network (which Dan and others from c.a.p. know about). But it would be very hard to hand-code a complex set of conditional expressions to specify, for a typical robot, when it should "turn right wheel 1 deg clockwise". Which is why this level of micro-behavior is not used so much. But with the correct learning system at work evolving the complex logic tests used to select micro-behaviors, I think you would get excellent results (and ultimately, you could use this type of low-level micro-behavior selection system as the foundation for a system that produced human-level behaviors).
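To illustrate the level I mean (this is only a sketch of the general idea, not the actual pulse-sorting network): think of a learned value table over (sensor context, micro-action) pairs, with the selection being whatever the learning system has evolved so far. All the names here are hypothetical.

import random

# Hypothetical micro-behaviors: tiny motor increments, not whole skills.
MICRO_ACTIONS = ['right_wheel_+1deg', 'right_wheel_-1deg',
                 'left_wheel_+1deg', 'left_wheel_-1deg']

Q = {}  # learned value of each (context, action) pair; starts empty

def select_micro_action(context, epsilon=0.1):
    """Pick the micro-action with the highest learned value for this
    sensor context, exploring at random a fraction epsilon of the time."""
    if random.random() < epsilon:
        return random.choice(MICRO_ACTIONS)
    return max(MICRO_ACTIONS, key=lambda a: Q.get((context, a), 0.0))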

But keep in mind, I came here not to continue the discussions we have in c.a.p., but because I've been building and experimenting with robots to see if I can apply some of those grandiose ideas to real world problems. So I want to know more about what has worked in these real world applications - and I'm learning, as people teach me more about ideas like BBR. :)

But, if we could produce a good generic RL-trained decision tree (which is basically what I was trying to create with my "pulse sorting" networks) which could be plugged in to drive micro-behaviors in the simplest of bots, I think it could be of real value for even these very simple bots. I've been playing with the Vex hardware, which has only a small PIC processor, but that's more than enough to test some of the basic RL ideas.

As you well know, I think more can be done with general solutions than you do. I think if we solve the lower level better than we have now, it will make filling in those higher levels much easier.
The problem with hand-coding instead of learning is that you have to hand-code all the levels - which means you have to conceptualize how to factor the problem into levels, and sub-levels, and then fill in each level. Strong learning systems should do that factoring and filling in automatically. With a strong learning system to work with, the problem for the programmer becomes one of picking the correct high level goals and motivations. If you can define them correctly, the learning system will fill in the implementation details for you.

Well, we don't really have a problem there, do we? It's the next step that has all the problems...

Right. There are plenty of cheap, high quality, high bandwidth sensors we can put on robots (vision, sound, vibration sensors), but the problem is that we don't have good processing systems to deal with complex high bandwidth temporal data sources like these. Think about how useful sound is, for example, if you have a brain like ours to process it. You could add low cost directional microphones and A/D converters to any of our cheap robots, and yet it's not done (except in the very special and limited application of sonar distance sensors). It's because extracting useful trigger data from an N-channel microphone system is beyond what we easily know how to do.
If we were driving the bot, we would learn to hear the sound of the bot hitting a wall, of the wheels spinning on the ground, and of the motors straining. These are the kinds of clues buried in an audio data stream that you can't expect a human to easily hand-program as triggers for switching to a different behavior (we need to back up and turn left because our wheels sound like they are slipping on this surface). This is the type of thing a learning system needs to be able to find, and extract, from complex data streams. It needs to recognize the correlations between a sound, and the fact that a goal is not being reached.
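As a crude illustration of the kind of feature I mean (the threshold here is hand-picked, which is exactly the part a real learning system would have to discover on its own; the odometry input is hypothetical):

import math

def rms(frame):
    """Root-mean-square energy of a short audio frame."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def wheels_probably_slipping(frame, odometry_delta):
    """Hypothetical trigger: loud motor/wheel noise but no measurable
    forward progress. Illustrative only - a learning system would have
    to find this correlation in the raw data stream by itself."""
    return rms(frame) > 0.3 and abs(odometry_delta) < 0.01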
And I think a big part of why current learning systems don't do this very well is that we don't yet have the right data processing algorithms - which I believe you think of as a perception problem, but which I tend to see more as a behavior problem. Either way, we don't have good enough systems for finding, and extracting, useful information from complex sensory data streams - which I think agrees with the point you are making. I think this ability is the number one most important ability to improve on.

I think if done correctly, it won't be as radical as it seems to be. And though many robots use very cheap low power CPUs, we have plenty of high power CPUs that really are still affordable. And I think better algorithms for the automatic processing of complex data can be of use even for simple sensors with much smaller processors. For example, we can create fairly low bandwidth audio signals that any human could make very good use of (I'm thinking even less than phone bandwidth). Or just tactile vibration sensors, which need not be more than single-bit sensors. But the problem is always not knowing what to do with the data, because it's too complex for us to understand how to transform that data into useful behavior triggers.

I think stupid little robots are likely to be an important testing ground for the ideas that will lead us to the high level processing levels Marvin (and many of us) want to create.
--
Curt Welch http://CurtWelch.Com/
snipped-for-privacy@kcwc.com http://NewsReader.Com/
Curt Welch wrote:

And it had layers so that higher-level behaviors could suppress or encourage lower level behaviors.
I also prefer to call this "reactive programming" because "behaviors" rings wrong for me. This type of programming uses short sections of code that react to sensory data.

I like this. I think that the lower-level behaviors should be pure motor functions (power X to right motor), and "wall-following" would be a behavior that suppressed/encouraged the proper lower behaviors.
As a future modification the wall-following behavior could be programmed with a learning algorithm of some type.
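A rough sketch of what I mean, assuming made-up motor primitives and a hypothetical side-facing rangefinder: the wall-following layer never drives the motors directly; it suppresses or passes through the command from the layer below.

# Sketch of a two-layer reactive controller: the wall-following layer
# suppresses or passes through the cruise layer's motor command.

def cruise_layer(sensors):
    return (0.5, 0.5)                 # (left_power, right_power): go straight

def wall_follow_layer(sensors, lower_command):
    d = sensors['side_range']         # hypothetical side-facing rangefinder
    if d < 5:                         # too close: suppress, veer away
        return (0.6, 0.3)
    if d > 15:                        # too far: suppress, veer back in
        return (0.3, 0.6)
    return lower_command              # in the sweet spot: let cruise through

def control_step(sensors, motors):
    command = wall_follow_layer(sensors, cruise_layer(sensors))
    motors.set_power(*command)

The learning version would replace the two hand-picked thresholds and power settings with values adjusted by experience.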
...

Pardon me, but I missed the abbreviation "RL"? I do agree that some sort of decision tree would be useful and quicker than a neural network. ...

This is what DSPs are made for. I believe that there are special purpose chips to do limited voice-recognition. It seems to me possible to use a DSP for the purpose of identifying and remembering these clues.

Yes, algorithms tend to give a bigger improvement than better hardware, but don't overlook brute force when necessary.

Yes, there are DSP chips and ARM chips which are almost as affordable as the PICs and AVRs and such.

Agreed.
--
D. Jay Newman ! Author of:
snipped-for-privacy@sprucegrove.com ! _Linux Robotics: Programming Smarter Robots_

Reinforcement Learning.

That was one of the things that really excited me about that approach. It was orders of magnitude faster than the typical neural nets I had spent years playing with, and it had excellent scaling characteristics. There was no exponential explosion as the dimensions increased. At worst, it seems to be only N log N.
--
Curt Welch http://CurtWelch.Com/
snipped-for-privacy@kcwc.com http://NewsReader.Com/
snipped-for-privacy@media.mit.edu wrote:

That's because the few people in those areas have been banging their heads against the wall for decades now. Is there any "reflective, meaning-based, knowledge-using" system in production use anywhere? (I don't think we can consider Cyc a production system.)
-- John Nagle, Animats
snipped-for-privacy@media.mit.edu wrote:

Certainly, our language behaviors are what distinguish us from other animals. I'm not sure what you mean by "reflective" however, unless this is just a reference to our power to generate language behavior internally (our thoughts), and at the same time react to them by producing more internal language (aka reflect on our thoughts). I do strongly suspect that the brain features which support this type of activity (private behaviors) are quite a bit stronger in us than in all the other animals.
I've not seen any evidence, however, to make me believe that the systems that control our language behavior are different in any significant way from the systems that control all our behavior. Producing language behavior shares all the same problems with producing any other behavior. The brain must, at all times, be constantly generating a continuous sequence of behaviors. Whether that is mouth and lung motions for the purpose of producing spoken words, or a complex orchestration of limb movements to make ourselves a sandwich and eat it, the problem is basically the same - how does the brain determine what behavior to produce next?

Very true.

Reinforcement learning is making a comeback and has shown notable progress, such as the success of TD-Gammon in the early 90's (based on temporal difference learning algorithms). Algorithms like Q-learning have been developed in the past 20 years. Though I don't follow the research closely, it's reported that there's been an explosion in the field in the past 10 to 20 years.
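For anyone who hasn't run into it, the heart of temporal difference learning fits in a few lines; a minimal sketch of the TD(0) value update (V is just a dict from states to estimated values):

def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    """TD(0): nudge the estimated value of `state` toward the observed
    reward plus the discounted value of the state we landed in."""
    td_error = reward + gamma * V.get(next_state, 0.0) - V.get(state, 0.0)
    V[state] = V.get(state, 0.0) + alpha * td_error
    return td_error  # the "surprise" signal the dopamine story maps onto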
I came across an interesting 1 hour video today, which is a talk about fairly recent (the past 5 years) discoveries on animal learning using temporal difference computer models - work which led to the discovery and understanding of yet another piece of the puzzle of what the brain is doing.
How Do We Predict the Future: Brains, Rewards and Addiction (www.uctv.com, greymatters.ucsd.edu)
http://video.google.com/videoplay?docid=09575117793246083
To me, reinforcement learning, the work you are known for publishing some of the first papers on in the context of computers and artificial intelligence, is the only foundation that can explain why, and how, humans use language.
Humans use language to direct all our high level behaviors. We make plans, and follow our dreams, by talking about them, either with others, or just to ourselves.
But how does the brain learn language, and how does it select what language to generate, and when? Why do we suddenly stop what we are doing, and start talking to ourselves? What triggers that reaction? What controls it? Why do we suddenly generate language in the presence of another human? How does the brain select what language to generate? How does it know when to stop talking and start doing something else?
These are the high-level, uniquely human behaviors we must understand and build machines to copy.
We can record knowledge in a book by filling the book up with words. And likewise, we can fill a computer with knowledge in many different ways. But what is the purpose of the knowledge in the machine? There is only one ultimate purpose - to allow the machine to know what behavior to generate next, at all times. This is the only knowledge that exists in a human brain - the knowledge about what to do next for any given environmental context - about which "note" to play next in our lifelong symphony (to use the metaphor someone else brought up here).
When we read a book, we can absorb knowledge from the book, into our brain. But how does this happen? How is it stored?
If the ultimate goal is to build a knowledge-storing machine that can read books, and talk to other humans, and absorb knowledge through this interaction, then it's obvious you want to build a knowledge database. And you want to give it the power to learn from its interactions. And we would like it to have the power to interact with itself, to gain further understanding of its own knowledge (talk to itself to discover, and create, new knowledge) (which might be the "reflective" thing you talked about above).
So I agree with your ultimate desire to duplicate high level language behavior in a machine, for the purpose of duplicating our most human of behaviors - with the hope of duplicating our most human of powers at the same time. If we can make a talking machine which can interact with us like a human would, one which would do much better on the Turing test, for example, than any machine has yet done, that would be fantastic. I would love to have a computer I could ask to go do research for me on the Internet and report back to me what it had learned. I don't need it to have arms and legs or vision.
But a machine with nothing but the ability to receive and generate words through a communication channel has the same fundamental behavior problem to solve that all animals (and robots - trying not to forget what group this is) need to solve - what should it do next? You can ask this question many ways - such as, what is the purpose of the machine, or what is its goal, or how does it pick its own goals? How does it demonstrate creativity? How does it demonstrate adaptability?
What makes human behavior intelligent is that all our behavior is directed towards a purpose. Without a purpose, the machine has no way to know what to do next. Without a purpose, there is no way for the machine to evaluate which behavior is "better". It wouldn't care what it did next - anything would be just as good as anything else. What makes us creative is that we can find new behaviors on our own. How do we do this? By having a system which can understand the value of a behavior never before seen.
Any computer program which is going to attempt to produce human-level intelligent, creative, language behavior is going to need an evaluation system which assigns value to all behaviors. Without this, the machine won't know a great idea when it has it. It won't know which idea to pursue, and develop further, and which to drop.
Likewise, humans don't have photographic memories. We selectively extract, and keep, the knowledge which we sense as being important from the environment. We follow ideas just like we follow bread crumbs to find food - we seek out what we believe is valuable.
No knowledge-based approach to AI that I've seen seems to understand these two fundamental issues. First, the only knowledge we have is knowledge about what is the best behavior to produce in a given context, and second, we have an intrinsic value system which is able to judge the value of all behaviors. It's this value system that allows the brain to determine what it should do next, and, when a new behavior emerges, to recognize (and reward) its value.
I agree completely that we need to build knowledge systems and solve the problems of high level language production, but I think that most people working in that field have totally missed the mark on what they should be building. They have structured their systems more like electronic books than like brains. Books are not intelligent. They lie there and do nothing until an intelligent agent interacts with them. We need to build a machine that duplicates the function of the brain reading the book, not the book. And humans make behavior choices (do I read this book, or that other book) based on the perceived value of each behavior.
Reinforcement learning is all about the creation of value-based behavior systems. Such systems explain how humans evolve their complex "intelligent" behaviors, and they explain what intelligent behavior is. They explain how it is possible for humans to be creative and inventive. They also all use a knowledge database to direct all their behaviors. But that knowledge database doesn't just store associations between facts. It instead creates a knowledge database in the form of a value function, which answers the question of how valuable different behaviors are in different contexts. That's the type of knowledge database you must build in order to direct a machine to act intelligently.
And it makes no difference if you choose to limit the machine to only being able to produce language behaviors, or if it's a robot with arms and legs and eyes and ears. If you want it to be intelligent, it has to be a reinforcement learning machine which directs behavior through a value system, and which also has the power to evolve its value system through experience.
The advantage to working with robots first is that we can learn to produce strong reinforcement learning systems for behavior problems which are simpler than the full human language problem. If you can't build a reinforcement learning machine that can learn to find food in a maze, then you aren't going to get a language machine to work. This is because the maze problem, like language, requires the machine to understand a long history of context (what is the history of turns I've made so far), and produce the correct next behavior based on that long temporal context. Language is the same, but even worse, because the next word I need to speak might be based on a long context of the last 500 words I just heard. If you can't build a mouse that can learn to correctly react to the small context created by a small maze, there's no way you are going to get a machine to correctly react to a long string of language.
I'm not aware of any robot mouse that can learn how to get itself out of different mazes, or to find food in different mazes, through generic learning techniques (aka a mouse that wasn't hard-coded just to solve mazes). But yet, this is a problem you looked at over 50 years ago. It's the same problem that wasn't solved then, and hasn't yet been solved, but needs to be solved before we are ever going to make a machine use language like humans do. It's an easier version of the same problem and the type of problem we should solve first.
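To be concrete about what such a benchmark involves, here is a sketch of a tabular Q-learner for a small maze (the `step` function is a stand-in for whatever simulated or real maze you have; everything here is illustrative, not a solved system):

import random

ACTIONS = ['N', 'S', 'E', 'W']

def q_learn_maze(step, start, episodes=500,
                 alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning. `step(state, action)` is assumed to return
    (next_state, reward, done), with done True when the food is found."""
    Q = {}
    for _ in range(episodes):
        state, done = start, False
        while not done:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
            nxt, reward, done = step(state, action)
            best_next = 0.0 if done else max(Q.get((nxt, a), 0.0)
                                             for a in ACTIONS)
            Q[(state, action)] = Q.get((state, action), 0.0) + alpha * (
                reward + gamma * best_next - Q.get((state, action), 0.0))
            state = nxt
    return Q

Note that this only works because the state is fully observable; the hard part being pointed at here is when the real "state" is a long history of turns, which a plain table can't represent.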

I've been working in this same general area for 25+ years. If you call it work - it's really more like play. I didn't pick this direction because it was popular - I picked it because it was the only one that looked like the approach that could explain how to create an intelligent machine.
I don't know what motivates the bulk of the AI community, but I never had to make a living, or build a career, so my decisions were not biased by needs like that. I simply wanted to figure out how to build an intelligent machine for the sheer intellectual challenge it presented (and OK, for the potential glory that it might bring if I succeeded in creating something significant before anyone else). But I had no downside to fear - no need to cover my bet to make sure I could keep eating.
Many of those making their living doing AI research are no doubt highly motivated by projects they believe they can get funding for, and for which they believe they can create a success - to allow them to get funding for the next project. This no doubt motivates people to work on stuff that is currently "popular" in the eyes of the people with funding power. Mostly, because of the lack of any significant breakthroughs in the past 40 years, I suspect this has caused people to set their sights far lower - to do simple things, and ignore the hard problems.
Creating significant new algorithms is hard, and very risky. But I think that's what needs to be done. Instead, people probably look for projects that are only a small step forward - let's build a faster chess machine, or let's build an autonomous car that can drive a little faster.
But trying to understand how to create a value-based knowledge database for open ended, high dimension problems, like making a robot that can learn to find its way through a maze, requires some new insight into how to structure the database - insight that might come after a 5 year project to look for it, or the 5 year project might produce nothing, greatly reducing the odds the researcher will get any more funding.

I've pre-ordered a copy. I don't like reading on-line all that much. :)

Ultimately, we have to bridge the gap from the top to the bottom. It really makes no difference whether we build from the top down, or the bottom up, as long as the end result is a complete bridge.
So far, I think most of the top down approaches have been lost, not knowing where they were headed (or headed in a direction that I think was a dead end). Knowing, for example, that we need to build a knowledge database is a top down issue. Knowing how to structure it is the problem of not knowing where we are headed. This is because when we look into ourselves, we can't see the mechanisms that create our intelligence; we only see the top level end product - our behavior. So the top level problem is obvious - we need to build a machine to receive, and generate, language - strings of words.
Reinforcement learning is one of the bottom up approaches that seemed hopeful very early on, but which never got very far and was given up before much of anything was built, simply because no one could see how to build the bridge any further - there were many problems, and no answers.
But, I think after many different top down and bottom up approaches have all failed to close the gap, some people are beginning to see the light. Reinforcement learning is the only bottom up approach that explains human behavior. No matter how many problems are left unanswered in how this is implemented, that's the path we must take from the bottom, and it's the point any top-down approach must close in on. Any top down approach which isn't headed towards creating a reinforcement learning machine isn't creating machine intelligence - it's just creating yet another type of computer tool (like a game playing program, or a logic reasoning engine, expert system, etc.).
Machine intelligence requires the type of creativity that only comes from critic-based learning systems that can evolve their own complex, future-looking value prediction functions. Machines can't be intelligently creative if they can't recognize the value of their own behaviors.
TD-Gammon is a good example of a machine that can do this. It recognized, and learned on its own (by playing itself - a very "reflective" behavior), an opening in the game of backgammon that none of the expert players used. But after seeing TD-Gammon use the opening, the expert humans analyzed it, and decided it was the best way to play that opening, and now they use it. TD-Gammon showed how machines can be intelligently creative - can create things that no human has created. Other evolution-based systems have done the same (GA or evolution-based learning systems are just another type of reinforcement learning).
At the low level, we know how to build "intelligent" machines, like TD-Gammon. At the high level, we have built some interesting word and idea manipulation machines - but none that I know of have been built with a reinforcement learning core directing their behavior - which is why the high end solutions don't look at all "intelligent", no matter how much they seem to "know". They are just not creative, purpose driven machines yet.
What we don't know, is how to build a high end reinforcement learning language machine - one which is able to learn open ended language behavior, instead of just playing in a very limited environment like a board game. That's the gap we have to close.
I think playing with robots is a great way to help close the gap from the bottom up. From the top down, no one is going to get anywhere unless they realize where they need to head - which means they need to figure out how to build a knowledge database structured for the purpose of producing constantly changing language behavior, based on a reinforcement learning core. The knowledge database must be structured to answer the only question the machine ever needs to answer - which is, "What behavior is most likely to produce the most value for the current situation?". It produces a constant string of intelligent behaviors by continually answering that question (and constantly learning - aka changing its predictions of values based on experience).
--
Curt Welch http://CurtWelch.Com/
snipped-for-privacy@kcwc.com http://NewsReader.Com/
Curt Welch wrote:

A neuron does what neurons do and a computer does what computers do. Getting one to do what the other does is a matter of approximation and interpretation. They are never one and the same.
-- Joe Legris
On 29 Aug 2006 17:58:44 GMT, snipped-for-privacy@kcwc.com (Curt Welch) wrote:

Except, and correct me if I'm wrong, haven't they discovered micro structures (microtubules) within each neuron that act like biological quantum computers? So instead of being a simple input output circuit, the individual neurons have some significant processing power in their own right. The problem just got more complicated by a few orders of magnitude.
____________________________________________________ "I like to be organised. A place for everything. And everything all over the place."
Tim Polmear wrote:

"Microtubuar consciousness" is the brainchild of the quantum-mechanic Roger Penrose, but I really don't think many in the neuroscience community, outside of possible Karl Pribram, take this idea very seriously. Pribram, BTW, was the brain-master of the hologram-in-the-brain hypothesis, which is somewhat debunked now, I think.
My feeling towards the genesis of Penrose's idea has always been ... "Well, I'm a quantum mechanic, and quantum weirdness underlies everything, therefore it must underly consciousness too, so what can I find in the brain that looks quantum? Ahh, microtubules!" Gakk!
In short, there are MANY MANY different "theories" of brain operation, with many different [and small] groups of adherents, and only marginal evidence to support any of them, as yet.
Regarding individual neurons, they are much more than simple input-output circuits, as they have large dendritic trees, which act like complex distributed analog computing elements, the output of which triggers action potentials [digital outputs] in the cell axons, which can propagate for many centimeters. We've had many discussions of this on google c.a.p. [comp.ai.philosophy]. Curt's use of the word "easy" above is probably overly optimistic.

I've been debating these sorts of AI ideas with dan for years on c.a.p. I'm well known for my belief that AI will be simple or easy once we understand the correct big picture, or the correct approach, or the correct algorithm. I also tend to be more optimistic than most about how long it will take to uncover this "simplicity". I made a 10 year bet back in the 70's which I lost, but I've recently made another 10 year bet with the same old friend, which will end in 2015. I don't intend to lose this time. :)
Dan (as well as most other people) doesn't share my vision of simplicity. They see the brain, and AI, as complex machines solving complex problems (as far as I can tell). I believe the theory behind what the brain is doing is fairly simple, and the complexity only comes from its size, and from the implementation details which do make us what we are. I believe creating intelligent machines will, in time, be easy. Duplicating all the nuances of human behavior and human personality, however, will always be very complex and time consuming.
I think AI is like trying to understand the orbits of the planets. On the surface, it looks too complex to understand. Only after a lot of careful documentation, collection of data, and study could the true simplicity be uncovered and expressed by Kepler's three laws of planetary motion. And then later, it was simplified even more by Newton's laws of gravity and motion.
What started out as something only the Gods could understand, translated to something as simple as F = G m1 m2 / r^2 to explain the force of gravity and F = ma to explain all motion from force. From these simple concepts, the motions of everything in the sky can be explained.
I believe machine intelligence will turn out to be just as simple at the core. Others accuse me of greedy reductionism. Time will tell who's right.
--
Curt Welch http://CurtWelch.Com/
snipped-for-privacy@kcwc.com http://NewsReader.Com/
Curt Welch wrote:

<...>
And I suspect that is where this thread should be. My impression is that comp.robotics.misc is about hardware and _practical_ control systems (brains) for robots.
-- JC

I'm not familiar with this discovery, but I think I've seen mention of it. But yes, for a long time now, the more they study neurons, the more complex they get. That only makes it harder to try and understand what it's all doing.
I believe there are some fundamental, simple ideas behind what the brain is doing that we don't fully understand yet. Just like there are simple ideas behind all complex machines. You can understand the basics of an airplane by playing with a simple rubber band powered balsa wood toy plane. But tear apart a modern jet fighter, and all you find is unlimited amounts of complexity in all the small parts. Seeing the big picture is very hard when you get lost looking at all the small parts.
Understanding the big picture is the trick to cracking the secrets of the brain. It's being approached on the theoretical fronts of computer science and mathematics, as well as on the experimental front of neuroscience research. Together, they will at some point uncover a clear big picture understanding of what the brain is doing. Once we have that, we will probably be able to re-implement the same ideas using digital technology. Most likely, we won't end up with anything that looks like a neuron when we are done, just like you can't find feathers or flapping wings on our flying machines. Feathers are extremely complex things, just like neurons are extremely complex things, and though a feather is an important part of a bird's flight (take away their feathers and they can't fly), we don't find them in planes. Likewise, I don't really care how complex neurons are; it's unlikely we are going to find anything like them in our intelligent robots.
--
Curt Welch http://CurtWelch.Com/
snipped-for-privacy@kcwc.com http://NewsReader.Com/
snipped-for-privacy@kcwc.com (Curt Welch) wrote:

Oops, sorry for coming in late and replying to a reply. But as someone with a master's in neuroscience, and a practicing software engineer with a great deal of interest in neuron emulation [1], I can tell you that that microtubule-quantum-computer story is complete and utter bunk. Roger Penrose is looking to find God in the microtubules, that's all. He is a brilliant mathematician, but he is NOT a neuroscientist and should quit trying to play one on TV.
Real neuroscientists can simulate neurons to pretty much any desired degree of accuracy, even predicting their outputs to a given set of inputs. They do this with compartmental [2] or kernel-based models, and quantum mechanics has nothing to do with it.
Cheers, - Joe
[1] http://www.ibiblio.org/jstrout/uploading/ [2] http://www.strout.net/conical
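For a taste of what those models look like, here is a much-simplified single-compartment (leaky integrate-and-fire) sketch. Real compartmental models chain many such compartments with detailed ionic conductances, but it's all ordinary differential equations stepped through time, with no quantum mechanics anywhere. Parameter values are typical textbook numbers, not from any particular study.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Leaky integrate-and-fire: a toy point-neuron model. Integrates
    the membrane voltage and records a spike time whenever it crosses
    threshold, then resets. Illustrative only."""
    v, spike_times = v_rest, []
    for i, current in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * current) / tau
        v += dv * dt
        if v >= v_thresh:
            spike_times.append(i * dt)
            v = v_reset
    return spike_times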
