Where is behavior AI now?

wrote:


Hey, VERY simple is all you're going to get from me; I couldn't back-propagate my way out of a wet paper bag. Making sense of the environment seems a hideously complicated task, to my way of thinking, and let's face it: people with PhDs in all sorts of things have been working in the field for decades.
But my illustration of a bot in a box was not intended to merely represent a typical bump-n-go thing on wheels. I maintain that even that minimalist environment could be used to test a robust machine intelligence system.
Driving, bumping, odometry, mapping, route planning: these are all mechanical aspects of the process. In my example, I suggest that the theoretical obstacle is removed after being mapped by the robot. It doesn't take intelligence for the robot to discover that the obstacle is missing and update its area map. An intelligent robot, however, might be able to generalise about the nature of obstacles:
I can crash into obstacles.
I can drive around obstacles.
Obstacles can exist.
Obstacles can be in front of me or beside me or behind me.
Obstacles can be close or far away.
Obstacles can be in my way or not.
Obstacles can be small, like the one I found at x,y position.
Obstacles can surround me.
There can be a distance between obstacles.
Obstacles cause a change in certain registers of my circuits.
When I am stationary, obstacles have no effect on me unless I have already crashed into one.
Obstacles can be approached from a number of directions.
Obstacles can meet in a corner.
Some obstacles stop my odometry circuits registering forward motion even though I am trying to drive forwards.
Some obstacles can move.
Obstacles do not usually move.
Some obstacles can be there and then not there.
If the obstacles that surround me behave in the same way as the obstacle that was in the middle of my world and which is now gone, what will happen?
And so on...
Writing these ideas as a human is a fairly simple task, but I think it would be a very elaborate robot that could fully comprehend the nature of a box with a block in the middle.

What would her mind have done then, I wonder. A 'tabula rasa' with no sensory input... The neurons would have created some kind of altered sense of reality, unless consciousness is entirely based on sensory input. Or would the brain have simply atrophied, losing its capacity to function beyond the semi-autonomous regulation of the body's systems? Any evil scientist worth his reputation could answer the question with a brain in a jar and an MRI machine :)
____________________________________________________ "I like to be organised. A place for everything. And everything all over the place."

You have a long and reasoned argument. I'm sorry, but I do not buy it. I think the argument is based on fallacious reasoning, via application of reductio ad absurdum. It starts with bump switches. No magic demon sits on the bump switch and sends back detailed morse code describing the texture and color and arrangement of the world sliding by the whisker. No UART sits on them coding up ascii messages of depth and complexity. No such information content exists, no matter how often the bump switch is read.
There are no trillions of inputs with 20 readings on 2 sensors taken over 2 seconds. All that is there is a one-part-in-20 representation of a switch closure, not 40 bits of information. There are two parts to the information. One is purely binary: the switch is open, or the switch is closed. The other part is purely a measure of time. No matter whether the switches are read once a second or a million times a second, only (largely useless) resolution is gained on the timing of that bit change. It is of little "information" significance. Given two bump switches as the only inputs, the information content is relative between the two switches, so which switch went first (assuming both closed before escape action was taken and released the first) is about all of significance that can be made of the situation.
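To illustrate (a minimal sketch, with hypothetical read_left/read_right callbacks standing in for the actual switch reads): however fast you poll, the only information you can extract is the open/closed bit and a timestamp for the edge.

import time

def poll_bump_switches(read_left, read_right, period_s=0.1, duration_s=2.0):
    """Poll two bump switches and record only what is actually informative:
    which switch closed, and roughly when it closed."""
    events = []                          # (timestamp, switch_name) per open->closed edge
    prev = {"left": False, "right": False}
    start = time.time()
    while time.time() - start < duration_s:
        for name, read in (("left", read_left), ("right", read_right)):
            closed = read()
            if closed and not prev[name]:
                # Faster polling only sharpens this timestamp; it adds no
                # new information about the world beyond the edge itself.
                events.append((time.time() - start, name))
            prev[name] = closed
        time.sleep(period_s)
    return events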
But even my premise that there are at most a few bits of information in two bump switches misses the point I tried to make in my post, by putting the cart before the horse (or some such).
I was focused on outputs. If your robot has limited output combinations, it doesn't matter how intelligent it might otherwise be. It can only express itself by having a very few states.
Here the poster child should not be Helen Keller, but Terri Schiavo. If there are no motor functions available to evidence intelligence, we tend to assume there is none.
jBot is a wonderful and talented robot, a credit to the state of the art. But it can't even wag its tail when it is finished hunting its spot.
So without over-splitting the analog control of outputs (so as to claim infinite responses, like: this is a one-meter circle, this is a 1001-millimeter circle, this is a 1002-millimeter circle, and so on), we ought to be able to characterize all possible behaviors, as evidenced by output settings. Then, given all the possible inputs in terms of relationships (again avoiding analog hair-splitting), we should be able to come up with an equivalent machine, by linking all output states by input data relations, and be able to describe the complexity of the machine we see.
Complex behaviors like escapes? Perhaps they are simple behaviors applied according to the passage of time. That takes me back full circle to the original premise: the intelligence is in the changes of behaviors, and not the behaviors themselves.
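A rough sketch of that "equivalent machine" idea, with made-up behavior names and input relations: the atomic behaviors are just output settings, and everything interesting lives in the transition table.

# Atomic behaviors: a small, finite set of output settings (left, right motor).
BEHAVIORS = {
    "stop":          (0, 0),
    "both_forward":  (1, 1),
    "both_back":     (-1, -1),
    "spin_left":     (-1, 1),
    "spin_right":    (1, -1),
}

# The "equivalent machine": output state to output state, keyed by an input
# relation (which bump switch went first, or none at all).
TRANSITIONS = {
    ("both_forward", "left_first"):  "spin_right",
    ("both_forward", "right_first"): "spin_left",
    ("both_forward", "none"):        "both_forward",
    ("spin_right",   "none"):        "both_forward",
    ("spin_left",    "none"):        "both_forward",
}

def step(state, input_relation):
    """One tick of the machine: the atomic behaviors are fixed; only the
    transition table decides which one comes next."""
    next_state = TRANSITIONS.get((state, input_relation), "stop")
    return next_state, BEHAVIORS[next_state]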
--
Randy M. Dumse
www.newmicros.com
Howdy,
Randy
I'm not sure I understand your premise here: that limited output resources (i.e., two drive motors) somehow translate into limited behaviors?
I don't think that is a given.
For example, when I drive the R/C camera car, its behavior is distinctly more intelligent than when the robot is driving itself, especially when interacting with other humans. And it is limited to the same two output motors in both cases.
The complexity of the robot's behavior and the intelligence, even humor, inferred from that behavior does not seem to depend on the limited-ness of the output resources, but rather on the way they are manipulated.
regards, dpa
dpa wrote:

Then perhaps you haven't programmed the robot all that well (not a serious suggestion here, but a rhetorical argument for the sake of amplifying the missing pieces). As soon as it sees humans, it should do a humor act, like you do when you see a human close by. Oh, but it doesn't have eyes, does it? Well, what's the point of a humor output if it can't tell there is anyone there to see it! (Hmmm... Is the problem the humor act, or is it knowing when to do it?)
So perhaps it cannot act as intelligent as you, perhaps because it does not have the sensors to detect occasions for humor, or perhaps it doesn't have the dexterity in its fingers that you do on the RC controls. Oops. It doesn't have fingers, does it? But surely, if your fingers can tickle the RC controls to jiggle-giggle (or however you express humor), the robot could make the same motor command outputs without the controller that you can with the controller, right?
So now, about that jiggle-giggle thing. What combinations of motor control does it take to do it? We go both forward for a while, then both back for a while, then forward left, back right, then back left, forward right, then pause, then repeat...
Which one of these "behaviors" is it that the robot can't do?
My answer, none. I see the atomic "both forward" as a behavior. Also I see the atomic "left forward right back" as a behavior. I see all those quantifiably different output combinations as atomic behaviors.
But the jiggle-giggle thing? Right now I'm wondering if that isn't a behavior at all. It's something else. It's a time sequenced exhibition of behaviors. It is an attempt to send "morse code" by patterning behaviors.
Hence I suggest, "It occurs to me that artificial intelligence will be found in the decisions that switch behaviors rather than the layers of behavior themselves."
Given limited outputs, I suspect we can list all possible (or observed) outputs. So being in some state of output becomes quantifiable. But it is the transitions between those states which show the intelligence.
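To make the distinction concrete, a toy sketch (set_motors is a hypothetical motor-command helper): the jiggle-giggle is nothing but a timed script over the same few atomic output states.

import time

# A "jiggle-giggle" expressed as a timed script of atomic output states.
# The pattern lives in the sequencing and the timing, not in any single state.
JIGGLE_GIGGLE = [
    ( 1,  1, 0.5),   # both forward
    (-1, -1, 0.5),   # both back
    ( 1, -1, 0.3),   # forward left, back right
    (-1,  1, 0.3),   # back left, forward right
    ( 0,  0, 0.5),   # pause
]

def perform(pattern, set_motors, repeats=3):
    for _ in range(repeats):
        for left, right, duration in pattern:
            set_motors(left, right)
            time.sleep(duration)
    set_motors(0, 0)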

Okay, I offer a proposal. Let's put you in a steel box with your RC controller (antenna outside), but you have no vision of where you are going, you have no sense of the roughness of the terrain, you have no camera to see when humans are approaching your robot. You have only this feedback: five numbers that represent ranges, current readings from the motors, and a few numbers showing compass heading and inertial changes. Now I'll bet you your robot will look much more intelligent without your help than you will with full control.
Again, I think the intelligence is not in creating robots with behaviors (BBRs). If the outputs are few, and the inputs also few, all possible behaviors are pretty quickly delineated. The emergence of intelligence is instead in the sequencing of the behaviors.
Even though I seldom place a close on my posts, let it be assumed as implied and understood, always: best regards,
Randy www.newmicros.com
RMDumse wrote: <snip>

<snip>
But that was not your premise, as I understand it. You wrote:

We only have 12 musical notes. So how many different melodies can be written? Can you calculate it with some simple connection matrix? Is it 12 factorial? Is there a limit?
I reject the premise that the variety of robot behaviors is limited by the number of output resources, or can be determined by some simple connection matrix, as referenced in my original query, to wit:

Your response, a proposed experiment in a metal room, concerns itself with limiting input sensors, not output resources, and so strikes me as irrelevant.
Let me try again. Musicians have been able to get an infinite variety of music from the same 12 notes since the time of the ancient Greeks. The variety and "intelligence" of the music does not seem to be limited by the limited nature of the "output resources."
Similarly, a remotely piloted vehicle, as an extension of the human operating it, has an infinite variety of behaviors, as varied as the imagination of its operator. The attempt to limit the complex world of "behavior" to some combination of motor movements strikes me as naive, like suggesting that because there are only 88 keys on a piano keyboard, that somehow limits the number of possible piano pieces that can be composed.
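A quick back-of-the-envelope on the earlier "is it 12 factorial?" question, ignoring rhythm, dynamics, and octave entirely: once notes may repeat and melodies may be any length, the count has no practical bound.

from math import factorial

print(factorial(12))   # 479001600 orderings of 12 notes, each used exactly once
print(12 ** 8)         # 429981696 different 8-note melodies if repeats are allowed
print(12 ** 16)        # ~1.8e17 different 16-note melodies
# Allow any length, plus rhythm and dynamics, and there is no practical limit.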
Clearly we have a long way to go before our robot's autonomous behaviors begin to resemble the intelligence and creativity of that same robot as operated by a human. Until that is achieved, it seems pointless to assume that advances in AI are being held back by the lack of output resources.
It seems to me that the lack of perceived intelligent behavior in our robots is not a function of their lack of output resources, but rather how those resources are used. If you accept that as true, then I think the rest of your conjecture falls apart.
In fact you may remember from our last action-packed episode, the somewhat heretical observation that sophisticated robot behavior arises from REDUCING the number of possible states, which I still think is worthy of further contemplation.
as always, dpa
dpa wrote:

I realize your music analogy is about how M outputs do not necessarily constrain N results, but I think it works just as well in a larger picture. To wit:
I find the above analogy doubly interesting because Pythagorean intervals aren't the only way to play music. There's the pentatonic scale, which the Greeks based their scale on: it consists of five notes, it's a scale every rocker gets to know well, and its notes are identified by the black keys on the piano. Indian music divides an octave into 22 steps and uses only a subset of them. Arabic music has 16 steps, as I recall.
The point is there is more to music than the (two) Pythagorean intervals and a 12-step scale, and different cultures have created their music using a variety of tonalities. To me, this is yet another example of how there can never be one way of doing anything. A Western ear will hear only 5 or 12 notes to an octave, because that's the music we listen to; yet there is a whole world of musical differences that have historically proven themselves to be "tuneful" and enjoyable to other cultures. This thread serves as ample proof that there is no right (or in my view wrong) way of approaching robotic programming, whether or not it includes aspects of behaviorism.
You spoke of a "heretical observation." I have one I'm sure will send me to hell: personally, I find behaviors for many (I didn't say all) tasks in robotics a dead end. The formal literature on the subject has all but ceased; Brooks hasn't written anything new on it, really, since his original papers some 10 years ago -- his Cambrian Intelligence book is a reprint of old articles. Could it be that behaviorism is not capable of all that it is cracked up to be, and the real results cannot sustain the hype?
I base my contrarian observations on my box turtle, Brantley, who has escaped yet again. Turtles are rather intelligent as far as reptiles go, but what I saw of Brantley's abilities could be summed up in one phrase: "dogged determination and random actions." Every day Brantley tried the same length of bender board in his quest to see the rest of the world. He did the same things over and over again. There was no indication he ever learned that what he did even a minute ago would not work right now. But it turns out that Brantley was able to escape into the larger part of our yard (thick with ivy and underbrush) exactly because he had no memory of what didn't work. As a state machine, all he did was eat and poop. He kept trying a random assortment of things until, one day, his efforts yielded success.
I'm not saying the ideal robot just goes around doing random acts, simply because there is NO ideal robot. It seems to me that, just as there is more than one musical scale and an infinite number of ways to put those tones together, just about everything works to one degree or another, depending on our application. It seems dubious to argue for one prescribed approach, such as behavior AI, when the application universe may be infinite.
-- Gordon
Gordon McComb wrote:

In one sense, behaviour-based robotics is a much bigger task than mere methodological behaviourism because roboticists must both justify their theories of behaviour and implement them too, which probably requires a full-blown quantitative analysis of behaviour. Behaviourism is still mostly in the qualitative stage so it cannot provide much guidance here. One thing it can do now is to provide reality checks on the results, assuming that natural-like behaviour is the goal.
IMO, a quantitative analysis of behaviour is to be found in neuroscience - consequently, behaviourism will have to sneak a peek under the hood if it hopes to achieve quantitative status. Jeff Hawkins' work on intelligence is an example of trying to stay close to the neurological basis of behaviour (e.g. http://www.stanford.edu/~dil/invariance/Download/GeorgeHawkinsIJCNN05.pdf ), but I think we're going to have to get much closer still.
Unfortunately, the complexity of neurological processes does not lend itself to digital implementation - we immediately run into a computational explosion, just as we do in attempting to simulate other natural phenomena with computers. We are left with a choice between a simulation whose implementation is vastly more complex than the phenomenon being simulated, or a simulation of a vastly simplified model. For example, IBM's Blue Brain project requires a processor executing roughly 3 billion operations per second for just 1 or 2 neurons, and that doesn't include molecular-level processes such as gene expression.
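Taking that Blue Brain figure at face value, a rough order-of-magnitude scaling (assuming on the order of 10^11 neurons in a human brain, and nothing else):

# Rough scaling of the Blue Brain figure quoted above (assumed: ~1e11 neurons).
ops_per_neuron = 3e9 / 1.5          # ~3 GOPS for 1-2 neurons, so roughly 2 GOPS each
neurons = 1e11                      # order-of-magnitude human count (assumption)
print(f"{ops_per_neuron * neurons:.1e} ops/s")   # ~2.0e20 ops/s, gene expression not included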
If robotics is to obtain a natural model of behaviour it will need a computational platform whose intrinsic behaviour is somehow analogous to neurological processes (or some distillation of their essence, if such a thing is possible), so that instead of explicitly computing the behaviour, it just "does" the behaviour. There may be some way of doing this with analog electronic circuitry, but I suspect that the only thing that behaves like a neuron is another neuron.
And so maybe that is why behaviour-based robotics seems to have stalled. A behavioural approach leaves out the details that are vital to the implementation.
-- Joe Legris

It's unclear whether the computational explosions that people run into when attempting to implement behavior are due to some intrinsic nature of the effect (as with weather, for example), or are simply fallout from using the wrong model. I've always believed it was just a problem of not having the right model.

Yes, that's true.

I think that's just a silly statement. "Computing" and "doing" are one and the same thing. Computers don't "compute"; they behave according to their design.

I suspect the basic function performed by a network of neurons will be easy to duplicate in digital hardware.
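For what it's worth, here is the textbook weighted-sum-and-threshold abstraction that claim presumably rests on; it says nothing about whether real neurons are actually that simple.

def mcculloch_pitts_unit(inputs, weights, threshold):
    """The textbook abstraction of a neuron: weighted sum plus threshold.
    Real neurons do far more; this is the 'basic function' at its simplest."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# A two-input unit wired up as an AND gate:
print(mcculloch_pitts_unit([1, 1], [1.0, 1.0], threshold=2.0))  # -> 1
print(mcculloch_pitts_unit([1, 0], [1.0, 1.0], threshold=2.0))  # -> 0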

Yes, something is left out, that's for sure. The question remains what that is however.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:
Joe wrote:

Using Brooks as a primary example, one can only conclude that BBR has stalled, as regards going past the basics of intelligence. And what might be your conjecture as to why this is so?

I've not read Brooks and I have no idea what BBR is. So I can't comment directly on why that might have stalled, since I have no idea what it is.
But on the general search for intelligence, especially approaches that attempt to use ideas from behaviorism, I think the problem has always been the lack of a correct model. That is, we just don't have the correct algorithm, or description, yet. Though behaviorism has shown us the basics, it falls far short of showing how to implement it.
It's like documenting the flight path of a bird without any understanding of how the bird actually manages to fly. Even a complete understanding of behavior doesn't make implementation obvious (but it does allow you to know when you have the wrong implementation).
Human and animal behavior only looks simple if you limit the environment and reinforcers to something very simple - such as what happens in a Skinner box. Otherwise, it's a huge parallel process where all our behaviors and motivations are interacting and competing with each other. What behaviorism hasn't answered is how to implement a large parallel learning system - something that produces behavior so complex we can't even understand it unless we limit it to an isolated test in a Skinner box. This is the same missing piece which has been missing for over 50 years.
It's stalled for 50 years, because the step from how single behaviors are modified by reinforcement, to how millions are modified in parallel, is a huge gap to cross. No one has found a path across it yet. But even though we have not crossed it, I think much progress has been made in understanding the nature of the problem.
It's like trying to reverse engineer an encryption algorithm. It's just very hard to do. You really can't "see" the algorithm in the behavior. You simply have to try different algorithms until you find the one that works. Reverse engineering the basics of the brain and human intelligence seems to have much in common with this type of problem.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

BBR = behavior-based robotics.
Brooks = Dr. Rodney Brooks, of MIT, who championed the concept and made it relatively popular today. He didn't "invent" it, but he added some additional elements (i.e. subsumption) that were supposed to enhance the viability of behaviors as an AI model, especially in small robots where computational power is limited.
When someone talks of "behavior AI" for robotics, in the present sense it must by its nature include Brooks and his theories. Sort of like talking about psychoanalysis and forgetting all about Freud.
-- Gordon
Gordon McComb wrote:

Curt was just in some sort of mind block ;-). The abbreviation BBR has been used in half the posts on this thread, give or take. I'm sure he's read half of Brooks' papers too.


I've only skimmed a few of them and I've not read the book that was mentioned here. So much to read.... :) But yes, I know of Brooks' work for sure. I just haven't read it in detail, and I'm not familiar with the term behavior-based robotics.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /

Yeah, I guessed that. Or maybe someone wrote it in one of these messages.

Yeah, I know who he is but haven't read much of what he's published. He's published a lot.

I've seen the subsumption architecture stuff. At least some of it. Is that what people are talking about?

The subsumption architecture is interesting because it demonstrates that there are very different ways to solve the same problems. And it shows how much the state of the environment can be used in place of internal state variables to achieve some fairly complex tasks. People used to writing traditional computer software, which has little to no access to state outside the machine, get used to assuming that most of the state that controls the action of the machine is stored inside the machine (memory states, etc.). The tendency to think in those terms blinds us at times to how much the machine can do simply by reacting to the external state. Brooks' subsumption architecture makes it clear that the external state is an important part of how reactive agents work.
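A minimal subsumption-flavored sketch (the sensor flags and motor tuples are hypothetical): the higher layer reads its trigger straight from the environment and, when it fires, suppresses the layer below it.

def cruise():
    """Lowest layer: drive forward. No internal state at all."""
    return (1, 1)

def escape(bumped_left, bumped_right):
    """Higher layer: if a bumper is pressed, spin away from it.
    The trigger lives in the environment, not in a stored variable."""
    if bumped_left:
        return (1, -1)    # spin right, away from the obstacle
    if bumped_right:
        return (-1, 1)    # spin left, away from the obstacle
    return None           # no opinion; defer to lower layers

def arbitrate(bumped_left, bumped_right):
    """Subsumption: the higher layer's output, when present, suppresses
    the lower layer's output."""
    return escape(bumped_left, bumped_right) or cruise()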
The problem with the technique in general is that, at least initially, it was not learning-based. Instead of building a reaction machine that could learn, it required an intelligent programmer to hand-code all the algorithms into it. I don't think humans are smart enough to hand-code the types of programs that exist in our brain. It's like trying to hand-code the weights on a neural network. It's just not something humans can do for programs beyond the trivial.
I think the brain is basically a reaction machine that does much of what it does using subsumption-like techniques. However, most of the real complexity is created by the learning algorithms and is beyond what any human could design by hand.
If you are looking for a better way to hand-code behavior into a robot, I'm not sure the subsumption architecture is going to buy you much. Humans just aren't smart enough to be able to specify software that way for complex problems. However, I think understanding the subsumption approach can give us insights into the correct way to structure a learning machine.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

Actually, no. He's not written all that much, and almost nothing new in the last 8-10 years. He is quoted a lot, though. The book people might be referring to, Cambrian Intelligence, is a collection of his papers through the mid 90s. It's a slim volume, though not necessarily a quick read. He did a later consumer-oriented book ("Flesh and Machines"), and a couple of overview papers since his more prolific period.
I think this is part of the problem. Whether Brooks has ceded documenting the work at MIT to other professors or grad students, or whether his involvement with iRobot and other ventures has him being mum on the subject, the lack of updates and ongoing proofs has led to a lot of fracturing and "hybrid" systems that are less and less behavior, let alone subsumption, AI. I keep hoping he'll see our complaints about the lack of writing, get pissed off about it, and put out something new!! <g>

> I've seen the subsumption architecture stuff. At least some of it. Is that what people are talking about?

Behavior AI doesn't necessarily involve subsumption. The subsumption stuff is Brooks' unique contribution, and it happens to continue being popular. We (here at least) often refer to subsumption-oriented behavior-based robotics as "Brooksian," to differentiate it from other behavior-based models. Most of these are defined in Arkin, and introduced in Murphy. Joe Jones, a Brooks protege and I believe a lead architect of Roomba, is one of the few to have written a practical guide on BBR.

All behavior-based robotics is reactive with the environment, which goes with the territory. Subsumption uses some simple but effective techniques to decompose apparently complex functionality. It wants to see the world in simple terms, and this was Brooks' main argument. The problem with any AI has always been the n+1 factor: even subsumption gets extremely complicated with each layer that's added. It does work very well on simpler machines, as Brooks and others have demonstrated. (I'm sure Dan, who really is our resident BBR expert, has done several iterations.)

Well, if you're hand-coding something you consider to be a behavior into a robot, it's just a cheat to call it behavior-based robotics. I've always termed these "actions" -- for lack of a better word -- and not behaviors. A robot that always steers to the right in order to follow a wall is really just performing a simple action that is not a behavior. We only think it's a wall-following behavior if A) there's a wall there, B) there's nothing else to impede the robot and keep it from following the wall, and C) we're already keyed in to the phrase "wall-following behavior"!
Real BBR would entail emergent behaviors that are the result ("nexus" is the overused word these days) of combining two or more simple behaviors into more complex ones. A cockroach, which demonstrates wall following, may not (and usually does not) display the behavior if it's dark. It turns out the "real" behavior is not following a wall, or scattering when the light turns on, but avoiding being caught.
-- Gordon

Interesting. Then maybe I should make more effort to read them.
I think I've been generally mixing some of his papers with other papers which came out of the MIT media lab while he was connected with it and think of them all as being part of "his work".

Yes, that's the one mentioned. I should probably get that and read it.

Arkin and Murphy mean nothing to me.
Ah, but halfway through responding I read the web page below, which mentions Arkin's book, so now it means something to me. :)

I just spent some time reading a few web sites on behavior-based robotics, and now I think I understand how the term is being used. Yes, I'm a big believer in the approach. It's basically what I've been looking at for the past 25 years. Though I didn't approach the problem to try to build better practical bots; my intent has always simply been to solve AI in general (and ultimately build a robot with human ability). I didn't realize the term had evolved in the robotics world, though I am familiar with a few of the projects that are being referred to.
Here's what seems to be a good overview of the idea if anyone else is like me and unaware of what it is and wants to get a quick introduction:
http://www.inl.gov/adaptiverobotics/behaviorbasedrobotics /
It's clear people looking at this have come to some of the same conclusions I have (from the above site):
... a main reason the behavior-based community is so intent on developing automated learning techniques is that a human designer often finds it excruciatingly tedious or impossibly difficult to orchestrate many behaviors operating in parallel.
This is the same point I was making reference to when I wrote:

This is because, though I think the approach is required to produce human-like intelligence, I think this format is unworkable for hand-designing by humans for any problem above a fairly trivial level. Humans can deal nicely with simple systems where something like 10 behaviors are competing for control, but make it 10,000 and a human has no chance of hand-coding the priorities and the interactions between the competing parallel behaviors. And to get to human-level intelligent behavior, you probably need 100 million behaviors or more competing with each other.
This is why I believe the only solution for advancing this approach beyond the trivial, is to stop hand specifying the behaviors (aka reactions), and replace it with a strong learning system.
The question was asked why behavior-based robotics seems to have stalled. I think this is the answer. Even though I believe the approach is the right one, it's just not workable for humans to hand-design these behavior systems for anything above fairly trivial, limited-domain problems - like making a robot vacuum work correctly.
To learn how to balance thousands or millions of competing behaviors, the robot simply has to learn on its own through experience. Making that work well is going to be the secret both to making the approach go further for real-world robotics problems and, ultimately, to creating human-level intelligence.
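One toy sketch of what "learning the balance" could look like, assuming nothing about any particular architecture: each behavior carries a weight, the highest-weighted applicable behavior wins, and a scalar reward nudges the weights, so the priorities are learned rather than hand-coded.

import random

class LearningArbiter:
    """Toy behavior arbiter: pick the highest-weighted applicable behavior,
    then adjust its weight from a scalar reward. Priorities are learned,
    not hand-specified."""
    def __init__(self, behaviors, learning_rate=0.1):
        self.weights = {name: 1.0 for name in behaviors}
        self.lr = learning_rate

    def choose(self, applicable):
        # Break ties randomly among the applicable behaviors.
        best = max(self.weights[b] for b in applicable)
        return random.choice([b for b in applicable if self.weights[b] == best])

    def reinforce(self, behavior, reward):
        # Positive reward raises a behavior's priority; negative lowers it.
        self.weights[behavior] += self.lr * reward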
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

Arkin, Murphy, Jones, Brooks, Braitenberg, Minsky, Papert (others I'll leave out for now). These are all names that are regularly referenced because they are authors of popular books on artificial intelligence that are available at most any library, or at least a university-level library. These are the people who have published the work progress so far, and this is where debate usually springs from.
I don't mean to come across as haughty or as a name-dropper, but I find that having a fairly consistent frame of reference is helpful for these types of discussions. We know we're all talking the same language, though we might not all agree what the words mean.
In any case, and forgive me if any of these are already known to you:
You don't really need to buy the Brooks book. Just download his papers. Robin Murphy's book is an introduction -- kind of like AI 101 -- and you'll probably breeze through it. If you can't find it at the library, buy a used copy on Amazon. Or I might be able to find my copy, and I'm happy to send it to you if you take care of the shipping. Ron Arkin's book is probably the seminal work used by colleges and universities to teach AI basics. Randy has mentioned it a few times. Valentino Braitenberg's Vehicles book needs no introduction, as it is constantly cited in just about anything related to robotics. MIT heavy-hitters Marvin Minsky and Seymour Papert both have "mind opening" texts that I found very enlightening. Minsky's "The Society of Mind" is a must-read, IMO, if you're interested in AI, even if you don't agree with the good doctor's ideas.
-- Gordon

Yes, they certainly are. :)

Yeah, I know a lot is available, but I don't really like reading papers on-line (though I do it a lot anyway), and if I'm going to print them all out, I'd rather just buy a book.

I got a computer science degree in 1980 and haven't tried to keep up with the literature since then. In the past few years Dan and others from c.a.p. have gotten me to read all sorts of stuff that has helped get me back up to speed (which makes it far easier to communicate as you said), but there's still so much to catch up on. And now you have given me more. :)

I read it a year or two ago. Minsky drops in to c.a.p. now and again and I've communicated with him in email. So I know a lot about his work. I went to one of his talks about 25 years ago as well.

Yeah, it's a good book. I agree with a lot of the basic concepts. Minsky, however, has always been trying to attack the AI problem from a level higher than I think it should be attacked. Well, actually, he tried a behavior approach to AI early in his career (the SNARC, a neurally controlled virtual rat running a maze made of tubes and motors, part of his PhD thesis in 1951) and decided that the approach could never answer some of the more complex issues of human intelligence, and he seems to have spent the rest of his life looking in other directions. I think he had the right approach at the beginning and shouldn't have given up on it so quickly. :)
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Thanks, Curt, for your always insightful advice. But I have an answer to your suggestion that "I think he had the right approach at the beginning and shouldn't have given up on it so quickly." I did return to work on 'low-level' systems at various times, but only became more convinced that higher-level, more reflective systems were mainly what distinguish us from other animals. I was disappointed when Newell and Simon 'moved down' from high-level strategies and embraced non-reflective rule-based systems in the 1970s, and when in the 1980s the story-understanding researchers (and most of robotics community) moved from semantic and conceptual analysis to low-level situation-action rules and statistical models. So, by the mid 1980s, there was virtually no research on higher-level thinking in the entire AI community. (Except, perhaps, for a few like R.J. Solomonoff, who studied the properties of very powerful (but almost uncomputable) higher-level descriptions.)
When we organized a conference on "commonsense knowledge and reasoning" three years ago, we searched the world and found only a couple dozen theorists working in those areas -- as compared to the order of 100,000 people working on rules, statistical and other numerical learning networks, and the like. My question is, why do so many people decide to work in the very most popular fields, where very little is discovered from each year to the next? It seems strange that they do not recognize that those approaches have got stuck on a large and almost flat local peak. Can anyone name 20 important discoveries therein in the past 20 years?
Yes, there has been superficial progress. But Deep Blue learned nothing much about games that was not known in the 1970s -- except (as everyone knew) that a million-times-faster machine could look ahead about 4 more plies. And the DARPA road-running project showed that (again, as everyone knew) combining different sensors can lead to substantially better results. What else?
It seems to me that, unless one is sure of having ideas as original and productive as those of Hinton or Sejnowski, it would be intellectual suicide to commit oneself to those popular areas. Whereas it seems clear to me that future progress will be mainly in the area of reflective, meaning-based, knowledge-using systems. And there are still only a handful of workers in those areas!
I suggest that readers of this group take a look at "The Emotion Machine". The full text, more or less, is on my home page. (The book will be published in November, with a lot of small changes and corrections, and a couple of small newer theories, but the web version has the most important high-level ideas that I've had since "The Society of Mind" twenty years ago.)
However, the two books are almost completely different: SoM is generally 'bottom-up' while TEM is top-down, and they mainly intersect only at the lower levels of knowledge representation. So let's see a few more of you guys look toward some more powerful goals!
Curt Welch wrote:

