Where is behavior AI now?

Howdy

I think we must have very different definitions of what an "output state" is. If you are looking for a mechanical definition (how many keys are there on a piano) I'd say that each drive motor has 100 forward and 100 possible backward speeds so 200*200 = 40000 "output states" as you seem to want to define them.

Plus, both wheels are constantly accelerating and decelerating as the robot maneuvers, and doing so at different rates depending on the circumstances, so we probably need another multiplier in there to accurately describe the number of possible "output states."

But what does that tell us? Behaviors are our robot's melodies. Counting the piano keys, and even taking into account how hard they are struck, tells us nothing about the potential variety of piano music.

It seems to me that all 40000+ "output states" so defined are involved in a "behavior" such as perimeter-following, or seeking high ground, or staying on a path, or following a human, or chasing a ball, etc, etc.

Those are what I would consider "behaviors." And their number is limited only by the imagination of the robot builder and not by how many speeds the two motors can turn in combination.

So I guess I do insist it is infinite, or at least as infinite as is human creativity.

As I say, it appears we have very different concepts of what constitutes a robot "behavior," and therein lies the rub.

regards, dpa

Reply to
dpa

I've only skimmed a few of them and I've not read the book that was mentioned here. So much to read.... :) But yes, I know of Brooks' work for sure. I just haven't read it in detail and I'm not familiar with the term behavior-based robotics.

Reply to
Curt Welch

Actually, no. He's not written all that much, and almost nothing new in the last 8-10 years. He is quoted a lot, though. The book people might be referring to, Cambrian Intelligence, is a collection of his papers through the mid 90s. It's a slim volume, though not necessarily a quick read. He did a later consumer-oriented book ("Flesh and Machines"), and a couple of overview papers since his more prolific period.

I think this is part of the problem. Whether Brooks has ceded documenting the work at MIT to other professors or grad students, or whether his involvement with iRobot and other ventures has him being mum on the subject, the lack of updates and ongoing proofs has led to a lot of fracturing and "hybrid" systems that are less and less behavior, let alone subsumption, AI. I keep hoping he'll see our complaints about the lack of writing, get pissed off about it, and put out something new!!

Behavior AI doesn't necessarily involve subsumption. The subsumption stuff is Brooks' unique contribution, and it happens to continue being popular. We (here at least) often refer to subsumption-oriented behavior-based robotics as "Brooksian," to differentiate it from other behavior-based models. Most of these are defined in Arkin, and introduced in Murphy. Joe Jones, a Brooks protege and I believe a lead architect of Roomba, is one of the few to have written a practical guide on BBR.

All behavior-based robotics is reactive with the environment, which goes with the territory. Subsumption uses some simple but effective techniques to decompose apparently complex functionality. It wants to see the world in simple terms, and this was Brooks' main argument. The problem with any AI has always been the n+1 factor: even subsumption gets extremely complicated with each layer that's added. It does work very well on simpler machines, as Brooks and others have demonstrated. (I'm sure Dan, who really is our resident BBR expert, has done several iterations.)

Well, if you're hand-coding something you consider to be a behavior into a robot, it's just a cheat to call it behavior-based robotics. I've always termed these "actions" -- for lack of a better word -- and not behaviors. A robot that always steers to the right in order to follow a wall is really just performing a simple action that is not a behavior. We only think it's a wall-following behavior if A) there's a wall there and B) there's nothing else to impede the robot and keep it from following the wall and C) we're already keyed into the phrase "wall-following behavior"!

Real BBR would entail emergent behaviors that are the result ("nexus" is the overused word these days) of combining two or more simple behaviors into more complex ones. A cockroach, which demonstrates wall following, may not (and usually does not) display the behavior if it's dark. It turns out the "real" behavior is not following a wall, or scattering when the light turns on, but avoiding being caught.

-- Gordon

Reply to
Gordon McComb

Except, and correct me if I'm wrong, haven't they discovered micro structures (microtubules) within each neuron that act like biological quantum computers? So instead of being a simple input output circuit, the individual neurons have some significant processing power in their own right. The problem just got more complicated by a few orders of magnitude.

____________________________________________________ "I like to be organised. A place for everything. And everything all over the place."

Reply to
Tim Polmear

"Microtubuar consciousness" is the brainchild of the quantum-mechanic Roger Penrose, but I really don't think many in the neuroscience community, outside of possible Karl Pribram, take this idea very seriously. Pribram, BTW, was the brain-master of the hologram-in-the-brain hypothesis, which is somewhat debunked now, I think.

My feeling towards the genesis of Penrose's idea has always been ... "Well, I'm a quantum mechanic, and quantum weirdness underlies everything, therefore it must underlie consciousness too, so what can I find in the brain that looks quantum? Ahh, microtubules!" Gakk!

In short, there are MANY MANY different "theories" of brain operation, with many different [and small] groups of adherents, and only marginal evidence to support any of them, as yet.

As regards individual neurons, they are much more than simple input-output circuits, as they have large dendritic trees, which act like complex distributed analog computing elements, the output of which triggers action potentials [digital outputs] in the cell axons, which can propagate for many cm. We've had many discussions of this on google c.a.p. [comp.ai.philosophy]. Curt's use of the word "easy" above is probably overly optimistic.

Reply to
dan michaels

I'm not familiar with this discovery but I think I've seen mention of it. But yes, for a long time now, the more they study neurons, the more complex they get. That only makes it harder to try to understand what it's all doing.

I believe there are some fundamental simple ideas behind what the brain is doing that we don't fully understand yet. Just like there are simple ideas behind all complex machines. You can understand the basics of an airplane by playing with a simple rubber-band-powered balsa wood toy plane. Yet tear apart a modern jet fighter, and all you find is unlimited amounts of complexity in all the small parts. Seeing the big picture is very hard when you get lost looking at all the small parts.

Understanding the big picture is the trick to cracking the secrets of the brain. It's being approached on both the theoretical fronts of computer science and mathematics as well as the experimental front of neuroscience research. Together, they will at some point uncover a clear big-picture understanding of what the brain is doing. Once we have that, we will probably be able to re-implement the same ideas using digital technology. Most likely, we won't end up with anything that looks like a neuron when we are done, just like you can't find feathers or flapping wings on our flying machines. Feathers are extremely complex things, just like neurons are extremely complex things, and though a feather is an important part of a bird's ability to fly (take away their feathers and they can't fly), we don't find them in planes. Likewise, I don't really care how complex neurons are; it's unlikely we are going to find anything like them in our intelligent robots.

Reply to
Curt Welch

I've been debating these sorts of AI ideas with dan for years on c.a.p. I'm well known for my belief that AI will be simple or easy once we understand the correct big picture, or the correct approach, or the correct algorithm. I also tend to be more optimistic than most about how long it will take to uncover this "simplicity". I made a 10 year bet back in the 70's which I lost, but I've recently made another 10 year bet with the same old friend, which will end in 2015. I don't intend to lose this time. :)

Dan (as well as most other people) doesn't share my vision of simplicity. They see the brain, and AI, as complex machines solving complex problems (as far as I can tell). I believe the theory behind what it's doing is fairly simple, and the complexity only comes from its size, and the implementation details which do make us what we are. I believe creating intelligent machines will, in time, be easy. Duplicating all the nuances of human behavior and human personality, however, will always be very complex and time consuming.

I think AI is like trying to understand the orbits of the planets. On the surface, it looks too complex to understand. Only after a lot of careful documentation, collection of data, and study could the true simplicity be uncovered and expressed by Kepler's three laws of planetary motion. And then later, simplified even more by Newton's laws of gravity and motion.

What started out as something only the Gods could understand, translated to something as simple as F = G m1 m2 / r^2 to explain the force of gravity and F = ma to explain all motion from force. From these simple concepts, the motions of everything in the sky can be explained.

I believe machine intelligence will turn out to be just as simple at the core. Others accuse me of greedy reductionism. Time will tell who's right.

Reply to
Curt Welch

Oops, sorry for coming in late and replying to a reply. But as someone with a master's in neuroscience, and a practicing software engineer with a great deal of interest in neuron emulation [1], I can tell you that that microtubule-quantum-computer story is complete and utter bunk. Roger Penrose is looking to find God in the microtubules, that's all. He is a brilliant mathematician, but he is NOT a neuroscientist and should quit trying to play one on TV.

Real neuroscientists can simulate neurons to pretty much any desired degree of accuracy, even predicting their outputs to a given set of inputs. They do this with compartmental [2] or kernel-based models, and quantum mechanics has nothing to do with it.
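For a flavor of the crude end of that modeling spectrum, here is a minimal single-compartment leaky integrate-and-fire sketch in C. It is far simpler than the compartmental models mentioned above, and every constant in it is just an illustrative number, but it shows the basic integrate, fire, and reset idea with no quantum mechanics anywhere:

/* Minimal single-compartment leaky integrate-and-fire neuron.
   Illustrative only: real compartmental models track many coupled
   compartments and ion channel types; all constants here are made up. */
#include <stdio.h>

int main(void)
{
    double v        = -65.0;  /* membrane potential, mV      */
    double v_rest   = -65.0;  /* resting potential, mV       */
    double v_thresh = -50.0;  /* spike threshold, mV         */
    double v_reset  = -70.0;  /* post-spike reset, mV        */
    double tau_m    = 20.0;   /* membrane time constant, ms  */
    double r_m      = 10.0;   /* membrane resistance, MOhm   */
    double dt       = 0.1;    /* integration step, ms        */
    double i_inj    = 2.0;    /* injected current, nA        */

    for (int step = 0; step < 1000; ++step) {        /* simulate 100 ms */
        /* dV/dt = (-(V - Vrest) + R*I) / tau, Euler integration */
        v += dt * (-(v - v_rest) + r_m * i_inj) / tau_m;
        if (v >= v_thresh) {
            printf("spike at t = %.1f ms\n", step * dt);
            v = v_reset;                              /* fire and reset */
        }
    }
    return 0;
}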

Cheers,

- Joe

[1]
formatting link
formatting link
Reply to
Joe Strout

And I suspect that is where this thread should be. My impression is that comp.robotics.misc is about hardware and _practical_ control systems (brains) for robots.

-- JC

Reply to
JGCASEY

Yes, I know.

I am taking this stance deliberately to make a point. And I think the point is interesting enough to spend some effort upon. So I'm hoping you, or someone, reading will pick up my drift and acknowledge it, and then we can see if there is something deep and important in it. I think there is. But of course, this is chasing an intuition, and intuitions often come in stages and pieces. I don't have a final opinion, yet. I think I'm close. The big problem in knowing if I'm there or not has to do with naming what I've thought about that has no name yet. Even harder is how to pass intuitions on. How to communicate something not yet fully grasped, not yet fully named, so others can explore the thought with you.

So, being as un-ethereal as I can, I start again. I think behavior-based robotics has made much of creating behaviors. (To wit: the field is named behavior-based robotics, and not e.g. subsumed staged layered responses.) In focusing on behaviors, they've missed that the intelligence lies not at all in the behaviors, but in the sequencing of behaviors. (Music makes a wonderful analogy for this and I will return to it later.)

All behaviors labeled by them "behaviors" are treated as equal. Some behaviors are dirt simple - set and forget. In Jones's terms, these are servo behaviors. Some of what they call behaviors are complex - having many subcomponents, usually spaced out in time. In Jones's terms, these are ballistic behaviors. The mistake has been to call all of these "behaviors". They have distinctly different characters which can be identified and quantified, and thereby separated into different groupings. Further, the subcomponents of the complex behaviors are not really components, but identifiable simple behaviors in themselves.

So yes, since first starting this thread, I guess I have come to have a very different concept of what constitutes a robot "behavior". (At least I'm struggling with a notion change.) I am now holding a premise that any static application of an output value to an output device is a behavior. The simplest output values are constants. The slightly more complex output values are variable, but they are the output of simple computations or algorithms. What I don't hold is correctly classified as a behavior is any sequencing or cycling of different outputs from "non-linear" changes in the output algorithm. (I'm not hard on the non-linear part, but what I want to eliminate here are widely switched outputs due to booleans being sneaked in.)

For example, I think given jBot's two output motors, setting both at 60% forward is a behavior. It's a setting, the value is a constant, it is applied to the outputs, and it remains established for an undetermined future time.

The more complex example of a behavior might be setting the left at 60% forward and the right at 60% + wallgain * (setpnt - currentwalldistance) or such. It's a wall-following algorithm, a setting with a computed variable output, it is applied to the outputs (a responsive behavior, actually), and it remains established for an undetermined future time. This is the servo mechanism which Brooks and Jones show clear preference for. This is perhaps their ideal of what a behavior should be.
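To make that concrete, here is a minimal C sketch of such a servo behavior using the formula above. The sensor and motor routines are stand-in stubs and the numbers are invented, so this is an illustration rather than anyone's actual robot code:

/* Sketch of a servo-style wall-following "behavior": one static
   mapping from input to output, no sequencing, no mode switching.
   The sensor/motor routines and constants are hypothetical. */
#include <stdio.h>

#define BASE_SPEED 60.0   /* percent forward                   */
#define WALL_GAIN   2.0   /* proportional gain, percent per cm */
#define SETPNT     30.0   /* desired wall distance, cm         */

enum { LEFT, RIGHT };

static double read_wall_distance(void) { return 27.5; }   /* stub sensor */
static void   set_motor(int side, double pct)             /* stub driver */
{
    printf("%s motor: %.1f%%\n", side == LEFT ? "left" : "right", pct);
}

static void wall_follow_step(void)
{
    double dist = read_wall_distance();
    set_motor(LEFT,  BASE_SPEED);
    set_motor(RIGHT, BASE_SPEED + WALL_GAIN * (SETPNT - dist));
}

int main(void) { wall_follow_step(); return 0; }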

So I quite disagree that jBot has infinite behaviors. I think it has a cruise behavior, it has an edge-detect behavior, maybe an avoid behavior, etc. But I'm confident the behaviors you've programmed into it are limited, countable. Perhaps you also have other kinds of things which are (incorrectly?) called behaviors.

Reading Jones makes clear that a robot full of servo behaviors is not sufficient to make a useful robot. Another kind of response is needed, and that he calls the ballistic response. The escape response is the example.

In the escape response, a non-servo, purely timed back-up is done, then a rotation, and an exit. There can be some detail in the exact sequence. (Is there a stop mode to slow forward motion? Is there a pause before reversing the motors? Is there a stop, or pause, to end the back-up? Or are there acceleration ramps on the back-up, up and down? Is there a random element to the turn? Etc., etc.?) But detail aside, what is always consistent is that there is detail; there are subcomponents to this supposed "behavior".
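A minimal sketch of such a ballistic response, with made-up timings and stand-in motor and delay routines, might look like this:

/* Sketch of a ballistic (timed, open-loop) escape response: pause,
   back up, rotate, exit. Durations, speeds, and the motor/delay
   routines are illustrative stubs, not any particular robot's code. */
#include <stdio.h>

enum { LEFT, RIGHT };

static void set_motor(int side, double pct)
{
    printf("%s motor -> %.0f%%\n", side == LEFT ? "left" : "right", pct);
}
static void wait_ms(int ms) { printf("wait %d ms\n", ms); }

static void escape_response(void)
{
    /* Each line below is itself a simple "behavior" in the narrow
       sense argued for above; the sequencing is the interesting part. */
    set_motor(LEFT,   0); set_motor(RIGHT,   0); wait_ms(200);  /* pause   */
    set_motor(LEFT, -50); set_motor(RIGHT, -50); wait_ms(800);  /* back up */
    set_motor(LEFT,  50); set_motor(RIGHT, -50); wait_ms(600);  /* rotate  */
    set_motor(LEFT,  60); set_motor(RIGHT,  60);                /* exit    */
}

int main(void) { escape_response(); return 0; }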

Now let's return to the tone and the tune analogy. If a behavior is a tone, and a melody is a tune ... would it be wrong to call a tone a melody? Of course. You don't say, I press this piano key and get this melody. Then I play this key and get another melody. Then I press these 5 keys in a sequence and get a beautiful little tone. No. It is wrong to interchange and intermingle the idea of a "tone" (the sound of one key alone) and a "melody", also called a tune (the sound of several tones in timed sequence). (Notice a chord is another thing, which is very like the two we're discussing, but with a distinctively different meaning, again related to its timing component.)

We must be careful naming things in scientific endeavors, lest confusion set in and become its own block to progress. I'm saying a poor definition or use of the word behavior has caused just that. We've been using "behavior" as if it meant "piano sound". And then, without a definition for the subcomponents, or supercomponents, of behavior, also calling them behaviors, it has become impossible to sort out the difference between a behavior with or without intelligence.

So, my new use of "behavior" is to mean "any static application of an output value (constant or linear variable) to an output device". Behaviors are reactionary, responses, and hold no inherent intelligence. Anything that has a collective of time-sequenced behaviors is a "___blank___" yet-to-be-named. Emergent intelligence can come from "___blank___" yet-to-be-named. So a "time sequenced behavior" is a misnomer, because it is not a behavior, any more than a note is a tune (melody). Maybe we could give it a name. Maybe a "sequavior". I don't know. I'd like something better. Stranger namings have obviously happened in the past. Let me think on it some more. But we really can't think clearly about this problem until we come up with a non-ambiguous, non-overlapping set of definitions for what we're trying to talk about.

So to repeat my opening premise, "artificial intelligence will be found in the decisions that switch behaviors rather than the layers of behavior themselves". Perhaps the depth of my original meaning should become much clearer now. Behaviors have no intelligence. The sequencing of behaviors is the source of emergent intelligence. Now, aren't those radical declarations? If correct, isn't this potentially a new departure for Artificial Intelligence?

Randy

formatting link

Reply to
RMDumse

Yes. Apparently.

As you acknowledge concerning information theory, there really aren't 40 bits of information. There's less than one bit of information per read. I'm saying much less. So, there aren't trillions of inputs as suggested.

For example: Use as many processors as needed, say 40, and read a bump switch, all 40 at the same time. Compare the readings.

Are there really 40 bits of information there? or one bit of information copied/verified 40 times?

I am quite sure there is only 1 bit of information.

If there is more than 1 bit of information (as per chance there "might" be), it is not 40 bits of information about the switch state. Some other number of bits about the system and its wiring (failure) might appear, but not a drop more useful information about the state of the switch itself.

You yourself made the rather insightful and critical comment earlier that you have to consider the time content.

"But, for anything interesting, the machine has to solve problems in the temporal domain. Which simply means, the current outputs, must be calculated not from just the current inputs, but from past inputs as well."

But it is quite wrong in general to assume that the rate you sample something, like the bump switches, increases the information content in that input (if for no other reason than that they have a mechanical limit on how fast they can even change state), which was my objection. And really, I'm thinking only the state transitions of the bump switch are of interest, and nothing more about when they happen is gained by reading ever and ever more often. The mechanical details of the switch itself (whisker compliance, inertia, vibration modes, debounce, etc.) have much more to do with the timing of the state changes than the desired reading (did I bump something).

Again, my larger point was, our current robots seem to have very low-content inputs and limited-state outputs.

And where I was going, for the larger sake of the thread, is that we can probably map all the limited inputs to the limited outputs, and list all behaviors (or even potential behaviors we haven't used) (behaviors in the sense I described in my post to dpa today) as a combination or permutation chart. Given we know all possible behaviors of our robots (being a limited set), then we can see the intelligence is not in having behaviors, but in enabling the transitions between the behaviors.

Randy

formatting link

Reply to
RMDumse

Interesting. Then maybe I should make more effort to read them.

I think I've been generally mixing some of his papers with other papers which came out of the MIT media lab while he was connected with it and think of them all as being part of "his work".

Yes, that's the one mentioned. I should probably get that and read it.

Arkin and Murphy mean nothing to me.

Ah, but halfway down in responding I read the web page below, which mentioned Arkin's book, so now it means something to me. :)

I just spent some time reading a few web sites on behavior-based robotics and now I think I understand how the term is being used. Yes, I'm a big believer in the approach. It's basically what I've been looking at for the past 25 years. Though I didn't approach the problem to try to build better practical bots. My intent has always simply been to solve AI in general (and ultimately build a robot with human ability). I didn't realize the term had evolved in the robotics world, though I am familiar with a few of the projects that are being referred to.

Here's what seems to be a good overview of the idea if anyone else is like me and unaware of what it is and wants to get a quick introduction:

formatting link
It's clear people looking at this have come to some of the same conclusions I have (from the above site):

> If you are looking for a better way to hand-code behavior into a robot,

This is because, though I think the approach is required to produce human like intelligence, I think this format is unworkable for hand designing by humans for any problem above a fairly trivial level. Humans can deal nicely with simple systems where something like 10 behaviors are competing for control, but make it 10,000 and a human has no chance of hand-coding the priorities and the interactions between the competing parallel behaviors. And to get to human level intelligent behavior, you probably need 100 million behaviors or more competing with each other.

This is why I believe the only solution for advancing this approach beyond the trivial, is to stop hand specifying the behaviors (aka reactions), and replace it with a strong learning system.

The question was asked why behavior-based robotics seems to have stalled. I think this is the answer. Even though I believe the approach is the right one, it's just not workable for humans to hand-design these behavior systems for anything above fairly trivial limited-domain problems - like making a robot vacuum work correctly.

To learn how to balance thousands or millions of competing behaviors, the robot has to simply learn on its own through experience. Making that work well is going to be the secret to both making the approach go further for real world robotics problems, and for ultimately, creating human level intelligence.

Reply to
Curt Welch

Of course there is only one bit of information when you sample it once. You seem to have missed the point of my message.

That is the main point of my post. When we have to map input to output, we can not create a mapping function which maps one bit of input to the output just because we have a one-bit bump switch. We must instead read and record the activity of the inputs for some amount of time, and then produce a mapping function which takes the entire recorded history as its input.

Which is why I never said anything like that.

That's all fine and true. But, to produce "intelligent" behavior, we can't do it using only the last or current switch position to make an intelligent selection of what behavior to do next. The logic must be based on some history of what has been happening, even if that history is only the switch transitions and the order they happened in for the past 30 seconds.

For example, as was already mentioned, a simple robot that responds to only the current switch position, might get stuck in a corner, and bounce back and forth for a long time. But, if the robot is able to "remember" it's bounced back and forth 5 times in the past 10 seconds, that can trigger it to evoke a different behavior to escape the corner. But this means, the "trigger" that selects the correct current behavior, is not just based on the current switch setting (1 bit), but instead is based on something like the past 5 switch transitions (much more than 1 bit) of data.
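A minimal sketch of such a history-based trigger, with an invented clock source, window, and thresholds, might look like this:

/* Sketch: decide "stuck in a corner" from the recorded times of
   recent bump events, not from the current 1-bit reading alone.
   All names and numbers here are illustrative assumptions. */
#include <stdio.h>

#define HISTORY     8         /* remember the last 8 bump events    */
#define WINDOW_MS   10000     /* look back over the past 10 seconds */
#define STUCK_COUNT 5         /* this many bumps in window => stuck */

static unsigned long bump_times[HISTORY];
static int bump_head;

/* call on each off-to-on bump transition, with the current time */
static void record_bump(unsigned long now_ms)
{
    bump_times[bump_head] = now_ms;
    bump_head = (bump_head + 1) % HISTORY;
}

/* the trigger: have we bumped STUCK_COUNT times in the last window? */
static int stuck_in_corner(unsigned long now_ms)
{
    int recent = 0;
    for (int i = 0; i < HISTORY; ++i)
        if (bump_times[i] && now_ms - bump_times[i] < WINDOW_MS)
            ++recent;
    return recent >= STUCK_COUNT;
}

int main(void)
{
    for (unsigned long t = 1000; t <= 11000; t += 2000)  /* six bumps */
        record_bump(t);
    printf("stuck? %s\n", stuck_in_corner(11500) ? "yes" : "no");
    return 0;
}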

And again, the low content is not a limit, because even though it's low, you need to record a history of that low content in order to act more intelligent. And once you start recording a history, the amount of content the system is using to make each behavior selection decision, is no longer so low. And the longer it can record input content (aka the further back in time it can remember), the more intelligent it can act (the more information it has to base its decisions on). So the limit to the intelligence the bot will be able to demonstrate, has as much to do with how much memory it has for past events, as it does with the flow rate of the sensors.

The size of that chart will grow exponentially with the amount of sensory data you choose to record. And even though you start with simple 1 bit input sensors, if you record 100 transitions, you don't have a table with 2 states, you have a table with 2^100 states. So looking at the input states alone is not enough. You have to also look at how much memory the system has to record and make use of history. The complexity of the behavior will be a function of how much memory the system has, and how much it makes use of to create triggers for complex behaviors.

Ok, but again, that wording is silly.

Let's say I have a binary output that produces a constant string of 1s and 0s. We will ignore what it actually controls.

This type of device has just 2 low level behaviors. It can send a 1, or it can send a 0. That's all it can do. All other higher level behaviors are created by stringing together different combinations of 1s and 0s.

So, the total intelligence of this box is determined by how it makes the decision to send a 1, or a 0 each time.

But we could look at each two bits sent, and say it has 4 behaviors. It can send 00, 01, 10, or 11. Then we can say, as you said above, that the intelligence was not in the behaviors, but in the decision to transition to the next behavior. So, if it sends 0010, we would say that sending 00 was a behavior which was not intelligent, and 10 was a behavior which was not intelligent, but the decision to send 10 after 00 was the intelligent act.

See how stupid that is? Why is the transition from the first to the second bit, "not intelligent" but the transition from the 3rd to 4th is? It's just silly to talk like that.

All outputs (aka all output decisions) are how the intelligence of the machine is demonstrated externally.

Reply to
Curt Welch

Arkin, Murphy, Jones, Brooks, Braitenberg, Minsky, Papert (others I'll leave out for now). These are all names that are regularly referenced because they are authors of popular books on artificial intelligence that are available at most any library, or at least a university-level library. These are the people who have published the work progress so far, and this is where debate usually springs from.

I don't mean to come across as haughty or as a name-dropper, but I find that having a fairly consistent frame of reference is helpful for these types of discussions. We know we're all talking the same language, though we might not all agree what the words mean.

In any case, and forgive me if any of these are already known to you:

You don't really need to buy the Brooks book. Just download his papers. Robin Murphy's book is an introduction -- kind of like AI 101 -- and you'll probably breeze through it. If you can't find it at the library buy a used copy on Amazon. Or I might be able to find my copy and I'm happy to send it to you if you take care of the shipping. Ron Arkin's book is probably the seminal work used by colleges and universities to teach AI basics. Randy has mentioned it a few times. Valentino Braitenberg's Vehicles book needs no introduction, as it is constantly cited in just about anything related to robotics. MIT heavy-hitters Marvin Minsky and Seymour Papert both have "mind opening" texts that I found very enlightening. Minsky's "The Society of Mind" is a must read, IMO, if you're interested in AI, even if you don't agree with the good doctor's ideas.

-- Gordon

Reply to
Gordon McComb

Yes, of course.

If you limit the definition of "behavior" to some low level set of output actions on which all the action of the bot must be based due to hardware limitations, then of course, all the "intelligence" is in the sequencing of the behaviors. Where else could it be?

I would not say so. All you are saying is that the intelligence is defined by the behavior of the machine (in the sequence of outputs it creates). Some people think of "intelligence" as the internal thought process of humans, and they mostly ignore the outputs this thought process produces. Most of what is known as symbolic AI was based on this type of idea. But at the same time, there has always been a very strong component of AI which is behavior based. Turing's very famous 1950 paper that defined the Turing test starts with the line, "I PROPOSE to consider the question, 'Can machines think?'".

He then goes on to say that this is the wrong question, and that instead, it should be replaced with the "The Imitation Game" (which later became known as the Turing test), which replaces the idea of testing for "thinking" with the idea of testing to see if the machine is creating the correct behavior.

This was Alan Turing's attempt to get people to focus on behavior alone and to ignore things like "thinking".

Like Turing, much of AI has been focused on behavior since the beginning. Behavior is always about the machine producing the correct sequence of outputs at the correct time. It's really been the foundation of AI since it started.

For behavior based robotics, the strength has always been with the ability of the robot to produce complex behaviors that were not specifically expected by the creator. The programmer simply codes various low level behaviors, and some system for selecting them, and then the bot, while interacting with the environment, ends up producing an interesting, useful, but unexpected (by the programmer) string of behaviors. We say the new higher level behavior "emerges" from the low level behaviors that the programmer explicitly built into the machine.

repeating what you wrote above:

I don't see it that way. To me, all behaviors are intelligent. Some are just more so than others. It's just that simple behaviors alone, triggered by simple triggers, don't look very intelligent. A light that comes on in response to pushing a button doesn't look very intelligent. But combine that logic with a billion or so exceptions, and it will start to look very intelligent to us.

We see intelligence in complex behavior that seems to have a purpose. But there's no way to define where the line from simple and dumb crosses over and becomes "intelligent". It's just a matter of degree.

Reply to
Curt Welch

Yes, they certainly are. :)

Yeah, I know a lot is available, but I don't really like reading papers on-line (though I do it a lot anyway), and if I'm going to print them all out, I'd rather just buy a book.

I got a computer science degree in 1980 and haven't tried to keep up with the literature since then. In the past few years Dan and others from c.a.p. have gotten me to read all sorts of stuff that has helped get me back up to speed (which makes it far easier to communicate as you said), but there's still so much to catch up on. And now you have given me more. :)

I read it a year or two ago. Minsky drops in to c.a.p. now and again and I've communicated with him in email. So I know a lot about his work. I went to one of his talks about 25 years ago as well.

Yeah, it's a good book. I agree with a lot of the basic concepts. Minsky however has always been trying to attack the AI problem from a level higher than I think it should be attacked. Well, actually, he tried a behavior approach to AI early in his career (the Snark, a neural controlled virtual rat running a maze made of tubes and motors, part of his PhD thesis in 1951) and decided that the approach could never answer some of the more complex issues of human intelligence, and he seems to have spent the rest of his life looking in other directions. I think he had the right approach at the beginning and shouldn't have given up on it so quickly. :)
Reply to
Curt Welch

Ok, fair enough, I'm game.

The key phrase there to me is "the behaviors you've programmed into it are limited."

Seems like the argument has shifted from what is possible with the hardware to what I personally have accomplished with it, which seems beside the point. I freely confess that jBot and all my creations are flawed and limited by my smallness of imagination.

Again, that was not your premise, as I understand it.

Interesting.

I have also found that the decision to change states is the most problematic.

The most recent example of this for jBot is the need to switch between target acquisition vs. perimeter following to make it across the SMU campus, or navigate Fair Park. I, as a human watching the robot, can easily tell when it should be in one mode or the other. It's not so obvious how the robot can make the same decision.

The current solution is to switch to perimeter following when the waypoint target is directly behind the robot, and switch back when it is directly in front. This works very well most of the time. It usually switches into perimeter-following mode to go around buildings, as it also did recently to work its way out of a parking garage. But it also does the state change inappropriately sometimes, and in those cases usually makes the navigation worse.
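In sketch form, the rule amounts to something like the following. The bearing convention, the 170/10 degree tolerances, and the function names are all just illustrative, not the actual jBot source:

/* Rough sketch of the mode-switch rule described above: drop into
   perimeter following when the waypoint is (nearly) directly behind,
   return to target acquisition when it is (nearly) directly ahead.
   Bearing convention, tolerances, and names are illustrative. */
#include <math.h>
#include <stdio.h>

enum mode { TARGET_ACQUISITION, PERIMETER_FOLLOWING };

/* bearing to waypoint relative to robot heading, -180..+180 degrees */
static enum mode update_mode(enum mode current, double bearing_deg)
{
    double b = fabs(bearing_deg);
    if (current == TARGET_ACQUISITION && b > 170.0)
        return PERIMETER_FOLLOWING;   /* target directly behind    */
    if (current == PERIMETER_FOLLOWING && b < 10.0)
        return TARGET_ACQUISITION;    /* target directly in front  */
    return current;                   /* otherwise stay in mode    */
}

int main(void)
{
    enum mode m = TARGET_ACQUISITION;
    m = update_mode(m, 175.0);   /* building in the way: switch        */
    m = update_mode(m,  90.0);   /* still working around it: no change */
    m = update_mode(m,   5.0);   /* target ahead again: switch back    */
    printf("final mode: %d\n", m);
    return 0;
}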

(I noticed in re-reading the above that I used the word "mode" to describe target acquisition and perimeter following behaviors. Is that the word you are searching for?)

So I have arrived at a somewhat different conclusion based on the observation that making correct decisions about when to change states requires lots of intelligence, which I think is the same thing you are saying.

Intelligence that our robots don't have right now, and probably won't have for a long time.

This comes full circle to our discussion last Spring, and the "heretical" observation from my own robot building experience:

I have found, as a practical matter, that reducing the number of states results in more robust/intelligent behavior. Not the other way around.

Reducing the number of required states means reducing the really smart decisions that the robot is required to make in order to solve a particular problem, and that results in more robust and intelligent behavior, because our robots really aren't very smart and are not very good at making those sorts of decisions.

I know this seems counter-intuitive, but this has been my experience: reduce the required states. Let me give three examples to illustrate.

Example I.

Consider the aforementioned bump-and-run robot problem of getting stuck in a corner. Handling this situation requires two distinct phases.

First, the robot must recognize that it is stuck, perhaps by counting bumper presses occurring in a limited time window, or recording a "history" of left-right-left-right bumper presses, or watching the wheel odometry and observing that we're not moving very far from the same location. I've done all three of these at one time or another.

Second, once the robot "knows" that it is trapped, it must have some recovery behavior, like rotating 180 degrees at the next bumper press, or always turning left (or right) no matter which side the bumper press is on, and so forth.

The problems arise, of course, when the robot does this inappropriately, either 1) failing to change states when it is necessary, or just as bad, 2) changing states when it is not appropriate, even detrimental.

The result is that state changes intended to make the robot more capable of solving some particular problem, end up degrading the robot behavior in other circumstances.

Now let's change the operation of the robot's normal bump-and-run behavior such that we add a tiny amount of randomness to each maneuver, so that each turn it makes is not exactly the same size -- one might be 10 degrees, the next is 12, the next is 9, and so on.

This has essentially no effect on the normal bump-and-run behavior -- we can still navigate a narrow hall -- but this small amount of noise in the response prevents the robot from ever getting stuck in a corner, or any other similar geometrical symmetry (i.e., wandering around the room in the same pattern over and over).
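A minimal sketch of that jittered bump response, with turn sizes, noise range, and the turn routine all invented for illustration rather than taken from any actual robot code:

/* Sketch of the jittered bump-and-run turn: same nominal response as
   before, plus a few degrees of noise so no two turns are exactly
   alike and geometric traps (corners) can't repeat forever. */
#include <stdio.h>
#include <stdlib.h>

static void turn_degrees(double deg) { printf("turn %.0f deg\n", deg); }

static void bump_response(int bumped_on_left)
{
    double nominal = 30.0;                 /* nominal turn size */
    double jitter  = (rand() % 11) - 5;    /* -5 .. +5 degrees  */
    double turn    = nominal + jitter;

    /* turn away from the bumped side; the jitter breaks the symmetry */
    turn_degrees(bumped_on_left ? -turn : turn);
}

int main(void)
{
    for (int i = 0; i < 5; ++i)   /* five left bumps, five slightly different turns */
        bump_response(1);
    return 0;
}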

Because the normal bump-and-run state is more robust, the robot neither needs to be able to detect the "I'm caught in a corner" condition, nor does it need to have a special behavior to recover from that condition.

Consequently, it no longer is capable of making that decision incorrectly and executing those behaviors inappropriately, which causes problems that would otherwise not occur.

Thus the robot behavior is made more capable by REDUCING the number of states required to solve the problem.

Example II.

The second example is that of dealing with curbs for an outdoor robot, such as my 6-wheel robot, jBot (which can deal with curbs but which cannot, alas, wag its tail ;)

A robot that cannot climb over a curb must treat a curb as an obstacle. That means it must be able to, much like the bump-and-run example above, 1) sense the presence of the curb, and 2) have some meaningful way of dealing with the curb. Again, problems arise when the curb is mis-identified, either by not changing behavior in the presence of the curb, or by incorrectly changing behavior in its absence.

In the case of jBot and other similar platforms, the robot is easily able to climb over curbs. This means that the robot needs neither to be able to accurately detect the presence or absence of curbing, nor does it need to have a special behavior for dealing with it.

Because the robot does not need to change states to deal with curbs as special problems, it is not able to make that decision "incorrectly." Once again, the robot's behavior becomes more robust by REDUCING the number of states required to solve the problem, in this case, mechanically.

More required states mean increased likelihood of being in the wrong state, at the wrong time, the very question that Randy originally posed, and having to be REALLY SMART about when to change states.

Example III.

The first example was a software solution to reducing required states, and the second was hardware. The third is a combination of both, and deals with the control algorithm for a differentially steered robot.

One advantage of most differentially steered platforms is zero turning radius: the ability to turn in place. That is a great simplification for navigation as compared to the back-and-forth required for a normal Ackerman-steered vehicle caught in a dead-end alley, which even we humans sometimes have trouble maneuvering in our cars.

How is that decision made? How does the robot decide when to enter the "turn in place" state? How does an Ackerman steered robot make that decision?

On my differentially steered robots, jBot included, the motor commands are not for a left motor and right motor, but rather for a velocity vector and a rotation vector. In simple terms, the velocity vector is a voltage added to both motors, and the rotation vector is a voltage added to one motor and subtracted from the other. The velocity vector causes the robot to drive forward and backward at various speeds, and the rotation vector causes it to turn left and right at various rates.
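In sketch form, the mixing is just a couple of lines. The motor routine, the voltage units, and the example values below are illustrative, not jBot's actual code:

/* Sketch of the velocity/rotation mixing described above: one term is
   added to both motors, the other is added to one and subtracted from
   the other. When velocity ramps to zero the wheels counter-rotate and
   the robot spins in place, with no separate "rotate in place" state. */
#include <stdio.h>

enum { LEFT, RIGHT };

static void set_motor_volts(int side, double v)
{
    printf("%s motor: %+.2f V\n", side == LEFT ? "left" : "right", v);
}

static void drive(double velocity, double rotation)
{
    set_motor_volts(LEFT,  velocity + rotation);
    set_motor_volts(RIGHT, velocity - rotation);
}

int main(void)
{
    drive(2.0, 0.5);   /* wide arc                                   */
    drive(0.5, 0.5);   /* tighter arc: inside wheel reaches zero     */
    drive(0.0, 0.5);   /* velocity gone: wheels counter-rotate, spin */
    return 0;
}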

Now picture this robot driving in a large arc, perhaps to avoid an obstacle, with a positive velocity vector of some value and a positive rotation vector of some value. And picture the robot slowing down, that is, reducing the velocity vector, but not the rotation vector, as it drives. The result is that the turn becomes tighter and tighter, until the velocity vector finally becomes 0, and the robot is then rotating around its center, like an ice skater tightening her turn until she is spinning in place.

Somewhere in the course of that maneuver, the inside wheels went slower and slower and finally went through zero and into reverse, and began to speed up in reverse. When the velocity vector reaches zero, the inside and outside wheels are going the same speed, but in opposite directions, and the robot is rotating around its center.

The key point is this: the robot never "made the decision" to reverse the inside wheels and spin in place. There is not a separate "rotate in place" state that the robot must decide when to enter and exit. Rather, it is all a natural consequence of the relationship between the two control vectors, all in the same state.

As a practical matter, when the jBot enters a dead-end alley, for example, the large number of very close sonar detections force the robot velocity vector to ramp toward zero until, by the time the robot reaches the end of the alley it is 0 and can no longer make any forward progress at all, but can only rotate in place. But when it does so, the sonar gets longer detections as the robot rotates toward the entrance, and so the velocity vector begins to ramp back up to full speed from zero.

The net effect is that the robot drove into the dead-end alley, slowed down to a stop as it approached the end, rotated in place until it was facing the exit, and then accelerated back to full speed as it left.

But there were no state changes, no decision to rotate in place, no decision to stop rotating. Hence, once again, the robot was prevented from making the "wrong" decision about when to change states and rotate. And therefore its behavior was, I would argue, made correspondingly more robust and apparently more "intelligent" by reducing the number of states required to deal with the problem.

There are several videos on the jBot web-page illustrating this behavior:

See particularly the videos of the robot navigating around the nooks and crannies of the TI building in some of the later videos.

The requirement for some number of different states is, I think, inevitable. But my experience is that it's useful to work to reduce the number of required states as much as possible.

The route to robust behavior seems to lie not only in being smarter about when to change states, but also in reducing the necessity of changing states in the first place.

As I said, a heretical suggestion.

best regards, dpa

Reply to
dpa

Thanks, Curt, for your always insightful advice. But I have an answer to your suggestion that "I think he had the right approach at the beginning and shouldn't have given up on it so quickly." I did return to work on 'low-level' systems at various times, but only became more convinced that higher-level, more reflective systems were mainly what distinguish us from other animals. I was disappointed when Newell and Simon 'moved down' from high-level strategies and embraced non-reflective rule-based systems in the 1970s, and when in the 1980s the story-understanding researchers (and most of robotics community) moved from semantic and conceptual analysis to low-level situation-action rules and statistical models. So, by the mid 1980s, there was virtually no research on higher-level thinking in the entire AI community. (Except, perhaps, for a few like R.J. Solomonoff, who studied the properties of very powerful (but almost uncomputable) higher-level descriptions.)

When we organized a conference on "commonsense knowledge and reasoning" three years ago, we searched the world and found only a couple dozen theorists working on those areas -- as compared to on the order of 100,000 people working on rules, statistical and other numerical learning networks, and the like. My question is, why do so many people decide to work in the very most popular fields, where very little is discovered from each year to the next? It seems strange that they do not recognize that those approaches have got stuck on a large and almost flat local peak. Can anyone name 20 important discoveries therein in the past 20 years?

Yes, there has been superficial progress. But Deep Blue learned nothing much about games that was not known in the 1970s -- except (as everyone knew) that a million times faster machine could look ahead about 4 more plies. And the DARPA road-running project showed that (again, as everyone knew) combining different sensors can lead to substantially better results. What else?

It seems to me that -- unless one is sure of having ideas as original and productive as those of Hinton or Sejnowski -- it would be intellectual suicide to commit oneself to those popular areas. Whereas, it seems clear to me that future progress will be mainly in the area of reflective, meaning-based, knowledge-using systems. And there are still only a handful of workers in those areas!

I suggest that readers of this group take a look at "The Emotion Machine". The full text, more or less, is on my home page. (The book will be published in November, with a lot of small changes and corrections, and a couple of small newer theories -- but the web version has the most important high-level ideas that I've had since "The Society of Mind" twenty years ago.)

However, the two books are almost completely different: SoM is generally 'bottom-up' while TEM is top-down.

Reply to
minsky

Of course. I guess it's a generalised functional analogue of brain operation that will ultimately be of practical use. For most robotics applications humanlike capabilities are extraordinary overkill. But I'd love to see a robot with the capacity of, say, an ant. That *would* be useful. 'Antbot,' I would say, 'tidy my house.'

____________________________________________________ "I like to be organised. A place for everything. And everything all over the place."

Reply to
Tim Polmear

Reminds me of a book I read as a teenager,

A.R. Luria on "The Functional Organization of the Brain"

formatting link
"A skilled movement is really a kinetic melody ... "

-- JC

Reply to
JGCASEY
