What is Subsumption?

If your code is mostly subsumption-esque, I doubt you are up to DARPA GC level yet.

The hard part of the DARPA GC was not the general route following; it was the long-range sensor problem. When driving 50 MPH in a car, you have to spot things like a pothole or a cliff edge (there were some multiple-hundred-foot drop-offs with no guard rails on dirt roads in the second challenge) many yards out (20 to 30, maybe?).

You can't map out potholes, barbed-wire fences, and rocks in the middle of the road for the next 30 yards in front of the car using subsumption alone unless you write thousands of subsumption behaviors (something I believe is just unreasonable for a human to do).

What you can do with subsumption is respond to simple sonar sensors, which makes the bot steer away from the big things. That way, you never have to deal with more than one big obstacle in front of you at once. The DARPA GC required the winners to build an active map of the road in front of them to a fairly high degree of detail and then plot the best course through all the obstacles on the map (and update that map and the course many times per second). It had to deal with potentially hundreds of obstacles in front of it (small rocks, potholes, other cars, fence posts, water puddles).
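
Roughly, that map-and-replan loop looks like this (a Python sketch with invented names and grid sizes, not what any GC team actually ran): fold range returns into a rolling grid of traversal costs ahead of the vehicle, then pick the cheapest corridor on every cycle.

```python
import numpy as np

CELL = 0.5            # grid resolution in meters (assumed)
ROWS, COLS = 60, 40   # roughly 30 m ahead by 20 m wide

def update_costmap(costmap, returns, decay=0.9):
    """Blend new range returns into a rolling cost grid.

    `returns` is an iterable of (x_ahead_m, y_left_m, hazard) tuples,
    hazard running 0.0 (clear) to 1.0 (pothole, rock, fence post).
    Old evidence is decayed so the map stays current at speed.
    """
    costmap *= decay
    for x, y, hazard in returns:
        r = int(x / CELL)
        c = int(y / CELL) + COLS // 2   # vehicle sits at column COLS // 2
        if 0 <= r < ROWS and 0 <= c < COLS:
            costmap[r, c] = max(costmap[r, c], hazard)
    return costmap

def best_corridor(costmap, width=6):
    """Lateral offset (in cells) of the cheapest corridor ahead."""
    scores = [costmap[:, c:c + width].sum() for c in range(COLS - width + 1)]
    c = int(np.argmin(scores))
    return (c + width // 2) - COLS // 2

# Replan loop, run many times per second:
costmap = np.zeros((ROWS, COLS))
# costmap = update_costmap(costmap, latest_sensor_returns)
# steering_offset = best_corridor(costmap)
```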

If you plotted a course for your bot that took it across a 6" deep pothole filled with water, would your bot drive into it and die? Can your small bot run at 10 MPH and still not hit people or drive off a cliff? That's the type of stuff the DARPA GC cars were trying to solve.

My belief, which I've stated here before, is that you can do anything with subsumption if you add some type of basic memory to give it temporal pattern recognition powers, but that complex problems become far too complex for a human to program using subsumption, so we tend to switch to other paradigms when the problem gets hard (e.g. too much data to deal with (video sensor) or behaviors which are too complex to understand (biped robot trying to catch a ball)).

Reply to
Curt Welch

You would be wrong.

Looks like we have the makings of a wager!

best, dpa

Reply to
dpa

Basically, the leaky integrator is an add-on which Jones uses to get around one of the problems inherent in the subsumption architecture: endlessly repeating the same behaviors over and over, like the wasp. It provides a simplistic form of memory, but hardly true intelligence. The more intelligent way to deal with this problem is to remember your behaviors over the past few minutes and "make a plan" for doing something more clever than endless repeats.
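
In rough Python, the idea is something like this (names and constants are mine, not Jones'):

```python
class LeakyIntegrator:
    """Crude short-term memory: charge on each trigger, leak over time.

    When the accumulated level crosses a threshold, an escape behavior
    can fire and break the endless-repeat loop described above.
    """
    def __init__(self, leak_rate=0.2, threshold=3.0):
        self.level = 0.0
        self.leak_rate = leak_rate
        self.threshold = threshold

    def update(self, triggered, dt):
        if triggered:
            self.level += 1.0                       # each repeat charges it
        self.level = max(0.0, self.level - self.leak_rate * dt)
        return self.level > self.threshold          # True -> try something else

# e.g. charge every time the avoid behavior fires; if it fires often
# enough in a short window, subsume it with an escape maneuver.
frustration = LeakyIntegrator()
# if frustration.update(avoid_fired, dt): start_escape()
```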

A more intelligent system will have memories working at several time scales: short-term, medium-term, and also long-term. Then it can deal effectively with the range of problems humans deal with. This is just not part of subsumption, but rather what I consider to be "extensions" to subsumption.

Ok, all well and good. Now send it out into the street. There is a difference between sensing students walking at 2 MPH, who will actively get out of the way of their own accord, and negotiating traffic moving at 25-40 MPH.

This is the entire point, which I've made many times, but you're not getting it. These guys understand the *limitations* of subsumption. I wrote weeks ago what Jones says on page 238 of his book: BBR works because there are so few sensors and the problems solved are relatively simple. When the number of sensors rises into the millions, he says, maybe new organizing principles will be necessary. Maybe some mixture of GOFAI and BBR. He understands the limitations.

Right, but you're making the wrong point again. I agree with Brooks' words 100%, but insects don't take calculus tests. They don't drive in city traffic. They don't plan 10 years into the future. Those are the limitations.

Reply to
dan michaels

Dan, seems like you are back-tracking here. First, you argue that no memory of any sort is "allowed" (whatever that means) in subsumptive systems. Which is of course just not true.

Now, when confronted with published examples of the same, it's "simplistic" and "hardly true intelligence." Perhaps your understanding of the ideas of Brooks and Jones is not as complete as you might wish.

You state this as a truth with great confidence and authority.

I'd be more convinced if you had an actual robot to demonstrate your ideas. The phrase they teach you in law school for that type of argument is "speaking from authority," and it is usually considered unconvincing. Now, "speaking from experience" is very different from "speaking from authority," and I'd be very interested in hearing that.

Deal with the problems humans deal with? I don't think we're even talking about the same subject here. None of our robots even begins to tackle the problems that honeybees "deal with," much less humans.

Semantics. The dead end of any useful discussion.

Just send money.

It's a matter of scale, which you seem to have trouble understanding. That's why you are apparently unable to see that there is no qualitative difference between complex insect behavior and robotic behavior. It's apparently why you believe that the DARPA task is more challenging than an insect's thousand-mile migration. You are confusing size with intelligence, a common mistake that Yoda warned us against.

Quite the contrary, you have an artificially limited view of the "trivial" nature of what is possible with subsumption, a view which, based on my experience, I do not share. Again, returning to the original question which began this lively discussion:

"What robot tasks have you attempted to solve with subsumption and were unable?"

a question you have still not answered. It is not sufficient to simply throw up a bunch of theoretical human tasks, or talk about hypothetical "millions of sensors" or insects taking calculus and similar irrelevant digressions.

I'm not asking you to speak from authority, but rather from experience.

Limitations that we are nowhere near reaching!

Dan, you seem so eager to move beyond the "limitations" of insect-level intelligence, which suggests to me that you don't really understand how sophisticated that intelligence is.

All I can do is reiterate:

Read up about the honey bee. "The Honey Bee" by Gould and Gould (Sci. Amer. Press) is a little dated but a good starting place. Or just google "honey bee navigation."

Enjoy your holiday feast, dpa

Reply to
dpa

We're just going around in circles here. You'll have to answer for yourself the questions I asked you. Why doesn't Brooks have robots running the DARPA challenge, sitting in calculus classes, and driving through city streets? And why are other people also not doing this? Why has Brooks' subsumption research stalled, why is he saying "something is missing" from AI in his books, and why has he now moved on to the "living machines" project?

formatting link

Reply to
dan michaels

Agreed. I ask for your personal experience and you refer me to abstract theoretical papers by someone else. My suspicion is that your opinions will change as they become more informed by actual experience. I know mine have.

Happy Thanksgiving, dpa

Reply to
dpa

The best example of experience in this matter is Brooks himself. Just read what he wrote on the living machines overview page, if nothing else.

Reply to
dan michaels

Hi Dan.

Your reticence to invoke your own personal experience when making your case is puzzling, and not particularly persuasive.

On the other hand, I have read Brooks' current work -- interesting that you assume otherwise. I think I've probably read everything Brooks has ever published. However, I will go back and re-read it this evening in light of your comments.

You might, in light of my own comments, reflect on what personal experience you may have to the contrary. I'm still interested in hearing about that, if it does exist.

best, dpa


Reply to
dpa

I am but a grasshopper, Brooks is the guru. I can learn much from what he wrote, both 20 years ago, and *especially* today. I don't need to spend 20 years repeating his work, to see where the limitations are. That's the nice thing about being a human instead of a Sphex wasp.

Reply to
dan michaels

Aren't we all.

Ok, I re-read the living-machines URL and a small lightbulb lit above my head and I exclaimed something along the lines of "Ah ha!" I see now the distinction that has not been clearly made -- at least by me.

You wrote (a while back):

I believe you are thinking of a "complex task" as something done by a human, like taking a calculus test. Beethoven composing the 9th Symphony. That sort of thing. And you have somehow arrived at the conclusion that, for some inexplicable reason, I think that sort of thing can be done with behavior-based robots: subsumption.

I am a practical robot builder, or so I like to believe, and when I think of a "complex task" in the context of robotics I think of the aforementioned honeybee, operating all six of its legs in sequence to crawl and gather, operating its wings to maintain stability and direction in gusting winds (ask a helicopter pilot if that is a "trivial task"), navigating by a sophisticated process of visual recognition and pattern matching that is remarkably robust and still poorly understood, and so forth. These are hard problems, not "trivial insect-level AI." And these are the problems that this generation of robot builders is challenged to solve. Robots taking calculus tests will have to wait.

For the record, I do not believe a modern hominid brain can be modeled or explained by subsumption. Likewise, I don't believe subsumption can model a reptilian brain. Behavior-based robotics may be able to model an insect brain; I don't know. No one does. No one, Brooks and Co. included, has found the limits of what is possible with subsumption.

So that's what I'm looking for. And that is why I ask, what tasks have you attempted and found behavior-based robotics wanting? It is a much more practical question than an abstract and ethereal discussion of AI. Is it true that all behavior-based robotics is good for is "stupid little robots" that bump into walls in hobby robot contests? Just how complex a task can be accomplished with this approach?

So, it seems you are talking about subsumption not being able to do everything that a human can do, with which I have no disagreement. But I am trying to determine just what those limits are, and I start from the observation, from "personal experience," that subsumption is much more capable than the "conventional wisdom" on this list would tend to suggest.

So, whenever anyone holds forth that "no useful or complex tasks can be accomplished with subsumption," my natural and honest question is, "What have you tried?" That is, where have you found the limit to be?

Hence my interest in hearing of the experience of actual robot builders.

best regards, dpa

Reply to
dpa

...

Hi dpa. If I may say so, the question you are asking has a tricky side to it that I'll bet you're not hearing yourself ask. Let me explain why.

As I hear it, you are asking, "What have you done (or tried) with Subsumption, that Subsumption made it so it couldn't be done?" (Notice the twisted implication that the programmer does something and Subsumption does something, when in actuality only the programmer is an agent of action, of doing or trying, and the paradigm is always inactive.) When you ask a question with a null answer set, you shouldn't be surprised when the (null) answers seem evasive.

Until now, no one could answer your question and know with any certainty whether the problem they couldn't solve with subsumption was because of the programmer's ability or because of the paradigm restricting the programmer's ability. How can one know whether a problem not solved in Subsumption, but solvable some other way, is one that can't be solved in Subsumption at all? Any answer given could be dismissed with "Well, you just don't know how to program in Subsumption."

But I say "until now" because I have a growing confidence something has been found in this discussion which answers something like your question. So I will try to answer the question, What have I tried to do in subsumption, and have not been able to solve, and have a reasonable basis to believe cannot be solved in the spirit of Subsumption according to the principles demonstrated in Brooks' Subsumption model and examples, without ungainly and inelegant modifications.

The problem to be considered has to do with gaits on my EH3 walker. For context, see Cambrian Intelligence, Ch 2, Pg 34. "The final piece of the puzzle is to add a single ARSM which sequences walking by sending trigger messages in some appropriate pattern to each of the six up leg trigger machines. We have used two versions of this machine, both of which complete a gait cycle once every 2.4 seconds. One machine produces the well known alternating tripod... The other produces the standard back to front ripple gait... Other gaits are possible by simple substitution of this machine."

Now notice exactly what that last sentence says, and what it does not say. It says other gaits are possible by simple substitution. It does not say other gaits are possible by simple subsumption. Was Genghis a single-gait machine because they didn't do it, or because they couldn't do it (and be true to Subsumption)? I suspect the latter.

In my EH3 (a Lynxmotion round hexapod with 3 DOF in each leg), I have added about eight gaits. Stand is one, with a single state and no transitions. Another is the single-leg ripple with 6 basic states; it is the slowest of gaits. Another is the 2-leg ripple, with 3 basic states. Another is the tripod, which has two basic states.

Just for clarity, and following Brooks' style in describing Genghis (that behaviors should follow evolutionary development when possible), let us consider Stand the lowest behavior, as he did, and the Tripod the highest. So we have a stand-walk-trot-gallop set of locomotions. Referring to Genghis as the standard by which to design, we would then make four separate machines, with Stand the lowest priority, Walk the next priority, and so on. We make them run according to the timing required for each. Stand has one state that doesn't change. Walk has 6 states continuously rolling, trot has 3, and gallop has 2.

Here is where Subsumption now fails. Consider making a smooth transition from one independent machine to another, say when the higher-priority machine is about to release control to a lower-level machine. There are only certain points when this is possible smoothly, just as there are only certain points in a horse's stride where it can go from gallop to trot.

The higher-priority machine can message the lower-level machine, forcing its state so that it is synchronized, and the higher-level machine can then let go of subsumption. This is a bit of a violation of Subsumption, because to force this state change the higher-priority machine must be heavily tied into the lower machine and force off all states but the one it wants to allow, or the transition cannot be smooth.

Now consider it the other way around. A lower-priority gait is to be subsumed by a higher one. Certainly the higher-level gait can use its trigger as a reset of its state number to a known value, but now, in order to mesh with the lower-level machine, the higher must know the state of the machine about to be subsumed. This is a huge violation of Subsumption. It cannot be the responsibility of the lower-level machine to tell the higher-priority machine it is okay to subsume. The lower-level machine cannot be allowed to message the higher-level machine in any way (because that presupposes evolution will put in output message channels for behaviors that aren't even evolved yet, prior to their being allowed to evolve!). The higher-level machine can only look at sensors, not the outputs of lower-level machines. So there is no way for the higher-level behavior to know when to subsume the lower. It cannot have enough information to do so.

Notice there is no representation of which locomotive sequencing machine is active, either. How complex would gallop have to be to know which of the three other machines will be taking over when gallop releases subsumption, let alone the wiring to smoothly go from any one to any other in general?
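
To make the shape of the problem concrete, here is a rough Python sketch (names and timings invented) of the four gait machines under the usual fixed-priority arbitration; the final comment marks where the cross-machine state knowledge would have to sneak in.

```python
class Gait:
    """One locomotion sequencing machine: a tiny FSM cycling its states."""
    def __init__(self, name, n_states, period):
        self.name = name
        self.n_states = n_states
        self.period = period             # seconds per full cycle
        self.state = 0

    def step(self):
        self.state = (self.state + 1) % self.n_states
        return (self.name, self.state)   # which leg-trigger pattern to emit

stand  = Gait("stand",  1, 0.0)
walk   = Gait("walk",   6, 2.4)    # single-leg ripple
trot   = Gait("trot",   3, 1.2)    # two-leg ripple
gallop = Gait("gallop", 2, 0.6)    # alternating tripod

GAITS = [stand, walk, trot, gallop]     # lowest to highest priority

def arbitrate(wants_to_run):
    """Fixed-priority arbitration: the highest gait whose trigger holds wins."""
    for gait in reversed(GAITS):
        if wants_to_run(gait):
            return gait
    return GAITS[0]

# The sticking point: a smooth hand-over from, say, gallop to trot is
# only possible at particular (gallop.state, trot.state) pairs, so the
# winning machine has to read or force the losing machine's internal
# state -- exactly the cross-layer coupling the architecture forbids.
```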

Hence my growing certainty that Subsumption cannot be reliably used in situations where there are multiple behaviors with multiple states. We can say with confidence that the subsumption model is inadequate to maintain control, given two machines with local (hidden) state information.

-- Randy M. Dumse

formatting link
Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

Finally!!!!! Someone willing to answer the question! Someone who speaks from experience!

Thanks Randy. I need some time to digest (no pun intended) your thoughts here and determine if you have indeed found one of the boundaries of what behavior-based robotics can do (or if you're just not a very creative programmer...;>)

Lest you guys get too tied up in Brooks worship (i.e., subsumption can't do anything He didn't do) I would also have you contemplate the following notes on the life of Ada Byron:

"Byron also saw potential in Babbage's machine that even the inventor himself never fully imagined. "

In this case, at least, the originator of the idea (Babbage, computers) did not see all the potential in his own ideas. That happens a lot. Might be happening here.

best, dpa

Reply to
dpa

My thoughts as well. This is the classic zero-sum game that no experienced debate team would take! It involves a negative proof, which is considered impossible by some; in any case, it is seldom defensible. Coming from an academic background, I'm surprised dpa posed it this way. From a scientific standpoint, you don't want to be in the position of defending the claim that something cannot be done. I see you've tried that, and I suppose it's now up to dpa to replicate your test set in order to prove or disprove your hypothesis.

-- Gordon

Reply to
Gordon McComb

Oy, Brooks has had LEGIONS of doctoral students working on these ideas for 20 years. And thence to students of his students. If anyone has the proper perspective on the matter by now, it's got to be Brooks.

Reply to
dan michaels

You are being somewhat inconsistent here. You try to have it both ways, but you cannot.

Again and again you've asked what subsumption "cannot do", and when people come up with examples you discount them, or just seem to change the subject back to how cool insects are, and what they "can" do. There are 2 different issues here, at least. And at least above you write ...

============= For the record, I do not believe a modern hominid brain can be modeled or explained by subsumption. =============

So, this is the sort of thing Curt and I have been saying all along, and giving examples for. Then, you flip again back to the following ...

This is again a different question from asking, as you have many times, what can subsumption "not" do? And I think Jones has answered that fairly explicitly in his book, and Brooks has done it indirectly, in the sense that he no longer seems to "actively" reject representation like he did 20 years ago in his manifestos. Today, he asks "what is missing?" instead, but he can't bring himself to say representational mechanisms.

Brooks specifically rejected the idea of representation, but it seems clear to everyone else in the AI community that use of some sort of representational mechanism is necessary to replicate human-level perception and symbolic reasoning. I keep asking: how can you identify and predict where a car 100 yards away will travel to in the next few seconds, or how can you take a calculus test, without using symbolic representational formulations and internal maps of some sort?

I cannot prove this to your satisfaction, because I haven't actually tried it; I don't have 50 years and 500 graduate students to do the work.

So, I'll say again, to me, subsumption makes a nice "base platform" to build upon, and add the sorts of symbolic and representational modules that can do the things non-representational systems do not do. I cannot prove this, but I'm not about to spend the next 20 years of my life trying to do what Brooks could not do, using his ideas.

NO ONE ever said this, that I recall. This is not the issue that has been under discussion.

Reply to
dan michaels

It isn't in what Brooks and students have done; it's in what Brooks and students haven't done.

One of those classic cases of looking not at what has been done, but looking for what hasn't been done. When I read that Brooks liked state machines in the late 1990's, I was excited, because I'd been hooked on the state machine paradigm since the early 1980's. Ever since, I have been looking at what he calls AFSMs, looking for the state part of the state machines.

My search for the FSM part of AFSM reminds me of the Wendy's commercial of ~1983, where the little old lady peers into the big (empty) bun and says, "Where's the Beef?"

Finally, in Jones, with his comments about servo behaviors versus ballistic behaviors, it became clear that the reason I didn't find many state machines with states in them is that they were deliberately being avoided, as if they were some necessary but highly undesirable evil.

Then, in our discussions, it became clear we'd never seen any Subsumption machine implementations that had more than one ballistic behavior (i.e. a state machine with actual state information in it, rather than a purely reactive servo response sans historical responses). The stage was set. What was missing was any implementation that showed two or more such state machines with state, and one subsuming the other.
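
For anyone who hasn't read Jones, the servo/ballistic distinction looks roughly like this (a Python sketch with invented names, not Jones' code):

```python
def follow_wall(sonar_left):
    """Servo behavior: output is a pure function of the present input."""
    error = sonar_left - 0.3            # hold 0.3 m off the wall
    return ("steer", -2.0 * error)

class Escape:
    """Ballistic behavior: back up, then spin -- a real FSM with state."""
    SEQUENCE = [("reverse", 1.0), ("spin_left", 0.5)]   # (action, seconds)

    def __init__(self):
        self.index = None               # None means not triggered
        self.t = 0.0

    def update(self, bumper_hit, dt):
        if self.index is None:
            if not bumper_hit:
                return None             # inactive; lower layers win
            self.index, self.t = 0, 0.0 # trigger: start the sequence
        action, duration = self.SEQUENCE[self.index]
        self.t += dt
        if self.t >= duration:
            self.index += 1
            self.t = 0.0
            if self.index >= len(self.SEQUENCE):
                self.index = None       # sequence finished, release control
        return (action, 1.0)
```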

So if anyone has an example of a (nontrivial) pair of true state machines implemented in a pure Subsumptive architecture, I'd love to hear about it. But if there is one, I think I can make predictions about how it was done, and there will be patches that violate the spirit of Subsumption.

-- Randy M. Dumse

formatting link
Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

Yes, except I would phrase it as ... what Brooks and students weren't able to do. I think that's why Brooks says what he says today, and not what he was saying 20 years ago. Then, it was "the world is its own best representation"; today it is "something is missing," etc.

My feeling, after reading Jones' latest book, is that the best way he sees to proceed with BBR is that the individual behaviors each be extremely simple, and that the overall behavior of the machine "emerge" out of these simple behaviors, rather than from some kind of overall strategy or use of complex multi-state behaviors [as we were discussing recently].

If you get into trouble, such as those repetitive canyoning loops, then you add "another" simple behavior on top to solve the "specific" dilemma - see page 132 ... "a better approach is to understand the interactions between pairs of behaviors that are frequently called in close combination with each other. ... we might avoid thrashing by building a separate behavior that specifically handles the [mutually-conflicting] case of homing and obstacle avoidance ..".

To me, what he's doing is mainly adding patches for specific problems, and for general real-world problems you would spend your time just adding patch after patch, in order to fix individual problems that pop up. I like better the approach he rejects on pg. 131, which is to "... employ a cycle detection behavior", which "provides the robot with a modicum of introspection". Basically, Minsky's B-Brain. For my money, Jones' patching and overall approach doesn't get you very far "up" the tree of intelligence, which is why Jones says the things he says on pg. 238, regarding systems with 1000s of sensors ... "... might new organizing principles be necessary? ...". The answer is yes.
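
Roughly, the patch approach looks like this (a Python sketch with invented names and thresholds, not Jones' actual implementation): one extra behavior, sitting above both parents, that only fires in the specific conflicting case.

```python
from dataclasses import dataclass

@dataclass
class World:
    min_range: float        # nearest sonar return, meters
    clear_side: float       # +1 if the right side is clearer, -1 if the left
    beacon_visible: bool
    beacon_bearing: float   # radians, + is left

def home(world):
    return ("turn_toward", world.beacon_bearing) if world.beacon_visible else None

def avoid(world):
    return ("turn_away", world.clear_side) if world.min_range < 0.2 else None

def home_with_obstacle(world):
    """The 'patch': fires only when homing and avoidance would fight,
    and picks a compromise heading so the two stop thrashing."""
    if world.beacon_visible and world.min_range < 0.2:
        return ("turn_toward", world.beacon_bearing + 0.5 * world.clear_side)
    return None

BEHAVIORS = [home, avoid, home_with_obstacle]   # low -> high priority

def arbitrate(world):
    command = ("cruise", 0.0)
    for behavior in BEHAVIORS:
        out = behavior(world)
        if out is not None:
            command = out               # highest active layer wins
    return command
```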

Reply to
dan michaels

Hi Randy,

I've only been following bits and pieces of this particular thread but I feel compelled to jump in.

So why can't the leg positions be treated just like sensor inputs? Particular patterns of leg positions can trigger transition points. It's not specifically an output from a lower-level machine; it's just a pattern of leg positions that happens to be a good place to start another gait from.

You've also only shown what I would call steady-state gaits, which are gaits that cause the robot to move forward. There are also going to be transitional gaits, which might not be used very often but which would allow smooth transitions from one gait to another.
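
Something like this rough sketch (invented names and leg patterns): the new gait watches the legs as just another sensor and only takes over when they pass through a compatible pose.

```python
TRIPOD_ENTRY = (1, 0, 1, 0, 1, 0)   # legs 0, 2, 4 lifted: a pose the
                                    # alternating-tripod gait can start from

def legs_match(leg_up, pattern, tolerance=0):
    """True when the observed leg-up/down vector matches the pattern."""
    return sum(a != b for a, b in zip(leg_up, pattern)) <= tolerance

def tripod_trigger(leg_up, want_tripod):
    # The tripod machine reads the legs like any other sensor and only
    # takes over when they happen to pass through a compatible pose --
    # no message from the currently running gait machine is needed.
    return want_tripod and legs_match(leg_up, TRIPOD_ENTRY)
```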

-- Dave Hylands Vancouver, BC, Canada

formatting link

Reply to
dhylands

Howdy!

Great discussion!

You and Randy have it exactly backwards.

I work with scientists, geologists and geophysicists, and they beat it into our heads on a regular basis. The analogy typically goes like this:

If a prospector returns from the mountains with only a single nugget of gold, he can claim with confidence and certainty, "There's gold in them thar hills." And no one can prove him wrong.

But, if that same prospector spends his life in the mountains and finds no gold, he still CANNOT say with certainty, "There's NO gold in them thar hills." All he can say is that he couldn't find any.

So, if a prospector comes down from the mountains and claims with great certitude, "There is no gold in those mountains," then any scientist can reasonably be expected to ask, "Where did you look?" That's how science works.

That's exactly the question I'm asking here. Where have you looked? And not in a theoretical or argumentative way. I am truly seeking what sort of things others have done to expand on Brooks' original intuition, which he called subsumption. You guys act as if the question itself is somehow illegitimate and unanswerable. Neither is true.

However, that's all irrelevant now, as Randy has risen to the occasion with some actual data. He has returned from the mountains with a nugget.

Actually, I have several other examples (nuggets) where designers have extended the basic subsumption paradigm to solve particular problems. They seem so far to cluster as a couple of different approaches; the "gait" problem that Randy describes is one of those, and basically has to do with how different layers in a behavior-based implementation can communicate state information to each other. (There it is, Randy!)

But more on this anon...

Again, I am not asking the list to prove a negative. I am asking for those who have experience to draw on to share it.

best dpa

Reply to
dpa

Hardly. It seems 20 years' worth of "practical" research by Brooks and company isn't enough for you. And it's hardly theoretical. Brooks hasn't spent 20 years writing theorems about subsumption. Anything but. He's been building practical situated and embodied machines, working in the real world.

And this is probably one of the primary reasons he had so much trouble with the theorem-oriented AI guys back in the old days. Not theoretical enough.

And did you REALLY mean to use the word "expand" above, or "extend" below? See my comment later.

Randy's example involves how a subsumption machine can select between gaits in different situations. There are at least 2 answers to this. In lower animals, like arthropods, they can go from metachronal-wave to ripple-gait to alternating-tripod gait simply by using the same little sequence for each step, and changing only the relative phasing between legs.
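
In rough code (illustrative numbers only), the gait is nothing more than a table of per-leg phase offsets applied to the same step cycle:

```python
import math

PHASES = {
    # fraction of the step cycle by which each of six legs is offset
    "metachronal_wave":   [0/6, 1/6, 2/6, 3/6, 4/6, 5/6],
    "ripple":             [0.0, 0.25, 0.5, 0.75, 0.0, 0.5],
    "alternating_tripod": [0.0, 0.5, 0.0, 0.5, 0.0, 0.5],
}

def leg_angle(gait, leg, t, period=1.0, swing=0.4):
    """Protraction angle of one leg at time t: same step, shifted phase."""
    phase = (t / period + PHASES[gait][leg]) % 1.0
    return swing * math.sin(2 * math.pi * phase)

# Switching gait changes only the offsets, not the per-leg step itself:
# angles = [leg_angle("alternating_tripod", leg, t) for leg in range(6)]
```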

For more complex animals, like vertebrates that change from walk to trot to gallop, these gaits are advanced far beyond the insect level, and their brains are more than subsumption machines. They can perceive at long distances, like humans do, and predict ahead regarding routes to follow and which way other animals are traveling.

Incredible. You actually used the word "extended" here. What have I been saying for weeks about this, citing Arkin book, etc. You stole my word :).

3-layer architectures, hybrid architectures -- it's all in Arkin's book, as I mentioned several times.

Reply to
dan michaels
