What is Subsumption?

John Nagle wrote:


Exactly.
dan michaels wrote:

Are you speaking from experience? If so, what task did you attempt?
Or have you just not been able to envision how it might be done?
Just curious. I hear this sentiment offered a lot, but never with any evidence. So I'd be very interested to hear of your actual experience, if that is the basis of your opinion.
best regards, dpa
dpa wrote:

Well, I did run a DARPA Grand Challenge team, Team Overbot.
                John Nagle
John Nagle wrote:

Hi John,
Yes, I followed the progress of team Overbot. I think one of our DPRG members, Ed Okerson, was also involved in some way?
If I understand your reply, you are saying the DARPA GC is the task that you attempted to solve with a subsumption-style architecture and were unable to.
Do you think the DARPA GC task requires more than "insect-level AI?" If so, why?
<http://www.monarchbutterflyusa.com/Migration.htm>
best, dpa
dpa wrote:

I can't speak for John, but if you look at what Thrun and the Stanford guys did, they built up several different "internal maps" of the road ahead, using different types of sensors, and overlaid these maps to "predict" the best course.
This falls under the caveat of "building internal representations", and is exactly the opposite of the sorts of things that Brooks was advocating in his many papers on subsumption and reactive architecture.
As I've mentioned a couple of times, when you are asking what cannot be done using subsumption techniques, you are really talking about "extensions" to subsumption, not the original thing. That's how I see it. To me, subsumption per se is just the lowest foundation level in a hierarchical chain on the way to building real intelligence.
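For anyone following along who hasn't seen that foundation level in code, here is a minimal sketch of fixed-priority arbitration between reactive behaviors; the sensor stubs, thresholds, and motor commands are all hypothetical, and real Brooks-style implementations (networks of augmented FSMs) are considerably richer than this.

```c
/* Minimal sketch of fixed-priority subsumption-style arbitration.
 * Sensor/actuator stubs (read_bumper, read_sonar_cm, set_motors) are
 * hypothetical placeholders, not a real robot API. */
#include <stdio.h>
#include <stdbool.h>

typedef struct { int left; int right; bool active; } Command;

/* Hypothetical sensor stubs, for illustration only. */
static bool read_bumper(void)   { return false; }
static int  read_sonar_cm(void) { return 120;  }

/* Highest priority: back away on contact. */
static Command escape(void) { Command c = { -100, -100, read_bumper() };        return c; }
/* Middle priority: veer away from near obstacles. */
static Command avoid(void)  { Command c = {   40,  100, read_sonar_cm() < 50 }; return c; }
/* Lowest priority: always wants to drive straight. */
static Command cruise(void) { Command c = {  100,  100, true };                 return c; }

static void set_motors(int left, int right) { printf("motors: L=%d R=%d\n", left, right); }

int main(void) {
    /* One pass of the arbitration loop; a real robot repeats this forever. */
    Command (*behaviors[])(void) = { escape, avoid, cruise };   /* priority order */
    for (size_t i = 0; i < sizeof behaviors / sizeof behaviors[0]; i++) {
        Command c = behaviors[i]();
        if (c.active) {              /* first (highest-priority) active behavior wins */
            set_motors(c.left, c.right);
            break;
        }
    }
    return 0;
}
```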
Hi Dan,
Thanks for the reply.
To be clear, are you replying to the question:

And as a teaser I included a link that describes the migration of monarch butterflies, which perform a much more complex task than the DARPA GC and are, after all, working with "insect-level AI."
It also might be useful to google "honeybee navigation" as an example of what "insect-level AI" can do.
Dan, can you address this question specifically? And John as well?
thanks dpa
dan michaels wrote:

dpa wrote:

Now, you're kind of mixing things. Is there an implicit assumption you're making that insects are simple Brooksian-style subsumption machines? I seriously doubt it. They are much more than that. They have memory, and their perceptual systems are somewhat more advanced than we might give them credit for. They probably have internal representations - sensory maps.
This regards the more complex insects and arachnids. The very simple bugs are probably more on a par with Brooksian subsumption ideas.
Regards the Darpa challenge, from what I've seen of Thrun's solution, it was a specific solution intended to solve a specific problem, and probably wouldn't work in more "general" situations. E.g., solving the next Darpa challenge, negotiating busy city streets, will take a lot of additional work on top of the existing system.
One thing both old and new DCs involve is predictive capabilities far beyond anything insects need. E.g., if you're driving down a road or busy city street at 30 MPH, you need to predict both the road ahead and the actions of other moving objects for the next few seconds into the future. Some predatory arthropods, like hunting spiders and probably dragonflies, can do some of this, but they don't do it 100 yards and 5-10 seconds into the future. The Darpa vehicles will need much more sophisticated perceptual systems, and internal maps and processing power for this. You wouldn't want a bee driving your car in rush-hour traffic, with your baby in the back seat.
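To make "predict a few seconds into the future" concrete, here is a minimal sketch of the simplest possible version, a constant-velocity extrapolation of a single hypothetical tracked object; the struct and numbers are invented for illustration and are nothing like what the actual GC entries ran.

```c
/* Minimal sketch of short-horizon prediction: constant-velocity extrapolation
 * of one hypothetical tracked object. Units: meters and meters/second. */
#include <stdio.h>

typedef struct { double x, y, vx, vy; } Track;

static Track predict(Track t, double dt_seconds) {
    t.x += t.vx * dt_seconds;          /* assume the object keeps its current velocity */
    t.y += t.vy * dt_seconds;
    return t;
}

int main(void) {
    Track car = { 0.0, 90.0, 0.0, -13.4 };   /* ~100 yards ahead, closing at ~30 MPH */
    for (double t = 1.0; t <= 5.0; t += 1.0) {
        Track p = predict(car, t);
        printf("t=%.0fs  predicted range: %.1f m\n", t, p.y);
    }
    return 0;
}
```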

dan michaels wrote:

Not at all. I was replying to your own post:

that posits that "no representation" and "insect-level AI" are associated, and consequently "too dumb" to do any complex task, to which you replied:

which led me to think that you agreed. Am I misunderstanding something here?
I asked for an example of a task that insect-level AI was "too dumb" to solve, and the example offered was the DARPA GC.
So I was asking now if you believe the DARPA GC requires more than insect-level AI, and why. If I understand your reply, you believe that it does.
I find that astounding. The monarch butterflies previously referenced fly a journey twice a year of thousands of miles between two exact geographical locations over many weeks. They land each night and forage for fuel. They avoid predators (I don't remember that part of the Grand Challenge!) and deal with headwinds and crosswinds that blow them hundreds of miles off course. And you wish to suggest that this is a LESS difficult autonomous task than the GC?
I suppose we'll have to agree to disagree here. But it seems to me that you are insufficiently impressed with the capabilities of our little insect friends.
best regards, dpa
dpa wrote:

Trapped. No way out. So, I'll have to put the blame on John for implying that Brooksian subsumption is essentially the same as insect neurology.
Actually, as I indicated last time, and to qualify my comments some more, Brooksian subsumption seems more on a par with the **simplest** of insects, not the more advanced forms. E.g., I have been reading up on jumping spider vision lately, and they have totally incredible visual systems. Their main eyes have vertical slit retinas that they "pan" back and forth horizontally inside the head using muscles to produce what amounts to a 2-D image. The lens is fixed to the carapace, but the retina moves behind the lens.
This gives them visual acuity which is on the order of 800x800, and touted to be about half that of humans, yet their entire brains have less than 100,000 neurons. Their visual systems have 3 successive processing centers in sequence, with retinotopic maps on each. Now, that's pretty incredible.

You must not have read my comment about more advanced versus simple insects from last time. Hunting spiders track down their prey in 3-dimensions, visually. Butterflies migrate. Most bugs aren't so clever.
Regards Thrun's Darpa solution, I already discussed that. It was a narrow solution for a narrow problem, and even so, they used the kind of internal representational maps that Brooks totally rejected.
Plus, they'll have to go one step better to plot a course through city traffic. Much better mapping, and much better analysis and predictive capabilities. IOW, another level up in perceptual capabilities. Do you really want a butterfly as your taxi driver? Bugs are those things you find smashed on your windshield in Texas.
dan michaels wrote:

If the taxi had the intelligence of a butterfly, it would be the most sophisticated robot on the planet. We're nowhere near to that level with robotics and AI.
So yes, I think that is where we're going. A robot as smart as a honeybee will be a very challenging thing, a truly "grand" challenge.

They do pretty well in their own environment.
And don't forget that humans end up smashed on windshields every day, too, lest we get too prideful about our own abilities...
On a related note, I was thinking about interrupting ballistic behaviors as you and Randy were discussing, and it brought to mind something I'd read by Douglas Hofstadter about the behavior of the Sphex wasp. Here's a brief description,
<http://en.wikipedia.org/wiki/Digger_wasp>
which says in part:

Before taking provisions into the nest, the Sphex first inspects the nest, leaving the prey outside. During the wasp's inspection of the nest an experimenter can move the prey a few inches away from the opening of the nest. When the Sphex emerges from the nest ready to drag in the prey, it finds the prey missing. The Sphex quickly locates the moved prey, but now its behavioral "program" has been reset. After dragging the prey back to the opening of the nest, once again the Sphex is compelled to inspect the nest, so the prey is again dropped and left outside during another stereotypical inspection of the nest. This iteration can be repeated again and again, with the Sphex never seeming to notice what is going on, never able to escape from its genetically-programmed[citation needed] sequence of behaviors. Douglas Hofstadter and Daniel Dennett have used this mechanistic behavior as an example of how seemingly thoughtful behavior can actually be quite mindless, the opposite of human behavioral flexibility that we experience as free will (or, as Hofstadter described it, antisphexishness). <snip>
It seems like the wasp has a series of "ballistic" behaviors which, when interrupted, reset to the beginning, as you (Randy?) suggested.
It occurred to me that the bumper behaviors on a couple of my robots do the same thing. An interrupted ballistic bumper behavior just resets from the beginning, and the ballistic pattern starts over.
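For concreteness, here is a minimal sketch of such a ballistic bumper behavior as a small state machine, with hypothetical sensor/actuator stubs and made-up timings; any new bump simply restarts the sequence from the top.

```c
/* Minimal sketch of a "ballistic" bumper behavior as a small state machine.
 * bumper_hit() and drive() are hypothetical stubs; the demo stub reports one
 * bump at the start. A new bump at any point restarts the whole sequence. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { IDLE, BACK_UP, TURN, RESUME } BumpState;

static bool bumper_hit(void) { static int n = 0; return (n++ == 0); }  /* demo: one bump */
static void drive(int left, int right) { printf("drive L=%d R=%d\n", left, right); }

int main(void) {
    BumpState state = IDLE;
    int ticks = 0;                              /* time spent in the current state */

    for (int cycle = 0; cycle < 12; cycle++) {
        if (bumper_hit()) { state = BACK_UP; ticks = 0; }   /* interrupt: reset sequence */
        switch (state) {
        case BACK_UP: drive(-80, -80); if (++ticks >= 5) { state = TURN;   ticks = 0; } break;
        case TURN:    drive(-60,  60); if (++ticks >= 3) { state = RESUME; ticks = 0; } break;
        case RESUME:  /* ballistic sequence done; fall through to default cruising */
        case IDLE:    drive(100, 100); break;
        }
    }
    return 0;
}
```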
So I think perhaps you're on to something here. Maybe this is a general principle for interrupted "ballistic" behaviors? As you have observed, what other response makes sense?
best, dpa
dpa wrote:

Your cabby for today is Mr. Bfly. But you still wouldn't want it driving the taxi you're sitting in, or the one that's coming from the other direction either. It needs to advance a bit beyond the B.Fly stage, before you're gonna get in the cab.

Exactly, and this isn't sitting in a class on calculus or driving in traffic, either.

Pound for pound, you find more philosophers involved in car accidents than common folk. I just made that up. Actually, physical grounding - as opposed to not paying attention - is a critical matter.

Before taking provisions into the nest, the Sphex first inspects the nest, leaving the prey outside. <snip>

What the wasp doesn't have is the kind of Minsky B-Brain I mentioned in the past, whose job it is to monitor the A-brain [execution module] and critically ascertain when it's going in repetitive loops, or other simple forms of pathological behavior.
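A minimal sketch of what such an overseer might look like, assuming a hypothetical stream of behavior IDs coming out of the executing A-brain; it does nothing smarter than flag the same behavior firing too many times in a row.

```c
/* Minimal sketch of a B-brain-style overseer watching a hypothetical stream of
 * behavior IDs from the A-brain and flagging simple repetitive loops. */
#include <stdio.h>

#define LOOP_LIMIT 3   /* how many identical repeats count as "stuck" */

static void overseer(const int *behavior_ids, int n) {
    int run = 1;
    for (int i = 1; i < n; i++) {
        run = (behavior_ids[i] == behavior_ids[i - 1]) ? run + 1 : 1;
        if (run >= LOOP_LIMIT)
            printf("overseer: behavior %d repeated %d times, intervene\n",
                   behavior_ids[i], run);
    }
}

int main(void) {
    /* e.g. the wasp re-running its nest-inspection behavior (id 7) over and over */
    int history[] = { 2, 7, 7, 7, 7, 1 };
    overseer(history, (int)(sizeof history / sizeof history[0]));
    return 0;
}
```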

If you were to analyze what you have "actually" implemented, as opposed to what you thought you were implementing, you might find it differs somewhat from what Brooks actually described.
Also, I was actually discussing more complex behaviors, e.g., where an individual behavior is set up as a "sequence" of timed states in an augmented FSM, and the sequence is interrupted in the middle, but it's a general problem that needs to be addressed to produce more intelligent behavior, through use of planning modules, for instance. Simple subsumption bots don't plan or predict, rather they react, by definition.
You need memory + overseers to deal with pathologic loops, and memory + internal representations, coupled with good perceptual systems, to produce predictions.
dan michaels wrote:

Yes, very much as the Sphex wasp is doing. I thought I was agreeing with you here. ;)

Not completely true. For example, a Kalman filter used to balance a two-wheeled robot, such as my "purely reactive" nBot robot, makes predictions as part of the sensing process. That's how a Kalman filter works. These distinctions are not as cut and dried as you seem to suggest, but more on this below:
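To make the Kalman point concrete, here is a minimal sketch of the predict/update cycle of a scalar tilt filter, with hypothetical gyro and accelerometer readings and made-up noise parameters; it is only an illustration of the idea that prediction happens inside the sensing loop, not nBot's actual code.

```c
/* Minimal sketch of a scalar Kalman filter for tilt: predict from a gyro rate,
 * then correct toward an accelerometer angle. Readings and noise parameters
 * are made up; this only illustrates that prediction is part of sensing. */
#include <stdio.h>

typedef struct { double angle; double p; } Kf;   /* estimate and its variance */

static void kf_predict(Kf *kf, double gyro_rate, double dt, double q) {
    kf->angle += gyro_rate * dt;   /* prediction: integrate the rate forward */
    kf->p     += q;                /* uncertainty grows while predicting */
}

static void kf_update(Kf *kf, double accel_angle, double r) {
    double k = kf->p / (kf->p + r);              /* Kalman gain */
    kf->angle += k * (accel_angle - kf->angle);  /* correct toward the measurement */
    kf->p     *= (1.0 - k);
}

int main(void) {
    Kf kf = { 0.0, 1.0 };
    double dt = 0.02, q = 0.001, r = 0.03;       /* invented noise parameters */
    for (int i = 0; i < 5; i++) {
        kf_predict(&kf, 0.5, dt, q);             /* 0.5 deg/s from the gyro */
        kf_update(&kf, 0.2, r);                  /* 0.2 deg from the accelerometer */
        printf("tilt estimate: %.4f deg (var %.4f)\n", kf.angle, kf.p);
    }
    return 0;
}
```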

Go back and re-read the references in Jones' "Robot Behavior" book in which he describes how to use a "leaky integrator" to control various robot functions.
I use these throughout my robot code. This is essentially an analog form of memory that contains information not only about what the robot is doing now, but also about what it has done in the past.
These are used in much the same way as the "B mind" you describe, to monitor other behaviors and switch modes when needed. Jones does not seem to think that this information and technique are outside of the definition of subsumption. What is your disagreement with Jones? Why do you believe that "leaky integrators" are part of the standard subsumption model for Jones, but not for you?
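For concreteness, here is a minimal sketch of a leaky integrator used this way, with made-up charge, leak, and threshold values: it charges while some condition holds, decays otherwise, and trips a mode switch when its level gets high enough.

```c
/* Minimal sketch of a leaky integrator as analog memory. It charges while a
 * "no progress" condition holds, leaks toward zero otherwise, and trips a
 * behavior-mode switch above a threshold. All constants are made up. */
#include <stdio.h>

static double leaky_integrate(double level, int condition, double charge, double leak) {
    level += condition ? charge : 0.0;   /* accumulate while the condition is true */
    level -= leak;                       /* constant leak toward zero */
    return (level < 0.0) ? 0.0 : level;
}

int main(void) {
    double stuck = 0.0;                  /* "how long have we been making no progress?" */
    int no_progress[] = { 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1 };

    for (int i = 0; i < 12; i++) {
        stuck = leaky_integrate(stuck, no_progress[i], 1.0, 0.25);
        if (stuck > 4.0)                 /* level high enough: switch to an escape mode */
            printf("cycle %d: integrator tripped (level %.2f), switch behavior mode\n",
                   i, stuck);
    }
    return 0;
}
```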
Perhaps more to the point, If you'll indulge me, I'd like to quote Brooks hisownself one more time, from "Cambrian Intelligence" (pp 64), where, most fundamentally, he concludes that:
"Internal world models which are complete representations of the external environment, besides being impossible to obtain, are not at all necessary for agents to act in a competent manner."
It does not seem that you believe this is true.
Now, at the risk of sounding pompous, I have a robot that I believe is capable of considerably more complex navigation tasks than the DARPA GC, is a purely reactive robot, including reactive waypoint navigation, and which is also considerably less intelligent than a honeybee. Hence my skepticism.
So all this is what led to my original query, which is, have you actually tried to accomplish a complex task with subsumption and failed, and that is the basis of your belief? If so, what task? If the GC is the task, then I simply disagree that it is not solvable with simple subsumption.
best, dpa
dpa wrote:

Yeah, the wasp seems to reset its behavior, rather than pick up again in the middle of the previous sequence. So, that part is good [and not part of Brooksian subs, as I see it]. The bad part is that it doesn't have something like a B-brain to keep from "mindlessly" repeating the same behavior again and again.

Well, I was thinking of more long-range prediction, like predicting where a car that is 100 yards ahead and traveling at 60 mph will go. That takes a lot more capability. E.g., just in order to recognize it's a car, and then where it'll be going.

I'll check this when I'm back home.

This is actually the point I have been trying to make for weeks. What is "competent manner," and what is the level of the task being executed? E.g., ask: what was the most complex task one of Brooks' bots ever did?

Of course, I do, but you need to put it into perspective of what was being accomplished.

Well, you should brag, but also ask whether it really is a purely reactive bot, and if it is, how far you can really take the technology. That's where our real disagreement is.
Alternately, you might ask how far Brooks actually got along the path towards creating truly intelligent bots since 1985 or so. 20 years. As we have noted in past threads, he seems to have stalled [I would say given up] and gone on to other things. Look at what he wrote in Flesh and Machines. He said "something is missing" regards creating true AI. That means subsumption isn't gonna get there.

Ok, do the next one. Driving at normal speed in heavy city traffic. Finding your way from one end of Dallas to the other. Reading the names off the store fronts as you go along. Noting that store X has a new display in the window. Going down to campus and taking a calculus test. Why aren't Brooks' bots doing these things? That's what you should be asking.
dan michaels wrote:

There is an appendix "C" in my copy of Jones "Robot Programming" titled "Frequently Used Functions" that covers the leaky integrator and also how to implement running averages, another form of "memory."
<snip>
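As an aside, the running average mentioned above is just as small in code; here is a minimal sketch of one common form, an exponential running average with a made-up smoothing factor (Jones' appendix version may differ in detail).

```c
/* Minimal sketch of an exponential running average smoothing a noisy sonar
 * reading; the smoothing factor is made up. */
#include <stdio.h>

static double running_average(double avg, double sample, double alpha) {
    return avg + alpha * (sample - avg);   /* each new sample nudges the average */
}

int main(void) {
    double sonar[] = { 120, 118, 40, 122, 119, 121 };   /* one spurious short echo */
    double avg = sonar[0];
    for (int i = 1; i < 6; i++) {
        avg = running_average(avg, sonar[i], 0.2);
        printf("raw=%3.0f  averaged=%.1f\n", sonar[i], avg);
    }
    return 0;
}
```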

I think that's probably do-able.
My current little robot can find its way from one end of the SMU campus to the other and back, avoiding students and such along the way, and across Fair Park in Dallas. So, just send money...

Now that's silly.
Why indeed? Why not sing, dance, play the clarinet, write the great American novel and bake apple pies like-a my momma? I think Brooks' and Jones' goals are much more modest. I know mine are.
Brooks, in "Cambrian Intelligence," while acknowledging mobility and navigation as subsets of more sophisticated robot functions, describes the inherent tasks of sensing, vision, navigation, and goal seeking that remain unique and vexing problems on the way to more general purpose robotics. He uses this biological analogy:

Hence the title of his book, "Cambrian Intelligence."
It seems to me that insect level intelligence is a _lofty_ goal for our robots, and one that we are a long way from achieving.
Read up about the honey bee. "The Honey Bee" by Gould and Gould (Sci. Amer. Press) is a little dated but a good starting place. Or just google "honey bee." Gould & Gould say it's the "most studied insect."
Enjoy your holiday feast, dpa

If your code is mostly subsumption-esque, I doubt you are up to DARPA GC level yet.
The hard part of the DARPA GC was not the general route following, but the long-range sensor problem. When driving 50 MPH in a car, you have to spot things like a pot hole or cliff edge (there were some multiple-hundred-foot drop-offs with no guard rails on dirt roads in the second challenge) many yards out (20 to 30 maybe?).
You can't map out pot holes and barb wire fences, and rocks in the middle of the road, for the next 30 yards in front of the car using subsumption alone unless you write thousands of subsumption behaviors (something I believe is just unreasonable for a human to do).
What you can do with subsumption is respond to simple sonar sensors which makes the bot steer away from the big things. That way, you never have to deal with more than one big obstacle in front of you at once. The DARPA GC required the winners to build an active map of the road in front of them to a fairly high degree of detail and then plot the best course through all the obstacles on the map (and update that map and the course many times per second). It had to deal with potentially 100's of obstacles in front of them (small rocks, pot holes, other cars, fence posts, water puddles).
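To sketch what "an active map plus a plotted course" might look like at toy scale, here is a minimal occupancy-grid example with an invented grid and cost rule; the real GC systems were of course enormously more elaborate, but the shape of the computation is the same: fill a map from sensors, score candidate corridors, steer toward the cheapest, and repeat on every update.

```c
/* Toy sketch of "active map plus plotted course": a small occupancy grid of
 * the road ahead and a pick of the clearest steering corridor. Grid contents
 * and the cost rule are invented for illustration. */
#include <stdio.h>

#define ROWS 6   /* range bins ahead of the vehicle */
#define COLS 7   /* candidate steering corridors, left to right */

int main(void) {
    /* 1 = obstacle reported by some sensor, 0 = believed clear */
    int grid[ROWS][COLS] = {
        {0,0,1,0,0,0,0},
        {0,0,1,1,0,0,0},
        {0,0,0,1,0,0,1},
        {1,0,0,0,0,1,1},
        {1,0,0,0,0,0,1},
        {0,0,0,0,0,0,0},
    };

    int best_col = -1, best_cost = ROWS + 1;
    for (int c = 0; c < COLS; c++) {
        int cost = 0;                    /* count obstacles along this corridor */
        for (int r = 0; r < ROWS; r++)
            cost += grid[r][c];
        if (cost < best_cost) { best_cost = cost; best_col = c; }
    }
    printf("steer toward corridor %d (cost %d); re-plan on every sensor update\n",
           best_col, best_cost);
    return 0;
}
```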
If you plotted a course for your bot that took it across a 6" deep pot hole filled with water, would your bot drive into it and die? Can your small bot run at 10 MPH and still not hit people or drive off a cliff? That's the type of stuff the DARPA GC cars were trying to solve.
My belief, which I've stated here before, is that you can do anything with subsumption if you add some type of basic memory to give it temporal pattern recognition powers, but that complex problems become far too complex for a human to program using subsumption, so we tend to switch to other paradigms when the problem gets hard (e.g. too much data to deal with (video sensor) or behaviors which are too complex to understand (biped robot trying to catch a ball)).
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Hi Curt,
you wrote:

You would be wrong.
Looks like we have the makings of a wager!
best, dpa
dpa wrote:

Basically, the leaky integrator is an add-on which Jones uses to get around one of the problems inherent in the subsumption architecture, such as endlessly repeating the same behaviors over and over, like the wasp. It provides a simplistic form of memory, but hardly true intelligence. The more intelligent way to deal with this problem is to remember your behaviors over the past few minutes, and "make a plan" for doing something more clever than endless repeats.
A more intelligent system will have memories working at several time periods. Short-term, medium term, and also long-term. Then, it can deal effectively with the range of problems humans deal with. This is just not part of subsumption, but what I consider to be "extensions" to subs.
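A minimal sketch of that multi-timescale idea: the same event signal fed into several decaying traces with invented decay rates, giving short-, medium-, and long-term memories of the same history.

```c
/* Minimal sketch of multi-timescale memory: one event signal feeding three
 * decaying traces (short-, medium-, long-term). Decay rates are invented. */
#include <stdio.h>

#define SCALES 3

int main(void) {
    double decay[SCALES] = { 0.50, 0.90, 0.99 };    /* short, medium, long term */
    double trace[SCALES] = { 0.0, 0.0, 0.0 };
    int event[] = { 1, 0, 0, 0, 1, 0, 0, 0, 0, 0 }; /* e.g. "bumped something" */

    for (int t = 0; t < 10; t++) {
        for (int s = 0; s < SCALES; s++)
            trace[s] = decay[s] * trace[s] + (double)event[t];
        printf("t=%d  short=%.2f  medium=%.2f  long=%.2f\n",
               t, trace[0], trace[1], trace[2]);
    }
    return 0;
}
```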

Ok, all well and good. Now send it out into the street. There is a difference between sensing students walking at 2 mph, who will actively get out of the way of their own accord, and negotiating traffic moving at 25-40 mph.

This is the entire point, which I've made many times, but you're not getting it. These guys understand the **limitations** of subsumption. I wrote weeks ago what Jones says on page 238 of his book: BBR works because there are so few sensors and the problems solved are relatively simple. When the number of sensors rises into the millions, he says maybe new organizing principles will be necessary. Maybe some mixture of GOFAI and BBR. He understands the limitations.

Right, but you're making the wrong point again. I agree with Brooks' words 100%, but insects don't take calculus tests. They don't drive in city traffic. They don't plan 10 years into the future. Those are the limitations.

dan michaels wrote:

Dan, seems like you are back-tracking here. First, you argue that no memory of any sort is "allowed" (whatever that means) in subsumptive systems. Which is of course just not true.
Now, when confronted with published examples of the same, it's "simplistic" and "hardly true intelligence." Perhaps your understanding of the ideas of Brooks and Jones is not as complete as you might wish.

You state this as a truth with great confidence and authority.
I'd be more convinced if you had an actual robot to demonstrate your ideas. The phrase they teach you in law school for that type of argument is "speaking from authority" and it is usually considered un-convincing. Now, "speaking from experience" is very different from "speaking from authority" and I'd be very interested in hearing that.

Deal with the problems humans deal with? I don't think we're even talking about the same subject here. None of our robots even begins to tackle the problems that honeybees "deal with," much less humans.

Semantics. The deadend of any useful discussion.

Just send money.
It's a matter of scale, which you seem to have trouble understanding. That's why you are apparently unable to see that there is no qualitative difference between complex insect behavior and robotic behavior. It's apparently why you believe that the DARPA task is more challenging than an insect's thousand-mile migration. You are confusing size with intelligence, a common mistake that Yoda warned us against.

Quite the contrary, you have an artificially limited view of the "trivial" nature of what is possible with subsumption, a view which, based on my experience, I do not share. Again, returning to the original question which began this lively discussion:
"What robot tasks have you attempted to solve with subsumption and were unable?"
a question you have still not answered. It is not sufficient to simply throw up a bunch of theoretical human tasks, or talk about hypothetical "millions of sensors" or insects taking calculus and similar irrelevant digressions.
I'm not asking you to speak from authority, but rather from experience.

Limitations that we are nowhere near accomplishing!
Dan, you seem so eager to move beyond the "limitations" of insect level intelligence and that suggests to me that you don't really understand how sophisticated that intelligence is.
All I can do is reiterate:
Read up about the honey bee. "The Honey Bee" by Gould and Gould, (Sci. Amer Press) is a little dated but a good starting place. Or just google "honey bee navigation."
Enjoy your holiday feast, dpa
We're just going around in circles here. You'll have to answer for yourself the questions I asked you. Why doesn't Brooks have robots running the DARPA challenge, sitting in calculus classes, and driving through city streets? And why are other people also not doing this? Why has Brooks' subsumption research stalled, why is he saying "something is missing" from AI in his books, and why has he now moved on to the "living machines" project?
http://www.ai.mit.edu/projects/living-machines/overview/overview.shtml
dan michaels wrote:

Agreed. I ask for your personal experience and you refer me to abstract theoretical papers by someone else. My suspicion is that your opinions will change as they become more informed by actual experience. I know mine have.
Happy Thanksgiving, dpa
