What is Subsumption?

dpa wrote:


The best example of experience in this matter is Brooks himself. Just read what he wrote on the living machines overview page, if nothing else.
Hi Dan.
Your reluctance to invoke your own personal experience when making your case is puzzling, and not particularly persuasive.
On the other hand, I have read Brooks' current work -- interesting that you assume otherwise. I think I've probably read everything Brooks has ever published. However, I will go back and re-read it this evening in light of your comments.
You might, in light of my own comments, reflect on what personal experience you may have to the contrary. I'm still interested in hearing about that, if it does exist.
best, dpa
dan michaels wrote:

dpa wrote:

I am but a grasshopper, Brooks is the guru. I can learn much from what he wrote, both 20 years ago, and *especially* today. I don't need to spend 20 years repeating his work, to see where the limitations are. That's the nice thing about being a human instead of a Sphex wasp.

Hi Dan,
dan michaels wrote:

Aren't we all.
Ok, I re-read the living-machines URL and a small lightbulb lit above my head and I exclaimed something along the lines of "Ah ha!" I see now the distinction that has not been clearly made -- at least by me.
You wrote (a while back):

I believe you are thinking of a "complex task" as something done by a human, like taking a calculus test. Beethoven composing the 9th Symphony. That sort of thing. And you somehow have arrived at the conclusion that I think, for some inexplicable reason, that sort of thing can be done with behavior-based robots: subsumption.
I am a practical robot builder, or so I like to believe, and when I think of a "complex task" in the context of robotics I think of the aforementioned honeybee, operating all six of its legs in sequence to crawl and gather, operating its wings to maintain stability and direction in gusting winds (ask a helicopter pilot if that is a "trivial task"), navigating by a sophisticated process of visual recognition and pattern matching that is remarkably robust and still poorly understood, and so forth. These are hard problems, not "trivial insect-level AI." And these are the problems that this generation of robot builders is challenged to solve. Robots taking calculus tests will have to wait.
For the record, I do not believe a modern hominid brain can be modeled or explained by subsumption. Likewise, I don't believe subsumption can model a reptilian brain. Behavior-based robotics may be able to model an insect brain; I don't know. No one does. No one, Brooks and Co. included, has found the limits of what is possible with subsumption.
So that's what I'm looking for. And that is why I ask, what tasks have you attempted and found behavior-based robotics wanting? It is a much more practical question than an abstract and ethereal discussion of AI. Is it true that all behavior-based robotics is good for is "stupid little robots" that bump into walls in hobby robot contests? Just how complex a task can be accomplished with this approach?
So, it seems you are talking about subsumption not being able to do everything that a human can do, with which I have no disagreement. But I am trying to determine just what those limits are, and I start from the observation, from "personal experience," that subsumption is much more capable than the "conventional wisdom" on this list would tend to suggest.
So, whenever anyone holds forth that "no useful or complex tasks can be accomplished with subsumption," my natural and honest question is, "What have you tried?" That is, where have you found the limit to be?
Hence my interest in hearing of the experience of actual robot builders.
best regards, dpa
dpa wrote:

...
Hi dpa, If I may say so, the question you are asking has a tricky side to it that I bet you're not hearing yourself ask. Let me explain why.
As I hear it, you are asking, "What have you done (or tried) with Subsumption that Subsumption itself made impossible to do?" (Notice the twisted implication that the programmer does something and Subsumption does something, when in actuality only the programmer is an agent of action, of doing or trying; the paradigm is always inactive.) When you ask a question with a null answer set, you shouldn't be surprised when the (null) answers seem evasive.
Until now, no one could answer your question and know with any certainty whether a problem they couldn't solve with subsumption was due to the programmer's ability or to the paradigm restricting that ability. How can one know whether a problem not solved in Subsumption, but solved some other way, is one that can't be solved in Subsumption at all? Any answer given could be dismissed with, "Well, you just don't know how to program in Subsumption."
But I say "until now" because I have a growing confidence that something has been found in this discussion which answers something like your question. So I will try to answer the question: what have I tried to do in subsumption, not been able to solve, and have a reasonable basis to believe cannot be solved in the spirit of Subsumption, according to the principles demonstrated in Brooks' Subsumption model and examples, without ungainly and inelegant modifications?
The problem to be considered has to do with gaits on my EH3 walker. For context, see Cambrian Intelligence, Ch 2, Pg 34. "The final piece of the puzzle is to add a single ARSM which sequences walking by sending trigger messages in some appropriate pattern to each of the six up leg trigger machines. We have used two versions of this machine, both of which complete a gait cycle once every 2.4 seconds. One machine produces the well known alternating tripod... The other produces the standard back to front ripple gait... Other gaits are possible by simple substitution of this machine."
Now notice exactly what that last sentence says, and what it does not say. It says, other gaits are possible by simple substitution. It does not say, other gaits are possible by simple subsumption. Was Genghis a single gait machine because they didn't do it, or because they couldn't do it (and be true to Subsumption)? I suspect the latter.
In my EH3 (Lynxmotion round hexapod with 3 dof in each leg), I have added about eight gaits. Stand is one with a single state and no transitions. Another is the single leg ripple with 6 basic states. It is the slowest of gaits. Another is the 2 leg ripple, with 3 basic states. Another is the tripod, which has two basic states.
Just for clarity, and following Brooks' style in describing Genghis (that behaviors should follow evolutionary development when possible), let us consider Stand as the lowest behavior, as he did, and the Tripod as the highest. So we have a stand-walk-trot-gallop set of locomotions. Taking Genghis as the standard by which to design, we would then make four separate machines, with Stand the lowest priority, Walk the next priority, and so on. We make them run according to the timing required for each. Stand has one state that doesn't change; Walk has 6 states continuously rolling, Trot has 3, and Gallop has 2.
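To make the setup concrete, here is a minimal Python sketch (my own construction, not Brooks' or the EH3's actual code) of the four gait sequencers as rolling finite-state machines under fixed-priority arbitration. The state counts follow the description above: Stand has 1 state, Walk 6, Trot 3, Gallop 2.

```python
class GaitMachine:
    """One gait sequencer: a state counter that rolls through its cycle."""

    def __init__(self, name, num_states):
        self.name = name
        self.num_states = num_states
        self.state = 0

    def step(self):
        """Advance to the next state in the rolling cycle."""
        self.state = (self.state + 1) % self.num_states
        return self.state


# Priority order, lowest to highest, mirroring stand/walk/trot/gallop.
stand  = GaitMachine("stand", 1)
walk   = GaitMachine("walk", 6)    # single-leg ripple
trot   = GaitMachine("trot", 3)    # 2-leg ripple
gallop = GaitMachine("gallop", 2)  # alternating tripod
machines = [stand, walk, trot, gallop]   # list index = priority


def active_machine(suppressed):
    """Fixed-priority arbitration: the highest machine not suppressed wins.
    Falls back to stand if everything is suppressed."""
    for m in reversed(machines):
        if m.name not in suppressed:
            return m
    return stand
```

Note that nothing here addresses the transition problem described next: each machine rolls on its own schedule, with no shared representation of where the others are in their cycles.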
Here is where Subsumption now fails. In order to make smooth transitions from one independent machine to another, say when the higher priority machine is about to release to a lower level machine, there are only certain points when this is possible smoothly, just as there are only certain points in a horse's stride where it can go from gallop to trot.
The higher priority machine can message the lower level machine, forcing its state into synchronization so that the higher level machine can let go of subsumption. This is a bit of a violation of Subsumption, because to force this state change, the higher priority machine must be heavily tied into the lower machine and must force off all states but the one it wants to allow, or the transition cannot be smooth.
Now consider it the other way around: a lower priority gait is to be subsumed by a higher one. Certainly the higher level gait can use its trigger to reset its own state number to a known value, but in order to mesh with the lower level machine, the higher must know the state of the machine about to be subsumed. This is a huge violation of Subsumption. It cannot be the responsibility of the lower level machine to tell the higher priority machine it is okay to subsume. The lower level machine cannot be allowed to message the higher level machine in any way (because that presupposes evolution will put in output message channels for behaviors that aren't even evolved yet, prior to their being allowed to evolve!). The higher level machine can only look at sensors, not at the outputs of lower level machines. So there is no way for the higher level behavior to know when to subsume the lower. It cannot have enough information to do so.
Notice there is no representation of which locomotive sequencing machine is active, either. How complex would Gallop have to be to know which of the three other machines will be taking over when Gallop releases subsumption, let alone the wiring to go smoothly from any one to any other in general?
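The transition constraint can be illustrated with a toy model. The leg numbering and swing-set assignments below are my own assumptions, not the EH3's actual phasing; the point is only that with plausible gait definitions there may be no pair of states where one machine can hand off to another without some leg being suddenly required to switch between swing and stance mid-cycle.

```python
# Legs numbered 0-5. Each gait state lists which legs are in the AIR.
# These assignments are hypothetical, chosen only for illustration.
TRIPOD  = [{0, 2, 4}, {1, 3, 5}]        # alternating tripod, 2 states
RIPPLE2 = [{0, 3}, {1, 4}, {2, 5}]      # 2-leg ripple, 3 states


def smooth_handoffs(out_states, in_states):
    """Enumerate handoff pairs (i, j) that are smooth under a simple rule:
    every leg airborne in the outgoing state must also be airborne in the
    incoming state, so no swinging leg is abruptly assumed to be planted."""
    return [(i, j)
            for i, air_out in enumerate(out_states)
            for j, air_in in enumerate(in_states)
            if air_out <= air_in]
```

With these definitions, neither direction between tripod and 2-leg ripple offers any smooth handoff point at all, which is why, in my experience, one machine ends up having to force the other's state.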
Hence my growing certainty that Subsumption cannot be reliably used in situations where there are multiple behaviors with multiple states. We can say with confidence that the subsumption model is inadequate to maintain control, given two machines with local (hidden) state information.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote: <snip>
Finally!!!!! Someone willing to answer the question! Someone who speaks from experience!
Thanks Randy. I need some time to digest (no pun intended) your thoughts here and determine if you have indeed found one of the boundaries of what behavior-based robotics can do (or if you're just not a very creative programmer...;>)
Lest you guys get too tied up in Brooks worship (i.e., subsumption can't do anything He didn't do), I would also have you contemplate the following note on the life of Ada Byron:
"Byron also saw potential in Babbage's machine that even the inventor himself never fully imagined. "
In this case, at least, the originator of the idea (Babbage, computers) did not see all the potential in his own ideas. That happens a lot. Might be happening here.
best, dpa
dpa wrote:

Oy, Brooks has had LEGIONS of doctoral students working on these ideas for 20 years. And thence to students of his students. If anyone has the proper perspective on the matter by now, it's got to be Brooks.

dan michaels wrote:

It isn't in what Brooks and students have done, it's what Brooks and students haven't done.
One of those classic cases of looking not at what has been done, but for what hasn't been done. When I read that Brooks liked state machines in the late 1990s I was excited, because I'd been hooked on the state machine paradigm since the early 1980s. Ever since, I have been looking at what he calls AFSMs, looking for the state part of the state machines.
My search for the FSM part of AFSM reminds me of the Wendy's commercial of ~1983, where the little old lady peers into the big (empty) bun and says, "Where's the Beef?"
Finally, in Jones, with his comments about servo behaviors versus ballistic behaviors, it became clear that the reason I didn't find many state machines with states in them is that they were being deliberately avoided, like some necessary but highly undesirable evil.
Then in our discussions, it became clear we'd never seen any Subsumption machine implementations with more than one ballistic behavior (i.e., a state machine with actual state information in it, rather than a purely reactive servo response with no historical state). The stage was set. What was missing was any implementation showing two or more such state machines with state, one subsuming the other.
So if anyone has an example of a (nontrivial) pair of true state machines implemented in a pure Subsumptive architecture, I'd love to hear about it. But if there is one, I think I can make predictions about how it was done, and there will be patches that violate the spirit of Subsumption.
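For readers who haven't seen Jones' distinction, here is a small Python sketch of the two kinds of behavior being contrasted. The terminology is Jones'; the specific behaviors, tick counts, and sensor names are invented for illustration.

```python
def servo_follow(light_left, light_right):
    """Servo behavior: a pure function of the current sensor readings.
    No history, no state; re-evaluated from scratch every cycle."""
    return light_left - light_right   # steer toward the brighter side


class BallisticEscape:
    """Ballistic behavior: once triggered, it commits to a sequence of
    internal states (BACK, then TURN) and runs it to completion,
    carrying history from one cycle to the next."""
    IDLE, BACK, TURN = range(3)

    def __init__(self):
        self.state = self.IDLE
        self.ticks = 0

    def update(self, bumped):
        if self.state == self.IDLE and bumped:
            self.state, self.ticks = self.BACK, 5      # back up 5 ticks
        elif self.state == self.BACK:
            self.ticks -= 1
            if self.ticks == 0:
                self.state, self.ticks = self.TURN, 3  # then turn 3 ticks
        elif self.state == self.TURN:
            self.ticks -= 1
            if self.ticks == 0:
                self.state = self.IDLE                 # sequence complete
        return self.state
```

Randy's point, as I read it, is that coordinating two machines like `BallisticEscape` (each with hidden state) under pure subsumption is exactly where the trouble starts.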
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

Yes, except I would phrase it as ... what Brooks and students weren't able to do. I think that's why Brooks says what he says today, and not what he was saying 20 years ago. Then, it was "the world is its own best representation"; today it is that something is "missing", etc.

My feeling, after reading Jones' latest book, is that the best way he sees to proceed with BBR is that the individual behaviors each be extremely simple, and that the overall behavior of the machine "emerge" out of these simple behaviors, rather than from some kind of overall strategy or use of complex multi-state behaviors [as we were discussing recently].
If you get into trouble, such as those repetitive canyoning loops, then you add "another" simple behavior on top to solve the "specific" dilemma - see page 132 ... "a better approach is to understand the interactions between pairs of behaviors that are frequently called in close combination with each other. ... we might avoid thrashing by building a separate behavior that specifically handles the [mutually-conflicting] case of homing and obstacle avoidance ..".
To me, what he's doing is mainly adding patches for specific problems, and for general real-world problems you would spend your time just adding patch after patch to fix individual problems as they pop up. I prefer the approach he rejects on pg 131, which is to "... employ a cycle detection behavior", which "provides the robot with a modicum of introspection". Basically, Minsky's B-Brain. For my money, Jones' patching and overall approach doesn't get you very far "up" the tree of intelligence, which is why Jones says the things he says on pg 238 regarding systems with 1000s of sensors ... "... might new organizing principles be necessary? ...". The answer is yes.
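The cycle-detection idea is simple enough to sketch. The following is my own toy version, not Jones' code; the window size, grid-cell abstraction, and revisit threshold are arbitrary choices for illustration.

```python
from collections import deque


class CycleDetector:
    """A 'modicum of introspection': watch the robot's recent coarse
    positions and fire when the same place keeps recurring, suggesting
    the lower behaviors are stuck in a canyoning loop."""

    def __init__(self, window=20, revisit_limit=3):
        self.history = deque(maxlen=window)   # recent grid cells visited
        self.revisit_limit = revisit_limit

    def update(self, cell):
        """cell: the robot's current (coarse) grid position, e.g. (x, y).
        Returns True when that cell has already appeared revisit_limit
        times in the recent window."""
        stuck = self.history.count(cell) >= self.revisit_limit
        self.history.append(cell)
        return stuck
```

A supervisory behavior could watch this flag and subsume the conflicting pair, which is the B-Brain flavor of solution rather than Jones' pairwise patching.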
RMDumse wrote:

My thoughts as well. This is the classic zero-sum game that no experienced debate team would take! It involves a negative proof, which some consider impossible; in any case it is seldom defensible. Coming from an academic background, I'm surprised dpa posed it this way. From a scientific standpoint, you don't want to be in the position of defending that something cannot be done. I see you've tried that, and I suppose it's now up to dpa to replicate your test set in order to prove or disprove your hypothesis.
-- Gordon
Howdy!
Great discussion on this glorious Fall day!
Gordon McComb wrote:

You and Randy have it exactly backwards.
I work with scientists, geologists and geophysicists, and they beat it into our heads on a regular basis. The analogy typically goes like this:
If a prospector returns from the mountains with only a single nugget of gold, he can claim with confidence and certainty, "There's gold in them thar hills." And no one can prove him wrong.
But, if that same prospector spends his life in the mountains and finds no gold, he still CANNOT say with certainty, "There's NO gold in them thar hills." All he can say is that he couldn't find any.
So, if a prospector comes down from the mountains and claims with great certitude, "There is no gold in those mountains," then any scientist can reasonably be expected to ask, "Where did you look?" That's how science works.
That's exactly the question I'm asking here. Where have you looked? And not in a theoretical or argumentative way. I am truly seeking what sort of things others have done to expand on Brooks' original intuition, which he called subsumption. You guys act as if the question itself is somehow illegitimate and unanswerable. Neither is true.
However, that's all irrelevant now, as Randy has risen to the occasion with some actual data. He has returned from the mountains with a nugget.
Actually, I have several other examples (nuggets) where designers have extended the basic subsumption paradigm to solve particular problems. They seem so far to cluster into a couple of different approaches; the "gait" problem that Randy describes is one of those, and basically has to do with how different layers in a behavior-based implementation can communicate state information to each other. (There it is, Randy!)
But more on this anon...
Again, I am not asking the list to prove a negative. I am asking for those who have experience to draw on to share it.
best dpa

Hardly. It seems 20 years' worth of "practical" research by Brooks and company isn't enough for you. And it's hardly theoretical. Brooks hasn't spent 20 years writing theorems about subsumption. Anything but. He's been building practical situated and embodied machines working in the real world.
And this is probably one of the primary reasons he had so much trouble with the theorem-oriented AI guys back in the old days. Not theoretical enough.
And did you REALLY mean to use the word "expand" above, or "extend" below? See my comment later.

Randy's example involves how a subsumption machine can select between gaits in different situations. There are at least 2 answers to this. Lower animals, like arthropods, can go from metachronal-wave to ripple-gait to alternating-tripod gait simply by using the same little sequence for each step and changing only the relative phasing between legs.
For more complex animals, like vertebrates that change from walk to trot to gallop, these are advanced far beyond the insect-level, and their brains are more than subsumption machines. They can perceive at long distances, like humans do, and predict ahead regards routes to follow and which way other animals are traveling.

Incredible. You actually used the word "extended" here. What have I been saying for weeks about this, citing Arkin book, etc. You stole my word :).

3-layer architectures, hybrid architectures; it's all in Arkin's book, as I mentioned several times.
Evening,
dan michaels wrote:

Exactly.
It does seem at this point that only Randy and Dave have understood the question I am posing. I think maybe Gordon understands but doesn't like the way I phrased it. Fair enough.
Onward! dpa
dpa wrote:

This is a telling statement. These others have arrived at a positive proof, but have done so by not using subsumption as Brooks envisioned it. Isn't that the whole point of this thread (or at least its subtext)? They've extended, tweaked, expanded, re-evaluated, invented anew. Some of these hybrid techniques are far less Brooksian than they are applications of state machines and other AI approaches that were discussed half a century or more ago. What's old is new again!
A while back Randy posted some messages about what he saw in subsumption, but I warned that rather than redefine what constitutes the term, it's better to leave whatever remains of it to the original Brooks definition, and expand upon it in uniquely individual ways. Then refer to these new concepts in those terms, rather than try to lump everything as "subsumption." Otherwise, confusion, havoc, and anarchy ensue!
IMO, the hybrid variants serve as implied proof that "X" could not be done with Brooksian-style subsumption, and I believe this was one of Dan's points: if others felt the need for non-classical Brooks, isn't it reasonable to assume "basic subsumption" (whatever you want to call it) was found or deemed inadequate for that particular job?
To continue with your analogy, I'd say you have prospectors coming back with copper ore or silver; still valuable in their own right but not gold nuggets.

"Where did you look?" doesn't refute the prospector's claims. It simply devalues the prospector's statement to the sematics you yourself said is the end of any useful discussion. *Of course* the prospector meant "I did not find gold." The prospector cannot make such a broad statement without monitoring the whole hill into a pile of wet slush. I may have missed someone making such an absolute statement in this thread, but all I recall is others indicating a belief that subsumption has inherent limitations, if for no other reason there are so many hybrid approaches taken in order to solve problems.
Direct personal experience is not required to arrive at every scientific hypothesis. Not speaking for Dan, but I believe he gave you what he felt was more cogent proof of his hypothesis than what his direct personal experience might have provided him.
Subsumption is a theory that has many methods of application. Surely, much of it has yet to be developed. But the unknowns don't make the open-ended "anything should be possible" attitude correct. Anything unproven in a theory is assumed to be false, from a scientific viewpoint, until it is proven otherwise.
I'm not really refuting what you've posted, nor agreeing with everything Randy and Dan have said. I just found the wording of your question, seeking negative proofs, something other than what you might have intended it to be.
-- Gordon
Hello Gordon,
Nice to have your measured response, as always.
you wrote:

I guess I have still not been clear. There is no "anything should be possible" attitude implied here in any way. It is exactly the opposite. There are limits. We all agree. What are they? Where have you, Gordon McComb, found the limits to be in your own robot building and problem solving? What problems have you confronted that could not be solved with a behavior-based architecture and required some other approach? And what was that other approach? Any input you have would be greatly appreciated.
What problems have you solved that required you to in some way modify whatever it is that you believe is the "standard subsumption" paradigm?

Einstein was asked by a reporter what he thought about a certain experiment that seemed to have proven the theory of relativity true. He replied that no amount of experiments could prove his theory true, but it would take only a single experiment to prove it false.
You are exactly 180 degrees wrong on this.
But this is really a useless digression away from the heart of the discussion. I would really like to hear of the experience of an old hand like yourself in the practical problem solving that you have done as relates to this thread.
best, dpa
dpa wrote:

To tell the truth, subsumption and even AI is not a current area of interest for me, not because I don't think it has merit, or that I haven't read all of Brooks' papers, or that I haven't tried some rudimentary experiments, or that I am not a fan of Rod Brooks. Quite the opposite on all of these. (I consider Brooks one of my heroes.)
Rather, my personal interest for the last 6+ years has been to come up with sturdy but inexpensive ways to build robots, so that the people who *do* want to experiment with these things can do so without large cash outlays or living with fragile constructions. Ask me about problems encountered in robotic designs and I could write a book!
This is why I haven't joined in this thread on a practical level, though I will say that "just for grins" I've actually used some subsumption techniques in a non-robotic application I wrote for Technicolor, one of my "day job" clients. It seemed to make sense for a Windows GUI environment that allowed a multitude of ways to interact with the program. I'm sure I could have done it any number of other ways, and, in truth, it probably doesn't work any better than if I had used a more traditional approach. In all, this one application does not make me an expert in the subject, so I'll leave it at that!

I think you might be confusing Einstein's humility with at least one variation of scientific philosophy. Einstein believed in his theory, but also knew that the theory would not "automagically" be considered true just because he published it. If all untested theories started out assumed to be true, doctoral students would have no need to defend their theses! I have a theory aliens live among us (not really, I'm just trying to make a point). The beauty is my theory can never be proven wrong unless the Earth blows up and there's no one left anyway. Does that mean my theory is guaranteed to be true by default? Of course not.

Actually, I feel one's approach to a scientific problem to be just about as important as the solution to the problem. Very much a glass half full/half empty thing, except with science, unless the glass is in a vacuum it's never really empty of anything! <g> I guess my point is that if we can agree on a methodology, we can more readily find common ground by which to compare notes.
-- Gordon
Hi Gordon,
you wrote:

And bless you for it good Sir.

In fact I think you have. More than once. Indeed I have one right here!

Interesting. I think this is perhaps the first non-robotics use of subsumption that I've encountered. Wonder how many others there are?
<snip>

I don't think so. This is something we discuss a lot, both at SMU and at the Los Alamos Laboratory. The short-hand for it among teachers of scientific philosophy is the "last man standing" paradigm. Multiple working hypotheses are constructed to explain any given phenomenon, and then we set about the process of trying to disprove each hypothesis, designing tests and experiments that can be used to eliminate competing explanations. Whatever hypothesis cannot be proven wrong (the "last man standing") is ASSUMED to be the truth until proven otherwise. Not the other way around. This is the way that most modern science is done, in my experience.

Reminds me of a joke. The optimist sees the glass as half full, where the pessimist sees it as half empty. But the engineer sees that the glass is just too big...

Agreed.
best, dpa
dpa wrote:

Ha! Finally something you're clearly wrong about ....
http://palpatine.chez-alice.fr/Dilbert/Dilbert.html
dan michaels wrote:

That's hilarious! Thanks. dpa
dpa wrote:

This would tend to be the opposite of Einstein's comments you related earlier (no amount of positive proof would prove relativity true), but I believe we're also talking about a matter of degrees. The more a theory is supported by scientific fact, the more true it becomes. But by setting all new theories to a value of False to begin with, by definition a theory MUST undergo scientific testing in order to be reevaluated as True. I make a distinction between "last man standing" and the "no man ever standing" of untested or unverifiable theories!
This is relevant to subsumption because while the theory has undergone independent evaluation, only a bottom tier or two has really been developed out. That part has been tested by the many individuals who have each reinvented the programming wheels to make subsumption happen. I believe there's much more, and the question is, can subsumption infinitely scale ("infinitely" to the limits of current-day computing power), and if not, where are its boundaries?
What we need now is a common test set, an apparatus if you will, that allows the next set of tiers to be scientifically quantified. Perhaps it's time for a specialized open-source multi-tasking subsumption RTOS, designed around a robust enough architecture that the hardware isn't the first limitation encountered. The RTOS might be designed bottom-up to follow pure Brooks subsumption. Statements of "it can't be done" can then be better evaluated because the test set is identical for everyone doing the research -- the same idea is used in science to eliminate as many variables as possible that can affect the outcome. Object libraries might be added to extend subsumption in order to experiment with the hybrids people like to use.
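Such a framework would be a large project, but its arbitration core is small. Here is a minimal sketch, assuming the classic fixed-priority scheme: every behavior runs each tick, each may bid a command, and the highest-priority bid subsumes the rest. The example layers (`cruise`, `avoid`) and sensor names are invented for illustration, not from any existing RTOS.

```python
from typing import Callable, Dict, List, Optional

# A behavior maps the current sensor snapshot to a command, or None
# to abstain this tick.
Behavior = Callable[[Dict[str, float]], Optional[str]]


def arbitrate(behaviors: List[Behavior], sensors: Dict[str, float]) -> str:
    """behaviors is ordered lowest to highest priority. The last behavior
    that bids wins: higher layers subsume lower ones."""
    command = "idle"
    for behave in behaviors:
        bid = behave(sensors)
        if bid is not None:
            command = bid
    return command


def cruise(sensors):
    """Lowest layer: always wants to drive forward."""
    return "forward"


def avoid(sensors):
    """Higher layer: bids only when an obstacle is close."""
    return "turn_left" if sensors.get("sonar_cm", 999.0) < 30 else None
```

This is not an RTOS, of course; the open questions in this thread (rolling timers per machine, forced state synchronization between layers, hybrid extensions) are exactly what such a framework would have to layer on top of a core like this.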
Such a task is beyond my coding abilities, but I wonder if this might be the next logical step, seeing how this area of AI has stalled over the past few years.
-- Gordon
