What is Subsumption?

dan michaels wrote:


Yes, I was thinking of an even worse one: say some mechanical part had to be positioned, like a gripper, so a can could be picked up. A higher-priority behavior interrupts and closes the gripper. Now, when control returns to the grasp behavior, the arm extends with the gripper shut, bluntly striking the can and tumbling it away, perhaps as a projectile across the room.
Ho ho! Now there is the answer for dpa. David has complained that Subsumption has been painted as insufficient for higher-level tasks. He asks in reply: just what task has Subsumption been shown insufficient to do? He has found that when he asks for specifics, he gets no answers to support the claim.
Now here is an answer. What isn't subsumption good for? Keeping state information current and applicable when operating with multiple ballistic behaviors, meaning those driven through a sequence of multiple states. Without exceptions built into the behavior itself, interruption by higher-priority behaviors may (but does not always) cause loss of accurate state information.
Subsumption is rather a poor multitasking model when it comes to higher level behaviors which keep internal state information.
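To make the failure concrete, here is a minimal sketch (my own illustration, not code from Brooks or Jones) of a ballistic grasp behavior in the usual switch/case style; all the state names are hypothetical:

/* A "ballistic" grasp behavior: one state per tick, no exception
 * handling.  If a higher-priority behavior subsumes it after
 * OPEN_GRIPPER and closes the gripper, control later returns here
 * at EXTEND_ARM and the arm strikes the can with the gripper shut. */
enum grasp_state { APPROACH, OPEN_GRIPPER, EXTEND_ARM, CLOSE_GRIPPER, DONE };

static enum grasp_state grasp = APPROACH;

void grasp_behavior(void)   /* runs only when nothing subsumes it */
{
    switch (grasp) {
    case APPROACH:      /* drive up to the can   */ grasp = OPEN_GRIPPER;  break;
    case OPEN_GRIPPER:  /* open the gripper      */ grasp = EXTEND_ARM;    break;
    case EXTEND_ARM:    /* swing the arm forward */ grasp = CLOSE_GRIPPER; break;
    case CLOSE_GRIPPER: /* grasp (or strike!)    */ grasp = DONE;          break;
    case DONE:          grasp = APPROACH;                                  break;
    }
}

Nothing in the state machine records that the world changed while it was suspended; the fix has to be an explicit exception, a reset, or a re-check of preconditions at each state.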
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

I looked through Jones' books and code examples, but nowhere can I find where he addresses this problem we're discussing. Most of his "behaviors" are trivial and simply do direct input-output [sensor-motor] calculations. Where he implements behaviors using FSMs which step through a sequence of states on a timed basis, there is no provision for terminating the sequence in the middle, once begun. They just continue to completion.
In the first book, he uses signal-flags to indicate when a behavior is triggered, and the only thing that seems to reset the flags is when the FSMs cycle through all their states to completion. In the 2nd book, he doesn't even have the signal-flags anymore, just a state++ inside each case statement, which advances until the final state sends the FSM back to state=0. Maybe I'm missing something.
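For reference, the first-book idiom might be sketched like this (a paraphrase from memory, not Jones' actual code; the names are made up): a trigger flag set by a sensor condition, cleared only when the FSM has cycled through every state:

static int escape_triggered = 0;   /* the "signal-flag" */
static int escape_state = 0;

void escape_check(int bumper_hit)
{
    if (bumper_hit)
        escape_triggered = 1;      /* set by the sensor condition */
}

void escape_fsm(void)              /* called on a timed basis */
{
    if (!escape_triggered)
        return;
    switch (escape_state) {
    case 0: /* back up    */ escape_state++; break;
    case 1: /* spin left  */ escape_state++; break;
    case 2: /* go forward */ escape_state++; break;
    default:
        escape_state = 0;
        escape_triggered = 0;      /* flag clears ONLY at completion */
        break;
    }
}

The second-book style drops the flag and keeps just the state++ chain; either way there is no path out of the sequence once it has begun.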
Regarding David/dpa, I imagine that, like the rest of us, he hasn't stuck precisely with the subsumption architecture the way Brooks and Jones describe it, but rather adds his own embellishments - based on his own programming style - to get around issues like the ones we're describing.
For my part, I think it's obvious where subsumption breaks down. It's really only good for simple behaviors that don't require true planning or symbolic operations. You will notice that Brooks really never "successfully" took it beyond insect-like behavior. At least as I see it.
dan michaels wrote:

This is a very profound statement, dan. Let me suggest where it leads me. Planning or symbolic operations, to be effective in robotics, need to be boiled down to their essence and put into action. In other words, to have a plan means you have information about what to do, and what not to do, in the future.
Likewise, but conversely, state information is history boiled down far enough to preserve context - information about what to do, and what not to do, in the future.
The two are mirror images of each other. History gives context. Plans give context. One is from the past, one is for the future, but both are used to make future decisions.
Strangely, Subsumption and the Behavior-Based Approach are strong on AFSMs holding context history, but make very little use of it in the examples provided, and those who follow Brooks try to suppress its use as much as possible. Yet both try to eliminate planning, which is so closely related. Brooks strongly tries to eliminate any kind of planning. Toto is an example where this is taken to the extreme, with its distributed building of maps. Herbert was even more so. Quoting from "Cambrian Intelligence":
"The laser-based soda-can object finder drove the robot so that its arm was lined up in front of the soda can. But it did not tell the arm controller that therer was now a soda can ready to be picked up. Rather, the arm controller behaviors monitored the shaft encoders on the wheels, and when they noticed that there was no body motion, initiated motion of the arm, which in turn triggered other behaviors, so that eventually the robot would pick up the soda can."
In other words, rather than having a plan to operate the arm, or looking at the state information of the object finder to see that it had stopped, it looked at an indirect and much less reliable form of the same information - an artifact of it, really - instead of using inter-machine communication.
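In rough code, the Herbert trick reads something like this (my reconstruction; the encoder interface, tick rate, and threshold are all invented):

#define STILL_TICKS 20             /* assumed: this many quiet ticks = "stopped" */

extern long read_left_encoder(void);    /* hypothetical encoder interface */
extern long read_right_encoder(void);
extern void begin_arm_motion(void);

void arm_watchdog(void)            /* called once per control tick */
{
    static long last_l, last_r;
    static int still;

    long l = read_left_encoder();
    long r = read_right_encoder();

    if (l == last_l && r == last_r)
        still++;                   /* wheels haven't turned this tick */
    else
        still = 0;

    last_l = l;
    last_r = r;

    if (still == STILL_TICKS)
        begin_arm_motion();        /* inferred "we've arrived" - no word from the finder */
}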
To me, this isn't clever; it's taking an idea to extremes, avoiding a better quality of information just because using it would look like planning.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

The historical context in subsumption bots really only lasts until about the next clock tick, when the behavioral FSMs go to the next state. They don't keep track of history beyond the past few seconds, and certainly not over minutes. They are living-in-the-present machines. That's why they can get stuck in repetitive behavioral loops so easily, unless you superimpose methods to break the loops.
Plus they don't know **anything at all** about the future, because as both Brooks and Jones say in many places, their behaviors are tightly coupled to sensor input, and that is dictated by what they encounter second-by-second in the environment, and little else.
In essence, these machines have no memory and no predictive capabilities. Think of what your life would be like if your brain worked in a similar fashion. There have been a couple of movies about this recently, neither of which I went to see.
Reactive robotics means just that. This is why Arkin talks about hybrid architectures that combine BBR with planning modules. At the very least, I figure one should implement something like the A-brain/B-brain scheme that Minsky talks about in Society of Mind. The A-brain is the general executor, and the B-brain monitors the A-brain [essentially it tracks the A-brain and keeps it from going into pathological conditions].
The Minsky A/B-brain scheme should be relatively easy to add to a reactive bot: mainly just some kind of memory trace that records which behaviors were triggered, tracks them over a few minutes, and watches for pathological patterns.
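A first cut at such a memory trace might look like this (all names, the window size, and the pathology test are assumptions of mine, not anything from Minsky):

#define TRACE_LEN 256              /* assumed window: a few minutes of ticks */

static unsigned char trace[TRACE_LEN];   /* id of the winning behavior, per tick */
static int head = 0;

void b_brain_log(unsigned char behavior_id)   /* A-brain calls this every tick */
{
    trace[head] = behavior_id;
    head = (head + 1) % TRACE_LEN;
}

/* Crude pathology test: if only two distinct behaviors have fired over
 * the whole window, the A-brain is probably ping-ponging in a loop. */
int b_brain_stuck(void)
{
    unsigned char a = trace[0], b = trace[0];
    int i;
    for (i = 1; i < TRACE_LEN; i++) {
        if (trace[i] != a && trace[i] != b) {
            if (b == a)
                b = trace[i];      /* second distinct id seen */
            else
                return 0;          /* three or more ids: not a simple loop */
        }
    }
    return 1;
}

When b_brain_stuck() fires, the B-brain can force a reset or inject a randomizing behavior - the sort of superimposed loop-breaking method mentioned above.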

Jones talks about exactly this sort of thing. The bot doesn't have any "goal" that says pick up or move cans. It just knows that if it bumps into something light enough to move, and not so large that it stalls the motors, then another behavior is triggered, like picking the thing up or heading towards the home-base light. There is no planning for a task; rather, a sequential set of behaviors emerges that accomplishes a goal. Complete zombie.

The other problem you get into with this is that it won't work so well if you have half a dozen different goals to implement. How does the robot choose which behavior repertoire is supposed to "emerge" in a given situation, if behavior is an emergent phenomenon? These are some of the limitations of pure reactive architectures. And I put it to you that dpa doesn't recognize this because, as I said last time, he's probably actually implementing something more advanced by falling back on his previous programming experience.
Hi
dan michaels wrote:

Maybe.
Perhaps "reactive" is the operant word.
The robot might "react" to a buffer full of sonar measurements, from which it deduced an obstacle on the left, and so turned right. It might instead "react" to a buffer of accelerometer values from which it deduced a collision and so executed some recovery.
Similarly, the robot might "react" to a buffer full of history data, locations, perhaps, from which it deduced that it was stuck, or needed to change some high level modes or turn certain behaviors on or off.
That all seems to flow logically from the same subsumption model that Brooks and Jones describe. The sort of high-level planning I believe you are envisioning is quite doable with this sort of "reactive" subsumption, in my experience.
dpa
dpa wrote:

Hello David. From what I can tell from your comments, you are keying on a word - "reactive" - which has a possibly very wide connotation, while Randy and I have been discussing the specific architectures described by Brooks and Jones. Saying a robot is "reactive" or "responsive" can mean and cover pretty much anything.
Hi Dan,
I think perhaps you misunderstood my point. You and Randy seem to argue that subsumption is by its nature stateless, without history information. I point out that a subsumptive behavior is in most cases triggered by analyzing a buffer full of data: sonar readings or accelerometer values were the two examples. A subsumptive behavior can also be triggered by analyzing a buffer full of HISTORY information - for example, the robot's locations for the last N minutes. This requires no paradigm shift and no non-subsumption elements. It follows logically from the subsumption methodology that Brooks and Jones describe. And it allows for the high-level behaviors and planning that at least some folks seem convinced are not possible with a subsumption architecture.
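A sketch of what I mean, with invented names and thresholds (one position sample per second, a two-minute window, a one-meter radius):

#include <math.h>

#define HIST 120                   /* assumed: 1 position sample/sec for 2 min */

static float hx[HIST], hy[HIST];
static int hpos = 0;

void log_position(float x, float y)     /* fill the HISTORY buffer */
{
    hx[hpos] = x;
    hy[hpos] = y;
    hpos = (hpos + 1) % HIST;           /* hpos now indexes the oldest sample */
}

/* Triggers exactly like a sonar behavior, but on the history buffer:
 * if we haven't strayed a meter from where we were two minutes ago,
 * we're stuck, and an unstick behavior can subsume the lower layers. */
int stuck_trigger(float x, float y)
{
    float dx = x - hx[hpos];
    float dy = y - hy[hpos];
    return sqrtf(dx * dx + dy * dy) < 1.0f;
}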
best dpa
dan michaels wrote:

dpa wrote:

Strictly speaking, I think you're correct. However, I also think this is an "extension" that you have added to Brooks' original idea. It seems to me he is very adamant about how the external environment is its own best representation, and that his robots are really concerned with the "here and now", via being tightly grounded or coupled to current sensor readings, and not with the past or the future, as I mentioned to Randy last time. I just don't see historical tracking appearing in anything said by Brooks or Jones, or in the examples in Jones' books, although I might have missed it.

dan michaels wrote:

One other comment I did want to make along these lines: as mentioned before, what I see you talking about are really "extensions" to Brooks' original subsumption idea, but to me this is actually one of its best features. From the beginning, the architecture was designed for multi-processing and easy addition of new behaviors on top of the old.
I've always viewed subsumption as a really good foundation for building up more powerful robots in a modular fashion, by making it relatively easy to add symbolic, perceptual, and planning modules, etc, even though this is not what Brooks seemed to have in mind originally, when he published papers taking an anti-representationist stance, such as ....
==================="Intelligence without representation" ... Just as there is no central representation there is not even a central system. Each activity producing layer connects perception to action directly. It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors. Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. Minsky [10] gives a similar account of how human behavior is generated. ===================

Hear hear!
The problem, however, is that these sorts of systems aren't easy to understand or to build. I think in humans they arise with the help of learning algorithms that create them. Dan probably believes they arise more with the help of evolution - but it's the same problem for the robot programmer either way: the design ends up being too complex for a human to understand.
As robot programmers, we must either harness the power of learning algorithms to build these systems for us (much work remains before we figure out how to do that), or do the best we can producing parallel, behavior-based-robotics-like systems with finite state machine "scripts" we can understand merged in (as Dan seems to be suggesting above).
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
BTW, regarding our previous discussion about interrupted behaviors: reading Arkin's book, around page 132, I see that Brooks had a "reset" line going into his original augmented FSMs, which could put a behavior back into state 0. But Arkin doesn't much discuss its use in various implementations, nor the problem we've been discussing. The implication is more that reset is used while booting up the machine.
RMDumse wrote:

dan michaels wrote:

Actually I've seen the reset in Brooks as well, which is probably where Arkin found the basis to include it. Cambrian Intelligence, Ch. 1, pg. 15, Fig. 4: "A module can be reset to state NIL."
One of the candidate ideas I was going to give dpa was that Subsumption gets into trouble when there are shared outputs. For instance, when we walk, we naturally swing our arms in opposition to our legs for efficiency's sake. That is the normal walking gait, and our arms are moved by it. However, if we carry something, our legs are still controlled by the walking gait, but our arms are assigned to a different controller.
So in the Jones simplified model of subsumption, this would not be possible. A module cannot be partially subsumed.
However, when you look at this Fig. 4 in Brooks, you realize he allows some outputs to be subsumed while others are not. The same with inputs: some inputs are suppressed, and others are not. So we have to allow a more neural-net-like structure for Brooksian Subsumption.
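Sketched in code, per-wire subsumption might look like this (the module structure and names are mine, not Brooks'):

struct outputs {
    int leg_cmd;                   /* gait wire */
    int arm_cmd;                   /* arm wire  */
};

extern struct outputs walk_gait(void);     /* drives legs AND swings arms */
extern struct outputs carry_object(void);  /* cares only about the arms   */

struct outputs arbitrate(int carrying)
{
    struct outputs out = walk_gait();
    if (carrying)
        out.arm_cmd = carry_object().arm_cmd;  /* subsume the arm wire only   */
    return out;                                /* the leg wire passes through */
}

In the Jones simplified model, the arbiter would have to replace the whole of walk_gait's output; here only one wire is overridden, which is the distinction Fig. 4 seems to allow.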
While on that subject, I want to make another note. Early Brooks models seem to allow outputs to be wrapped back to inputs. See Fig. and 6 on the following pages. With this loop, I argue, comes feedback, and with feedback comes a sort of internal representation. Similar loops occurred in Genghis (Ch. 2, Fig. 4) between beta-pos and leg-up/leg-down, and between alpha-position, alpha-pos, and alpha-balance. Such output-to-input loops seem less prominent in later examples, making me wonder if they were exorcised in the name of eliminating self-representation. But I am merely speculating about the trend, and have no hard evidence in the text that suggests one thing or another.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

Cool! I hope you do. Exercise #1 is surely doable, yes?
However, this is not a "robomagellan" practice. That is basically a "cone finding" task. This is a set of navigation exercises, unrelated to orange traffic cones -- a different sort of task altogether. I kind of like the name "Off-RoadBots" for this challenge.
cheers dpa
dpa wrote:

Well, it is a necessary first step. Unfortunately, I was way past first steps. Ed and James Koffeman were over working with me, and we were outside doing GPS drive-to-a-point when the GPS module apparently failed, and then I had my stroke very soon after that. Tanks hasn't moved since. The specific code I was working in has rather escaped me, so my struggle is to find what is now lost on my hard drive. Plus the tank has been scavenged for parts - typical of anything that stops moving around here. But yes, Exercise #1 should have been doable.
For those of you not familiar with dpa's proposed exercises, #1 is to have your robot drive out straight and come back straight. I think the target is 100 feet.
Later exercises are to do UMBenchmark squares.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

Perhaps you need to develop it again? And what better time?! You have until next Saturday to get your robot to drive out 100 feet and back, in a big empty parking lot.
You don't have to do it well. That's a separate problem. Exercise #1 is just to do it at all. ;)

The "straight" part is not critical. Just out 100 feet and back, and stop.

and also some simple obstacle avoidance.
Hope to see you there! dpa
dpa wrote:

And then Friday evening about 5:30PM I fried the whole stack of electronics. New H-bridge, PlugaPod(TM), Zigbee Adapter and XBee Pro module. Oh well... I'll have to start over on the electronics in the tank.
Randy
RMDumse wrote:

A likely story! What did you really do with all those parts? Where's my tinfoil hat?

Well, we missed you Saturday. Nobody could do even the simplest exercise -- too much debating, not enough building. I handed out some "I can turn my robot on!" awards.
cheers, dpa
dpa wrote:

I've done that. With time running out on the deadline, and my hair turning grey on the spot, I've removed the fuse and claimed massive electronics failure. But, of course, Randy doesn't use fuses.
[but RD could have easily gone into the other room and gotten all new parts off the shelf. Plugapod, zigbee adapter, they're all in his catalog ;-)].

I'm rather late in this discussion and would like to add a few comments about subsumption in addition to those already given.
1. Brooks caused a revolution in robotics in the mid-1980s with his subsumption architecture, because his robots were the only ones that could run in real environments in real time.
2. The current three-level architectures essentially use the basis of the subsumption architecture - tight coupling of sensors and actuators, sometimes termed reactive - in the "lowest" layer.
3. The subsumption architecture was supposed to be stateless and not use representation; it did, however, use "augmented" FSMs. The augmentation was a timeout, so in a sense each state had a short "memory".
4. Most of the posts talk about some feature of implementation and some of the likely problems with interrupting behaviors. I think the "augmentation feature" may ameliorate that.
5. Also, most people who implement subsumption do it on a single processor, and this seems to create its own problems. I started a hardware implementation: I wanted each behavior to run on its own microcontroller, with no interrupts. The arbiter "merely" needs to know, for mobility, two motor commands. For each behavior to communicate its output to the arbiter without using interrupts or networks, I used a priority encoder and a data selector. When a behavior has completed its cycle and wants to send motor commands to the arbiter, it drives its activity line into the priority encoder high. The output of the encoder (3 bits, to arbitrate among eight behaviors in this case) goes to the data selector, which selects that behavior's serial command stream and routes it to the arbiter. The arbiter monitors its serial input for the commands. A behavior keeps sending its commands, at a frequency depending on how long it takes to process its sensor inputs, for as long as it is active. This can result in a command from one behavior being interrupted on the arbiter's serial line by a command from another. However, the arbiter looks for a valid start character followed by valid commands (see the sketch below). As long as it can do this fast enough to prevent input-buffer overflows, all is well.
I tried this out using five BasicX microcontrollers running five different loops on different time scales; one was triggered by a button press. The output of the arbiter was connected to a motor controller turning the wheels of a real robot, albeit up on a stand. All seemed to work as planned - the motors changed direction and speed according to the various loops. Each BasicX lit an LED when its behavior was active, so the wheel motion could be compared against the behavior in that micro. Unfortunately, I haven't yet implemented behaviors on the robot. Work in progress.
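For what it's worth, the arbiter's receive loop might be sketched like this in C (my guess at the scheme; the four-byte frame with a start byte and checksum is an assumption):

#include <stdint.h>

#define START_BYTE 0xA5            /* assumed "valid start character" */

extern int  serial_getc(void);     /* assumed: returns -1 when no byte waiting */
extern void set_motors(int8_t left, int8_t right);

void arbiter_poll(void)
{
    static uint8_t frame[4];       /* START, left, right, checksum */
    static int n = 0;
    int c;

    while ((c = serial_getc()) >= 0) {
        if (n == 0 && c != START_BYTE)
            continue;              /* resync: hunt for the start byte */
        frame[n++] = (uint8_t)c;
        if (n == 4) {
            n = 0;
            if ((uint8_t)(frame[1] + frame[2]) == frame[3])
                set_motors((int8_t)frame[1], (int8_t)frame[2]);
            /* bad checksum: the frame was spliced by a priority
             * switch mid-packet, so drop it and resync */
        }
    }
}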
John-
Harry Rosroth wrote:

Actually, by then, more centralized robots were working in real time, but not many of them. The "plan, then execute" people were still influential back then, and many of the planners weren't real time.

That's really just hierarchical control. Most industrial control systems are hierarchical in that sense.

The stateless thing was more of a Connell thing. For his PhD thesis, he built a robot to find empty soda cans and dispose of them. The thing had very little internal state; if the hand had a can in it, the goal was to find the trash can; if the hand was empty, the goal was to find another can. This was to explore the limits of stateless systems, his PhD topic. It wasn't really a useful direction.
The "no representation" thing was a major feature of subsumption architectures. But back then, mapping systems weren't very good, although Moravec at CMU had done some good work. Now that the "simultaneous localization and mapping" problem has been to some extent solved, mapping is working much better.
The basic problem with the "no representation" approach is that you'll never get beyond insect-level AI that way. It's just too dumb.

Conflict between behaviors is tough if you're trying to do anything complicated. If you have some hysteresis in the behavior switching, things work better.
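One simple way to get that hysteresis (illustrative only, with an assumed tick-based arbiter): once a behavior wins arbitration, it keeps control for a minimum number of ticks, so two behaviors hovering near the same trigger threshold can't chatter:

#define HOLD_TICKS 10              /* assumed minimum tenure for a winner */

int arbitrate(int winner, int tick)    /* winner = highest-priority requester */
{
    static int current = -1;
    static int since = 0;

    if (current == -1 || winner == current || tick - since >= HOLD_TICKS) {
        if (winner != current)
            since = tick;          /* new behavior takes over; restart hold */
        current = winner;
    }
    return current;                /* challengers ignored until the hold expires */
}

A real arbiter would probably let a safety behavior (a bumper escape, say) bypass the hold, but the principle is the same.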

Brooks had one M68000 CPU per leg, communicating using an I2C loop.
                John Nagle                 Animats