What is Subsumption?

I don't know if I came out and said it or not. Several times I've tried to post to this thread without having enough time to finish the post, so what I wrote got lost. But...

The reason not to stop as soon as you find a behavior that fires is that the lower behaviors, if they are state machines, should be allowed a time slice to keep their state current. However, I think there are problems even here. Since the Jones examples only show state in the Escape behavior, I would argue the Escape behavior should not hold its state if interrupted, but should force the state number (or vector or pointer or...) back to the initial state upon return.

Well, there's another one. It's on my bookshelf next to Mitchell's "An Introduction to Genetic Algorithms". I'm back in Ch 7 of Arkin, trying to move forward. I think I'll try Murphy's "Introduction to AI Robotics" next.

Neat! Glad to hear it.

You mention going around the house. Does that mean outside? Will this be a robomagellan-type robot?

dpa is promoting a robomagellan practice here in Dallas, and I'm hoping to get my Tankbot out, and get it running again. Depends on what other loading I've got this week.

Yes, I'm looking forward to getting into GAs and learning as a subject. I still haven't done enough reading there yet.

Well good luck and keep us posted on progress.

Reply to
RMDumse

Hi Randy. This week I re-read the 80 pages on subsumption in "Mobile Robots", and as mentioned before, they always compute all behaviors, but have 2 different arbiters in different examples.

In one example, they talk about how the arbiter uses "message passing", and they go down through the entire list. The most-critical behaviors, like bump, are at the **bottom** of the list. This way, the messages of the more-critical behaviors will replace those of the less-critical behaviors higher up in the list.

In the other example, they use a nested if-then-else structure, and place the more-critical behaviors at the **top** of the list. This way, if one fires, then the rest of the list is abandoned. Obviously it works either way.
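Just to make that concrete, here's roughly how I picture the two arbiters in C - my own sketch, not the book's code, with made-up behavior names and a stub drive_motors():

#include <stdio.h>

typedef struct { int speed; int turn; int active; } MotorMsg;

/* stub behaviors -- a real robot would read sensors here */
static MotorMsg cruise(void) { MotorMsg m = { 50,  0, 1 }; return m; }  /* always wants to creep forward     */
static MotorMsg avoid(void)  { MotorMsg m = { 30, 20, 0 }; return m; }  /* active=1 when IR sees an obstacle */
static MotorMsg bump(void)   { MotorMsg m = {-30, 45, 0 }; return m; }  /* active=1 when a bumper is pressed */

static void drive_motors(int speed, int turn) { printf("speed=%d turn=%d\n", speed, turn); }

/* Arbiter #1: "message passing". Every behavior is computed every cycle,
   and the most critical behaviors sit at the BOTTOM of the list, so
   their messages overwrite those of the less critical ones above. */
static void arbitrate_messages(void)
{
    MotorMsg out = { 0, 0, 0 };
    MotorMsg m;
    m = cruise(); if (m.active) out = m;
    m = avoid();  if (m.active) out = m;
    m = bump();   if (m.active) out = m;    /* bottom of the list wins */
    drive_motors(out.speed, out.turn);
}

/* Arbiter #2: nested if-then-else. The most critical behavior is tested
   FIRST; as soon as one fires, the rest of the list is abandoned. */
static void arbitrate_if_else(void)
{
    MotorMsg m;
    if      ((m = bump()).active)  drive_motors(m.speed, m.turn);
    else if ((m = avoid()).active) drive_motors(m.speed, m.turn);
    else    { m = cruise();        drive_motors(m.speed, m.turn); }
}

int main(void) { arbitrate_messages(); arbitrate_if_else(); return 0; }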

Also, you're right about the idea of keeping "state current". All of the different behaviors are implemented as separate FSMs, and many of these FSMs have a number of different states. So, this is one reason to compute every behavior on every time-tick.

And this is where the "augmented" part of augmented-FSM comes in. Sometimes, you want a behavior to stay in each state for a certain length of time, and then go to the next state. E.g., Escape might be: back up for 2 sec, then turn to the right for 1 sec, then go forward. All the while, of course, you're still computing the entire list of behaviors and doing the arbitration. Therefore, this can be implemented either as entirely separate CPUs, as Brooks talked about, or by using some scheme of multi-tasking on a single CPU.
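A rough sketch of that kind of timed Escape sequence, written as a C switch that gets called on every tick - again just my own illustration, with assumed helpers like get_ticks(), request_drive(), and bumper_hit(), not anything out of Brooks or Jones:

/* Illustrative timed Escape FSM: back up 2 sec, turn right 1 sec, done.
   run_escape() is called on EVERY time-tick along with all the other
   behaviors; it only advances when the timer for its current state
   expires. The helpers below are assumed system hooks, not a real API. */

#define TICKS_PER_SEC 50UL

enum escape_state { ESC_IDLE, ESC_BACKUP, ESC_TURN };

static enum escape_state esc_state = ESC_IDLE;
static unsigned long     esc_deadline;

extern unsigned long get_ticks(void);            /* free-running tick counter           */
extern void request_drive(int speed, int turn);  /* post a motor request to the arbiter */
extern int  bumper_hit(void);

void run_escape(void)
{
    switch (esc_state) {
    case ESC_IDLE:
        if (bumper_hit()) {                      /* trigger the ballistic sequence */
            esc_state    = ESC_BACKUP;
            esc_deadline = get_ticks() + 2 * TICKS_PER_SEC;
        }
        break;
    case ESC_BACKUP:
        request_drive(-40, 0);                   /* back up for 2 sec */
        if (get_ticks() >= esc_deadline) {
            esc_state    = ESC_TURN;
            esc_deadline = get_ticks() + 1 * TICKS_PER_SEC;
        }
        break;
    case ESC_TURN:
        request_drive(0, 45);                    /* turn right for 1 sec */
        if (get_ticks() >= esc_deadline)
            esc_state = ESC_IDLE;                /* done; Cruise takes it forward again */
        break;
    }
}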

I can see one problem, however, with continually processing behavioral FSMs in the background while others have taken over the machine. Let's say, one behavior with 6-states gets control, and running to completion will take 20-sec. But after only 8-sec, a higher priority behavior takes over, and completes in 5-sec. Well, you still have the first behavior underway and somewhere in its time-frame to completion. Now, if it then takes over the machine again, it may have actually skipped having performed a state or two, and when it restarts executing, it may cause some havoc in the machine because those states were skipped.

This might lead to highly pathological behavior. I think this can be a serious problem, and so your comment about "forcing the state number back to zero" above makes some sense. I'm not sure if this is considered in Brooks' or Jones' stuff. Something to look up.

See the other thread I started. I'm hacking a tracked RAD the Robot base. It's probably not able to run outside over rough terrain very well, however. Other than that, it's pretty cool. Large enough to run around the house fairly quickly, over bare and carpeted floors, and also to carry lots of sensors.

Done the mechanical and electrical hacks now, and ready to start programming. Got a wireless cam so I can see where it's going from afar, plus zigbee comms so it can report back its internal state info, plus I can control its directions. Should be a good base for testing pure subsumption techniques. EG, I will be able to specifically tell it to do something, like go into a corner, and then see how well it can get itself out, etc. Put it into a situation where canyoning can take place, etc.

Reply to
dan michaels

Cool! I hope you do. Exercise #1 is surely doable, yes?

However, this is not a "robomagellan" practice. That is basically a "cone finding" task. This is a set of navigation exercises, unrelated to orange traffic cones -- a different sort of task altogether. I kind of like the name "Off-RoadBots" for this challenge.

cheers dpa

Reply to
dpa

Well, it is a necessary first step. Unfortunately, I was way past first steps. Ed and James Koffeman were over working with me, and we were outside doing a GPS drive-to-a-point, when the GPS module apparently failed, and then I had my stroke very soon after that. Tanks hasn't moved since. The specific code I was working in has rather escaped me, so my struggle is to find what is now lost on my hard drive. Plus the tank has been savaged for parts. Typical of something not moving around here. But yes, Exercise #1 should have been doable.

For those of you not familiar with dpa's proposed exercises, #1 is to cause your robot to go out straight and back straight. I think the target is 100 feet.

Later exercises are to do UMBenchmark squares.

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

Exactly!

I don't think the issue of stored history and its maintenance is considered in any writings I've seen on Subsumption. I think it is a hole in the design. State information and its maintenance is critical. Just taking control away from a state machine, disconnecting its outputs, changing them, and then giving control back, is a recipe for serious problems.

Honestly, I suspect this is why ballistic behaviors are not encouraged in the reactive Brooks model, and I also suppose this is one of the major roadblocks standing in the way of more intelligent systems based on the Behavior Based paradigm, as we've inherited it.

We're not going to get smarter machines until the idea of stored history and its maintenance (state information) is better handled. I'm very serious about this point. My interest in state machines comes from exactly this root belief. In short, my position is that both AI and Robotics depend on advances in the understanding of stored history and its maintenance, in the same sense that early calculators/computers were not of much use until the advent of stored programs enabled them. Again, it's an issue of storage, but it isn't data or program, it's history: what active history needs to be kept to allow an automaton to function intelligently.

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

I agree with you 100% here. In the scenario I gave last time, of the behavior that takes 20-sec to go to completion, but gets interrupted part-way through, and is then later resumed from where it left off, one would seem to need a higher-level routine to look at what had all just happened, and put it into the perspective of both past events and the current situation, before deciding how to proceed.

As a trivial example, what if the behavior that was interrupted was reaching for a block, and the interrupting behavior caused the bot to back up and turn 90 degrees for whatever reason. The interrupted behavior is now completely useless - and worse, clueless - if it resumes from where it left off.

Reply to
dan michaels

Yes, I was thinking of an even worse one: say some mechanical part had to be positioned, like a gripper, so a can could be picked up. Another, higher-priority behavior closes the grip. Now, returning control to the grasp behavior, the arm is extended, bluntly striking the can and tumbling it away, perhaps as a projectile across the room.

Ho ho! Now there is the answer for dpa. David has complained that Subsumption has been painted as not sufficient for higher-level tasks. He asks in reply: just what is the task Subsumption has been shown insufficient to do? Because he has found that when he asks for specifics, he gets no answers to support the claim.

Now here is an answer. What isn't subsumption good for? Keeping state information current and applicable when operating with multiple ballistic (meaning sequenced, multiple-state-driven) behaviors. Without exceptions built into the behavior itself, interruption by higher-priority behaviors may (but not always) cause loss of accurate state information.

Subsumption is rather a poor multitasking model when it comes to higher-level behaviors which keep internal state information.
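One way I could imagine patching that hole - just a sketch of my own, not anything from Brooks or Jones, and all names illustrative - is to give every behavior a reset() alongside its run(), and have the arbiter call reset() on whatever behavior loses control mid-sequence:

/* Each behavior exposes run() and reset(); every behavior still gets its
   time slice every cycle, and the arbiter calls reset() on whichever
   behavior was in control last cycle but is not this cycle, so a
   half-finished ballistic sequence can't later resume from stale state.
   (Motor output handling is omitted to keep the sketch short.) */

typedef struct {
    const char *name;
    int  (*run)(void);      /* compute; return nonzero to request control       */
    void (*reset)(void);    /* force the behavior's FSM back to its initial state */
} Behavior;

extern Behavior behaviors[];    /* sorted highest priority first */
extern int      num_behaviors;

void arbitrate(void)
{
    static int last_winner = -1;
    int winner = -1;
    int i;

    for (i = 0; i < num_behaviors; i++)
        if (behaviors[i].run() && winner < 0)
            winner = i;                       /* highest-priority requester wins */

    if (last_winner >= 0 && last_winner != winner)
        behaviors[last_winner].reset();       /* lost control: drop any stale state */

    last_winner = winner;
}

That at least keeps a half-finished ballistic sequence from resuming blindly, at the cost of throwing its progress away.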

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

Perhaps you need to develop it again? And what better time?! You have until next Saturday to get your robot to drive out 100 feet and back, in a big empty parking lot.

You don't have to do it well. That's a separate problem. Exercise #1 is just to do it at all. ;)

The "straight" part is not critical. Just out 100 feet and back, and stop.

and also some simple obstacle avoidance.

Hope to see you there! dpa

Reply to
dpa

I looked through Jones' books and code examples, but nowhere that I can find does he address this problem we're discussing. Most of his "behaviors" are trivial and simply do direct input-output [sensor-motor] calculations. Where he implements behaviors using FSMs which go through a sequence of states on a timed basis, there is no provision for terminating the sequence in the middle, once begun. They just continue to completion.

In the first book, he uses signal-flags to indicate when a behavior is triggered, and the only thing that seems to reset the flags is when the FSMs cycle through all their states to completion. In the 2nd book, he doesn't even have the signal-flags anymore, just a state++ inside each case statement, which advances until the final state sends the FSM back to state=0. Maybe I'm missing something.
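As I read it, that pattern boils down to something like the following (my own paraphrase in C, not Jones' actual code; the motor_command() and bumper_hit() helpers are made up). Note there is nothing in it that can cut the sequence short once the flag is set:

/* Paraphrase of the pattern described above: a trigger sets a flag, the
   FSM advances one state per call via state++, and the flag is cleared
   only when the sequence runs all the way to completion. */

static int escape_flag = 0;
static int state = 0;

extern int  bumper_hit(void);
extern void motor_command(int speed, int turn);

void escape(void)
{
    if (bumper_hit())
        escape_flag = 1;                /* triggered; nothing below can cancel it */

    if (!escape_flag)
        return;

    switch (state) {
    case 0:
    case 1:  motor_command(-40, 0); state++; break;   /* back up for two ticks   */
    case 2:
    case 3:  motor_command(0, 45);  state++; break;   /* spin away for two ticks */
    default: state = 0;                                /* sequence complete ...   */
             escape_flag = 0;                          /* ... only now clear flag */
             break;
    }
}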

As regards David/dpa, I imagine that, like the rest of us, he hasn't stuck precisely with the subsumption architecture the way Brooks and Jones describe it, but rather adds his own embellishments - based on his own programming style - to get around such issues as we're describing.

For my part, I think it's obvious where subsumption breaks down. It's really only good for simple behaviors that don't require true planning or symbolic operations. You will notice that Brooks really never "successfully" took it beyond insect-like behavior. At least as I see it.

Reply to
dan michaels

This is a very profound statement, Dan. Let me suggest where it leads me. Planning or symbolic operations, to be effective in robotics, need to be boiled down to their essence and put into action. In other words, to have a plan means you have information about what to do, and what not to do, in the future.

Likewise, but in a converse way, state information is history that has been sufficiently boiled down and kept to provide context - to have information about what to do, and what not to do, in the future.

The two are mirror images of each other. History gives context. Plans give context. One is from the past, one is for the future, but both are used to make future decisions.

Subsumption and the Behavior Based Approach are, strangely, strong on AFSMs having context history, but make very little use of it in the examples provided, and those who follow on from Brooks try to suppress its use as much as possible. But both try to eliminate the planning which is so very much related. Brooks strongly tries to eliminate any kind of planning. Toto is an example where this is taken to the extreme, with its distributed building of maps. However, Herbert was even more so. Quoting from "Cambrian Intelligence":

"The laser-based soda-can object finder drove the robot so that its arm was lined up in front of the soda can. But it did not tell the arm controller that therer was now a soda can ready to be picked up. Rather, the arm controller behaviors monitored the shaft encoders on the wheels, and when they noticed that there was no body motion, initiated motion of the arm, which in turn triggered other behaviors, so that eventually the robot would pick up the soda can."

In other words, rather than having a plan to operate the arm, or looking at the state information of the object finder to see that it was stopped, it looked at an indirect and much less reliable form (or I should say an artifact) of the same information, instead of using inter-machine communication.

To me, this isn't clever, but taking an idea to the extremes to avoid using a better quality of information just because it looks like planning.

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

The historical context in subsumption bots really only lasts till about the next clock tick, when the behavioral FSMs go to the next state. They really don't keep track of the history beyond the past few seconds, and certainly not over minutes. They are really living-in-the-present machines. That's why they can get stuck in repetitive behavioral loops so easily, unless you superimpose methods to break the loops.

Plus they don't know **anything at all** about the future, because, as both Brooks and Jones say in many places, their behaviors are really tightly coupled to sensor input, and that is dictated by what they encounter second-by-second in the environment, and little else.

In essence, these machines have no memory and no predictive capabilities. Think of what your life would be like if your brain worked in a similar fashion. There have been a couple of movies about this recently, neither of which I went to see.

Reactive robotics means just that. This is why Arkin talks about hybrid architectures that combine BBR with planning modules. At the very least, I figure one should implement something like the A-brain/B-brain stuff that Minsky talks about in Society of Mind. The A-brain is the general executor, and the B-brain monitors the A-brain [essentially tracking it and keeping it from going into pathological conditions].

The Minsky A/B-brain thing should be relatively easy to add to a reactive bot. Mainly just some kind of memory trace that monitors the behaviors triggered, tracks them over a few minutes, and watches for pathological patterns.
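Something along these lines, maybe - all names and thresholds invented, just to show the flavor of a behavior-trace monitor:

/* Minimal "B-brain" sketch: record which behavior won arbitration each
   cycle, and flag a pathological loop if the recent trace is nothing but
   two behaviors ping-ponging back and forth. */

#define TRACE_LEN 256
#define LOOK_BACK  60                  /* cycles to examine, e.g. ~1 minute at 1 Hz */

static int trace[TRACE_LEN];
static int head = 0;

void b_brain_record(int behavior_id)   /* A-brain calls this once per cycle */
{
    trace[head] = behavior_id;
    head = (head + 1) % TRACE_LEN;
}

int b_brain_stuck(void)                /* returns 1 if a two-behavior loop is detected */
{
    int a = trace[(head + TRACE_LEN - 1) % TRACE_LEN];   /* most recent winner */
    int b = trace[(head + TRACE_LEN - 2) % TRACE_LEN];   /* one before that    */
    int i;

    if (a == b)
        return 0;
    for (i = 3; i <= LOOK_BACK; i++) {
        int id = trace[(head + TRACE_LEN - i) % TRACE_LEN];
        if (id != ((i % 2) ? a : b))   /* expect strict a/b alternation */
            return 0;
    }
    return 1;   /* same two behaviors, alternating, for LOOK_BACK cycles */
}

On detecting a loop, the B-brain could, say, suppress one of the two behaviors for a while, or inject a random turn, to break the cycle.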

Jones talks about just this sort of thing. The bot doesn't have any "goal" that says pick up or move cans. It just knows that if it bumps into something that is light enough to move, and not so large that it stalls the motors, then another behavior is triggered, like picking it up or starting to head towards the home-base light. There is no planning for a task, rather the emergence of a sequential set of behaviors that accomplishes a goal. Complete zombie.

The other problem you get into with this is that it won't work so well if you have half a dozen different goals to implement. How does it choose which behavior repertoire is supposed to "emerge" in a given situation, if behavior is an emergent phenomenon? These are some of the limitations of pure reactive architectures. And I put it to you that dpa doesn't recognize this because, as I said last time, he's probably actually implementing something more advanced by relying on his previous programming experience.

Reply to
dan michaels

Maybe.

Perhaps "reactive" is the operant word.

The robot might "react" to a buffer full of sonar measurements, from which it deduced an obstacle on the left, and so turned right. It might instead "react" to a buffer of accelerometer values from which it deduced a collision and so executed some recovery.

Similarly, the robot might "react" to a buffer full of history data, locations, perhaps, from which it deduced that it was stuck, or needed to change some high level modes or turn certain behaviors on or off.

That all seems to flow logically from the same subsumption model that Brooks and Jones describe. The sort of high-level planning I believe you are envisioning is quite doable with this sort of "reactive" subsumption, in my experience.

dpa

Reply to
dpa

Hello David. From what I can tell from your comments, you are keying on a word - "reactive" - which has a possibly very wide connotation, while Randy and I have been discussing the "specific" architectures described by Brooks and Jones. Saying something like "this robot is reactive or responsive" can mean and cover pretty much anything.

Reply to
dan michaels

Hi Dan,

I think perhaps you misunderstood my point. You and Randy seem to argue that subsumption is by its nature stateless, without history information. I point out that a subsumptive behavior is triggered in most cases by analyzing a buffer full of data: sonar readings or accelerometer values were the two examples. A subsumptive behavior can also be triggered by analyzing a buffer full of HISTORY information, like for example the robot's locations for the last N minutes. This requires no paradigm shift or adding of non-subsumption elements. It follows logically from the subsumption methodology that Brooks and Jones describe. And it allows for hi-level behaviors and planning that at least some folks seem convinced is not possible with a subsumption architecture.
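For example - a sketch only, with invented names, units and thresholds - a "stuck" trigger that reads a buffer of recent odometry positions works exactly like any other behavior's trigger test:

/* A trigger driven by a HISTORY buffer instead of a raw sensor: record
   the last HIST_LEN odometry positions, and if the net displacement over
   that window is tiny while the robot has been trying to move, request
   an un-stick maneuver through the normal arbitration. */

#include <math.h>

#define HIST_LEN 100                       /* e.g. last 100 samples at 1 Hz */

static double hx[HIST_LEN], hy[HIST_LEN];
static int head = 0, count = 0;

void history_record(double x, double y)    /* call once per sample period */
{
    hx[head] = x;
    hy[head] = y;
    head = (head + 1) % HIST_LEN;
    if (count < HIST_LEN)
        count++;
}

int stuck_triggered(void)                  /* a trigger test, like any other behavior's */
{
    double dx, dy;

    if (count < HIST_LEN)
        return 0;                          /* not enough history yet */
    /* with a full ring buffer, head indexes the oldest sample */
    dx = hx[(head + HIST_LEN - 1) % HIST_LEN] - hx[head];
    dy = hy[(head + HIST_LEN - 1) % HIST_LEN] - hy[head];
    return sqrt(dx * dx + dy * dy) < 0.25; /* < 0.25 m net motion over the window */
}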

best dpa


Reply to
dpa

Strictly speaking, I think you're correct. However, I also think this is an "extension" that you have added to Brooks' original idea. It seems to me he is very adamant about how the external environment is its own best representation, and that his robots are really concerned with the "here and now", via being tightly grounded or coupled to current sensor readings, and not with the past or the future, as I mentioned to Randy last time. I just don't see historical tracking appearing in anything said by Brooks or Jones, or in the examples in Jones' books, although I might have missed it.

Reply to
dan michaels

BTW, as regards our previous discussion about interrupted behaviors: reading in Arkin's book, around page 132, he shows that Brooks had a "reset" line going into his original augmented FSMs, which could put the behavior back into state 0, but he doesn't much discuss its use in various implementations, nor the problem we've been discussing. It reads more like the reset is implied to be used when booting up the machine.


Reply to
dan michaels

You should regard subsumption as an abstract model, not as an actual description of the algorithm.

I think that an efficient way to implement subsumption on a serial machine is to have a list of behaviors sorted by priority, and to define two subroutines for each behavior: one which determines whether the behavior requests to be activated, and one which actually calculates the motor output. For each cycle, iterate through the list starting from the highest-priority behavior, calling the activation-test subroutine until you find one that requests activation. Call its output subroutine. End of cycle.

(untested) example (scheme language):

(define (process-cycle behavior-list)
  ;; Walk the priority-sorted list; the first behavior that requests
  ;; activation computes the motor output for this cycle.
  (cond ((null? behavior-list) 'no-behavior-active)   ; guard: nothing fired this cycle
        ((request-activation? (car behavior-list))
         (calc-output (car behavior-list)))
        (else (process-cycle (cdr behavior-list)))))

Of course, if you implement subsumption using ad-hoc hardware, with behaviors corresponding to physical circuits, you can have all the behaviors running in parallel.

Reply to
vend82

One other comment I did want to make along these lines is that, as mentioned before, what I see you talking about are really "extensions" to the original subsumption idea of Brooks, but to me, this is actually one of its best features. From the beginning, the architecture was designed for multi-processing and easy addition of new behaviors on top of the old.

I've always viewed subsumption as a really good foundation for building up more powerful robots in a modular fashion, by making it relatively easy to add symbolic, perceptual, and planning modules, etc, even though this is not what Brooks seemed to have in mind originally, when he published papers taking an anti-representationist stance, such as ....

====================
"Intelligence without representation" ...
Just as there is no central representation there is not even a central system. Each activity producing layer connects perception to action directly. It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors. Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. Minsky [10] gives a similar account of how human behavior is generated.
=====================

Reply to
dan michaels

Hear hear!

The problem however is that these sorts of systems aren't easy to understand or to build. I think they evolve in humans with the help of learning algorithms that create them. Dan probably believes they evolve more with the help of evolution - but it's the same problem for the robot programmer either way. It's a design that ends up being too complex for a human to understand.

As robot programmers, we must either harness the power of learning algorithms to build these systems for us (much work is still left to figure out how to do that), or we do the best we can producing parallel, behavior-based-robotics-like systems with finite state machine "scripts" that we can understand merged in with them (as Dan seems to be suggesting above).

Reply to
Curt Welch

This is pretty much what I do, though in Java.

That would be nice. Though it shouldn't be that expensive now with an FPGA as a base.

Reply to
D. Jay Newman
