What is Subsumption?

RMDumse wrote: <snip>


<snip examples> These all seem like examples of garden-variety conditional execution, if-then-else. Perhaps you are confusing subsumption with conditional execution. I notice that all your examples only have two behaviors, which in my experience is really not enough to get the benefit of the subsumption paradigm. If you only have two behaviors then you probably can just use if-then-else.

Subsumption and FSMs can both be made with if-else-then conditional execution. So by this logic conditional execution is the larger concept? I'll grant that, but I don't really understand your point.
That said... I actually do code all my "behavior-based" robot "subsumption architecture" behaviors using finite state machines. It's how all the behaviors work on all my robots. It seems to me the simplest way to write the code.
dpa
dpa wrote:

Ah, the point is slightly off. Going back to the opening sentence, "Brooks's subsumption architecture provides a way of combining distributed real-time control with sensor-triggered behaviors."
Brooks may have claimed FSM's below his behaviors in Subsumption, although, as I have argued here, he has attempted to remove all the state information possible from the FSM's, as evidenced by his examples and by Jones's explicit discussions of preferring servo over ballistic response.
Now if you've removed all the "stateness" and reduced the FSM's to purely servo outputs, then you have a no-state machine with no-state transitions. If you've got no transitions, you've got no conditionals. So where are the triggers for transitions, the changes, which characterize a state machine? Where do you put the conditional that turns on and off the calculated output?
If you're Brooks, you move it up a level, and call it a triggered behavior in Subsumption.
If you are an FSM guy, perhaps you look at behaviors and Subsumption and think, huh, here is a broken-off piece of an FSM. It looks like the initial state before anything is triggered. And then when it triggers, it transitions into the rest of the machine. It looks like a little two-state machine, with a non-triggered and a triggered state. Somehow it must save its state of being active, or it could be retriggered.
One of the most difficult things in doing FSM design is approaching the problem and identifying how many states are in a state machine, and the larger problem of which states go with which machine. It is always possible to create too many states in a machine. Often when you see a proliferation of states, you should ask whether the problem needs splitting, whether you are trying to make one complex machine out of what should be two simple ones. Conversely, you can always over-factor, and split a machine into several machines, when one machine would have been a simpler and more elegant solution.
When you see one machine start off another machine, and then be unable to return to its initial state until that other machine finishes back to its initial state, you should suspect you have too much factoring: that you don't have two machines, but one split off from the other.
So a behavior itself is a two-state machine. Which state it is in is determined by an input conditional; the transition either enters the behavior's "non-FSM" to calculate the output, or runs a larger FSM with its own transitions.
Now this brings me to a really important point I hadn't noticed before. This is strong evidence that behaviors in Subsumption are a broken-off piece of a machine. While the trigger in the behavior initiates the entry into the FSM and the act of Subsumption, if the FSM has any state to it, the FSM and _not_ the behavior trigger determines when it comes out of Subsumption.
The behavior has a trigger in it which starts the Subsumption. Once triggered, though, there remains the memory of being triggered; state is kept. The behavior remains subsuming until the FSM is finished. The FSM has control of releasing Subsumption (if it has state in it at all), and without that go-ahead, the behavior cannot go back to the state where it releases Subsumption and can again be triggered.
A ballistic behavior like Escape can be triggered by a bump switch, causing a subsumption of a lower behavior like wander. However, the release of the trigger input does not cause the end of the behavior or the release of Subsumption. The FSM finishes before the two-state behavior machine can release.
Evidence: the bumper being pressed does not reset Escape to its initial state. Evidence: the bumper being released does not set Escape to its initial or final state. Nor does any intermediate noise cause a state change. The behavior itself therefore must have state, two states: ready-to-trigger and triggered. But it has no way to reset itself; it cannot retrigger. The release must come from communication from the FSM regarding its progression of state. Therefore the state in the behavior is an erroneously separated part of a larger whole.

Conditional logic is a necessary part of Subsumption. Conditional logic is a necessary part of FSM's. But by the same token, in programming, instructions of all kinds are needed to make programs. Does that make instructions the larger concept? Or a subcomponent of programs?
In the realm of programming, if-else-then allows for bifurcation of code execution. When you enter a state machine program periodically to update states and outputs, you first must find what state it is in, before you can determine what conditions to check to know if a transition is necessary. A strong feature of many folks' FSM code is that this finding of state is made with a large if-else-then structure (often hidden by a "switch"-like statement). Every transition firing is predicated on a Boolean result, and whether action is taken or not is if-then based.
You can't have a transition from state to state without a conditional between them. But you can't have a state machine if all you have are conditionals. You need state information for the conditionals to work on as well, and without it, FSM's would be meaningless.
Conditional execution is a necessary, but insufficient, component of FSM's.
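To make that pattern concrete, here is a minimal sketch of the usual switch-based FSM update, run once per pass of the main loop. It is not from Brooks or Jones; the state names and the sensor predicates (bumper_hit(), backup_done(), turn_done()) are made up for illustration.

/* Hypothetical sketch: the switch finds the current state, then       */
/* if-then conditionals decide whether a transition fires.             */
enum state { IDLE, BACKING, TURNING };
enum state current = IDLE;

void fsm_update(void)
{
    switch (current) {              /* find what state we are in       */
    case IDLE:
        if (bumper_hit())           /* conditional guarding transition */
            current = BACKING;
        break;
    case BACKING:
        if (backup_done())
            current = TURNING;
        break;
    case TURNING:
        if (turn_done())
            current = IDLE;
        break;
    }
}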

Yes.
I need to retract the comment in my previous post about parallelism. Seems I'm hoist by my own opening post: "Thus, the architecture is inherently parallel and sensors interject themselves throughout all layers of behavior."
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

Ah. I think we may be zeroing in.
Your reference to "purely servo outputs" suggests that servo behaviors do not have triggered (active) and non-triggered (inactive) states in addition to their servoing, and that only ballistic behaviors have triggers.
That is not the case.
Only the lowest priority behavior can be continuously triggered, i.e. always active. All others must have triggered (active) and non-triggered (inactive) states. Otherwise the lower priority behaviors would never run.
That's how subsumption works.
This is not obvious if you think only in terms of two behaviors; a subsuming and a subsumed. The power of subsumption is that it is a hierarchical manager of lots of behaviors.
void subsumption()
{
    while (1) {
        cruise();           /* <--- this is the only non-triggered behavior */
        navigate_path();    /* servo */
        sonar_path();       /* servo */
        sonar_avoid();      /* servo */
        ir_avoid();         /* servo */
        bump_recover();     /* ballistic */
        jammed_recover();   /* ballistic */
        arbitrate();        /* send highest priority to motors */
    }
}
best regards, dpa
I reply to my own post below. Hmmm... it's late. dpa wrote:

To clarify the above example: behaviors are listed from lowest priority, cruise(), to highest priority, jammed_recover().
The cruise() behavior tries to ramp the motors up to full speed, straight ahead. It is the only behavior that can be continuously triggered, as there are no lower priority behaviors that it could mask. That is _why_ it can be continuously triggered, though nothing requires it to be. It _can_ be continuously triggered, but _none_ of the other normal subsumption behaviors can be.
cruise() can only output to the motors if the navigate_path() behavior is happy (i.e., non-triggered). That effectively means only when the navigation target is straight ahead. And if the navigate_path() behavior wants to turn left or right or slow down or whatever, it can only do so if the sonar_path() behavior is not triggered (i.e., can't find a long path) and everything above it in priority is also not triggered.
Likewise, sonar_path() can only output when sonar_avoid() and up are not triggered, which can only output when ir_avoid() and up are not triggered, which can only output when bump_recover() and up are not triggered. In this example, jammed_recover() is the highest priority, and can run whenever it darn well pleases.
The key is that each behavior level has a trigger that determines when the behavior is active and inactive, and that trigger is separate and distinct from the servoing values the behavior may send to the motors when it is active.
It is during these untriggered, inactive moments that all lower priority behaviors have a chance to output to the motors.
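For what it's worth, here is a minimal sketch of how an arbitrate() along these lines might work. The flag and command variables are hypothetical, not the actual jBot code; the only assumption is that each behavior leaves behind an active flag and a motor command.

/* Hypothetical sketch: scan from highest to lowest priority and send  */
/* the first active behavior's command to the motors.                  */
struct layer {
    int active;     /* set by the behavior's trigger                   */
    int command;    /* motor command computed by the behavior          */
};

/* index 0 = highest priority (jammed_recover), last = cruise          */
extern struct layer layers[NUM_LAYERS];

void arbitrate(void)
{
    int i;
    for (i = 0; i < NUM_LAYERS; i++) {
        if (layers[i].active) {
            motor_command(layers[i].command);  /* winner drives motors */
            return;
        }
    }
    /* cruise is always active, so control never falls through         */
}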
best, dpa
dpa wrote:

Or maybe not.
But we can hope clarification will come from further discussion.

Ah, no, not at all.
I am beginning to see why this is a difficult explanation. To understand the structure of Subsumption, you have to blend two very different presentations of it: the one Brooks shows, which changes/evolves as he works on it, and Jones's, which is a very simplified (practical) version of it.
The use of "behaviors" in place of "FSM's" is causing confusion. The meaning of behavior is closer to that of layer as Brooks originally uses layers. Subsumption is the architecture of layers (behaviors), and the interconnects are either internal to a layer (behavior) or are from higher layers (behaviors) to lower ones. Inside the behaviors are FSM's, also called modules by Brooks, and Control Systems by Jones, which are the processors of computation and change.
Subsumption layers (behaviors) have parts. There is only one behavior per layer, and the names layer and behavior are used interchangeably. But there may be many state machines in a layer (behavior) (per Brooks, see Cambrian Intelligence, Ch 5, Fig. 2 caption, pg 94: "We wire finite state machines together into layers of control. Each layer is built on top of existing layers. Lower level layers never rely on the existence of higher level layers.")
If you will look in Brooks, Cambrian Intelligence, Ch 2, Fig. 1, pg 30, you will see a) an AFSM and b) an AFSM with Subsumption connections attached to it. Note the Subsumption connections are explicitly external to the AFSM. So the elements of Subsumption are not part of, or internal to, the FSM's. The same thing is shown in Ch 1, Figure 4, pg 15, where the subsumption connections are shown to be outside the module. (Remember Brooks uses module and AFSM interchangeably.)
So with these definitions/clarifications in mind, I'll try again to express my meaning.
I am saying, purely servo control systems don't have triggers or conditionals or states. The behavior has the trigger, and allows the control system (purely servo in this case) to make an active output (per Jones, Robot Programming..., Ch 3, Fig. 3.1). This presence of the trigger in the behavior was discussed extensively in the "What is a Behavior?" thread. I was chided for wanting to take this trigger out of the behavior (to move it into the control system AFSM). The consensus was a behavior always had a trigger (except for the lowest level one). So in a behavior with a purely servo control system, the active/inactive state is kept by the trigger. So it seems the behavior, which was not described as a state machine, actually has a bit of state information hidden in it: active and inactive.
On the other hand, if you have a ballistic behavior, you must have a state machine in your control system. Most ballistic behaviors, like Escape, have an FSM with several states, and stay in each state based on some conditional test for a period of time. Each transition to the next state will have a conditional that allows it to advance. The conditional part of a transition has about the same purpose as a trigger in a behavior. In fact, they are so similar, I would have to say they are essentially the same thing.
Now, in the ballistic case, the behavior level has the trigger-to-active part, but it is the AFSM which has the conditional to release the behavior to the inactive state. What I am saying is this is a very odd flaw. The behavior has 1 bit of state in the servo case, and 1/2 a bit of state in the ballistic case.

Agreed. No mystery here.
In all but the lowest level behaviors, the test for being triggered is determined in the behavior. Not the servo/FSM control system.

Agreed... Well, no, actually the lower priority behaviors always run. All behaviors always run.
It's just that their outputs never get selected by arbitration if the higher priority behaviors haven't released from being active.

Agreed.
Agreed, with the proviso that your example is one of many ways to implement Subsumption.
If outputs are simply placed in the output registers (quickly enough that they are not physically expressed), and the order of execution is by priority as listed, lowest to highest, there is no need for arbitrate(). Put another way, if the code runs in 1/10000th of a second, and the motors can only express changes in terms of 1/100th of a second, subsumption can be controlled simply by letting the last behavior to write the register win.
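A rough sketch of that last-write-wins variant (the loop and tick function are hypothetical; the behavior names are from the example above): behaviors run from lowest to highest priority, and each triggered one simply overwrites the motor registers, so the highest priority active behavior ends up owning them.

/* Hypothetical sketch of subsumption without arbitrate(): any behavior */
/* that is triggered writes the motor registers directly; the last      */
/* (highest priority) active writer wins before the motors can react.   */
void subsumption_last_write_wins(void)
{
    while (1) {
        cruise();            /* always writes the motor registers       */
        navigate_path();     /* each writes only if its trigger is met  */
        sonar_path();
        sonar_avoid();
        ir_avoid();
        bump_recover();
        jammed_recover();    /* highest priority, runs last             */
        /* registers now hold the highest priority active output        */
        wait_for_next_tick();
    }
}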
On the other hand, if you have an arbitrate() which sends only the highest priority active outputs to the output registers as a final step, and the output registers are written only once per time slice, there's no need to run the behaviors in any particular order. arbitrate() will select the outputs on the basis of the highest priority level it finds that is active.
On the other hand, running a very Brooks-like implementation, where higher levels reach in between state machines with inhibits and suppression of messages, you might want to run in the opposite order of priority, from highest to lowest, to allow those messages to be generated to the lower level machines prior to them running in this time slice. But this is not much of a problem, because if the higher routine runs last, the new message may already be loaded in the
There just isn't enough information in the example as shown to know the details of implementation, and there are many ways to implement the same thing.
dpa also wrote:

Agreed, it's obvious as shown.

Well, the higher behaviors can be continuously triggered, but that would mean nothing below it in priority would ever be expressed.
BTW, couldn't we have the lowest level triggered or triggerable as well? Then the default behavior would be "stop" or "sit" because there is no motor output expressed?

Still agreeing, doing fine.

Yes, I'm with you right along.

Right. The trigger is in the behavior (per Jones and not Brooks). The servoing will be in the FSM the behavior calls (which probably isn't a FSM at all).

Okay. Nothing much there to disagree with.
Have I said anything that indicates I don't understand Subsumption? I don't think I've ever convinced you I understand Subsumption. I think I understand it well enough to criticize it, or as above, suggest alternative implementations to the example you have proposed.
The point I found most fascinating and was trying to make is that a bit of state has been fractured out of the original AFSM's and shuffled up to the behavior level. The behavior level has either 1 bit or 1/2 a bit of state information: active or not active. I've often talked to you before about how people hide state information without considering it as such. Here is an example. No one has seemed to notice that the trigger Jones insists is in the behavior level gives it state. But it is as if a bit of every AFSM has been pulled out of it, because the AFSM decides to "untrigger". The behavior cannot let go of its active subsumption over lower levels, once triggered, unless the AFSM is finished. The trigger can decide a layer should subsume. The trigger going away cannot decide the layer should stop subsumption. That permission comes from the control system. Logical consistency would say a better way to go would be to remove the trigger from the behavior, and put both halves of it in the control system, and all control systems would be AFSM's, having their state nature returned to them.
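For clarity, here is a rough sketch of what I mean, with made-up names: put the trigger and the release in the same FSM, so the whole active/inactive bit lives in one place and the behavior layer above it keeps no hidden state. The flags, commands, and predicates here are all hypothetical.

/* Hypothetical sketch: the control system is one FSM holding both the  */
/* trigger (entry) and the release (exit), so the behavior layer keeps  */
/* no hidden active/inactive bit of its own.                            */
enum escape_state { READY, BACKING, TURNING };
enum escape_state escape = READY;

void escape_control_system(void)
{
    switch (escape) {
    case READY:
        if (bumper_pressed()) {          /* trigger lives in the FSM    */
            escape_output = BACKWARD;
            escape_active = 1;           /* subsume lower layers        */
            escape = BACKING;
        }
        break;
    case BACKING:
        if (backup_timer_expired()) {
            escape_output = LEFT_TURN;
            escape = TURNING;
        }
        break;
    case TURNING:
        if (turn_timer_expired()) {
            escape_active = 0;           /* release lives in the same FSM */
            escape = READY;
        }
        break;
    }
}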
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.

I don't think so. You would have to code a low level "stop" behavior to make that work. Even if the behavior was just coded as "left_wheel = 0; right_wheel = 0" it would still be an implicit coding of a low level, untriggered, stop behavior (even if you choose to not think of it that way).
If you didn't initialize the value of the variables (or the control registers, if you were using the registers directly as you talked about) before all the higher priority behaviors were checked, then the default behavior might be to just continue to do what the last active behavior suggested instead of making "stop" the default. One way or another, you would have to structure your code to make "stop" the default, and if you have done that, then you have in effect coded the low level behavior without a trigger. This of course is just semantics, but my point is that you can't make it work without someone else being able to claim you did code a low level default behavior.
In real life, you might not want to halt the machine that quickly and you might instead write something more complex to quickly ramp down the speed:
if (left_wheel > 0)  left_wheel--;
if (left_wheel < 0)  left_wheel++;
if (right_wheel > 0) right_wheel--;
if (right_wheel < 0) right_wheel++;

What's an AFSM? Asynchronous FSM?

I'm trying to learn subsumption terminology the hard way - by reading these posts instead of reading a real reference. :)
Trying to understand how subsumption makes use of the word "trigger" is problematic for me from these posts.
If a behavior is "triggered" by an external state, such as:
if (light_level > 5) do_behavior();
Then the behavior has a "trigger", but it has no internal state - it's triggered only by external state of the light sensor.
But if you do something like:
if (light_level > 5)
    bright_light_behavior_active = TRUE;

if (bright_light_behavior_active)
    do_bright_light_behavior();

do_bright_light_behavior()
{
    behavior_stuff...

    if (behavior_no_longer_needed)
        bright_light_behavior_active = FALSE;   // end behavior
}
Then the behavior has a clear internal state that is triggered by the external event. The behavior is driven by a single bit FSM.
To respond to a transient event, like a bump switch activating because the bot drove into a wall, the system must have internal state which acts as a "memory" of the external event in order to do something complex like back up and turn.
So, how do you code something like a back up and turn behavior in subsumption as a response to a bump switch? How do you put that state into the code without violating the "no state" idea?
One way it could be coded is to implement temporal memory of past events. You could for example include a clock in the system, and do something like this:
if (bump) last_bump = ms_clock;

if (ms_clock - last_bump < 1000)       // back up for 1 second
    do_backup();

if (ms_clock - last_bump > 1000 && ms_clock - last_bump < 1200)
    do_turn_right();                   // turn right for .2 sec
The only "state" such a system has is memory of past events which makes it look less like a FSM (but it is still is a FSM of course). Does subsumption play games like this to "hide" state?
The other way you might "hide" state is to allow the system to remember past behaviors in a similar way the above code "remembers" when the bump switch was last hit. If you included a variable to remember the last time the "do_backup()" behavior was used you could use that as a trigger like this:
if (bump) last_bump = ms_clock;

if (ms_clock - last_bump < 1000) {     // back up for 1 second
    do_backup();
    last_do_backup = ms_clock;
}

// .1 second after do_backup() is done, start turn_right for 200 ms.
if (ms_clock - last_do_backup > 100 && ms_clock - last_do_backup < 300)
    do_turn_right();
So the only "state" the system uses is memory of past events. Does subsumption allow such tricks in some way to make it look like the system has less state?
I have no problem understanding how to code any of this, it's just an issue for me to understand the subsumption paradigm and the normal terminology used with it.

Ok, I'm confused. Are these AFSMs you are talking about part of some version of subsumption, or are you talking about a system that uses FSMs instead of using subsumption?

The "control system"? What's that? I though the idea of subsumption was to define a control system?

I'm lost because I don't understand what the control system is and what the AFSM is and why the "behaviors" are not the control system.
If you could help me get a bit less lost I would appreciate it.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

Yes, I agree with what you say, only, Brooks did lots of experimenting with his messaging between modules. I think in later versions he talks about having messages have specific time limits on their domain of effectiveness. So in this sense, if the message to go times out, it might mean its effects time out as well. To wit, maybe when the go message expires, the wheels stop.

Yes, I can see your point. Whatever invalidated the message after it timed out would either 1) leave its effects in effect, or 2) cancel its effects, which involves clearing the registers it controlled, which could be an active and explicit change in behavior.

It's a Brooks term. In Cambrian Intelligence, Ch 1, he calls FSM's with LISP instance variables Augmented, so they are AFSM's. In Ch 2, he calls FSM's with timers (alarm clocks) built in Augmented, so they are AFSM's. Unfortunately, Brooks has a tendency to do this from paper to paper. Concepts introduced shift as he works/evolves his experience.
Personally, I think FSM's can have timers and not be considered Augmented. To me the timer is just another kind of input, like a Sharp Ranger, or what have you. There is a sense in which some of the FSM's state information is held in the numerical state of the timer. But I see that as no different than saying some of the state information is held in the digital conversion of the Ranger. A number not contained in the state machine is used to determine if the state machine advances. However, the timer count doesn't affect the state the machine is in at all. Only the event of timing out can affect the state the machine goes to. So again, I have no idea why Brooks chose to call these Augmented Finite State Machines (AFSM's).
I once discussed it with Joe Jones, and he said he couldn't help me. Iirc, Brooks never explained this to Jones.

Okay, that is difficult, because as I mentioned, you have two different descriptions of Subsumption and they don't dovetail very well.

Apparently for us all.
Here's how I understand it.
Subsumption has behaviors, or layers. These layers are called repeatedly, by scan loop, or by periodic interrupt, or run on separate processors which have a messaging scheme. (Per Brooks.)
Inside the behavior is a trigger. (Per Jones.)
Also inside the behavior is one control system (Jones) or one to several AFSM's (Brooks).
The control system runs (Jones), or the AFSM's run in parallel (Brooks), based on the inputs from the sensors (Jones), or based on the sensors or an inhibiting message from a higher level that replaces the sensor's true reading (Brooks), and compute a result.
Whether the result is sent out of the behavior to the robot depends on the trigger (Jones) or a subsuming message from a higher level that replaces the result (Brooks).
Here's roughly how I see Jones would code his servo method:
do_behavior()
{
    do_this_behavior_control_system(result)
    if (light_level > 5)
        this_behavior_output = result
        this_behavior_active = TRUE
    else
        this_behavior_output = 0        // or nil??? or don't bother??? not obvious
        this_behavior_active = FALSE    // end behavior
    then
}

Yes, but hang on, look at the ballistic approach for Jones:
do_behavior()
{
    do_this_behavior_control_system(result)                 // control system always runs
    if (light_level > 5 && this_behavior_active == FALSE)   // trigger decides output
        this_behavior_output = result
        this_behavior_active = TRUE                         // turning active is half a state bit
    then
}

do_this_behavior_control_system(result)
{
    if (this_behavior_state == NIL)
        this_behavior_init()
        result = xxx
        this_behavior_state = 1
    else if (this_behavior_state == 1)
        ...
        result = xxx
        this_behavior_state = 2
    else ...
    else ...
    if (this_behavior_state == final)
        ...
        result = 0                      // or nil??? or don't bother??? not obvious
        this_behavior_active = FALSE    // end AFSM behavior; other half a state bit
        this_behavior_state = NIL
    then then then then
}

Yes, and Jones calls this a ballistic behavior because it has an FSM in the control system; once launched it doesn't quit until the FSM says it can quit.

You don't. You actually put in a FSM. Jones, Robot Programming, Ch 3, pg 52 says, "Ballistic behaviors, although some times essential, should be used with caution. ... Before resorting to a ballistic behavior, it is always best to try first to find a servo behavior, that will accomplish the same result."

Not deliberately. But there is a tendency to hide state. That's my beef. Why have state machines, then hide state all over the place to instead use (what looks like) servo behaviors?

Hopefully the descriptions of Subsumption Architectures above, quoted from Brooks and Jones, have helped you know what a layer/behavior is, that Jones says these have triggers, and that control systems/AFSM's generate outputs which the trigger/subsuming messages decide whether to apply to the output/motors or not.
And hopefully with the pseudo code I've shown how I think the trigger and the control system/AFSM's each control half of one bit of state information. The trigger can activate the active output. But the control system, if it is an FSM, must have control over deactivating the output.
So what I'm complaining about is this active/inactive state bit being partially in one place, the behavior, which is not supposed to be a state machine at all, and partially in another, which is supposed to be a state machine, only you are told it's better if you don't use it that way, and leave the other half of the bit back up in the behavior.
My point is the whole division is a false dichotomy. All of Subsumption can be done in a subset of what is possible with state machines, and we can eliminate lots of this confusion.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.

So does that mean he played with systems that queued messages? Otherwise I don't understand how they could time out. A queue of course is a big-time state/memory device.

Yeah, you can always think of a clock as an external part of the environment, and the system then has the ability to sense this aspect of the environment (time passing). From that type of view, you can implement timers in many different ways.

Are the FSM's arbitrarily complex FSMs with as many states and transition rules as you care to put in them, or are they limited in some way? Such as in how they change state?

Sure, I understand that.

Well, my guess is that they are not trying to hide state as much as they are advocating splintering the state machine up into small distributed pieces, each of which is triggered by environmental events instead of internal events.

Yeah, I'm understanding it a little better. But of course I really need to read the books to get a real understanding. The idea of a "servo" behavior from Jones is still not clear but I understand the ballistic concept so I can guess what he's thinking about for servo behavior and I'm probably not too far off.

Are you familiar with OO programming? What you are talking about reminds me a lot of the differences between structured programming and OO programming. And your objections remind me of the type of things some people complain about when trying to learn OO programming but still thinking in structured programming terms. I think there are some important parallels here.
The point of OO programming is that you have to learn to look at the software very differently. Instead of thinking of it as sequential programs that manipulate data structures, you learn to think of it as collections of data structures (objects) that each have large collections of simple functions. You think of the objects as "doing" things instead of your code doing things. The entire view of the program changes from a collection of code (aka algorithms) to a collection of objects.
In structured programming, it's easy to understand the program flow, but hard to see the data. In OO programming, it's easy to understand the data, but almost impossible to understand the program flow. You can write any program using either paradigm, but some applications are far easier to write using OO (complex data structures with no real performance issues), and some are better as structured code (complex performance-intensive algorithms with simple data structures).
With subsumption, and these other behavior based robotics approaches, I see it being the OO approach to bot programming. Except the focus is on behaviors, instead of objects. But it's similar to OO programming because the idea is to fracture your code into lots of little independent functions (aka behaviors) and create simple rules (environmental triggers and priorities) to define when each behavior should be used.
Now, a state machine is very much the same. Every state in effect is a behavior, and you slice the behavior of the robot into lots of little pieces, where each piece is one state. But state machines generally include code in each state that determines when it should change to another state.
The problem with the state machine approach is that every time you add a new behavior (aka a new state), you have to go back and think about all the other states the machine can be in, and think about when it might make sense to abort those old states and switch to this new state. And of course, when the state machines get large, we seldom do this as much as we should, or worse, we try to do it but fail to test the code in all possible states, and think our code is working well when it's still got some bad bug in it.
It also means that when you write code to add a new behavior, you end up having to add code to some old behaviors (states) that make reference to the new behavior. So the code connected with the new behavior (aka state) gets scattered all over the state machine.
The intent with using environmental triggers and arbitration systems (like behavior priorities), is that for a lot of stuff, you don't have to spend any time thinking about the other states and hand coding a lot of new automatic state transitions. All you have to do is think about where to insert the new behavior in the priority list. And all the code for each new behavior gets put together in one place.
So, without being an expert on the subsumption terminology, I think the idea is not so much to stay away from internal state (even though that might get pushed at times), but instead to try to fracture the behaviors into as many micro behaviors as possible, and to try to trigger each from environmental conditions instead of triggering them from internal state.
The advantage to this is that the micro behaviors end up being used in conditions you never thought about, and which they would not have been used if you had used more traditional FSMs to drive behavior.
For example, if you want to back up and turn when you hit a wall, you could write this as one ballistic behavior driven by an FSM that carries out a fixed sequence of behaviors once it's triggered. But you could instead break it up into a backup behavior triggered by being too close to obstacles (measured by two sonar sensors), a left turn behavior triggered by a backup condition with the right sonar registering a close object, and a right turn triggered by a backup with the left sonar closer.
Then when you add a reverse cruise behavior later, you find the turn behaviors you wrote to get out of a trap work nicely to make you turn away from walls as you back up, without you ever having to steal code from your backup-and-turn behavior. So you get automatic sharing of behaviors you never would have gotten had you scripted all the behaviors with state machines.
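Something like this, as a very rough sketch; the sensor functions, thresholds, flags, and commands are all made up for illustration:

/* Hypothetical sketch of the split: each micro behavior is a servo     */
/* behavior triggered purely by current environmental conditions.       */
void backup_behavior(void)
{
    backup_active = (sonar_left() < TOO_CLOSE || sonar_right() < TOO_CLOSE);
    if (backup_active)
        backup_command = BACKWARD;
}

void turn_left_behavior(void)
{
    turn_left_active = (backup_active && sonar_right() < sonar_left());
    if (turn_left_active)
        turn_left_command = LEFT_TURN;
}

void turn_right_behavior(void)
{
    turn_right_active = (backup_active && sonar_left() < sonar_right());
    if (turn_right_active)
        turn_right_command = RIGHT_TURN;
}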
I think the general idea is simply that you should try to trigger all your actions based on what is happening in the environment instead of based on internal state machines, because the more you do that, the more likely the behavior will be put to good use in situations you never thought about. And even if the trigger is based on self-awareness (aka what you were just recently doing, like backing up), that's better than just triggering it because of the last state we were in.
Another way to look at this, is that as programmers, we are very invested in the idea of producing a sequence of instructions to follow.
1. back up
2. turn right
3. cruise forward again
We learn to think about programming as lists of instructions, so we naturally learn to think about programming a bot as lists of instructions. We think that step 3 comes after step 2.
State machines, are just complex lists of instructions where we specify a more complex flow of control. We specify that we go to step 2 after we finish step 12. We are used to hand specifying all the sequence and all the rules for changing sequence.
Ideas like subsumption were developed around the notion that when we program an interactive agent, we need to break out of this mode of thinking that we as the programmer must specify the sequence, and instead specify the environmental conditions which should trigger each possible action.
So we replace the above sequence with:
Trigger                    Action
Bump switch                Back up
Backup & no bump switch    Turn right
When nothing else to do    Go forward
We give up most of our control of sequence, and turn the control of sequence over to the environment as much as possible. But when we have to, we still use small state machines to control sequence within a behavior.
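As a rough sketch (all names made up), that table comes out as little more than a prioritized set of triggers checked every pass of the loop:

/* Hypothetical sketch of the trigger/action table, highest priority    */
/* first; backing_up() is a made-up test of what we were just doing.    */
if (bump_switch())                        /* Bump switch -> Back up     */
    back_up();
else if (backing_up() && !bump_switch())  /* backup & no bump -> turn   */
    turn_right();
else                                      /* nothing else to do         */
    go_forward();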
Now, I don't know if any of the above perspective gives you any new insight into your thoughts about subsumption and its use of state, but I think the above ideas are the prime motivations behind paradigms like subsumption. The more you stay away from state machines, and the more you are able to specify action based on environmental triggers, the more likely the bot will produce useful behaviors in environments you never thought about. The more you use state machines to hand-code the action sequences, the more likely the bot will do something stupid like drive itself off a cliff.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
RMDumse wrote:

So you say. But by the end of this very missive the line has again been confused, to wit you write:

Notice you are referring to all AFSM behaviors (all the behaviors on my robots, for example, are implemented this way), servo included, and not just ballistic behaviors here.

Only true for ballistic behaviors. Not others.

Again, that is not correct. The trigger going away does end all behaviors except the ballistic ones. Only ballistic behaviors continue in the absence of a trigger, hence the name. All other behaviors become inactive as soon as the trigger state is no longer met.
I'm sort of surprised that you did not comment on the two ballistic behaviors in the jBot example, one of which obviously can subsume the other. Wasn't this one of your deep dilemmas? ;)
best, dpa
dpa wrote:

Man, I feel like I was just rope-a-doped. I don't follow your objections at all.
I am saying the behavior has either 1 bit of state, or it has 1/2 a bit of state. (The other half is in the control system/FSM.) My justification that the state bit is there is that, for arbitration to work, the behavior has to supply two values: 1) the output value generated, and 2) whether the output is to be used or not (i.e., has a trigger caused us to be active). The second is a state value.
Are you saying a behavior has no state? or I am saying it has no state? I don't follow your point.

Yes. Saw that. I was holding off for consensus on some of these points, which we don't seem to have reached.
I was going to come back to it and ask how you managed the release of Subsumption from the higher level one to the still running lower one.
One way you could do it is have the higher reset the lower. Another way would be to be sure the higher one always ran longer than the lower one.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
Howdy,
RMDumse wrote:

<snip>
I understand. Bear with me here on a cycle-by-cycle description, and perhaps this will make more sense.
Let me try two simple examples, an IR avoidance behavior and a bumper behavior, one "servo" and one "ballistic" behavior. Assume a subsumption loop running at 20 Hz for this example.
An IR avoidance detector "sees" a detection on the right, so it outputs a command to turn left and sets its subsumption flag to active, to signal the arbitrator. Because it is the highest priority flag for that 1/20th of a second, the arbitrator sends its command to the motors, and that is what the motors do (ignoring any PID slewing, etc.) for the next 50 milliseconds, until the next time through the subsumption loop. But that's all, just for the next 50 milliseconds.
Now, 1/20 second later as the loop executes again, the robot has hardly moved at all, and the IR avoidance detector still "sees" the detection on the right, and again outputs a command to turn left, and again sets its subsumption flag to active to signal the arbitrator. Again it is the highest priority layer signalling, and the arbitrator again passes its command to the motors to turn left, and that's what they do for the next 50 milliseconds. But only for the next 50 milliseconds.
This process continues each time through the subsumption loop, with the IR avoidance winning the priority contest in little 50 millisecond chunks, and passing its command to turn left to the motors each time, 20 times a second. After, let's say, 2 seconds (40 times through the loop, 40 turn left commands to the motors) the robot has finally turned left far enough that the next time through the loop, the IR avoidance detector no longer "sees" a detection on the right. So on that pass through the loop, the IR avoidance behavior's flag becomes FALSE (no detection) and some other lower priority behavior gets to pass its commands to the motors.
The output from the IR avoidance behavior goes away as soon as the trigger goes away. Is this clear?
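In code, each pass of the 20 Hz loop for that servo layer might look something like this rough sketch (the variable and function names are made up, not the actual robot code):

/* Rough sketch of the servo IR avoidance layer, run once per 50 ms     */
/* pass: the trigger is re-tested every pass, and the active flag and   */
/* command persist only while the detection persists.                   */
void ir_avoid(void)
{
    if (ir_detect_right()) {        /* trigger tested every pass        */
        ir_command = LEFT_TURN;     /* good for this 50 ms slice only   */
        ir_active  = 1;             /* signal the arbitrator            */
    } else {
        ir_active  = 0;             /* trigger gone, output goes away   */
    }
}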
Now the second example, a ballistic bumper behavior. The bumper gets a press on the right, and triggers the start of a ballistic behavior. The first segment outputs a command to backup and sets its subsumption flag to active, to signal the arbitrator, and also sets an internal TIMER to, let's say, one second. Because it is the highest priority layer asserting its flag, the backup command is passed by the arbitrator to the motors. So for the next 50 milliseconds, that's what the motors do.
Now, 1/20 second later the loop executes again, but the layer does _not_ test the trigger condition again, as in the case of the IR avoidance behavior; rather, it tests the TIMER to see if it has timed out. That is why it is a ballistic behavior. Its termination depends on an internal timer, rather than the absence of an external trigger. When it tests the timer it sees that it has not timed out, so the layer leaves the output command = backup, and leaves the subsumption flag = active. And for the next 50 milliseconds, the robot continues to backup.
This continues for another 18 times through the subsumption loop. On the 21st time through the loop, the timer has expired, and the layer sequences to the next state, which is a turn left command. It leaves the subsumption flag as active, resets the internal TIMER to, say, half a second, and sets its output command = turn left.
And it does that for the next 10 times through the loop, each time testing the TIMER, not the trigger condition. Finally, the TIMER times out, the ballistic behavior is finished, and its subsumption flag is set to FALSE.
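As a rough sketch (again with made-up names), the ballistic layer tests a timer rather than its trigger once it has started:

/* Rough sketch of the ballistic bumper layer, run once per 50 ms pass. */
/* Once triggered it ignores the bumper and sequences on its TIMER.     */
void bump_recover(void)
{
    if (bump_state == 0) {                  /* waiting for a trigger    */
        if (bumper_right()) {
            bump_command = BACKWARD;
            bump_active  = 1;
            bump_timer   = 20;              /* 1 second at 20 Hz        */
            bump_state   = 1;
        }
    } else if (bump_state == 1) {           /* backing up               */
        if (--bump_timer == 0) {
            bump_command = LEFT_TURN;
            bump_timer   = 10;              /* half a second            */
            bump_state   = 2;
        }
    } else {                                /* turning left             */
        if (--bump_timer == 0) {
            bump_active = 0;                /* ballistic sequence done  */
            bump_state  = 0;
        }
    }
}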
Your description (and critique) assumes incorrectly that all behaviors (layers, whatever) start execution with the presence of a trigger and end execution through some internal measurement. This is true for ballistic behaviors (the exception to normal behavior in my experience) but is not true for all other behaviors.
I still haven't been able to convince myself that you really understand how subsumption works. I think this is one of several areas of confusion.

Well, I agree 'cus I'm gonna ambush one of your basic tenets here. So let's be sure we're on the same page first. 8>)
best dpa

It seems to me that instead of using a ballistic behavior like that, it would be far better to implement this as a test for an environmental condition (with the general idea that ballistic behaviors are just always bad).
To do that, you would write code that would record the time of the last bumper hit, and then write a sensory test routine something like:
rt_bumper_active(ms_time_window)
Which tests to see if the right bumper was hit in the last so many ms.
Then you could use it like:
if (rt_bumper_active(1000))
    back_up();
So that would turn the ballistic behavior into a servo behavior. That is, you can write it like a servo behavior without having to have an internal timer dedicated to the behavior.
It also makes it very clean if you have higher priority behaviors which interrupt this back up behavior. If the higher priority behaviors take longer than a second, then this behavior times out and there is no issue with having to run the behavior again to deactivate the behavior or reset its internal timer.
Now, this may work exactly the same as the ballistic behavior, so it might be splitting hairs to not call this a ballistic behavior, but the idea is that the trigger, or test, is simply testing a condition of the environment in a more complex way and as a result, you don't need to use a ballistic behavior.
And, for example, if you wanted to back up slow at first, and then faster, you could write it as two servo behaviors:
if (rt_bumper_active(500))         // higher priority
    back_up_slow()
else if (rt_bumper_active(1000))   // lower priority
    back_up_fast()
instead of using a complex two state ballistic behavior with two timers in it.
And since it's all triggered off of environmental conditions, it's more likely the behaviors will be put to good use in conditions you didn't think about.
It seems to me that all ballistic behaviors could be turned into servo behaviors using temporal environmental tests structured something like that.
To do this, you would not only have to record sensory conditions (such as the bump switch hit) to allow you to do a temporal test on it, but you would probably also have to record when behaviors were last used, so you could do temporal tests on previous behaviors as well. Then you could code the above something like this instead:
if (rt_bumper_active(500))             // higher priority
    back_up_slow()
else if (back_up_slow_active(500))     // lower priority
    back_up_fast()
So it would do the back_up_fast() for 500 ms after the last use of the back_up_slow() behavior. In other words, the recent past activity of the behaviors becomes part of the sensory environment you can test for when triggering other behaviors. This would move all the timers and state machines into the sensory system and keep them out of the behaviors, and I think make the code cleaner and more flexible. And in general, if you had a single good clock, you wouldn't really need any other timers or state machines other than the implied state of a trigger being active.
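A minimal sketch of what those temporal tests might look like, assuming a free-running millisecond clock ms_clock; the timestamp variables and where they get updated are hypothetical:

/* Hypothetical sketch: timestamps live in the sensory side, and the    */
/* temporal tests just compare them against the free-running ms clock.  */
long last_rt_bump;         /* updated wherever the right bumper is read */
long last_back_up_slow;    /* updated whenever back_up_slow() runs      */

int rt_bumper_active(long window_ms)
{
    return (ms_clock - last_rt_bump) < window_ms;
}

int back_up_slow_active(long window_ms)
{
    return (ms_clock - last_back_up_slow) < window_ms;
}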
Thinking further along this line, it seems to me you could use the same idea to implement a hierarchy of goals in the system. The idea is that you would define a behavior like get_ball() which was a null behavior - that is, it did nothing, other than record the fact it had been used.
So you could specify some set of triggers that would activate the get_ball() behavior.
You could then write a whole set of behaviors that were triggered in response to the "get_ball()" behavior being recently used:
if (get_ball_active(1000) && see_ball_rt())
    turn_right();
So, for one second after the get_ball() behavior was used, the bot will perform some set of ball seeking behaviors.
The get_ball() behavior can be triggered over and over based on other conditions, and every time it's used, it triggers the entire set of ball fetching behaviors for another second. So if conditions changed, the bot might switch to avoid_bad_guy behavior for a few seconds, and then return to the get_ball behavior when it was done. You could use this type of trick to create a complex hierarchy of goals and sub goals, all without having to resort to hand-coding high level state machines and complex state change rules. It would all stay within the servo paradigm because all this knowledge about recent past behavior would be available as sensory conditions of the environment.
I do see a few issues with this approach that would need to be resolved, but I think attacking it from this direction might end up working rather well.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Hi Curt,
I think if you consider the actual implementation of what you are proposing that you will see that it is the same as I have described.
The bump event gets a time stamp, and then each time through the loop the code tests to see if that timestamp plus some constant (1 second in this case) is greater than the current timestamp.
You are still using a timer to essentially extend the width of the trigger, once the trigger has gone away. You are just moving the location of the timer from the behavior code to the detection code. It's still a timer. This does not turn a ballistic behavior into a servo behavior.
best dpa
Curt Welch wrote:


Yeah, I agree. That's why I wrote:
> > Now, this may work exactly the same as the ballistic behavior, so it > > might be splitting hairs to not call this a ballistic behavior
It simply moves the timer into the sensory side of the problem.

The only point is that it allows the behavior to be coded the same way for both so that on the behavior side, it all looks and works the same.
But thinking a bit more about this, it seems to me the distinction between the two is a bit arbitrary. It seems to me that all behaviors actually end up as ballistic behaviors in some sense anyway. For example, if you code the servo behavior with a 40 Hz loop, then each servo behavior becomes a ballistic behavior with a 1/40th of a second timeout and with 1/20th of a second average delay before the behavior starts.
And all sensory and behavior systems have inherent delays in them, which means there is always an implicit timer created by this delay between the time the external sensory condition happens (such as the bot runs into a wall) and the behavior produced (like spinning the wheels backwards). So what's the difference between the implicit response delay of the servo system and an explicit delay of 1 second hard-coded into the sensory system?
It looks to me like the only real difference is an arbitrary selection of time scale in the delays. All behaviors in effect are triggered by "timers" linked with sensory events. If the timer happens as an unintentional side effect, we call it a servo behavior, if the timer happens intentionally by the design, we call it a ballistic behavior.
Or, if we don't add extra intentional delays to the code, and only work with the delays inherent in the technology we are working with (processor cycle times, sensor sampling rates, etc.), then we can still think about it as a servo behavior, whereas once we do anything to intentionally add longer delays, we start to think about it as a ballistic behavior.
This is not important to programming bots or the definition of subsumption, it's just me thinking out loud. It has a lot of interesting parallels to the type of networks I play with for AI, which make very heavy use of variable delays in processing of all sensory data and the production of all behavior. All the learning in my networks happens by the system adjusting the processing delays, and as a result, a large network will have a large selection of "triggers" and "behaviors" (millions) with a wide range of different delays from milliseconds to minutes. In this case, there is no clear line between ballistic and servo behaviors; they are all the same with just a wide range of different delays associated with them.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
dpa wrote:

Yes, you've done a wonderful job of describing Subsumption and its timing in a Jones model of Subsumption.
I'm going to clip most of it, not because it isn't wonderful, and should make a nice future record for anyone wanting to know what a Jones model does, but because I agree with most of it in great detail. There are a few little pieces I want to highlight though.

Yes. There in that flag is half a bit of state.

A meaningless rewriting, because the flag is still set. (The state of active doesn't change.)

And this is the other half a bit of state information.
In the Jones model, that means one bit of state is kept in the behavior, and no state is required in the control system (where Brooks says the AFSM should go).
So in the Jones model, with the trigger in the behavior rather than the trigger in the AFSM, Jones has hidden one bit of state information that is tested by the arbitrator, which lets the arbitrator know if the trigger was active or not. So Jones has turned the whole behavior into a state machine, so he can have a stateless control system.

Abundantly.
Trigger condition met, set flag active.
Trigger condition not met, set flag inactive.
This is a classic two state state machine, much like a thermostat without hysteresis.

Well, it is arbitrary whether we say the trigger in the behavior sets the active flag, or the trigger in the behavior sets the FSM running where otherwise it would be quiescent, which as its first act, without fail, sets the active flag. In either case, it is the meeting of the condition of the trigger in the behavior that causes the active flag to be set. I still argue, therefore, if you have a trigger in the behavior, you have 1/2 a bit of state in the behavior.

I'd like to see a backup reference from Jones to that statement, "the layer does not test the trigger condition". I don't remember him addressing not running the trigger. On the other hand, the trigger can be made inert with an additional clause OR'd in. I do agree the trigger must not let subsumption go inactive while the state machine is running. Which is why I'm saying the state in the behavior is hidden, and not given the credit it deserves.

The "layer" doesn't sequence anything. The FSM may sequence. But the layer is the thing with the control system, and the trigger. The FSM is the thing in the control system for ballastic behaviors, refernce Jones Fig 3.1.

Misattribution here of what's doing what. Finally, the TIMER times out, the FSM in the control system is finished, and it sets the subsumption flag to FALSE. Or just as bad, it passes state information back to the behavior to cause the subsumption flag to go FALSE. But however it is implemented, the ending of the FSM causes the clearing of the active flag.
This is the other half of the hidden state bit.

Ah, well, yes. But this is the issue.
My description (and critique) assumes correctly that all behaviors (layers, whatever) start execution with the presence of a trigger. In Brooks this trigger is a conditional in an AFSM. In Jones and Arkin, this is a trigger in the behavior. Execution ends through some internal measurement. This is true for all behaviors; however, Brooks's internal measurement is in an AFSM, while Jones has the internal measurement in the behavior if servo, and in an AFSM like Brooks if ballistic.
Brooks starts with AFSM's as the default processor/module. If you have a real AFSM in a behavior, you have state "down the hole" so to speak in the AFSM, and the behavior has none. Execution starts with the presence of a trigger and ends through some internal condition on an AFSM transition, releasing the state machine. Brooks never retracts the idea of anything but AFSM's as the things down in the modules. So if I have a tendency to see AFSM's as the default, then I have good reason to do so.
However, Jones has added the idea of Servo Behaviors. Alright, you can have a servo behavior by stripping out all the state stuff from the AFSM, and just doing some calculated response. However, if you use this model over Brooks's, you have to have at least one bit of state in the Behavior (set active, clear active) for the servo behavior. Or at least half a bit of state in the Behavior (set active) and half a bit of state (clear active) for the ballistic behavior.

Yes, I hear that often.

In a John Cleese voice, he answers back, B^) Ah... Well yes, how lovely. We will definitely look forward to that, then, won't we!
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
Howdy,
RMDumse wrote:

The Jones model and the Brooks model are the same. They are only described in different ways. Jones' description is essentially a subset of Brooks'. (In particular, he doesn't use the "inhibit" or "suppress" input that Brooks defines, and suggests that there is always another way to accomplish the same thing).
You are confusing the use of the AFSM to implement a particular type of behavior (ballistic) with the use of AFSMs to implement subsumption itself in the absence of a multi-tasking operating system. This is key.
Since you seem unwilling to accept my experience on this I will quote from the Gospel of St. Jones, "Robot Programming" chapter 9 section 4.2. This follows a section that describes how to implement subsumption using a multi-tasking operating system.
Jones then continues: "In the absence of a sophisticated scheduler, it is possible to build a subsumption program by implementing the behaviors as finite state machines."
This is separate and distinct from the use of an Augmented Finite State Machine to implement a ballistic behavior. You are conflating the two.

This is the normal subsumption method, and describes 99% of all subsumption behaviors. The behavior is active during the presence of a trigger, and inactive during its absence. Only the ballistic behaviors continue to be active in the absence of a trigger. It seems you have focused your whole critique on that 1%.

Once you actually understand the overarching principles of subsumption you will be able to determine whether or not a particular feature fits the paradigm without having to scan through the holy texts to see if you can find a reference. However, until that happy day, let's look at Jones' sample code in the gospel of "Mobile Robots" appendix B, p 291:
void bump()
{
    while (1) {
        bump_check();
        if (bump_left && bump_right) {       /* bumped in front */
            bump_active = 1;
            bump_command = BACKWARD;
            wait(msec_per_rad / 2);
            bump_command = LEFT_TURN;
            wait(rev_4);
        } else if (bump_left) {              /* bumped on left */
            bump_active = 1;
            bump_command = RIGHT_TURN;
            wait(rev_8);
        } else if (bump_right) {             /* bumped on right */
            bump_active = 1;
            bump_command = LEFT_TURN;
            wait(rev_8);
        } else if (bump_back) {              /* bumped from behind */
            bump_active = 1;
            bump_command = LEFT_TURN;
            wait(rev_4);
        } else {
            bump_active = 0;
            defer();
        }
    }
}
Now this Jones example uses a multi-tasking OS rather than coding subsumption using AFSMs, so the meaning does not jump out as clearly, but it is the same as I've described. Take the first "bumped in front" section as an example. It sets the subsumption flag as active to signal the arbiter, and outputs a BACKWARD command for the motors. Then it calls the routine wait() with an argument. Wait() is described earlier in the appendix as follows:
void wait(int milli_seconds)
{
    long timer_a;

    timer_a = mseconds() + (long)milli_seconds;
    while (timer_a > mseconds()) {
        defer();
    }
}
So wait() suspends the task in the multi-tasking queue (that's what defer() does) until the timer expires. During this time, the bump() code is no longer testing the trigger which initiated the code. Indeed, the trigger in this case, the closure of a bumper switch, is long gone. Instead, each time through the multi-tasking queue, it is testing its timer.
Unlike a servo behavior, which tests its trigger every time through the loop, the ballistic behavior does not test the trigger again (i.e., run the bump_check() code in this example) until the entire ballistic behavior is completed.
So I repeat: "Now, 1/20 second later, the loop executes again, but the layer does _not_ test the trigger condition again, as in the case of the IR avoidance behavior."
Just to be completely clear, here is Jones' IR code from the same appendix, which does test its trigger each time through the loop:
void ir()
{
    int val;

    while (1) {
        val = ir_detect();
        if (val == 0b11) {                   /* detections left and right */
            ir_active = TRUE;
            ir_command = LEFT_ARC;           /* Jones's tones() omitted for clarity */
        } else if (val == 0b01) {            /* detection on the left */
            ir_active = TRUE;
            ir_command = RIGHT_ARC;
        } else if (val == 0b10) {            /* detection on the right */
            ir_active = TRUE;
            ir_command = LEFT_ARC;
        } else {
            ir_active = FALSE;
        }
        defer();
    }
}
Notice that there are no calls to wait() in this code. Each time through the loop it tests the ir_detect() condition. The single defer() at the bottom of the loop is required by the cooperative multi-tasking of this particular implementation.
To sum up: servo behaviors, the most common, are active when their trigger is active, and inactive when their trigger is inactive. There is no AFSM involved with servo behaviors, unless the subsumption code itself is implemented as AFSMs, as might be required for robot controllers without a multi-tasking operating system.
Ballistic behaviors are set active when their trigger becomes active, but thereafter the trigger is ignored, and an internal timer is used to set the behavior to inactive. Ballistic behaviors are implemented as Finite State Machines that are augmented by the use of an internal timer to change states, once initiated. Hence the name, Augmented Finite State Machines, or AFSMs. This is true whether or not the subsumption architecture itself is implemented using AFSMs or instead implemented using a multi-tasking OS.
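For illustration only, here is a rough sketch of the bump behavior written that way, as an explicit state machine driven by a timer instead of by calls to wait(). It is my own sketch, not Jones's code; the step function, the state names, and the BACK_MS and TURN_MS constants are invented, while bump_active, bump_command, bump_check(), BACKWARD, LEFT_TURN, and mseconds() are borrowed from the code quoted above.

    #define BACK_MS 500                          /* illustrative durations only       */
    #define TURN_MS 250

    extern int  bump_left, bump_right;           /* set by bump_check(), quoted above */
    extern int  bump_active, bump_command;
    extern long mseconds(void);
    extern void bump_check(void);

    enum bump_states { BUMP_IDLE, BUMP_BACKING, BUMP_TURNING };
    static enum bump_states bstate = BUMP_IDLE;
    static long bump_timer;

    void bump_step(void)                         /* one non-blocking step per loop pass */
    {
        switch (bstate) {
        case BUMP_IDLE:
            bump_check();                        /* refresh bump_left / bump_right      */
            if (bump_left || bump_right) {       /* trigger is tested only here         */
                bump_active  = 1;
                bump_command = BACKWARD;
                bump_timer   = mseconds() + BACK_MS;
                bstate       = BUMP_BACKING;
            }
            break;
        case BUMP_BACKING:                       /* trigger is now ignored              */
            if (mseconds() > bump_timer) {
                bump_command = LEFT_TURN;
                bump_timer   = mseconds() + TURN_MS;
                bstate       = BUMP_TURNING;
            }
            break;
        case BUMP_TURNING:
            if (mseconds() > bump_timer) {
                bump_active = 0;                 /* the AFSM, not the trigger, ends it  */
                bstate      = BUMP_IDLE;
            }
            break;
        }
    }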
<snip> you wrote:

No. Jones has added the terminology of Servo Behaviors in order to better explain the concepts of subsumption. The idea is present in Brooks' descriptions.
best dpa
dpa wrote:

I agree there is always another way to accomplish the same thing. In this case, the way Jones does it is by hiding a bit of state information in the behavior, which was previously part of a state machine in Brooks's implementation.

Sigh. That rope has two ends.

Alright, then I'll quote my private Epistle from St. Jones to Dumse here:
RMDumse wrote in past private email to Jones over a year ago:

Joe Jones, in reply to past private email, responded: I must confess that I don't have an especially scholarly understanding of FSMs or AFSMs; my knowledge is only deep enough to make them do what I want them to. But the way I think of them is this: State changes in a classical finite state machine depend only on the machine's current state and its inputs. When an input change leads to a state change, that change occurs instantaneously. Augmenting an FSM with a timer allows you to delay state changes. That is very useful if you want to build a robot program from FSM-like constructs. If you used strictly classical FSMs there would be no way to say, for example, "After the bump occurs back up for a while, then turn for a while."
So I took Joe at his word. He is not that interested in state machine structure, and is only concerned with the pragmatic issue of getting something to work the way he wants. His description of an AFSM was pretty much what I had suggested: a timer is available to cause events to time out.

That we can agree on. Jones's model is much easier to understand for people who are uncomfortable with the deeper principles of finite state machine theory. We can also agree that most behaviors are servo in the sense Jones describes, but I won't go to 99%. More like 80%. We can also agree that ballistic behaviors have explicit state machines, and these often use timers to advance or terminate.
Best regards, -- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

Replying to my own post: Ah ha. Here's something I just learned from posting with dpa.
Jones has identified two types of behaviors. One is servo, and the other is ballistic. What Jones left out, and what I hadn't seen until penning what we agree on, is that there is another, unmentioned type of behavior.
The use of the timer is what makes the ballistic behavior ballistic. That is, if the timer controls the changes of state, then once launched, only time or subsumption by a higher layer can change the outcome.
The previously unidentified behavior is one that has state, meaning it is an FSM, but does not (necessarily) have a timer associated with it. Since the timer is not used (or is not the only input used, say), the outcome of the FSM's progression will not be predictably ballistic, but instead reactive.
So here is a new class of behavior overlooked in the Jones simplification: the Reactive State Behavior.
I'll give a quick example.
Let's say we have a minisumo with four corner line sensors. We want to trigger a behavior when both front line sensors see the edge. In the first state of our reactive state behavior, we back both motors. The transition to the next state comes when one line sensor comes off the edge. We go to a "left-off, right-on" state if the left came off first, or to a "right-off, left-on" state if the right came off first. Or, finally, both might come off so close to the same time that we can't tell which was first, so we go to a "both-off" state. By going to these three different states while we wait for the sensors to tell us we're clear of the edge, we are being purely reactive. However, which state we go to saves information about the angle at which the sensors came off the edge, so when we do get both sensors off, we can choose what output to use next.

If the left came off first, then when the right comes off too we might want to continue to back on the right motor and slow the left, so we back toward the center of the arena. If the right came off first, then when the left comes off too we might want to continue to back on the left motor and slow the right, again backing toward the center of the arena. If they both come off at the same time, we might want to leave them both backing full to go straight toward the center. Up to this point, the state changes have been purely reactive.

From here, we could go ballistic and let a timer take over, or we could time out when we get to the center and release, or, if we wanted to stay completely reactive, we could wait until the rear sensors see the edge on the far side, set a forward motion, and release.
The details of the example aren't as important as the point, and the point is: there are useful behaviors which can be characterized as neither servo nor ballistic. Maybe some programmers haven't considered this approach, because Jones's simplified explanation didn't identify that it was possible. I think that's a pretty neat thing to come from our discussion.
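To show what I mean in code, here is a rough sketch of that edge behavior. It is only an illustration; every sensor, command, and flag name in it is made up rather than taken from any particular library.

    /* Sketch of a Reactive State Behavior: it has states, so it is an FSM,
       but every transition is driven by the sensors, not by a timer.       */

    enum { BACK_BOTH, BACK_RIGHT_SLOW_LEFT, BACK_LEFT_SLOW_RIGHT, FORWARD }; /* assumed motor commands */

    enum edge_states { EDGE_IDLE, EDGE_BOTH_ON, EDGE_LEFT_OFF_FIRST,
                       EDGE_RIGHT_OFF_FIRST, EDGE_BOTH_OFF };
    static enum edge_states estate = EDGE_IDLE;

    static int edge_active;                       /* arbitration flag       */
    static int edge_command;                      /* requested motor action */

    extern int front_left_on_edge(void);          /* assumed sensor reads   */
    extern int front_right_on_edge(void);
    extern int rear_sensors_on_edge(void);

    void edge_step(void)                          /* one non-blocking step  */
    {
        int fl = front_left_on_edge();
        int fr = front_right_on_edge();

        switch (estate) {
        case EDGE_IDLE:
            if (fl && fr) {                       /* both front sensors on the edge    */
                edge_active  = 1;
                edge_command = BACK_BOTH;         /* back both motors                   */
                estate = EDGE_BOTH_ON;
            }
            break;
        case EDGE_BOTH_ON:                        /* which sensor clears first records  */
            if (!fl && !fr)  estate = EDGE_BOTH_OFF;            /* came off together    */
            else if (!fl)    estate = EDGE_LEFT_OFF_FIRST;      /* the approach angle   */
            else if (!fr)    estate = EDGE_RIGHT_OFF_FIRST;
            break;
        case EDGE_LEFT_OFF_FIRST:
            if (!fr) {                            /* keep backing right, slow the left  */
                edge_command = BACK_RIGHT_SLOW_LEFT;
                estate = EDGE_BOTH_OFF;
            }
            break;
        case EDGE_RIGHT_OFF_FIRST:
            if (!fl) {                            /* keep backing left, slow the right  */
                edge_command = BACK_LEFT_SLOW_RIGHT;
                estate = EDGE_BOTH_OFF;
            }
            break;
        case EDGE_BOTH_OFF:
            if (rear_sensors_on_edge()) {         /* purely reactive way to finish      */
                edge_command = FORWARD;
                edge_active  = 0;
                estate = EDGE_IDLE;
            }
            break;
        }
    }

Every transition above is taken in response to a sensor reading, yet the machine carries state from one pass to the next, which is exactly what makes it neither servo nor ballistic.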
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

The tradition I was referring to was the tradition of a human designing the code. The point being that in goal-oriented systems a programmer would encapsulate the required knowledge in programs by programming either from the top down or the bottom up.
I am not using the terms bottom-up and top-down as you have, to refer to the direction the programmer is going when he designs and implements all the code. I am talking about a programmer writing a little bit of code that exhibits the desired behavior after learning it itself; the programmer has not been the source of the 'central control', because there is none (subsumption).
The top-down I was referring to was the AI idea that a homunculus or entity at the top is thinking and has goals. The programmer creates abstractions to represent knowledge and inserts sufficient cases into the structure to encapsulate a goal-driven system, programmed from this goal-oriented top. (FSMs might be an implementation technique for this.)
Brooks departed from that tradition, using a simple neuronal approach that theorists and neuroscientists had said was the basis of both reflexive and cognitive neural anatomy and system function. Brooks showed that no top-down, goal-oriented, pre-programmed knowledge was needed, only a little bottom-up neuronal design that could learn on its own and self-organize to solve problems that the programmer had not anticipated in detail.
Brooks showed that both reflexive and cognitive behaviors could emerge from simple neuronal circuits in a system, and he called it subsumption.
The theory is about how neurons are simple and run in parallel. To simulate it on a sequential computer one uses multitasking. But confusing subsumption with multitasking is missing what it is all about. It is all about how no central control is needed; all that is needed is lots of simple parallel units connected in a way that allows them to learn and to produce the 'emergent behavior' of 'complex adaptive systems.'
The theory was demonstrated with leg reflexes to allow locomotion, instead of top-down programming of behaviors by a programmer. Bugs and reptiles come with pre-programmed, top-down, goal-oriented behavior all the way down to simple reflexes. Mammals suppress much of this at the genetic level and force those creatures to experiment and play and learn to move their limbs and acquire the coordination needed for survival.
But what Brooks demonstrated was that a theory that covers not just the legs of insects but the brains of humans could be shown with simple toys that exhibited visible behavior but which had not been pre-programmed from the top down with goals and a pre-designed mind. Brooks demonstrated it using legs, but the exciting thing about the theory was not legs. It was that mind can emerge in this sort of system and does not have to be programmed from the top down. Instead it emerges from the bottom up, because of the nature of neurons to organize into groups, and for the groups to organize into societies, and for something resembling complex social behavior to emerge out of all of that.
I am not talking about how in Forth some programmers write their code bottom-up while others don't, or how in other languages some write all the code top-down. I was talking about how the theory of subsumption says that if you put learning circuits at the bottom, you don't have to encode all the knowledge from the top down yourself.
"goal oriented systems" have a goal because a programmer programmed it. In subsumption goals emerge.
Yes, in implementing a simple demonstration of the theory on a simple robot using simple serial microprocessors, Brooks used multitasking to increase the virtual parallelism. But to say that 'subsumption is really just multitasking' misses the whole point.
It is about no central control; it is about parallelism. If you want to demonstrate it, you can do so with processors running in parallel, or you can use multitasking to imitate that, which is what Brooks did. But the theory has also been demonstrated on much more complex supercomputers with real massive parallelism.
Subsumption is not just multitasking. Neurons are really parallel, and if you had a real parallel processor you wouldn't need or want multitasking to do subsumption.

Subsumption is done with multitasking if you have only a single or a few sequential processors because you are just simulating parallelism.

Perhaps this is semantic quibbling.
"You" "your neurons" can "do subsumption" without any "concepts" because it is happening at the neuronal level way way below the "consciousness" that is involved in what programmers do.
When a programmer "does subsumption" they are programming, and they have to have a mental model, which I called the concept of subsumption. If the programmer has not grasped the concept of subsumption they are not likely to be able to "do subsumption."

All human activity can be attributed to subsumption if you say that human behavior comes from the subsumption at the neuronal level.
If you think of the 'society of humans' as having a 'top-down,' 'goal-oriented behavior' dictated from above, one that requires a teacher to follow a 'program' designed to get students to repeat right answers by repetition, then I would say no, that is the sort of top-down, goal-oriented behavior following pre-programmed knowledge. On a social level it is clearly a case of humans making decisions and directing activity based on goals, which are Educational District Policies.
I would say it is a good example of exactly what subsumption was supposed NOT to be.

No. Just the opposite. It is pre-programmed goal-oriented behavior based on 'consciousness' and 'awareness of the social good' being the central control.
There has to be central control, with goals and policies and knowledge encapsulated in procedures to be followed when carrying out the commands of the central control.
It is a good example of what subsumption was not.

Well, I disagree and think you have missed what subsumption is altogether. But that is what I said when you said that you thought that subsumption was really just multitasking.

If you accept the theory of subsumption then you can't be a human without using subsumption in your neurons.
Subsumption is about self-organization, and emergent behavior and the lack of a central authority, the lack of pre-programmed knowledge put in by a creator.

If he is using his neurons he is using subsumption.
The problem here was that classic AI tried to model what humans were doing by starting with a consciousness that was in charge: a central, reasoning, God-like authority. Programmers were supposed to encode all that knowledge, and the goals, and the consciousness, in their programs.
What various Nobel prize winners and Brooks showed was that AI could be done with self-organizing simple things running in parallel and that no central authority and top-down God-like consciousness had to be designed in from the top because such things emerge from the bottom-up.

All human activity can be called subsumption. But you keep giving examples of humans and human consciousness which is a big disconnect from leg reflexes. Subsumption is not just multitasking.

Railroad switches are not part of neurological structure. No that's not subsumption.

That has nothing to do with subsumption.

If you still think subsumption is just multitasking, then I can understand why you keep saying that subsumption wasn't anything new. It was.

You are describing a neuron. ;-) Not subsumption.

Now connect enough of those neurons in parallel and you will see what Murray Gell-Mann called emergent behavior in complex adaptive systems; then you can see subsumption in action. Subsumption is what happens when you put countless numbers of these neurons together.

You seem to be focused on individual neurons and on a programmer programming them using multitasking. You seem to be missing what subsumption is all about. Hence the title of the thread "What is subsumption?"

You are describing a neuron not subsumption.

I understand that this is what you have implemented.
I understand that this is what you have done with multitasking.
I understand that this led to your confusion that subsumption was really just multitasking in this way.

Subsumption can be demonstrated with FSMs. True. But subsumption is a MUCH larger concept than FSMs. FSMs are fine for modeling neurons. Multiply that by 10^14, then raise it to the proper power to account for the superlinear speedup seen in parallel genetic learning systems, and you have subsumption at the level of a single human, but not of a society of humans.
Your FSM implementation is nice; it is virtual parallel FSMs done with multitasking on primitive small serial computers. FSMs are easy to confuse with multitasking; subsumption is something completely different.
Best Wishes
Jeff Fox was UltraTechnology, AI hardware and software, parallel Forth; working today at IntellaSys on parallel Forth hardware and software
P.S. we are still using some of your stuff in our test lab and I don't think your stuff is well understood or appreciated in the robotics community. I like your FSM implementation and virtual parallel machines.
But we don't appear to agree about what is subsumption. Too bad Brooks isn't here to say what he thinks the term means.
snipped-for-privacy@ultratechnology.com wrote:

I missed the part where Brooks said neurons were simple. I think his point was to simplify in general, and the "neurons" in Brooksian subsumption emulate extremely simplistic versions of what occurs in nature. Same with Tilden-like UJT transistors in some BEAMish neuron. But what happens in the neuron -- first rejecting impulses not meant for it -- is a step in the "bottom-up" process you eloquently described.
I think the basics of subsumption and its bottom-up approach have been well demonstrated. But what about when you get from the bottom to the middle? Isn't subsumption stuck there, too?
-- Gordon
