What is Subsumption?

"Popper is perhaps best known for repudiating the classical observationalist-inductivist account of scientific method by advancing empirical falsifiability as the criterion for distinguishing scientific theory from non-science"

Reply to
RMDumse

Fair enough.

However, as you know, there have been bottom-up approaches to many languages and disciplines. Feynman's report on the Challenger disaster (1986), written shortly before his death (1988), explained how airplanes were built "bottom up". Of course, the "bottom up" nature of Charles Moore's Forth has been around since the 1970s.

I don't think there is any real requirement one way or another about Subsumption to be parallel, simulated or not. You can do Subsumption on a single processor, or several processors, by replacing results in registers, by messaging across networks, and a host of other ways I probably haven't seen.

I disagree here. You can do subsumption without the concept of subsumption.

For instance, what is the teacher doing when she asks the class a question, and continues to select students until she gets the right answer? Is that Subsumption? Well, yes, perhaps. Inappropriate levels of results are replaced until one at the right level is found, then progress is made. If it isn't Subsumption, it is something very close. But would you normally see it as Subsumption? Or be so bold as to say you can't teach without Subsumption concepts? Maybe not.

Or is a repairman using subsumption? He sets up tests to find an appropriate response, then repairs the item indicated by his following the elimination process. Is that Subsumption? Well, yes, in a stretched sort of way. But then let me ask, how stretched is Subsumption, if everyone you ask gives you a different answer on how best they implemented it?

If a rail is normally set to go straight through, but when an oncoming train is detected a side track is selected for one of the two trains, is that Subsumption? It is a very limited application, but it does seem to fit the Subsumption model, so maybe it is. The railroads got by all this time without the idea of Subsumption, and yet they did it.

Electromechanical voltage regulators date back decades. With a series of resistors and relays, operating at varying levels, a sort of PWM, the output voltage is "subsumed" so it remains around the desired voltage. Surely this application of Subsumption preceded the very concept of Subsumption.

Okay, these examples are all a bit stretched. And I could probably find better examples if I put some time and thought into it. All I want to suggest is perhaps Subsumption really isn't that new a concept. Or put another way, it is a new concept, but it has many precursors.

I don't think so. Basically in Subsumption we start with a trigger. When the trigger is active, an output is generated. When the trigger is not active, the output is not applied.

This sounds exactly like a class of two-state state machines.

The transition into and out of the active state is a condition, just like a thermostat depends on too_hot? and too_cold? to go from passive to heating.
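To make that concrete, here is a minimal sketch of such a two-state machine in C. The names, thresholds, and the heater_output() hook are just illustrations, not from any particular robot:

extern void heater_output(int on);   // hypothetical actuator hook

enum thermo_state { PASSIVE, HEATING };
enum thermo_state thermo = PASSIVE;

void thermostat_step(int temp, int lo, int hi)
{
    switch (thermo) {
    case PASSIVE:
        if (temp < lo) thermo = HEATING;   // too_cold? fires the transition
        break;
    case HEATING:
        if (temp > hi) thermo = PASSIVE;   // too_hot? fires the transition
        break;
    }
    heater_output(thermo == HEATING);      // output applied only while active
}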

Subsumption adds the requirement that the condition also include other conditions in the decision: subsuming the output, or inhibiting the reading of inputs, based on the state of other machines.

Therefore I suggest Subsumption is a special subset of Finite State Machines, with a priority scheme built into their outputs, whose activation depends on all higher-priority machines being in the inactive state. This makes Subsumption a subset of a subset of a subset ... of Finite State Machines.
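Here is a minimal sketch of that reading, with a hypothetical machine/flag layout (just one way the priority scheme could be coded):

#define NUM_MACHINES 3            // index 0 is the highest priority

struct machine {
    int active;                   // the one bit of "triggered" state
    int output;                   // computed every pass, used or not
};

struct machine m[NUM_MACHINES];

// A machine's output is expressed only when every higher-priority
// machine is in its inactive state.
int arbitrate(void)
{
    int i;
    for (i = 0; i < NUM_MACHINES; i++)
        if (m[i].active)
            return m[i].output;
    return 0;                     // default when nothing is triggered
}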

Subsumption can be made with FSM's, but an FSM can be much more than a Subsumption. The FSM is the larger concept.

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

These all seem like examples of garden-variety conditional execution, if-then-else. Perhaps you are confusing subsumption with conditional execution. I notice that all your examples only have two behaviors, which in my experience is really not enough to get the benefit of the subsumption paradigm. If you only have two behaviors then you probably can just use if-then-else.

Subsumption and FSMs can both be made with if-else-then conditional execution. So by this logic conditional execution is the larger concept? I'll grant that, but I don't really understand your point.

That said... I actually do code all my "behavior-based" robot "subsumption architecture" behaviors using finite state machines. It's how all the behaviors work on all my robots. It seems to me the simplest way to write the code.

dpa

Reply to
dpa

Ah, the point is slightly off. Going back to the opening sentence, "Brooks's subsumption architecture provides a way of combining distributed real-time control with sensor-triggered behaviors."

Brooks may have claimed FSM's lie below his behaviors in Subsumption, although I have argued here that he has attempted to remove all the state information possible from the FSM's, as evidenced by his examples, and by Jones's explicit discussion of preferring servo over ballistic response.

Now if you've removed all the "stateness" and reduced the FSM's to purely servo outputs, then you have a no-state machine with no-state transitions. If you've got no transitions, you've got no conditionals. So where are the triggers for transitions, the changes, which characterize a state machine? Where do you put the conditional that turns on and off the calculated output?

If you're Brooks, you move it up a level, and call it a triggered behavior in Subsumption.

If you are an FSM guy, perhaps you look at behaviors and Subsumption and think: huh, here is a broken-off piece of an FSM. It looks like the initial state before anything is triggered. And then when it triggers it transitions into the rest of the machine. It looks like a little two-state machine, with nontriggered and triggered states. Somehow it must save its state of being active, or it could be retriggered.

One of the most difficult things in doing FSM design is approaching the problem and identifying how many states are in a state machine, and the larger problem of which states go with which machine. It is always possible to create too many states in a machine. Often when you see a proliferation of states, you should check whether you need to split the problem, because you may be trying to make one complex machine do the work of two simple ones. Conversely, you can always over-factor, and split a machine into several machines, when one machine would have been a simpler and more elegant solution.

When you see one machine start off another machine, and then be unable to return to its initial state until the other machine finishes back in its initial state, you should suspect you have too much factoring; that you don't have two machines, but one split off from the other.

So a behavior itself is a two-state machine. Which state it is in is determined by an input conditional, to make a transition into the behavior's "nonFSM" to calculate the output, or to run a larger FSM with its own transitions.

Now this brings me to a really important point I hadn't noticed before. This is strong evidence that behaviors in Subsumption are a broken-off piece of a machine. While the trigger in the behavior initiates the entry into the FSM and the act of Subsumption, if the FSM has any state to it, the FSM and _not_ the behavior trigger determines when it comes out of Subsumption.

The behavior has a trigger in it which starts the Subsumption. Once triggered, though, there remains the memory of being triggered; state is kept. The behavior remains subsuming until the FSM is finished. The FSM has control of releasing Subsumption (if it has state in it at all), and without that go-ahead, the behavior cannot go back to the state where it releases Subsumption and can again be triggered.

A ballistic behavior like Escape can be triggered by a bump switch, causing a subsumption of a lower behavior like wander. However, the release of the trigger input does not cause the end of the behavior or the release of Subsumption. The FSM finishes before the two-state behavior machine can release.

Evidence: the bumper being pressed again does not reset Escape to its initial state. The bumper being released does not set Escape to its initial or final state. Nor does any intermediate noise cause a state change. The behavior itself therefore must have state, two states, ready-to-trigger and triggered. But it has no way to reset itself; it cannot retrigger. The release must come from communication from the FSM regarding its progression of state. Therefore the state in the behavior is an erroneously separated part of a larger whole.

Conditional logic is a necessary part of Subsumption. Conditional logic is a necessary part of FSM's. But by the same token, in programming, instructions of all kinds are needed to make programs. Does that make instructions the larger concept? Or a subcomponent of programs?

In the realm of programming, if-else-then allows for bifurcation of code execution. When you enter a state machine program periodically to update states and outputs, you first must find what state it is in, before you can determine what conditions to check to know if a transition is necessary. A strong feature of many folks' FSM code is that this finding of state is made with a large if-else-then structure (often hidden by a "switch"-like statement). All the transitions firing are predicated on a Boolean result, and the action, or lack of it, is if-then based.
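As a sketch of that dispatch in C (states and transition tests hypothetical):

extern int cond_ab(void), cond_bc(void), cond_ca(void);  // Boolean transition tests

enum { STATE_A, STATE_B, STATE_C } fsm_state = STATE_A;

void fsm_update(void)
{
    switch (fsm_state) {                      // first, find what state we are in
    case STATE_A:
        if (cond_ab()) fsm_state = STATE_B;   // then test only the transitions
        break;                                // leaving that state
    case STATE_B:
        if (cond_bc()) fsm_state = STATE_C;
        break;
    case STATE_C:
        if (cond_ca()) fsm_state = STATE_A;
        break;
    }
}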

You can't have a transition from state to state without a conditional between them. But you can't have a state machine if all you have are conditionals. You need state information for the conditionals to work on as well, and without it, FSM's would be meaningless.

Conditional execution is a necessary, but insufficient, component of FSM's.

Yes.

I need to retract the comment in my previous post about parallelism. Seems I'm hoisted by my own opening post: "Thus, the architecture is inherently parallel and sensors interject themselves throughout all layers of behavior."

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

The tradition I was referring to was the tradition of a human designing the code. The point being that in goal-oriented systems a programmer would encapsulate the required knowledge in programs by programming either from the top down or the bottom up.

I am not using the terms bottom up and top down as you have, to refer to the direction the programmer goes when he designs and implements all the code. I am talking about a programmer writing a little bit of code that exhibits the desired behavior after learning it itself; the programmer has not been the source of the "central control", because there is none (subsumption).

The top-down I was referring to was the AI idea that a homunculus or entity at the top is thinking and has goals. The programmer creates abstractions to represent knowledge and inserts sufficient cases into the structure to encapsulate a goal-driven system, programmed from this goal-oriented top. (FSM might be an implementation technique for this.)

Brooks departed from that tradition with a simple neuronal approach, one that theorists and neuroscientists had said was the basis of both reflexive and cognitive neural anatomy and system function. Brooks showed that no top-down, goal-oriented, pre-programmed knowledge was needed, only a little bottom-up neuronal design that could learn on its own and self-organize to solve problems that the programmer had not anticipated in detail.

Brooks showed that both reflexive and cognitive behaviors could emerge from simple neuronal circuits in a system, and he called it subsumption.

The theory is about how neurons are simple and run in parallel. To simulate it on a sequential computer one uses multitasking. But confusing subsumption with multitasking is missing what it is all about. It is all about how no central control is needed; all that is needed is lots of simple parallel units connected in a way that allows them to learn and to have "emergent behavior" in "complex adaptive systems."

The theory was demonstrated with leg reflexes to allow locomotion instead of top-down programming of behaviors by a programmer. Bugs and reptiles come with pre-programmed top-down goal-oriented behavior all the way down to simple reflexes. Mammals suppress much of this at the genetic level and force those creatures to experiment and play and learn to move their limbs and get the coordination needed for survival.

But what Brooks demonstrated was that a theory that covers not just the legs of insects but the brains of humans could be demonstrated with simple toys that showed visible behavior but which had not been pre-programmed from the top down with goals and a pre-designed mind. Brooks demonstrated using legs, but the exciting thing about the theory was not legs. It was that mind can emerge in this sort of system and does not have to be programmed from the top down. Instead it emerges from the bottom up, because of the nature of neurons to organize into groups, and for the groups to organize into societies, and for something resembling complex social behavior to emerge out of all of that.

I am not talking about how in Forth some programmers write their code bottom up while others don't, or how in other languages all the code is written top-down. I was talking about how the theory of subsumption says that if you put learning circuits at the bottom, you don't have to encode all the knowledge from the top down yourself.

"goal oriented systems" have a goal because a programmer programmed it. In subsumption goals emerge.

Yes, in implementing a simple demonstration of the theory on a simple robot using simple serial microprocessors, Brooks used multitasking to increase the virtual parallelism. But to say that "subsumption is really just multitasking" misses the whole point.

It is about no central control, it is about parallelism. If you want to demonstrate it you can with processors running in parallel or you can use multitasking to imitate that which is what Brooks did. But the theory has been demonstrated on much more complex super computers and with real massive parallelism.

Subsumption is not just multitasking. Neurons are really parallel, and if you had a real parallel processor you wouldn't need or want multitasking to do subsumption.

Subsumption is done with multitasking if you have only a single or a few sequential processors because you are just simulating parallelism.

Perhaps this is semantic quibbling.

"You" "your neurons" can "do subsumption" without any "concepts" because it is happening at the neuronal level way way below the "consciousness" that is involved in what programmers do.

When a programmer "does subsumption" they are programming, and they have to have a mental model, which I called the concept of subsumption. If the programmer has not grasped the concept of subsumption, they are not likely going to be able to "do subsumption."

All human activity can be attributed to subsumption if you say that human behavior comes from the subsumption at the neuronal level.

If you think of the 'society of humans' that has 'top-down' 'goal-oriented behavior' dictated from above, requiring a teacher to follow a 'program' designed to get students to repeat right answers by repetition, then I would say: no, that is the sort of top-down goal-oriented behavior that follows pre-programmed knowledge. On a social level it is clearly a case of humans making decisions and directing activity based on goals, which are Educational District Policies.

I would say it is a good example of exactly what subsumption was supposed NOT to be.

No. Just the opposite. It is pre-programmed goal-oriented behavior based on 'consciousness' and 'awareness of the social good' being the central control.

There has to be central control with goals and policies and knowledge encapsulated in procedures to be followed when following the commands of the central control.

It is a good example of what subsumption was not.

Well, I disagree and think you have missed what subsumption is altogether. But that is what I said when you said that you thought that subsumption was really just multitasking.

If you accept the theory of subsumption then you can't be a human without using subsumption in your neurons.

Subsumption is about self-organization, and emergent behavior and the lack of a central authority, the lack of pre-programmed knowledge put in by a creator.

If he is using his neurons he is using subsumption.

The problem here was that classic AI tried to model what the humans were doing by starting with a consciousness that was in charge. A central reasoning God-like authority, and programmers were supposed to encode all that knowledge and the goals and the consciousness in their programs.

What various Nobel prize winners and Brooks showed was that AI could be done with self-organizing simple things running in parallel and that no central authority and top-down God-like consciousness had to be designed in from the top because such things emerge from the bottom-up.

All human activity can be called subsumption. But you keep giving examples of humans and human consciousness which is a big disconnect from leg reflexes. Subsumption is not just multitasking.

Railroad switches are not part of neurological structure. No, that's not subsumption.

That has nothing to do with subsumption.

If you still think subsumption is just multitasking then I can understand why you keep saying that subsumption wasn't anything new. It was.

You are describing a neuron. ;-) Not subsumption.

Now connect enough of those neurons in parallel and you will see what Murray Gell-Mann called emergent behavior in complex adaptive systems; then you can see subsumption in action. Subsumption is what happens when you put countless numbers of these neurons together.

You seem to be focused on individual neurons and on a programmer programming them using multitasking. You seem to be missing what subsumption is all about. Hence the title of the thread "What is subsumption?"

You are describing a neuron not subsumption.

I understand that this is what you have implemented.

I understand that this is what you have done with multitasking.

I understand that this led to your confusion that subsumption was really just multitasking in this way.

Subsumption can be demonstrated with FSM's. True. But subsumption is a MUCH larger concept than FSM's. FSM's are fine for modeling neurons. Multiply that by 10^14, then raise that to the proper power to account for the superlinear speedup seen in parallel genetic learning systems, and you have subsumption at the level of a single human, but not a society of humans.

Your FSM implementation is nice; it is virtual parallel FSM's done with multitasking on primitive small serial computers. FSM's are easy to confuse with multitasking; subsumption is something completely different.

Best Wishes

Jeff Fox was UltraTechnology (AI hardware and software, parallel Forth), working today at IntellaSys on parallel Forth hardware and software

P.S. we are still using some of your stuff in our test lab and I don't think your stuff is well understood or appreciated in the robotics community. I like your FSM implementation and virtual parallel machines.

But we don't appear to agree about what is subsumption. Too bad Brooks isn't here to say what he thinks the term means.

Reply to
fox

I missed the part where Brooks said neurons were simple. I think his point was to simplify in general, and the "neurons" in Brooksian subsumption emulate extremely simplistic versions of what occurs in nature. Same with Tilden-like UJT transistors in some BEAMish neuron. But what happens in the neuron -- first rejecting impulses not meant for it -- is a step in the "bottom-up" process you eloquently described.

I think the basics of subsumption and its bottom-up approach have been well demonstrated. But what about when you get from the bottom to the middle? Isn't subsumption stuck there, too?

-- Gordon

Reply to
Gordon McComb

Ah. I think we may be zeroing in.

Your reference to "purely servo outputs" suggests that servo behaviors do not have triggered (active) and non-triggered (non-active) states, in addition to their servoing. That only ballistic behaviors have triggers.

That is not the case.

Only the lowest priority behavior can be continuously triggered, i.e. always active. All others must have triggered (active) and non-triggered (inactive) states. Otherwise the lower priority behaviors would never run.

That's how subsumption works.

This is not obvious if you think only in terms of two behaviors; a subsuming and a subsumed. The power of subsumption is that it is a hierarchical manager of lots of behaviors.

void subsumption() {
    while (1) {
        cruise(); navigate_path(); sonar_path(); sonar_avoid();
        ir_avoid(); bump_recover(); jammed_recover();  // lowest to highest priority
        arbitrate();
    }
}

Reply to
dpa

To clarify the above example: behaviors are listed from lowest priority, cruise(), to highest priority, jammed_recover().

The cruise() behavior tries to ramp the motors up to full speed, straight ahead. It is the only behavior that can be continuously triggered, as there are no lower priority behaviors that it could mask. That is _why_ it can be continuously triggered, though nothing requires it to be. It _can_ be continuously triggered, but _none_ of the other normal subsumption behaviors can be.

cruise() can only output to the motors if the navigate_path() behavior is happy (i.e., non-triggered). That effectively means only when the navigation target is straight ahead. And if the navigate_path() behavior wants to turn left or right or slow down or whatever, it can only do so if the sonar_path() behavior is not triggered (i.e., can't find a long path) and everything above it in priority is also not triggered.

Likewise, sonar_path() can only output when sonar_avoid() and up are not triggered, which can only output when ir_avoid() and up are not triggered, which can only output when bump_recover() and up are not triggered. In this example, jammed_recover() is the highest priority, and can run whenever it darn well pleases.

The key is that each behavior level has a trigger that determines when the behavior is active and inactive, and that trigger is separate and distinct from the servoing values it may send to the motors when it is active.

It is during these untriggered, inactive moments that all lower priority behaviors have a chance to output to the motors.
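Sketched in C-like pseudocode, one way (certainly not the only way) this flag-and-arbitrate scheme might be coded:

enum { JAMMED, BUMP, IR, SONAR_AVOID, SONAR_PATH, NAV, CRUISE, NLAYERS };

int layer_active[NLAYERS];            // trigger flags, refreshed every pass
int layer_command[NLAYERS];           // motor commands, computed every pass

extern void motor_output(int cmd);    // illustrative motor hook

// Scan from highest priority down; the first active layer
// wins the motors for this 50 millisecond slice.
void arbitrate(void)
{
    int i;
    for (i = 0; i < NLAYERS; i++) {
        if (layer_active[i]) {
            motor_output(layer_command[i]);
            return;
        }
    }
}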

best, dpa

Reply to
dpa

Maybe you missed the implied context of my statement that neurons are simple: simple compared to complex adaptive systems made up of large numbers of interconnected neurons, systems which exhibit emergent behavior that individual neurons do not have. The context was that neurons are by definition simpler than the complex adaptive systems built from large numbers of neurons.

I think you have taken the statement out of context to make it into something else, an absolute statement that isn't true. What I had attempted to make was simply a statement of relative complexity, not absolute.

Sure, if you study neuroscience you know neurons are fabulously complex, especially the mechanism that promotes dendrite interconnection. But again, the context was that a whole is more complex than its parts. 200 billion interconnected neurons is more complex than a single neuron which in comparison is quite simple.

Yes.

Yes. The theory says it is all subsumption except for what Edelman called the primary repertoire programmed into the DNA in his book Neural Darwinism: The Theory of Neuronal Group Selection.

I think Brooks's point is that subsumption is the mechanism all the way to the top, where humans use the name consciousness to describe the emergent behavior of the perception of self. The classic AI approach was to have a programmer encode knowledge directly and build smarts from the top down. Subsumption says that you build in some smart reflexes at the bottom and let things organize their way up to the top with subsumption.

I think it has also been demonstrated that humans have a lot of genes that cause the neurons to be predisposed to organize in certain ways. An example would be the superset of language genes that humans have when compared to, say, gorillas. But everything above the primary repertoire is done through self-organization, as described in the theories of neuronal group selection and subsumption.

It is a lovely theory, demonstrated on systems from little crawling robots with nothing more than subsumptive legs, to virtual animals with hearing and vision and memory and the ability to interact with other animals and humans and learn, to large-scale AI assisting humans in solving difficult technical problems. It doesn't stop at virtual insect legs.

Subsumption is not just multitasking! Multitasking is just the poor man's way of simulating real parallelism. In this context multitasking is a technique for efficiently implementing more than one neuron on a sequential processor. Subsumption is more than just multitasking.

If someone looks at subsumption in the legs of a crawler and thinks that's all there is to subsumption, then they miss that thinking is also subsumption. And thinking is NOT multitasking on a sequential processor.

Best Wishes

Reply to
fox

I would say Brooks has pretty much given up on trying to implement this idea by now, i.e., high-level and/or human-like symbolic operations using the strictly subsumption approach as originally described. To wit, after 20 years of research, he doesn't say the sort of things today he was saying in the mid-80s. Nowadays, Brooks talks about "what's missing". OTOH, we do have Edelman's work, which for my 2-bits is the best idea around.

OK, if you lump both subsumption and [especially] NGS into the picture. But, of course, this is somewhat more than just Brooksianism.

Reply to
dan michaels

Or maybe not.

But we can hope clarification will come from further discussion.

Ah, no, not at all.

I am beginning to see why this is a difficult explanation. To understand the structure of Subsumption, you have to blend two very different presentations of it: Brooks's, which changes/evolves as he works on it, and Jones's, which is a very simplified (practical) version of it.

The use of "behaviors" in place of "FSM's" is causing confusion. The meaning of behavior is closer to that of layer, as Brooks originally uses layers. Subsumption is the architecture of layers (behaviors), and the interconnects are either internal to a layer (behavior) or run from higher layers (behaviors) to lower ones. Inside the behaviors are FSM's, also called modules by Brooks and Control Systems by Jones, which are the processors of computation and change.

Subsumption layers (behaviors) have parts. There is only one behavior per layer, and the names layer and behavior are used interchangeably. But there may be many state machines in a layer (behavior) (per Brooks, see Cambrian Intelligence, Ch 5, Fig. 2 caption, pg 94: "We wire finite state machines together into layers of control. Each layer is built on top of existing layers. Lower level layers never rely on the existence of higher level layers.")

If you will look in Brooks, Cambrian Intelligence, Ch 2, Fig. 1, pg 30, you will see a) an AFSM and b) an AFSM with Subsumption connections attached to it. Note the Subsumption connections are explicitly external to the AFSM. So the elements of Subsumption are not part of, or internal to, the FSM's. The same thing is shown in Ch 1, Figure 4, pg 15, where the subsumption connections are shown to be outside the module. (Remember Brooks uses module and AFSM interchangeably.)

So with these definitions/clarifications in mind, I'll try again to express my meaning.

I am saying purely servo control systems don't have triggers or conditionals or states. The behavior has the trigger, and allows the control system (purely servo in this case) to make an active output (per Jones, Robot Programming..., Ch 3, Fig. 3.1). This presence of the trigger in the behavior was discussed extensively in the "What is a Behavior?" thread. I was chided for wanting to take this trigger out of the behavior (to move it into the control system AFSM). The consensus was a behavior always has a trigger (except for the lowest-level one). So in a behavior with a purely servo control system, the active/inactive state is kept by the trigger. So it seems the behavior, which was not described as a state machine, actually has a bit of state information hidden in it: active and inactive.

On the other hand, if you have a ballistic behavior, you must have a state machine in your control system. Most ballistic behaviors, like Escape, have an FSM with several states, and stay in each state based on some conditional test for a period of time. Each transition to the next state has a conditional that allows it to advance. The conditional part of a transition has about the same purpose as a trigger in a behavior. In fact, they are so similar, I would have to say they are essentially the same thing.

Now, in the ballistic case, the behavior level has the trigger-to-active part, but it is the AFSM which has the conditional to release the behavior to the inactive state. What I am saying is this is a very odd flaw. The behavior has 1 bit of state in the servo case, and 1/2 a bit of state in the ballistic case.

Agreed. No mystery here.

In all but the lowest level behaviors, the test for being triggered is determined in the behavior. Not the servo/FSM control system.

Agreed... Well, no, actually the lower priority behaviors always run. All behaviors always run.

It's just that their outputs never get selected by arbitration if higher priorities haven't released from being active.

Agreed.

Agreed, with the proviso that your example is one of many ways to implement Subsumption.

If outputs are simply placed in the output registers (quickly enough that they are not physically expressed), and the order of execution is by priority as listed, lowest to highest, there is no need for arbitrate(). Put another way, if the code runs in 1/10000th of a second, and the motors can only express changes on the order of 1/100th of a second, subsumption can be controlled simply by letting the last behavior to write the register win.
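A sketch of that ordering trick, with illustrative behavior names:

extern void cruise(void), avoid(void), escape(void);

// Each behavior writes the shared motor registers only when its trigger
// is active; running lowest priority first means the last (highest
// priority) writer wins before the motors can physically respond.
void subsumption_pass(void)
{
    cruise();    // lowest priority writes first
    avoid();
    escape();    // highest priority overwrites last
}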

On the other hand, if you have an arbitrate() which sends only the highest-priority active outputs to the output registers as a final step, and the output registers are written only once per time slice, there's no need to run the behaviors in any particular order: arbitrate() will select the outputs on the basis of the highest priority level it finds active.

On the other hand, running a very Brooks-like implementation, where higher levels reach in between state machines with inhibits and suppression of messages, you might want to run in the opposite order of priority, from highest to lowest, to allow those messages to be generated for the lower-level machines before they run in this time slice. But this is not much of a problem, because if the higher routine runs last, the new message may already be loaded in the input by the time the lower machine runs on the next pass.


Agreed, it's obvious as shown.

Well, the higher behaviors can be continuously triggered, but that would mean nothing below them in priority would ever be expressed.

BTW, couldn't we have the lowest level triggered or triggerable as well? Then the default behavior would be "stop" or "sit" because there is no motor output expressed?

Still agreeing, doing fine.

Yes, I'm with you right along.

Right. The trigger is in the behavior (per Jones, and not Brooks). The servoing will be in the FSM the behavior calls (which probably isn't an FSM at all).

Okay. Nothing much there to disagree with.

Have I said anything that indicates I don't understand Subsumption? I don't think I've ever convinced you I understand Subsumption. I think I understand it well enough to criticize it, or, as above, to suggest alternative implementations to the example you have proposed.

The point I found most fascinating, and was trying to make, is that a bit of state has been fractured out of the original AFSM's and shuffled up to the behavior level. The behavior level has either 1 bit or 1/2 a bit of state information: active or not active. I've often talked to you before about how people hide state information without considering it as such. Here is an example. No one seems to have noticed that the trigger Jones insists is in the behavior level gives it state. But it is as if a bit of every AFSM has been pulled out of it, because the AFSM decides when to "untrigger". The behavior cannot let go of its active subsumption over lower levels, once triggered, unless the AFSM is finished. The trigger can decide a layer should subsume. The trigger going away cannot decide the layer should stop subsuming. That permission comes from the control system. Logical consistency would say a better way to go would be to remove the trigger from the behavior, and put both halves of it in the control system, and all control systems would be AFSM's, having their state nature returned to them.

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

Where did you get the idea learning is connected with Subsumption? In all the Brooks books I've read, I only remember one single subsection on learning: Cambrian Intelligence, Ch 6.5, pg 180, and the legs learning gaits you describe, on page 181. This single paragraph describes learning "to walk with subsumption architecture along with the behavior activation schemas", implying to me that the behavior activation schemas were the learning part.

That subsection ends on the following paragraph: "Learning in subsumption is in its early stages but it has been demonstrated in a number of different critical modes of development."

I, like you, had heard of emergence generating gaits without human intervention or specific programming. (I heard this from Larry Forsley, by the way.) But in my searching, this was the only section I found on it in Brooks. I see by looking ahead in Arkin that he addresses this effort in his Ch 8 on Learning, but I'm still in Ch 7, so cannot comment on it other than noting its presence.

In fact, because of this rumor of emergence, I paid close attention to Genghis and its state machine structure. I would argue (and have in previous threads) there's no emergence in Genghis. His tripod gait is explicitly programmed internally. The only sense in which it emerges is that it is hidden in a factoring of the cycle into multiple machines, rather than being obviously programmed in a single place. But it is very explicitly programmed, nonetheless.

I think the biggest evidence of this is something I mentioned elsewhere in this thread: it can only do one gait. If it were truly emergent, why did it pick the fastest, least stable gait first? And why didn't it emerge the ability to change gaits?

Okay, granted.

I'm willing to let the examples go at that.

Not this. This sounds pretty outrageous to me. If I'm human, I'm human. What I accept or not does not change my condition of being human. There's no clear proof I know of that says subsumption is what my neurons are doing.

Well, here we disagree.

Subsumption is definitely not about self organization.

Subsumption is definitely not about self emergent behavior. (Although there is lip service for it.)

Subsumption is definitely not about the lack of pre-programmed knowledge put in by a creator.

But yes, Subsumption is about the lack of a central authority. This point was central to Brooks writings.

I don't think so. I had Brooks's Cambrian Intelligence, Ch 1, Figure 4 in mind when I wrote that. Subsumption certainly seems to have neuron roots. But the described interactions are from Brooks's own work there.

Haven't read Murray Gell-Mann so can't say.

Which is where we seem to have a very different view of Subsumption: your emphasis on learning, which I do not see supported with any vigor in Brooks.

Thank you. Yes, I would have to say, from a financial view of results, my FSM language and virtual parallel machines are not well understood or appreciated in the robotics community. Don't get me wrong, some enlightened people, mostly professionals and university professors, have picked it up and done amazing projects with it. (We even sold one to Brooks on his credit card.) But they are the minority. The average hobbyist doesn't reach as high. Again, not to paint all with the same brush. We really appreciate those who do get it, and use it to great advantage over the less adventurous.

Think I heard Brooks is retiring.

Too bad Joe Jones doesn't appear. He sometimes does post here. Joe authored and co-authored a couple of books and worked with Brooks for over 15 years. I just got email he's leaving iRobot for something unspecified.

Or Arkin, who has chronicled so much work on BBRs.

So in their absence, what we have to do is read what they've written, and make judgments on the references they've left us.

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

How are you using NGS there? Next Generation Systems? Sorry, not familiar.

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

I don't think so. You would have to code a low-level "stop" behavior to make that work. Even if the behavior were just coded as "left_wheel = 0; right_wheel = 0", it would still be an implicit coding of a low-level, untriggered, stop behavior (even if you choose not to think of it that way).

If you didn't initialize the value of the variables (or the control registers, if you were using the registers directly as you talked about) before all the higher priority behaviors were checked, then the default behavior might be to just continue doing what the last active behavior suggested instead of making "stop" the default. One way or another, you would have to structure your code to make "stop" the default, and if you have done that, then you have in effect coded the low-level behavior without a trigger. This of course is just semantics, but my point is that you can't make it work without someone else being able to claim you did code a low-level default behavior.

In real life, you might not want to halt the machine that quickly and you might instead write something more complex to quickly ramp down the speed:

if (left_wheel > 0) left_wheel--;
if (left_wheel < 0) left_wheel++;
if (right_wheel > 0) right_wheel--;
if (right_wheel < 0) right_wheel++;

What's an AFSM? Asynchronous FSM?

I'm trying to learn subsumption terminology the hard way - by reading these posts instead of reading a real reference. :)

Trying to understand how subsumption makes use of the word "trigger" is problematic for me from these posts.

If a behavior is "triggered" by an external state, such as:

if (light_level > 5) do_behavior();

Then the behavior has a "trigger", but it has no internal state - it's triggered only by external state of the light sensor.

But if you do something like:

if (light_level > 5) bright_light_behavior_active = TRUE;

if (bright_light_behavior_active) do_bright_light_behavior();

do_bright_light_behavior() {
    behavior_stuff...

    if (behavior_no_longer_needed)
        bright_light_behavior_active = FALSE;  // end behavior
}

Then the behavior has a clear internal state that is triggered by the external event. The behavior is driven by a single bit FSM.

To respond to a transient event, like a bump switch activating because the bot drove into a wall, the system must have internal state which acts as a "memory" of the external event in order to do something complex like back up and turn.

So, how do you code something like a back up and turn behavior in subsumption as a response to a bump switch? How do you put that state into the code without violating the "no state" idea?

One way it could be coded is to implement temporal memory of past events. You could for example include a clock in the system, and do something like this:

if (bump) last_bump = ms_clock;

if (ms_clock - last_bump < 1000)    // back up for 1 second
    do_backup();

if (ms_clock - last_bump > 1000 && ms_clock - last_bump < 1200)
    do_turn_right();                // turn right for .2 sec

The only "state" such a system has is memory of past events which makes it look less like a FSM (but it is still is a FSM of course). Does subsumption play games like this to "hide" state?

The other way you might "hide" state is to allow the system to remember past behaviors, similar to how the above code "remembers" when the bump switch was last hit. If you included a variable to remember the last time the "do_backup()" behavior was used, you could use that as a trigger like this:

if (bump) last_bump = ms_clock;

if (ms_clock - last_bump < 1000) {  // back up for 1 second
    do_backup();
    last_do_backup = ms_clock;
}

// .1 second after do_backup() is done, start turn_right for 200 ms.
if (ms_clock - last_do_backup > 100 && ms_clock - last_do_backup < 300)
    do_turn_right();

So the only "state" the system uses is memory of past events. Does subsumption allow such tricks in some way to make it look like the system has less state?

I have no problem understanding how to code any of this, it's just an issue for me to understand the subsumption paradigm and the normal terminology used with it.

Ok, I'm confused. Are these AFSMs you are talking about part of some version of subsumption, or are you talking about a system that uses FSMs instead of subsumption?

The "control system"? What's that? I though the idea of subsumption was to define a control system?

I'm lost because I don't understand what the control system is, what the AFSM is, and why the "behaviors" are not the control system.

If you could help me get a bit less lost I would appreciate it.

Reply to
Curt Welch

I do, but it is not just my personal idea that the theories of subsumption and Neuronal Group Selection overlap. It was my opinion before I found that Wikipedia explains how the theories overlap.

Best wishes.

Reply to
fox

The little bit of subsumption I've seen and understand doesn't agree with this at all.

I'm a big believer in the power of learning systems that are able to self-organize, but to me, everything I've seen about subsumption says there is no learning at all: it's all hard-coded behavior. What "emerges" is not learned behavior. The system does only what it was programmed to do. What "emerges" is _unexpected_ behavior, i.e. behavior the programmer wasn't able to predict, but which he nonetheless hard-coded into the design whether he realized it or not.

To create truly intelligent machines, I argue constantly (in comp.ai.philosophy) that we must build a learning system. The types of networks I like to play with really do mimic a lot of the ideas of subsumption. But the difference is that my networks have no pre-coded "intent"; they must learn everything on their own. All the subsumption designs I've seen (which is very few, BTW) are based on the idea that the programmer hard-codes all the logic into the system instead of letting it learn anything on its own.

Though the type of emergent behavior we see coming from hard coded subsumption designs is interesting, it's a totally different class of emergence than what I talk about coming from a learning system.

I agree with what you wrote above (we need to build the lowest level and let it self organize all the way up), but I've never seen subsumption described like that.

BTW, the concept "emergence" really isn't very useful. It just means the machine did something we were too stupid to predict ahead of time. Because these things are all deterministic at the core (if not, the emergent behavior would not be repeatable and would be nothing but chaotic behavior), it was predictable ahead of time, so the real magic of "emergence" is created not by the machine, but by the limits of our own intelligence. So "emergence" just means "too complex for a stupid human to understand". To me, this is not some important new field to study (aka human stupidity). It's a field we need to work to eliminate.

Or in other words, anything I see as "emergent behavior" is just behavior we need to study more until it stops being "emergent behavior".

Reply to
Curt Welch

Yes, I agree with what you say, only, Brooks did lots of experimenting with his messaging between modules. I think in later versions he talks about having messages have specific time limits on their domain of effectiveness. So in this sense, if the message to go times out, it might mean its effects time out as well. To wit, maybe when the go message expires, the wheels stop.

Yes, I can see your point. Whatever invalidated the message after it timed out would either 1) leave its effects in effect, or 2) cancel its effects, which involves clearing the registers it controlled, which could be an active and explicit change in behavior.

It's a Brooks term. In Cambrian Intelligence, Ch 1, he calls FSM's with LISP instance variables Augmented, so they are AFSM's. In Ch 2, he calls FSM's with timers (alarm clocks) built in Augmented, so they are AFSM's. Unfortunately, Brooks has a tendency to do this from paper to paper: concepts introduced shift as he works on and evolves his ideas.

Personally, I think FSM's can have timers and not be considered Augmented. To me the timer is just another kind of input, like a Sharp ranger, or what have you. There is a sense in which some of the FSM's state information is held in the numerical state of the timer. But I see that as no different from saying some of the state information is held in the digital conversion of the ranger. A number not contained in the state machine is used to determine whether the state machine advances. However, the timer count doesn't affect the state the machine is in at all. Only the event of timing out can affect the state the machine goes to. So again, I have no idea why Brooks chose to call these Augmented Finite State Machines (AFSM's).
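For example, here is a sketch where the timeout is tested exactly like any other sensor input (all names illustrative):

extern int bumper_pressed(void);    // one Boolean input...
extern int timer_expired(void);     // ...and another, no different in kind
extern void start_timer(int ms);    // illustrative timer reload

enum { WAITING, BACKING } esc_state = WAITING;

void escape_step(void)
{
    switch (esc_state) {
    case WAITING:
        if (bumper_pressed()) {     // sensor event fires the transition
            start_timer(1000);
            esc_state = BACKING;
        }
        break;
    case BACKING:
        if (timer_expired())        // timeout event fires it too; the
            esc_state = WAITING;    // count itself never enters the
        break;                      // machine's state
    }
}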

I once discussed it with Joe Jones, and he said he couldn't help me. Iirc, Brooks never explained this to Jones.

Okay, that is difficult, because as I mentioned, you have two different descriptions of Subsumption and they don't dovetail very well.

Apparently for us all.

Here's how I understand it.

Subsumption has behaviors, or layers. These layers are called repeatedly, by scan loop, or by periodic interrupt, or run on separate processors which have a messaging scheme. (Per Brooks.)

Inside the behavior is a trigger. (Per Jones.)

Also inside the behavior is one control system (Jones) or one to several AFSM's (Brooks).

The control system runs (Jones), or the AFSM's run in parallel (Brooks), based on the inputs from the sensors (Jones), or based on the sensors or an inhibiting message from a higher level that replaces the sensor's true reading (Brooks), and computes a result.

Whether the result is sent out of the behavior to the robot depends on the trigger (Jones) or a subsuming message from a higher level that replaces the result (Brooks).

Here's roughly how I see Jones would code his servo method:

do_behavior() {
    do_this_behavior_control_system(result)
    if (light_level > 5)
        this_behavior_output = result
        this_behavior_active = TRUE
    else
        this_behavior_output = 0        // or nil??? or don't bother??? not obvious
        this_behavior_active = FALSE    // end behavior
    then
}

Yes, but hang on, look at the ballistic approach for Jones:

do_behavior() {
    do_this_behavior_control_system(result)                  // control system always runs
    if (light_level > 5 && this_behavior_active == FALSE)    // trigger decides output
        this_behavior_output = result
        this_behavior_active = TRUE    // turning active is half a state bit
    then
}

do_this_behavior_control_system(result) {
    if (this_behavior_state == NIL)
        this_behavior_init()
        result = xxx
        this_behavior_state = 1
    else if (this_behavior_state == 1)
        ...
        result = xxx
        this_behavior_state = 2
    else ...
    else ...
    if (this_behavior_state == final)
        ...
        result = 0                      // or nil??? or don't bother??? not obvious
        this_behavior_active = FALSE    // end AFSM behavior, other half a state bit
        this_behavior_state = NIL
    then then then then
}

Yes, and Jones calls this a ballistic behavior because it has an FSM in the control system; once launched it doesn't quit until the FSM says it can quit.

You don't. You actually put in an FSM. Jones, Robot Programming, Ch 3, pg 52 says, "Ballistic behaviors, although sometimes essential, should be used with caution. ... Before resorting to a ballistic behavior, it is always best to try first to find a servo behavior that will accomplish the same result."

Not deliberately. But there is a tendency to hide state. That's my beef. Why have state machines, then hide state all over the place in order to use (what look like) servo behaviors?

Hopefully the descriptions of Subsumption Architectures above, quoted from Brooks and Jones, have helped you know what a layer/behavior is, that Jones says these have triggers, and that the control systems/AFSM's generate outputs which the trigger/subsuming messages decide whether to apply to the output/motors or not.

And hopefully with the pseudocode I've shown how I think the trigger and the control system/AFSM's each control half of one bit of state information. The trigger can activate the active output. But the control system, if it is an FSM, must have control over deactivating the output.

So what I'm complaining about is this active/inactive state bit being partially in one place, the behavior, which is not supposed to be a state machine at all, and partially in another, which is supposed to be a state machine, only you are told it's better if you don't use it that way, and leave the other half of the bit back up in the behavior.

My point is the whole division is a false dichotomy. All of Subsumption can be done in a subset of what is possible with state machines, and we can eliminate lots of this confusion.
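Here is a sketch of what I mean: one bit, one owner (names illustrative):

extern int trigger(void);                       // the old behavior-level trigger
extern int done_backup(void), done_turn(void);  // the FSM's own conditionals

int this_behavior_active;                       // derived, no longer split in half

enum { INACTIVE, BACKUP, TURN } cs_state = INACTIVE;

void control_system_step(void)
{
    switch (cs_state) {
    case INACTIVE:
        if (trigger()) cs_state = BACKUP;        // trigger is just the transition
        break;                                   // out of the inactive state
    case BACKUP:
        if (done_backup()) cs_state = TURN;
        break;
    case TURN:
        if (done_turn()) cs_state = INACTIVE;    // the FSM itself releases subsumption
        break;
    }
    this_behavior_active = (cs_state != INACTIVE);
}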

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

So you say. But by the end of this very missive the line has again been confused, to wit you write:

Notice you are referring here to all AFSM behaviors (all the behaviors on my robots, for example, are implemented this way), servo included, and not just ballistic behaviors.

Only true for ballistic behaviors. Not others.

Again, that is not correct. The trigger going away does end all behaviors except the ballistic ones. Only ballistic behaviors continue in the absence of a trigger, hence the name. All other behaviors become inactive as soon as the trigger condition is no longer met.

I'm sort of surprised that you did not comment on the two ballistic behaviors in the jBot example, one of which obviously can subsume the other. Wasn't this one of your deep dilemmas? ;)

best, dpa

Reply to
dpa

Man, I feel like I was just rope-a-doped. I don't follow your objections at all.

I am saying the behavior has either 1 bit of state, or it has 1/2 bit of state. (The other half is in the control system/FSM.) My justification that the state bit is there is that, for arbitration to work, it has to have two values from the behavior: 1) the output value generated, and 2) whether the output is to be used or not (i.e., has a trigger caused us to be active). The second is a state value.

Are you saying a behavior has no state? Or am I saying it has no state? I don't follow your point.

Yes. Saw that. I was holding off for consensus on some of these points, which we don't seem to have reached.

I was going to come back to it and ask how you managed the release of Subsumption from the higher-level one to the still-running lower one.

One way you could do it is have the higher reset the lower. Another way would be to be sure the higher one always ran longer than the lower one.

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

I understand. Bear with me here on a cycle-by-cycle description, and perhaps this will make more sense.

Let me try two simple examples, an IR avoidance behavior and a bumper behavior, one "servo" and one "ballistic" behavior. Assume a subsumption loop running at 20 Hz for this example.

An IR avoidance detector "sees" a detection on the right, so it outputs a command to turn left and sets its subsumption flag to active, to signal the arbitrator. Because it is the highest priority flag for that 1/20th of a second, the arbitrator sends its command to the motors, and that is what the motors do (ignoring any PID slewing, etc.) for the next 50 milliseconds, until the next time through the subsumption loop. But that's all, just for the next 50 milliseconds.

Now, 1/20 second later as the loop executes again, the robot has hardly moved at all, and the IR avoidance detector still "sees" the detection on the right, and again outputs a command to turn left, and again sets its subsumption flag to active to signal the arbitrator. Again it is the highest priority layer signalling, and the arbitrator again passes its command to the motors to turn left, and that's what they do for the next 50 milliseconds. But only for the next 50 milliseconds.

This process continues each time through the subsumption loop, with the IR avoidance winning the priority contest in little 50 millisecond chunks, and passing its command to turn left to the motors each time, 20 times a second. After, let's say, 2 seconds (40 times through the loop, 40 turn-left commands to the motors) the robot has finally turned left far enough that the next time through the loop, the IR avoidance detector no longer "sees" a detection on the right. So on that pass through the loop, the IR avoidance behavior's flag becomes FALSE (no detection) and some other lower priority behavior gets to pass its commands to the motors.

The output from the IR avoidance behavior goes away as soon as the trigger goes away. Is this clear?

Now the second example, a ballistic bumper behavior. The bumper gets a press on the right, and triggers the start of a ballistic behavior. The first segment outputs a command to back up and sets its subsumption flag to active, to signal the arbitrator, and also sets an internal TIMER to, let's say, one second. Because it is the highest priority layer asserting its flag, the backup command is passed by the arbitrator to the motors. So for the next 50 milliseconds, that's what the motors do.

Now, 1/20 second later the loop executes again, but the layer does _not_ test the trigger condition again, as in the case of the IR avoidance behavior; rather it tests the TIMER to see if it has timed out. That is why it is a ballistic behavior: its termination depends on an internal timer, rather than the absence of an external trigger. When it tests the timer it sees that it has not timed out, so the layer leaves the output command = backup, and leaves the subsumption flag = active. And for the next 50 milliseconds, the robot continues to back up.

This continues for another 18 times through the subsumption loop. On the 21st time through the loop, the timer has expired, and the layer sequences to the next state, which is a turn left command. It leaves the subsumption flag active, resets the internal TIMER to, say, half a second, and sets its output command = turn left.

And it does that for the next 10 times through the loop, each time testing the TIMER, not the trigger condition. Finally, the TIMER times out, the ballistic behavior is finished, and its subsumption flag is set to FALSE.
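In rough C, that ballistic layer might look like this (the TIMER bookkeeping and command codes are illustrative, not the actual robot code):

extern int bumper_pressed(void);

#define BACKUP_CMD    1             // illustrative command codes
#define TURN_LEFT_CMD 2

enum { IDLE, BACKING, TURNING } bump_state = IDLE;
int bump_timer;                     // counts down in 50 ms ticks
int bump_active;                    // the subsumption flag
int bump_command;                   // what the arbitrator passes on

void bumper_layer(void)             // called 20 times a second
{
    switch (bump_state) {
    case IDLE:
        if (bumper_pressed()) {     // the trigger, tested only here
            bump_state = BACKING;
            bump_timer = 20;        // 20 ticks = 1 second
        }
        break;
    case BACKING:
        if (--bump_timer == 0) {    // test the TIMER, not the trigger
            bump_state = TURNING;
            bump_timer = 10;        // half a second of turning
        }
        break;
    case TURNING:
        if (--bump_timer == 0)
            bump_state = IDLE;      // flag released, lower layers run
        break;
    }
    bump_active  = (bump_state != IDLE);
    bump_command = (bump_state == TURNING) ? TURN_LEFT_CMD : BACKUP_CMD;
}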

Your description (and critique) assumes incorrectly that all behaviors (layers, whatever) start execution with the presence of a trigger and end execution through some internal measurement. This is true for ballistic behaviors (the exception to normal behavior, in my experience) but is not true for all other behaviors.

I still haven't been able to convince myself that you really understand how subsumption works. I think this is one of several areas of confusion.

Well, I agree 'cus I'm gonna ambush one of your basic tenets here. So let's be sure we're on the same page first. 8>)

best dpa

Reply to
dpa
