What is Subsumption?

So does that mean he played with systems that queued messages? Otherwise I don't understand how they could time out. A queue, of course, is a big-time state/memory device.

Yeah, you can always think of a clock as an external part of the environment, and the system then has the ability to sense this aspect of the environment (time passing). From that point of view, you can implement timers in many different ways.

Are the FSMs arbitrarily complex FSMs with as many states and transition rules as you care to put in them, or are they limited in some way? Such as in how they change state?

Sure, I understand that.

Well, my guess is that they are not trying to hide state as much as they are advocating splintering the state machine into small distributed pieces, each of which is triggered by environmental events instead of internal events.

Yeah, I'm understanding it a little better. But of course I really need to read the books to get a real understanding. The idea of a "servo" behavior from Jones is still not clear, but I understand the ballistic concept, so I can guess what he's thinking about for servo behavior, and I'm probably not too far off.

Are you familiar with OO programming? What you are talking about reminds me a lot of the differences between structured programming and OO programming. And your objections remind me of the type of things some people complain about when trying to learn OO programming while still thinking in structured programming terms. I think there are some important parallels here.

The point of OO programming is that you have to learn to look at the software very differently. Instead of thinking of it as sequential programs that manipulate data structures, you learn to think of it as collections of data structures (objects) that each have large collections of simple functions. You think of the objects as "doing" things instead of your code doing things. The entire view of the program changes from a collection of code (aka algorithms) to a collection of objects.

In structured programming, it's easy to understand the program flow, but hard to see the data. In OO programming, it's easy to understand the data, but almost impossible to understand the program flow. You can write any program using either paradigm, but some applications are far easier to write using OO (complex data structures with no real performance issues), and some are better as structured code (complex performance-intensive algorithms with simple data structures).

With subsumption, and these other behavior-based robotics approaches, I see it as being the OO approach to bot programming. Except the focus is on behaviors instead of objects. But it's similar to OO programming because the idea is to fracture your code into lots of little independent functions (aka behaviors) and create simple rules (environmental triggers and priorities) to define when each behavior should be used.

Now, a state machine is very much the same. Every state, in effect, is a behavior, and you slice the behavior of the robot into lots of little pieces, where each piece is one state. But state machines generally include code in each state that determines when it should change to another state.

The problem with the state machine approach is that every time you add a new behavior (aka a new state), you have to go back and think about all the other states the machine can be in, and think about when it might make sense to abort those old states and switch to this new state. And of course, when the state machines get large, we seldom do this as much as we should, or worse, we try to do it but fail to test the code in all possible states, and think our code is working well when it still has some bad bug in it.

It also means that when you write code to add a new behavior, you end up having to add code to some old behaviors (states) that make reference to the new behavior. So the code connected with the new behavior (aka state) gets scattered all over the state machine.

The intent with using environmental triggers and arbitration systems (like behavior priorities), is that for a lot of stuff, you don't have to spend any time thinking about the other states and hand coding a lot of new automatic state transitions. All you have to do is think about where to insert the new behavior in the priority list. And all the code for each new behavior gets put together in one place.

So, without being an expert on the subsumption terminology, I think the idea is not so much to stay away from internal state (even though that might get pushed at times), but instead to try to fracture the behaviors into as many micro behaviors as possible, and to try to trigger each from environmental conditions instead of triggering them from internal state.

The advantage to this is that the micro behaviors end up being used in conditions you never thought about, and in which they would not have been used had you used more traditional FSMs to drive behavior.

For example, if you want to back up and turn when you hit a wall, you could write this as one ballistic behavior driven by an FSM that carries out a fixed sequence of behaviors once it's triggered. Or you could break it up into a backup behavior triggered by being too close to obstacles (measured by two sonar sensors), a left-turn behavior triggered by a backup condition with the right sonar registering a close object, and a right-turn behavior triggered by a backup with the left sonar closer.

But then when you add a reverse cruise behavior later, you find the turn behaviors you wrote to get out of a trap work nicely to make you turn away from walls as you back up, without you ever having to steal code from your backup-and-turn behavior. So you get automatic sharing of behaviors you never would have gotten had you scripted all the behaviors with state machines.
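
As a sketch, those environmentally triggered pieces might look like the following in C. Everything here (the sonar variables, the CLOSE threshold, the backing_up flag) is an invented stand-in for illustration, not code from Brooks or Jones:

```c
/* Stubbed sensor state; on a real robot these would come from hardware. */
int left_sonar  = 100;   /* hypothetical distance readings, in cm */
int right_sonar = 100;
int backing_up  = 0;     /* set while the backup behavior is running */

#define CLOSE 20         /* "too close to an obstacle", in cm */

/* Each trigger tests the environment (plus a little self-awareness),
   not the internal script of some other behavior. */
int backup_trigger(void)     { return left_sonar < CLOSE || right_sonar < CLOSE; }
int turn_left_trigger(void)  { return backing_up && right_sonar < left_sonar; }
int turn_right_trigger(void) { return backing_up && left_sonar < right_sonar; }
```

Because each trigger only reads sensor values and the backing_up flag, any behavior that sets backing_up gets the turn behaviors for free, which is the sharing described above.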

I think the general idea is simply that you should try to trigger all your actions based on what is happening in the environment instead of based on internal state machines, because the more you do that, the more likely the behavior will be put to good use in situations you never thought about. And even if the trigger is based on self-awareness (aka what you were just recently doing, like backing up), that's better than triggering it just because of the last state we were in.

Another way to look at this, is that as programmers, we are very invested in the idea of producing a sequence of instructions to follow.

  1. back up
  2. turn right
  3. cruise forward again

We learn to think about programming as lists of instructions, so we naturally learn to think about programming a bot as lists of instructions. We think that step 3 comes after step 2.

State machines are just complex lists of instructions where we specify a more complex flow of control. We specify that we go to step 2 after we finish step 12. We are used to hand-specifying all the sequence and all the rules for changing sequence.

The idea behind paradigms like subsumption is just that when we program an interactive agent, we need to break out of this mode of thinking that we as the programmers need to specify the sequence, and instead specify the environmental conditions which should trigger each possible action.

So we replace the above sequence with:

Trigger                      Action
-------                      ------
Bump switch                  Back up
Backup & no bump switch      Turn right
Nothing else to do           Go forward

We give up most of our control of sequence, and turn the control of sequence over to the environment as much as possible. But when we have to, we still use small state machines to control sequence within a behavior.
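
As a rough sketch, the trigger/action table above could be coded as a priority-ordered test. All the names here are illustrative stand-ins, with the backing_up flag playing the role of the small bit of state within a behavior:

```c
enum action { GO_FORWARD, TURN_RIGHT, BACK_UP };

/* Stubbed inputs; real versions would read hardware. */
int bump_switch = 0;
int backing_up  = 0;   /* set while the back-up behavior runs */

/* Highest-priority trigger wins; the environment drives the sequence. */
enum action choose_action(void)
{
    if (bump_switch)    /* Trigger: bump switch        */
        return BACK_UP;
    if (backing_up)     /* Trigger: backup & no bump   */
        return TURN_RIGHT;
    return GO_FORWARD;  /* Trigger: nothing else to do */
}
```

Adding a new behavior here means inserting one test at the right priority, rather than editing transitions in every existing state.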

Now, I don't know if any of the above perspective gives you any new insight into your thoughts about subsumption and its use of state, but I think the above ideas are the prime motivations behind paradigms like subsumption. The more you stay away from state machines, and the more you are able to specify action based on environmental triggers, the more likely the bot will produce useful behaviors in environments you never thought about. The more you use state machines to hand-code the action sequences, the more likely the bot will do something stupid like drive itself off a cliff.

Reply to
Curt Welch

robots did with learning to coordinate legs with primitive reflexes.

I would argue that about all the emergence you should ever expect to see from legs is locomotion. This is very different from the idea that a brain directs all this activity. If you are expecting legs to reason "I think, therefore I am!" then you are expecting too much from Brooks' demonstration of legs.

If you want to look at more than gaits, such as vision, language, and understanding of problem solving, math, physics, engineering, and materials science, you have to look beyond leg demonstrations to larger-scale demonstrations. Those things take more neurons than coordinating a few simple legs.

When I got exposure to follow-up work on larger-scale demonstrations of Neuronal Group Theory, I felt that this was further confirmation of what Brooks had demonstrated with legs. Even Wikipedia talks about the various theories that fit into the connectionist camp.

Which means that you have a couple of hundred billion neurons but no single source of control. There is no master processor in the brain controlling a hundred billion slave processors following the goals and directives of the master processor. What one has is an example of subsumption, not of multitasking on a serial processor.

Humans are examples of subsumption if you accept the theory of subsumption. Subsumption is not just a simple multitasking example of driving a primitive leg on a primitive serial microprocessor. The theory has to cover how you think as well as more primitive reflexes.

No, but if you accept subsumption you accept that you are human and that humans provide examples of subsumption unless you erroneously think that subsumption is just multitasking.

"Clear proof" is in your mind. Your mind does not reside in a central spot. It is distributed, as the theory of subsumption dictates it must be. There certainly is clear proof that there is no central control in human brains, although you don't have to accept it as clear proof to you.

Surely you are not arguing that there is a single 'master processor' running a control program inside of your head?

It sounds like you are just arguing creationism versus evolution. I don't want to get into that sort of debate with you. It is as simple as that.

If one 'believes' in evolution, one believes that organic compounds and organic life self-organized into life as we know it. Some cells specialized as nerve cells and developed the properties of neurons. Neurons became specialized and self-organized into groups, and groups into societies.

Classic AI tried to model a God-like consciousness that people had just assumed was what "intelligence" required. These classical AI types programmed knowledge directly into their machines, akin to God breathing life, creating his greatest creation, Man, from nothing.

Connectionists argued that since life evolved, and since human consciousness came much later than the chicken or the egg, "intelligence" also had to evolve from simpler organisms to ones with larger brains. And they speculated that there must be a mechanism for nerves to learn and organize.

Neuroscience research showed this to be the case and people took Edelman's Nobel Prize winning work in neural organization to build things with much more emergent intelligence than what you get from a handful of neurons and a leg.

It is generally accepted that connectionists and knowledge encoders don't see the world the same way, and that subsumption is a theory that fits in with the other connectionist theories. This is true simply because it is about no central control.

I think you are just missing the forest because you are focused on the trees. You are looking at the most primitive example, a leg, and not seeing past it.

There is not much of it to be found in the most primitive demonstrations. But it would be good not to get stuck there.

Even the primary repertoire of the DNA is 'programmed in' by a creator, if you consider self-organization and evolution to qualify as a creator.

It seems to be more fundamental than that. I believe in science and evolution, I accept the Nobel Prize-winning work about how neurons organize, and I have seen very intelligent AI built using theories like NGS and subsumption when I worked for the government.

I believe that what Brooks demonstrated so long ago was that with a little pre-programmed intelligence, a very crude robot could learn to move and exhibit behaviors similar to those that had been created by classic centrally controlled, goal-driven designs.

I think you are just engaging in semantic quibbling again about the term 'learning', but you have alluded to Brooks' robots 'learning' to coordinate movement of multiple legs.

I think virtual parallel machines running FSMs are a good way to organize a simple system using subsumption. It is simple, efficient, structured, and mathematically provable. I expect that is why some professionals and professors have embraced it. I think it is clever and ahead of its time.

Of course I tend to see it as a bridge to more real parallelism. I like the idea of having a lot of simple processors running simple state machines like simulated neurons rather than simulating many simple virtual processors on a single sequential microcontroller.

Subsumption can be implemented with FSMs, and with simulated or real parallelism. Subsumption is about parallelism. Multitasking is just a cheap imitation of real parallelism, which is why I object so strongly to the idea that subsumption is just multitasking.

If I implement a system with lots of tasks running on lots of processors, and it does just what your FSMs on virtual parallel machines do, but truly in parallel with no multitasking at all, wouldn't that still be subsumption? Subsumption requires NO multitasking as far as I am concerned. That is just an implementation technique.

The average hobbyist is indeed not a highly visible researcher.

And you have my sympathy that the average robotics hobbyist seems to have a budget of zero and a lot of unrealistic dreams. I know people who have a lot of different hobbies, but this newsgroup is dominated by folks who want a very inexpensive hobby, perhaps with no more cost than an internet connection.

It seems we are mostly arguing about definitions of 'intelligence' and 'creator', which clearly isn't going to go anywhere.

My main objection to your comments in this thread is your suggestion that subsumption is just multitasking. I say you can have subsumption without any multitasking, and working on parallel processors, I think you should have subsumption without multitasking. But that too is really another subject. Consider, say, subsumption on SEAforth with real parallelism and without multitasking, and then try to convince me that subsumption is just multitasking.

But I think we have gone about as far as we can without going to comp.robotics.philosophy to argue about definitions of intelligence and whether self-organization qualifies as a creator.

So I will just accept that we see things differently.

Best Wishes

Reply to
fox

It seems to me that instead of using a ballistic behavior like that, it would be far better to implement this as a test for an environmental condition (with the general idea that ballistic behaviors are just always bad).

To do that, you would write code that would record the time of the last bumper hit, and then write a sensory test routine something like:

rt_bumper_active(ms_time_window)

Which tests to see if the right bumper was hit in the last so many ms.

Then you could use it like:

if (rt_bumper_active(1000)) back_up();

So that would turn the ballistic behavior into a servo behavior. That is, you can write it like a servo behavior without having to have an internal timer dedicated to the behavior.

It also makes it very clean if you have higher priority behaviors which interrupt this back up behavior. If the higher priority behaviors take longer than a second, then this behavior times out and there is no issue with having to run the behavior again to deactivate the behavior or reset its internal timer.

Now, this may work exactly the same as the ballistic behavior, so it might be splitting hairs to not call this a ballistic behavior, but the idea is that the trigger, or test, is simply testing a condition of the environment in a more complex way and as a result, you don't need to use a ballistic behavior.
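
A minimal sketch of how an rt_bumper_active() test like this might be implemented, assuming a millisecond clock along the lines of the mseconds() in Jones' code. The clock is stubbed here so the sketch is self-contained; all names are hypothetical:

```c
/* Stub clock so the sketch runs anywhere; a real bot would read a
   hardware millisecond timer here. */
long fake_ms = 0;
long mseconds(void) { return fake_ms; }

long last_rt_bump = -1000000;   /* time of the last right-bumper hit */

/* Called from the sensor-reading code whenever the switch closes. */
void note_rt_bump(void) { last_rt_bump = mseconds(); }

/* True if the right bumper was hit within the last so many ms. */
int rt_bumper_active(long ms_time_window)
{
    return mseconds() - last_rt_bump <= ms_time_window;
}
```

Note that all the state lives in the sensory side (one timestamp), so a higher-priority behavior can interrupt and nothing needs to be reset afterwards; the test simply goes false when the window expires.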

And, for example, if you wanted to back up slow at first, and then faster, you could write it as two servo behaviors:

if (rt_bumper_active(500))         // higher priority
    back_up_slow();
else if (rt_bumper_active(1000))   // lower priority
    back_up_fast();

instead of using a complex two state ballistic behavior with two timers in it.

And since it's all triggered off of environmental conditions, it's more likely the behaviors will be put to good use in conditions you didn't think about.

It seems to me that all ballistic behaviors could be turned into servo behaviors using temporal environmental tests structured something like that.

To do this, you would not only have to record sensory conditions (such as the bump switch hit) to allow you to do a temporal test on it, but you would probably also have to record when behaviors were last used, so you could do temporal tests on previous behaviors as well. Then you could code the above something like this instead:

if (rt_bumper_active(500))           // higher priority
    back_up_slow();
else if (back_up_slow_active(500))   // lower priority
    back_up_fast();

So it would do the back_up_fast() for 500 ms after the last use of the back_up_slow() behavior. In other words, the recent past activity of the behaviors becomes part of the sensory environment you can test when triggering other behaviors. This would move all the timers and state machines into the sensory system and keep them out of the behaviors, and I think it would make the code cleaner and more flexible. And in general, if you had a single good clock, you wouldn't really need any other timers or state machines other than the implied state of a trigger being active.

Thinking further along this line, it seems to me you could use the same idea to implement a hierarchy of goals in the system. The idea is that you would define a behavior like get_ball() which was a null behavior - that is, it did nothing, other than record the fact it had been used.

So you could specify some set of triggers that would activate the get_ball() behavior.

You could then write a whole set of behaviors that were triggered in response to the "get_ball()" behavior being recently used:

if (get_ball_active(1000) && see_ball_rt())
    turn_right();

So, for one second after the get_ball() behavior was used, the bot will perform some set of ball seeking behaviors.

The get_ball() behavior can be triggered over and over based on other conditions, and every time it's used, it triggers the entire set of ball-fetching behaviors for another second. So if conditions changed, the bot might switch to an avoid_bad_guy behavior for a few seconds, and then return to the get_ball behavior when it was done. You could use this type of trick to create a complex hierarchy of goals and sub-goals, all without having to resort to hand-coding high-level state machines and complex state change rules. It would all stay within the servo paradigm because all this knowledge about recent past behavior would be available as sensory conditions of the environment.
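
A sketch of the null-behavior idea, reusing the stub clock from before. Every name here (get_ball, see_ball_rt, the one-second window) is an invented illustration of the scheme, not an established API:

```c
/* Stub clock, as before; a real bot would read a hardware timer. */
long fake_ms = 0;
long mseconds(void) { return fake_ms; }

/* get_ball() is a "null" behavior: it does nothing except record that
   it ran.  Other behaviors trigger off its recent activity. */
long last_get_ball = -1000000;
void get_ball(void) { last_get_ball = mseconds(); }

int get_ball_active(long window_ms)
{
    return mseconds() - last_get_ball <= window_ms;
}

int see_ball_rt = 0;   /* stubbed vision input: ball seen on the right */

/* One of the set of ball-seeking behaviors, live for one second after
   get_ball() last ran. */
int turn_right_trigger(void)
{
    return get_ball_active(1000) && see_ball_rt;
}
```

The goal hierarchy falls out of this: whatever keeps re-triggering get_ball() keeps the whole family of ball-seeking behaviors eligible, and when it stops, they all quietly time out.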

I do see a few issues with this approach that would need to be resolved, but I think attacking it from this direction might end up working rather well.

Reply to
Curt Welch

Yes, you've done a wonderful job of describing Subsumption and its timing in a Jones model of Subsumption.

I'm going to clip most of it, not because it isn't wonderful, and should make a nice future record for anyone wanting to know what a Jones model does, but because I agree with most of it in great detail. There are a few little pieces I want to highlight though.

Yes. There in that flag is half a bit of state.

A meaningless rewriting, because the flag is still set. (The state of active doesn't change.)

And this is the other half a bit of state information.

In the Jones model, that means one bit of state is kept in the behavior, and no state is required in the control system (where Brooks says the AFSM should go).

So in the Jones model, with the trigger in the behavior rather than in the AFSM, Jones has hidden one bit of state information that is tested by the arbitrator, which lets the arbitrator know whether the trigger was active or not. So Jones has turned the whole behavior into a state machine, so he can have a stateless control system.

Abundantly.

Trigger condition met, set flag active.

Trigger condition not met, set flag inactive.

This is a classic two-state state machine, much like a thermostat without hysteresis.
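
Written out as code, the half-bit-plus-half-bit pattern is just this (an illustrative sketch, not Jones' code):

```c
/* The trigger's hidden state: one flag, set when the trigger condition
   is met (half a bit) and cleared when it is not (the other half). */
int active = 0;

void update_trigger(int condition_met)
{
    if (condition_met)
        active = 1;   /* state A: behavior requests control */
    else
        active = 0;   /* state B: behavior is quiet */
}
```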

Well, it is arbitrary whether we say the trigger in the behavior sets the active flag, or the trigger in the behavior sets the FSM running where otherwise it would be quiescent, which as its first act, without fail, sets the active flag. In either case, it is the meeting of the condition of the trigger in the behavior that causes the active flag to be set. I still argue, therefore, that if you have a trigger in the behavior, you have 1/2 a bit of state in the behavior.

I'd like to see a backup reference from Jones to that statement, "the layer does not test the trigger condition". I don't remember him addressing not running the trigger. On the other hand, the trigger can be made inert with an additional clause OR'd in. I do agree the trigger must not let subsumption go inactive while the state machine is running. Which is why I'm saying the state in the behavior is hidden, and not given the credit it deserves.

The "layer" doesn't sequence anything. The FSM may sequence. But the layer is the thing with the control system and the trigger. The FSM is the thing in the control system for ballistic behaviors; reference Jones Fig 3.1.

Misattribution here of what's doing what. Finally, the TIMER times out, the FSM in the control system is finished, and it sets the subsumption flag to FALSE. Or, just as bad, it passes state information back to the behavior to cause the subsumption flag to be set FALSE. But however it is implemented, the ending of the FSM causes the clearing of the active flag.

This is the other half of the hidden state bit.

Ah, well, yes. But this is the issue.

My description (and critique) assumes correctly that all behaviors (layers, whatever) start execution with the presence of a trigger. In Brooks, this trigger is a conditional in an AFSM. In Jones and Arkin, this is a trigger in the behavior. Execution ends through some internal measurement. This is true for all behaviors; however, Brooks' internal measurement is in an AFSM, while Jones has the internal measurement in the behavior if servo, and in an AFSM like Brooks if ballistic.

Brooks starts with AFSMs as the default processor/module. If you have a real AFSM in a behavior, you have state "down the hole", so to speak, in the AFSM, and the behavior has none. Execution starts with the presence of a trigger and ends through some internal condition on an AFSM transition, releasing the state machine. Brooks never retracts the idea of anything but AFSMs as the things down in the modules. So if I have a tendency to see AFSMs as the default, I have good reason to do so.

However, Jones has added the idea of servo behaviors. Alright, you can have a servo behavior by stripping out all the state stuff from the AFSM and just doing some calculated response. However, if you use this model over Brooks', you have to have at least one bit of state in the behavior (set active, clear active) for the servo behavior. Or at least half a bit of state in the behavior (set active) and half a bit of state (clear active) for the ballistic behavior.

Yes, I hear that often.

In a John Cleese voice, he answers back, B^) Ah... Well yes, how lovely. We will definitely look forward to that, then, won't we!

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

The Jones model and the Brooks model are the same. They are only described in different ways. Jones' description is essentially a subset of Brooks'. (In particular, he doesn't use the "inhibit" or "suppress" inputs that Brooks defines, and suggests that there is always another way to accomplish the same thing.)

You are confusing the use of the AFSM to implement a particular type of behavior (ballistic) with the use of AFSMs to implement subsumption itself in the absence of a multi-tasking operating system. This is key.

Since you seem unwilling to accept my experience on this I will quote from the Gospel of St. Jones, "Robot Programming" chapter 9 section 4.2. This follows a section that describes how to implement subsumption using a multi-tasking operating system.

Jones then continues: "In the absence of a sophisticated scheduler, it is possible to build a subsumption program by implementing the behaviors as finite state machines."

This is separate and distinct from the use of an Augmented Finite State Machine to implement a ballistic behavior. You are conflating the two.

This is the normal subsumption method, and describes 99% of all subsumption behaviors. The behavior is active during the presence of a trigger, and inactive during its absence. Only the ballistic behaviors continue to be active in the absence of a trigger. It seems you have focused your whole critique on that 1%.

Once you actually understand the overarching principles of subsumption, you will be able to determine whether or not a particular feature fits the paradigm without having to scan through the holy texts to see if you can find a reference. However, until that happy day, let's look at Jones' sample code in the gospel of "Mobile Robots", appendix B, p. 291:

void bump()
{
    while (1) {
        bump_check();
        if (bump_left && bump_right) {        /* bumped in front */
            bump_active = 1;
            bump_command = BACKWARD;
            wait(msec_per_rad / 2);
            bump_command = LEFT_TURN;
            wait(rev_4);
        } else if (bump_left) {               /* bumped on left */
            bump_active = 1;
            bump_command = RIGHT_TURN;
            wait(rev_8);
        } else if (bump_right) {              /* bumped on right */
            bump_active = 1;
            bump_command = LEFT_TURN;
            wait(rev_8);
        } else if (bump_back) {               /* bumped from behind */
            bump_active = 1;
            bump_command = LEFT_TURN;
            wait(rev_4);
        } else {
            bump_active = 0;
            defer();
        }
    }
}

Now this Jones example uses a multi-tasking OS rather than coding subsumption using AFSMs, so the meaning does not jump out as clearly, but it is the same as I've described. Take the first "bumped in front" section as an example. It sets the subsumption flag as active to signal the arbitrator, and outputs a BACKWARD command for the motors. Then it calls the routine wait() with an argument. wait() is described earlier in the appendix as follows:

void wait(int milli_seconds)
{
    long timer_a;
    timer_a = mseconds() + (long)milli_seconds;

    while (timer_a > mseconds()) {
        defer();
    }
}

So wait() suspends the task in the multi-tasking queue (that's what defer() does) until the timer expires. During this time, the bump() code is no longer testing the trigger which initiated the code. Indeed, the trigger in this case, the closure of a bumper switch, is long gone. Instead, each time through the multi-tasking queue, it is testing its timer.

Unlike a servo behavior, which tests its trigger every time through the loop, the ballistic behavior does not test the trigger again (i.e., run the bump_check() code in this example) until the entire ballistic behavior is completed.

So I repeat, "Now, 1/20 second later as the loop executes again, but the layer does _not_ test the trigger condition again, as in the case of the IR avoidance behavior.

Just to be completely clear, here is Jones' IR code from the same appendix, which does test its trigger each time through the loop:

void ir()
{
    int val;

    while (1) {
        val = ir_detect();

        if (val == 0b11) {            /* detections left and right */
            ir_active = TRUE;
            ir_command = LEFT_ARC;    /* Jones' tones() omitted for clarity */
        } else if (val == 0b01) {     /* detection on the left */
            ir_active = TRUE;
            ir_command = RIGHT_ARC;
        } else if (val == 0b10) {     /* detection on the right */
            ir_active = TRUE;
            ir_command = LEFT_ARC;
        } else {
            ir_active = FALSE;
        }
        defer();
    }
}

Notice that there are no calls to wait() in this code. Each time through the loop it tests the ir_detect() condition. The single defer() at the bottom of the loop is required by the particular cooperative multi-tasking of this particular implementation.

To sum up: servo behaviors, the most common, are active when their trigger is active, and inactive when their trigger is inactive. There is no AFSM involved with servo behaviors, unless the subsumption code itself is implemented as AFSMs, as might be required for robot controllers without a multi-tasking operating system.

Ballistic behaviors are set active when their trigger becomes active, but thereafter the trigger is ignored, and an internal timer is used to set the behavior to inactive. Ballistic behaviors are implemented as Finite State Machines that are Augmented by the use of an internal timer to change states, once initiated. Hence the name, Augmented Finite State Machines, or AFSMs. This is true whether the subsumption architecture itself is implemented using AFSMs or using a multi-tasking OS.

you wrote:

No. Jones has added the terminology of Servo Behaviors in order to better explain the concepts of subsumption. The idea is present in Brooks' descriptions.

best dpa

Reply to
dpa

Hi Curt,

I think if you consider the actual implementation of what you are proposing that you will see that it is the same as I have described.

The bump event gets a time stamp, and then each time through the loop the code tests to see if that timestamp plus some constant (1 second in this case) is greater than the current timestamp.

You are still using a timer to essentially extend the width of the trigger once the trigger has gone away. You are just moving the location of the timer from the behavior code to the detection code. It's still a timer. This does not turn a ballistic behavior into a servo behavior.

best dpa

Curt Welch wrote:

Reply to
dpa

I agree there is always another way to accomplish the same thing. In this case, the way Jones does it is by hiding a bit of state information in the behavior, which was previously part of a state machine in Brooks' implementation.

Sigh. That rope has two ends.

Alright, then I'll quote my private Epistle from St. Jones to Dumse here:

RMDumse wrote in past private email to Jones over a year ago:

Joe Jones in reply to past private email responded: I must confess that I don't have an especially scholarly understanding of FSMs or AFSMs; my knowledge is only deep enough to make them do what I want them to. But the way I think of them is this: State changes in a classical finite state machine depend only on the machine's current state and its inputs. When an input change leads to a state change, that change occurs instantaneously. Augmenting an FSM with a timer allows you to delay state changes. That is very useful if you want to build a robot program from FSM-like constructs. If you used strictly classical FSMs there would be no way to say, for example, "After the bump occurs back up for a while, then turn for a while."

So I took Joe at his word. He is not that interested in state machine structure, and is only concerned with the pragmatic issue of getting something to work the way he wants. His description of the AFSM was pretty much what I had suggested: a timer is available to be able to cause events to time out.

That we can agree on. Jones' model is much easier to understand for people who are uncomfortable with the deeper principles of finite state machine theory. We can also agree that most behaviors are servo in the sense Jones describes, but I won't go to 99%. More like 80%. We can also agree that ballistic behaviors have explicit state machines, and these often use timers to advance or terminate.

Best regards,

-- Randy M. Dumse

formatting link
Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

RMDumse wrote:

Replying to my own post: Ah ha. Here's something I just learned from posting with dpa.

Jones has identified two types of behaviors. One is servo, and the other is ballistic. What Jones left out, and what I hadn't seen until penning what we agree on, is that there is another, unmentioned type of behavior.

The use of the timer is what makes the ballistic behavior ballistic. That is, if the timer controls the changes of state, then once launched, only time or subsumption by a higher level can change the outcome.

The previously unidentified behavior is one that has state, meaning it is an FSM, but does not (necessarily) have a timer associated with it. Since the timer is not used (or is not the only input used, say), the outcome of the FSM's progression will not be predictably ballistic, but instead reactive.

So here is the new class of behavior overlooked in the Jones simplification: the Reactive State Behavior.

I'll give a quick example.

Let's say we have a minisumo with four corner line sensors. We want to trigger a behavior when both front line sensors see the edge. So in the first state of our reactive state behavior, we back both motors. The transition to the next state comes when one line sensor comes off the edge. We go to a "left-off, right-on" state if the left came off first, or to a "right-off, left-on" state if the right came off first. Or finally, both might come off so close to the same time that we can't tell which was first, so we go to a "both-off" state.

By going to these three different states while we wait for the sensors to tell us we're clear of the edge, we are being purely reactive. However, which state we go to saves information about the angle at which the sensors came off the edge. So when we get both sensors off, we can choose what output we use next. If the left came off first, when the right comes off too we might want to continue to back on the right motor and slow the left, so we back toward the center of the arena. If the right came off first, when the left comes off too we might want to continue to back on the left motor and slow the right, so we back toward the center of the arena. If they both come off at the same time, we might want to leave them both at full reverse to go straight toward the center. Up to this point, the state changes have been purely reactive.

Now the behavior has to end somehow. We could let a higher priority behavior take over, or we could time out when we got to the center and release, or if we wanted to stay completely reactive we could wait until the rear sensors see the edge on the far side, set a forward motion, and release.
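The edge-escape machine above can be sketched directly. This is only an illustration, assuming Python; the state names, sensor flags, and motor values are mine, not from an actual minisumo:

```python
# Reactive state behavior for the minisumo edge-escape described above.
# Every transition is driven by the line sensors, never by a timer; the
# state reached remembers which sensor cleared the edge first.
BACKING, LEFT_FIRST, RIGHT_FIRST, ARC_FROM_LEFT, ARC_FROM_RIGHT, STRAIGHT = range(6)

# Motor command (left, right) per state: -1.0 = full reverse, -0.5 = slow reverse.
OUTPUT = {
    BACKING:        (-1.0, -1.0),
    LEFT_FIRST:     (-1.0, -1.0),
    RIGHT_FIRST:    (-1.0, -1.0),
    ARC_FROM_LEFT:  (-0.5, -1.0),  # left cleared first: slow the left, arc to center
    ARC_FROM_RIGHT: (-1.0, -0.5),  # right cleared first: slow the right
    STRAIGHT:       (-1.0, -1.0),  # both cleared together: back straight
}

def step(state, left_on_edge, right_on_edge):
    """Advance the FSM one scan; returns (next_state, motor_command)."""
    if state == BACKING:
        if not left_on_edge and not right_on_edge:
            state = STRAIGHT
        elif not left_on_edge:
            state = LEFT_FIRST
        elif not right_on_edge:
            state = RIGHT_FIRST
    elif state == LEFT_FIRST and not right_on_edge:
        state = ARC_FROM_LEFT
    elif state == RIGHT_FIRST and not left_on_edge:
        state = ARC_FROM_RIGHT
    return state, OUTPUT[state]
```

Note there is no timer anywhere in the machine: the LEFT_FIRST / RIGHT_FIRST states carry the "which sensor cleared first" information forward, which is what makes the behavior stateful yet purely reactive.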

The details of the example aren't as important as the point, and the point is: there are useful behaviors which can be characterized as neither servo nor ballistic. Maybe some programmers haven't considered this approach, because in Jones' simplified explanation he didn't identify that it was possible. I think that's a pretty neat thing to come from our discussion.

-- Randy M. Dumse


Reply to
RMDumse

No, I'm not, and stop calling me Shirley. ;)

The key here, is "said" versus, "asked".

I asked this question: "Is subsumption really necessary, or is this just a fancy name for multitasking? Is this just an issue of creating the illusion of parallelism in a serial machine? Can the same thing be written in FSA without the need for the concepts of subsumption? Thoughts?"

I have never thought subsumption was multitasking. I asked to see if anyone thought that way, and to provoke thoughts on what subsumption was. The only one of those questions I actually believe to be correct is that the same thing can be written in finite state automata, without the need for a larger, overarching concept.

Fair enough. Nice to see you post here, though.

-- Randy M. Dumse


Reply to
RMDumse

Yeah, I agree. That's why I wrote:

It simply moves the timer into the sensory side of the problem.

The only point is that it allows the behavior to be coded the same way for both so that on the behavior side, it all looks and works the same.

But thinking a bit more about this, it seems to me the distinction between the two is a bit arbitrary. It seems to me that all behaviors actually end up as ballistic behaviors in some sense anyway. For example, if you code the servo behavior with a 40 Hz loop, then each servo behavior becomes a ballistic behavior with a 1/40th of a second timeout and a 1/80th of a second average delay (half the loop period) before the behavior starts.

And all sensory and behavior systems have inherent delays in them, which means there is always an implicit timer created by this delay between the time the external sensory condition happens (such as the bot running into a wall) and the behavior produced (like spinning the wheels backwards). So what's the difference between the implicit response delay of the servo system and the explicit delay of 1 second hard-coded into the sensory system?

It looks to me like the only real difference is an arbitrary selection of time scale in the delays. All behaviors in effect are triggered by "timers" linked with sensory events. If the timer happens as an unintentional side effect, we call it a servo behavior, if the timer happens intentionally by the design, we call it a ballistic behavior.

Or, if we don't add extra intentional delays to the code, and only work with the delays inherent in the technology we are working with (processor cycle times, sensor sampling rates, etc.), then we can still think about it as a servo behavior, whereas once we do anything to intentionally add longer delays, we start to think about it as a ballistic behavior.
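To put rough numbers on the point above (a 40 Hz loop and a 1 second ballistic timer, both values assumed for illustration):

```python
# The implicit "timer" hiding in a servo behavior: a polling loop can never
# respond faster than its own period, so every trigger carries a delay.
LOOP_HZ = 40
period_s = 1.0 / LOOP_HZ        # 0.025 s between sensor scans
mean_wait_s = period_s / 2.0    # an event lands, on average, mid-period

# An explicit ballistic delay (back up for 1 second, say) differs only in scale.
ballistic_s = 1.0
scale = ballistic_s / period_s  # the 1 s timer spans about 40 loop periods
```

On this view the servo/ballistic line is just a choice of where on that scale you stop calling the delay "incidental" and start calling it "intentional."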

This is not important to programming bots or the definition of subsumption; it's just me thinking out loud. It has a lot of interesting parallels to the type of networks I play with for AI, which make very heavy use of variable delays in the processing of all sensory data and the production of all behavior. All the learning in my networks happens by the system adjusting the processing delays, and as a result a large network will have a large selection of "triggers" and "behaviors" (millions) with a wide range of different delays, from milliseconds to minutes. In this case there is no clear line between ballistic and servo behaviors; they are all the same, with just a wide range of different delays associated with them.

Reply to
Curt Welch

a while ago) any program or set of programs/tasks which is capable of running on a processor like what we're using today is representable in an FSA.

Fundamentally, anything run on today's computers can be run on a Turing machine, and therefore it can be represented in an FSA.

The whole concept of multi-threading is just something to make it easier for programmers to slice and dice a problem.

When you step away from it, you really just have an execution engine with some registers and memory. The fact that we assign a purpose to some of those registers (like a stack pointer) and create concepts like a stack is just a way of compartmentalizing things. The computer doesn't understand the notion of tasks.

So, to me, concepts like multi-tasking, or subsumption are just ways of categorizing things, giving you semantics that you can use for the purposes of communication.

Reply to
dhylands

Yes, true enough. But that really wasn't the sense in which I was asking.

Good point.

I am fairly certain subsumption could be done in a purely FSM method. But there is something fascinating about the priority scheme. To accomplish this in a purely FSM method would require a particularly strict set of requirements on the transitions to create the priority selection. Beyond the sense that it can be done, I haven't worked it all the way through.

Well, I have worked it through in some of my robots in actual implementation. But I rely on the ordering of the called FSMs, with the higher priorities last. So my arbiter is just whoever writes the output last. Since the scan loop is so much faster than the mechanicals can respond, the subsumed outputs never have a chance to be expressed physically.
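That last-writer-wins arbitration can be sketched in a few lines. This is a hedged illustration assuming Python and three made-up behaviors (`cruise`, `avoid`, `escape`); the actual robots' code isn't shown in the thread:

```python
def cruise(world, cmd):
    cmd["motors"] = (1.0, 1.0)        # default: drive forward

def avoid(world, cmd):
    if world.get("obstacle"):
        cmd["motors"] = (-0.5, 0.5)   # turn away from an obstacle

def escape(world, cmd):
    if world.get("bumped"):
        cmd["motors"] = (-1.0, -1.0)  # highest priority: back straight up

# Lowest priority first: calling the higher priorities last means their
# write to cmd["motors"] simply overwrites whatever came before.
BEHAVIORS = [cruise, avoid, escape]

def scan(world):
    """One pass of the loop: run every behavior; the last writer wins."""
    cmd = {}
    for behavior in BEHAVIORS:
        behavior(world, cmd)
    return cmd["motors"]
```

Because the scan runs far faster than the motors can respond, the subsumed writes never show up physically, just as described above.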

You are right, though. Often it is just an issue of semantics. You can choose the paradigm that most appeals to you, or is easiest to think about. With my bias, of course, I like to see state made explicit. Obviously it's a passion of mine, because I feel state is such a fundamental concept. As programmers we've learned all sorts of tricky little ways to hide state: in flags, in variables, in the program counter, etc. You can say these differences are semantics, or you can say they are obfuscation. But then, whatever works for you works.

-- Randy M. Dumse


Reply to
RMDumse


Yes, that is what I'm concluding.

Subsumption was stated in such a broad way that it can be implemented on one processor using multitasking, or on several processors with multitasking and communications, or with a processor for each process without any multitasking, or even down to the level of dedicated hardware.

I agree.

-- Randy M. Dumse


Reply to
RMDumse
