AFSM Brooks timers

Anyone have a good description of what Brooks calls an "Augmented Finite State Machine"?

My best memory is that AFSM in Brooks' usage means adding timers. Reference "Mobile Robots" 1st edition pg 257, 2nd edition pg 299, footnote: "Strictly speaking, these timed operations do not fit the definition of a finite-state machine. Rather we must think of the structure described here as enhanced or augmented finite-state machines." But it seems "Augmented" FSM means something different to each author I've looked at by internet search.

I'd be curious to know Brooks' concept of an "Augmented" FSM more precisely, if anyone is up on that concept.

To wit:

I've been experimenting with using counters to augment state information in a MiniSumo robot, where the counters represent the latency since the last time each sensor had contact.

In the case of the line sensors, intended to keep the robot from falling over the edge of the arena, this means the line sensor with the most recent (smallest) count was the one which contacted the edge. This gives me information on which way to turn to avoid the edge of the arena and retake the center.

In the case of the ranging sensors, the sensor with the most recent detection of the "target" is the one to turn toward to re-acquire the target. So if the left detector was the last to see the other bot, and neither sees it now, then it must have escaped to the left, and an appropriate behavioral response would be to turn to the left.

Now it is easy to see how the last active detector could also be kept as purely digital state information, but what I notice about using the timer/counters is that a proportional response also becomes possible. There is therefore an "analog" content to the state information in the timer/counter that is not possible with ordinary state information.

For instance, if both the left and right sensors detect an edge and I back straight away, the timing each sensor pulls off will tell me my angle to the edge of the board, and I can compute the exact direction to the center of the arena. I can do that center computation with timer-augmented state machines, which I can't do with a purely digital state history of which sensor touched and which didn't (or both did).
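To put that in code terms, here's a minimal sketch in C of the kind of latency counters I mean. All the names (left_age, steering_term, COUNT_MAX, etc.) are made up for illustration, not from the actual MiniSumo firmware. One counter per line sensor counts control-loop ticks since that sensor last saw the edge; the smaller count picks the turn direction, and the difference supplies the "analog" proportional term.

#define COUNT_MAX 10000          /* clamp so very old contacts stop mattering */

static unsigned left_age  = COUNT_MAX;
static unsigned right_age = COUNT_MAX;

/* Call once per control-loop tick with the raw line-sensor bits. */
void update_latency(int left_on_line, int right_on_line)
{
    left_age  = left_on_line  ? 0 : (left_age  < COUNT_MAX ? left_age  + 1 : COUNT_MAX);
    right_age = right_on_line ? 0 : (right_age < COUNT_MAX ? right_age + 1 : COUNT_MAX);
}

/* Turn decision: steer away from whichever sensor saw the edge most
 * recently (smallest age).  -1 = turn left, +1 = turn right, 0 = none. */
int turn_away_from_edge(void)
{
    if (left_age == right_age) return 0;
    return (left_age < right_age) ? +1 : -1;   /* left saw it last, so turn right */
}

/* The "analog" part: the difference between the two ages is roughly
 * proportional to the angle at which the edge was crossed, so it can
 * feed a proportional steering term instead of a bang-bang choice.  */
int steering_term(void)
{
    long diff = (long)right_age - (long)left_age;
    if (diff >  100) diff =  100;
    if (diff < -100) diff = -100;
    return (int)diff;
}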

Reply to
Randy M. Dumse

From

formatting link
"One writes a subsumption program by specifying layers of networks of augmented finite state machines. These are finite state machines augmented with timers which can be set to initiate a state change after some fixed time period has passed."

Again I am left with the idea that timers associated with state machines are pretty much the whole idea behind Brooks' AFSMs. Anyone know of evidence of a larger context beyond simple timers?

Reply to
Randy M. Dumse

Randy, remember that Brooks writes about subsumption as it relates to behaviors, not sensory input. They're not the same, though behaviors are modified by senses. In your previous message, you described a system of detecting which sensor was triggered last, but this direct-to-hardware view isn't what Brooks tried to promote. Since Brooks wasn't trying to directly assimilate specific sensor inputs, how the timers were used or not used in AFSMs for this purpose doesn't really matter.

Continue on reading this document, and you'll get a better idea. The inputs and outputs they talk about are not direct sensor inputs, but message "pipes" from one layer or SM to another. Timers could then be used (merely as an example here) to indicate when a particular behavior response is no longer critical. For instance, a touch contact from a minute ago is probably not all that cogent a piece of information to continue processing, though it MIGHT be, given the behavior. As behaviors are associated with function, the function of the device will dictate how the behaviors are wired.

You can probably get to where you want with the approach you mention, but (it seems to me) you're still thinking like an electrical engineer, not an AI philosopher!

-- Gordon

Reply to
Gordon McComb

Actually, that's how railway signalling has worked since about 1938 - interconnected state machines with timers that cause state changes. See

formatting link

for a simulator that illustrates this.

John Nagle Animats

Reply to
John Nagle

Hummm. I get the impression direct-to-hardware is what Joe Jones is presenting in his book, "Robot Programming, A Practical Guide to Behavior-Based Robotics".

Again, my understanding of Brooks comes from reading Jones.

Basically Jones talks about two kinds of behaviors. One is servo. In this case, a reaction exists as long as a stimulus exists. The "trigger" is always active.

The other type of behavior Jones calls "ballistic". This takes the meaning of ballistic as in a gun shot. Once launched, the behavior continues according to a fixed sequence of actions, without further need of sensory input. Some trigger starts the behavior, but the action of the behavior may cancel the stimulus which created the trigger. Yet the behavior continues, even without the stimulus, until some criterion is completed. More than likely, that criterion will be a timer.
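To make the distinction concrete, here is a rough C sketch of the two kinds, with made-up names and drive values (not Jones' code): the servo behavior's output is recomputed from the stimulus on every pass, while the ballistic one latches on its trigger and runs a timed sequence to completion even if the stimulus disappears.

/* Servo behavior: output exists only while the stimulus exists,
 * and is recomputed from it on every pass through the loop.     */
int servo_follow(int target_bearing, int target_visible)
{
    return target_visible ? target_bearing / 2 : 0;  /* steer toward target */
}

/* Ballistic behavior: a trigger latches it, and it then runs a
 * fixed, timer-terminated sequence regardless of further input. */
typedef enum { IDLE, BACKING, TURNING } phase_t;
static phase_t phase = IDLE;
static int ticks = 0;

int ballistic_escape(int bumped)     /* returns a drive command */
{
    switch (phase) {
    case IDLE:
        if (bumped) { phase = BACKING; ticks = 40; }  /* latch on trigger */
        return 0;
    case BACKING:
        if (--ticks == 0) { phase = TURNING; ticks = 25; }
        return -100;                                   /* drive backward   */
    case TURNING:
        if (--ticks == 0) phase = IDLE;                /* criterion: timer */
        return 50;                                     /* rotate           */
    }
    return 0;
}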

But I do see the distinction, that the behavior is timed, rather than the input.

Yes, I see these "pipes", but that's part of my original complaint about reading Brooks: it's difficult to read past the extended high-level abstraction into what the actual implementation is about.

Thanks for the reply, Gordon.

You might be right. ;)

But why should state machines be the domain of AI philosophers? Why aren't there some electrical engineers publishing some research on state machine applications? I've found very little.

Well, I've been working with state machines since my introduction to them in Kohavi in 1973 at UNI. I'm still finding surprises.

I'm finding different ways of putting them together: using different techniques to implement subsumption, instantiating them in their calling sequences, arranging them as short-lived sequencers, and calling them from states in other state machines (which I used in a GPS parser).

Adding timer information (which I sometimes consider to be its own little state machine, sized by the number of bits in the counter) can add an analog aspect to the responses that I hadn't anticipated. In the MiniSumo (mentioned in the opening post) I'm now timing both the time since I've seen something and the time I've seen something. And I'm just beginning to grasp some of the implications.

With the line sensors I can tell my angle to the arena edge, so I know how to maneuver to go to the center of the ring. With the range sensors, I can choose to turn left or right after coming off an edge detection, knowing which direction I last saw the enemy. I can tell if something has relative motion left or right. I can tell if something is opening or closing. Given those bits of information, I can choose different attack strategies. I should even be able to make sweeping attacks from the side or back, as opposed to meeting head to head.

Anyway, I'm surprised by all I'm learning about state machines, and now timer-Augmented Finite State Machines. And I'm amazed there isn't some "FSM" guru in the universities out there publishing on such issues. Seems most are interested more in the natural language aspects of FSMs, which are not at all real time oriented.

I'd really like to see some serious research published on state machines, so if anyone knows a professor specializing in the field, please let me know.

Reply to
Randy M. Dumse

Do you mean there are references to state machines prior to the mid 1950's? That would be a very interesting reference to me.

I've glanced at the web site now.

Reply to
Randy M. Dumse

Electrical engineers deal with state machines all the time. They are still taught as a fundamental technique in introductory classes (usually in a digital logic course), and they're found all over the place, from hardware to software: everything from vending machines to microprocessors and software compilers. However, as system complexity increases, dealing directly with state machines can become a limiting factor; there is a need for abstraction.

I'd say the modern gurus of state machines are doing computer architecture. They specialize in finding efficient implementations and useful instruction sets. The book by Hennessy and Patterson is highly recommended (and surprisingly readable).

Research seems to have moved away from FSM's proper and into slightly higher-level abstractions. Take a look at Petri nets, for example. They are much better at encoding concurrency, multiple requirements, and random behavior. Neural networks are another field you might find interesting.
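For anyone curious, the core of a Petri net is just places holding tokens and transitions that fire when all their input places are non-empty. A tiny sketch (my own illustration, not from any particular text) shows why concurrency falls out naturally: two transitions with disjoint inputs can fire in either order.

#include <stdio.h>

/* Places hold token counts; a transition fires when every input place
 * has at least one token, consuming one from each input and adding
 * one to each output. */

#define NPLACES 4
static int tokens[NPLACES] = {1, 1, 0, 0};  /* p0, p1 start marked */

/* inputs/outputs given as index lists terminated by -1 */
static int fire(const int *in, const int *out)
{
    for (const int *p = in; *p >= 0; p++)
        if (tokens[*p] == 0) return 0;        /* not enabled */
    for (const int *p = in; *p >= 0; p++)  tokens[*p]--;
    for (const int *p = out; *p >= 0; p++) tokens[*p]++;
    return 1;
}

int main(void)
{
    /* t1: p0 -> p2  and  t2: p1 -> p3 are independent (concurrent):
     * they can fire in either order with the same final marking.    */
    int t1_in[] = {0, -1}, t1_out[] = {2, -1};
    int t2_in[] = {1, -1}, t2_out[] = {3, -1};
    fire(t2_in, t2_out);
    fire(t1_in, t1_out);
    printf("marking: %d %d %d %d\n", tokens[0], tokens[1], tokens[2], tokens[3]);
    return 0;
}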

In any event, don't confuse the tool with the purpose. State machines provide a clear abstraction for certain discrete tasks, but they are poorly suited to other tasks. Sometimes a little algebra or differential equations can solve a problem which can hardly be expressed in discrete terms.

Have fun building your robots. It's easy to lose focus, forever looking for where someone else may have done what you could just do yourself.

Later, Daniel

Reply to
D Herring

Or another way to view it, the input modifies the behavior, and the "criticality" of the behavior has some finite response. Even fear is "timed"; if the source (input) of the fear goes away, so does the fear. Should the source never go away, eventually the body modifies the behavior in other ways -- some people go into a catatonic state, for example. This also occurs in mice when in the jaws of a cat. (Interestingly, it turns out this genetically programmed behavior sometimes saves the mouse's life, as the cat occasionally loses interest, and lets it go.)

My belief is that Brooks decided to commercialize his ideas after writing up his basic theories, so there's little "meat" to his concepts. Much of what we have comes from other researchers afterward, like Jones (who works with Brooks at iRobot), or other authors who have augmented Brooksian augmented state machines. In doing so, the ideas are no longer pure Brooks, and there's a mishmash of concepts floating around. (My advice: investigate them all!)

They aren't just the domain of the AIers, but the implementation is different. A traffic light with loop detectors and pedestrian crossing buttons is a state machine, but it may not use behavior-based AI -- nor need it (some, indeed, use AI, especially lights that are part of a citywide grid). The timing relationships are carefully worked out, and maybe there's even a "supervisor" program that prevents any accidental programming that would cause conflicting green lights. But in any case, it's the approach to the coding that differs here, not the end result.

My point is that I'm not sure you *have* to understand Brooks or subsumption to use state machines, augmented or otherwise. Unless you're going after the whole behavior-based architecture, FSMs can be effectively used in other ways that might prove just as useful.

-- Gordon

Reply to
Gordon McComb

Alan Turing described state machines in the mid 1930s. I'm pretty sure he even called them that, though I know mathematicians have referred to the concept by many different names.

-- Gordon

Reply to
Gordon McComb

Hi Randy. On pg 298-299 of "Mobile Robots", Joe Jones et al discuss augmented-FSMs. Basically, they are FSMs which will remain in the same state for a certain timeout period before proceeding on to another state. Apparently, the original formal definition of an FSM did not include time as one of its parameters ... pg 299 footnote "Strictly speaking, these timed operations do not fit the definition of a FSM", so the term a-FSM was apparently invented to include the timed variety.

The example shown in the book is of an Escape behavior aFSM, where bumper switches can invoke 3 kinds of evasive maneuver, such as turn-left, turn-right, and back-up. The behavior has to remain in one of those 3 states "long enuf" for the obstacle to clear. If the timeout period is too short, then the robot may not properly clear the obstacle and may get hung up in a short iterative loop ... ie, bump, backup [a little bit], goto forward, bump, backup, goto forward, on and on.

I have a similar scheme built into my Walking Machine Controller. A walker is quite slow in changing behaviors, and can easily get hung up in such alarm cycling loops, so I added the capability to stay in an evasive follow-on state [or sequence of states] for a pre-determined #cycles after an alarm [eg, a bump] occurs, then go back to the original behavior. One example of an evasive behavior contains a succession of several states - bump, backup for 2 full-steps, then turn-left for 1 or 2-steps, then goto straight ahead [default]. Basically, I have a fairly straightforward state machine formulation, except that I've added the ability to stay in a given state for an assigned #cycles, plus the ability to set the period for timing loops in the given state, plus the ability to automatically link to a follow-on state after the #cycles is done. On top of this is also the alarm structure, which can breakout to another sequence, etc.
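A rough C sketch of that kind of arrangement, with entirely hypothetical field names (this is just my reading of the scheme, not the actual WMC data structures): each state carries a cycle count and a follow-on state, and an alarm can break out to a different sequence.

typedef struct {
    int cycles;       /* how many control cycles to remain in this state   */
    int next;         /* follow-on state entered when the cycles run out   */
    int alarm_state;  /* state to jump to if the alarm fires, -1 = ignore  */
} timed_state_t;

/* 0: walk straight, 1: back up, 2: turn left  (example table only) */
static const timed_state_t table[] = {
    { 1, 0,  1 },   /* walk: repeat forever, bump alarm -> back up      */
    { 2, 2, -1 },   /* back up 2 full steps, bump ignored while backing */
    { 1, 0, -1 },   /* turn left 1 step, then resume walking            */
};

static int state = 0, remaining = 1;

void step(int bump_alarm)
{
    if (bump_alarm && table[state].alarm_state >= 0) {
        state = table[state].alarm_state;          /* breakout sequence */
        remaining = table[state].cycles;
        return;
    }
    if (--remaining <= 0) {                        /* timeout-link      */
        state = table[state].next;
        remaining = table[state].cycles;
    }
}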

A differentially-steered bot can get by with relatively simple evasive maneuvers - eg, bump --> turn_away some amount --> go_forward - but a walker and other types of steering may require more complicated state "sequences" for evasive action, such as I mentioned above. You can see this in operation in the "Roam Free" movie of Nico2.

formatting link
Roam Free - n-roam1.avi

Each "bump", in this case from an IRPD trigger, invokes a sequence of successive timed states, including back-up and turn, before returning to go straight ahead.

In the "Robot Programming" book, Joe Jones also discusses the idea of adding execution periods [timeout delays] to FSMs without specifically calling them augmented-FSMs, like he did in the other book, but it's the same idea. From a philosophical perspective, even simple sensory-motor reactive robots, as well as living bugs, need more than just reflexes to operate successfully, even at a minimum level of intelligence. They also need a bit of short-term memory, else they will immediately get hung up in tiny repeat-loops. I imagine Brooks realized this the first time he built a real-life subsumption bot ... "hmmm, even a bug needs a little bit of STM ...".

Furthermore, if you follow the scheme that Jones talks about in the 2nd book, where each and every behavior executes [meaning is computed] literally 100s of times/second, then it helps to have an additional control structure besides just the arbiter; otherwise I imagine behaviors with the "same" level of priority will constantly be pre-empting and clashing with each other. The subsumption approach is really quite different from what we implement when we do more classical state machines, where we have a more straightforward and deterministic control structure. With subsumption, all behaviors are constantly trying to assume control over the system.
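As a concrete picture of the arbiter idea (a sketch only, with made-up behavior names, not Jones' or Brooks' code): every behavior is computed on every pass, each may or may not ask for control, and the highest-priority requester is the only one that drives the motors.

typedef struct { int left; int right; } motor_cmd_t;
typedef int (*behavior_fn)(motor_cmd_t *out);   /* returns 1 if it wants control */

static int escape(motor_cmd_t *o) { (void)o; return 0; }  /* stub: would bid on a bump   */
static int avoid(motor_cmd_t *o)  { (void)o; return 0; }  /* stub: would bid on IR range */
static int cruise(motor_cmd_t *o) { o->left = o->right = 100; return 1; }  /* default    */

static behavior_fn behaviors[] = { escape, avoid, cruise };  /* priority order */

motor_cmd_t arbitrate(void)
{
    motor_cmd_t cmd = {0, 0}, bid;
    int have_winner = 0;
    for (unsigned i = 0; i < sizeof behaviors / sizeof behaviors[0]; i++) {
        int wants = behaviors[i](&bid);        /* every behavior is computed each pass */
        if (wants && !have_winner) {           /* highest-priority bid wins             */
            cmd = bid;
            have_winner = 1;
        }
    }
    return cmd;
}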

- dan michaels =====================

Reply to
dan

Gordon, this sent me on an interesting search. I thought I remembered state machines in Kohavi's description of Turing's machine as well, but the notion was too foggy for me to be sure one way or another. So I looked for my copy of Kohavi (2nd ed. 1978). No luck. Pulled out "Evolutionary Robotics" instead, because the cover is a similar color. So without the reference in hand, I went internet searching. There are many references to Turing and state machines, but nothing close enough or specific enough to know if he used them or not.

I found a much earlier text, Introduction to the Theory of Finite-State Machines, by Arthur Gill, 1962. I searched his references, and the earliest I found was to work by D. A. Huffman of MIT, published in 1954. These predate the Mealy and Moore papers (1955, 1956).

Just yesterday I found my Kohavi on another shelf. Looking in it, I found about the same result: a discussion of a Turing machine as an abstract concept, without much of the approach Turing took himself.

Then today, looking up Turing in Gill, on pg 6, I found this footnote:

"The idea of a 'state' as a basic concept is the representation of systems was first introduced in 1936, by A. M. Turing (On Computable Numbers, with an Application to the Entscheidungsproblem, Proc. London Math. Soc., ser 2, vol. 42, pp 230-265, 1936-1937..."

So you are right. Turing originated the concept of state, and therefore FSMs. How did you know? Did you read the originals, or do you have a preferred reference on Turing's work which stuck with you?

Reply to
Randy M. Dumse

I got my exposure to digital logic and state machines in a math class in 1973, "Finite Automata", where we used Kohavi's 1st Ed.

My first application of a state machine was in 1982: a music jukebox based on tapes rather than records. It was a real-time challenge to run lights, accept coins, and play one song while queuing up another and rewinding a third. I was able to manage it all on a single 1MHz processor by applying FSMs to interrupt-driven code.

I wonder why you say that? Could you explain further your meaning? Are you saying designing with state machines is an insufficient design tool for large applications?

Which one are you recommending by those authors - Computer Architecture, or Computer Organization and Design?

Neural networks are familiar to me. I've heard of Petri nets, but haven't a working understanding of them yet.

Hmmm. I am again surprised by this position. I would think that state machines are suitable for any real-time programming task. I see no reason not to use algebra or any other sort of math in state machines.

I was going to say that's exactly what I've done with forward and inverse kinematics applications, such as the movie crane work I did with blueeyedpop for Panavision. But it may actually be the case that there is no algebra per se as part of the state machines themselves. I'll have to look at that.

Anyway, I see state machines as essential to real-time structure, and the procedural parts of the responses as not limited to purely discrete responses; they handle that much more easily and natively than, for instance, neural nets.

Reply to
Randy M. Dumse

Yes, I was quoting that section of Mobile Robots in my opening post. I really wonder why timed operations did not fit the definition of an FSM. Whose definition was it that left out timers? Where was that indicated? I wish I knew.

I suppose the original application of state machines was to understand digital logic, and when circuits grew more complex than a very few gates, they were used to describe the outputs given a set of inputs. So there would be no timers involved. But that seems to me to be limiting the tool to the inspiration for the tool, rather than seeing the implications of the tool in its own light.

Seems to me any conditional, whether digital logic, a timer, or even an analog level, can be used to trigger a transition from state to state. The exclusion of timers seems an arbitrary one, as does their inclusion to allow "Augmented" Finite State Machines.

Maybe I should call my approach "Liberated" Finite State Machines, or "Enabled" Finite State Machines. What matters to me is the structure and its implications. The conditions for transition, as long as they can be boiled down to a boolean answer, seem immaterial. As do the actions which may follow the transition, which do not have to be digital logic in nature. For instance, I often tag my state transitions with a brief string sent to the system terminal announcing the state change, so I can follow along by serial link (often an RF link).
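For what it's worth, that habit looks something like this in C (made-up names; the point is just that the guard is any expression that boils down to a boolean, and the entry action need not be digital logic at all):

#include <stdio.h>

static int state = 0;

static void goto_state(int next, const char *name)
{
    state = next;
    printf("-> %s\n", name);     /* stand-in for a serial/RF debug print */
}

void tick(int edge_seen, int timer_expired, int range_mm)
{
    switch (state) {
    case 0: if (edge_seen)          goto_state(1, "BACKUP"); break;
    case 1: if (timer_expired)      goto_state(2, "SEEK");   break;
    case 2: if (range_mm < 300)     goto_state(3, "ATTACK"); break;  /* analog-level guard */
    case 3: if (!(range_mm < 300))  goto_state(2, "SEEK");   break;
    }
}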

Yes, in another post I remarked that Joe Jones talks about two kinds of behaviors. One he calls "servo", where a reaction exists as long as a stimulus exists. The other type he calls "ballistic", as in a gun shot. Once launched, the behavior continues according to a fixed sequence of actions, without further need of sensory input. The sequence will be controlled by a timer.

So you detect bumps, and then do something else, even though the bump will go away almost immediately if you back up/off your original movements.

I'm curious: can anything other than time cause this escape sequence to terminate? What is the behavior if another bump occurs as it backs up the two steps?

I don't know if I agree that the evasive maneuver is any more complicated for a walker than for a differentially steered bot. It depends on the level of abstraction. At the level of the evasion, or avoidance, the same sequence is used. At the level of the locomotion, certainly the amount of signalling necessary to make the joints go is more intricate, but that's true of any motion. The evasion is still the same set of states: fwd-stop-back-turn-fwd.

I know I've looked at some of these movies from this page before, but I can't get the Windows Media Player to open these files.

Okay, I downloaded them, and then played it locally and was able to see Nico2 work his way out of a dead end corner (canyon).

This is essentially the same avoidance my MiniSumo does. If it detects a line (which should be the ring edge), it stops, backs up, then turns, then begins a forward motion.

At first, these are ballistic states, which happen in a purely time-based manner. What I've just added is the timing on the sensors. Now it makes variations on its escape move. The most prominent is that after the back-up, it turns away from the edge. The choice of turn is not fixed, but away from the sensor with the most recent sense of a line. So far this strategy seems to be working fine.

This is a bit like Braitenberg meets FSM. The MiniSumo turns away from the "light" after it has pulled back into the dark. It's not pure Braitenberg vehicle, and it's not extensively planned. And I'm thinking it is even a bit different from the ballistic idea of states.

One of the things I noticed on my first MiniSumo, Mike-3, was that my ballistic behaviors could get me in trouble. I ran up on someone's blade, which was reflective, and Mike-3 thought he was on a line edge, so he backed off. I had it set up so that if it saw a line on both front line sensors, it should back a long way to try to retake the center. Unfortunately, Mike-3 was a short distance from the edge. He backed right off the arena. He had no rearward line sensors and no programming to react to them even if they were there.

This led me to believe ballistic behaviors in themselves were rather dangerous responses. They need to be kept as short as possible, and need to be interruptible themselves.

Huh, I guess I missed he wasn't specific.

No, it's there, starting on pg 57, Finite State Analysis, and proceeding to the Escape Behavior we've been discussing.

So, I take your meaning that he did not mention "Augmented" in association with FSMs.

Yes, and that's what I'm finding with my timers augmenting my inputs. Treating them as short term memory and making decisions on them.

Yes, the idea of various behaviors offering up steering commands, with the highest priority being the only one that actually gets to steer, is not an obvious conclusion. We tend to "think" like we "think", or linearly with a train of thought, with a single "voice" in our head, and not like someone in a room of "sensors" all screaming for attention.

Subsumption seems more like the latter.

Reply to
Randy M. Dumse

Yes, well, maybe the guys who originally invented FSMs didn't think of this, but seems to me it hardly matters - except possibly in the formal sense [maybe in class, and in refereed pubs] - whether you call them a-FSMs or just FSMs, like J.Jones does in his 2nd book.

With my Walking Machine Controller software, you can leave a state in several ways -

a. Timeout-Linking [meaning on completion of a specified #cycles in the state], after which you automatically transfer to a specified "linked" state [there is only 1 linked state for any given state].

Note - what I call a state is actually better termed a "behavior", as each state/behavior can have a number of separate operations, eg, for moving walker legs, etc. My "states" are complete behaviors like walk straight, turn left, etc, each of which actually requires a lot of low-level activity [or which consists of smaller FSMs].

b. Synchronous Alarm Events. Each state/behavior has an alarm table specified, so sensor events can trigger state changes.

Note - that, with timeout-linking, you can have a "sequence" of states executed in response to an alarm event, etc. Take a look at the diagram here ...

formatting link
E-Ger is a mini-sumo frame that has multiple sensors front and rear for detecting table edge drop-offs. E-Ger actually uses the Walking Machine Controller software, since it's just powered by servos. The state diagram on the page gives you some idea how the WMC system operates.

c. A host cpu can also initiate state changes. The WMC can operate standalone by setting up the FSMs, state-links, alarm tables, etc, as described, and it can also be monitored and controlled by another processor with more smarts. The WMC was originally designed to control multiple servos for various walker gaits, but then I got going and also added in the alarm structure, timeout-linking, etc.

===================

Yeah, I'm not sure if a-FSM is a good term for this, or whether the field [academia] has special terms for different cases.

=========

I do this too. The WMC software has a report to host capability.

===========

..............

Yeah, to deal with this, I am able to disable certain alarm conditions when entering a new state. This prevents immediate reaction to the original or changed alarm condition. Also, I can specifically watch for the original alarm to clear, if I like. This is especially useful for evasive sequences, as mentioned above. You could have a timeout on the alarm-disable, but I did it more deterministically. When in state=x, alarm_y=disabled, etc. Or when in state=x, do-something when alarm_y=false.

===============

All other alarms, such as the backup sensors on E-Ger, are still enabled when responding to a given alarm - such as a right-front bump. Only the rt-front bumper alarm is disabled during backup. So, E-Ger won't back off an edge immediately after it hits a frontal object. However, of course, once you transition from the backup state to the turn-left state [eg, to move away from the orig alarm bump], and then to go-forward, you can have all sensor/alarms re-enabled again. As I mentioned, each state has its own alarm table, so they can have different actions for different behaviors.
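A sketch of that per-state alarm-table idea in C (hypothetical masks and names, not the actual WMC tables): each state carries a bitmask of which alarms it will listen to, so a front bump can be ignored while backing up without disabling the rear drop-off sensors.

#define ALM_FRONT_BUMP   (1u << 0)
#define ALM_REAR_DROP    (1u << 1)
#define ALM_LOW_BATTERY  (1u << 2)

/* index = state; value = alarms this state responds to */
static const unsigned alarm_enable[] = {
    /* FORWARD */ ALM_FRONT_BUMP | ALM_REAR_DROP | ALM_LOW_BATTERY,
    /* BACKUP  */                  ALM_REAR_DROP | ALM_LOW_BATTERY,  /* front bump masked */
    /* TURN    */ ALM_FRONT_BUMP | ALM_REAR_DROP | ALM_LOW_BATTERY,
};

/* Returns the set of alarms the current state should act on. */
unsigned active_alarms(int state, unsigned raw_alarms)
{
    return raw_alarms & alarm_enable[state];
}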

===========

A round differentially-steered bot with the wheels exactly midline front to back can turn w/o hitting something, and then simply go forward. I think J.Jones makes this point in the 2nd book. Evasive movements can be a little simpler than with other configurations, like Ackermann steering, etc.

==================

Nico2 is set up for standalone ops there, meaning timeout-linking and alarm tables are being used.

====================

This is really the same scheme being used with both Nico2 and E-Ger on my page. In each case, an alarm occurrence starts up a sequence of states linked by timeout-links, but in which all of the alarms on the system are still selectively active to deal with new events.

=================

As noted, E-Ger has the backup drop-off sensors, and the behavior programming for this was done just like for every other state. One nice thing about the WMC scheme is that there is no high-level programming. It's all done with tables. I can setup a new gait or other behavior in a matter of minutes. And change it and test it even quicker :).

OTOH, the max intelligence of the system is limited, which is why it's setup from the getgo to be controlled by another higher level [programmable in HLL/etc] processor.

===============

Which is why I have the complete alarm capability on every behavior [state].

================

Pure reactive devices aren't very smart.

=======================

Yeah, subsumption is powerful, but possibly not overly deterministic as regards what you get as output behavior. Winner-takes-all output, etc. Which is probably why Brooks used the "emerge" word. There is the arbitration routine to help indicate which behaviors take priority, but I imagine there would be a lot of thrashing around without the timeouts [the "augmented" bit]. Saves the concept from chaos.

BTW, this reminds me of Minsky's Society of Mind. I need to go back and re-read the parts about how to prevent the loudest shouters from ruling the day. With so many little agents, you need a good overseer mechanism. Our brains have 100s of subsystems working in parallel, but obviously the output is mainly controlled and serial, step by step, so how to control all of those parallel agents is a good question.

have fun,

- dan michaels =================

Reply to
dan

I wrote Joe for his input, and he confirms the idea. A state machine depends only on the machine's current state and its inputs. When an input change leads to a state change, that change occurs instantaneously. Augmenting an FSM with a timer allows you to delay state changes. If you used strictly classical FSMs there would be no way to say, for example, "After the bump occurs, back up for a while, then turn for a while."

This brings up a whole subject of abstraction levels in state machines, or modules of machines as parts of larger machines, and I've never seen a write up on these concepts done well.

I remember reading a write-up by a company... I think the name might be A Better Cad (ABC) or such (a priori apologies if I've got the wrong name; nothing shows up in several pages of googling), that had hierarchical levels to state machines, and I remember reading it and wondering what they were thinking. It seemed they'd just draw boxes around parts of state machines and make them subprocesses, but the reasoning in making substate sections absolutely escaped me. They also showed an example of a traffic light, which had only the slightest relationship to a real traffic light (having worked professionally on designing traffic lights at Eagle Signal).

So there's a whole field here with no significant literature, as far as I know, on making subsets of machines, and sequencers, and modules and abstractions. This is the sort of thing I wish I could find some university researcher establishing his name on. There is certainly a need for further research in these fields, in my opinion.

So it is very interesting that your highest-level abstractions are what you consider behaviors, and that FSMs run under them to support them. (For instance, I'm curious about S09, which sends Morse code 4 times. Do you have the programming you could show for the lower machines that do that? How do the lower and higher machines communicate? How does S09 know to wait for the lower machines to sequence?)

This was essentially what I was talking about in my post above to Gordon, about how the more I work with state machines the more I'm still finding surprises... finding different ways of putting them together, using different techniques to implement subsumption, instantiating them in their calling sequences, arranging them as short-lived sequencers, and calling them from states in other state machines (which I used in a GPS parser).

There seem to be principles, rules, and reasons for their uses, but I've never seen anything written that consistently describes those rules. At least not logically to my satisfaction.

Huh. Interesting. Seems to me the WMC was intended to be what I would call a sequencer: states follow rigid paths around in circles, or sequences without much deviation. This I take from the K-links, which exit one state to enter only one other state.

The alarm structure allows programming beyond continuous sequencing. You've added quite a bit of flexibility there, but there are very few inputs to which it can be applied. Is it 8 analog and 4 digital? Can the state of a submachine be another alarm trigger?

You know, I never liked the "Finite" part of State Machine anyway.

It's like saying Finite Software Programming. Nobody bothers. It's a given that the software will only have a finite number of instructions. There is never infinite memory to hold programs. But nobody bothers to point that out in the case of software these days, because memories always seem to be getting larger, so the sense is there is either enough memory, or there will be soon. In other words, memory capacities expand faster than we find need to fill them up. So "Finite" and "Program" just aren't worth worrying about together. Maybe when we get genetic algorithms that write themselves, but for now, with a human somewhere writing the software... there's no point discussing finite program memory beyond a given individual piece of hardware as currently configured.

I think the distinction dates to the limitations of Turing's day, and the idea was to keep the "electronics" small (finite), and make up the difference by having an infinitely long tape. So there was the finite "expensive" part, and an essentially infinite "cheap" paper/mag tape part. (Can you imagine a megabulk of electronic tubes? Neither could anyone else in 1937.)

Now, I think I'll stick to this point even though I've reread Feynman on Computation, and his argument that finite states only allow finite operations. My whole laptop has the same limitation, and yet we don't call them finite laptops. Yes, as Gordon says, I often think more like an engineer than an AI guy.

Let's just say I recognize both modes of thought.

Hummm... then I think this situation is not completely described in your graphical representation. Or I am missing something in the description.

Ah, then, since each behavior has its own settings for each alarm, this is the missing information I suspect is not shown in the graphic representation. Do you agree? Or am I misunderstanding?

Okay, now I understand. If the comment is about differentially steered round bots, yes that can be simpler. Of course, the MiniSumo being square doesn't fit that definition.

Noted.

I'm not sure I agree, but then, since I sell high-level language based controllers, that might be expected. We do have a graphical interface in the works, which outputs ASCII text source code from state diagrams. I don't know if tables really would be any faster than programming in our languages, but we can agree to note our differences here, and wait for later tests to decide.

Yes, understood... but... the communications link becomes the limit on max intelligence with these designs. As I am oft prone to say, communications adds two additional real time tasks to the process control already in progress, so rather than speeding things up, it makes three problems out of one.

Yes. Very important point.

One of the reasons I am so interested in state machines is exactly this point. To me, state information IS the essence of smarts.

We don't learn until we can remember. State information is remembrance of all the history necessary to make the decision. Often, boiling that history down to its barest essentials means we only need to store a bit (or a few) to describe the state we're in. In a thermostat program, we only need to know enough history to know whether we left the heater running or off, so that the new stimulus from the temperature sensor (too cold or too hot) can produce an appropriate output response.
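The thermostat really is the minimal example: one remembered bit plus the current reading decides the output. A sketch in C, with made-up thresholds:

#define TOO_COLD 19   /* degrees C, made-up thresholds */
#define TOO_HOT  21

static int heater_on = 0;   /* the entire state "memory": did we leave it running? */

int thermostat(int temp_c)
{
    if (heater_on) {
        if (temp_c >= TOO_HOT) heater_on = 0;  /* warm enough, stop */
    } else {
        if (temp_c <= TOO_COLD) heater_on = 1; /* got cold, start   */
    }
    return heater_on;
}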

So to me, state information is almost the same stuff as is used in an AI inference engine. I'll say it again: to me, state information IS the essence of smarts.

Yes, there are many other schemes of course as Arkin points out in his work.

Yes, we've discussed this before.

And a discussion of chaos also might be appropriate, but I'll save that for another time.

Indeed. I have The Society of Mind, but again, it is a text I've had only time to scan, and not give a thorough read.

Reply to
Randy M. Dumse

I am a fan (certainly not an expert) of the work Turing did during WWII that directly impacted the Enigma code decrypts. If I lived in the UK, I'd probably want to volunteer at the Bletchley Park museum. These guys did stunning stuff -- and much of it is still shrouded in secrecy.

So, in addition to Turing, look up Bletchley Park, Enigma, Colossus, and Jack Good. I'm not sure what online literature there is related to their work in FSMs, but the "bombes" they created to decode the Enigma messages were, as was the Enigma itself, basically mechanical finite-state machines.

-- Gordon

Reply to
Gordon McComb

OK. ==============

Since I've not read any theoretical books on FSMs lately, I can't comment about any of this. ====================

BTW, I updated the E-Ger page, with more info about alarm changes vis-a-vis state changes ....

formatting link
Regards S09, it's exactly the same as every other "behavioral state", except its behavioral table has instructions to manipulate port pins instead of move servos. I think I mentioned before, the way the WMC works is that it has tables of actions [eg, servo movements]. There are 44 entries [subcells] in a given table, which is enuf to specify at least 2 movements for up to 20 servos, plus change. In general, this is enuf to specify a complete gait for a hexapod or even an octopod, like Gimlee-U8, one of my other walkers. The 44 limitation comes from the amount of available RAM, but even at that, the tables get to be rather complex. Also, besides servo movements, table entries can also include port pin changes, plus a few other things. ===================

Again, I cannot comment about formal FSM theory. I just do what I want, and don't **ever** ask anybody else for permission ;-). Behaviorist theory is lost on me. ===============

Yes, of course. First and foremost, the WMC is intended to control gait sequencing+timing of multiple servos. The alarm structures, standalone FSM capability, etc, were added as I went along.

This gets back to what we talked about a year or so ago, regarding the philosophy of using co-processors versus not using co-processors. As I noted many months ago, my feeling is that certain tasks - vision and gait-generation, for instance - are ideal candidates for the use of co-processors, and this is the direction I've been going. When the main processor doesn't have to worry about the low-level details of timing multiple leg joints, it has more cycles to devote to functions like high-level decision making, etc. This is more critical when your main processor is a lowly B.Stamp or OOPic, etc, less so when it's an Isopod, with many h.w. timers and a powerful interrupt structure. =====================

Yes. 8 analog and 4 digital alarms. This is dictated by the #pins on the chips being used. This allows up to 12 sensors, which is pretty good for a small bot. And of course, the host processor can always be tied to a few more.

Regards the last question, it apparently wasn't clear, but there are NOT really "submachines". What there is, is that each behavioral state has a table of up to 44 movements/etc which are executed by the scheduler, which means first off that a state is really a behavior is really a complete gait, like take a step forward using timed movements of all 8 legs and 16-joints, and then repeat this N times. ========================

OK. You and Gordon will have to discuss history. I don't know much of it. =================

The updated E-Ger page shows a little better how the alarms are enabled and disabled as you enter new states. However, it doesn't really show the entire idea, as this would be too complex for the purpose of the page. I did a lot of trial and test regards different alarm structures, and finally came up with what I am using now.

All alarms are what I call "pseudo-static". This means they are all "global", and remain in effect until specifically changed. Certain alarms, like the bumpers, etc, are typically enabled and disabled as a new state is entered. This prevents tiny-looping problems, like we discussed last time. However, other alarms like the low-battery shutdown, for instance, are set once at bootup and never changed thereafter. They could be changed, but typically will not be.

Also, since all alarms are global, this means the system can be in ANY behavioral state out of the 126 possible, and the low-battery alarm can occur, and the WMC will branch to the sequence of routines for dealing with this situation. This isn't specifically shown in the state diagram on the E-Ger page, since I would have to have 12 lines, plus a K-link, coming out of every state in the figure. Too complicated for the purpose of the page. ===============

As discussed above. =============

Well, the WMC is a co-processor and was specifically meant to be a slave to another host. ===================

This is why you have a co-processor which has enuf capability to do its job on its own, without the host having to supervise too closely. The host simply says "use walking straight-ahead gait", and the WMC takes care of it. Regards reports back to the host, I only ever send 4 bytes back - the current WMC behavioral state, 2 bytes with alarm flags [ie, those which are currently triggered], and a 4th byte [basically optional] that specifies the state of the WMC [on, off, executing, etc]. Minimal comms. ====================

It will take me a while to dig through SOM. Basically, it's a hodge-podge of about 300 different ideas about the mind and intelligence, without an obvious structure [that I can perceive], plus from page to page it jumps from the top of the discourse to the bottom and back, etc. One page talks about low-level agents like "move a block", and 2 pages later it's on to consciousness, etc. That being said, it does have a grand array of ideas that are generally useful for AI. It's just gonna take a while to dig out the useful ones.

Specifically, I'm looking for those ideas that can serve as useful control structures for a lot of FSMs, agents, etc - something more deterministic than going the subsumption route. So far, there are 3 useful ideas I've run across - conflicts percolate up hierarchies, it might be useful to have unresolvable situations habituate over time [ie, go away], and you might use a "supervisor" [Minsky's B-brain] specifically intended to constantly oversee lower-level processes and which has "just enuf" power to prevent them from getting hung.

Reply to
dan

Do you have any documentation of what the entry in a subcell looks like? How big are the subcells? How do you select whether you are moving a servo or changing an output pin (such as when you're sending Morse code)?

I'm not wild about table-driven state machines. I don't find them to be efficient. At least they are regular, though! And they are the way many FSMs are described in the literature I've mentioned.

BTW, when things "get to be rather complex", I suspect what is happening is you have several simpler machines forced into a single machine, and possibly factoring out machines would greatly simplify the problem.

This opinion traces back to the Eagle Signal traffic light I've mentioned before. For years they had tried to get a state diagram for their traffic machines, but no one had been successful in drawing one; it was entirely too complex. I succeeded where others hadn't, by not trying to draw a single machine. Instead I found there were (afaicr) 7 separate machines, all with interprocess communications. Once I broke up the problem into more machines, each machine became quite "local" and greatly simplified.

I think if you imagine seven machines with a few states each, then try to write a single state machine to replace them, you can see how the combinations and permutations of finding one state to describe the conditions of dozens of states just explode.

So you might find your RAM goes further if you make several smaller tables, rather than one large one. But again, the WMC has another purpose besides being a general state machine controller, and that purpose probably has its own special requirements.
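To put a number on the explosion: if the controller really is several independent concerns, the single-machine version needs roughly the product of their state counts, while the factored version only needs their sum plus a little interprocess signalling. A toy sketch in C (purely illustrative, nothing to do with the actual Eagle Signal design):

/* Two small machines coupled by reading each other's state, instead of
 * one combined machine that would need 3 x 2 = 6 product states.      */

typedef enum { NS_GREEN, NS_YELLOW, NS_RED } phase_t;   /* machine 1 */
typedef enum { PED_WAIT, PED_WALK } ped_t;              /* machine 2 */

static phase_t phase = NS_GREEN;
static ped_t   ped   = PED_WAIT;
static int     phase_timer = 40;

void tick(int ped_button)
{
    /* machine 1: light phases, driven by its own timer */
    if (--phase_timer <= 0) {
        phase = (phase == NS_GREEN)  ? NS_YELLOW :
                (phase == NS_YELLOW) ? NS_RED    : NS_GREEN;
        phase_timer = (phase == NS_YELLOW) ? 5 : 40;
    }

    /* machine 2: pedestrian crossing, reads machine 1's state (the
     * "interprocess communication") rather than being merged into it */
    if (ped == PED_WAIT && ped_button && phase == NS_RED) ped = PED_WALK;
    if (ped == PED_WALK && phase != NS_RED)               ped = PED_WAIT;
}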

While we are not far out of agreement, I continue to think there must be a better way. I suspect the problem is in the coupling between the coprocessors.

For instance (think back 15 years), if you had a 386 with its 387 math coprocessor, and the link between them was a 1200 baud serial channel, you wouldn't be too happy with the performance. But since the two processors were linked by the same high-speed bus, the performance was a major advance.

Coprocessing over a slow serial channel just leaves me cold.

Yes, there we agree. But this ties into what I was saying about coupling, because if the h.w. timers, etc., weren't on the processor bus, the IsoPod(TM) would be nearly as limited as the other micros.

The most powerful form of coprocessing would be one where state information came directly into the host processor's condition code register, and it could test and branch on it as easily as "zero, carry, etc." Now that would really take the overhead of coprocessing down. (Or even take an interrupt or context switch on a coprocessor change of state.)

In the IsoPod(TM) you instead have to do a load from an address and test to find out if a timer has finished, etc.

But the processors you mention, strung together serially, have to receive multiple bytes serially, then look up and translate the message sent, before they can know of the state change of the coprocessor. While they themselves are slow, the process of interprocessor communications is orders of magnitude slower yet.

Thus my point: it doesn't matter what processor you use; if the communications is the limiting factor in coprocessor cooperation, then it is the limiting factor. Faster processors won't help.

So this is probably the clearest statement I have on how the WMC works. But are the movements attached to an output, of which there are 22? And then a second output to follow, making 44?

And what is the scheduler? Do you get to pick how long the delay is between the first 22 positions and the second? And then after that too?

Sorry for being dense. Obviously I bring preconceptions to the game that are standing in my way of conceptually grasping just what you've done.

Again I am reminded of the multiple-machine context. To me, when complexity is too great to show, it means there is something else happening and it should be described separately, as another machine. This problem sounds like a two-state machine to me: Operate, and Low-battery-get-home! Then subsumption-like priorities take over if in Low battery.

Yes, minimal comms, but a minimal comm is a single wire with the logic level answer on it.

I think about these low-priority tasks too, and suspect they have an "I" term, as in PID. What I mean is, if a low-level task has an error state long enough, an integrative term will bring it up in priority.

Example: Imagine you're running from a bear. The "pebble in your shoe" alarm just doesn't get serviced. On the other hand, with no "bear alarm", even though a "walk the trail" behavior has a higher priority than the "pebble in your shoe" alarm, eventually you will find the pebble increasingly annoying until you subsume the walking to remove the pebble. The longer the pebble annoys you, the more likely you are to put aside moderately higher priorities - particularly if the pebble removal might actually improve the higher priority of "walk the trail".

Hummmm... thinking about it just now, the "walk the trail" behavior might have just the opposite of an I term as well. The longer you do something, the more likely you are to become "bored" and accept otherwise lower-level priorities, which we normally think of as "amusing distractions".
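The "I term" idea might sketch out something like this (made-up numbers and names, just speculation on my part, not rigorous PID theory): each pending task accumulates urgency while it goes unserviced, so the pebble eventually wins. The "boredom" variant would just be a similar decaying term applied to whichever task is currently in control; I haven't coded that part here.

#define NTASKS 3

static int base[NTASKS]    = { 90, 50, 10 };  /* bear, walk trail, pebble */
static int pent_up[NTASKS] = { 0, 0, 0 };     /* integral of neglect      */

int pick_task(const int active[NTASKS])
{
    int best = -1, best_score = -1;
    for (int i = 0; i < NTASKS; i++) {
        if (!active[i]) { pent_up[i] = 0; continue; }
        int score = base[i] + pent_up[i];      /* base priority plus built-up urgency */
        if (score > best_score) { best_score = score; best = i; }
    }
    for (int i = 0; i < NTASKS; i++) {
        if (!active[i]) continue;
        if (i == best) pent_up[i] = 0;           /* serviced: integral resets  */
        else if (pent_up[i] < 200) pent_up[i]++; /* neglected: urgency builds  */
    }
    return best;   /* -1 if nothing is active */
}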

Now, tying back into the original thread, I guess since the I term is an integral over time, it seems FSMs in nature really are commonly tied to timers.

Reply to
Randy M. Dumse

see later ... =====================

I've not studied how other people have defined FSMs enuf to know how they do it using tables, I just did it my own way. However, one nice thing about my approach, that I've had in the back of my mind, is that - had I still some RAM and ROM space left in my chip - I could fairly easily implement a genetic algorithm routine for learning gaits. Something in the long to-do queue. Basically, the values in the tables would be easy to manipulate using GAs. ====================

What I really meant here was that the tables of 44 entries can get rather complex. After all, what you're doing is defining an entire gait with one of those tables - all of the movements needed for a walker to take one complete step, involving movements of up to 20 servos. ====================

Joe Jones talked about this in his 2nd book. He discussed an FSM that got very complex, very fast, and it was obvious that he was leading up to the idea that the FSM could be greatly simplified by factoring it. ================

As I mentioned in the last thread, I actually have a "lot" of functionality off-loaded to the WMC coprocessor chip - ie, generation of the entire gaits - so that WMC-host comms are extremely minimal. Essentially, all the host says is "walk forward", and all the cop says back is "I'm in this behavioral state" and "the following alarms are active". This only amounts to 4-5 bytes sent in each direction, every few 100 msec or so. =================

I thought I had already mentioned it a couple of times, but the table is a table of servo movements, and the scheduler reads the table and executes the movements as directed. There are 4 subcells in each table entry, the 1st being the time to take the action, and the other 3 the specifics of the action. You could have up to 44 movements for 1 servo [which could give an extremely complex repertoire of movements for that servo], 1 movement for 19 servos and 24 movements for 1 servo, 2 movements for 20 servos, or any combination, on and on. The table method is extremely flexible. Also, the entries can be another action, pin I/O, etc, as dictated by the info in the 3 additional subcells. BTW, I should add that I went through several permutations on how the WMC chip should work, starting back in 2002 with my 1st walker, Nico-1, which had a very rigid control scheme. The current table approach is incredibly flexible - which is why I can have Morse code on a pin if I want it. ====================
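Reading that back, a table entry might be pictured roughly like this - a completely hypothetical layout and set of field names, just one picture of "4 subcells, the 1st being the time", not the real WMC format:

enum { ACT_SERVO, ACT_PIN, ACT_END };

typedef struct {
    unsigned char when;    /* subcell 1: time (ticks into the cycle)  */
    unsigned char kind;    /* subcell 2: servo move, pin change, ...  */
    unsigned char target;  /* subcell 3: servo number or port pin     */
    unsigned char value;   /* subcell 4: position or pin level        */
} entry_t;

#define NENTRIES 44

/* The scheduler walks the active behavior's table once per tick and
 * performs every entry whose time stamp matches the current tick.   */
void scheduler_tick(const entry_t table[NENTRIES], unsigned tick)
{
    for (int i = 0; i < NENTRIES; i++) {
        if (table[i].kind == ACT_END) break;
        if (table[i].when != (unsigned char)tick) continue;
        if (table[i].kind == ACT_SERVO) {
            /* set_servo(table[i].target, table[i].value);  -- placeholder */
        } else if (table[i].kind == ACT_PIN) {
            /* set_pin(table[i].target, table[i].value);    -- placeholder */
        }
    }
}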

Again, it's more flexible than you seem to have glommed onto. There are 126 tables total, and they can be grouped together in any fashion using the k-links. Having a walking-roaming-avoidance sequence [or FSM] is just one. Having a homing sequence [FSM] is another. These only take up maybe 10-20 tables, and you still have 100+ left. So, you can have another k-linked FSM for sitting/standing - 2 tables. Another for a dance - 1 table. Another to scan a servo-mounted sonar sensor and take action based upon reflections - 1 table. On and on. Up to 126 tables total. ===================

What Minsky calls his B-brain is actually something that I had thought about before reading his book, and is something that can very easily be implemented on a small bot. If you will recall, Joe Jones talks about the problem with subsumption bots that they can easily get stuck in moderate-sized loops - for example, going back and forth across a small hallway: hit one side, go to the other and hit it, go back and hit the other again, on and on. Forgetting about subsumption for a moment, this is also a problem with the FSM-WMC approach - triggering on real-time sensor-alarm events. However, it would be fairly easy to implement a B-brain to monitor for repetitive-looping activity, and break out to something else. I had assumed the WMC host processor would take care of this. With a little STM, it could store and see that the WMC is repeating the state sequence, say 1-2-3, 1-2-3, 1-2-3, etc, and break out. ======================
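That repetitive-loop watchdog is easy enough to sketch in C (my own illustration, hypothetical sizes and names, not the actual WMC/host code): keep a short history of recent behavioral state numbers and flag when the tail is just a short cycle.

#define HIST 12

static int hist[HIST];
static int n = 0;

void record_state(int s)          /* call whenever the WMC reports a new state */
{
    for (int i = 0; i < HIST - 1; i++) hist[i] = hist[i + 1];
    hist[HIST - 1] = s;
    if (n < HIST) n++;
}

/* Returns the loop period (2-4) if the recent history is just a short
 * repeating cycle like 1-2-3, 1-2-3, 1-2-3; returns 0 otherwise.      */
int stuck_in_loop(void)
{
    if (n < HIST) return 0;
    for (int period = 2; period <= 4; period++) {
        int repeats = 1;
        for (int i = HIST - 1; i - period >= 0; i--) {
            if (hist[i] != hist[i - period]) { repeats = 0; break; }
        }
        if (repeats && HIST / period >= 3) return period;
    }
    return 0;
}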

Reply to
dan
