What is Subsumption?

In Mobile Robots, section 9.2, Jones and Flynn say: "Brooks's subsumption architecture provides a way of combining distributed real-time control with sensor-triggered behaviors. Subsumption architecture, instead of making explicit judgments about sensor validity, uses a strategy in which sensors are dealt with only implicitly in that they initiate behaviors.

"Behaviors are simply layers of control systems that all run in parallel whenever appropriate sensors fire. The problem of conflicting sensor data is then handed off to the problem of conflicting behaviors. Fusion consequently is performed at the output of behaviors (behavior fusion) rather than the output of sensors. A prioritized arbitration scheme is used to resolve the dominant behavior for a given scenario.

"Note that nowhere in this scheme is there a notion of one behavior calling another behavior as a subroutine. Instead, all behaviors actually run in parallel. When higher-level behaviors are no longer triggered by a given sensor condition, they cease suppressing the lower-level behaviors, and the lower-level behaviors resume control. Thus, the architecture is inherently parallel and sensors interject themselves throughout all layers of behavior."
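For concreteness, the prioritized arbitration the quote describes might be sketched roughly like this. This is only a minimal illustration, not code from Jones and Flynn: the behavior names, the Command struct, and the two-behavior stack are all invented here.

```c
/* Minimal sketch of fixed-priority behavior arbitration.
 * Behavior names and the Command struct are illustrative only. */
typedef struct { int left, right; int active; } Command;

/* Each "behavior" reads its sensor and either fires or doesn't. */
Command escape_behavior(int bumper) {
    Command c = {0, 0, 0};
    if (bumper) { c.left = -1; c.right = -1; c.active = 1; }  /* back up */
    return c;
}

Command cruise_behavior(void) {
    Command c = {1, 1, 1};    /* always wants to drive forward */
    return c;
}

/* Arbitrate: every behavior runs; the highest-priority active one wins. */
Command arbitrate(int bumper) {
    Command out = cruise_behavior();       /* lowest priority */
    Command esc = escape_behavior(bumper);
    if (esc.active) out = esc;             /* escape subsumes cruise */
    return out;
}
```

Note that both behaviors are computed on every pass whether or not their output is used; that is the point Randy questions below.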

Is subsumption really necessary, or is this just a fancy name for multitasking? Is this just an issue of creating the illusion of parallelism on a serial machine? Can the same thing be written as FSAs without the need for the concepts of subsumption? Thoughts?

Reply to
RMDumse

Re-read the previous sentence a couple more times. Also, look up the definitions of subsume and subsumption in the dictionary.

Subsumption is really the "priority arbitration scheme" in which certain behaviors, usually lower-level ones, can take over control of the system under certain conditions. You will notice, in Joe Jones' book, he mentions that each behavior is actually executing hundreds of times per second, and for the prioritization scheme to work properly, each behavior, regardless of level, must have real-time, current information.

On pg 135 of Arkin's book, he says: "... The name subsumption arises from the coordination process used between the layered behaviors within the architecture. Complex actions subsume simpler behaviors. A priority hierarchy fixes the topology. The lower levels in the architecture have no awareness of higher levels. This provides the basis for incremental design. Higher-level competencies are added on top of an already working control system without any modification of those lower levels ..."

Also look at the picture on pg 94 of Jones' [new] book. Subsumption is really the multiple arbitration scheme.

Where Arkin says "... added on top of an already working control system without any modification of those lower levels ...", this means those lower levels still continue to function, and without any awareness of the higher levels. This is where the multitasking comes in, but the arbitration scheme is the key.

Reply to
dan michaels

But if the lower levels have no awareness of the higher levels turning their output off, wouldn't it be better or more efficient not to run the lower levels in the first place? Their outputs are subsumed anyway. Unless they are the highest priority, their operation is just another way to waste computer time. Why calculate an answer to be thrown away? What possible use is there in running multiple behaviors, when only one behavior really winds up controlling the outputs anyway?

All the effort to instantiate behaviors that then aren't used just slows down the computing. So really the question boils down to this: is there an equivalent computational model to subsumption? In subsumption you go all the way through the calculations of each behavior, and then the arbitrator takes the results, which either are there (if the thresholds are met) or aren't there (if the thresholds are not met), and takes the first set of outputs in the priority scheme that are there. Wouldn't an alternative work that first looked down the thresholds until one was found active, then calculated the output based on that behavior alone?

I am being somewhat rhetorical in this question, because I have an answer to it, but it brings up yet another problem with the theory which needs to be discussed. I thought I'd first see if there was discussion about the idea of a different approach to subsumption, call it pre-subsumption-behavior-selection, as opposed to the current Brooksian model, which would be post-behavior-arbitration. Are their outputs not equal? Is not pre-subsumption-behavior-selection actually much more computationally efficient?
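To make the pre-subsumption-behavior-selection idea concrete, here is a rough sketch (all names hypothetical): scan the trigger conditions in priority order and compute only the first behavior whose trigger is met, rather than computing every behavior and arbitrating afterward.

```c
/* Sketch of "pre-selection": check cheap triggers in priority order,
 * then compute only the winning behavior. Names are illustrative. */
typedef struct { int left, right; } Motor;

int escape_triggered(int bumper) { return bumper; }  /* highest priority */
int follow_triggered(int ir)     { return ir; }

Motor run_escape(void) { Motor m = {-1, -1}; return m; }  /* back up  */
Motor run_follow(void) { Motor m = { 1,  0}; return m; }  /* turn     */
Motor run_cruise(void) { Motor m = { 1,  1}; return m; }  /* default  */

Motor select_and_run(int bumper, int ir) {
    if (escape_triggered(bumper)) return run_escape();
    if (follow_triggered(ir))     return run_follow();
    return run_cruise();   /* nothing else fired: lowest priority wins */
}
```

The subsumed behaviors are never computed at all, which is exactly the efficiency gain being asked about, and also exactly the problem for state-based behaviors raised later in the thread.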

-- Randy M. Dumse

formatting link
Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

I don't know how subsumption systems are actually implemented, but of course, "running" a lower level when its output is being thrown into the bit bucket is a total waste of computing resources, and I would assume on any system where it made a difference the actual implementation would be different from the logical abstraction that describes it. Whether Brooks had developed languages and compilers to take care of these implementation details, or whether it was dealt with by hand coding, I have no clue. Or maybe, in all his typical applications, the computing load was so low as to make no difference?

Of course. There always are.

But if the behavior is something no more complex than:

// go forward behavior
if sensor = ON then
    RightWheelSpeed = 1
    LeftWheelSpeed = 1

then you aren't talking about something that's going to make much difference if it's "constantly running" each loop and being ignored. And my limited understanding of subsumption is that something like the above is typical of their "behaviors".

Reply to
Curt Welch

As I see it, the answer to this question is that, you need the output from each of the individual "behaviors" in order to perform the arbitration correctly.

First off, the "behaviors" are really simple sensor-input --> compute --> output-value routines. The term behavior may be a cause of confusion. These are very simple routines, as befits the nature of the mechanical beasties here. They aren't performing a lot of high-level symbolic computations; subsumption forgoes those sorts of things, by definition. So, it doesn't take a lot of processing power to compute the individual behaviors, as the processing tends to be minimal. No memory, no symbolics, no language, no sophisticated sensory processing, like for vision, etc. Basically, simple I/O, which is of course the reason why such bots are ultimately quite limited in what they can achieve. And which is why many people have patched the basic idea into so-called "hybrid" systems.

If you look at some of Jones' pictures, they will show, say, bumper --> escape behavior --> suppression of all lower behaviors --> motor output. So, two things are going on here. First, escape suppresses all other behaviors, and secondly, it sends relevant signals to the motors. The computations involved here are basically trivial. IF(left bump) THEN(turn right), etc. The signals to the motors are simple, and the arbitrator takes care of suppressing the other behaviors.

Secondly, you will notice that, on Brooks' original robots, he used multiple small processors to compute individual behaviors. Eg, he had a separate 8-bit cpu on each leg, etc. So, he was actually doing multiprocessing.

Also, as Arkin points out, and is fairly obvious from Brooks' scheme, it is roughly based on how the brain operates. IE, there are many levels of processing going on simultaneously, and with the higher later-evolved levels taking over control from the lower levels, when performing more complex tasks, but with the lower levels able to break in and re-assume control in critical situations, like fight or flee taking over from feeding, etc.

This is a very different idea from what we're all used to doing, from the old days and/or "currently" with most programming. The brain evolved, and wasn't designed the way we design our computer algorithms. New, more powerful systems evolved on top of the older systems, but the older systems are still there and operational. Look up "triune brain".

This is also one of the truly incredible things that is coming out of current molecular biology research. Parts of the genome that "work", and aid survival, are "conserved". This is why some of the proteins we have and the DNA that transcribes them exist all the way back to bacteria. Also, something like 90% or more [whatever] of our DNA is so-called junk DNA. These are mainly evolutionary dead ends: DNA that was once active, in earlier forms, but whose functions have since been taken over by other functional parts of the genome. The junk DNA no longer works, and has not been conserved, but it's still in there. Apparently there is no mechanism to get rid of it. I've got a lot of code like that, too. Similarly, it seems the brain re-uses and modifies earlier-evolved modules, rather than getting rid of them wholesale.

We're listening :).

Reply to
dan michaels

You are "so" not alone.

I chose Mobile Robots to quote for exactly that reason. Of all the writings of Brooks and his people, Joe Jones most often explains the practical, and he and Anita Flynn really wrote a useful classic in Mobile Robots. They come closest of all to explaining code for subsumption. Otherwise, subsumption is a pretty high level concept left floating for any possible transcription to lay claim to following.

Yes, that is the premise. But of all people, from our past discussions, I expected you to find the flaw in the plan to not give every behavior a time slice. :)

Well, having read for exactly that answer, I have picked up just a few clues. Brooks was involved with Lucid-3D, a hypertext spreadsheet which was way before its time. Or after. Anyway, it was brilliant and missed its mark. So Brooks is apparently quite a talented programmer in his own right. He seems to be a LISP programmer/expert. I don't know what Lucid-3D was written in. (I owned a copy and used it, anybody else?) There is mention of two implementations of languages he seems to have written himself, I assume both in LISP. I assume it was easier to explain his language with pseudocode descriptions than it was to explain the language and publish the source code, because the source has been conspicuously missing from all the publications I've been able to find.

By '93, when Mobile Robots was printed, the examples had become Interactive C based. They still call it pseudocode in most places, but in some places they have actual C code.

So it looks to me they normally try to write in high level language, in early papers the high level language was something Brooks wrote, and later they started using C.

I doubt they ever thought their load was low. The reason I say this is that while any individual behavior, such as cruise, might take only a dozen microseconds (if compiled) on a micro of the day, they rerun these behaviors as often as they can in the outer scan loop, so the combination of multiple behaviors, run as often as possible, particularly with the interactivity adding to the load, turns into a significant computational load.

There are mentions of multiprocessors in the early robots.

What was the count of Genghis behaviors? I don't have the book here, but I think it was something over 50 behaviors. Even if the individual behavior is small, 50 of them run 1000 times a second would be 50,000 µs of work per second, even if a call were only 1 µs long.

Actually, being quite familiar with the HC11, I'd imagine their call, setup and return would more likely be around 200 cycles, or 100 µs, as a rough average estimate (some routines shorter, approaching 40 µs, and some much longer, moving the average). So the real-time load we could estimate for the 50 routines would be 5000 µs a pass; run at 200 times a second, that would completely consume an HC11.
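That estimate can be written out as a small calculation. The figures here (50 behaviors, ~100 µs per call, 200 passes per second) are the rough guesses above, not measured values:

```c
/* Back-of-envelope load estimate using the rough figures from the
 * discussion above; nothing here is a measured value. Returns the
 * microseconds of behavior computation demanded per wall-clock second. */
long subsumption_load_us_per_sec(void) {
    long behaviors      = 50;    /* Genghis-scale behavior count      */
    long us_per_call    = 100;   /* ~200 HC11 cycles per call at 2 MHz */
    long passes_per_sec = 200;   /* outer scan-loop rate              */
    return behaviors * us_per_call * passes_per_sec;
}
/* 50 * 100 * 200 = 1,000,000 us/sec: one full HC11 second, consumed. */
```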

The HC11 is a 1987 product. The 8051s and 6502s that precede it are not that much slower. Brooks's original papers on subsumption date from close to that same period (1986 iirc).

So save for one point I am in complete agreement with your assessment. A good compiler approach should have stripped out all the calls to the behaviors which weren't active. Subsumption running behaviors that don't matter is a very good simulation of nature. But it's a very wasteful (inefficient) computer design.

-- Randy M. Dumse


Reply to
RMDumse

Um, the description of a system (i.e. subsumption) is not directly tied to the example language they use (e.g. Behavior Language or Interactive C etc.) per se. I know a little of this stuff because I was hanging around MIT a bit during the mid-80s, and I did my master's thesis on a subsumption multitasking kernel. Joe Jones bought a compiler or two from us too, but I am not sure whether they are used officially in any iRobot products.

The high-level behaviors are usually not run based on timers. Think of them as event driven. An event fires, and some high-level "function/behavior" eventually kicks in...

The HC11s were not taxed at all. The only reason they put one HC11 on each of Genghis' legs was because they could. They were playing with multiprocessing not because they needed to, but because they could.

Not sure if you have a full grasp of how subsumption works....

Reply to
Richard

I agree, but as a Forth aficionado, I know it's often easier to try to explain something as pseudocode rather than teach someone how to read Forth. I suspect, although I am purely speculating, the Behavior Language suffered similar difficulties in translation, since I don't see it persisting in the literature anywhere.

Now that's interesting. Is your thesis web accessible?

Did you stop at your Masters? Or go on? I remember talking to Joe Jones, who decided to stop at a Masters. He and I had pretty similar backgrounds before that; both did physics for our bachelor's. Viet Nam ended my education there, as I had a low draft number, and signed up with the ROC program.

Good to know they were HC11s. I wasn't sure they were. I see the Genghis paper is dated '89, so that's about the right time frame. Do you happen to know what the communications link between the HC11s was?

Yeah, funny, I get that alot.

David P. Anderson just suggested the same thing. Our discussion of subsumption last Tuesday at RBNO (Robot Builder's Night Out) had a lot to do with the inspiration of the o.p. Of course I imagine I understand BBR and subsumption pretty well, enough to criticize it, perhaps as well as can be given what's published and not an MIT insider.

So let me describe the example I use in my Intro to Robotics class. The class is focused around the building of a Mini-Sumo, and concepts are taught along the way. Here's how I explain subsumption.

After teaching line-following concepts, we again apply Braitenberg concepts with the Sharp IR range sensors to get the robot to follow a target by turning toward the sensor that last saw something. When both sensors are on, both motors drive. If one sensor loses contact, the wheel on the opposite side is stopped. For instance, if the right sensor loses contact, the left wheel will be stopped, swinging the robot body toward the left, and vice versa. We implement this in a single FSM which always drives one or both wheels forward.

Then we develop a second FSM that only generates outputs when the front edge sensors (triggers) see the white line. Again, the machine is a little more complex than a purely reactionary servo response. In this case it implements a ballistic escape behavior. First it backs straight up, then it turns away, depending on which sensor comes off the edge first, to orient itself pointing toward center. So the backing behavior only produces back-up commands on one or both wheels. When the escape behavior is complete, the routine no longer generates backup commands.

The FSMs are called in a priority order, with the search called first, generating its outputs into the hardware PWM generation registers; then the backup machine is called, replacing the commands in the hardware PWM generation registers if it is triggered. Because the background task runs 100 times a second, and the PWM generation is applied to the modified RC servo motors only half as often as results are available, the backup task will virtually always have the last say in what actually is applied to the wheels.

The calling order determines the arbitration priorities, and the periodicity of the updates ensures the subsumption of the earlier called routine by the later. If Backup were called before Search, the Backup commands would never be seen by the motors.

I think this makes for a very good example of subsumption. The robot can be tested with Backup removed from the chain, and it can be shown the robot only does forward or turn modes. The Backup can be added before the Search, and the same thing seen. Then Backup can be added after Search, and then backing actions occur only when there is a front edge sensor "trigger", and the Search mode becomes operational again as soon as the Backup behavior has completed.
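The calling-order arbitration described above could be sketched roughly like this, with plain variables standing in for the hardware PWM registers. All names and signal conventions are invented here, not the actual class code:

```c
/* Sketch of last-write-wins arbitration via calling order.
 * pwm_left/pwm_right stand in for hardware PWM registers;
 * 1 = forward, 0 = stop, -1 = reverse (conventions invented here). */
int pwm_left, pwm_right;

/* Lower priority: follow the target; the wheel opposite a lost
 * sensor stops, swinging the body back toward the target. */
void search_fsm(int left_ir, int right_ir) {
    pwm_left  = right_ir ? 1 : 0;   /* right sensor lost -> stop left */
    pwm_right = left_ir  ? 1 : 0;   /* left sensor lost -> stop right */
}

/* Higher priority: only writes when the edge trigger fires. */
void backup_fsm(int edge_seen) {
    if (edge_seen) {
        pwm_left  = -1;
        pwm_right = -1;             /* back straight up */
    }
}

/* Called ~100 times a second; the later call has the last say. */
void control_loop(int left_ir, int right_ir, int edge_seen) {
    search_fsm(left_ir, right_ir);  /* called first: subsumable    */
    backup_fsm(edge_seen);          /* called last: overwrites it  */
}
```

Swapping the two calls inverts the priority, which is exactly the classroom demonstration described: call Backup first and its commands are never seen by the motors.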

Do you disagree?

-- Randy M. Dumse


Reply to
RMDumse

Don't think so. I should have the FrameMaker 5.x (?) file somewhere :-) If you google REXIS, hopefully there are some links to it. That's my first commercial product, which is basically an implementation of the kernel.

Have kids, started a company, no time for school :-)

It used the SCI bus with a variant of the I2C protocol, IIRC, but it has been a while.

Yes, in some aspects, subsumption as implemented originally is just FSMs with a difference. It's been a while so I am not sure if I can articulate the "difference" that well :-) The abstraction though can be implemented differently, e.g. Behavior Language is at a higher level than the original subsumption language. I haven't read Jones and Flynn's book for a while either, but I believe their Interactive C implementation is also just some FSMs with the subsumption difference.

Ultimately, I think one needs to divert from the implementation (e.g. fancy FSMs) to the idea of a layered control architecture, with low-level stimuli-->responses at the bottom level and higher levels of behavior subsuming and inhibiting lower levels of control.

Reply to
Richard

Absolutely. It was a complete departure from tradition. Things had been done top down from abstracted goal-driven intelligence. Subsumption describes reflex driven bottom up behavior. Brooks demonstrated a system without central control.

Absolutely not. Multitasking is a sort of simulation of parallelism on a sequential processor. Subsumption mimics true massively parallel neural architectures. Subsumption is naturally parallel. If you can only afford one processor, then you can simulate multiprocessing with some multitasking.

Any serial machine can only give a simulation of true parallelism. It is not an illusion. It is what it is. It is not a trick.

Subsumption is a concept. You cannot do subsumption without the concept of subsumption.

Can you implement subsumption as FSAs? Sure, but they are very different things. If you can only afford one processor, you can simulate a lot of neurons. And you can use multitasking to simulate a bunch of virtual parallel finite state machines.

Subsumption was a departure from traditional AI theory. Related theories include Minsky's Society of Mind and Edelman's Neuronal Group Theory. The technique of multitasking does not have the same relationship to those theories as does subsumption architecture.

Yes. Subsumption applies to thoughts the same way it applies to reflexes in a creature with 6 legs and no eyes, antennae or brain. But a demonstration of subsumption on a machine which has only reflexes to move will move well but will not demonstrate thought. Now add sensory input and memory and, as Murray Gell-Mann would say, you may see emergent behavior out of a complex adaptive system. It may exhibit thoughts, but it will not be programmed top-down as goal-driven systems were decades before.

Reply to
fox

I think Randy was trolling for thoughts on the matter, rather than asking if a subsumption architecture could support thought. By definition, Brooks' robots were simple sensory-motor reactive machines, and didn't even possess memory, let alone the ability for any kind of high-level symbolic processing. His manifesto essentially indicated this was really "unnecessary" for robotics, but of course not many people agree with this who want to build something with any actual intelligence, as opposed to the ability for basic operational survival and performance of fairly simple tasks.

Also, although the distributed nature of SOM bears some resemblance to subsumption, I'm sure Minsky would contend it goes far beyond Brooks' ideas. OTOH, I do think the basic scheme for arbitration used in subsumption could support high-level processes that could take over control of the machine. No reason you can't have a vision or speech processing system, working in complete isolation from everything else [and which is the nature of individual "behaviors" in the subsumption scheme, after all] contending for control along with lower-level survival subsystems.

Reply to
dan michaels

Okay, I don't agree, but let me follow your argument here.

Well, behaviors can be really simple. But I think that impression is biased, because only simple cases are used as examples.

Amen.

So your argument to this point, if I understand it, is there isn't much to compute here, so why bother not to compute it?

This seems to me to support my side: if the other levels of behavior aren't used, all subsumed, then why calculate them? They are as good as dead when escape behavior is invoked.

Well, here, the point that if you've got the hardware to do it, why not let it run, makes sense. But the counter-argument is that if you were more efficient with the software, you wouldn't need the extra hardware.

Here I would summarize your argument as: subsumption is biologically inspired. I don't disagree. In fact I quite agree. Jon Connell's paper on coastal snails was one of the most supportive things I'd seen suggesting subsumption.


Isn't it Descartes who wrote a friend and apologized for writing a long letter, because he didn't have time to write a short one? The meaning being, it is more difficult to be succinct, and to do so requires a full writing of one's thoughts, careful reorganization, and then a rewriting in better/best form.

If we embrace the randomness of evolution, it is not surprising there is no mechanism to trim waste. On the other hand, we usually don't pop out an extra dozen arms, because there is a survival quotient of what is fitting and what is excess. So to the degree more DNA has negative effects on survival, you'd think it would be trimmed.

I thought Curt might have picked it up, but here goes, the only reason to instantiate subsumed behaviors, that is to give them a time slice, is if they are state-based, to allow them to track their state. In terms Curt and I have discussed, if the input signal has sequential information, the behavior will need to be "alive" to be able to track and extract that information.

Now, where I say this causes another problem for the theory is: 1) almost none of Brooks' AFSMs are actually state-based, but are simple servo responses, so there is little reason to instantiate them with calls when they are subsumed. But 2) the state-based example we do have from Jones, the escape behavior, would be a very poor candidate for this use. My meaning there is, if escape is subsumed, and yet instantiated by being called, it might see a bumper hit and generate a back-up sequence and a turn sequence, and then, becoming unsubsumed, it would output only a short push-ahead phase. So in the case of escape, it would actually work better if not run, not advancing through the first two ballistic states, rather than subsumed and dormant until unsubsumed, so that the ballistic part of its actions isn't passed over and control returned at an inappropriate time.

Usually escape sequences are at the top of the subsumption chain in the examples offered, so they never show a situation where they are subsumed. Yet I think the fact that they could be subsumed shows ballistic behaviors are not well suited to the whole subsumption approach.
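The failure mode described above can be made concrete with a toy ballistic escape FSM (states and tick counts invented here, not Jones' actual code). If this routine keeps getting time slices while its output is subsumed, it marches through its back-up and turn states with no effect on the motors, and only the tail end survives unsubsumption:

```c
/* Toy ballistic escape FSM. One call = one time slice.
 * Returned command codes: 0 idle, -1 back up, 2 turn, 1 push forward.
 * All states, tick counts, and codes are illustrative only. */
enum EscState { IDLE, BACKING, TURNING, PUSHING };

typedef struct { enum EscState state; int ticks; } Escape;

int escape_step(Escape *e, int bump) {
    switch (e->state) {
    case IDLE:
        if (bump) { e->state = BACKING; e->ticks = 10; }
        return 0;
    case BACKING:                        /* ballistic phase 1 */
        if (--e->ticks == 0) { e->state = TURNING; e->ticks = 5; }
        return -1;
    case TURNING:                        /* ballistic phase 2 */
        if (--e->ticks == 0) { e->state = PUSHING; e->ticks = 3; }
        return 2;
    case PUSHING:                        /* final phase */
        if (--e->ticks == 0) e->state = IDLE;
        return 1;
    }
    return 0;
}
```

The state advances on every call whether or not the returned command ever reaches the motors, which is precisely why calling a subsumed ballistic behavior can leave it ready to emit only its final, out-of-context action.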

I'll be travelling by Thursday, and may have limited opportunities to respond for a few days. Please pardon me, and carry on in my absence.

-- Randy M. Dumse


Reply to
RMDumse

Randy, Dan, and company,

This interesting and lively thread has gotten me a bit curious... I am wondering if there is anything resembling a "reference implementation" of a subsumption architecture out there on the web. You know... code, design artifacts, supporting theory, that kind of thing. If there were, it would certainly go a long way toward answering the question "What is Subsumption?"

I've been tempted to take a crack at writing one, but I've never really felt qualified to do so. A cursory Google search reveals a few vague discussions, but nothing particularly substantial. Is there anything worth checking out?

Gary

------------------------------------------------------------------------------- Computer Science is the Art of the Possible

Reply to
gwlucas

No, I'm just trying to interpret Brooks' philosophy. By definition, what he calls "behaviors" are very simple sensory-motor reflexes [in essence], and by definition they all operate in parallel simultaneously. That's HIS idea, not mine. I would immediately patch it, at least to include some semblance of short-term memory. Without the latter, the bot can't do very much of use, AFAIAC.

Ok, if we use your scheme, then we ALWAYS have to calculate ALL of those behaviors with HIGHER priority in the arbitration list than the behavior we're currently running. If we're in the middle of the list, then we still have to calculate all those in half the list in case any of them goes active. That's certainly doable, but makes the arbiter somewhat more complex. It's also not what Brooks does. For my part, I try to find the weak points in pure subsumption. They are many, imo.

Actually, I'm not too sure why he used so many small cpus, because I'm pretty sure, given bots like Attila and Genghis, he could have done it all with just one 8-bitter and it would have worked about the same.

Also, Joe Jones page 93: "... Subsumption was inspired by the development of brains over the course of evolution. In this process, the lower, more primitive functions of the brain are never lost; rather, higher functions are added to what is already there ...."

You talking about me here, or evolution of the brain? :)

Yes, were "I" the Grand Designer of all creatures great and small, I'd have lopped off all the 90% or more of junk DNA, so the little blighters wouldn't have to carry around all that useless baggage. Travel light is my motto.

BTW, there is a really interesting article about this in the latest Oct 2006 issue of Natural History mag ... not available online, found it in the public library ...

Broken Pieces of Yesterday's Life Traces of lifestyles abandoned millions of years ago are still decipherable in "fossil genes" retained in modern DNA. Story by SEAN B. CARROLL

Well, I see this all as a matter of timing. IE, once the ballistic movement is started, it will run to completion unless something like a bump occurs and takes higher priority. The subsumption scheme does specifically allow for real-time modifications of behavior, as environmental situations constantly change. In real nervous systems, this happens continually.

Actually, I think the bump or escape behaviors will always take priority over everything else, especially ballistic movements, so not a problem. You put the bump switches so they'll always take command, regardless. They're like the supreme court being the last guys to talk, and then we get a new president [ok, that's a little OT, but it's the same idea]. Somebody has to be on top of the heap, in order to avoid total chaos.

Reply to
dan michaels

Not sure about a reference implementation, per se. There are all of Rod Brooks' papers from the late 80s and early 90s, plus those written by his various grad students. I'd suggest Maja Mataric [USC] for someone who pushed the original scheme to higher levels of representation.

Also, see Joe Jones' two books, Mobile Robots and Robot Programming, plus Ron Arkins' book Behavior-Based Robotics. Joe's books show sample source code.

Reply to
dan michaels

BTW, one of the things I forgot to mention last time is responsivity. Brooks made a big issue about how slow previous [top-down] implementations were to respond in real-world situations. Eg, many robots were taking many minutes just to move across a room, and he wanted a scheme which would cut this time down to seconds. Therefore, one advantage to constantly computing ALL behaviors is that, just as soon as a higher-priority behavior relinquishes control, any of the others can immediately take over the machine, without any significant delay.

Reply to
dan michaels

Hi, well, other than the various books we've mentioned, and a few short web entries, there just isn't that much out there on subsumption. I can't remember having much of any questions on subsumption answered by doing web searches.

Problem is, I think, Brooks' description is very ethereal, spread thin enough that it would be difficult to fill such a site with detail. Jones has detail and examples in "Robot Programming: A Practical Guide to Behavior-Based Robotics." But Jones' approach is very narrow, so you find nothing about those rich intertwined subsumption and inhibition routes Brooks uses like dendrites. Jones only uses something akin to an "interrupt priority controller" to implement his subsumption. Not that his examples aren't excellent, and his point clear, but I feel there's much of the potential of subsumption left on the cutting floor. Arkin, however, seems to be his own man, and while he covers Brooks' version of subsumption sufficiently, he also covers the literature, with many, many other schemes of robot behavior programming clearly not of the same tree.

And then, if you were to make a site from what is available in books, you'd have nothing but controversy. You can probably get everyone to agree on the skeleton of what subsumption is from what little has been published, but, just as in this thread, the controversy will be that no one can agree on what subsumption isn't.

-- Randy M. Dumse


Reply to
RMDumse

Except I don't think it actually uses interrupts. Just continually computes each of the behaviors, and then goes down the priority list in a linear fashion, with the highest priority [meaning survival-critical] behaviors, like bump, at the top of the priority list, and roam/etc at the bottom. You'll notice in his book, Jones says the whole thing just described is computed 100s of times per second.

Reply to
dan michaels

Oh, yes. I don't know if he uses interrupts or not. It might be there's a periodic interrupt. But that was kind of aside from my intended purpose. I was just comparing the style of programming Jones uses, to a physical piece of special purpose hardware that does a similar thing, and drawing an analogy.

The analogy goes along with my opening premise: whether subsumption really has to be fully calculated, or whether there are shortcuts, similar to better state machine design. It occurred to me Jones' style was like a state machine that transitions on a higher priority, and then the analogy to the interrupt priority controller came to mind. I think it's a rich analogy, and deserves some more thought about what we're really doing with subsumption... but then I suppose you'd rather expect such a comment from me.

-- Randy M. Dumse


Reply to
RMDumse

Yeah, as I mentioned a time or three ago, it seems like it would certainly be possible to use early termination on behavior computation. IOW, terminate going down the priority list as soon as one of the behaviors fires. Implicit suppression of all the rest. The main impact I can see of doing this is that it would possibly delay responsiveness. OTOH, if you didn't have to compute every behavior, then you could go around the overall behavioral loop faster [at least on a single-processor system]. Possibly there is something going on here that I'm not seeing. I'm mainly basing these comments on reading the C code in Jones' book Mobile Robots.

Brooks' original idea was based on the fact that the brain has many areas working continually, and whose outputs are always computed but may be suppressed by other areas. The ultimate multi-processing. OTOH, if we're implementing something similar on just a single-cpu, then we might do things a bit differently. I'm not quite sure where Jones' ideas sit along this many-processor to single-processor continuum.

BTW, you might be interested in taking a look at Evolutionary Robotics by Nolfi and Floreano. I've been meaning to read it for some time, and picked it up this morning, and it has a couple of chapters on practical aspects of reactive-subsumption architectures, especially regarding limitations of same.

Also, the past couple of days, I started hacking a new mobile base that should be perfect for some experiments with subsumption-based software. It's 12" x 12", and fast enough to run around the house in just a few minutes. I'm gonna try implementing a pure subsumption engine on it, and use a wireless cam to see where it's going, plus an RF link back to the PC for reporting its internal state info. This way I can read its mind at a distance.

Gonna try to piece some things together about subsumption between Brooks, Arkin, Jones, and Nolfi/Floreano. The latter stuff [their book] is very interesting because they're using learning and GA extensions to simple subsumption. The next step, as I see it.

Reply to
dan michaels
