What is Subsumption?

Gordon McComb wrote:


Nope, it's just another way of explaining the same concept. I only know about this quote from a lecture at SMU on the philosophy of science. Theories can only be proven false, never true. Einstein's comment and the "last man standing" paradigm are one and the same.

My experience is the other way around. Hypotheses are set to a default of "true" until proven otherwise. And some are untestable. Like string theory. Then it becomes less science and more theology -- i.e., a plausible but untestable hypothesis.
So can all hypotheses be eliminated?
Sure, it happens all the time. Sometimes more than "one man" is left standing -- it's always possible that a natural phenomenon may have more than one cause, in which case another discriminant needs to be determined.
More often, however, ALL of the hypotheses are eliminated -- no man left standing -- and it's back to the drawing board. These in general are for me the most interesting problems to study.
A good example, as long as we're way off topic here, is the background noise of seismic signals from which geophysicists deduce earthquakes, explosions, and so on. That "noise" is highly correlated, not random.
So far we've chased dozens of hypotheses as to why that should be the case -- unrecognized phases of earthquakes, the tug of the moon, noise in the instruments, some unknown deep-earth events at the core/mantle boundary, meteorite activity, cosmic dark matter, and a dozen more. Each hypothesis we've been able to eliminate with carefully designed tests and experiments. No man has been left standing. Yet. That's why it's such a nifty problem to work on.

Well said.

Interesting idea. What form do you see this taking? Like a spec, or actual code that you could download and run on one of your robots?
best, dpa
dpa wrote:

http://en.wikipedia.org/wiki/Karl_Popper
"Popper is perhaps best known for repudiating the classical observationalist-inductivist account of scientific method by advancing empirical falsifiability as the criterion for distinguishing scientific theory from non-science"
dpa wrote:

But he may yet *be* wrong. All he can really claim with certitude is that "there *was* at least one nugget of gold in them hills". A lot of gold-bearing areas are mined out, or close enough that there's little point prospecting there.
Some ideas are initially promising and yield useful results, and yet prove to really go nowhere in the long term.

So the only way the prospector can say, "There's gold in them thar hills" is to find and return with at least one nugget for proof, and leave one nugget where no one else can find it.
Sticky, semantics are.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear
Hi Randy,
I've only been following bits and pieces of this particular thread but I feel compelled to jump in.

So why can't the leg positions be treated just like sensor inputs? Particular patterns of leg positions can trigger transition points. It's not specifically an output from a lower level machine; it's just a pattern of leg positions that happens to be a good place to start another gait from.
You've also only shown what I would call steady state gaits, which are gaits that cause the robot to move forward. There are also going to be transitional gaits, which might not be used very often but which would allow smooth transitions from one gait to another.
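Dave's suggestion can be sketched in a few lines of Python. This is purely an illustrative sketch of the idea (all names, leg counts, and values here are hypothetical, not from anyone's actual robot): leg positions are read like any other sensor, and one particular pattern of positions serves as the transition trigger.

```python
# Sketch of treating leg positions as sensor inputs: a particular
# pattern of positions (here, all legs near center) acts as the
# trigger point for switching gaits. All names/values illustrative.

def legs_centered(leg_positions, tolerance=0.05):
    """True when every leg is near its centered (Stand-like) position."""
    return all(abs(p) < tolerance for p in leg_positions)

def maybe_switch_gait(current_gait, requested_gait, leg_positions):
    # Only switch gaits at a pose that is a good starting point for
    # the next gait -- here, the all-legs-centered pattern.
    if requested_gait != current_gait and legs_centered(leg_positions):
        return requested_gait
    return current_gait

# Mid-stride: legs not centered, so no switch yet.
print(maybe_switch_gait("walk", "trot", [0.4, -0.4, 0.4, -0.4, 0.4, -0.4]))
# Legs pass through center: the switch is allowed.
print(maybe_switch_gait("walk", "trot", [0.0, 0.01, -0.01, 0.0, 0.02, 0.0]))
```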
-- Dave Hylands Vancouver, BC, Canada http://www.davehylands.com /
More on this anon...
snipped-for-privacy@gmail.com wrote:

Hi Dave,
I think that you have maybe hit on a key element here. Worth some contemplation.
The idea is in some ways very similar to the previously referenced use by Jones of leaky integrators and running averages to trigger and/or synchronize certain events.
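For readers following along, the leaky-integrator idea referenced here can be sketched in a few lines of Python. This is a generic illustration of the technique, not Jones' actual code; the class name, leak rate, and threshold are all made up for the example.

```python
# A minimal leaky integrator of the kind referenced above: each new
# sensor sample is added in, while the accumulated level continuously
# "leaks" away. A behavior triggers only when the level crosses a
# threshold, so a brief noise spike is ignored but sustained input
# fires the event. Names and constants here are illustrative.

class LeakyIntegrator:
    def __init__(self, leak_rate=0.9, threshold=5.0):
        self.level = 0.0
        self.leak_rate = leak_rate      # fraction retained each tick
        self.threshold = threshold      # trigger level

    def update(self, sample):
        """Feed one sensor sample; return True if the trigger fires."""
        self.level = self.level * self.leak_rate + sample
        return self.level >= self.threshold

integ = LeakyIntegrator()
print(integ.update(3.0))                  # a single spike: no trigger
print(integ.update(3.0), integ.update(3.0))  # sustained input: trigger fires
```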
Good observation. I need to mull on this a bit.
best, dpa
dpa wrote:

As I replied to Dave, I don't think looking at anything internal to the robot is anywhere demonstrated by Brooks in his examples; on the contrary, the opposite is demonstrated in force by his explicit example of not doing so even when it was the easiest way to get the information.
I don't think Brooks thought reading the output of the robot, as opposed to reading the input to the robot in as purely reactive a way as possible, was desirable. The world is its own best model, and he does not seem to consider the output of the robot to be reading the world, but rather the robot's representation of the world. His efforts at eradicating representation were exceedingly thorough.
Now, I have long thought your buffer of past readings is a bastardization of Brooks' Subsumption, for one reason because you've found a way to import state information he never employed, but I hesitate. I don't think I've ever said that in "print" before, and I'll tell you why. He makes one comment that there were actually slightly more state machines in Genghis than he normally documents. That was because some inputs were noisy, so he had a machine to smooth the data. But he qualifies his comment by saying he wouldn't have needed them if their inputs were better quality. Maybe this is where Jones finds a basis for the leaky integrator. Of course, Jones worked with Brooks for 15+ years.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear
snipped-for-privacy@gmail.com wrote:

Sure, reasonable question. Why can't the leg positions be treated just as sensor inputs? The answer has to do with how careful I was in my reply to dpa to make clear I was answering in compliance with Brooks' examples. You have to read both of his books completely, I think, to pick up the clues about what Brooks does and doesn't think is appropriate.
In this instance, I will refer to a feature of the can-finding-and-pickup robot, Herbert, iirc. I think I mentioned this issue earlier in this thread. Anyway, the robot did not use any internal representations to pass the state of the can-finding behavior to the arm-lift behavior. Brooks made a distinct point of explaining that when the sensors (encoder based) stopped indicating motion, the lift-arm behavior assumed the can-finding behavior had found a can. So I take from this example that he did not believe in proprioceptive sensing. To wit, he could have looked at the motor control output to know if the robot had stopped. So in the same way, looking at the position of the legs, which were internally set to some position, would be cheating. After all, the world is its own best model; therefore you sense the world, and not some model of what the world should be inside the robot.
But even if he had not indicated this reluctance in his examples, there is still another significant problem. Let's say, well, our robot is in the world and part of the world, and we can add sensors to look at our legs. (He didn't have position feedback in Genghis, btw, only a sort of force-sensing feedback based on how much current the servo was taking to move.) But if we did have sensors, say, on the forward/backward position of each leg, consider this. Can you instantaneously read the forward/backward position of the legs and determine the state of the active machine? The answer is no. There is a place in each of these gaits where, if you look at that exact moment, the legs will be in exactly the same position as Stand, with all legs centered. It might be a great moment to switch gaits, but you cannot determine the internal state of the FSM by looking at the swing outputs. You need more information to know uniquely which machine has control and then what state that machine is in. Now, given you can read both forward/backward position and up/down position, you might be able to infer the machine and state in control, if you know how every single gait machine works. But what does that mean for evolution? Can you add another subsumptive layer above the existing gaits, and have the lower gaits know how to recognize the newly evolved gait? No.
Again, you have to look at the totality of what Brooks' Subsumption model is to know this would be a problem.
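The ambiguity argument here can be made concrete with a toy sketch. The gait tables below are entirely hypothetical (not Genghis data): two different gait machines both pass through the same all-legs-centered snapshot, so an instantaneous reading of leg positions cannot uniquely identify which machine, or which state, is active.

```python
# Two toy gait machines as sequences of forward/backward positions for
# 6 legs. Both pass through the all-centered "Stand" pose, so a single
# snapshot of leg positions cannot tell you which machine is in control
# or what state it is in. Gait tables are illustrative only.

STAND = (0.0,) * 6

TRIPOD_GAIT = [
    ( 0.5, -0.5,  0.5, -0.5,  0.5, -0.5),
    STAND,                                  # legs swing through center
    (-0.5,  0.5, -0.5,  0.5, -0.5,  0.5),
    STAND,
]

WAVE_GAIT = [
    ( 0.5,  0.3,  0.1, -0.1, -0.3, -0.5),
    STAND,                                  # also passes through center
    (-0.5, -0.3, -0.1,  0.1,  0.3,  0.5),
    STAND,
]

def machines_matching(snapshot):
    """All (gait, state-index) pairs consistent with one leg snapshot."""
    matches = []
    for name, gait in [("tripod", TRIPOD_GAIT), ("wave", WAVE_GAIT)]:
        for i, pose in enumerate(gait):
            if pose == snapshot:
                matches.append((name, i))
    return matches

# The centered snapshot is consistent with four different (machine,
# state) pairs -- an observer cannot uniquely recover the active FSM.
print(machines_matching(STAND))
```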

Yes, there might be transitional gaits, but again, I'm following Brooks. I found a case where he had a state machine with real state information. In the case of Genghis, the core routine (see the quote) sent out leg-up messages in a particular pattern that created the gait. My theory was that if I found such a state-based machine, it would not be subsumed by another machine with state information. There in his work I found the statement about substituting machines rather than subsuming machines. So in this test case, my theory made a prediction, I went looking for the evidence, and found what I suspected.
If you don't know its state, you can't fit a transition to it. Consequently, I think the only way to make a transition is to first make a smooth transition out of the machine with state information and return to a known static position without changing state information (stand), then transition into the new gait. I might predict (not yet convinced though) that if a cockroach runs on a Subsumptive model, it couldn't go from walk to run without stopping first. Conversely, if we see a cockroach go from a walk to a run without coming to a stand first, we can conclusively say the cockroach is running in a purely Brooksian Subsumption model.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear
I need to correct the above post.
"Conversely if we see a cockroach go from a walk to a run without coming to a stand first, we can conclusively say, the cockroach is NOT running in a purely Brooksian Subsumption model."
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear
RMDumse wrote:
..........

................
Looking through Jones' recent book, he talks about external sensors almost exclusively, and the only reference to an "internal" sensor I found was a brief note on a motor encoder. Also, he implied that his low-level motor routines used encoder feedback to maintain constant motor speed, but never seemed to make much of it in the book.
In this sense, the architecture of subsumption greatly deviates from that of real animals and also from my idea [and I believe your idea] of how robots should be built. Real animals, and real robots, need to "constantly" sense their own internal states in order to survive best.
In addition, if you want something like a roomba to find its way back to the battery charger, it helps if it can sense it needs charging. Even Grey Walter's simple bots in the 1940s, with just a single flip-flop for brains, had this feature already built-in. There is ALWAYS a problem with too strict adherence to dogmatic ideas, in any endeavor. Flexibility and adaptation rule the evolution of living creatures.
..............

> couldn't go from walk to run without stopping first. Conversely if we see a

I have to agree with dpa on one account ---> that real insects are probably a lot more complex than the simple subsumption idea. For one thing, they have 1000s of internal sensors.
For my money, we have to view subsumption as a bottom line architecture. Largely, something to build upon. Not the final anything. And which is exactly what I talked about in the thread on "Neo-subsumption" a couple of years ago.
dan michaels wrote:

Thanks for the support on the lack-of-internal-sensors observation. I'm in chapter 7 in Arkin now, and he makes the distinction between proprioception and exteroception (pg 241). Brooks, in order to focus on the world, seems to exclusively use exteroception, which is one of the points I think he has gone long on for principle's sake, and that has limited the utility of Subsumption in general. This idea belongs to "Reactive Based Robots" as opposed to "Behavior Based Robots" or "Subsumption". Nonetheless, I can see where opponents of Brooks could have used internal sensing as a point to pick, saying he was reading the robot's own representations of the world, rather than the world itself.

Yes, I remember. And that very much takes me back and puts me in the frame of mind of the opening post.
In that vein, we approach 100 posts here, and no one has addressed the OP question I am most interested in. Can the same thing be written in FSA without the need for the concepts of subsumption? My answer is yes. In fact, that's why the previous thread on "What is a behavior" interested me. It looked like putting the trigger into the behavior level of subsumption was a way of removing its natural place in a finite state machine as a condition of transition from states that did not have outputs to states that did. So I'm wondering if Brooks didn't have a tremendous insight in using the FSM approach, and in bastardizing it, threw the baby out with the bath water; or, put another way, threw the power of minimized historic information (state information) out of the FSM by putting an unnecessary layer of control (Subsumption) external to it. Perhaps all the power and utility of Subsumption comes from the FSMs and not the other way around.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:
..........

I disagree with this, for several reasons.
First, as mentioned several times, for my part, I see subsumption - in the guise of no internal representations, or even internal sensor loops - as being just the "bottom level" in a hierarchy of control. As noted, even insects rely on many internal feedback loops, and more complex organisms clearly use internal representations as a basis for making comparisons with real-time incoming sensory data. AFAIAC, you cannot get away from this in building more intelligent machines.
Secondly, as I see it, Brooks capitalized on the FSM idea, in that [at least originally] the augmented FSM was the explicit prototype of all behaviors. I'm not so sure he actually threw out FSMs so much as made them simpler and simpler, a la what Jones talks about.
Thirdly, the subsumption [meaning priority-selection] scheme is really where the power lies, not in the FSMs. The priority scheme selects which FSMs run the machine at any point in time. You will always need something like this, especially as you keep adding more and more levels of complexity to the machine. The FSMs are the worker bees, and the subsumption scheme is controller/overseer bees. Actually, bee colonies don't seem to have overseers, but the brain sure seems to.
However, I don't see that this means you actually have to run every behavior in parallel at "all" times. That's one place where I differ from the original rule, as we've discussed before.
I think you have some inertial reason to keep trying to make FSMs fit all matters, but I don't think this is an especially fruitful way to go. Theoretically, maybe you can reduce everything to an FSM, just like the idea that all computation can be reduced to a universal turing machine, but there are better ways to implement certain functions, eg, neural nets, signal processing algorithms, on and on.
dan michaels wrote:

Well, yes and no. First, I think FSMs are fundamental. I learned about the concept of state first from physics, with the Bohr model of the hydrogen atom explaining the spectra observed. The electrons were in the ground state and could be elevated to an excited state, taking up energy only in fixed quanta, etc. In fact, our very concept of time is based on change of state. If there is nothing with regular changes of state, we can't even tell if time exists. The further I look into physics, the more I learn, the more convinced I am that the concept of state is primal to our existence. For instance, through E=mc^2 we know energy and mass are equivalent. I suspect mass is an attribute of energy in a state. No state, no mass. At the limits of gravity, time runs slower. When energy falls into a black hole, it changes "state" and contributes to the mass of the black hole. In the holographic principle, existence comes from information -- "It from Bit" it is called. Yet I have hypothesized that you cannot have a "bit" of information without first having two states. So a state is half a bit, and therefore more fundamental than information itself.
So yes, I tend to think of things in terms of state. The above evidence provides the inertia you suggest.
Likewise, I often see what can be reduced to FSM's given the above inspiration. So like looking at the Turing machine for equivalence, I think, can this be reduced to the thing that is more fundamental than a Turing machine?
But no, I don't do it always. I only do it for the real-time portion of any program, because of the close association of state with time. I recognize there are other kinds of computing models that work better for algorithmic solutions. In fact, I see the linear code portions in the condition testing, and the action portions, to be of this kind. But I expect the control associated with time always to be best represented by state machines.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
dpa wrote:

being, somewhat inconsistent. You try to have it both ways, but you cannot.
Again and again you've asked what subsumption "cannot do", and when people come up with examples you discount them, or just seem to change the subject back to how cool insects are, and what they "can" do. There are 2 different issues here, at least. And at least above you write ...
============
For the record, I do not believe a modern hominid brain can be modeled or explained by subsumption.
============
So, this is the sort of thing Curt and I have been saying all along, and giving examples for. Then, you flip again back to the following ...

This is again a different question from asking, as you have many times, what can subsumption "not" do? And I think Jones has answered that fairly explicitly in his book, and Brooks has done it indirectly, in the sense that he no longer seems to "actively" reject representation like he did 20 years ago in his manifestos. Today, he asks "what is missing?" instead, but he can't bring himself to say representational mechanisms.
Brooks specifically rejected the idea of representation, but it seems clear to everyone else in the AI community that use of some sort of representational mechanism is necessary to replicate human-level perception and symbolic reasoning. I keep asking, how can you identify and predict where a car 100 yards away will travel to in the next few seconds, or how can you take a calculus test, without using symbolic representational formulations and internal maps of some sort?
The fact that I cannot prove this to your satisfaction, because I haven't actually tried it, is because I don't have 50 years and 500 graduate students to do the work.
So, I'll say again, to me, subsumption makes a nice "base platform" to build upon, and add the sorts of symbolic and representational modules that can do the things non-representational systems do not do. I cannot prove this, but I'm not about to spend the next 20 years of my life trying to do what Brooks could not do, using his ideas.

NO ONE ever said this, that I recall. This is not the issue that's been being discussed.

Harry Rosroth wrote:

This is the main problem that Randy and I were discussing. Although the built-in timeout feature will cause an "individual" state to be vacated after a short time, many of the behavior examples shown in Jones' books were built using FSMs with a "number" of sequential states, which were cycled through on a timed basis.
Therefore, if such a behavior had been begun, and shortly thereafter was interrupted by a higher priority behavior, the interrupting behavior would execute and complete, but then the machine would return again into the "middle" of the sequence of the interrupted behavior. However, this would probably no longer be appropriate.
Eg, BEH1 = the machine is bending over to pick up a block or something. BEH2 interrupts and causes the machine to turn 90-degrees. BEH2 finishes, and BEH1 resumes from where it left off, but this is no longer relevant to the new machine orientation, as it is no longer pointed at the block.
You can partly get out of this dilemma by using a very fine partitioning of behavior actions, such that none involves an actual "sequence of operations", but rather each is just a simple movement. But then the machine is very whimsical.
Randy and I were discussing that it seems the correct way to get out of the problem described is to have BEH1 reset back to its state-0 when interrupted, so it will not resume later in the middle of the sequence. However, this feature of resetting to state-0 didn't seem to appear anywhere in Jones' example discussions in his books.
IOW, if you actually want the machine to perform something more sophisticated than finely-parititioned behaviors, then you need something else superimposed on top of the basic subsumption scheme.
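The reset-to-state-0 fix described above can be sketched as follows. This is a hypothetical illustration of the idea under discussion (behavior names and step lists are made up), not code from Jones' books:

```python
# Sketch of the discussed fix: a multi-state sequential behavior that
# resets to state 0 whenever a higher-priority behavior interrupts it,
# instead of resuming mid-sequence. All names are illustrative.

class SequentialBehavior:
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps        # ordered list of action labels
        self.state = 0            # current position in the sequence

    def step(self):
        """Run one step; return its action, or None when finished."""
        if self.state >= len(self.steps):
            return None
        action = self.steps[self.state]
        self.state += 1
        return action

    def interrupt(self):
        # The key fix: on preemption, go back to state 0 rather than
        # resuming later from the middle of a now-irrelevant sequence.
        self.state = 0

beh1 = SequentialBehavior("pick_up_block",
                          ["lean_forward", "open_gripper", "grasp", "lift"])
print(beh1.step())        # lean_forward
print(beh1.step())        # open_gripper
beh1.interrupt()          # BEH2 (e.g. turn 90 degrees) takes over
print(beh1.step())        # lean_forward -- restarted, not resumed mid-grasp
```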
RMDumse wrote:

No, I'm just trying to interpret Brooks' philosophy. By definition, what he calls "behaviors" are very simple sensory-motor reflexes [in essence], and by definition they all operate in parallel simultaneously. That's HIS idea, not mine. I would immediately patch it, at least to include some semblance of short-term memory. Without the latter, the bot can't do very much of use, AFAIAC.

Ok, if we use your scheme, then we ALWAYS have to calculate ALL of those behaviors with HIGHER priority in the arbitration list than the behavior we're currently running. If we're in the middle of the list, then we still have to calculate all those in half the list in case any of them goes active. That's certainly doable, but makes the arbiter somewhat more complex. It's also not what Brooks does. For my part, I try to find the weak points in pure subsumption. They are many, imo.
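The arbitration scheme being described can be sketched in miniature. This is a generic fixed-priority arbiter illustrating the discussion (the behavior names and sensor keys are hypothetical), not anyone's actual robot code:

```python
# A minimal fixed-priority subsumption-style arbiter: every behavior's
# trigger is evaluated on every tick, and the highest-priority one
# whose trigger is active wins the actuators. Names are illustrative.

def arbitrate(behaviors, sensors):
    """behaviors: list of (trigger, action) pairs, highest priority first.
    Returns the action of the first behavior whose trigger is active."""
    for trigger, action in behaviors:
        if trigger(sensors):
            return action
    return "stop"   # default when nothing fires

behaviors = [
    (lambda s: s["bumper"],      "escape"),   # highest priority
    (lambda s: s["ir_obstacle"], "avoid"),
    (lambda s: True,             "cruise"),   # default, always ready
]

print(arbitrate(behaviors, {"bumper": False, "ir_obstacle": False}))  # cruise
print(arbitrate(behaviors, {"bumper": True,  "ir_obstacle": True}))   # escape
```

Note that all the triggers are computed on each call, which is the cost Dan points out: behaviors above the current one must be evaluated continuously so any of them can seize control the moment it goes active.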

Actually, I'm not too sure why he used so many small cpus, because I'm pretty sure, given bots like Attila and Genghis, he could have done it all with just one 8-bitter and it would have worked about the same.

Also, Joe Jones page 93: "... Subsumption was inspired by the development of brains over the course of evolution. In this process, the lower, more primitive functions of the brain are never lost; rather, higher functions are added to what is already there ...."

You talking about me here, or evolution of the brain? :)

Yes, were "I" the Grand Designer of all creatures great and small, I'd have lopped off all the 90% or more of junk DNA, so the little blighters wouldn't have to carry around all that useless baggage. Travel light is my motto.
BTW, there is a really interesting article about this in the latest Oct 2006 issue of Natural History mag ... not available online, found it in the public library ...
http://naturalhistorymag.com /
Broken Pieces of Yesterday's Life Traces of lifestyles abandoned millions of years ago are still decipherable in "fossil genes" retained in modern DNA. Story by SEAN B. CARROLL

Well, I see this all as a matter of timing. IE, once the ballistic movement is started, it will run to completion unless something like a bump occurs and takes higher priority. The subsumption scheme does specifically allow for real-time modifications of behavior, as environmental situations constantly change. In real nervous systems, this happens continually.

Actually, I think the bump or escape behaviors will always take priority over everything else, especially ballistic movements, so not a problem. You put the bump switches so they'll always take command, regardless. They're like the supreme court being the last guys to talk, and then we get a new president [ok, that's a little OT, but it's the same idea]. Somebody has to be on top of the heap, in order to avoid total chaos.
BTW, one of the things I forgot to mention last time is responsivity. Brooks made a big issue about how slow previous [top-down] implementations were to respond in real-world situations. Eg, many robots were taking many minutes just to move across a room, and he wanted a scheme which would cut this time down to seconds. Therefore, one advantage to constantly computing ALL behaviors is that, just as soon as a higher-priority behavior relinquishes control, any of the others can immediately take over the machine, without any significant delay.
RMDumse wrote:

Absolutely. It was a complete departure from tradition. Things had been done top down from abstracted goal-driven intelligence. Subsumption describes reflex driven bottom up behavior. Brooks demonstrated a system without central control.

Absolutely not. Multitasking is a sort of simulation of parallelism on a sequential processor. Subsumption mimics true massively parallel neural architectures. Subsumption is naturally parallel. If you can only afford one processor, then you can simulate multiprocessing with some multitasking.

Any serial machine can only give a simulation of true parallelism. It is not an illusion. It is what it is. It is not a trick.

Subsumption is a concept. You cannot do subsumption without the concept of subsumption.
Can you implement subsumption as FSA? Sure, but they are very different things. If you can only afford one processor you can simulate a lot of neurons. And you can use multitasking to simulate a bunch of virtual parallel finite state machines.
Subsumption was a departure from traditional AI theory. Related theories include Minsky's Society of Mind and Edelman's Neuronal Group Theory. The technique of multitasking does not have the same relationship to those theories as does subsumption architecture.

Yes. Subsumption applies to thoughts the same way it applies to reflexes in a creature with 6 legs and no eyes, antennae, or brain. But a demonstration of subsumption on a machine which has only reflexes to move will move well but will not demonstrate thought. Now add sensory input and memory and, as Murray Gell-Mann would say, you may see emergent behavior out of a complex adaptive system. It may exhibit thoughts, but it will not be programmed top down as goal-driven systems were decades before.
snipped-for-privacy@ultratechnology.com wrote:

I think Randy was trolling for thoughts on the matter, rather than asking if a subsumption architecture could support thought. By definition, Brooks' robots were simple sensory-motor reactive machines, and didn't even possess memory, let alone ability for any kind of high-level symbolic processing. His manifesto essentially indicated this was really "unnecessary" for robotics, but of course not many people agree with this, who want to build something with any actual intelligence, as opposed to ability for basic operational survival, and performance of fairly simple tasks.
Also, although the distributed nature of SOM bears some resemblance to subsumption, I'm sure Minsky would contend it goes far beyond Brooks' ideas. OTOH, I do think the basic scheme for arbitration used in subsumption could support high-level processes that could take over control of the machine. No reason you can't have a vision or speech processing system, working in complete isolation from everything else [and which is the nature of individual "behaviors" in the subsumption scheme, after all] contending for control along with lower-level survival subsystems.
snipped-for-privacy@ultratechnology.com wrote:

Fair enough.
However, as you know, there have been bottom up approaches to many languages and disciplines. Feynman's report on the Challenger disaster (1986), shortly before his early death (1988), explained how airplanes were built "bottom up". Of course, the "bottom up" nature of Charles Moore's Forth has been around since the 1970s.

I don't think there is any real requirement one way or another about Subsumption to be parallel, simulated or not. You can do Subsumption on a single processor, or several processors, by replacing results in registers, by messaging across networks, and a host of other ways I probably haven't seen.

I disagree here. You can do subsuption without the concept of subsumption.
For instance, what is the teacher doing when she asks the class a question, and continues to select students until she gets the right answer? Is that Subsumption? Well, yes, perhaps. Inappropriate levels of results are replaced until one at the right level is found, then progress is made. If it isn't Subsumption, it is something very close, but would you normally see it as Subsumption? Or be so bold as to say you can't teach without Subsumption concepts? Maybe not.
Or is a repairman using subsumption? He sets up tests to find an appropriate response, then repairs the item indicated by following the elimination process. Is that Subsumption? Well, yes, in a stretched sort of way. But then let me ask, how stretched is Subsumption, if everyone you ask gives you a different answer on how best they implemented it?
If you normally have a rail set to go straight through, but when an oncoming train is detected, a side track is selected for one of the two trains, is that Subsumption? Yes, it is a very limited application, but it does seem to fit the Subsumption model, so maybe it is. The railroads got by all this time without the idea of Subsumption, and yet they did it.
Electromechanical voltage regulators date back decades. With a series of resistors and relays, operating at varying levels, a sort of PWM, the voltage output is "subsumed" so it remains around the desired voltage. Surely this application of Subsumption preceded the very concept of Subsumption.
Okay, these examples are all a bit stretched. And I could probably find better examples if I put some time and thought into it. All I want to suggest is perhaps Subsumption really isn't that new a concept. Or put another way, it is a new concept, but it has many precursors.

I don't think so. Basically in Subsumption we start with a trigger. When the trigger is active, an output is generated. When the trigger is not active, the output is not applied.
This sounds exactly like a class of two-state state machines.
The transition into and out of the active state is a condition, just like a thermostat depends on too_hot? and too_cold? to go from passive to heating.
Subsumption includes a requirement that the condition also fold in other conditions in the decision, subsuming the output, or inhibiting the reading of inputs, based on the state of other machines.
Therefore I suggest Subsumption is a special subset of Finite State Machines, with a priority scheme built into their outputs, whose activation depends on all higher-priority machines being in the inactive state. This makes Subsumption a subset of a subset of a subset ... of Finite State Machines.
Subsumption can be made with FSMs, but an FSM can be much more than Subsumption. The FSM is the larger concept.
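The claim above can be put in miniature as code. This is a hypothetical sketch of the argument (the thermostat-style triggers and machine names are invented for illustration): each behavior is a two-state machine whose transition condition folds in both its own trigger and the requirement that all higher-priority machines be passive.

```python
# Each subsumption behavior as a two-state (passive/active) machine.
# Its transition condition combines its own trigger with the priority
# requirement: no higher-priority machine may be active. Illustrative.

class TwoStateBehavior:
    def __init__(self, name, trigger):
        self.name = name
        self.trigger = trigger
        self.active = False       # the machine's single bit of state

    def update(self, sensors, higher_active):
        # Own trigger fires AND no higher-priority machine is active --
        # the "subsumed" part of subsumption, built into the condition.
        self.active = self.trigger(sensors) and not higher_active
        return self.active

# Thermostat-style triggers, highest priority first.
machines = [
    TwoStateBehavior("escape",    lambda s: s["too_hot"]),
    TwoStateBehavior("seek_heat", lambda s: s["too_cold"]),
]

def run_tick(sensors):
    higher_active = False
    winner = None
    for m in machines:
        if m.update(sensors, higher_active):
            higher_active = True
            if winner is None:
                winner = m.name
    return winner

print(run_tick({"too_hot": False, "too_cold": True}))   # seek_heat
print(run_tick({"too_hot": True,  "too_cold": True}))   # escape
```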
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.

Polytechforum.com is a website by engineers for engineers. It is not affiliated with any of manufacturers or vendors discussed here. All logos and trade names are the property of their respective owners.