What is Subsumption?

Gordon McComb wrote:

Maybe you missed the implied context of my statement that neurons are simple when compared to complex adaptive systems made up of large numbers of interconnected neurons and which exhibit emergent behavior that individual neurons do not have. The context was that neurons are by definition simpler than the complex adaptive systems built with large numbers of neurons.
I think you have taken the statement out of context to make it into something else, an absolute statement that isn't true. What I had attempted to make was simply a statement of relative complexity, not absolute.
Sure, if you study neuroscience you know neurons are fabulously complex, especially the mechanism that promotes dendrite interconnection. But again, the context was that a whole is more complex than its parts. 200 billion interconnected neurons are more complex than a single neuron, which in comparison is quite simple.

Yes.
Yes. The theory says it is all subsumption except for what Edelman called the primary repertoire programmed into the DNA in his book Neural Darwinism: The Theory of Neuronal Group Selection.
I think Brooks's point is that subsumption is the mechanism all the way to the top, where humans use the name consciousness to describe the emergent behavior of the perception of self. The classic AI approach was to have a programmer encode knowledge directly and build smarts from the top down. Subsumption says that you build in some smart reflexes at the bottom and let things organize their way up to the top with subsumption.
I think it has also been demonstrated that humans have a lot of genes that cause the neurons to be predisposed to organize in certain ways. An example would be the superset of language genes that humans have when compared to, say, gorillas. But everything above the primary repertoire is done through self-organization as described in the theories of neuronal group selection and subsumption.
It is a lovely theory, demonstrated on systems ranging from little crawling robots with nothing more than subsumptive legs, to virtual animals with hearing and vision and memory and the ability to interact with other animals and humans and learn, to large-scale AI assisting humans in solving difficult technical problems. It doesn't stop at virtual insect legs.
Subsumption is not just multitasking! Multitasking is just the poor man's way of simulating real parallelism. In this context multitasking is a technique for efficiently implementing more than one neuron on a sequential processor. Subsumption is more than just multitasking.
If someone looks at subsumption in the legs of a crawler and thinks that's all there is to subsumption, then they miss that thinking is also subsumption. And thinking is NOT multitasking on a sequential processor.
Best Wishes
snipped-for-privacy@ultratechnology.com wrote:

I would say Brooks has pretty much given up on trying to implement this idea by now, i.e., high-level and/or human-like symbolic operations using the strictly subsumption approach as originally described. To wit, after 20 years of research, he doesn't say the sort of things today that he was saying in the mid-80s. Nowadays, Brooks talks about "what's missing". OTOH, we do have Edelman's work, which, for my 2 bits, offers the best ideas around.

OK, if you lump both subsumption and [especially] NGS into the picture. But, of course, this is somewhat more than just Brooksianism.
dan michaels wrote:

How are you using NGS there? Next Generation Systems? Sorry, not familiar.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
dan michaels wrote:

I do, but it is not just my personal idea that the theories of subsumption and Neuronal Group Selection overlap. It was my opinion before I found that Wikipedia explained how the theories overlapped.
Best wishes.
snipped-for-privacy@ultratechnology.com wrote:

The little bit of subsumption I've seen and understand doesn't agree with this at all.
I'm a big believer in the power of learning systems that are able to self-organize, but to me, everything I've seen about subsumption is that there is no learning at all - it's all hard-coded behavior. What "emerges" is not learned behavior. The system does only what it was programmed to do. What "emerges" is _unexpected_ behavior - i.e. behavior the programmer wasn't able to predict - but which he nonetheless did hard-code into the design, whether he realized it or not.
To create truly intelligent machines, I argue constantly (in comp.ai.philosophy) that we must build a learning system. The types of networks I like to play with really do mimic a lot of the ideas of subsumption. But the difference is that my networks have no pre-coded "intent"; they must learn everything on their own. All the subsumption designs I've seen (which is very little, BTW) are based on the idea that the programmer would hard-code all the logic into the system instead of letting it learn anything on its own.
Though the type of emergent behavior we see coming from hard coded subsumption designs is interesting, it's a totally different class of emergence than what I talk about coming from a learning system.
I agree with what you wrote above (we need to build the lowest level and let it self organize all the way up), but I've never seen subsumption described like that.
BTW, the concept "emergence" really isn't very useful. It just means the machine did something we were too stupid to predict ahead of time. Because these things are all deterministic at the core (if not, the emergent behavior would not be repeatable and would be nothing but chaotic behavior), it was predictable ahead of time, so the real magic of "emergence" is created not by the machine, but by the limits of our own intelligence. So "emergence" just means "too complex for a stupid human to understand". To me, this is not some important new field to study (aka human stupidity). It's a field we need to work to eliminate.
Or in other words, anything I see as "emergent behavior" is just behavior we need to study more until it stops being "emergent behavior".
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
snipped-for-privacy@ultratechnology.com wrote:

Where did you get the idea learning is connected with subsumption? In all the Brooks books I've read, I only remember one single subsection on learning: Cambrian Intelligence, Ch. 6.5, pg. 180, with the legs learning gaits you describe on page 181. That single paragraph describes learning "to walk with subsumption architecture along with the behavior activation schemas", implying to me that the behavior activation schemas were the learning part.
That subsection ends on the following paragraph: "Learning in subsumption is in its early stages but it has been demonstrated in a number of different critical modes of development."
I, like you, had heard of emergence generating gaits without human intervention or specific programming. (I heard this from Larry Forsley, by the way.) But in my searching, this was the only section I found on it in Brooks. I have seen, by looking ahead in Arkin, that he addresses this effort in his Ch. 8 on Learning, but I'm still in Ch. 7, so I cannot comment on it other than its presence.
In fact, because of this rumor of emergence, I paid close attention to Genghis and the state machine structure. I would argue (and have in previous threads) there's no emergence in Genghis. His tripod gait is explicitly programmed internally. The only sense in which it emerges is that it is hidden in a factoring of the cycle into multiple machines, rather than being obviously programmed in a single place. But it is very explicitly programmed, nonetheless.
I think the biggest evidence of this is something I mentioned elsewhere in this thread: It can only do one gait. If it were truly emergent, why did it pick the fastest, least stable gait first? And why didn't it emerge the ability to change gaits?

Okay, granted.

I'm willing to let the examples go at that.

Not this. This sounds pretty outrageous to me. If I'm human, I'm human. What I accept or not does not change my condition of being human. There's no clear proof I know of that says subsumption is what my neurons are doing.

Well, here we disagree.
Subsumption is definitely not about self organization.
Subsumption is definitely not about self emergent behavior. (Although there is lip service for it.)
Subsumption is definitely not about the lack of pre-programmed knowledge put in by a creator.
But yes, Subsumption is about the lack of a central authority. This point was central to Brooks's writings.

I don't think so. I had Brooks's Cambrian Intelligence, Ch. 1, Figure 4 in mind when I wrote that. Subsumption certainly seems to have neuron roots. But the described interactions are from Brooks's own work there.

Haven't read Murray Gell-Mann so can't say.

We seem to have a very different view on subsumption because of your emphasis on learning, which I do not see supported with any vigor in Brooks.

Thank you. Yes, I would have to say, from a financial view of results, my FSM language and virtual parallel machines are not well understood or appreciated in the robotics community. Don't get me wrong, some enlightened people, mostly professionals and university professors, have picked it up and done amazing projects with it. (We even sold one to Brooks on his credit card.) But they are the minority. The average hobbyist doesn't reach as high. Again, not to paint all with the same brush. We really appreciate those who do get it, and use it to great advantage over the less adventurous.

Think I heard Brooks is retiring.
Too bad Joe Jones doesn't appear. He sometimes does post here. Joe authored and co-authored a couple of books and worked with Brooks for over 15 years. I just got email that he's leaving iRobot for something unspecified.
Or Arkin, who has chronicled so much work on BBRs.
So in their absence, what we have to do is read what they've written, and make judgments on the references they've left us.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

robots did with learning to coordinate legs with primitive reflexes.

I would argue that about all the emergence you should ever expect to see from legs is locomotion. This is very different from the idea that the brain directs all this activity. If you are expecting legs to reason that "I think therefore I am!" then you are expecting too much from Brooks's demonstration of legs.
If you want to look at more than gaits, such as vision, language, and understanding of problem solving, math, physics, engineering and materials science, you have to look beyond leg demonstrations to larger-scale demonstrations. Those things take more neurons than coordinating a few simple legs.
When I got exposure to follow-up work on larger-scale demonstrations of Neuronal Group Selection I felt that this was further confirmation of what Brooks had demonstrated with legs. Even Wikipedia talks about the various theories that fit into the connectionist camp.

Which means that you have a couple of hundred billion neurons but no single source of control. There is no master processor in the brain controlling a hundred billion slave processors following the goals and directives from the master processor. What one has is an example of subsumption, not of multitasking on a serial processor.
Humans are examples of subsumption if you accept the theory of subsumption. Subsumption is not just a simple multitasking example of driving a primitive leg on a primitive serial microprocessor. The theory has to cover how you think as well as more primitive reflexes.

No, but if you accept subsumption you accept that you are human and that humans provide examples of subsumption unless you erroneously think that subsumption is just multitasking.

"Clear proof" is in your mind. Your mind does not reside in cenral spot. It is distributed as the theory of subsumption dictactes it must. There certainly is clear proof that there is no central control in human brains although you don't have to accept it as clear proof to you.
Surely you are not arguing that there is a single 'master processor' running a control program inside of your head?

It sounds like you are just arguing creationism versus evolution. I don't want to get into that sort of debate with you. It is as simple as that.
If one 'believes' in evolution one believes that organic compounds and organic life self-organized into life as we know it. Some cells specialized as nerve cells and developed the properties of neurons. Neurons became specialized and self-organized into groups, and groups into societies.
Classic AI tried to model a God-like consciousness that people had just assumed was what "Intelligence" required. These classical AI types programmed knowledge directly into their machines, akin to God breathing life, and creating his greatest creation, Man, from nothing.
Connectionists argued that since life evolved, and human consciousness came much later than the chicken or the egg, "intelligence" also had to evolve from simpler organisms to ones with larger brains. And they speculated that there must be a mechanism for nerves to learn and organize.
Neuroscience research showed this to be the case, and people took Edelman's Nobel Prize-winning work in neural organization to build things with much more emergent intelligence than what you get from a handful of neurons and a leg.
It is generally accepted that connectionists and knowledge encoders don't see the world the same way, but it is also generally accepted that subsumption is a theory that fits in with the other connectionist theories. This is true simply because it is about no central control.

I think you are just missing the forest because you are focused on the trees. You are looking at the most primitive example, of a leg, and not seeing past it.

There is not much of it to be found in the most primitive demonstrations. But it would be good to not get stuck there.

Even the primary repertoire of the DNA is 'programmed in' by a creator, if you consider self-organization and evolution to qualify as a creator.

It seems to be more fundamental than that. I believe in science and evolution, I accept the Nobel Prize-winning work about how neurons organize, and I have seen very intelligent AI built using theories like NGS and subsumption when I worked for the government.
I believe that what Brooks demonstrated so long ago was that with a little pre-programmed intelligence a very crude robot could learn to move and exhibit behaviors similar to those that had been created by classic centrally controlled, goal-driven designs.
I think you are just engaging in semantic quibbling again about the term 'learning', but you have alluded to Brooks's robots 'learning' to coordinate movement of multiple legs.

I think virtual parallel machines running FSMs are a good way to organize simple systems using subsumption. It is simple, efficient, structured, and mathematically provable. I expect that that is why some professionals and professors have embraced it. I think it is clever and ahead of its time.
Of course I tend to see it as a bridge to more real parallelism. I like the idea of having a lot of simple processors running simple state machines like simulated neurons rather than simulating many simple virtual processors on a single sequential microcontroller.
Subsumption can be simulated with FSMs, and with either simulated or real parallelism. Subsumption is about parallelism. Multitasking is just a cheap imitation of real parallelism, which is why I object so strongly to the idea that subsumption is just multitasking.
If I implement a system with lots of tasks running on lots of processors, and it does just what your FSMs on virtual parallel machines do, but truly in parallel with no multitasking at all, wouldn't that still be subsumption? Subsumption requires NO multitasking as far as I am concerned. Multitasking is just an implementation technique.
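To make that last point concrete, here is a minimal sketch in C of one way it might look, with POSIX threads standing in for dedicated processors. The behaviors, priorities, fake sensor conditions, and numbers are all invented for illustration; this is not Brooks's code, not SEAforth code, and not anyone's real controller. Each behavior runs as its own genuinely concurrent unit, and a fixed-priority arbiter decides whose command reaches the motors; no time-slicing of behaviors on one sequential processor is involved.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_BEHAVIORS 3

/* One slot per behavior: does it want control, and with what command? */
typedef struct {
    bool active;
    int  speed;
} request_t;

static request_t requests[NUM_BEHAVIORS];  /* index doubles as priority; higher index wins */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static volatile bool running = true;

/* Each behavior is its own concurrently running unit.  The "sensor" is
   faked so higher-priority behaviors only fire part of the time. */
static void *behavior(void *arg)
{
    int id = (int)(long)arg;
    int tick = 0;
    while (running) {
        bool wants_control = (id == 0) || ((tick / (id * 2 + 1)) % 2 == 0);
        pthread_mutex_lock(&lock);
        requests[id].active = wants_control;
        requests[id].speed  = 10 * (id + 1);
        pthread_mutex_unlock(&lock);
        tick++;
        usleep(10000);              /* 10 ms between updates */
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_BEHAVIORS];
    for (long i = 0; i < NUM_BEHAVIORS; i++)
        pthread_create(&threads[i], NULL, behavior, (void *)i);

    /* Fixed-priority arbiter: the highest-priority active request wins.
       The behaviors themselves are never time-sliced against each other. */
    for (int cycle = 0; cycle < 10; cycle++) {
        int winner = -1, speed = 0;
        pthread_mutex_lock(&lock);
        for (int i = 0; i < NUM_BEHAVIORS; i++) {
            if (requests[i].active) { winner = i; speed = requests[i].speed; }
        }
        pthread_mutex_unlock(&lock);
        if (winner < 0)
            printf("cycle %d: no behavior active yet\n", cycle);
        else
            printf("cycle %d: behavior %d drives the motors at speed %d\n",
                   cycle, winner, speed);
        usleep(50000);              /* 50 ms arbiter period */
    }

    running = false;
    for (int i = 0; i < NUM_BEHAVIORS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}

Compile with something like cc -pthread. The point is only that the arbitration works the same whether the behaviors are time-sliced on one processor or truly parallel on many.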

The average hobbyist is indeed not a highly visible researcher.
And you have my sympathy that the average robotics hobbyist seems to have a budget of zero and a lot of unrealistic dreams. I know people who have a lot of different hobbies, but this newsgroup is dominated by folks who want a very inexpensive hobby, perhaps with no more cost than an internet connection.
It seems we are mostly arguing about definitions of 'intelligence' and 'creator', which clearly isn't going to go anywhere.
My main objection to your comments in this thread is that you said: isn't subsumption just multitasking? I say you can have subsumption without any multitasking, and working on parallel processors I think you should have subsumption without multitasking. But that too is really another subject; consider, say, subsumption on SEAforth with real parallelism and without multitasking, and then try to convince me that subsumption is just multitasking.
But I think we have gone about as far as we can without going to comp.robotics.philosophy to argue about definitions of intelligence and whether self-organization qualifies as a creator.
So I will just accept that we see things differently.
Best Wishes
snipped-for-privacy@ultratechnology.com wrote:

No, I'm not, and stop calling me Shirley. ;)

The key here, is "said" versus, "asked".
I asked this question: "Is subsumption really necessary, or is this just a fancy name for multitasking? Is this just an issue of creating the illusion of parallelism in a serial machine? Can the same thing be written in FSA without the need for the concepts of subsumption? Thoughts?"
I have never thought subsumption was multitasking. I asked to see if anyone thought that way, and to provoke thoughts on what subsumption was. The only one of those questions I actually believe to be correct is that the same thing can be written in Finite State Automata, without the need for a larger, overarching concept.

Fair enough. Nice to see you post here, though.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.

a while ago) any program or set of programs/tasks which is capable of running on a processor like what we're using today is representable in an FSA.
Fundamentally, anything run on today's computers can be run on a Turing machine - therefore it can be represented in an FSA.
The whole concept of multi-threading is just something to make it easier for programmers to slice and dice a problem.
When you step away from it, you really just have an execution engine with some registers and memory. The fact that we assign a purpose to some of those registers (like a stack pointer) and create concepts like a stack is just a way of compartmentalizing things. The computer doesn't understand the notion of tasks.
So, to me, concepts like multi-tasking or subsumption are just ways of categorizing things, giving you semantics that you can use for the purposes of communication.
--
Dave Hylands
Vancouver, BC, Canada
snipped-for-privacy@gmail.com wrote:

Yes, true enough. But that really wasn't the sense in which I was asking.

Good point.
I am fairly certain Subsumption could be done in a purely FSM method. But there is something fascinating about the priority scheme. To accomplish this in a purely FSM method would require a particularly strict set of requirements on the transitions to create the priority selection. Beyond the sense that it can be done, I haven't worked it all the way through.
Well, I have worked it through in some of my robots in actual implementation. But I rely on the ordering of the called FSMs, with the higher priorities last. So my arbiter is just whoever writes the output last. Since the scan loop is so much faster than the mechanicals can respond, the subsumed outputs never have a chance to be expressed physically.
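In rough C terms, that scheme reads something like the sketch below. The behavior routines and motor variables are invented placeholders, just to show the shape of it (this is not the actual FSM language being discussed): lower-priority behaviors write the shared command first, the highest-priority behavior writes last, and the last write is what the motors actually see.

#include <stdbool.h>
#include <stdio.h>

/* Shared motor command: whichever behavior writes last in the scan wins. */
static int motor_speed = 0;
static int motor_turn  = 0;

/* Hypothetical behaviors, called lowest priority first.
   Each writes the shared command only when its trigger condition holds. */
static void cruise(void)                 { motor_speed = 50; motor_turn = 0; }
static void follow_light(bool see_light) { if (see_light) motor_turn = 10; }
static void avoid_bump(bool bumped)      { if (bumped) { motor_speed = -20; motor_turn = 45; } }

int main(void)
{
    bool see_light = true;   /* fake sensor readings for the demo */
    bool bumped    = true;

    for (int cycle = 0; cycle < 3; cycle++) {
        /* Scan loop: low priority first, high priority last.  A higher-
           priority behavior that fires simply overwrites (subsumes) the
           earlier output; the loop runs far faster than the mechanics,
           so only the final values ever reach the motors. */
        cruise();
        follow_light(see_light);
        avoid_bump(bumped);

        printf("cycle %d: speed=%d turn=%d\n", cycle, motor_speed, motor_turn);
    }
    return 0;
}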
You are right, though. Often it is just an issue of semantics. You can choose the paradigm that most appeals to you, or is easiest to think about. With my bias, of course, I like to see state made explicit. Obviously it's a passion of mine, because I feel state is such a fundamental concept. As programmers we've learned all sorts of tricky little ways to hide state: in flags, in variables, in the program counter, etc. You can say these differences are semantics, or you can say they are obfuscation. But then, whatever works for you works.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

You should regard subsumption as an abstract model, not as an actual description of the algorithm.
I think that an efficient way to implement subsumption in a serial machine is to have a list of behaviors sorted by priority, and to define for each behavior two subroutines: one which determines if the behavior requests to be activated, and one which actually calculates the motor output. For each cycle, iterate through the list starting from the highest-priority behavior, calling the activation test subroutine until you find one that requests activation. Call its output subroutine. End of cycle.
An (untested) example in Scheme, with a guard added for the case where no behavior requests activation:
(define (process-cycle behavior-list)
  (cond
    ((null? behavior-list) 'no-active-behavior)   ; nobody asked for control this cycle
    ((request-activation? (car behavior-list))    ; highest-priority behavior that wants control...
     (calc-output (car behavior-list)))           ; ...gets to compute the motor output
    (else (process-cycle (cdr behavior-list)))))  ; otherwise try the next one down
Of course, if you implement subsumption using ad-hoc hardware, with behaviors corresponding to physical circuits, you can have all the behaviors running in parallel.
snipped-for-privacy@virgilio.it wrote:

This is pretty much what I do, though in Java.

That would be nice. Though it shouldn't be that expensive now with an FPGA as a base.
--
D. Jay Newman ! Author of:
snipped-for-privacy@sprucegrove.com ! _Linux Robotics: Programming Smarter Robots_
snipped-for-privacy@virgilio.it wrote:

Yes, that is what I'm concluding.
Subsumption was stated in such a broad way that it can be implemented on one processor using multitasking, or on several processors with multitasking and communications, or with a processor for each process without any multitasking, or even down at the level of dedicated hardware.

I agree.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
