Well, I finished Joe Jones _Robot Programming, A Practical Guide to
Behavior-Based Robotics_ (which I highly recommend) and am looking
around for my next "read". One of my friends recommends _Behavior-Based
Robotics_ by Arkin. Anyone read this one, and have comments?
Lots of theory, but that's not bad. It's pretty good at discussing all
the core issues, and it's commonly used as a textbook on the subject. Do
keep in mind that behavioral AI is just one approach to the science,
and it may or may not be the best for any given application. I think a
reading of something like Robin Murphy's Introduction to AI Robotics
serves as a good "rounding out" of the other ideas out there. The latter
book is very approachable, though it's also a textbook.
Author: Constructing Robot Bases,
Robot Builder's Sourcebook, Robot Builder's Bonanza
I must admit, I was surprised how little learning (none) and little
state information (absolute minimum possible, for brief ballistic
behaviors) was used in Joe Jones' book.
There is a specific comment on pg 239, where Joe says, "One subject area
that especially interested me when I began working at the MIT Artificial
Intelligence Laboratory was machine learning. I quizzed my boss on the
subject, but he advised me not to bother trying to build learning
mechanisms into a robot. He reasoned that I, the programmer, was much
smarter than the robot. Given the state of the art at the time, I would
always find it more effective to build an ability into the robot
directly, rather than have the robot acquire that ability on its own
through learning. So far, no robot that I know of has proved the good
So I'm guessing that is Brooks, and I'm wondering if this isn't the
Brooks - Minsky split in approaches to AI driving things toward the
extremes of opinion.
Thanks, Gordon, I ordered it. I'll do both, then.
I personally think stored information, i.e. "state information" is
absolutely necessary to produce AI. Since Joe's book surprised me at how
little state information was used, I am interested in getting a balance
of what the current state of thought is.
I bought Jones' book based on your experience with it and have not
been disappointed. In fact, it is now one of my favorites. I bought
Arkin's book last fall and I also have thoroughly enjoyed it. Where
Jones' book has more practical applications and goes into greater
detail of various control scenarios, a significant portion of Arkin's
book is a survey of behaviour based methods including motor schema,
subsumption, and others. One of the aspects that I liked about it was
that he doesn't just talk about these from a purely academic
perspective, but relates them to actual robots built that use the
various methods, and talks about where each works well and where it
doesn't.
Arkin also talks about the issues and differences between an
analytical approach vs a reactive approach. While each has their
merits, Arkin seems to support a hybrid approach which plays on the
strengths of each. Here is a choice quote which begins Chapter 5:
"A significant controversy exists regarding the appropriate role of
knowledge within robotic systems. Oversimplifying this conflict
somewhat, we can say that behavior-based roboticists generally view
the use of symbolic representational knowledge as an impediment to
the efficient and effective robotic control, whereas others argue
that strong forms of representational knowledge are needed to have a
robot perform at anything above the level of a lower life form."
He goes on to explain that a hybrid approach may be more useful than
either by itself, showing examples where knowledge representations
have been successfully integrated into otherwise reactive-based
systems.
Another book that I also like a lot is "Vehicles - Experiments in
Synthetic Psychology" by Valentino Braitenberg. If you don't have
that one, by all means pick up a copy. It is basically a sequence of
thought experiments where each chapter represents a robot that you
build and examine how it behaves. Each chapter adds new sensors and
control circuits along with discussion of how they affect the
behaviour of the robot. In later chapters, and with a little
imagination from the reader, the behaviours are so complex that it may
be difficult to distinguish the behaviour of the robot from that of a
living creature.
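The earliest of those vehicles are simple enough to simulate in a few lines. Here is a rough sketch (the geometry and light-falloff constants are made up for illustration) of Braitenberg's "vehicle 2", where two light sensors excite two motors either cross-wired or direct-wired:

```python
import math

def vehicle_step(x, y, heading, light_x, light_y, crossed=True, dt=0.1):
    """One update of a Braitenberg vehicle 2: two light sensors drive
    two wheels. Cross-wiring (2b) turns the vehicle toward the light
    ("aggression"); direct wiring (2a) turns it away ("fear")."""
    sensor_angle = 0.5  # radians off the centerline (arbitrary choice)

    def intensity(sx, sy):
        # Brighter when closer to the light source.
        d2 = (light_x - sx) ** 2 + (light_y - sy) ** 2
        return 1.0 / (1.0 + d2)

    # Sensor positions, offset to either side of the heading.
    lx = x + math.cos(heading + sensor_angle)
    ly = y + math.sin(heading + sensor_angle)
    rx = x + math.cos(heading - sensor_angle)
    ry = y + math.sin(heading - sensor_angle)
    left_sense, right_sense = intensity(lx, ly), intensity(rx, ry)

    # Excitatory wiring: each sensor speeds up one motor.
    if crossed:
        left_motor, right_motor = right_sense, left_sense
    else:
        left_motor, right_motor = left_sense, right_sense

    # Differential-drive kinematics.
    speed = (left_motor + right_motor) / 2
    turn = right_motor - left_motor  # positive turns counterclockwise
    heading += turn * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading
```

Note that there is no stored state at all here; the "behaviour" falls entirely out of the wiring, which is exactly Braitenberg's point.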
So glad to hear.
I have had the book on my shelf for a while, but haven't more than
scanned it. Unfortunately, I buy books faster than I can read them, and
my interests shift faster than I can finish reading as well. Life is
short and there are so many interesting things to study...
Yes, Jones also mentions Arkin's book in his introduction, and suggests
it is "an excellent example of a more rigorous approach" but that he
(Jones) will take a more practical approach.
Yes, doesn't this appear to be a very interesting controversy?
Joe's approach has very little state usage. He relies on emergence of
the interactive behaviors.
I've always been fascinated with state machines and state information. I
have to think the emergence of competing behavior is simply a case of
hidden state information. In other words, the state information is hidden
in the priority structure of the behavior arbitration. I figure it's
still there, but not explicit.
Personally, I like explicit. If for no other reason, making the state
information of a system explicit makes it easy to store and restore, so
systems can more easily be made fault tolerant and can be recovered
after system crashes simply by reloading the explicit state information.
But then again, that's my approach.
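As a sketch of what I mean (the field names are illustrative, not from any real controller), keeping all the state in one explicit, serializable record makes checkpoint-and-restore nearly free:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class RobotState:
    # All of the robot's explicit state in one serializable record.
    behavior: str = "cruise"
    heading: float = 0.0
    battery_low: bool = False

def checkpoint(state, path="state.json"):
    """Persist the explicit state so it survives a crash or power cycle."""
    with open(path, "w") as f:
        json.dump(asdict(state), f)

def restore(path="state.json"):
    """Rebuild the state record; the robot resumes where it left off."""
    with open(path) as f:
        return RobotState(**json.load(f))
```

Hidden state buried in an arbitration network has no equivalent of this one-call save and restore.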
If Arkin is in the middle, I need to read him, as I am probably more
aligned with his intent, if not the details of his preferences.
There are some interesting studies in the field of chaos suggesting that
on the edge between fixed and free lies a region of potential chaotic
response, and this response seems "life like" to me. So likewise,
between a stateless system, and one that is overly variable ridden, I
think lies the better way.
Yes, I have this one and have read enough of it to put it to application
in my programming. Joe also mentioned this book in his text.
Very interesting. Another book I can highly recommend is called
"Blondie24 - Playing at the Edge of AI" by David B. Fogel. I really
enjoyed this one. The book describes an experiment in designing and
teaching a neural network to play checkers. The teaching method was
genetic based by starting with random weights for N neural nets and
having them play each other for Y iterations. After Y iterations, the
N/2 nets (weights) which performed the best are duplicated, modified
slightly at random and added back into the pool, while the half that
performed the poorest were discarded. Then the process repeated with
the N/2 original and the N/2 cloned+random variations. After a short
while and with no other influence other than the genetic training
method described above, the neural net was good enough to play
checkers against a real person and win. And after hundreds of
generations, the resultant net could beat very good checker players.
The name "Blondie24" of the book happens to be the on-line name the
researchers used when they matched their neural net against
unsuspecting human players using internet gaming sites. Turns out
that "Blondie24" somehow got many more offers for game play than other
names they tried :-) A very fascinating book.
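The training loop described there is easy to sketch. This is not Fogel's actual code, and the toy fitness function below is only a stand-in for the round-robin checkers matches, but the keep-the-best-half, clone-and-mutate cycle looks like:

```python
import random

def evolve(fitness, n=8, weight_len=4, generations=20, sigma=0.1, rng=None):
    """Evolutionary training sketch: score a population of weight
    vectors, keep the best half, clone the survivors with small random
    mutations, and repeat. `fitness` stands in for having the nets
    play each other (an assumption; any scoring function works)."""
    rng = rng or random.Random(0)
    pool = [[rng.uniform(-1, 1) for _ in range(weight_len)]
            for _ in range(n)]
    for _ in range(generations):
        ranked = sorted(pool, key=fitness, reverse=True)
        survivors = ranked[: n // 2]            # best half survive
        mutants = [[w + rng.gauss(0, sigma) for w in s]
                   for s in survivors]          # cloned + random variation
        pool = survivors + mutants              # worst half discarded
    return max(pool, key=fitness)

# Toy fitness: prefer weight vectors near [1, 1, 1, 1].
best = evolve(lambda ws: -sum((w - 1) ** 2 for w in ws))
```

Because the survivors are carried over unchanged, the best score can never get worse from one generation to the next, which is what lets such a blind process climb as far as Blondie24 did.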
What I also found interesting was the author's take on chess playing
AIs that use enormous amounts of compute power. He basically says
that while playing chess was once considered the holy grail of AI due
to the required strategy and intellectual skills, it has been morphed
into such a specialized single application that it is nearly useless
in application to other fields. And while he admits that his checker
playing neural net may not reach the level of expertise in playing
checkers as current chess playing programs have reached in chess, the
approach, especially the training method, is highly adaptable to
nearly any application where inputs are transformed into actionable
outputs given proper feedback.
I first utilized state machines in the context of implementing
communication protocols. Without state machines, their implementation
quickly becomes unwieldy and fragile. The use of state machines makes
their implementation straightforward and easy to understand,
maintain, and extend. When I began programming robots, I found the
use of state machines to be a good tool for managing the complexity
there as well.
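For anyone who hasn't tried it, a table-driven state machine can be tiny. This toy receiver (states and events invented for illustration, not from any real protocol) shows why they are easy to maintain and extend - new behavior is just a new table row:

```python
# Transition table for a toy framed-message receiver.
TRANSITIONS = {
    ("idle",    "start"):  "header",
    ("header",  "length"): "payload",
    ("payload", "byte"):   "payload",
    ("payload", "end"):    "idle",
}

def step(state, event):
    """Advance the machine; any unexpected event resets to a safe
    state, which is what keeps the implementation robust."""
    return TRANSITIONS.get((state, event), "idle")
```

Feeding it the event stream `start, length, byte, byte, end` walks it from "idle" through "header" and "payload" and back to "idle", and garbage at any point simply resets it.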
Yes, if you change the arbitration priority, you get drastically
different results even though the behaviours themselves stay the same.
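A little sketch makes the point concrete. With a fixed-priority arbiter (the behaviors below are toys, not from any of the books), merely swapping the order of the list changes what the robot does:

```python
def arbitrate(behaviors, sensors):
    """Fixed-priority arbitration: the first behavior in the list
    (highest priority) that wants control wins. Reordering the list
    changes the robot's overall character even though each behavior
    is unchanged."""
    for behavior in behaviors:
        command = behavior(sensors)
        if command is not None:
            return command
    return "stop"  # default when no behavior fires

# Two toy behaviors (illustrative only).
def avoid(sensors):
    return "turn-away" if sensors.get("bumper") else None

def seek_light(sensors):
    return "drive-to-light" if sensors.get("light") else None
```

With both the bumper and the light active, `[avoid, seek_light]` yields "turn-away" while `[seek_light, avoid]` yields "drive-to-light" - the hidden state really does live in the ordering.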
I think I am still at a point where I am learning what's out there in
terms of other approaches and cannot say that I have any one approach.
I have experimented a good deal and am now beginning to explore more
of what others have done. There's a lot to learn about this field and
books like those of Jones and Arkin are great for summarizing the
research of decades.
However, I must admit that I prefer to spend significant time
experimenting on my own first and form some ideas myself. Only then
do I like to see the approaches that others have taken. It's not that
I like reinventing the wheel, but I like forming my own ideas without
too much preconception from others first. If I first read all the
research available, chances are I'll view the problem in the context
of others' approaches instead. Also, after developing my own approach
and ideas, I generally better appreciate others' approaches and ideas
when I do study them.
Hmmm, "fractal behaviours" has a nice ring :-) Makes me think of
hierarchical control distribution. Coprocessors for my coprocessors,
ad infinitum :-)
Well, I am more than a chapter into Arkin now, and at the end of the
first chapter, the controversy is very clear. You have the "model the
world" crowd, originally led by Minsky, the "react to the world"
crowd led by Brooks, and the "it can be both" approach of Arkin.
Pg. 24 on Scalability. "Many of the strict behaviorists persist in their
view that the approach has no limits; notably Brooks (1990b) states that
'we believe that in principle we have uncovered the fundamental
foundation of intelligence.' Others advocate a hybrid approach between
symbolic reasoning and behavioral methods, arguing that these two
approaches are fully compatible: 'The false dichotomy that exists
between hierarchical control and reactive systems should be dropped.'"
I guess I hadn't realized how strong the behavior crowd was on the
concept they had "uncovered the fundamental foundation of intelligence."
I had thought it was more like, "hey we've found an interesting approach
to lower level control". I didn't appreciate it was considered the be
all and end all of AI.
Interesting. I hadn't heard of this work before.
I can relate to this also. Often you can't appreciate something until
you try to derive it yourself first.
Well, I hadn't taken that path... but I have considered fuzzy state
machines. A system partially in one state and partially in another. Or
maybe the "many worlds" approach, where it is in every state in a
parallel universe (next allocation block of memory).
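A fuzzy state machine along those lines can be sketched with a membership degree per state and max-min composition over a transition table (the states and numbers below are made up for illustration):

```python
def fuzzy_step(membership, transition, normalize=True):
    """One update of a fuzzy state machine: `membership` maps each
    state to a degree in [0, 1], and `transition[s][t]` is the
    strength with which state s flows to state t. Max-min composition
    (the standard fuzzy relation product) combines them, so the
    machine really is 'partially in' several states at once."""
    result = {}
    for s, degree in membership.items():
        for t, strength in transition.get(s, {}).items():
            result[t] = max(result.get(t, 0.0), min(degree, strength))
    if normalize and result:
        peak = max(result.values())
        if peak > 0:
            result = {t: d / peak for t, d in result.items()}
    return result
```

A crisp state machine falls out as the special case where every degree and strength is exactly 0 or 1.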
Hi Randy. Arkin's book is written on more of an academic level than
other books cited here. Tons of references to research papers.
Also, from my take on the matter, the original idea of subsumption
involved complete avoidance of internal representations, ala Brooks,
but what is called behavior-based robotics was originally an outgrowth
of this which did include internal representation in the system:
Maja Mataric, "Learning in Behavior-Based Multi Robot Systems:
Policies, Models, and Other Agents", 2001:
"... Behavior-based systems grew out of the reactive approach to
control, in order to compensate for its limitations (lack of state,
inability to look into the past or the future) while conserving its
strengths (real-time responsiveness, scalability, robustness) ..."
You might take a look at Maja's stuff - she was a student of Brooks,
and has done a lot to extend the original subsumption ideas:
Also, the hybrid approaches, as laid out by Arkin, combine subsumption
[or behavior-based] with more classical AI techniques. Here is what I
wrote in a previous post:
It is rather clear that Arkin wasn't trying to present new ideas so
much as doing a compilation and discussion of the many examples of the
behavior-based approach. However, he clearly perceives the limitations
of a purely reactive approach and discusses representational issues.
Also, in chapter 6 he discusses hybrid approaches -
".... multilayered hybrid architecture comprising a top-down planning
system and a lower-level reactive system is emerging as the
architectural design of choice ... hybrid system designers look toward
a synthetic, integrative approach that applies both of these paradigms
... using each where most appropriate ..."
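A cartoon of that hybrid layering (all names invented for illustration): the deliberative layer proposes a goal top-down, and the reactive layer servos toward it while letting safety reflexes preempt the plan:

```python
def plan(world_model):
    """Deliberative layer: pick the next waypoint from a toy world
    model. A real planner would search a map; this just pops a list."""
    return world_model["waypoints"][0] if world_model["waypoints"] else None

def reactive_layer(sensors, goal):
    """Reactive layer: head toward the planner's goal, but hard-wired
    reflexes always win over the plan."""
    if sensors.get("obstacle"):
        return "swerve"            # reflex preempts the plan
    if goal is None:
        return "idle"
    return f"head-to {goal}"

def control_cycle(world_model, sensors):
    # Top-down goal selection feeding a bottom-up reactive executive.
    return reactive_layer(sensors, plan(world_model))
```

The division of labor is the whole idea: the planner never has to react fast, and the reflexes never have to think.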
I can't help but think that there is something to this. I've got
three kids of ages 2, 7, and 11. Having closely observed the
behaviours from birth to 2 three times now, I can't help but think of
goal oriented, NN training methods using immediate feedback. They go
from staring off into space, with only minimal responses, to
discovering how to focus on objects using their eyes. How do they do
that? What causes that to happen? They go from flailing arms and
legs on the floor, to eventually discovering the right impulses to
make their arms and legs scoot them across the floor, and eventually
to balance and walk upright. They discover that they have hands,
sometimes staring at them for long periods of time. They feebly try
them out, knocking over blocks and toys. They learn to elicit
responses from their parents by smiling at just the right times (OK,
sometimes it's gas), or by crying. Then they begin to associate words
and language and begin to communicate verbally.
I don't really think this has much to do with behaviors as put forth
by Arkin or Jones or Brooks - their methods are deterministic by
design. But I certainly believe that given enough state, a few basic
built-in "goals", and a wealth of feedback, highly complex animal-like
organisms will emerge.
I guess I kind've went off on a tangent. Sorry about that.
Yes, I agree with your "take" as I look further into this.
To me, the behavior-based approach appears to be statelessness taken to
an extreme.
I think Joe's use of ballistic behaviors demonstrates how the attempt to
get completely away from state and use only reactive servo behaviors is
not fully achievable.
To me this is good news. It means that some middle road (perhaps
Arkin's, perhaps not) will be more successful than this reactive one.
There is something deep and fundamental about "state". Perhaps it is my
training as a physicist that gives me this impression, going back to
ideas like "ground and excited states of an electron". Further, the
conversion of mass-to-energy and energy-to-mass is an issue of "state".
But! Applied to controls, state information capacity is memory and its
contents is history. State information acquired is learning. State
information compiled in extension is context, and context is the basis
for symbolic interpretation.
Children don't start out with a blank slate. There
are hundreds of millions of years of evolutionary
'optimization' built in. They "know how to learn"
already, unlike some semiconductor beastie built
in the basement. Basic goals, and good starter-
methods for achieving those goals, are hard-wired.
A great time-saver.
As for electronic intelligence, I think it's becoming
clear that the most fruitful approach will be through
electronic emotionalism. Almost everything animals do
seems to be a result of 'emotions' balancing against
each other. Apparent 'intelligence' results from the
synthesis of competing goals - an emergent behavior
made possible by, though not inherently designed into,
the system. Learning biases the interaction of emotions
and emotionally-driven goals.
Truly "life-like" behaviors and versatility can surely
be achieved using electronics, possibly not all that
MUCH electronics. Those cutesy Japanese electrical
dogs and cats that apparently employ all of what I've
been speaking about - they don't use THAT much computer
power to emulate surprisingly life-like behaviors. If
we start with about an order of magnitude more CPU
and memory ...
The trick is going to be in getting exactly the right
interactions between the various 'emotions' and anything
learned. Nature has done this for animals through the
evolutionary process, but we will have to experiment
with it on our own when we build electronic beasties.
Get it wrong, perhaps even slightly, and our creations
may be 'insane' - unable to generate appropriate responses.
Of course, WE can do one thing nature cannot - build
"virtual" beasties and let the Darwinian process
proceed at computer speeds. This will save us a lot
of time weeding out the "insane".
Human-level intelligence though ... probably eventually,
but impractical using contemporary semiconductor tech.
Of course, "human level" doesn't necessarily imply "human
LIKE". We will be creating aliens, something that's our
equal or better but doesn't necessarily think LIKE we
do. Best to incorporate an 'off switch' early on :-)
Anyway ... I'd suggest supplementing your reading
material with stuff by Oliver Sacks and V.S.
Ramachandran - "brain guys". Their insights into
behavioral neurology might be useful to anyone
trying to create electronic intelligences.
Hi Randy. I think Brooks had a lot of really brilliant ideas - esp
regards giving simple systems robust survival capabilities - but these
are really bug-level intelligences. I don't quite understand why he
would make such broad generalizations, as you mentioned ...
... other than to take the word "foundation" in a very general sense -
ie, that reactive mechanisms form the "bottom-most" level of a truly
intelligent system.
I cannot see how higher-order intelligence as found in humans can be
viewed in any other way, and I imagine the vast majority of AI
researchers probably feel the same way. IOW, reactive/subsumption
forms a nice low-level and robust way to interact with the real world,
but it's only the first step along the way to real intelligence. If
you look at the ideas of Marvin Minsky, for example, you see this laid
out in detail:
And of course, as I mentioned, Mataric and others immediately started
extending reactive architectures by adding more representational
levels, similarly Arkin's hybrid systems ... on and on.
To me, Brooks' real contribution was dwelling on the ideas of physical
groundedness, embodiment, and situatedness - basically the idea that
an AI should be developed with respect to the environment that it will
be existing inside of. IE, an AI existing purely within a computer
core, with no reference to the outside world, 'lives' a fantasy life.
Those of us who build small robots know the importance of good sensor
and end-effector arrays.
And PGE+S is how evolution did it. It helped develop the brain via
natural selection of those organisms which had brains which allowed
the bodies they were carried by to survive and reproduce. As I see it,
Brooks saw how brittle most AIs were, compared to something as simple
as a living bug, and asked why this was so. However, what he came up
with was only the bottom-most level of a truly intelligent system.
Hi Randy, I found an interesting reference on hybrid robotics systems,
which is not referenced in Arkin's book.
I found a reference in the following paper to Tani's robotic work,
mentioned below. The approach involves recurrent NN's and
self-learning schemes, integration of bottom-up perception and
top-down "subjective" prediction processes, in a real robot in a
real-world environment. The philosophical citation is just for
reference. Tani's dynamical systems approach is the meat.
"Philosophical Conceptions of the Self: Implications for Cognitive
Science", Shaun Gallagher, 2000.
"... Tani (1998) explores the possibility of establishing an
artificial version of Strawson's minimal self in a machine ... Tani,
however, in contrast to Strawson, makes it clear that the robotic self
he is designing is the result of physical interaction between the
robotic body and its environment...."
"An Interpretation of the 'Self' From the Dynamical Systems
Perspective: A Constructivist Approach", Jun Tani, 1998.
"... This study attempts to describe the notion of the "self" using
dynamical systems language based on the results of our robot learning
experiments. A neural network model consisting of multiple modules is
proposed, in which the interactive dynamics between the bottom-up
perception and the top-down prediction are investigated...."
"... A difficulty exists in the embodiment of the higher cognitive
levels by means of the dynamical systems approach. The question is how
the dynamical systems approach can embody the complex subjective
processes without employing the AI scheme of symbolic representation
and manipulation. The key to solve the problem can be found in a
recent scheme developed in the field of artificial neural networks,
called recurrent neural network (RNN) learning (Elman, 1990; Pollack,
1991). The RNNs are considered to be adaptive dynamical systems from
which dynamical structures can be tuned by means of neural connective
weight modification using certain self-learning schemes. Elman (Elman,
1990) and Pollack (Pollack, 1991) showed that the RNNs can learn
certain language syntactic structures from example sentences. The
particular finding in their research is that grammatical rules cannot
be seen explicitly in the neural internal representation, but the
rules are actually embedded in attractor dynamics of the RNNs. Using
the RNNs is suitable for our objective since we can exclude the
"homunculus" from the systems which attempts to look down at the
representation from the top and to manipulate the elements of the
representation. The symbol grounding problem may not exist for RNNs
since there exist no explicit forms for the symbols in the RNNs which
need to be grounded...."
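For anyone who wants to poke at the idea, here is a minimal untrained Elman-style network (sizes arbitrary; real work would add the weight training the papers describe). The point is only that feeding the hidden layer back in as context makes the response history-dependent, with no explicit symbols anywhere:

```python
import math
import random

class ElmanRNN:
    """Minimal Elman (1990) recurrent net: the hidden layer is fed
    back in as 'context' on the next step, so sequence history lives
    in the hidden-state dynamics rather than in explicit symbols."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = random.Random(seed)
        def w(rows, cols):
            return [[rng.uniform(-0.5, 0.5) for _ in range(cols)]
                    for _ in range(rows)]
        self.w_in = w(n_hidden, n_in)
        self.w_ctx = w(n_hidden, n_hidden)   # recurrent (context) weights
        self.w_out = w(n_out, n_hidden)
        self.context = [0.0] * n_hidden      # copy of last hidden layer

    def step(self, x):
        sig = lambda v: 1.0 / (1.0 + math.exp(-v))
        hidden = [sig(sum(wi * xi for wi, xi in zip(row_in, x)) +
                      sum(wc * c for wc, c in zip(row_ctx, self.context)))
                  for row_in, row_ctx in zip(self.w_in, self.w_ctx)]
        self.context = hidden                # remember for the next input
        return [sig(sum(wo * h for wo, h in zip(row, hidden)))
                for row in self.w_out]
```

Present the same input twice and the outputs differ, because the second step sees the context left behind by the first - which is exactly the "attractor dynamics" framing in the quoted passage.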
: Well, I finished Joe Jones _Robot Programming, A Practical Guide to
: Behavior-Based Robotics_ (which I highly recommend) and am looking
: around for my next "read". One of my friends recommends _Behavior-Based
: Robotics_ by Arkin. Anyone read this one, and have comments?
I found Arkin's book to be really useful in debating/contrasting/
combining the various types of robotics. Some of the book is behavioral,
some is more rigid than the Brooksian styles. His vector summing
navigational discussions are very thought provoking (on my part anyway).
Arkin is a behavioral "purist" so you get a bit more variety in there,
and a lot of theory rather than practicum. I refer back to it fairly
often.
: Randy M. Dumse
: Caution: Objects in mirror are more confused than they appear.
Thanks for the quick review Dennis.
I just received my "Robot Programming" (Jones) book and I'm looking
forward to reading it. I added the Arkin one to my Amazon wishlist so
I'll remember next time I buy a book. It's not exactly cheap, so I want
to make sure it's worth it; knowing how much you know your stuff
made me decide to get it later.
I'd like to read a good intro book on neural nets too, but it's hard to
find something that is more practical than theoretical. (I'm definitely
more a hands-on type of guy) Any suggestions?
Keep on bot'ing
Abebooks is a co-op/central agent with many small book sellers in the
US, and also CA, I believe. I've gotten books from several places
around the country. Shipping to CA is not much different from
somewhere within the US.