This video was linked to on Slashdot:
In the video, the robot learns how to walk. How does the robot know it
has arms in the first place? Did it likely have to figure this out
(perhaps prior to the video), or is it likely explicitly programmed to
move its arms randomly at first and go from there?
Along the same lines, how does a baby figure out it has arms and legs?
How does it figure out that they can be used for locomotion -- by
moving them around randomly at first?
I'm very curious...if anyone has any input, please post.
Well, I think that question shows how the approach taken by this project is
not general enough to explain how we learn.
Beyond the question of how it knows it has arms are all the other
general questions of how it knows that it's even in a 3D environment and
how it knows there is gravity holding the object to a flat surface.
All these things seem to have been programmed into the model by the
programmers instead of being learned by the robot. That is, without
studying the research, I'm guessing the basic parameters of the model were
selected by the programmers - such as the fact that the unit was a block
with arms existing in a 3D space trying to move on a flat surface. As
such, only a few very specific parameters of the model were then searched,
like the length of the arms, and the number of joints. So most of the
details of the "model" were selected by the programmers and only a very
small part seems to have been learned.
I don't think anyone knows the answer to that question. The starfish
project is just one of many exploring ideas of machine learning to try and
help us better understand just how humans learn these things.
I happen to lean towards a belief that we have far less knowledge about our
environment built into us than the starfish model has built into it. I for
example don't believe we have any knowledge built into us about the 3D
nature of the world, or that gravity exists, or that we have "arms" and
"legs". What is built into the brain is lots of sensory inputs and
effector outputs coming from eyes, and ears, and going to arms and legs.
The brain doesn't have to "learn" that it has sensory inputs and outputs;
that's just built in. What it has to learn is how different signals on
the effector outputs, sent based on the context created by all the sensory
inputs, are likely to lead to pain and pleasure events. So what the brain
learns is how to react to the current context.
I believe all our "understanding" that we have "arms and legs" and that we
live in a 3D environment with gravity holding us to the ground, is just the
result of learning in complex detail, how to react to our environment in
ways that maximize our long term pleasure.
What I believe the brain is actually modeling at the lowest level is
expectations of rewards. So as it produces different behaviors (generates
pulses in different parts of the brain), it's able to use its model of
expected future rewards to test and evaluate each behavior as it happens,
without having to wait for real rewards. In psychology, this modeling of
future rewards is described in terms of secondary reinforcers. For
example, a baby will learn to see its mother as a predictor of future
rewards (like when it
gets milk). So even though the actual tasting of milk is the real reward,
the brain learns that simply seeing the mother is a good predictor of
higher future rewards (high odds of getting milk soon) and not seeing the
mother is a predictor of lower future rewards. As a result, any behavior
which allows the baby to see the mother, acts as a secondary reward for
that behavior. As a result, it learns to turn its head, and eyes, and keep
them focused on the mother when the mother is around - because just looking
at the mother has become a reward in the world-model the brain has created.
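The secondary-reinforcer idea maps neatly onto temporal-difference learning. Here is a minimal TD(0) sketch, where the states, rewards, and constants are all invented for illustration, showing how a state that merely predicts a primary reward (like seeing the mother) acquires value of its own:

```python
# TD(0) value learning over a two-step episode: the baby sees its mother,
# then tastes milk (the only primary reward). Everything here is a toy
# illustration, not a model of any real experiment.

GAMMA = 0.9   # discount applied to future rewards
ALPHA = 0.1   # learning rate

# (state, primary reward received in that state)
episode = [("see_mother", 0.0), ("taste_milk", 1.0)]

value = {"see_mother": 0.0, "taste_milk": 0.0}

for _ in range(200):
    for i, (state, reward) in enumerate(episode):
        nxt = value[episode[i + 1][0]] if i + 1 < len(episode) else 0.0
        # TD error: how much better things went than the current prediction
        value[state] += ALPHA * (reward + GAMMA * nxt - value[state])

print(round(value["see_mother"], 2))  # close to 0.9: a secondary reinforcer
```

After training, "see_mother" carries roughly 0.9 of the milk reward's value even though no milk ever arrives in that state, so any behavior that brings the mother into view gets reinforced on its own.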
So the brain learns how to create output signals, in response to the
current sensory context, to maximize rewards. But to do so, it must create
some type of general understanding of what the sensory signals mean. It
must, for example, build signal classification circuits that allow it to
recognize that different images of a dog are the same dog, and not just
different dogs. And at the same time, it must learn to recognize that
different images of our right arm are the same right arm. It must do this
because when it learns how to respond to a dog, it should respond the same
way every time it sees the dog, even if the current image of the dog (the
dog's butt) is very different from the last image seen (the dog's face).
So the other part of the problem the brain solves is building these
classification circuits to lump together different sensory data patterns as
being similar so that when it learns how to react to one pattern, it will
correctly use that same reaction for other sensory patterns that represent
the same thing. Until it can classify different sensory patterns as being
"my right arm" it won't understand that its right arm is "one thing"
instead of 1000 different unrelated sensory patterns.
Again, no one knows exactly how the brain does this. But many people have
been working on algorithms to classify sensory data in attempts to
understand and duplicate this power (such as all the work done on pattern
recognition).
Some people believe a lot of this power is hard-wired into us by evolution.
I suspect the brain is more general purpose and uses temporal clues to
build these classification circuits. I think that the closer in time
different sensory patterns tend to happen to each other the closer they
become to being treated as the same in the processing circuits.
For example, if you have a simple sequence of single digit numbers as a
sensory input, we might see sequences like 128457623745. In this small
example, 5 always follows 4. If this trend continues over time, the
processing circuit could start to classify 4 and 5 as being the same thing.
It would become the single item of a "45" pattern. Once the 4 showed up,
the system would classify it as the "45" pattern, even before the 5 got
there, simply because 4 was such a strong temporal predictor of 5. I think
this is the key to how the classification circuits work. They use temporal
prediction to shape the classification circuits. If one sensory pattern
(across multiple inputs) is a strong predictor of other patterns, then all
those patterns get lumped together as being "the same" thing. We learn to
understand that our arm exists, in terms of all the sensory patterns that
are created by our arm (like what our arm looks like in different
positions) because each is a strong temporal predictor of the others.
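The "45" example can be turned into a small program: scan a toy symbol stream, count how often each symbol immediately follows each other symbol, and lump a pair together when the first is a near-certain predictor of the second. The stream, threshold, and minimum count here are arbitrary choices for illustration:

```python
from collections import Counter, defaultdict

stream = "1284576237451945128453"   # toy sensory sequence; 4 is always followed by 5
follows = defaultdict(Counter)      # follows[a][b]: times b came right after a
totals = Counter()

for a, b in zip(stream, stream[1:]):
    follows[a][b] += 1
    totals[a] += 1

THRESHOLD = 0.9   # how reliable a predictor must be before merging
MIN_COUNT = 3     # ignore symbols seen too rarely to judge

merged = {
    (a, b)
    for a in follows
    for b in follows[a]
    if totals[a] >= MIN_COUNT and follows[a][b] / totals[a] >= THRESHOLD
}

print(merged)   # {('4', '5')}: 4 and 5 get lumped into a single "45" pattern
```

With richer input, the same trick applied across many sensory channels could lump together the many different views of one arm, which is all the argument above requires.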
So what the brain has to start with, is its built in effector outputs, but
it has to learn that there is an arm in the environment by analyzing the
temporal sequences of the sensory inputs. And the reason we learn to see
the arm as part of us is that what happens to it can directly
create pain and pleasure in us. If we hit our arm with a hammer it
produces pain, but if we hit that log with a hammer, it doesn't produce
pain - so the log is not "us" but the arm is.
People who have a disease that causes them to lose feeling in their arm
have the problem of forgetting that the arm is really part of them. They
are more likely to use it as a hammer to hit things with and not care if
it's getting damaged because they no longer see it as being part of them -
it's just a hunk of flesh attached to them that's not doing them much good.
If you could put sensors on your car body, and wire it into your brain so
you could directly feel when things touched your car, and so that you could
feel pain if things hit your car too hard, or if the engine overheated,
your sense of "self" would probably start to expand to include your entire
car.
On Sep 1, 4:48 pm, firstname.lastname@example.org (Curt Welch) wrote:
I think the scientific evidence leans in the opposite direction.
People are born with a lot of innate knowledge about the
physical world. Here is an experiment you can do for
yourself: next time someone lets you hold their newborn
baby, let your arms drop a few inches so the baby is
momentarily in freefall. It will react with fear. We don't
learn to be afraid of gravity, it is instinctive. Warning:
They will not let you hold their baby a second time.
As far as knowledge about its body, I don't think there
is hard evidence that babies are born with knowledge
about their limbs and joints, but many other animals
clearly are. Horses, chickens, etc. are up and running
around very soon after being born. They don't learn
to walk and run, they are born with the knowledge.
That's interesting. However, I don't think the experiment really supports
that conclusion. What it shows is that a fear response is innate (you
didn't bother to explain what the response actually was, or why you feel
justified in calling it a fear response), and that we have some sort of
innate drop sensor wired so that it triggers the innate fear response.
This is a large step away from the concept of gravity which seems to be
hard-wired into the starfish robot model. As far as I can surmise from the
demos, the starfish internal model knows that if it's got a leg of a given
size, and it moves the leg down against the flat ground, it will cause the
body to rise. This entire model is innate to the starfish robot. A baby
with a drop sensor wired to a fear response has no real concept of its
body or how it's affected by gravity, whereas the robot has an innate
concept of how gravity affects a 3D body. All the starfish seems to learn
is the shape of its own body. That's a neat thing to test in a robot to
see what it produces, but I think all the talk about this robot having some
unique sense of self (as if it were the first and only robot to have a
sense of self) is way off base.
That's very true.
But what I think happened in evolution of man is that the strong learning
system ended up replacing almost all the hard wired knowledge. Instead of
having that knowledge hard wired into our circuits, it was replaced with
strong generic learning circuits with hard wired motivations. Evolution
still had to make sure we would act in ways to maximize our survival, but
instead of making that happen by hard-coding the behaviors, it built a
system where it would hand select, and hand tune the motivations, instead
of the actual behaviors. All the complexity we are born with seems to
exist in our motivations, and not in our behaviors. Our pain sensors and
our pleasure sensors are the things that motivate our learning system.
Humans, instead of being born with a large and complex set of instinctive
behaviors, are now born with a large and complex set of motivations which
drive the learning system.
The baby-drop experiment you describe indicates to me that babies are born
with an innate motivation circuit to sense a drop and motivate the learning
system to prevent it. In other words, an innate motivation circuit that
makes dropping a negative reinforcer.
I know my wife has a very strong anti-drop motivation. It's so bad that
she doesn't ride roller coasters because of how uncomfortable the drops
make her feel. She also hates sudden dips in roads that create that
momentary sense of dropping. She even put a dent in her sister's glove
compartment during one car ride where there was a sudden drop in the road
which caused her to push against the glove box as a panic response.
For an organism that spent a good bit of its evolutionary past living in
trees, it's not hard to see why evolution might have given us a drop
sensor and wired it to our learning brain as a negative reinforcer.
Without such a sensor, we would learn not to fall out of trees only after
receiving the pain caused by hitting the ground. Of course, since one drop
could kill, using the physical damage sensors as the only motivation would
mean a lot of young chimps would die before learning not to fall. With a
drop sensor, the chimp would have an innate fear of falling and would learn
how not to fall, much faster, without ever once having to experience the
pain of hitting the ground.
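One way to see the point is to compare two toy value-learners walking a one-dimensional branch: one feels only the pain of impact (which never happens here), while the other has an innate drop sensor that fires a small negative signal when it teeters at the edge. The environment and every number in it are invented; the point is only that the sensor-equipped learner comes to fear the edge without a single fall:

```python
ALPHA, GAMMA = 0.5, 0.9
N = 5                      # positions 0..4 along the branch; 4 is the edge
v_plain  = [0.0] * N       # learner with pain-on-impact only
v_sensor = [0.0] * N       # learner with an innate drop sensor

for _ in range(20):        # repeated walks toward the edge, never falling
    for pos in range(N - 1):
        nxt = pos + 1
        # the drop sensor fires when teetering at the edge;
        # the impact-pain sensor stays silent because no fall occurs
        r_sensor = -1.0 if nxt == N - 1 else 0.0
        r_plain = 0.0
        v_sensor[pos] += ALPHA * (r_sensor + GAMMA * v_sensor[nxt] - v_sensor[pos])
        v_plain[pos]  += ALPHA * (r_plain  + GAMMA * v_plain[nxt]  - v_plain[pos])

print(v_sensor[2] < 0)   # fear has spread back from the edge, no fall needed
print(v_plain[2] == 0)   # without the sensor, there was nothing to learn from
```

The sensor-equipped learner's negative values propagate backward from the edge exactly the way the chimp in the example learns to avoid falling without ever hitting the ground.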
I agree it's valid that this shows that some aspect of gravity has been
hard-wired into humans. But a drop sensor as part of the learning
machine's critic which is wired as a negative reinforcer is very different
from a hard-wired sense of how gravity affects the things around us (like
how we learn to expect anything we are holding to drop to the ground when
we let go of it), or how we learn to expect that pushing down on the ground
with a leg will cause our body to rise, but that we will still stay in
contact with the ground.
The other interesting aspect of your example I think is the "fear
response". Humans seem to have hard-wired external responses connected to
the internal state of their learning system. All humans smile, and laugh,
and cry. This is not something you can easily explain as a learned
behavior even though these behaviors are to some extent under our voluntary
control. But instead, you can explain it as hard wired external indicators
of the internal state of the learning system. That is, evolution seems to
have found justification to make the state of the learning system visible
externally - just like we used to put lights on the control panel of a
computer to allow some of its internal states to be visible externally.
The fact that a new born baby would show a fear response, I believe is just
an indication that the learning system is predicting future pain. I think
the fear response is just an external indication of the internal operation
of the learning system. When the learning system predicts danger, it's
automatically displayed externally, so that others in the group can
instantly sense that there is danger in the area.
Do babies show any sign of hard-wired appropriate response to a drop? Such
as reaching and grabbing in an attempt to grab a branch to stop the drop?
Or do they just show fear, and still have to learn that grabbing things is
a good way to stop a drop?
I believe that chimps do have an innate ability to grab and hold their
weight basically at birth but I don't think humans have that ability
anymore. I think this is just one more example of how evolution replaced
innate behaviors with innate motivations driving a strong learning system.
According to my observations (three children), they do it by observation.
When very young, a child is apt to flail sometimes when excited, and when
hit in the face by one of its own arms, to look startled and perhaps cry,
in a normal and otherwise-observed startle reflex. I've seen this at least
twice quite clearly. They have no idea the arm belongs to them.
Later, there is a definite observable time when they start to watch their
hand while concentrating on opening and closing it - they've just realized
they have control of the thing. They practice this control by watching it,
i.e. hand-eye coordination begins.
As far as standing and walking, my oldest was quite unusual - he free-stood
off the floor (before ever climbing up on furniture to any degree) at 5 1/2
months. Within a few days, he was clearly trying to work out how to get one
leg off the ground to move it without falling down - a step. IOW he wanted
to walk, and knew what he needed to do to achieve it. The first step took
however 2 1/2 months - his feet were just too far apart and his legs didn't
have the muscles until he was 8 months old. Within 4 weeks, he was running!
We spent most of the next decade running after him :-). Anyhow, this was just
to point out that walking was clearly an intellectual achievement before it
was a physical one, and I'm guessing that standing was the same.
I think this is a very wonderful bit of observation!
My thinking on the subject is similar. Children experiment to find
what they can control and what they cannot. I've heard discussions
about them discovering their toes, for instance.
A quick Google search shows:
"Watching our children discover new things is one of the great joys of
parenting. First, they discover their toes. Then come bubbles and
butterflies, books, bicycles and banana splits."
But one of the things I think they discover first, is their "thoughts"
are somehow connected to getting what they want.
Want to see a grown adult make a baby face? Tell them to cause
something "telekinetically". "Hey Joe, try to flip that wall switch
over there without touching it." Watch him close his eyes and scrunch
up his face. Maybe reach out in empty space, and make a groping
motion. (If you need a visual, in short, "Do the Yoda".) Basically all
the same moves as a baby trying to will a meal into its mouth.
How does this fit with robotics? We're still trying to move things
with our minds. Programming robotics may be a latent tendency to try to
control things with our "thoughts", back like when we thought we
first figured it out.
Randy M. Dumse
Caution: Objects in mirror are more confused than they appear.
It's sure a shame that we don't have better tools to monitor what a brain
is doing so we could get a better understanding of what type of thoughts
develop in babies and when. There are just all sorts of interesting
questions we have no way to answer because of our lack of ability to
perform high resolution brain monitoring.
The example of a baby understanding he needs to move his legs before he
learns to do it doesn't have to be a very complex thought, but it would be
interesting to know just what type of thought a baby actually has about
this. Is he able to visualize his leg moving as if he were walking before he
has ever walked? I suspect not.
I suspect the type of thoughts the baby is having is not even as complex as
visualizing himself walking. I suspect it's working at a much simpler level
at that point of his development. I suspect at that point he's developed
recognition circuits in his brain which are able to recognize legs, and
recognize simple walking actions because he's seen it in other humans. I
also suspect that other humans have become secondary reinforcers for the
baby because they are predictors of "good things to come". When he stands,
or moves his legs correctly, he is rewarded by these secondary reinforcers
simply because he has created in his environment the thing which acts as a
reward to him (a human walking). The reward only works when he actually
sees the legs moving in the right way, so this is why he will stare at his
hands, or legs, as he moves them.
In other words, I think this sort of behavior in babies is probably best
explained as simply mimicking behavior driven by secondary reinforcers.
The baby is attempting to reproduce the things he has learned to love (the
sight of a legs moving in a walking pattern).
I don't think the baby would yet have any sort of concept that walking will
allow him to get something he wants. So I don't think he's having any sort
of thoughts that could be described as wanting to learn to walk so he can
move around the room faster. He would by that point have developed the
desire to "get things" and he would have learned to use his arms and hands
to reach out and grab something, and learn to crawl to get things, but the
thought that walking might be better than crawling I think would be beyond
what he could understand. I think the attempts to move his leg are just
attempts to mimic something he likes. That's my best guess as to what
level of "thoughts" the baby is dealing with at that point. Basically I
would say the thoughts translate to, "oh, that was cool, it looked just
like mom, lets try to make that happen again". :)
On Sep 2, 1:32 pm, email@example.com (Curt Welch) wrote:
Actually, I would encourage you to research some child development
texts. Turns out there are real fears associated with height built
into human babies, even from visual clues with no experience
(therefore unlearned). One experiment I remember is the precipice
detection. They set a baby of crawling age on a surface of
glass, continuous everywhere, so it can't fall. Then underneath there
is a tile pattern, one up against the glass and another well below the
glass. The baby will not cross the threshold from the close pattern to
the distant pattern, even though there is no possibility of falling.
I like clever setups like that, which give us insight into the mind.
I would love to be able to create other similar scientific tests,
which speak to the unconscious, even in adults. Very fascinating.
Ah ha. Found something important to this discussion. Google on the Moro
reflex.
"One of the more interesting observations in a newborn is the way they
seem to be scared of falling for no apparent reason. It seems there is
no justification for a baby to have such a fear, especially
considering that they've likely not experienced falling in the first
place. But Mother Nature equips human beings with an amazing set of
reflexes designed to protect us from all manner of possibilities.
"This odd fear of falling is known as the Moro reflex (also known as
the 'startle reflex'). It is present in newborns but usually
disappears within a few months. At birth, the pediatrician will test
the baby for this reflex by laying her down on her back and removing
contact with her. She is expected to throw her arms and legs out and
extend her head in fear."
Which supports the idea of them having many reflexes built-in.
The Rooting Reflex is another interesting one, and its goals toward
survival are essential. Again, I suggest this one is also
unconsciously expressed many different ways in adult life. Freud's
"Sometimes a cigar is just a cigar" is easily decoded if one suggests
instead, sometimes a cigar is just an expression of rooting. (Original
idea of mine, btw.)
More see: http://en.wikipedia.org/wiki/Primitive_reflexes
I really disagree.
I don't think the baby considers itself a human being yet. There is
it, and there is everything else in the world. This division starts as
soon as it discovers its toes. It figures out what it can wiggle and
what it cannot. What it can wiggle or feel is it. What it cannot, is
not.
I would doubt that. I think walking is a much more personal act. I'd
suggest the mental process might be how do I keep this higher
viewpoint (visually rewarding) and "up", and not wind up smacking my
butt/face/etc. on the floor?
Disagree as above. The visual effect is already something they want.
Effort has delivered, and they wish to keep their "up" status.
Above in the thread, the automatic bias against down has been
identified. I would suggest that there is an equivalent desire for up
built into us. You see this displayed when babies reach up both hands
and open and close their fists, the universal language for "Pick me
up!"
So an elevated status is also something I think is a genetic
predilection. I think this was manifest later in man's reaching toward
climbing mountains, taking to the skies in planes, and to the stars in
rockets.
Agreed. Speed is not the motivation. Neither is mimicry.
Randy M. Dumse
Caution: Objects in mirror are more confused than they appear
Yeah, I'm sure human babies have a lot more innate abilities that I don't
know about but which are well documented in the research.
But I'm not really interested in the fact that babies are born with
simplistic innate behaviors if they are not needed to understand our
general powers of intelligent behavior. For example, if we could remove or
disable the mechanism that creates the Moro reflex from a human baby, do
you suspect it would prevent them from developing normal human
intelligence? I don't believe it would. I think you could disable all
these innate behaviors, and still end up with an intelligent human. The
only thing you couldn't disable is its innate ability to learn, and you
would have to leave some innate motivations to guide the learning.
In other words, what do I have to build into a robot, to make it perform
any task we can train a human to perform? That's my real interest. I want
to build advanced machines; I don't really care about understanding humans
beyond the level of what I need to know to duplicate our high-level powers.
I'm sure many of these innate behaviors do have a real effect on the
personality, and habits, we develop in life. But I don't believe they are
the foundation of why we are intelligent (why we are able to do the things
we do that no machine has yet been able to do). That's why I mostly ignore
them.
If we figure out how to implement strong machine learning to duplicate
human learning skills, I think we will have machines that everyone will
agree are as intelligent as humans. But what we probably won't have, are
machines with human-like personalities. They will be more human like than
anything we have now, but still not close enough to be mistaken for a human
under testing. To make one of these learning systems develop a fairly
human-like personality, we may have to give it many of the same odd
motivational systems, and odd little innate behaviors, as well as a
learning brain with structure and powers very closely matched to the human
brain. I suspect getting that close enough to have human-like
personalities emerge from the machine will be quite hard. Though that is
work we will want to do, it's not work I care all that much about
personally because I'm a lot more interested in figuring out how to build
smart machines than I am in building things that act like humans.
I agree. Creating a self-image model that separates himself from the rest
of the world I suspect starts even before birth because of the skin sensors
which create the prime separation between us, and not-us. If you bite your
thumb, you get a very different set of sensations than when you bite
something that is not part of your body. Those differences are all part of
what the brain starts to learn about the basic nature of the world being
divided into that prime distinction. Even before learning you can wiggle
your toes, this image of self vs not self is starting to form because of
the skin sensors. When the eyes start to develop I'm sure it only adds
more detail to the me vs not-me view of the world.
However, what I was talking about, was a huge step away from a baby
thinking it's a human. Developing a simple pattern recognition circuit to
recognize a leg and recognize a walking-like action is just the ability of
the baby to recognize that his own leg is similar to a leg of a human.
It's no different than developing the power to recognize that a finger
looks more like a worm than it looks like a rock. It's just the first steps
of basic image classification developing in the brain.
Yeah, I think there is probably a reward in standing effect at work in
there as well which motivates the baby to keep standing and not fall down.
Yeah, that can explain the learning to stand. Most babies have already
learned to pull up to a standing position in a crib or with other objects
before they develop the ability to stand without holding. The experience
of standing after pulling up, and being able to see things, and reach
things, they could not get without standing has already created motivations
to want to be upright at times.
But I was talking about the idea of what type of thoughts the baby might
have when we see it trying to take steps.
Is the motivation to stay upright, while trying to use the leg motions that
worked for crawling, what causes it to make those first awkward
attempts at a step? But does the baby look at its leg while trying to do
this? If so, I think we need to add more complexity to the idea of what
might be happening to explain why it looks at its leg while doing this.
If it's not normal for the baby to look at its leg as if it were trying to
make it take a step, then maybe the simpler ideas of trying to stay
upright combined with trying to crawl forward are what lead to the awkward
attempts at the first steps.
Sounds like a stretch to me. You don't need to assume a genetic bias of
"upness" to explain why we like to be picked up. When babies are picked
up, they get rewards like being fed, and they get tactile sensations of
warmth. You don't often get to suck on a tit when you are lying on the
ground. I suspect there's all sorts of potential motivations being
satisfied when a baby is held and fed, but I don't think there's any need
for some sort of "upness sensor".
When babies learn to pull up, there are all sorts of potential rewards that
we can assume exist to explain why they would do this without the need for
a height reward. A baby is probably more likely to be picked up when it's
standing in its crib than when it's lying down. If the baby likes being
held for reasons other than height, this could condition in them a desire
to be "up" simply because he likes being held and he's always up higher
when he is held.
As for why we like to climb mountains or get higher, I suspect there's
plenty of rewards associated with those actions that don't require a
genetic bias to explain. You can see further by being higher, which means
you can sense more about your environment - this allows us to quickly find
the things we want, and helps us to avoid the things we don't want. That
alone is a simple answer about why we have lots of interest in "getting
higher" without having to assume a special genetic motivational bias is at
work.
On Sep 2, 7:37 pm, firstname.lastname@example.org (Curt Welch) wrote:
I understand you are not interested, but I am suggesting perhaps you
should be.
I know you have a bias to find a way from tabula rasa to learning. I
don't think intelligence works that way. I think intelligence has to
have a pretty strong base of hardwired knowledge, before any thing
made of soft learning can take hold. For example, you can't write an
article for the Encyclopaedia Britannica without an innate ability to use
language, let alone many other fundamentals, such as an alphabet of
some kind, a grammar of some kind, and so on. Some will be innate,
some learned. I don't think they can all be learned and none innate.
Actually, the reason these reflexes are known/important is exactly
what you doubt. Doctors check these when babies are born to see if
they are present. If not it is an indication of a defective human
being, who will have developmental problems.
I hear you. But. Baby - bathwater.
We have one example of advanced intelligence. I am suggesting we see
how it comes to be different from other life forms and use it as a
guide to achieving intelligence.
Seems crawling is a hardwired reflex. It was listed as one that babies
display at birth if put on their stomachs. They may not be strong
enough, but held up, the legs and arms go.
Also the step reflex is innate. It can be detected just after birth.
It's not a full walk, only the tendency to move one leg forward when
both are touched by a flat surface.
To the best of my knowledge, babies do not look at their feet when they
take their first steps.
True, but there is such a preponderance of the bias for up-ness once
you start looking for it, it shows up everywhere. Even in every day
expressions. For example, take the phrase: higher power. Why not
deeper power? wider power? longer power? etc. Also why do you know
which is better between a lower power and a higher power? There's a
bias suffusing the word usage.
Randy M. Dumse
Caution: Objects in mirror are more confused than they appear
There's great confusion about the concept of tabula rasa. Many would argue
that learning without innate ability is impossible. That's just their
inability to understand that all learning systems have an innate ability to
learn. They choose to look at the system's innate ability to learn, as if
it were pre-wired knowledge, and then declare it's not tabula rasa learning
since the pre-wired ability to learn means it's not a blank slate. It's a
stupid game of semantics.
All learning systems are tabula rasa learning because they all start with
the innate ability to learn, and then they add to that the knowledge
which they learn.
Tabula rasa learning is not something I need to "find". Every learning
system I've built in the past 25 years was already a tabula rasa learning
system.
What I need to uncover, is the type of learning system needed to duplicate
human level learning ability.
Exactly. A learning system that doesn't have the innate ability to learn
to use a language, will never learn to use a language. The problem here is
to uncover what type of learning hardware is needed for the class of
behaviors we call language use.
If I want to build a robot that can learn to navigate a maze, it has to
have the innate ability to move through a maze. If it's not born with the
innate ability to move, it won't be able to learn a maze. But to learn a
path through a maze, it needs more than the ability to move. It needs at a
minimum, the ability to sense when it's solved the maze correctly, and it's
likely going to need sensors to help it perform navigation tasks. At the
same time, it's going to need some systems that allow it to learn from
experience so that after lots of time moving around the maze, it has to be
able to use that experience to direct future actions, so that hopefully, it
will be able to travel from the start to the end without making any wrong turns.
Even with all that pre-wired knowledge about movement, and sensing maze
walls, and location in a 2D maze space, the machine is still a tabula rasa
learning system because when it starts, it knows nothing about the maze
it's trying to learn. When it comes to the first turn, it knows nothing
about whether turning left, or turning right, is likely to be the right choice.
All learning systems work like that. They have innate ability to learn
something, but when they start, they have not yet learned anything.
So, when trying to build a learning system, we must always answer the
question of what type of hardware does the learning system start with -
what are the basic primitives which must be built in as innate ability, and
what must it learn? The maze learning robot will likely have basic
primitives of moving forward, or turning, and the ability to sense walls
blocking its motion, and/or sense when an attempt to move or turn has
failed because it hit a wall, and it will have the power to learn a
sequence of these sorts of basic primitives.
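The maze robot described above can be sketched concretely. Everything below is a made-up illustration (the maze layout, the reward signal, and the learning constants are my assumptions, not from any real project): the robot starts with four innate move primitives, wall sensing, goal sensing, and an empty value table, and fills in that table purely by trial and error.

```python
import random

# Hypothetical 5x5 grid maze: 0 = open, 1 = wall. Start top-left, goal
# bottom-right. Sensing a failed move (hitting a wall) = staying put.
MAZE = [[0, 0, 1, 0, 0],
        [1, 0, 1, 0, 1],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # innate primitives: N, S, W, E

def step(pos, a):
    """Apply an innate move primitive; a blocked move leaves us in place."""
    r, c = pos[0] + a[0], pos[1] + a[1]
    if 0 <= r < 5 and 0 <= c < 5 and MAZE[r][c] == 0:
        return (r, c)
    return pos

rng = random.Random(0)
Q = {}   # learned value estimates: starts empty (the "blank" part)

for episode in range(500):
    pos = START
    for t in range(200):
        qs = [Q.get((pos, i), 0.0) for i in range(4)]
        if rng.random() < 0.2:                 # keep trying alternatives
            a = rng.randrange(4)
        else:                                  # bias toward what has worked
            best = max(qs)
            a = rng.choice([i for i in range(4) if qs[i] == best])
        nxt = step(pos, ACTIONS[a])
        reward = 1.0 if nxt == GOAL else 0.0   # it senses when it solved the maze
        best_next = max(Q.get((nxt, i), 0.0) for i in range(4))
        Q[(pos, a)] = qs[a] + 0.5 * (reward + 0.9 * best_next - qs[a])
        pos = nxt
        if pos == GOAL:
            break
```

The `Q` table is empty at the start - that is the blank slate - while the move primitives, the wall sensing, and the goal detection are the innate machinery the learning rides on.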
But what type of basic primitives do we need to build into a machine to
produce human level behaviors? And what does the system learn?
You seem to be suggesting that we might need a basic primitive such as the
ability to crawl (or at least some basic crawl-like motions which will
later be refined through learning into a productive action). It can
certainly be done that way. For example, we can build a robot with legs,
and write code that makes it move the legs in some fixed pattern that makes
it walk. But our hand-coded walking gait isn't all that great: it makes
the robot move forward, but the legs slip and slide, and on a
rough surface there is much grinding of the gears in the servos because the
gait wastes energy pushing the legs in unproductive
directions. So, on top of this innate walking gait, we could add a
learning system to attempt to make small changes to the sequence to
optimize the gait to be more productive. But, how do we do this? How does
the learning system know when it tries some change to the gait, if the
change made the gait better or worse? We have to create some system to
measure success, and use that measure, to guide the learning process.
Learning after all is just a change in function. But random change is of
no use if we don't have a system of selection which has the power to
evaluate success. Something has to be assigning value to the changes so
that the purpose of learning is to actually make an improvement in
behavior. The entire concept of "improvement" implies you have the ability
to assign value to different behaviors.
So, for the robot, we have to add some sort of measure of "better". For a
walking gait, we could use speed as a measure. Or we could attempt to
minimize current draw for a given speed by trying to maximize speed per unit of current drawn.
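A minimal sketch of that loop, with a toy stand-in for the robot (the `simulate_speed` fitness function is entirely invented here - it just rewards smooth transitions between leg phases): perturb the gait, measure, and keep the change only when the value function improves.

```python
import random

def simulate_speed(gait):
    # Invented fitness: a cycle of leg phases scores higher when successive
    # phases change smoothly; jerky jumps model wasted, gear-grinding effort.
    return sum(1.0 - abs(b - a) for a, b in zip(gait, gait[1:] + gait[:1]))

def optimize(gait, iters=2000, seed=1):
    """Hill climbing: perturb one phase at a time, keeping a change only
    if the measured speed (the value function) improves."""
    rng = random.Random(seed)
    gait = list(gait)
    best = simulate_speed(gait)
    for _ in range(iters):
        i = rng.randrange(len(gait))
        old = gait[i]
        gait[i] = min(1.0, max(0.0, old + rng.uniform(-0.1, 0.1)))
        score = simulate_speed(gait)
        if score > best:
            best = score          # keep the improvement
        else:
            gait[i] = old         # revert: the change made the gait worse
    return gait, best

# start from a crude pre-wired alternating gait and let learning refine it
rough = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
tuned, speed = optimize(rough)
```

The measure of "better" is the whole story here: swap in a different value function and the same loop learns a different gait.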
Nonetheless, once you have motivation defined (the value function that
the learning system is trying to maximize) and the primitives that define
the search space of the learning function (leg motions), then what's the
point of having a pre-wired gait in the system to start with? If you have
a good learning algorithm, why not let it start with no pre-wired motion
sequences and let it learn to move on its own? The goal of moving forward
as fast as possible is the same either way.
The only advantage of starting it with a pre-wired gait sequence, is to
reduce the amount of time it takes to find the optimal gait. Now,
depending on the quality of the learning system, this time can be
significant because as the size of the search space grows, the amount of
time to search it with a learning algorithm can grow exponentially. So
where as optimizing a bad gait might take hours, learning a gait from
scratch might take months or even years, depending on the quality of the learning system.
In the case of living organisms, learning time is an important survival
factor. They might die before they learn to walk. So it's not surprising
that animals are born with lots of pre-wired behaviors, or that humans are
born with various amounts of pre-wired behaviors.
But for the most part, the pre-wired behaviors just aren't interesting, or
important. Do you really think that the fact that a baby might start with
a few simplistic behaviors like a drop reflex or basic crawl motions is the
key to how humans were able to invent language and invent space ships to
fly us to the moon? Those low level initial behaviors are there only to
help us stay alive after birth. Our AI doesn't need help staying alive.
We will keep it alive as it learns. If it can learn to build space ships
and solve problems like AI, the robot is surely also going to be smart
enough to learn to crawl without crawling being pre-wired into the machine.
Some low level behaviors, which act as the starting point of the learning
system, have to be there. But at the level I approach the problem from
those starting behaviors are extremely low level and simple - pulse sorting
decisions in a generic signal processing network.
That has nothing to do with what I doubt. If a human baby is born without
a normal reflex, the odds are it's got problems far worse than simply not
having that one reflex - it's probably got serious defects in the CNS.
They don't test for the reflex because they believe the reflex is
important. They test for it because they believe a lack of the reflex is a
good indicator of much worse problems.
There's nothing wrong with that approach. But in my study of learning
systems I figured out many things which allow me to make educated guesses
way beyond the need to duplicate every little biological wart humans have
in order to create intelligence.
These insignificant behaviors humans start with at birth are nothing
compared to the enormous set of interesting behaviors we find in an adult. We
can walk, and drink, and catch a ball, and cook food, and read a text book,
and put together a book shelf, and design space ships, and program
computers, and tie our shoes. There are billions and billions of different
behaviors a normal human can perform, and every one of them was learned,
not built into our hardware at birth.
What is built into our hardware at birth, is the ability of the learning
part of the brain to receive sensor data from many different sensors, and
control the motion of our arms and legs through its output signals.
Everything in between is circuits that get configured after
birth to allow us to do all the things we do. How those circuits get
configured by the learning systems in our brain is the key to how we become
intelligent through a lifetime of interacting with a complex environment.
The key to solving this problem is to understand what type of circuits the
learning part of the brain is made up of, and what type of learning
algorithms are at work shaping those circuits.
Interesting. But again, not very important for solving the problem of how
we learn to design and build space ships.
Yes, I agree. There's a clear bias. Also, if you notice, we only elect
tall presidents. Tall people naturally get more respect. Where does that
bias come from? I think it comes from the fact that kids are small and
parents are tall. We spend our childhood learning that the tall people
have all the power and the short people have none. I think that bias
sticks with us for life. Even after growing up, we learn that tall people
generally have the edge on shorter people for any physical conflict.
In addition, we live in a world of gravity. Height means higher potential
energy which translates to real power. This translates to the high ground
being an advantage in any battle. Just the simple act of being knocked to
the ground means you are at a real disadvantage in a battle because it
gives the other guy the high ground - he can hit you a lot harder by
throwing a punch or swinging a club with gravity than you can hit him by swinging
your arm against gravity. The high ground is power because of gravity.
We see this "high-ground" effect translated into all sorts of human
behaviors. We bow to a person who we want to show respect for - giving
them the high ground. We try to stand tall to intimidate someone and show
them our power. When we draw an org chart, the guy in the company with the
most power is put at the top of the chart. The guy with the most power
gets the highest location in the building (the top floor). And of course,
the guy with the most power (God) is placed in the sky above us all.
Between the effects of gravity and the conditioning we get as kids to
respect height, I don't think you need anything else to understand why
there's a clear bias of height being associated with power in humans.
There could still be some built in innate feature of humans that make us
associate height with power. But again, I don't see that it's needed -
there are plenty of things from the environment that explain it. The only
innate power we need to put into our robots is strong learning. 99.9% of
everything we see humans do that we label as intelligent, is a learned
behavior - they didn't have it at birth. The one innate thing they have at
birth that makes humans intelligent, is the ability to learn all this
complex behavior in only a few decades of training.
I'm not interested in the human "warts" because I know the number one thing
that's missing in our robots right now is strong learning. It's very easy
to program a little startle reaction into a robot and make it look very
human-like. But doing that won't give the robot the power to figure out,
on its own, how to get to the moon, like humans did. To do that, we have
to figure out how to add strong, generic, real time, learning to our
robots. Once we get a handle on the learning problem, then we can look at
the little warts to see what might be needed to make our robots act even
more like humans.
On Sep 3, 10:02 pm, email@example.com (Curt Welch) wrote:
I hear your argument.
What I think I'm suggesting, however, is in order to know how to
categorize and add new learning to the human-body-of-knowledge,
perhaps the human-body-of-knowledge needs to start out with a few
already installed items.
Again, imagine something like a Wikipedia, where it starts with no
articles whatsoever. Would it be possible to bootstrap from zero, or
do there need to be a certain number of articles already installed to
serve as examples? Likewise, in order to learn, does the human mind
need some hardwired examples to serve as a basis for adding more
knowledge. I certainly don't know. But I do suspect it to be the case,
based on what I have observed from nature.
Randy M. Dumse
Caution: Objects in mirror are more confused than they appear.
Yeah, it has to have something hard-wired at the start. The basic idea of
reinforcement learning is that there has to be a behavior to reinforce. If
a baby (or robot) was completely motionless, it could never learn anything.
It would stay completely motionless forever.
The interesting question however is what form does that initial behavior
take? What does it start with?
Obviously, Wikipedia started from zero. I think your example is a poor
one, but your point is valid.
Well, if instead of trying to understand how humans do it, you simply look
at the generic problem of how a machine can learn from reinforcement
learning (aka operant conditioning, aka trial and error learning), you can
get a good grasp on the fundamental nature of the problem that evolution
has solved in its design of a human. I've spent a lot of time looking at
the problem from this perspective, and that experience is what causes me to
look at what humans do and write off some of their innate behaviors as not
significant to the harder problem of learning in general.
So to start, the machine has to have the power to do something, or else
there is nothing to learn. So it has to have some ability to perform
something we label as behavior (move, or blink some lights, or make a
decision, etc). It has to be able to make a decision (stand still, move
forward, turn right, etc).
But, when it starts, we assume it's got no knowledge about which decision
is better. So it doesn't know if turning right is better or worse than
standing still for example. That's the blank part of the blank slate. But
in order to learn from its mistakes, it has to actually try something. If
its initial technique was simply to pick one behavior (turn right) and
stick with it until it's proven wrong, then it may never learn anything
else. If the only reward or punishment in the environment happened when
the robot found some food, and constantly turning right prevented it from
ever finding food, then it would never learn that there is a behavior
better than turning right. It would do nothing but turn right.
Basically, whatever the low level decisions the system is making, it must
try different combinations of the decisions to see which work better.
Given enough time, we want to make sure the system will try all possible combinations.
The general approach then, is to try different combinations of decisions
(basically randomly), and learn from experience which combinations seem to
produce better results.
When a behavior (sequential combination of low level decisions) produces
better results, the system will then bias its selection of behaviors so
that the ones which seem to have worked better in the past, will get used
more in the future. Notice however that I say "bias" here. This is
because you never want to stop trying alternatives. Just because turning
right has worked the best in the past, the system can never assume that
turning right will always work better in the future. It needs to keep
trying to turn left at times. But the more times it tests turning right
and left, and the more times it finds that turning right is better, the
less often it should try the "turn left" test in the future. But it should
never stop trying the turn left test.
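That bias-but-never-stop-testing rule is what the reinforcement learning literature calls epsilon-greedy selection. A toy sketch (the two actions and their hidden reward probabilities are invented for illustration):

```python
import random

# The learner never sees these; they only shape the rewards it receives.
TRUE_REWARD = {"right": 0.8, "left": 0.2}   # invented probabilities

def pull(action, rng):
    """Environment: reward 1 with the action's hidden success probability."""
    return 1.0 if rng.random() < TRUE_REWARD[action] else 0.0

rng = random.Random(42)
counts = {"right": 0, "left": 0}
means = {"right": 0.0, "left": 0.0}
for t in range(5000):
    if rng.random() < 0.1:                   # the never-ending "turn left test"
        action = rng.choice(["right", "left"])
    else:                                    # bias toward what has worked
        action = max(means, key=means.get)
    r = pull(action, rng)
    counts[action] += 1
    means[action] += (r - means[action]) / counts[action]   # running average
```

After the run, the counts show the bias: turning right gets chosen far more often, but the left test is never abandoned, so a change in the environment could still be detected.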
Those are the basic requirements of making trial and error learning work.
You have to start by deciding at what level the system will be learning
behaviors, and then you have to create a system that will keep trying
different alternatives. It will then bias its selection of alternatives,
based on statistical knowledge gained. This part of the problem is well
understood. The complexity is all in the implementation details.
Now, for something like a robot, you could pre-program 100 different basic
low level behaviors into it. You could have a low level behavior for "move
1 inch forward". And another behavior for "move 2 inches forward", and
another one for "turn right 10 degrees", and other for "sit down". You
could build 100 such behaviors into the machine, and then make the learning
work at the level of selecting which of these behaviors to perform next.
Or, you could make learning work at a lower level. If the robot was the
type with two wheels, then you could program only 4 basic low level
behaviors into the system, which would be: turn the right wheel 1 degree
forward, or 1 degree backward; or turn the left wheel 1 degree forward,
or 1 degree backward. The learning system could then attempt to do all
its learning using only those 4 commands. So those 4 commands are built in
(innate), but when they are used, and in what sequence, is all learned.
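A sketch of what those four innate commands might look like, using standard differential-drive geometry (the wheel radius and wheel base are assumed numbers, and the whole robot is simulated): a learned behavior is then nothing more than a sequence of these primitives.

```python
import math

# The four innate primitives: turn each wheel 1 degree forward or backward.
PRIMITIVES = {
    "L+": (+1, 0), "L-": (-1, 0),   # left wheel +/- 1 degree
    "R+": (0, +1), "R-": (0, -1),   # right wheel +/- 1 degree
}

WHEEL_RADIUS = 3.0   # cm (assumed)
WHEEL_BASE = 10.0    # cm between the wheels (assumed)

def apply(pose, name):
    """Standard differential-drive update for one primitive command."""
    x, y, heading = pose
    dl, dr = PRIMITIVES[name]
    sl = math.radians(dl) * WHEEL_RADIUS   # ground distance, left wheel
    sr = math.radians(dr) * WHEEL_RADIUS   # ground distance, right wheel
    heading += (sr - sl) / WHEEL_BASE      # wheels moving unequally = turning
    forward = (sl + sr) / 2.0
    x += forward * math.cos(heading)
    y += forward * math.sin(heading)
    return (x, y, heading)

# a "learned behavior" is just a sequence of the innate commands:
pose = (0.0, 0.0, 0.0)
for cmd in ["L+", "R+"] * 100:   # drive both wheels forward -> go straight
    pose = apply(pose, cmd)
```

Alternating the two forward primitives drives the robot roughly straight; any other learned sequence of the same four commands produces turns and arcs.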
All learning systems must have innate built-in behaviors at the lowest
level, and what it learns is then a problem of making a decision about
which innate behavior to use next.
Humans (and even simple robots) however don't just produce a string of
behaviors. They are reaction machines. We have sensors and we can select
behaviors as a function of the sensor data. So the learning problem is a
bit more complex. We can produce a fixed string of behaviors mostly
independent of the sensor data, or we can produce behaviors as simple
direct reactions to sensor data. This creates a lot of dimensions to be
learned at the same time. Finding ways to structure the learning problem
to reduce the complexity of what must be learned is key to making it work.
For making a robot act like a human, I think the learning has to take place
at a very primitive level. Human behavior is fine-tuned at a very low
level by learning. It has to take place at a level of learning near to
the level of sending a single pulse to a muscle. In order to fine tune our
motion to learn to catch a ball, or hit a running rabbit with a rock, or
spear a fish, learning must be happening at a very low level of controlling
the timing of individual pulses.
If you have a learning machine, which is adjusting behavior at the level of
single pulses, then hard-wiring into the machine something high level like
crawling (which requires the sequencing of thousands of pulses) is just
redundant to what it's trying to learn. Typically, I would see it working
as pre-coded learned knowledge. That is, instead of starting with
no preference toward any behavior, it would be built with a starting
preference for the sequence of motions needed to create a crawling type of
behavior. But because it's all part of what is learned, it would then
still have the ability to learn something different - to learn to never
crawl. So though starting it with pre-learned behaviors like that might be
useful in reducing the amount of time it would take to learn to crawl, it's
not required. What's required, is that there must be low level behaviors
hard-wired which are the foundation of all behaviors. But those behaviors
will always be the most primitive behaviors you want the system to use for
performing all behaviors - like turning a wheel, or, when using pulse-type
signaling systems, the ability to produce a single pulse. That's
the type of innate behavior you have to build into the system before it can
learn. Any innate knowledge you give it above the primitive behaviors is
just a hint to the learning system to allow it to learn important skills faster.