Sensors and sensory processing are part and parcel of the same thing.
Sensor integration comes after that.
Yes, this is exactly the same idea I had 2 years or so ago [never
implemented !!] regarding teaching my walkers how to walk. A simple sonar
"critic" feeds into a learning algorithm to evolve the gaits. E.g., the
gaits on my walkers are generated by lookup tables of timed servo
movements. It would be fairly easy to have a higher-level program patch
new values into the lookup tables, based on some genetic algorithm or
other learning technique, in order to modify the servo movements. The
critic to guide the GA/learner would be the sonar input.
For instance, to learn how to stand up, you point the sonar towards the
ceiling, and set up the learning algorithm to fiddle the lookup table
values such that the desired goal is to reduce the distance to the
ceiling. Quite simple, really. Similar protocol for learning to walk,
once the bot has stood up successfully. This time, point the sonar
towards the wall ahead, and tell the learning algorithm the goal is to
reduce the distance, once again. With 2 sonars, one pointed upwards and
the other ahead, you could get the bot to learn to walk forwards while
keeping the top of the bot fairly level at the same time. On and on.
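That stand-up protocol is simple enough to sketch as a hill-climbing loop. This is only an illustration: the table size, the step size, and especially the simulated `sonar_to_ceiling` function are invented stand-ins, not anything from a real walker.

```python
import random

# Hypothetical stand-up learner.  The gait is a lookup table of servo
# values; the "critic" is a sonar pointed at the ceiling.  The function
# sonar_to_ceiling is a stand-in for the real sensor: here it just
# pretends the bot gets taller as the pose approaches an invented ideal.

random.seed(0)

def sonar_to_ceiling(table):
    ideal = [90] * len(table)                 # invented "standing" pose
    return sum(abs(a - b) for a, b in zip(table, ideal))

def learn_to_stand(table, steps=20000):
    best = sonar_to_ceiling(table)
    for _ in range(steps):
        i = random.randrange(len(table))      # pick one lookup entry
        old = table[i]
        table[i] = old + random.choice((-5, 5))   # patch a new value in
        score = sonar_to_ceiling(table)
        if score < best:                      # critic: got closer to ceiling
            best = score                      # so keep the patch
        else:
            table[i] = old                    # otherwise revert it
    return table, best

table, dist = learn_to_stand([random.randrange(180) for _ in range(12)])
```

The same loop, with the sonar re-aimed and the goal re-stated, covers the walking case too.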
No. This is just the basics we roboticists deal with every day.
It might well make use of some short-term memory to keep it directed at
the proper goal, ie original light source, and not get distracted by
extraneous light sources that come along [e.g., light coming through the
doorway it's passing by]. If it's too dumb, ie purely reactive, it'll
turn and go through the door, instead of following up on its original
goal. Here is the death knell of too-simplistic BBR.
I know from experience, because I implemented simple photovore behavior
in my hexapod, and it turns cutely and tracks light very nicely [thank
you], but it simply homes in on any "local maximum" it encounters, and
forgets which one specifically it was previously tracking.
Well, this is the challenge for small bots. Current computer vision
systems work best with supercomputer processing power to tap into. But
this thread is about directions for research.
Yes, exactly. But I see that perceptual mechanism as the same mechanism
needed to extract useful behavior from multiple sensory signals.
I don't really understand what you are thinking when you write that. What
do you mean by better? Are you talking cheaper sensors? Lower power?
Smaller? What exactly is wrong with any of the sensors we have now? I
don't see why we have a problem in that regard. The only issue is that
they cost too much for hobby work - both the high quality sensors and the
processing power to deal with the data. We have better sensors in terms of
what they can sense than just about all the sensors in animals (except maybe
chemical sensors). So I don't get what your point is.
Or are you talking about the processing of the sensor data into an
easier-to-use form when you say better sensors?
I believe that processing raw sensor data into a better form and sensory
integration are one and the same problem. For example, an eye
can be looked at as a million light sensors. To process that data into a
signal that represents "I see a cat", is a sensory data integration problem
to me. Automatic correlation of any two sensory signals should work the
same basic way whether it's two light sensors which are part of a bigger
eye or light sensor data and sonar distance data being integrated. It's
the same problem of correlating temporal signals and extracting the useful
information either way.
If you had the type of generic learning system I talk about working, then
there's no reason you wouldn't layer it on top of hard-coded behaviors
which you knew were a good starting point for what you wanted the bot to
do. The dynamic learning system would simply be configured to adjust and
override the instinctive behaviors as needed (and there might be some it
couldn't override). This would both reduce the amount of work the learning
system had to do as well as decrease the time it takes for the system to
become good at some task. There is no end to the engineering options
available to optimize the design to fit the needs of the application. What
I feel we are most missing is that strong generic learning that can be
added at any point of a design as needed. Hard coding behaviors is
straightforward coding that any good programmer can do. Knowing what
behaviors to hard code is the harder part. The point of the learning
system is to do the real time testing to discover the behaviors that work
best. But that can only work if you can produce an automated test for
success (aka the critic). But many times, you can easily automate the test
for success and that's where dynamic learning systems would be very useful.
You write that it would be fairly easy to do what you suggest
above, so why haven't you done it, Dan?
Of course the "critic" has to evolve as well in response to
the higher critic, reproductive success.
Stable mechanisms will not evolve, they have no motivation.
Unstable mechanisms have the potential to evolve but that is
only if by chance they have the right "goals" (requirements
for stability) that lead in that direction.
The advantage of having a motivating critic to get a robot
walking is that should it lose a leg while trekking Mars it
will then start adapting its gait. Should it come across
a new kind of terrain it would adapt to that as well.
Of course a "critic" will never guarantee a solution is
possible for any particular mechanism.
For those that can't afford a real robot there is always the
simulated robot to test your theories.
Interesting comment, Curt. My first reaction was, come on, you don't
know how poor these sensors are. Begrudgingly, I came around to suspect
you are right, and my recent experience supports you. I realized I have
been preparing to extract more information from my Sharp range sensors
for some time, and have not yet taken advantage of what I'm already getting.
My mini-sumo base robot used in my university class is built on a
MarkIII chassis. In the mini-sumo competition, while the Sharps we used
were the analog output, all we have used so far is to threshold the
analog to say, there is or isn't something out there between 10 and
80cm. (Essentially the same as if we just had used the digital version
of the ranger.)
Since the class is hosted by the physics department, I laid the
groundwork for taking more off the sensor. By reading the ranger, I got a
current range reading (even though its only use was as a comparison
point against a there/not-there range threshold). So I added a
differencing of each reading with the last, and thereby got a current
velocity. I differenced the new velocity against the old velocity and got a
current acceleration reading. At that time I had not yet bothered to
linearize the Sharp which would have given real world significance and
scale to the numbers, so the readings were somewhat meaningless, but the
basis was there. (I just took the time to linearize the Sharp sensors on
my latest project, soccerbots for the Queen of Jordan's children's
museum, and once mastered, I think the effort was well worth it, and the
code will undoubtedly migrate to the sumo code next time I work on it.)
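The differencing described here is only a couple of lines once the readings are linearized; the sample trace below is invented, with units of cm per sample period.

```python
# Differencing successive (linearized) Sharp ranger readings: the first
# difference approximates closing velocity, the second approximates
# acceleration.  The trace below is invented for illustration.

def derive(readings):
    vel = [b - a for a, b in zip(readings, readings[1:])]  # cm per sample
    acc = [b - a for a, b in zip(vel, vel[1:])]            # cm per sample^2
    return vel, acc

ranges = [80.0, 78.0, 75.0, 71.0, 66.0]   # target closing on the robot
vel, acc = derive(ranges)
# vel: [-2.0, -3.0, -4.0, -5.0]  (closing, faster each sample)
# acc: [-1.0, -1.0, -1.0]        (steady acceleration toward us)
```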
In other routines I had begun to compare the two adjacent (slight
horizontal displacement) parallel sensors. I checked the digital
conversion (target/no-target) and counted how long the detector had been
in the current state. This value was used in the sumo code. The detector
that last saw the opponent (left or right) determined which way the
robot turned to reacquire the target. On the analog side, I computed the
difference between the two current readings and an average of the
distance both saw.
Likewise, the simple digital edge detectors, looking for the presence of
the white line at the edge, were also further analyzed. I counted how
long the detector had been in the current state. These counts could be
used to know something about the angle the edge was approached by seeing
which one went on to the white line first. In pulling back from the
edge, a similar count comparison was used in the sumo code to decide
which way to turn around, to best position for an advance on the center
of the ring.
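The time-in-state bookkeeping for the two edge detectors might be sketched like this (the scan sequence is invented):

```python
# One way to do the "how long in the current state" bookkeeping for the
# digital edge detectors: a counter per input, reset on every transition.

class StateTimer:
    def __init__(self):
        self.state, self.count = None, 0
    def update(self, reading):
        if reading == self.state:
            self.count += 1
        else:
            self.state, self.count = reading, 1
        return self.count

left, right = StateTimer(), StateTimer()
# One scan per control tick; True = white edge line seen.
for l, r in [(False, False), (True, False), (True, True)]:
    lc, rc = left.update(l), right.update(r)

# The left detector reached the line first (it has been True longer),
# so back off and turn right, toward the center of the ring.
turn = "right" if lc > rc else "left"
```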
Now given our earlier conversation about the bump switches, this might
surprise you that I've made this much use of historical information on
two digital inputs. I will point out, this was done in a function
SENSOR-SCAN which was the first to run ahead of, and completely
independent of, my state based subsumption behavior routines. I see
nothing in Brooks that would approve of the choice, quite the opposite,
but I found it processor efficient to work out these somewhat
"statistical" details and have the processed information available to
(used in one or several or even none, it doesn't matter) the later-run
state-based behavior routines.
Well, there are only so many hours in a day that I can assign for
play. Were I an academic, then I might be doing this on a paycheck :).
As it is, it's just for fun on the side.
This is a good point that Jordan Pollack discovered in his games with
evolutionary robotics, or whatever he calls it. If you just let the
algorithm crank, there is no telling what you might get as a solution.
E.g., if I were to use the algorithm plus sonar-critic to teach the
hexapod to walk forwards FIRST, instead of to stand up first, then it's
highly likely the gait would involve something like 1 or 2 front legs
literally dragging the rest of the passive and prone bot towards the
wall. Like an inch-worm. Unlikely this scheme would produce a nice
hexapod tripod or metachronal-wave gait.
OTOH, if I have the algorithm learning to stand first, and then make
use of that prior-knowledge during the walking phase, I'll probably get
a much better result. To me, this is the only way to go. To me, what
Curt usually proposes would take my bot 3.5 BY [of analogous computer
time] to learn a good solution. OTOH, if I help it along a little, it
will produce a much better result many times quicker. I.e., directed, as
opposed to purely random, evolution.
Right. This is why intelligence requires complex statistical learning.
There are very few simple answers about how to make use of the sensor data.
Though a sonar sensor can tell you important things about whether you are
moving, it's very hard to correctly make use of that because of all the
times it would be giving you false readings. The case you mention above is
the "I see nothing" case where it would not be giving you any indication of
movement. Other problems would happen when it's giving you a fix on
something else which was moving. The system must recognize all the
different exceptions to the case and use them to make estimates about what
they mean about the world.
The reason few robots try to take advantage of this complex information in
the sensory data is because it's so hard. My point above wasn't that real
life data was easy to make use of, but that simply, there is useful
information in the sensory data that humans, with brains, make use of all
the time, but which robots seldom make use of.
"Seldom" is not very good usage here, Curt. People have been looking at
sensor integration problems for many years. The fact that it's hard
hasn't meant that people haven't been doing it. Look at Arkin's book
for dozens, if not 100s, of examples. Also ...
Also, Thrun's new book on Probabilistic Robotics. The real trick in
driving an autonomous robot vehicle 130+ miles wasn't in controlling
its behavior. It was in proper sensor processing.
Ah, but the point is that the evolution _is_ directed and not random. It's
descent with modification - not just modification. Which means, any time it
finds something good, the "good" stuff stays around (descent), and is used
as the foundation for the next round. Learning systems that don't manage
to create a good descent-with-modification system are the ones that never
get very far. To continue to grow and advance, it must be able to build a
growing base of knowledge and not have to rebuild the entire knowledge base
from the ground up each time by random luck.
For example, on your walking machine, if it did learn to use two front legs
to drag the rest, it should then move on and find ways to modify the
behavior of the back legs, without forgetting the useful behaviors it
learned for the front legs.
Preventing new experience from erasing the valuable lessons learned in the
past is part of the trick of making evolutionary learning work. The way to
do that, is to generate behavior based on the current sensory context, and
to make learning, based on the same. When the sensory context changes, the
part of the machine which is currently being "trained" will change as well
(or the effects of training will be weighted differently at least). This
allows the old learned lessons, to remain mostly unchanged, while it's
learning a new lesson. The complex emergent behavior of the system is a
combination of all these little lessons it has learned over time.
Part of the trick to making that work, is giving it a good system for
evaluating the worth of each change so that progress can be constant. An
example of a bad critic would be one that taught it to walk by waiting
for it to move a fixed distance from the start, and then giving it a single
fixed-sized reward for crossing the finish line. A good critic will
reward it for every forward motion it made, and punish it for every
backward motion it made, such that the reward was always proportional to
the distance moved. Any small evolutionary change that helped it move
forward farther, or faster, would receive a greater reward.
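The two critics can be contrasted on a toy trace of the bot's forward position (the numbers are invented):

```python
# A sparse finish-line critic vs a dense proportional critic, applied to
# an invented trace of the bot's forward coordinate over time.

def finish_line_critic(positions, line=10.0, prize=1.0):
    # Single fixed-size reward, and only if the line is ever crossed.
    return prize if max(positions) >= line else 0.0

def proportional_critic(positions):
    # Reward every forward step and punish every backward one,
    # in proportion to the distance moved.
    return [b - a for a, b in zip(positions, positions[1:])]

trace = [0.0, 0.5, 0.25, 1.25]        # wobbling forward, far from the line
print(finish_line_critic(trace))      # 0.0 - nothing to learn from yet
print(proportional_critic(trace))     # [0.5, -0.25, 1.0] - graded signal
```

Any small change that makes one of those per-step numbers bigger gets kept; the finish-line critic gives the learner nothing until the whole problem is already solved.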
In a walking machine, the behavior of each leg needs to be a function of
its sensory awareness of the actions of all the other legs. For example,
a simple action like pushing a leg back might move the bot forward, and
that action might be rewarded. And pushing it forward, might move the bot
backwards, and be punished. So it would learn to push the legs back, and
never bring them forward. And the bot goes nowhere (but it at least
doesn't get punished for moving backwards). But, if the behavior, and
training, is made context sensitive, then it can learn to push it backwards
when lowered, and push it forward when raised. These two behaviors are
sensitive to the context of the current position of the leg. Learning to
push it forward when raised, is not erasing the previous learning that
happened when it was lowered.
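A minimal sketch of that context-keyed learning for one leg, with an invented reward table standing in for the physics:

```python
import random

# Context-sensitive training for one leg.  The reward table is an
# invented stand-in for the physics: pushing back while lowered moves
# the bot forward (+1), pushing forward while lowered drags it back
# (-1), and vice versa while raised.

random.seed(1)
reward = {("lowered", "back"): 1, ("lowered", "forward"): -1,
          ("raised", "forward"): 1, ("raised", "back"): -1}

value = {k: 0.0 for k in reward}        # learned worth per (context, action)
state = "lowered"
for _ in range(200):
    action = random.choice(("back", "forward"))
    value[(state, action)] += 0.1 * (reward[(state, action)]
                                     - value[(state, action)])
    state = "raised" if state == "lowered" else "lowered"

best = {s: max(("back", "forward"), key=lambda a: value[(s, a)])
        for s in ("lowered", "raised")}
# Two opposite lessons coexist because each is stored under its own
# sensory context: best == {"lowered": "back", "raised": "forward"}
```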
Likewise, if the context is a function of the position of all the legs,
then it can develop a complex coordinated walking motion, one small step at
a time. One leg can learn to drag the body, and the others can learn to
respond to the motion of the first leg, in order to maximize forward motion.
The point here is that it never has to learn a complex behavior (like 6
legged walking) by one very lucky random chance. The odds of it finding a
complex behavior by random chance are near zero. It's learning it by stringing
together 1000's of very trivial behaviors, each of which have very high
odds of being found quickly.
If you want to train a machine to output a sequence of 10 digits, and you
try to do it by letting it pick random combinations of digits until it gets
all 10 right, it will take a very long time for it to learn to produce the
right answer. But, if instead, you allow it to learn one digit at a time,
then it will find it very quickly, with each digit only taking around 10
guesses on average and all 10 only taking 100 guesses.
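The one-digit-at-a-time regime is easy to sketch; the target and seed below are arbitrary, and the guess count typically lands near the expected 100:

```python
import random

# One-digit-at-a-time training: lock in each position as soon as the
# "reward" (that digit being right) arrives.

random.seed(0)
target = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]

guesses, answer = 0, []
for digit in target:
    while True:
        guesses += 1
        if random.randrange(10) == digit:
            answer.append(digit)     # keep the digit that was rewarded
            break
# 'guesses' comes out near 100 (10 positions x ~10 tries each), against
# an expected ~10**10 tries for guessing all ten digits at once.
```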
If you reward it based on the number of digits correct each time, you have
a problem somewhere between those two. It's not as easy as the second
because the system doesn't know which digits it's being rewarded for. It's
like a mastermind game (if you know that game). If the system tracks the
amount of reward, for each digit, in each position, it will find a
statistical correlation of more rewards for the right digit over time.
This system will be able to converge on the correct answer much faster,
than one which is only rewarded when all 10 digits are correct (but not as
fast as the one where each digit was trained separately). If the training
is able to reward closeness, or value, of a given behavior, then it can
converge on complex answers in reasonable time and not take billions of
years. It is just a matter of creating the training system (the critic)
and the learning techniques, to create these conditions where the system
can be directed to converge on a complex answer (as if it were being led by
the hand to the solution), instead of guessing it in one huge leap by
random chance. Large leaps of learning always take forever and will never
work. Learning is only workable when it can be broken up into many small
steps (descent with modification in small steps).
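The in-between case, where the only feedback is the count of correct digits, can be sketched by crediting each round's reward to every (position, digit) used, as described:

```python
import random

# Mastermind-style learner: guess all 10 digits at random, receive only
# the count of correct digits, and credit that reward to every
# (position, digit) used.  Over many rounds the right digit in each
# position shows roughly 1 more average reward than the wrong ones.

random.seed(0)
target = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
N = len(target)
tot = [[0.0] * 10 for _ in range(N)]   # reward credited per (position, digit)
cnt = [[0] * 10 for _ in range(N)]     # times each (position, digit) was used

for _ in range(5000):
    guess = [random.randrange(10) for _ in range(N)]
    reward = sum(g == t for g, t in zip(guess, target))   # digits correct
    for p, d in enumerate(guess):
        tot[p][d] += reward
        cnt[p][d] += 1

best = [max(range(10), key=lambda d: tot[p][d] / max(cnt[p][d], 1))
        for p in range(N)]
# best converges on target even though no single digit was ever pointed
# out as the one being rewarded.
```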
I more or less agree with what you've written here. Good learning
strategies vs bad strategies. Now, think of all the "innate" knowledge
the system has to contain in order to successfully implement your
"good" strategies, as compared to just doing everything randomly. All of
those good strategies require non-trivial protocols.
E.g., in regards to my walker, a naive solution, based upon a very
simplistic learning protocol might produce a result where only a single
leg learns to drag the walker along. This, I believe, is similar to the
kinds of solutions that Jordan Pollack was seeing. It's doubtful the
walker would learn a nice solution that would magically get all 6 legs
working together in a very efficient gait, like a tripod or metachronal
wave gait, by random chance. Similarly, for your 1-digit-at-a-time protocol.
OTOH, to implement your 1DAAT or an all-6 legs in synchrony protocol
will take a smart critic that oversees the results of previous learning
operations and strategically chooses the "correct" protocol for the
next learning stage. This is a non-trivial mechanism to implement, and
it does show up the weakness of raw generic learning protocols that
operate on the so-called tabula-rasa.
There are [at least] 2 approaches, as I see it. One is to implement
your temporal stage-sequencing "smart overseer". However, this thing is
already awfully intelligent itself, able to discern good results from
bad, at each stage, and then select the appropriate next stage ... and
somehow it needed to get that way. There's the rub.
The other way is the way I envision, which is to add the learning
modules atop a pre-existing mechanism that already performs the basic
operations, albeit non-optimally. IOW, use "genetically-determined"
gaits, which are very simple and probably non-optimal, and have the
learning module "fine-tune" the gaits to be more efficient and optimal.
E.g., with my controller, I can easily generate a basic gait in a couple
of minutes, by adding entries to the look-up table. However, it takes
me much longer to optimize the look-up parameters, even though there
are only 36 parameters for a 6-legged gait. Every small adjustment has
some effect on the overall result, and I can fiddle around seemingly
forever with hand-tuning.
In short, if I start with a basic gait, genetically-predetermined in
effect, and let the learning module make strategic adjustments to the
existing parameters, it should take a lot less time to produce an
efficient gait than if the system starts from zero in the first place.
I don't look at it like that because any single data stream from a sensor
can be decomposed into multiple data streams and once you do that, you end
up with multiple sensory signals and the same sensory signal integration
problem you get if they came from different sensors. All "sensory
processing" is done by decomposing, and recombining data in an attempt to
remove the data you don't care about, and extract the data you do care about.
In effect, every sensory signal is feeding limited information about the
environment into the robot. But every signal will include both information
the robot cares about, or could make good use of, along with information
the robot probably has no need for (noise generated in the sensor, for
example).
The more sensors you have, the more paths there are for information about
the environment to flow into the robot, but the problem the robot has,
which is to find the data that is useful in those flows and ignore the data
which isn't, is the same whether you are talking about one flow, or 100.
A sonar sensor for example in its raw form is really two inputs. One input
that tells the robot when a ping is generated, and a return echo which
tells the robot when echoes are heard. The return echo is always a complex
temporal pattern of sounds as the ping bounces off multiple objects. But
most sonar sensors will perform some type of average or threshold detection
operation on that complex pattern and reduce all the information down to a
single distance measurement. This might all be done with analog circuitry
on the sensor before the data is even sent to a central processor. But
the hardware is already performing multiple decomposing, and combining
functions on the raw data in order to filter out most of the data from the
sensor and reduce it to one small piece of data that the robot can easily
use (a single distance measurement).
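A toy version of that reduction, with an invented sample rate, threshold, and echo trace:

```python
# The raw return echo as a sampled amplitude trace, and the "sensor"
# reporting the first sample that crosses a threshold, converted to a
# distance.  Sample rate, threshold, and echo values are all invented.

SPEED_OF_SOUND = 343.0        # m/s
SAMPLE_RATE = 10000.0         # samples per second after the ping

def first_echo_distance(echo, threshold=0.5):
    for i, amplitude in enumerate(echo):
        if amplitude >= threshold:
            t = i / SAMPLE_RATE              # time of flight, seconds
            return SPEED_OF_SOUND * t / 2.0  # out and back, so halve it
    return None                              # no echo heard at all

# A complex multi-bounce return, collapsed to one number (meters):
echo = [0.0, 0.1, 0.0, 0.8, 0.3, 0.9, 0.1]
distance = first_echo_distance(echo)
```

Everything after that first threshold crossing (the later bounces at 0.3 and 0.9) is simply thrown away, which is exactly the filtering being described.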
To me, the process of extracting one useful piece of data (a single
distance measurement) from those two complex temporal signals, is the same
generic problem that has to be solved at all levels of sensory signal
integration.
Ultimately, the robot must produce outputs, which are a complex function of
all the raw sensory data. Defining those functions is the problem of
how to program the robot. I don't see it as one step of sensor processing,
then another step of sensory integration, then another step of output
pattern generation. I see it ultimately as one large step. We break it up
into small steps not because that's the best way to do it (and not because
it must be done that way), but because, that's the only way our limited
minds can comprehend how the bot is working. Sonar sensors are designed to
return a very simple limited distance measurement because that's something
we as programmers can easily understand and write code to respond to.
If it instead simply returned a raw echo profile, it's not something we
could easily make use of. But that doesn't mean the raw echo profile
doesn't include a lot of good data that a more sophisticated data
processing system could make good use of.
This is an interesting problem that I think would be a fun challenge for a
learning system.
A lot of people seem to think that the only way learning can work is if the
"teacher" (aka critic) is as complex as the task it's trying to get the
"student" to learn. Meaning - that for a machine to learn something
complex like multileg walking, the "teacher" has to be as complex (or
nearly as complex) as the walking algorithm would be in the first place.
I think this fails to understand the power of the system learning how to
teach itself. This is the point of secondary reinforcement. It not only
learns how to behave, it also learns to train itself by making predictions
of value. As it learns more complex behaviors, it is also learning to make
more complex predictions of rewards - which it then uses to train the new
behaviors. So as it's learning complex behaviors to produce better
rewards, it's also becoming a more complex critic of itself. This is how
strong learning systems become stronger and continue to advance in
capability.
Nah, if done correctly, it starts out very dumb - which means there isn't a
lot of complex code that needs to be written. But as it learns behaviors
that work better, it's also becoming smarter at training itself. Its
intelligence grows without us, as programmers, ever having to create a
complex overseer. You only need to give it motivation - just as you
suggested - to find ways to move forward faster.
This stuff I'm talking about is already implemented and working in some
systems, like the TD-gammon backgammon playing program. The only reward
that system was given was winning. It had no clues about how to tell if
one move in the middle of the game might be better than another. It had no
"smart overseer" to give it hints about why one move was better than
another in the middle of the game. It only had a very dumb overseer who
gave it a "good boy" at the very end of the game when it managed to win.
And yet, as the program learned to play the game, it also developed its own
complex evaluation function. Given any game position, the evaluation
function could produce an estimated reward (aka the odds of winning the
game) for that position. So, as the program learned, it was also developing a
very intelligent "overseer". This is what it was in fact learning: how to
be a better "overseer". Nobody had to write a complex overseer to teach it
to play backgammon. It learned how to teach itself.
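The secondary-reinforcement effect can be shown on something far smaller than backgammon; the 5-position chain below is an invented stand-in for game positions, not anything from TD-gammon itself:

```python
# A stripped-down illustration of the TD idea on a 5-position "game":
# the only external reward is the win at the end, yet temporal-
# difference updates propagate value backward until the table itself
# acts as a mid-game overseer.

V = [0.0] * 5        # V[s] = learned estimate of the odds of winning
WIN = 4              # position 4 is the won game
alpha = 0.1          # learning rate

for episode in range(500):
    for s in range(WIN):                  # play through the positions
        if s + 1 == WIN:
            target = 1.0                  # the one real reward: a win
        else:
            target = V[s + 1]             # secondary reinforcement:
                                          # its own later prediction
        V[s] += alpha * (target - V[s])

# After training, every pre-win position predicts a near-certain win; in
# a real game with losses these settle at the true win probability.
```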
Yes, that's a very practical solution if you don't have strong learning
systems to work with. You do as much as you can with our limited human
minds, and then let the poor learning system extend the design a bit
further by making minor adjustments to our design.
But, if you had a strong learning system to start with, then you don't need
to start with anything other than a basic goal (make the bot move forward),
and let the learning system, step by step, create and improve the design.
Your walking problem seems like a very interesting test bed for learning
systems. My claim is that a strong learning system could in fact, on its
own, learn to produce complex gaits (in a reasonable amount of time),
without us having to give it 90% of the answer to start with. I should
work on that problem and see what I can come up with....
Yeah, those are all good examples of how additional information is in the
sensor data which many times goes unused by simple hard-coded behaviors.
And much of it is in the temporal domain - meaning that when something last
happened (aka how long it has been since it happened) yields important
information for controlling the actions of the robot.
When we hand code these solutions, we write a limited number of if
statements to test sensor data and respond to. Like your example of testing
if the distance was >10 cm and <80 cm (two simple IF statement tests).
This produces behavior which tends to be jerky and robot-like. That is,
when something is greater than 80 cm away, the robot's behavior doesn't
change at all; it's as if the robot was totally unaware the thing existed.
When it crosses that 80 cm line, the behavior suddenly changes.
Humans and animals have very fluid and dynamic reactions to their
environment because they in effect, don't use 2 tests for data like that,
they might use 2000, or maybe 200,000. Every neuron in your brain is in
effect performing a simple conditional test for some very small but somehow
important sensory condition. To equal that type of complexity and fluid
reactions to the environment, we would have to hand-code thousands of tests
against the sensory data instead of the 2 you used in the first version of
your bot. This is just something that will never be practical for a human
to do. Like you have talked about, you might extract a few more bits of
data from the sensors and use 20 tests instead of 2. But creating a system
with 2000 subtle tests based on the current reading of a sonar distance
sensor is just something that isn't productive for a human to even try to
do. But it is the type of thing learning systems, like the brain, can
evolve, based on some goal the system is trying to reach.
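The contrast between 2 tests and 2000 can be shown in miniature (the ranges and counts below are invented):

```python
# One hard threshold gives a cliff-edge response; thousands of tiny
# threshold tests against the same sonar distance add up to a smooth one.

def two_test_speed(cm):
    return 1.0 if cm > 80 else 0.0        # all-or-nothing behavior

def many_test_speed(cm, n=2000):
    # n neuron-like micro-tests spread over 10..80 cm, each adding a
    # tiny bias to the output.
    thresholds = (10 + i * 70 / n for i in range(n))
    return sum(cm > t for t in thresholds) / n

# two_test_speed: 0.0 at 79.9 cm, 1.0 at 80.1 cm - a sudden jump.
# many_test_speed: rises gradually, ~0.5 around 45 cm.
```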
The limit to how far we can hand-code reactive systems is the limit of what
we can understand. That I believe is what is slowing the advance of what
has been done with such systems. We need in effect to develop tools to
program the systems for us - and that's what a learning algorithm is - we
give it a goal, and the learning system evolves a complex design to
maximize whatever metric we choose to have it maximize.
That's something I would like to read. Many of the simple algorithms we
tend to write use very simple boolean tests of data (is distance >80 cm?), for
example. But much of how animals and humans react can be better understood
as probability-based, because of the very fine resolutions at which the
decisions in the brain are being made.
Yeah, well, as I wrote in my previous message today, I don't think it's
valid to force such a hard distinction on the processing as being sensor or
other types of processing. In the end, there's a lot of processing that
happens between the raw sensor inputs, and the final outputs, and any
decision to label part of the processing as "sensor" processing and part as
"output processing" is totally arbitrary and mostly meaningless.
If you look on the link cited, you will find one of Thrun's citations
near the top of the page, and this will lead you to many of his papers
online, which cover various parts of his work described in the book.
You might take a look at Braitenberg's book "Vehicles: experiments in
synthetic psychology", as he describes what you can do by sensing analog
signals and feeding processed analog to your motors.
You might take a look at Thrun's own words ...
In the 1970s, most research in robotics presupposed the availability of
exact models, of robots and their environments. Little emphasis was
placed on sensing and the intrinsic limitations of modeling complex
physical phenomena. This changed in the mid-1980s, when the paradigm
shifted towards reactive techniques. Reactive controllers rely on
capable sensors to generate robot control. Rejections of models were
typical for researchers in this field. Since the mid-1990s, a new
approach has begun to emerge: probabilistic robotics. This approach
relies on statistical techniques to seamlessly integrate imperfect
models and imperfect sensing.
If you check out what Thrun actually did on Stanley, the DARPA-winning
vehicle, you'll see the real key to success was in integrating
short-range [out to 30' or so] Lidar maps with long-range video camera
images to plot a clear course ahead for the next few seconds.
I think I've read those words before. The idea of using probability as a
foundation of control systems extends at least back to Fuzzy logic in the
'60s (and the ideas continue to be used in real world applications today).
One of the guys I was chatting with at the DARPA meeting said he liked to
use Fuzzy logic for the low level control systems which actually operated
the various effectors combined with Bayesian probability to direct the
higher level decisions.
Yeah, I read the summary paper and have seen various videos. The neat
thing to me was that instead of trying to integrate the two sensors to
construct a model of the terrain, he instead used the accurate Lidar data
to identify safe driving areas on the lower portion of the video. He then
made the assumption that the top (aka more distant) areas of the video,
beyond the Lidar range, which looked similar, were probably also safe. I
think he made it work by doing simple pixel level tests. So instead of
using the video to try and construct an actual 3D map of the terrain, he
just used it to identify safe, and not-safe, areas of the video. That then
translated directly into which direction the car would need to steer to
stay in the safest areas, and, I assume, control its speed based on how
much "safe road" there seemed to be ahead.
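A rough sketch of that pixel-level trick, with invented colors and a one-row "image" (nothing here is Stanley's actual method, just the shape of the idea):

```python
# Pixels the Lidar has verified as drivable seed a road-color model, and
# more distant video pixels that look similar are assumed safe too.

def similar(a, b, tol=30):
    # Crude per-channel RGB comparison standing in for a real color model.
    return all(abs(x - y) <= tol for x, y in zip(a, b))

road_color = (120, 118, 115)       # average of Lidar-verified road pixels
far_row = [(122, 119, 113),        # video pixels beyond Lidar range
           (121, 117, 116),
           (40, 140, 45),          # off-road green
           (119, 116, 114)]

safe = [similar(px, road_color) for px in far_row]
# safe: [True, True, False, True] - steer toward the widest safe run
```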
The point here is that the output needs to reduce to only two basic things
- which way to turn, and how fast to go. It doesn't need cm resolution
terrain maps to make that decision. When we, as humans, drive down the
road, we have very little clue about the actual terrain we are driving
over. Instead, we have learned, through experience, what looks "safe" to
drive on, and what looks questionable or out right dangerous. We drive by
keeping the car in the "safe" zone. The total processing however, needs to
reduce the huge volumes of data coming from the sensors, down to very
simple outputs - turn a little left, or turn a little right. The mid-level
representations (halfway between raw sensor data and outputs) are likely
to be midway in complexity between the two as well. These mid-level
representations are what we know of as our "understanding" of what we are
looking at, but yet, that understanding is nothing near what some of these
DARPA cars attempt to create for their internal representation of what they
are seeing. What was used by Thrun I think is closer to what naturally
arises in our brains.
It seems to me that the answer to the question of which way to turn, is
just a large probability problem where many clues are being picked up from
the environment and each clue will add its bias to the answer. Something
that looks like a car headed towards us on the left part of our vision will
bias us to turn to the right. More of the same road we have been driving
over to the left, will bias us to turn to the left. A shape that looks
like a line painted on the road will bias us to not drive over it. All
these things and a million more will bias our decision about which way to
turn the car from second to second. All the processing from sensory data
to effector data to me seems to be very probabilistic in nature - as does
the learning that shapes it.
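That bias-summing view of steering, in miniature (the cues and weights are invented):

```python
# Each clue votes a small signed amount (negative = left, positive =
# right) and the turn command is just the sum of the votes.

clues = {
    "oncoming car in left visual field": +0.6,   # push right
    "our lane curving to the left":      -0.3,   # pull left
    "painted line close on the right":   -0.1,   # keep off the line
}

turn = sum(clues.values())
direction = "right" if turn > 0 else "left"
# Net bias ~ +0.2, so the car eases right this instant; a moment later
# the clues, and therefore the decision, will be slightly different.
```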
I ran across the following blog via a link from sensors mag,
To read some of the comments, there seems to be some question as to how
much work the Stanford guys did versus how much work the VW lab in Palo
Alto did, which appears to be specifically doing research on autonomous
vehicles.
My understanding is that he got VW to do all the hardware work (they built
the car and added all the sensors and computers). His focus was on the
software and project management. From a comment in some video I saw him
make, it sounded like the standard VW Touareg was a drive-by-wire car to
start with. That would mean the hardware work was adding the sensors and
extra computing power and interfacing to the control systems already in the
car. He made it clear in the video that his contribution to the project
was the software design. How much the VW group helped with the software I
have no clue.