Robots are all around, though most don't have android bodies or
do very many different things. Smart appliances are robots.
Robots wash my dishes, robots wash my clothes, robots cook
the dinner, robots wash the floor, robots vacuum, robots mow
the lawn, robots filter my email or answer it, robots answer the
phone, robots offer a route through traffic, robots interpret my
voice commands, and some will bring water or a beer and tell
a joke or do a dance.
Next they will track the supplies and place the orders and open
the packages and take out the trash, drive me to work, etc.
There are none that do all of that and are cheap, and we
won't see that for a while.
Do people want those jobs done, or do they want to see
an overpriced android struggling to do all those different
things at once and failing?
I want an overpriced android that can learn to do those things and not
fail. Cheap specialized machines optimized for a specific task will always
be around, but there is value in having a humanoid-shaped machine with
human-like learning skills. The value is that they can operate, and live
in, a world built for humans. They would be able to use all the tools
designed for humans to use - like driving a car, pushing a lawn mower,
walking up and down stairs, cooking meals in a kitchen, cleaning up a
house, or building things with normal human tools.
Before we can create a general purpose learning android we have to solve
AI. I tend to be optimistic about how long that's going to take (I've
recently made another 10 year bet on the subject), but it's possible it
will still take another 50 or 100 years to duplicate human level learning
skills. This is the technology that's holding robotics back, and no one
really knows how long it's going to take to solve. All we can do is make
some wild guesses (and work hard to solve it).
Around the turn of the 19th to 20th century the new thing was electric
appliances that were run by motors. Motors were expensive, so
people who were lucky could afford one motor which powered all their
appliances. They had to multitask the motor, attaching and detaching
it from each appliance and running them one at a time to get
assistance with their work.
Bad engineering, but good economics while motors were so expensive.
Economics drove down the price, people got more convenient
appliances, and they could get assistance on more work more
easily, or even in parallel. They no longer had to wait for one motor.
Around the turn of the 20th to 21st century the new thing was
appliances that were run by embedded processors. Processors had
been expensive, so people who were lucky had been able to afford one
processor, and if it was powerful enough it could multitask and do more
than one computing job at a time. But around the turn of the 21st century
processors became cheap and powerful enough that people owned dozens of
them in their laptops, cell phones, iPods, and automobiles. They no
longer had to wait for one processor.
The machines got smarter and began doing most things previously
defined as AI: having conversations, monitoring conversations,
looking up answers on the Internet, searching databases, solving
math and logic problems, playing games, doing new research,
precision positioning, group activity planning, etc. Most of
this became possible when collections of computers were interconnected.
Is Santa Claus bringing you one?
All of the problems except economic and engineering ones have been
solved. AI has been more advanced than most hobbyists realize for a
long time.
You are not taking into account the fact that human learning skills
are not a fixed quantity. Average human skills are dropping about as
fast as computer capabilities are rising.
Even so, a million dollar humanoid robot is closer to a goldfish than
to a minimum wage human worker in practical functionality today.
Decent vision, voice, reflexes, navigation, planning, and general
purpose learning and reasoning are out of the range of computing
toys today, but we are pretty close. They are way out of the range
of most toy robots.
Consider that if a neuron can be simulated with 100 instructions
per second, a thousand dollar laptop can simulate 10^7
interconnected neurons, and 10^4 of them interconnected could
do much of what a person does with a couple pounds of grey
matter. Now once you figure out how to get that ten million
dollars worth of computers and megawatt power supply
inside the head of that humanoid android that you want to
replace your minimum wage servant, you have solved an
important remaining problem. Then you just have a few
other engineering problems to get those other manufacturing
costs and maintenance costs down below those of a jumbo jet.
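A rough sketch of the arithmetic behind that estimate (all numbers are the round figures assumed above, not measurements; the laptop throughput of 10^9 instructions/sec is implied by the 10^7-neurons-per-laptop claim):

```python
# Back-of-envelope cost of brute-force neuron simulation,
# using the round numbers assumed in the paragraph above.
instr_per_neuron = 100     # instructions/sec to simulate one neuron (rule of thumb)
laptop_ips = 1e9           # implied $1000 laptop throughput, instructions/sec
neurons_per_laptop = laptop_ips / instr_per_neuron   # 1e7 neurons per laptop
laptops_needed = 1e4                                 # 10^4 laptops interconnected
total_neurons = neurons_per_laptop * laptops_needed  # 1e11, roughly a human brain
total_cost = laptops_needed * 1000                   # $10 million of hardware
```

This is only checking that the cited figures are mutually consistent, not that 100 instructions per second is a realistic cost for a neuron.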
I heard that a lot forty years ago, before better and better
solutions to most problems were demonstrated.
Of course, remember that people argued that controlled
heavier-than-air flying machines were simply impossible
for years after the Wright Brothers demonstrated it,
and people still argue whether space flight is possible,
or whether old AI problems are solvable. But this
sort of thing creates more new problems than the
number of old problems that it solves.
The real problems are neither AI nor economics. They
sort themselves out. The real problems are social, just
as was predicted fifty years ago.
My main interest is in AI, not robotics. I am a hobbyist, but I still
have a good understanding of the field of AI. No one has invented a
machine that can learn, on its own, to do all those things. The current
learning algorithms are very limited. They tend to only be applied to,
and work with, very specific toy problems.
I've never heard that reported. What data is there to support that idea?
Right. There are reasons I made the bet that human level machine
intelligence would be here in the next 10 years. (9 now). I think we are
closer than most people believe.
Well, those calculations are very questionable (as are all attempts to
estimate the amount of computing power needed to duplicate human
intelligence, since no one knows what computing power is needed - it's
still an apples to oranges comparison at best).
How many instructions per second does it take to simulate a transistor
which can switch a billion times per second in a digital circuit? Neurons
fire around 1000 times per second max, and you said it takes 100
instructions per second to simulate one, so I assume this means you would
estimate 10^8 instructions per second to simulate a transistor. Using this
logic, it would take a building full of interconnected laptops just to
duplicate the power of a single CPU chip. And that of course is absurd,
because we know it only takes one CPU chip to duplicate a CPU chip, and not
a building full of laptops.
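The scaling in that reductio can be written out explicitly (the rates are the round figures used above, and the ~10^9 transistors-per-CPU and ~10^9 instructions/sec-per-laptop counts are assumed round numbers):

```python
# Reductio: apply the neuron-simulation rule of thumb to transistors.
neuron_rate = 1e3        # max neuron firings per second
neuron_sim_cost = 100    # instructions/sec to simulate one neuron (as claimed)
transistor_rate = 1e9    # transistor switches per second

# Scale the simulation cost by the ratio of switching speeds:
transistor_sim_cost = neuron_sim_cost * transistor_rate / neuron_rate  # 1e8

# A CPU with ~1e9 transistors would then cost 1e17 instructions/sec to
# simulate; at ~1e9 instructions/sec per laptop that is ~1e8 laptops.
cpu_transistors = 1e9
laptop_ips = 1e9
laptops_needed = cpu_transistors * transistor_sim_cost / laptop_ips  # 1e8
```

The absurd conclusion (a hundred million laptops to equal one chip) is the point: per-device simulation cost is a poor proxy for the power of the real hardware.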
We don't need to simulate transistors to create a computer, and we won't
create intelligent machines by simulating neurons in software. Cortex
simulation projects are great research tools, but they are not going to be
the basis of real intelligent machines any more than SPICE circuit
emulators are used to build radios.
The problem with estimating the amount of hardware that's needed to
duplicate human level intelligence is that we don't yet know how to build
it, and if we can't build it, all estimations of how much hardware it's
going to take are likely to be way off base - in either direction.
For example, with neurons switching at 1000 times per second and 100
billion of them in a brain, you get hardware with a max speed of around
10^14 operations per second. But transistors can switch 10^9 times per
second, so you only need 10^5 transistors to get the same max switching
performance. A laptop with 1 GB of memory has over 8x10^9 transistors in
it (one per every bit of memory). That means a single laptop has 80,000
times more computing power than a human brain.
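The same switching-rate arithmetic, written out (round figures from the paragraph above; the 8x10^9 count is one transistor per bit of a 1 GB memory):

```python
# Max-switching-rate comparison, using the round numbers from the text.
neurons = 1e11           # neurons in a brain
neuron_rate = 1e3        # firings per second per neuron
brain_ops = neurons * neuron_rate          # 1e14 operations/sec

transistor_rate = 1e9    # transistor switches per second
transistors_needed = brain_ops / transistor_rate   # 1e5 transistors

laptop_transistors = 8e9  # one transistor per bit of 1 GB of memory
ratio = laptop_transistors / transistors_needed    # 80,000x
```

As the following paragraph says, this is not offered as a valid estimate; it is a counter-example showing how far such estimates can swing in either direction.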
Now, I'm not trying to argue that this is a valid way to estimate the
amount of hardware required, but I am trying to argue that it's probably no
less valid than your numbers. A laptop-sized board full of custom chips
has more information processing power than a human brain. But once we
understand how to build human level intelligence into a machine, will we be
able to build it in that form? We just don't know yet, since no one
knows how to build it in any form.
I suspect, however, that as we finish mastering the technology, that is
exactly what we will end up with - custom hardware that, if built with
today's electronics technology, would only be the size of a desktop
computer.
Neurons range from 4 to 100 microns in size. Transistors are below .25
microns now. Large parts of the brain are filled with interconnects (white
matter); computers reduce interconnects by using high speed switching and
sharing interconnects. Evolution didn't have access to high speed
switching devices, so to keep response time low, it had to use massive
interconnects. When we redesign with a different technology (transistors
instead of neurons) we will end up with a very different architecture.
It's likely a design with more switching devices (transistors) and fewer
interconnects will be the optimal solution when building with the higher
speed transistors. In the end, I expect our machines will be substantially
smaller than a human brain in order to duplicate its same power.
Yeah, I was just reading about the amount of resistance the idea got.
Well, the social issues are sorting themselves out as well. There's a lot
of momentum to be overcome. A surprisingly huge percentage of the
population still doesn't believe in evolution after 150 years. But like
the Wright brothers, once you build something that works, it doesn't take
long to make society believe. I believe it was only a few years to get rid
of the doubters in the case of the Wright Brothers work. With the speed of
communication we have today, word would spread very fast - if only we had
something to show them.
I design and program new computers. My background includes AI.
Algorithms are not the problem. Toy computers do well on toy problems,
but serious computers are expensive. I accept that you can't afford
a supersonic plane; I don't accept that that means they are
impossible and don't exist at all.
I could point you to a lot of books and scientific papers. I would
recommend Jacques Barzun's From Dawn to Decadence or The
Twilight of American Culture by Morris Berman as excellent
references to trends like the decline of literacy. You know,
over half of Americans don't know whether the sun goes around the
earth or the earth goes around the sun, etc.
I was just using a rule of thumb offered here by Hans Moravec a few
years ago. Read his book Mind Children for more details.
Your arguments seem to have mixed up the idea of simulating
neurons on a digital computer with the idea of simulating a digital
computer in software.
I design and program new computers for a living. They are optimized
for things like AI and robotics. This is a mixture of abstraction,
computations, realtime performance, low cost, and lower power use.
Designing new computers is mostly about simulating transistors,
so I could not disagree more that simulating transistors is not
relevant.
Who is the "WE" you refer to creating new computers? That
certainly doesn't include me, or the people I work with, or the
people that they worked with at Intel or General Dynamics.
Who is the "WE" you refer to as creating intelligent machines?
That doesn't include me or the people who I have worked with who
did that with your tax money.
In a lot of cutting edge science, people are using computers to
simulate neurons, or using genetic algorithms to make new
discoveries for them. Some of the hardware and software that
most impresses people was not really designed by people.
We? Who is this we?
Is that the end of time, or just the end of humans?
Well it depends on whom you asked and when. You know there are
people who don't think man actually went to the moon.
Judging what technology exists by what the public is told is a very bad
idea. If you are part of the 'we', you know that the trick is getting
close to, but not crossing, the line of what the public is allowed to
know. What can be made public is often a few decades behind what is
classified. The trick to commercializing technology is to not get too
far ahead of what the public is allowed to know about.
Here is a view of history: when _____ was still classified, it was
common knowledge that what it did was impossible.
My favorite is when amateur scientists tell you that you can't break
the laws of physics. It's funny, I thought that is what physicists
try to do every day: break the old laws, develop new ones. And
they are not interested in only things that the average person
already understands.
Sure, I agree that some things require very large and expensive computers
these days. And some AI tasks might be doable with today's technology, but
simply haven't been done because the project can't be cost justified. But
there are still many things no one knows how to do, no matter how much
money they have to buy or build hardware.
Show me, for example, a robot that acts like a real dog. I don't care if
it's a billion dollar computer connected wirelessly to a robot, or if it's
just a simulation of a robot dog. I've never seen any software that acts
anything like a real dog. Have you? If I gave you unlimited funds, could
you build one without first developing new technologies?
Ah, you were talking about how average human knowledge is declining (at
least in the US). I thought you were talking about how learning skills
were being evolved out of us. I see those as two very different things.
You seem to want to lump it all together as one.
I see the problem of AI as being a problem of building a strong
reinforcement learning machine. So I tend to use the word "intelligence"
to describe the raw learning power of a machine, instead of using it to
describe its base of accumulated behaviors (aka knowledge). The behaviors
do in fact improve its learning skills, but still, I see only the core
learning skills of the machine as its real intelligence.
Cool. Which ones? Anything that I would have heard about?
You seem to have missed the point. You are talking about using simulation
tools to aid in the design of new computers. I was talking about how we
don't build transistor simulators into the new computers we build.
Instead, we just put real transistors into the computers.
I did say "build", and the design process is generically all part of what
we do to build new computers, so I understand your reaction. But the point
I was making is that the most optimal design for a new computer isn't
created by including a transistor emulator in the computer - we just use
transistors. And likewise, when we better understand how to create
intelligent learning controllers, we will not build them by using
transistors to emulate a neuron-based learning controller. We will simply
build a transistor-based learning controller.
Are you talking about neural nets or actual neuron simulations? Neural
nets logically might be thought of as some type of neuron simulation, but
they are not in fact anything like a neuron simulation. They would not do
anything useful if you took a neuron out of your head and put a node from a
neural net in its place.
All the real neural simulations I've seen reference to are only tools for
helping to understand what real neurons are doing and work just like all
our simulation tools.
Neural nets of different types, which are sometimes incorrectly talked
about as neural simulators, are in fact many times just research into the
behavior of new algorithms, and their only connection to real neurons is
that their design was inspired by the existence of neurons. I've spent a
lot of my time working with these types of things, but I never made the
mistake of believing it was a neural simulator.
You seem to be jumping to places I just wouldn't go. There are a few
people that have worked for years to build real neural simulators in order
to understand both what individual neurons are doing and what networks of
neurons are doing. These are all driven by real data collected from real
neurophysiology studies, and their results are compared to real neurons.
Neural nets, which are not designed from real neurophysiology data, and
whose results never get compared to the operation of real neural networks,
are not neural simulations, even though the word "neural" is part of their
name. They are not simulations at all. They are data processing
algorithms optimized for their specific purpose.
Human level machine intelligence will no doubt in my view be created by
neural network like algorithms. But they won't be neuron simulators.
You, me, and everyone else working on some part of the AI problem.
Oh, 50 to 100 years order of magnitude I'd guess (to make a machine with
the power of a human brain smaller than a real human brain). Humans I
think will be evolved out of existence by the machines (not wiped out -
just replaced by attrition) but that will probably take thousands of years.
It's fun to speculate about but most likely what actually happens will be
different from what everyone was expecting.
Yeah, well, there will always be a fringe that believes all sorts of odd
things.
So, you design classified technology for a living? I worked as a
contractor for the Navy for many years, but I wasn't connected with
anything classified.
Yes. It was not a physical dog, or an organic dog, and it didn't
interact with real dogs. But it was designed to allow AI professionals
to demonstrate to non-professionals what a dog's neurons do. It
was a software simulation of a virtual dog, built with simulated neurons
and studied in a simulated reality.
After I took the money I would tell you to look for it on google.
I am working on SeaForth24 now. Scalable Embedded Arrays.
I doubt you have heard of it; it was just announced, with prototypes,
but it is not for sale to the public yet.
Gerald M. Edelman won a Nobel Prize for that kind of research. He
went on to create his theory of neuronal group selection and to apply
it to the real world. They did large scale implementations of
large self-programming learning neural societies, demonstrating a
continuum of intelligence.
To demonstrate it to the public, one of the people at the institute wrote
a small example, a simulation of a dog in a simulated world. The dog
would learn. Your job was to train it. Like a real dog, it could learn
wrong things if you were not good at training it. It acted like a dog.
Alan Alda demonstrated his skills at training an artificial dog to do
tricks on an episode of the Scientific American Frontiers program. The
dog demo was just a toy to show a small example of Neural Darwinism
to the public, who are not well educated about such things. Google the
theory of Darwinian Neuronal Group Selection, or old computer
programs like Darwin II and the dog demo demonstrating
the theory in action.
Polytechforum.com is a website by engineers for engineers. It is not affiliated with any of manufacturers or vendors discussed here.