What's a Robot?

This is sure to spark a lively discussion:

I am sick and tired of hearing the term "robot" used to describe any machine or toy with pre-programmed or remote-controlled actions. The other day I was watching MythBusters and they were, once again, talking about building a "robot"-something (a dog this time, for their shark attack tests). In fact, the contraption they built, as impressive as it may be, is nothing more than a cute little machine with an on/off switch. Yet it was called a "robot". If anyone from that wonderful show is reading this, please start calling these things "machines"; using the term "robot" is an insult to anyone doing real robotics work. Robo-dog is no different from a hand-drill with a lever or two attached to the chuck rather than a drill-bit.

OK, so, what is a robot?

I don't think the answer is that simple...

Does unattended computer control make it a robot?

Well, no. A CNC machine will run unattended after the G-code program is entered, but a CNC machine is not a robot, it's just a programmed machine. The idea goes back to the days of programmed cloth-making looms that used punched cards for control.

Does sensor feedback make it a robot?

Not really. A number of microwave ovens today have the ability to sense temperature in order to vary their heating profile to, say, defrost and then cook. I don't think anyone would call a microwave oven a robot. However, it should be noted that if the Myth Busters mechanical swimming dog can be called a robot then a microwave is ten times more a robot than robo-dog.

Does it have to move to be a robot?

Hard to say. My personal opinion would make me say yes, of course. But what will we call a truly AI-based computer program or assistant in the future, something that is genuinely useful and has "real" intelligence? Will that be a robot or just an AI program?

Does it have to be able to learn to be a robot?

Preferably yes, although we are a little far from true and useful learning. Most robotics research today is still trying to create a basic mechanical platform from which to start doing things. Case in point: the various bipedal walkers at universities around the world. Most of them (my opinion) are exercises in futility. Outside of places like the MIT Leg Lab it is hard to find researchers who "get it". Most walkers you see out there --like Asimo-- are what I call "statically balanced". They don't walk like we do; they shift from leg to leg and have wide feet. They also walk like they are constipated, with their legs permanently bent at the knees. They are not too far from mounting two industrial robot arms upside-down and calling them "legs". Neat for demos and TV advertising, but useless for real applications like walking on a rocky road or going up or down a dirt hill. Should these be called walking machines rather than robots?

The natural follow-up question: is walking required to be a robot?

No, of course not. Wheeled, flying and swimming robots are all categories where true robotics has a place.

The micromouse competition is an interesting area. The machines built to run these mazes are probably closer to my idea of "robot" than anything else: they are programmed by their designers to learn, solve a problem, optimize the solution, and execute it as efficiently as possible. They use sensors to "see" their environment and use this in the process of learning.
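For the curious, here's a minimal sketch (in Python, with a made-up four-cell maze rather than a real 16x16 course) of the flood-fill idea many micromouse builders use: compute a distance-to-goal for every cell the mouse has mapped, then always step toward a lower number.

```python
from collections import deque

def flood_fill(open_moves, goal):
    """Compute distance-to-goal for every reachable cell via BFS.
    `open_moves` maps each cell to the cells it can reach (no wall)."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        cell = queue.popleft()
        for nxt in open_moves[cell]:
            if nxt not in dist:
                dist[nxt] = dist[cell] + 1
                queue.append(nxt)
    return dist

# A toy 2x2 "maze": each cell lists the neighbors it can move to.
open_moves = {
    (0, 0): [(0, 1)],
    (0, 1): [(0, 0), (1, 1)],
    (1, 1): [(0, 1), (1, 0)],
    (1, 0): [(1, 1)],
}
dist = flood_fill(open_moves, goal=(1, 0))

# The mouse repeatedly steps to the neighbor with the smallest distance.
path = [(0, 0)]
while dist[path[-1]] > 0:
    path.append(min(open_moves[path[-1]], key=dist.get))
print(path)  # (0,0) -> (0,1) -> (1,1) -> (1,0)
```

On a real mouse the fill is re-run every time a new wall is sensed, so the map and the route improve together - exactly the learn/solve/optimize/execute loop described above.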

It is probably clear from my short discussion that I think that the term "robot" ought to be reserved for something special, not mere animatronic machines. Here's my basic list of requirements (incomplete):

A robot must...

- have the ability to learn
- not be remotely controlled by a human being for all of its actions. Remote control is acceptable as a form of communication and, for example, to position the robot where it must do useful work. If the machine can perform useful actions on its own after that, then it can be called a "robot"
- move and interact with the physical world; AI in a computer is an intelligent program, not a robot

Top on my list is learning. If you build a cute little hexapod machine out of the many kits available out there, how does it walk? Is it walking because you programmed a sequence of motions that make it walk? Then it isn't a robot, it's a programmed machine just like a CNC milling machine or your microwave oven. However, if you program a learning algorithm and the hexapod learns to walk on its own, then what you have is a robot. Or at least the beginnings of one. A much tougher task than pre-programmed mimicry, to be sure.

You could add to my list a requirement to interact with human beings in natural language. However, this can't be an absolute requirement. For example, if a truly intelligent autonomous mine-detection robot existed, I wouldn't want to communicate with it via spoken language because there are too many opportunities for error. A command-based interface is probably far safer.

I'll stop here; I'm sure there are many on this list with far more insight into this than I may have.


Reply to


John Nagle

Reply to
John Nagle

Ouch. Not warranted sir. Not at all.


Reply to

You don't look like a Troll to me.

Maybe a robo-troll? :)

It's not really important how people choose to use the word robot. :)

I do agree that the use of the word has drifted far from its original intent but it doesn't bother me. I like all the machines called robots. The more autonomous and intelligent, the more interesting they are, but I don't feel the need to debate what we should or should not call a robot.

I do agree that if you want to understand intelligence then learning is a key factor and any machine without strong general learning is hard to call intelligent. But I see no need to limit the definition of robot to only intelligent autonomous slave machines even though that's where the idea started.

Reply to
Curt Welch


It's not like I lose sleep over this... far more important stuff to worry about these days.

I just think that terminology has to have meaning. Not to pound on the MythBusters guys (a show we all love at home), but they call anything with a motor and a switch a "robot". By that measure the electric power seat in your car is a robot. Or, better yet, cars that have power seats with preset position memories are ten times more of a robot.

Maybe that's what I am grappling with. To me a robot has always been something special. With intelligence or a reasonable simulation thereof as a prerequisite.

Of course, there are stages to the development of a robot. If you are studying bipedal locomotion and have a machine with two legs attached to a pole so that it can walk in circles without falling over, well, it may be fair to call that a robot or "part of a robot". It isn't really intelligent but it is, presumably, part of the R&D process for a real robot.

An air motor attached to a samurai sword with a manually operated valve is not a robot, despite MB claims during those shows. Again, not picking on them at all; it just happens to be a very prominent place where the term is used over and over again.

Battle-bots are another example. They are not robots. They are remotely controlled weaponized vehicles. Take the human away from the remote control and they are absolutely useless (as neat as they may look). A real battle-bot would have a remote with a few buttons, for example: GO, STOP, AGGRESSIVE, NEUTRAL, DEFENSIVE, etc. In other words, human input isn't precluded from the process, but it might be limited to requesting a state or strategic posture; the battle-bot would then --by itself-- conduct and run the battle. Big difference.


Reply to

I agree that Mythbusters takes calling everything a robot a little too far. I had issues with the whole sword-swinging robot and such. Anytime they ask Grant to do something, it's "hey, can you build a robot to do that?"... and it's pretty much a servo with a remote trigger. But don't get me wrong, the guy is a class act on R/C things. Robots, though? That's a little iffy.

On the whole battle-bots thing: I would love to see a battle run something like the RoboCup. The robots would have to (1) identify targets, then (2) destroy them. Points could be awarded based on how fast a target was acquired and how accurately it was attacked.

Reply to

It's a great show. I love it. I worked in motion picture special effects for a while and know just how capable FX guys can be (and how much fun the work can be!). These guys are top notch and the show is fantastic.

Here's another one that might really drive the point home:

I love it when they rig a real car for remote control. I can't imagine how much fun it must be to drive a full-size remote control car. Just brilliant.

Now, compare that to the DARPA autonomous driving challenge --which I am sure folks on this list are very familiar with--.

Which of the two scenarios is a robot? I would be astounded if anyone considered calling the full-size R/C car a "robot", as it isn't even close by any stretch of the imagination. The driver of that car could negotiate the DARPA challenges without any trouble whatsoever. Building a real robot that can do the job is far, far more complex and expensive than that.

With regards to the utility of debating the term: debate isn't for everyone. This is not an insult, just a statement of fact. Some folks simply have no interest in this or don't see the point. And, from their frame of reference, they are absolutely right. However, those who can and do consider it are generally open to learning and changing their views of the world based on what they hear. That is the very foundation of the scientific process.


Reply to

Debating word usage is science?

Not by my standards. :)

My type of fun is debating how to make robots act like humans.

Reply to
Curt Welch

You are correct. But if suppliers had to meet a higher standard in order to sell "robots" rather than remote-controlled toys, we might just have some progress in this field. Do we call R/C airplanes "robots"? What if Graupner (an R/C equipment manufacturer) started calling their R/C planes and boats "robots"?

The hobby robotics field has had a couple of false starts over the last 25 years or so, leaving a couple of failed magazines and companies in its wake. I posit that one of the reasons is that these "robots" really run out of steam as soon as you get them to dance around and do a couple of tricks. Then what? I've watched my own kids go through this cycle. I still have a HERO (who remembers those?) brand-new in-the-box, as well as a HERO arm (also NIB), in the garage. Those were nice kits. Probably as close to real robots as you could have expected to get back then.

Today's kits are, in my opinion, not too different... which isn't saying much, because microprocessors are 1,000 times more powerful and capable today than they were back then. HERO could be programmed to navigate around a room, make sounds, sense various things, move its arm, etc. I believe the processor was a 6502 (my memory may fail me here).

And, speaking of 6502s: back in the '83 timeframe I built a walking hexapod that used 11 Rockwell R65F11 microprocessors operating in parallel for control. The R65F11 was a 6502 with FORTH built in. The hexapod was two feet tall and used TRW servomotors designed for fast control-surface positioning. I know that I considered it a robot back then (I was young and didn't have a lot of experience). Thinking back, it wasn't. It was a fine example of a programmed machine... but that's about it. After making it walk around the room, use touch pads on its feet to sense obstacles and back up, and climb a flight of stairs using a similar technique... well, what else could you do with such a thing without INTELLIGENCE? And that's my point. And so my focus changed to understanding intelligence rather than mechatronics. Mechatronics is easy.

Without intelligence these things are nothing more than programmed sequence CNC machines that get old pretty fast.

I don't want to debate the word as much as highlight the fact that until we change the focus from remote-controlled, animatronic or CNC machines to seeking to build intelligent machines, the field will not reach its potential and progress will be curtailed. Thankfully there are lots of universities where this is exactly the case. My greater point, perhaps, is that young ones, when exposed to the misuse of the word, may actually believe that they are dealing with robots when, in reality, all they are dealing with are toys that they can program to execute an unintelligent sequence of moves. I am making a point to teach my kids the difference, in hopes that the challenge to elevate these things beyond toys might inspire them to create the next revolution in robotics.

"Acting" is relatively easy. Just go to any good attraction at Disneyland. THINKING and LEARNING are very different matters.


Reply to

I, too, am bothered that the term 'robot' is applied to too many things that are simply machines. Especially R/C items that are 100% controlled by the human operator. I used to fly r/c planes and would never consider any of them robots.

But this is a difficult classification. It's like there is a fine line dividing robot from non-robot, but then I think there are also shades of gray in between. For example, in the R/C planes I used to fly, I incorporated a missing-pulse detector: if contact with the transmitter was lost, some of the servos would be sent to pre-set positions. This would allow the plane to, say, be put into a shallow turn and throttled back to allow for some type of recovery. Back then (ahem, in the '80s), this was all done with analog circuits, but the very same thing could be -- and I'm sure is -- done today with microprocessors. And you see servo-stabilizers for R/C helicopters or high-flying planes to keep themselves level. So the line starts to blur.
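The failsafe logic itself is simple enough to sketch in a few lines. Here's a Python illustration of the idea (the servo names, pulse widths and timeout below are invented for the example, not taken from any real radio system):

```python
import time

# Hypothetical preset fail-safe positions (servo name -> pulse width in us):
# a gentle bank, neutral elevator, throttle pulled back.
FAILSAFE = {"aileron": 1600, "elevator": 1500, "throttle": 1100}
TIMEOUT = 0.5  # seconds without a valid frame before the plane takes over

def failsafe_step(last_frame_time, live_commands, now=None):
    """Return the servo commands to apply this control cycle.
    Passes the pilot's commands through while the link is alive,
    and switches to the presets once the link has been silent too long."""
    now = time.monotonic() if now is None else now
    if now - last_frame_time > TIMEOUT:
        return FAILSAFE
    return live_commands

pilot = {"aileron": 1500, "elevator": 1500, "throttle": 1800}
print(failsafe_step(last_frame_time=10.0, live_commands=pilot, now=10.1))  # link alive: pilot flies
print(failsafe_step(last_frame_time=10.0, live_commands=pilot, now=11.0))  # link lost: presets
```

The same shades-of-gray point applies here: a handful of lines like this is clearly not intelligence, yet the machine is already acting on its own.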

I think it comes down to intelligence, as Martin brought up. I think a machine that interacts with the world with intelligence would qualify as a robot. (Hmm, a microwave oven???) But the problem is simply deferred, because no one has really come up with a good definition of intelligence. Back in the '80s-'90s I did some work in artificial intelligence. Besides never coming up with a good definition, I learned a very important concept that might be applicable here.

In AI the goal is to make the computer/machine/robot exhibit behavior that is indistinguishable from a human's. As long as we couldn't get the machine to 'behave with intelligence', the desired behavior was called 'artificial intelligence'. As soon as somebody got the desired behavior, everyone said, "Well, that is not intelligence. That's just some clever programming!" However, many of the things we couldn't do back then are common nowadays: computer-generated speech, speech recognition, trip planning, computer vision, and so on.

So, _my_ definition of robot would be: a mechanical device, controlled by humans with high-level input, that interacts with the real world with intelligence and/or learning.

Battle-bots don't qualify. But the soccer-playing bots I saw on Nova last night -THEY qualify!

Sorry if I rambled too long.


Reply to
Jim Hewitt

for sure!

Many simply consider "artificial" to just mean "man-made", as opposed to fake or imitation.

The moving goal post problem. AI is always just out of reach because no matter what you do, it's labeled as not intelligent.

Like the Google search engine computers? They are mechanical devices (everything is a mechanical device), they are controlled by humans who simply type English commands to them, and they interact with the real world by communicating with other computers all around the world and by interacting with humans all around the world.

My point is that you probably weren't thinking of things like the Google computers as robots but yet they basically fit your definition (and people do call web crawlers robots by the way just to muddy the waters even more).

It all looks intelligent until you understand how it works, then people just call it clever programming again.

It's almost as if it can't be intelligent if someone understands how it works! :)

Oh, if you think that's rambling, you should see some of the crap I post! :)

The problem here is that there is no dividing line between intelligent and not intelligent. It's nothing but a large continuum. As such, you can't find a place to draw the line between intelligent robot and non-intelligent robot.

By the time we get machines acting like humans, there's going to be a huge trail of prior-art that fills the entire gap between the wheel and human level AI.

Currently there is a clear gap in intelligence between humans and everything else on the planet (machines and animals). This gives us a clear line between "human intelligence" and all other forms of complex machine behavior. It creates the illusion that humans contain a unique technology of some type. But AI and the general growth of technology are going to fill that gap with a nearly infinite range of machines of different sizes, skills, and abilities, and once the gap closes, the word intelligence is going to lose its foundation and all technology is going to blur together as different types of intelligence - just like the word ROBOT is blurring over that space already.

In the end, intelligence is probably going to have some vague meaning like "adaptive learning system" and it won't have anything to do with humans per se and all machines will be called robots. And more important, we will have lots of new names for all the new technologies that are developed.

Reply to
Curt Welch

You don't need regulation (at least not directly). Just register the word robot as a trademark, and license its use only to entities which use the word as you see fit.

And relevant to this thread is another thread from this very group ten years ago (I found this in Google Groups despite all odds, Google's Usenet archive search functions seem to be hosed lately), where I asked about robots "Where are they?"


Reply to
Ben Bradley

Well, I was just 13 years old when the first Star Wars came out. Maybe that "poisoned" my thinking. I hand-built my first computer, with the Intel 8080 processor, not too long after that. I was immediately focused on "robotics" because, as a kid, I wanted to build my own R2-D2 and C-3PO. It didn't take very long to realize that this was impossible for many reasons, the lack of good AI being one of them. So, I wouldn't call it a late realization on my part, but rather a realization that these days everything is a "robot", probably without justification.

The dialog 20~25 years ago was very different. You couldn't build cheap little machines with lots of servos and microprocessors easily. It took a lot of effort, money and study if you wanted to attempt something like that. So, in many ways, I think that the field --in the popular or hobbyist domain-- was taken a little more seriously. Something like the HERO 2000 "robot" was a pretty serious investment. I forget the price, but I am sure it was in the thousands. And it was a pretty decent platform to start a project from despite the limitations in computing that existed in the day.

Anyhow, on the topic of intelligence...to me this has nothing to do with behaving like a human unless you are trying to make a humanoid and that's a requirement. Even a humanoid doesn't necessarily have to behave 100% like a human. No, intelligence is more subtle than that. To me it has to do with understanding and adapting to surrounding conditions or the conditions of the problem to be solved. Communication doesn't have to be in spoken or written language. It can be fixed-function buttons. It doesn't matter.

Here's an example off the top of my head. I live in Southern California; we have big fires here. An R/C fire-fighting machine is one with no intelligence that a firefighter can remote-control to achieve telepresence and fight a fire. An intelligent, and hence robotic, fire-fighting machine is one that you could preemptively roll out of a truck throughout a neighborhood that is potentially under threat and TRUST to assess the level of risk as it develops and take measures to save the neighborhood. These are very different machines. One is intelligent, the other isn't. One is easy to build. The other is nearly impossible today.

In terms of implementation of intelligence in robots I am convinced that genetic/evolutionary computing is the answer. The approach has all the makings of producing very capable thinking machines --after all, every animal on this planet is the result of this process. All we have to do is learn how to implement it for our machines.


Reply to

I was in college so I was only a few years older.

I recently subscribed to the yahoo r2builders list because I've been considering building myself an R2. But as you say, without the AI, they will never be as cool as the character in the movie.

Clearly, the "real" robots are all the science fiction robots like Robby, B-9 (from Lost in Space), Gort, Huey, Dewey, Louie, R2-D2, C-3PO, Data, and Wall-E. The closer we can make our machines act like these guys, the more "robot" they become.

Yeah, I don't have any problem with the idea that the word robot should be used more like you want it used. But it also doesn't bother me that a sword-swinging machine is called a robot as well. :)

Yeah, as long as we have 4 billion years to wait for the design to get done!

Come to comp.ai.philosophy if you would like to waste endless hours debating AI with me. :)

I believe the solution to making machines act like humans (or any of the animals with brains that act somewhat intelligent) is to build a strong generic real time learning system trained by reinforcement.

The confusion most people have about AI is that they look at intelligent behavior and see lots and lots of complexity. Human behavior is clearly very complex. We fill entire libraries with books trying to describe all the different aspects of human behavior.

In engineering, the more complex a machine's behavior is, the more engineering work we have to put into it. Complex computer systems are filled with billions of lines of code these days, written by thousands of programmers. And even after all the engineering effort it takes to create these complex machines, they don't come close to acting intelligent.

Add up these two facts and you get the conclusion most people come to - the idea that creating a really intelligent robot is going to take a lot of hardware and a lot of engineering time to duplicate the complexity of what must exist in a human brain.

And some, like you, seem to have come to the conclusion that this is just too much work for a human to do (maybe it's just too complex for us to even understand), and as such, the only way to do it is to build a machine to do the work for us - and evolutionary computing fits the bill.

However, it will never work - at least not in that form. It's too slow.

What almost everyone fails to see is that intelligence is not about the behavior; it's about the ability to _learn_ behaviors and apply them correctly to new situations.

Solving AI is not about duplicating each and every _adult_ human behavior in a machine by writing lines of code for each behavior. It's about building strong, generic, behavior-learning systems.

AI will not be solved by adding more code to all the AI code we already have; it will be solved by figuring out how to make less code do more. We need strong, totally generic learning algorithms. These algorithms will be fed sensory data, but won't need to be customized for the type of data - you can feed any sensor data to them, and they will learn how to make use of it. And you can allow them to control any output - without customizing the system for the type of output being controlled - and they will learn how to make use of that as well.

And it will do it with one fairly simple, fairly straightforward, real-time temporal reinforcement learning algorithm.

The complexity of behavior comes not from the amount of code in the learning algorithm but from the size of the state space the learning algorithm is applied to (the size of the neural net it is working with, for example).

My definition of intelligence is actually "reinforcement learning", but almost no one understands what I mean by that.

What's interesting is that genetic algorithms are in fact reinforcement learning algorithms - they are just reinforcement learning applied to a machine design, as opposed to reinforcement learning applied to behavior generation. This, in my book, makes genetic algorithms examples of intelligence. So in that sense, if we used them to create AI for us, we would be using an intelligent machine to design and build other types of intelligent machines for us. But it's a type of intelligence that's so damn slow, it's pointless to even play with it for the job of solving AI - which is really a job of finding a new type of learning algorithm - and I don't believe a GA system has any hope of coming up with one fast enough to be of any use. We have other intelligent machines to work with that will solve the problem of finding the learning algorithms much, much faster - humans.

When we find these better generic learning algorithms, I think it will finally make robots into what we have seen in the movies. It will make it trivial to build intelligent robots, because all you have to do is drop one of these learning algorithms into the machine, feed it the sensor data you want, configure it for the number of effectors you want, write the code that generates the reward signal to define the machine's purpose, and then turn it on and wait for it to learn (and help it along by teaching it). Once you have one "educated" to do the things you need it to do, you can mass-produce it by copying the memory.

In order for the machine to do complex and intelligent things, it does need lots of complex code in it. But the point here is that the complex code is not (and cannot be) written by humans - it's too complex for a human to understand. It is written by the learning algorithm instead. And the code the learning algorithm "writes" for us is nothing like we would write. It's like the weights of a neural net - something the machine has to calculate from real data, not something a human is smart enough to calculate on their own. So the complexity of behavior is produced by "code" written by the learning algorithm.

This is actually very close to what you are thinking by saying we should use GA techniques to create the AI for us. In a sense, because GA techniques are reinforcement learning systems, using a reinforcement learning algorithm is sort of like using GA to "write the code" for us. But it's really a very different implementation than how the GA techniques work.

I think the type of learning algorithm I believe we need to solve AI is close to being found. People like Jeff Hawkins (On Intelligence) are basically working on finding such an algorithm, for example. Lots of people are looking in what I consider to be the right place, and making progress. I think it might show up in the next 5 years (I have a bet that I'll get it done in 8 years if someone else doesn't beat me to it) - but that's optimistic. Nonetheless, it's going to show up soon - within the next 25 to 30 years for sure, but I think a lot sooner. And when it does show up, all these cool robot kits will truly become ROBOTS instead of just R/C toys that are too stupid to know they've bumped into a wall.

And at the same time, I think people are going to be amazed and even stunned at how much intelligence can be created by even a tiny amount of computing power in a robot.

Reply to
Curt Welch

Don't think in terms of biological time. We are talking about electronic hardware here. At this very moment I am working with latest-generation Xilinx FPGAs that can run logic at 500MHz without a problem. There are other companies that claim 1.5GHz performance, and I know that both Xilinx and Altera have new (faster, denser) chips coming out next year.

GA evolution time, when implemented with performance in mind, is extremely fast. The issue is to have a "brain" with enough cells and inputs to be able to evolve something useful. In many ways, I see it as a Moore's Law challenge. When we get FPGAs or microprocessors with 10 times the density we have today, it is likely that very interesting things will start to happen in both robotics and AI.

I don't think we disagree; you just use different terminology to mean approximately the same thing.


Reply to

Yeah, I think we are looking at conceptually very similar ideas at the high level, but the implementation and results, I think, are different - unless there is work in GA that I don't know about (which is very possible).

With the GA approach, you must have a well defined fitness function. There is no well defined fitness function for general AI as far as I can tell - other than the one evolution itself used - survival in the real world. That is too vague and would take too long for any GA search to find something closer to "intelligent".

The GA approach only works well if you can be very precise about what behavior you want the hardware to produce and can easily build a fitness function to test how close the hardware is to producing the correct behavior. It also typically requires that you create simulations of the problems you are trying to solve so it can run through millions of trials very quickly. If the goal is to produce hardware that makes a hexapod walk on a flat surface, a GA search for the code can be run with the simulator going far faster than real time - so you can get years of virtual evolution training in seconds of real time.
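To make the explicit-fitness-function requirement concrete, here's a toy GA in Python. The fitness function is a deliberately trivial stand-in (count the 1-bits in a genome) rather than any hexapod simulator; the point is that selection, crossover and mutation only get traction because fitness is well defined and cheap to evaluate millions of times.

```python
import random

random.seed(0)

def fitness(genome):
    """Explicit, cheap fitness function: number of 1-bits.
    A stand-in for 'how close is the hardware to the desired behavior'."""
    return sum(genome)

def evolve(pop_size=20, genome_len=16, generations=60, mut_rate=0.05):
    # Random initial population of bit-string genomes.
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]               # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mut_rate) for bit in child]  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Swap in a slow, un-simulatable fitness function (a real robot stumbling around a barn) and the same loop becomes hopeless, which is the rat-catching objection in the next paragraph.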

But what happens when the problem is so complex you can't simulate it and the robot has to learn in real time in real conditions? For example, what if you want to train your robot to hunt and catch rats in a barn? Or to train a bird-robot to fly? Or to not only fly, but to catch birds as it's flying? Do you think you can use a GA approach to build robots like that?

General reinforcement learning, however, makes no attempt to specify the behavior. It only comments on the results. If you can build hardware that tests for the desired result and generates the reward signal, the reinforcement learning system figures out the correct behaviors on its own.

For strong reinforcement learning to work, it has to include a function that estimates expected future rewards. This function in effect becomes the system's fitness function. In other words, with reinforcement learning, the system not only has to learn useful behaviors, it has to learn its own fitness function as well, which is then used to evaluate and select behaviors.
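For anyone who wants to see that in miniature, here's a tiny tabular Q-learning sketch in Python (a made-up five-cell corridor, not any real robot): the Q table is exactly the learned estimate of expected future reward, and behaviors are then selected against it.

```python
import random

random.seed(1)

# Toy corridor: states 0..4, reward only for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left, step right

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def greedy(s):
    """Pick the action with the highest estimated future reward (random tie-break)."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for _ in range(200):                      # training episodes
    s = 0
    for _ in range(100):                  # step limit per episode
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q(s,a) estimates expected future reward: the learned "fitness
        # function" that is used to evaluate and select behaviors.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

policy = [greedy(s) for s in range(GOAL)]
print(policy)  # learned: step right in every state
```

Nothing here specifies the behavior "walk right"; only the reward comments on results, and the value estimates do the selecting.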

The good thing about GA approaches, is that they are useful today. We know how to do useful things with them. For the most part, reinforcement learning algorithms are not useful today - or are useful in only very limited domains and applications. This fact seems to have drawn more people to play with GA. But the solution to AI is not GA (unless you warp your definition of GA so much you end up with a true reinforcement learning algorithm).

Reply to
Curt Welch

A robot is a device built to do some task formerly reserved for living things. Usually the living things in question are humans, but certainly not always. Literally any device, with or without onboard intelligence, that was designed to do a task done only by living things before the device in question was built can legitimately be called a robot.

Now... if you're building a device to do something that has been done by a machine before, you haven't built a robot. You've just built a machine. But if you are taking some function out of the biological world and putting it in the mechanical world, (for the first time, ever)... you are doing so with a robot.

I'd suspect that's why Mythbusters like to call their stuff robots... they arrogantly believe that they are the first to transfer whatever function their contraption is supposed to mimic from the biological to the mechanical world.

There. That ought to settle any debate.

Reply to

PolyTech Forum website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.