What do you want your robot to do?

Skip the offer, surprise her with it, and show us pictures of your machine :-)

Rich

Reply to
aiiadict

With a basic neural net the bot can try different algorithms, paint a picture based on each one, then get a judgement of how good it is from the public. Then it generates another algorithm, draws, and gets votes again, repeating to infinity if need be. Maybe a simple yes/maybe/no button on a remote that can also take votes from the net. Eventually it would produce quite pretty, possibly saleable images.
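Something like this could be prototyped in an afternoon. Here's a rough sketch of the vote-driven loop, assuming the "algorithm" is just a parameter vector driving a procedural painter and the yes/maybe/no button is keyboard input; render() and mutate() are made-up names for illustration, not any real library:

import numpy as np
from PIL import Image

RNG = np.random.default_rng()

def render(params, size=128):
    # Paint an image from a 12-number "algorithm": one sinusoidal
    # colour field per channel.
    y, x = np.mgrid[0:size, 0:size] / size
    img = np.zeros((size, size, 3))
    for c in range(3):
        amp, fx, fy, phase = params[c * 4: c * 4 + 4]
        img[..., c] = 0.5 + 0.5 * amp * np.sin(2 * np.pi * (fx * x + fy * y) + phase)
    return (img.clip(0, 1) * 255).astype(np.uint8)

def mutate(params, scale=0.3):
    # Generate the next candidate algorithm from the current best.
    return params + RNG.normal(0.0, scale, params.shape)

params, best = RNG.uniform(-1, 1, 12), -1
for gen in range(10):                       # repeat to infinity if need be
    candidate = mutate(params)
    Image.fromarray(render(candidate)).save(f"gen_{gen}.png")
    vote = input(f"gen_{gen}.png - yes/maybe/no? ").strip().lower()
    score = {"yes": 2, "maybe": 1, "no": 0}.get(vote, 0)
    if score >= best:                       # keep what the public liked
        params, best = candidate, score

Votes over the net would just replace input() with whatever tallies the remote buttons.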

Reply to
jim dorey

I'm not sure how old you are, but there was a little robot "turtle" that did this back in the 1980s.

Reply to
mlw

Not a pen, but in the 1940s and early 50s Grey Walter used candles on top of his "turtle" robots (he coined the phrase) and used long exposures to capture their movements. The candles also served as light sensors for the robots to follow.

Art and function together.

formatting link
(I'm pretty sure that Elsie and Elmer, the subjects of these photos, are on display at the Smithsonian.)

-- Gordon

Reply to
Gordon McComb

My comment wasn't about what was practical, but what is possible, given enough resources. Never say never. The fact that they don't have a dishwashing robot has more to do with realities of labor than technological stagnation. It's far cheaper to hire someone at $6 an hour to load a dishwashing machine, than to construct some weird robot with tentacles that can do it all.

Even the "glorified dustbuster" is more of a curio than an effective vacuum, because making it into a real vacuum would be expensive. People buy it to say they have a robot cleaning their house, though in reality isn't not a very good clean. The Roomba lacks the vacuum suction or strong beater bar to do a thorough job.

This doesn't mean a robot *can't* do this, but the cost of the thing doesn't make it a worthwhile investment for consumers. The best cleaning robot I've seen so far is made for hotels, and costs upwards of $4-5K. It looks more like a snake than a traditional robot.

Why limit the discussion to an autonomous mobile robot? Is that the only kind of robot there is, or the only type that's useful? Could it be that specialty robotic applications are not only more functional, they're more affordable? You don't see robots wandering around a Ford plant, but there are hundreds along the line. Car quality wouldn't be where it is today without these robots.

I do agree that the notion of a truly practical autonomous mobile robot, like an R2D2 or Rosie, is more science fiction than fact. We build them because we like to dream, not because they represent the best solution to a problem. But we do build them, and find out they are very limited. Part of it is technology, part of it is cost. But part of it is trying to use a hammer to remove a screw.

Ah, but your bread machine/toaster doesn't also wash the dishes or meter out liquids for the perfect martini. There is little difference in capability between a bread machine and a toaster, even though they perform distinct functions; the bread-making part has a motor to mix the ingredients and a timer to let the dough rise, but they're both ovens. This combination makes sense. I wouldn't say the same of the combinations expected of your typical autonomous mobile robot. Having a robot serve drinks and vacuum the floor and clean the toilet is a lot to ask of any machine, at any price affordable by those who aren't Bill Gates.

-- Gordon

Reply to
Gordon McComb

Of course the idea isn't new. It is one of the things you can do with the modern lego robot. Your current robot base is essentially a larger and more expensive lego robot with on board computing power instead of a wireless connection to a PC.

"What do you want *your* robot to do" was the question.

I have answered most of that question in previous posts; your robot doesn't have what I want, and you have no intention of giving it that capability. You have simply dismissed my needs. If a product doesn't fill someone's needs they cannot use it. If you want to move furniture, a sports car will not be useful no matter how good it is in other respects. It will fail to meet the goals of that particular user.

Apart from giving your robot entertaining behaviors it has no use at all to the general public. It might have some educational value to a student learning electronics and programming although I think a cheaper smaller robot would do just as well.

When you write about using some sort of triangulation navigation with a wireless router, I think you are missing the big picture about what kind of system a "real" robot with "real" intelligence is about.

Essentially you are trying to provide an electronic railway track. It is a practical solution to moving a platform around where a physical rail track may not be acceptable, but it is still the same thing. It is not addressing how animals navigate or interact with their environment to achieve their goals, how they think for themselves and adapt to unexpected situations.

This is the other side of robotics. It was the point I was making about the need or not for a PID system. What if the PID system fails on your robot? A human will adapt and still achieve the desired goal with a simple on/off control of the motors. This is the kind of intelligence desirable, I think, in a robot. It goes beyond the electronics or the ability to program a multitasking operating system.
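To make that concrete, here is a minimal sketch (in Python, with hypothetical read_speed/set_motor stand-ins for the robot's real I/O) of a speed loop that degrades to crude on/off control when the PID path fails, rather than giving up on the goal:

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        deriv = (error - self.prev_err) / dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def control_speed(target, read_speed, set_motor, pid, dt=0.02):
    error = target - read_speed()
    try:
        set_motor(pid.step(error, dt))    # normal, tuned behaviour
    except Exception:
        # PID (or its sensor) failed: adapt and still achieve the
        # goal with crude on/off control, like a human would.
        set_motor(1.0 if error > 0 else 0.0)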

Regards,

John Casey

Reply to
JGCASEY

I have never had any interest in these track-schemes.

Just because you can't see the track doesn't make it any better.

A model train on a track is not interesting. And it does not present a programming problem that needs to be solved. To get the train to go around the track, you turn on the motor. It goes around the track.

Take away the track, and what do you have? A problem! And it is so much fun to solve. You start looking around on the net and can't find much on the subject. Ask a few questions, and people let you know the terms used in navigation. You are directed to papers written by PhDs and Masters students. A bit more digging and you begin to understand all of it.

Now to code! Do you program a simulator, or do you test the code on the robot?

In the end, you get a robot that has enough "intelligence" to navigate by itself. To map out a house. And it can go to a room you tell it to go to. You can even block its path with a box, and it will figure out how to get around it.

I got a slot-car track when I was a child. I played with it one day.

I built a robot when I grew up. I played with it for years. I still do.

Rich

Reply to
aiiadict


Did *it* figure out how to get around the obstacle or did *you* give it an algorithm that usually achieves that goal?

Can you put your robot in *any* home and tell it to go to (find) the kitchen/fridge/beer and to meet you in the lounge room?

Mapping out an area by filling in a grid with go/no_go cells is a practical method but not one a human would use. We use abstract schemas not accurate maps of our world. This is memory efficient and allows generalization.

John

Reply to
JGCASEY

Figure out how to get around an obstacle: if it is a new obstacle, say a box in the hallway, a side effect of the navigation program is that it will update its map and figure out a way to get around it. I didn't have to program "avoid **NEW** obstacle".

I can put it in any home, give it some time to map it out, and name certain areas (i.e., this square = kitchen, this square = lounge room).

I'd have to define the fridge as well. My robot cannot open the door on the fridge by itself. It needs help from a solenoid.

My vision software isn't the best, so it would take a while to grab the beer.

It has no problem "meeting" you anywhere. Just tell it coordinates or room name.

No, it isn't human. :-)

Humans don't use "flood fills" to navigate space.

If you consider the flood fill's leading edge to be the "mind's eye" of your own mind thinking of a way to get from your house to the post office, then I think it certainly resembles the way we think.

I see in my mind getting up, getting in the car, driving down the road, the turns I have to make. It is all in order.

As the navigation program flood fills its memory of space to find a destination, the flood fill stops at walls... That is disregarding that option, or NOT thinking about taking a sharp turn into a wall and bumping your head. The flood continues down travelable paths.

It is better at finding the quickest route for sure.
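For anyone curious what that flood fill looks like in code, here's a minimal sketch of the idea on an occupancy grid (0 = free, 1 = wall are my assumptions; the real map of course comes from the robot's sensors). The flood spreads from the goal, stops at walls, and the path just follows the labels downhill:

from collections import deque

def wavefront(grid, goal):
    # Label every free cell with its step distance from the goal.
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    frontier = deque([goal])
    while frontier:                              # the flood's leading edge
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1    # the flood stops at walls
                frontier.append((nr, nc))
    return dist

def plan_path(grid, start, goal):
    # Walk downhill along the distance labels from start to goal.
    dist = wavefront(grid, goal)
    if dist[start[0]][start[1]] is None:
        return None                              # no travelable path
    def label(p):
        r, c = p
        ok = 0 <= r < len(grid) and 0 <= c < len(grid[0])
        return dist[r][c] if ok and dist[r][c] is not None else float("inf")
    route, (r, c) = [start], start
    while (r, c) != goal:
        r, c = min(((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)), key=label)
        route.append((r, c))
    return route

Blocking the path with a box is then just a matter of marking those cells occupied and re-running the flood; "avoid NEW obstacle" falls out for free.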

Rich

Reply to
aiiadict

robotic lawnmower and edger

Haven't seen a commercial one yet. Probably due to liability issues.

I'm wrong; there are supposedly a few.

formatting link
It would give you something to use triangulation and GPS with.

The question is steering. It would have to be self-propelled but could do away with large batteries.

What about as a guide or doorman for a trade fair / conference centre / museum? Surely there would be a small market for this? It could interact with people, especially kids.

People see this at a museum, science fair, etc., want one, and buy the cheaper $500 version.

formatting link
maybe a few ideas here
formatting link
Alex

Reply to
Alex Gibson

No, I assume you programmed it to avoid *any* obstacle by applying some algorithm. Fill in the cell(s) as "occupied" and compute a new path? It may be a *new obstacle* but it is not a *new problem*.

This is the heart of the problem AI faces in that machines don't think, all the thinking is done by us. Machines simply automate it for us. We have to insert the behaviours and when they fail we have to step in and fix it. This is why we haven't any "intelligent" machines and maybe never will.

And wouldn't it be neat if you could put it in any home and it was able to recognize that this room had kitchen properties and thus you didn't have to name certain areas?

Until a robot has "arms" it is rather limited to what extent it can manipulate the world.

This is where English like commands might be an interesting problem?

"Get me a beer please."

The robot would have to translate that into a set of actions and decide what to do if it returns to find you have moved. Or would your moving be another "bug" you had to fix because machines can't really "think" and you didn't take your moving into account the first time?
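Just to make the "set of actions" idea concrete, a sketch like this (all the names here — goto, fetch, locate_person, the map entries — are hypothetical glue, not anything Rich has described) shows both the translation and the you-moved check treated as re-sensing rather than a bug fix:

def get_me_a_beer(robot, home_map):
    # Translate the English request into a fixed action sequence.
    delivery = robot.locate_person()          # where you are right now
    plan = [("goto", home_map["kitchen"]),
            ("goto", home_map["fridge"]),
            ("fetch", "beer"),
            ("goto", delivery)]
    for action, target in plan:
        robot.goto(target) if action == "goto" else robot.fetch(target)
    # If you moved while it was away, treat that as a fresh
    # observation, not a "bug" to be patched later.
    if robot.locate_person() != delivery:
        robot.goto(robot.locate_person())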

Better than what other methods?

Just as you found playing with trains boring compared with robots, I find robots boring when it is limited to talking about the same old hardware subjects when the exciting bit is all in the software (or its hardware implementation).

You struck me as someone who likes to think beyond the robot's physical mobile base requirements?

John

Reply to
JGCASEY

Can you set your four year old child down in "any home" and have it find the refrigerator? Does a four year old child know what a "Beer" is? You need to start thinking about these robots as children and start understanding how they learn.

The blob vision systems you guys talk about sound just like how a baby sees things. The same goes for most voice recognition and motion control. We have established a starting point from which to work, not the end.

All this reprogramming and rewriting is the same thing that one must go through in raising a child. The child makes mistakes, gross mistakes at first, and then we refine their abilities to the point where they can shoot a basketball, feed themselves, and have semi-intelligent conversations.

I'm planning on working with the same robot, possibly for years, to build up its programming instead of building and rebuilding, writing and rewriting. I'm not saying that I won't make changes to the hardware along the way, just gradual changes.

BTW the flood fill mapping technique is exactly the same way I find my way around. It's probably the same method you use, just that you don't pay attention to it any more. When you were a kid you had to figure out that a wall was not a thing to walk on. I can't tell you how many times I've had to tell my kids to literally stop climbing the walls.

I think that the robot just needs to keep running and learning. Eventually "getting a beer" will be exactly that: "child's play".

Eljin

Reply to
Eljin

all true.

I started a thread here about "learning" programs but nobody seemed interested in contributing.

very neat. My ideas:

You have to have the robot learn this. Not program it in.

Robot, this item in your vision (point with the mouse; we'll skip actually pointing at the item in real life for now) is the fridge.

The robot has the coordinates of what the mouse is pointing at. It flood fills the bitmap of the picture it just took and finds the edges of the "fridge". It derives endpoints for lines, dimensions, colors, and features (a vertical line halfway up the right side).

It asks you "is this the fridge?" and highlights the area of its picture that it thinks is the fridge. Assume for now it is correct.

Now it asks "what is this?" and points to the vertical line halfway up the right side. You tell it, "That is the handle."
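A sketch of that flood fill step, for the curious: grow a region outward from the pixel under the mouse, keeping neighbours of similar colour. The tolerance and the 4-connected fill are my assumptions; the edges, dimensions, and features would then be derived from the returned mask:

import numpy as np
from collections import deque

def segment_from_seed(image, seed, tol=30):
    # Return a boolean mask of the region around `seed` (row, col).
    rows, cols = image.shape[:2]
    ref = image[seed].astype(int)            # colour under the mouse
    mask = np.zeros((rows, cols), dtype=bool)
    mask[seed] = True
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols and not mask[nr, nc]
                    and np.abs(image[nr, nc].astype(int) - ref).sum() < tol):
                mask[nr, nc] = True          # similar colour: same object
                frontier.append((nr, nc))
    return mask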

The only way we can get the software to think like us is to program it to learn. Then we teach it.

Mine has an arm. It's just not strong enough to open the fridge. I guess I could fix a handle onto the bottom of the fridge door, hook its gripper on the handle, and back the whole robot up.

Understanding of language needs to be programmed... never mind. You have to program it to listen, watch, and associate.

hear "door". watch vision: theres a rectangle in front of me. rectangle in vision = door.

Same here. Hardware is done. People are copying each other on hardware. Software is another story. You have to actually think about it to get something neat to watch as it executes.

My base = mobilization of computing power. I have cheap motors and horrible encoders, all hacked together from some damaged aluminum tubing. I spent about one day on it. All it does is allow me to test software.

The software I've spent hundreds of hours on.

Rich

Reply to
aiiadict
[...]

I think I did? Fri, Mar 18 2005, Subject: Re: machine learning?

Of course in order to learn you need a sensory system of sufficient sophistication to provide something worth learning :)

Mmmm. A lot of *what* it does without the *how*.

I think we have a lot of assumptions here?

Why would it ask that in particular? Later it might mistake a fridge magnet for a handle. A lot of things look the same in an image; it can depend on context. For example, a circle could be a wheel, a clock, a ball, and so on...

Vision is really hard. So despite what I wrote about "railway lines", I think we have to structure our visual world for the robot until it can be programmed to see better. In the case of the fridge, put a high-contrast "fridge" symbol on it. That will identify and orientate it relative to the robot.
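As a sketch of how cheap that structured-world approach can be: threshold a grayscale webcam frame and take the biggest dark blob as the marker. The threshold value and the one-marker-in-view assumption are mine:

import numpy as np
from scipy import ndimage

def find_marker(gray, thresh=60):
    # Return (row, col) centroid of the biggest dark blob, or None.
    dark = gray < thresh
    labels, n = ndimage.label(dark)            # connected dark regions
    if n == 0:
        return None
    sizes = ndimage.sum(dark, labels, range(1, n + 1))
    biggest = int(np.argmax(sizes)) + 1
    return ndimage.center_of_mass(dark, labels, biggest)

The centroid's horizontal offset from the image centre then gives the bearing to steer by.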

The problem is how to write a "learning" program.

"Rectangle" can mean many things not just door.

Although I find the hardware boring, I don't think it's unimportant, just expensive. I try to get the best affordable hardware. I made sure I had a solid robot base from which to work. No noisy, flimsy gears or motors.

The same goes for choosing the hardware the software runs on. It is important that an idea is not stymied by a lack of computing speed or memory.

Practical advancement in robotics is a partnership between the hardware, its associated circuitry, and any higher-level AI software. It depends on your interests, abilities, financial means, and how you want to divide up your time.

John

Reply to
JGCASEY

There was an article a few years ago about someone, I forget the details, but he ended up programming an industrial robot to tentatively touch its environment before committing to an action. This behaviour was modeled on his wife's way of interacting with the world, since she was blind. So instead of the robot slapping down an item in a storage spot regardless, it would first nudge the storage location to see if it was already occupied, then store the item if the slot was free. In this way it avoided making huge blunders, and without overengineering the environment or relying on vision.


Reply to
Tim Polmear


This is not unlike you poking around in the dark yourself, or probably the way most bugs and nocturnal mammals interact with their environments in daily life. Who says your robot needs a $10K vision system and an IQ of 170 to get along?

Along this line, I recently started reading "Legged Robots That Balance" by Marc Raibert, 1986. Among other things, he describes a quadruped walker built by Hirose in 1983 that could climb up and over obstacles. It used a touch sensor and a simple retract-and-lift-leg algorithm to sense and negotiate the obstacles, plus a comparable routine for walking down the back sides of the obstacles. An early form of reactive control. You don't need complex sensors and planning algorithms to do everything, when simple sensors and feedback might suffice. You can't do "everything" this way, but it forms a good basis for the other things.
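In pseudocode-ish Python the retract-and-lift rule is about this small (the leg interface here is a hypothetical stand-in; Raibert's book has the real details):

def step_over(leg, max_lift=5):
    # Swing a leg forward, lifting higher each time it bumps an obstacle.
    lift = 1
    while lift <= max_lift:
        leg.raise_to(lift)
        leg.swing_forward()
        if not leg.touch_sensor():     # clear: put the foot down
            leg.lower_until_contact()
            return True
        leg.retract()                  # bumped: pull back and lift more
        lift += 1
    return False                       # obstacle taller than the leg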

Reply to
dan

There was an article on Catalyst (ABC Australia) about that sort of thing. The article is called Fembots. It is available at:

formatting link
The robot was made by a Sydney-based company called Kadence Photonics (there is a link to their website in the article).

Regards

Andrew Wagstaff

Reply to
Andrew Wagstaff

Giving a robot a sense of touch anywhere approaching that available to a human being isn't that easy. Giving a robot a sense of vision that would be enough for a human to use means plugging a web cam into a USB port.

The storage robot most likely didn't have to feel its way around the factory or recognize anything except that the space was clear.

Give your robot hand the same sense of touch that a human has and then tell me it is simple.

We can put our hands in our pocket and identify what is in there. We can feel just part of an object and identify it completely. We can manipulate multiple small objects in our hand with ease. The idea that our sense of touch is simple compared with vision isn't true.

Just as you can have a simple touch system you can have a simple visual system.

Vision is useful to find an object, but to manipulate an object requires a sense of touch and force feedback. We can sense the texture, shape and weight of an object in an instant and know how to move our fingers and hand to do whatever we want with it.

A major computational task I would think?
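Even crude force feedback buys a lot, though. A minimal sketch, with made-up gripper calls, of "close until you feel resistance, then stop":

import time

def grasp(gripper, target_force=2.0, step=0.5, timeout=5.0):
    start = time.time()
    while time.time() - start < timeout:
        if gripper.read_force() >= target_force:
            return True                # firm grip: stop squeezing
        gripper.close_by(step)         # tighten a little and re-check
        time.sleep(0.05)
    return False                       # never met resistance: missed it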

It may be true that a simple reflex system will enable something like a cockroach to scuttle across unpredictable terrain to the nearest dark spot. But is that all you want your robot to do?

If you want your robot to have an arm/hand that can manipulate objects the way we can, that is a different ball game. Work is being done on all this, but it is outside the budget of the average hobby robot builder, I would suggest.

Your basic robot needs its sonar/infrared obstacle detectors and whiskers/bumpers should they fail. But a web cam is cheap and offers a simple vision system with the possibility of a more complex visual system in the future.

The sense of touch that the fembots have will become part of the future of robots. If I ever get around to building a robotic arm I would certainly want it to have a sensitive sense of touch.

As for female multitasking, it does have a cost.

I like to explain these differences by the idea that men were the hunters and women nurturers :)

- John Casey

Reply to
JGCASEY

This is basically true, but the sensory/cognition ratio is reversed for the two. It may be more difficult to construct a "skin" with millions of tiny sensors on it, but it's relatively easy to use the information from those sensors.

Conversely, a video camera can provide a highly detailed view of the robot's environment. But analyzing that view is anything but simple. In fact, science has barely cracked it.

In school, we learned the simplistic side of our body's senses: "eyes" for seeing, "ears" for hearing, and so forth. The other half of *sensation*, which is analysis, is given scant attention. The two can't be separated. Some 30 percent of the human brain is dedicated to vision processing. Given the complexity of the human visual cortex, the dorsal stream, and the various other lobes, there is some doubt we'd be able to master its subtleties before an affordable artificial skin could be developed.

This isn't to say that mimicking the human vision system is the best way to approach robotic vision, but it gives us a higher confidence that we know what the robot is seeing, and how it will react.

In the meantime, they are using simple motion analysis for things like red light cameras. One step at a time...
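Frame differencing really is that simple at its core. A minimal sketch, assuming grayscale frames as numpy arrays and thresholds picked by hand:

import numpy as np

def motion_detected(prev_frame, frame, pixel_thresh=25, area_thresh=0.01):
    # True if enough pixels changed between two grayscale frames.
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    changed = (diff > pixel_thresh).mean()   # fraction of moving pixels
    return changed > area_thresh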

-- Gordon

Reply to
Gordon McComb
