Robot aesthetics/personality

I have been building hobby robots for quite a while now, and I have discovered in the last couple of years that the way the robot looks has a major impact not only on how others see the robot, but also on how I feel about it and how much I enjoy working on it. The more I can personify the robot, the more I "like" it. My early robots were very utilitarian:

[link]
They were interesting to build and program, but not very engaging. Later I started making them "cuter":
[link]
This was by accident at first: the placement of the sensors looked like eyes, etc. But after I realized the effect it had on people's reactions to the robot, I started purposely designing for this kind of look.

Even more recently I have been attracted to the idea of adding emotion/personality to a robot to further the anthropomorphizing that seems to be taking place. I have studied how to use the limited resources available to a hobby robot to express emotion and personality:

[link]
My ultimate goal is a Deskpet. A small robot that lives on your desk and invites interaction and emotional attachment. It would play games, react to its surroundings, and interact with the user, but not require the kind of care and support needed by, say, a Tamagotchi.

My whole point behind that extended introduction was to foster some discussion of robot aesthetics and personality. What have you done to make your robots more attractive? Is anyone in the group doing any research/building along these lines? Any ideas or comments to share?

Reply to
Robotguy

Excellent observations, and right on.

While I do feel that the utilitarian style is perfectly good for us experimenters, to entice those outside our craft it's often necessary to endow the robot with some kind of emotional pull. I saw that very recently when I showed a video of the Big Dog robot to friends and family. They all felt "sorry" for it when the person tried to kick it to make it fall down (it didn't, which was the purpose of the demonstration).

Along similar lines, Scott Savage (OOPic guy) and robot designer Ted Macy have been working on an interesting project they are just about ready to ship, the ooBug. I haven't seen one yet personally, but it looks like its feature set is in line with your thinking, perhaps close to your Deskpet. It has enough processing power and sensors already on board to do things like exploring and interaction. (Doesn't have a camera, but I wonder how hard it would be to add a small one.)

Here's a video I just found of it doing a light-following behavior. It's all software:

[link]
It has a bio-look to it, like your ralph5.jpg, which I've found to be particularly important for the young ones. It REALLY gets them interested.

I also think that sound feedback is very important. Not so much talking to the robot (though it helps) but music, sounds, and even voice responses. I know this is one of the first things people comment on with the Robosapien, for example.

As for your question regarding what I've done to give my robots more personality and aesthetics, I have to admit it's basically nada! However, I think we're at a crossroads where this has become important to the ongoing acceptance of robotic creatures in the home, office, and school.

-- Gordon

Reply to
Gordon McComb

I think the aesthetics aspect is easier to approach and can be solved more quickly. To me, the aesthetics factor strikes first and hardest. Maybe that's due to great movies like Short Circuit and the upcoming WALL-E.

I realized the importance of aesthetic feel by chance: on one of my mobile robots I mounted the ultrasonic sensor as a "head", which was rotated by a stepper (the "neck"). I didn't do that for aesthetics, but to save on sensors and use only one sensor pair to take distance measurements around the robot (without rotating the full, bulky robot).

The metallic transducers, when rotated, made it appear as if the robot was turning his head and looking around for the best path to take. That was the function anyway, but the aesthetics made it all more realistic, and beautiful to look at.
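In case anyone wants to try the same trick, here's a rough sketch in C of sweeping one sensor across a few headings and picking the clearest path. The helper names (step_neck_to, read_distance_cm) are just placeholders for whatever stepper driver and ultrasonic code you actually use, not any particular library:

#include <stdint.h>

#define NUM_HEADINGS 5

/* headings in degrees, 0 = straight ahead */
static const int16_t headings[NUM_HEADINGS] = { -60, -30, 0, 30, 60 };

extern void     step_neck_to(int16_t degrees);  /* move the "neck" stepper */
extern uint16_t read_distance_cm(void);         /* ping the ultrasonic pair */

/* Sweep the head and return the heading (degrees) with the most open space. */
int16_t find_clearest_heading(void)
{
    uint16_t best_dist = 0;
    int16_t  best_heading = 0;

    for (int i = 0; i < NUM_HEADINGS; i++) {
        step_neck_to(headings[i]);
        uint16_t d = read_distance_cm();
        if (d > best_dist) {
            best_dist = d;
            best_heading = headings[i];
        }
    }
    step_neck_to(0);    /* look straight ahead again */
    return best_heading;
}

The nice side effect is exactly the one described above: the robot visibly "looks around" before it moves.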

For personality, I don't know how far we can go at present.

The ideas in your notebook are great, but those effects have to be generated by some stimuli, which are either pre-programmed, random, or random based on time (like if the robot doesn't get touched for a while, he gets sad). I'm trying to think about and experiment with how else we can make it more realistic, given the current state of hobby processors and size limitations (and without attempting to open a university AI lab, read: MIT Media Lab). I've found the more sensors the better. For example, using the info from just one IR distance sensor, we could make it smile when the distance reduces suddenly. The distance reduces suddenly, in general, when a human or a ball approaches... no one is going to throw a chair at a robot... so smiling seems a nice response.
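To make that one-sensor example concrete, here's a minimal sketch in C. read_ir_distance_cm() and the face routines are invented placeholder names, not any particular sensor library:

#include <stdint.h>

#define APPROACH_THRESHOLD_CM 15   /* how big a drop counts as "sudden" */

extern uint16_t read_ir_distance_cm(void);  /* one IR distance sensor */
extern void     show_smile(void);
extern void     show_neutral_face(void);
extern void     delay_ms(uint16_t ms);

void emotion_loop(void)
{
    uint16_t last = read_ir_distance_cm();

    for (;;) {
        uint16_t now = read_ir_distance_cm();

        /* Something closed in quickly, probably a person or a ball. */
        if (last > now && (last - now) > APPROACH_THRESHOLD_CM)
            show_smile();
        else
            show_neutral_face();

        last = now;
        delay_ms(100);   /* roughly 10 samples per second */
    }
}

The same pattern works for the time-based stimuli too: compare a "last touched" timestamp against the clock instead of two distance readings.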

some thoughts..

Best Regards Vivek

Reply to
Vivek

Agreed. However, all effects, even in humans, are generated by stimuli, whether internal or external.

My current thought is to make sure the robot has a form of internal feedback (food=good, bump_sensor=bad, etc.) and goals (increase overall happiness, or get as much attention from its owner as possible), and then add a learning algorithm that finds a way to attain these goals, including using expressions of personality. Initially the actions would be somewhat random, but they could be weighted by "reflexive" behaviors or by some sort of "personality weighting" (moody, happy, morose, levelheaded, etc.). As the algorithm got feedback it could control, for instance, what the robot does when he doesn't get touched for a while, by choosing from a list of actions that have been shown to lead from his current state toward the current goal state (get attention). Obviously this would require loads of memory, but it shouldn't be too tough to program for a decently small set of states and goals.
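Here's a rough sketch in C of the shape I have in mind. The states, actions, and reward signal are all invented just for illustration, and weighted-random choice plus weight-nudging is only one simple way to do the learning:

#include <stdint.h>
#include <stdlib.h>

enum { STATE_IGNORED, STATE_PETTED, STATE_BUMPED, NUM_STATES };
enum { ACT_BEEP, ACT_WIGGLE, ACT_SAD_FACE, ACT_SLEEP, NUM_ACTIONS };

/* Action weights per state; starting them unequal would be one way
   to give the robot a built-in "personality weighting". */
static uint8_t weight[NUM_STATES][NUM_ACTIONS] = {
    { 10, 10, 10, 10 },
    { 10, 10, 10, 10 },
    { 10, 10, 10, 10 },
};

/* Pick an action for the current state by weighted random choice. */
int choose_action(int state)
{
    uint16_t total = 0;
    for (int a = 0; a < NUM_ACTIONS; a++) total += weight[state][a];

    uint16_t r = rand() % total;
    for (int a = 0; a < NUM_ACTIONS; a++) {
        if (r < weight[state][a]) return a;
        r -= weight[state][a];
    }
    return ACT_SLEEP;   /* shouldn't be reached */
}

/* Feedback: +1 if the action led toward the goal (attention, "food"),
   -1 if it led to something bad (bump sensor, still being ignored). */
void learn(int state, int action, int reward)
{
    if (reward > 0 && weight[state][action] < 250)
        weight[state][action] += 5;
    else if (reward < 0 && weight[state][action] > 5)
        weight[state][action] -= 5;
}

The toy table above is tiny; the memory really goes when you add more states, actions, and history tracking.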

Did that make any sense?

Reply to
Robotguy

Yes, it all makes sense. It also sounds practical for a tiny microcontroller (say 16-bit, with a few hundred kB of flash, and some data EEPROM to save the "learned goals", or part of the flash used as data-flash if the chip allows), if the set of inputs, actions, and goals is limited to small sets. I think this has been implemented to some extent on the Aibo, etc.

Reply to
Vivek
