Developing a robot worth buying

I agree, up to a point, but I also have a real-world counterexample: the AI used to animate the opponents in modern computer games is getting quite sophisticated. I see no reason why a robot can't attain the same level of intelligence.

So you keep saying. In the real world, things don't always work out that way. Look at aviation from 1900 to 1920, for example. There was plenty of open discussion about aircraft design going on. Those with answers to "how can man fly?" did not always keep those answers secret. Many people had part of the answer, or had good ideas but lacked the resources to turn them into aircraft.

Reply to
Guy Macon

Maybe that's what you need, then: a robot that can be programmed like a character in a game. This is a subject for a thread of its own, I think.

JC

Reply to
JGCASEY

An excellent idea. If there is a gap in the hobby market to be plugged, I think that would be it. In my spare time I have been putting together a robot for the primary purpose of having a platform to write intelligence programs on, and I found very few easy platforms. I ended up going with a Vex Robotics set and a laptop. The Lego Mindstorms have an easy-to-work-with interface but are limited in their strength and in their sensors. The Vex Robotics set

formatting link

is impressive, but so far the programming sucks, since they haven't come out with their programming module. They will in August, though... But I think there is a need for robots for programmers: people who like C++ but not designing motor drivers and controllers. It could be just a simple robot with three wheels, a couple of arms, and a couple of web cams. You could make them fairly inexpensive if you were able to do it all with just a microcontroller or an FPGA. You'd just have to make the cameras wireless. I have a friend currently looking into hooking up some 2.4 GHz transmitters to a microcontroller. If we can hook a couple of cheap web cams up to that, we'll have an easy single controller for both the vision and the controls of the robot, all transmitting over 2.4 GHz WLAN. That way all the data can be processed by a computer plugged into your wall. If you had easy function calls worked in for moving the arms, reading the sensors, and moving the robot, this would be a seriously convenient tool. I haven't found any platform like this yet, making it pretty darn tough for someone with little knowledge of (or desire for) the low-level electronics to get into high-end robotics. Wait, maybe this isn't a good idea. Let's keep robotics to just those of us who do both :).
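Something like this is what I'm picturing for the programmer's side. All of these names are made up (no such library exists yet); the bodies are stubs standing in for the wireless link:

    // A sketch of the "robot for programmers" API discussed above.
    // Every name here is hypothetical; the stubs stand in for the
    // 2.4 GHz wireless link to the actual hardware.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Frame {                        // one grabbed webcam image
        int width = 0, height = 0;
        std::vector<uint8_t> pixels;      // e.g. 8-bit greyscale
    };

    class Robot {
    public:
        void drive(float mmPerSec, float degPerSec) {   // base motion
            std::printf("drive %.0f mm/s, turn %.0f deg/s\n", mmPerSec, degPerSec);
        }
        void moveArm(int arm, float shoulderDeg, float elbowDeg) {
            std::printf("arm %d -> %.0f/%.0f deg\n", arm, shoulderDeg, elbowDeg);
        }
        bool bumperPressed(int /*which*/) { return false; } // stub sensor read
        Frame grabFrame(int /*camera*/) { return Frame{}; } // stub camera grab
    };

    int main() {
        Robot bot;                        // would connect over the link here
        bot.drive(100.0f, 0.0f);          // creep forward
        if (bot.bumperPressed(0)) bot.drive(0.0f, 0.0f);
        Frame left = bot.grabFrame(0);    // the intelligence work starts here
        (void)left;
    }

If a hobbyist could write just that and have a real robot move, the platform problem would be solved.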

Reply to
DaveFowler

Seems like the gap in the market is emerging here. Now how big is that gap?

There is an alternative to Lego's interface. It is pbForth (I think the pb stands for "plastic brick").

Hmmmm. A very flashy site. Once I got past the thirty-odd demands for a Flash plug-in (personal choice not to have one), it seemed to have some interesting material on offer. You say that the programming sucks; I presume that you base this on personal experience. I could find nothing in a quick scan of the documents on their site that told me what the underlying programming environment was, and was thus unable to judge.

With the range of construction kits and other items around (Meccano, Lego, K'Nex etc.), was there something that you felt could be better, apart from a decent series of controller modules?

Part of the robotics experience, I would have thought. However, if you consider that the actuator functions share a common interface then, yes, you are on the right track. Perhaps we could build on the fact that there are basically 28 types of control module to deal with any robotic actuator problem (see the re-usability section on the HIDECS page of my web-site for the table). To that list of modules we could add camera input modules and the relevant vision analysis. Building around this set, I can see that the higher intelligence functions would remain the most interesting part for many others.
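As a rough sketch of what such a common interface could look like in C++ (the names here are mine, purely illustrative, and not taken from the HIDECS table):

    // Illustrative common interface for actuator control modules.
    // The idea: every module type (position loop, velocity loop,
    // bang-bang, etc.) presents the same face to the higher layers.
    #include <cstdio>

    class ControlModule {
    public:
        virtual ~ControlModule() = default;
        virtual void setDemand(float demand) = 0;   // target value
        virtual float readBack() const = 0;         // measured value
        virtual void step() = 0;                    // run one control cycle
    };

    // One possible module type: a trivial proportional position loop.
    class PositionLoop : public ControlModule {
        float demand_ = 0, position_ = 0, gain_;
    public:
        explicit PositionLoop(float gain) : gain_(gain) {}
        void setDemand(float d) override { demand_ = d; }
        float readBack() const override { return position_; }
        void step() override {                      // fake plant, for the sketch
            position_ += gain_ * (demand_ - position_);
        }
    };

    int main() {
        PositionLoop ankle(0.2f);
        ankle.setDemand(30.0f);                     // degrees, say
        for (int i = 0; i < 20; ++i) ankle.step();
        std::printf("ankle at %.1f deg\n", ankle.readBack());
    }

The intelligence layer only ever talks to ControlModule, so swapping module types underneath costs nothing up top.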

There would be a need to set this up properly. WiFi would probably be the most effective link, and there are, I believe, some small modules that could provide it. If some pre-processing were done on board the robot, you could even transmit stereo vision to the bigger "PC Intelligence" unit. I can imagine that a mobile bot would be the first thing to do. Later we could add manipulators so that it could accomplish some useful tasks. Don't forget to give it some means of self-navigation and a location where it could re-charge itself.

I actually prefer using at least one processor per actuator on a robotic system. This assists in the long term by controlling the complexity of such a wide-ranging system. You can also minimise the data transfer rate by locally processing data that is only relevant to that actuator. With regard to the camera transmitters, two cameras giving stereo vision, processed into a data stream that is more useful at the receiving end, would use less bandwidth than just transmitting the pictures.
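The bandwidth point is easy to put numbers on. A back-of-envelope sketch, with all figures assumed purely for illustration:

    // Back-of-envelope comparison: raw stereo frames vs. a processed
    // feature stream. All numbers are illustrative assumptions.
    #include <cstdio>

    int main() {
        const double w = 320, h = 240, fps = 15;       // modest webcams
        const double rawBits = 2 * w * h * 8 * fps;    // two 8-bit mono streams
        // Suppose on-board processing reduces each frame pair to, say,
        // 200 tracked features of 8 bytes each (position + disparity).
        const double featBits = 200 * 8 * 8 * fps;
        std::printf("raw stereo : %.1f Mbit/s\n", rawBits / 1e6);
        std::printf("features   : %.2f Mbit/s\n", featBits / 1e6);
        // ~18.4 Mbit/s vs ~0.19 Mbit/s: the processed stream fits in a
        // 2.4 GHz WLAN link with plenty of room to spare.
    }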

There were some good ideas in this post and I can see ways that the goals would become achievable. It will, of course, take some investment of time, energy and money to accomplish. However, with enough people chipping in, something may come of it.

Now Guy, are you getting any sense of a baseline specification yet?

Reply to
Paul E. Bennett

I stated that the programming for the Vex Robotics set sucks because it doesn't exist yet. They aren't selling the programming interface until this August. When it comes out it may prove to be fairly easy to use. I've downloaded the default code they use to control it, and it's in C++. There's simply no way to edit the code that's on it currently.

The Vex is still somewhat low-level. It comes with a remote control and only some bump switches for sensors, though they have the goal of expanding to ultrasound. There is nothing with vision or WiFi control. I am still working on getting my laptop hooked up to control the bot.

Another thing. Guy mentioned that if we can program videogame AI as smart as we do, we should be able to program robot AI the same way. And we can, except that the limiting factor is that in a video game the AI has quick and easy access to information on everything in its environment. Our robots aren't even close. If we had a simple AI that would shoot all the bad guys, how would a robot today be able to pick out a bad guy, much less a person, much less an object? We need to work on the software behind our sensors. If you created a robot that had a great set of sensors and a couple of web cams that were easy to get information from, you would have a serious product that would provide a huge stepping stone for intelligent robotics. No longer would people have to start from the beginning, spending years developing their robotic platform before they could get going on the intelligence.
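Put side by side, the gap is obvious. A little illustrative sketch, with hypothetical interfaces on both sides:

    // The gulf between game AI and robot AI, in two function calls.
    // Both interfaces below are made up, for illustration only.
    #include <cstdio>
    #include <vector>

    struct Pos { float x, y; };

    // In a game, the AI just asks the engine: exact, free, instant.
    struct GameWorld {
        std::vector<Pos> badGuys() const { return {{3.0f, 4.0f}, {7.0f, 1.0f}}; }
    };

    // On a robot, all you actually get is pixels; turning them into
    // "bad guy at (x, y)" is the hard, unsolved part.
    struct RobotCam {
        std::vector<unsigned char> pixels() const {
            return std::vector<unsigned char>(320 * 240);
        }
    };
    std::vector<Pos> findBadGuys(const std::vector<unsigned char>&) {
        return {};  // years of perception work go here
    }

    int main() {
        GameWorld game;
        std::printf("game AI sees %zu targets\n", game.badGuys().size());
        RobotCam cam;
        std::printf("robot AI sees %zu targets\n", findBadGuys(cam.pixels()).size());
    }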

Reply to
DaveFowler

The original configuration on that walking biped

formatting link

was six C167 controllers linked via CAN-bus to a Pentium III. They soon changed that to a single Pentium 4 because the CAN-bus turned out to be too slow. The obvious problems are:

  • the serial links available in low-cost controllers are too slow (see the rough arithmetic below); Transputers are no longer available.
  • in experimental systems the workload is initially unclear, so it's hard to judge the right size for the slave controllers.
  • software updates and debugging for the slave controllers are more difficult.
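To put rough numbers on the first point (all figures are assumptions, not measurements from that machine):

    // Why a shared serial bus saturates: rough load estimate for a
    // biped with six slave controllers. All figures are assumptions.
    #include <cstdio>

    int main() {
        const double busBitsPerSec = 1e6;   // CAN 2.0 at its 1 Mbit/s maximum
        const double bitsPerFrame  = 111;   // 8-byte standard frame, before stuffing
        const int    controllers   = 6;     // C167 slaves, as in the original rig
        const double msgsEach      = 2;     // demand out + status back
        const double loopHz        = 1000;  // 1 kHz servo update

        double load = controllers * msgsEach * bitsPerFrame * loopHz;
        std::printf("offered load: %.2f Mbit/s on a %.0f Mbit/s bus\n",
                    load / 1e6, busBitsPerSec / 1e6);
        // 6 * 2 * 111 * 1000 = 1.33 Mbit/s > 1 Mbit/s: over capacity before
        // any sensor traffic, so the update rate has to drop.
    }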

Regards, JRD

Reply to
Rafael Deliano

I was waiting for someone to point out the bit about videogame AI having perfect "sensors".

The good news is that better sensors are a straightforward (but difficult) engineering challenge. While imitating human intelligence is a fool's game, imitating something like the "sensor package" a bee uses just might be feasible. I am keeping an eye on the DARPA Grand Challenge as a likely place to see better sensors in action.

The first person who develops a robot good enough to help an elderly or handicapped person will no doubt become rich and famous. The question in my mind is whether we are like the folks in 1900 trying to figure out heavier-than-air flight, or like the folks in 1700 trying to figure out heavier-than-air flight...

Reply to
Guy Macon

A common fallacy: "products for the elderly will boom".

  • another person is the cheapest and best way to help a handicapped person.
  • handicapped persons are not the sort of wealthy customers who buy gadgets.
  • there may in the medium-term future be a market for "tools for elderly persons who want or have to work". But their brain is usually the part that's still working while the rest is falling apart. They do not need one big "smart" device, but rather simple stuff like aids for vision and hearing, or a manipulator/teleoperator to enhance their hand/arm.

Not too technical, but well illustrated: Schraft and Schmierer, "Service Robots" (Peters, 2000), lists in its index as topics:

  • refuelling
  • agriculture, forestry
  • construction, renovating
  • cleaning
  • office
  • surveillance
  • firefighting
  • sorting
  • hotel and cooking
  • marketing
  • hobbies and recreation
  • entertainment
  • nursing care
  • medicine
  • underwater
  • space

Under "nursing care" you have souped-up wheelchairs or experimental stuff that will never have any practical value. The whole index lists mostly applications where robots are impractical. Exceptions:

  • environments unsuited for humans ("underwater", "space"). Note that "firefighting" or "forestry" is unsuited for robots because the environment is uncontrolled.
  • scale of the stuff handled unsuited for humans: "construction" uses cranes and other machines because the work is too big; "medicine", like eye surgery, because it is too small.

Regards, JRD

Reply to
Rafael Deliano

The idea behind a processor-per-actuator architecture is that you make the effort to reduce the transmitted data to the minimum necessary, delivered at a sufficiently brisk pace to minimise hold-ups on the serial links. There are also other options besides CAN-bus that would provide higher-speed communications were they required.

There are always some clearly essential parameters that would form the basis of the communicated data packets, and it shouldn't be too hard to determine these in the initial analysis phase. Other parameters may just require the adjustment of values in variables, which is a more ad hoc communication need that should not arise too frequently.
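As a sketch of the kind of minimal per-cycle packet I mean (the field choice and widths are illustrative only, not from any existing protocol):

    // Illustrative fixed-format packets for one actuator node: the
    // handful of parameters that must travel every control cycle.
    // Field choices and widths are assumptions for the sketch.
    #include <cstdint>
    #include <cstdio>

    #pragma pack(push, 1)
    struct ActuatorDemand {        // master -> slave, every cycle
        uint8_t  nodeId;
        int16_t  positionDemand;   // scaled counts
        int16_t  feedForward;
    };
    struct ActuatorStatus {        // slave -> master, every cycle
        uint8_t  nodeId;
        int16_t  position;         // measured, scaled counts
        int16_t  current;          // scaled mA
        uint8_t  flags;            // limit switches, fault bits
    };
    #pragma pack(pop)
    static_assert(sizeof(ActuatorDemand) == 5, "wire format");
    static_assert(sizeof(ActuatorStatus) == 6, "wire format");

    // Ad hoc tuning values (gains, limits) travel separately and
    // rarely, via a low-priority "set variable" message.
    struct SetVariable {
        uint8_t  nodeId;
        uint8_t  varIndex;
        int32_t  value;
    };

    int main() {
        std::printf("demand %zu B, status %zu B per node per cycle\n",
                    sizeof(ActuatorDemand), sizeof(ActuatorStatus));
    }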

This, I will concede, can be an issue. However, there are techniques that can be employed to simplify entering such updates. With regard to the vision sub-system and the brain unit: I consider that the intelligence module would rely on vision to such an extent that it would need to be fairly closely coupled to the cameras. This may demand that the link between the cameras and the central brain unit be extremely high speed, with massive data-processing capability behind it.

I think I am beginning to see a way to partition the project such that a reasonable collaborative development regime may emerge from this.

Reply to
Paul E. Bennett

That's an interesting biped, but the control algorithms used in it were designed in the wrong space.

I suspect that the problem was in the way things like the ankle motors operate (Lead screws!) which made it hard to decompose the control methods...

The IEEE-1355 bus is still available - google for "SpaceWire". I know DLR used it in their Hand for internal serial comms. The SpaceWire website has a VHDL core for you to download and reuse...

CAN is pretty good if you put the partitions in the right place in the architecture, e.g. don't try to send high-speed PWM signals over it; use it as distributed shared memory. This is what we do with the Dextrous Hand (see sig).
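A sketch of the distributed-shared-memory idea (the message layout here is illustrative only, not our actual Dextrous Hand format):

    // "CAN as distributed shared memory", sketched: each node owns a
    // block of cells and broadcasts changes; every node mirrors the
    // rest. No request/reply traffic, so a value costs one frame.
    #include <cstdint>
    #include <cstring>

    struct CanFrame {               // minimal stand-in for a CAN frame
        uint16_t id;                // 11-bit identifier
        uint8_t  len;
        uint8_t  data[8];
    };

    constexpr int kNodes = 8, kCellsPerNode = 16;
    int16_t shared[kNodes][kCellsPerNode];   // local mirror of the space

    // Publish one cell: the identifier encodes (node, cell), so bus
    // arbitration priority falls out of the node numbering for free.
    CanFrame publish(uint8_t node, uint8_t cell, int16_t value) {
        CanFrame f{};
        f.id  = static_cast<uint16_t>((node << 4) | (cell & 0x0F));
        f.len = 2;
        std::memcpy(f.data, &value, 2);
        return f;
    }

    // On reception, every node just updates its mirror.
    void onReceive(const CanFrame& f) {
        uint8_t node = (f.id >> 4) & 0x07, cell = f.id & 0x0F;
        std::memcpy(&shared[node][cell], f.data, 2);
    }

    int main() {
        onReceive(publish(2, 3, 1234));   // loopback demo
        return shared[2][3] == 1234 ? 0 : 1;
    }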

We use an out-of-band, low-priority message to adjust configuration parameters: it works pretty well. We thought about putting them in the shared memory space, but ran into the old bootstrap problem: how do you define where in the space they live without doing the configuration first?

TBH, I find it's a valuable abstraction barrier. You *know* that the controller hasn't been reprogrammed, so the fault must be in the electronics or the PC software. You can also keep physical swap-out units on hand with known-stable versions of the firmware.

Bear in mind that there are a lot of embeddable systems now with direct camera-into-CPU support - and I don't mean FireWire. Some of these boards have video-in ports on the CPU itself - although the CPU is a little slow. However, if you were to do "1980s-era" image processing, it would probably be sufficient.

The other option, of course, is just to stack up a small pile of mini-ITX boards as a cluster: one FireWire camera feeding each board, 100 Mbit Ethernet for clustering, and a local 2.5" HD on each for storage - that should work quite nicely.

cheers, Rich.

Reply to
Rich Walker

Perhaps. Before I decided to go into electronics I had wanted to be an architect. I guess that (since a fateful visit to a doctor's surgery - long story) I have become a reasonably good one for electronic systems.

Looks like an interesting one. I have bookmarked it for reading when I get a little more free time.

I am sure that CAN is very good for a wide range of communications, and I was not trying to denigrate it for lack of speed. The last sentence was just to indicate that it is not the only communication solution; there is a very rich basket of communication solutions out there.

I usually keep the adjustable parameters in a group and have a simple communications protocol that manages them explicitly. I have also usually arranged that all other data passes as a response to the Status Enquiry request; mostly I am down to about 10 cells of data in that response. Lately I have begun to look into IEEE 1588, TTP and LXI as potential strategies for keeping very accurate time-frames for the data. This is, though, digressing a little.

Considering that you may sometimes need to re-programme controllers that are physically rather inaccessible, you need sound reprogramming techniques in your arsenal. However, for locally accessible controllers I agree that direct attachment is simpler and more satisfactory.

ISTR that Ultra Technology's F21 chip had some capability for video input and processing. I must catch up on that chip's situation.

Silhouette recognition?

I should think that would do some pretty decent video analysis, especially if it were programmed in a language totally suited to the task. I wonder if Guy is getting any good vibes from this thread yet.
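For a sense of what "1980s-era" processing buys you, a minimal silhouette sketch (plain bright background assumed; the threshold, frame size and fake subject are all illustrative):

    // Minimal "1980s-era" silhouette extraction: threshold a greyscale
    // frame against a plain background and report the blob's extent.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        const int W = 320, H = 240;
        std::vector<uint8_t> frame(W * H, 200);       // bright background
        for (int y = 80; y < 160; ++y)                // fake dark subject
            for (int x = 140; x < 180; ++x) frame[y * W + x] = 30;

        const uint8_t kThreshold = 100;               // darker = silhouette
        int minX = W, maxX = -1, minY = H, maxY = -1, area = 0;
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                if (frame[y * W + x] < kThreshold) {
                    ++area;
                    if (x < minX) minX = x;
                    if (x > maxX) maxX = x;
                    if (y < minY) minY = y;
                    if (y > maxY) maxY = y;
                }
        std::printf("silhouette: %d px in box (%d,%d)-(%d,%d)\n",
                    area, minX, minY, maxX, maxY);
    }

Area and bounding box alone are enough to track a person against a plain wall, which is about what that era of hardware managed.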

Reply to
Paul E. Bennett

F21 product development ended in 2000, and the last prototypes were made in 1998, when the prototype house, MOSIS, shut down their 0.8µ fab line. A couple of deals for high-volume production here in the US were almost negotiated after development funding ran out, and after the 0.8µ fabs moved to the third world I looked into production there. But basically the situation is of historic interest only at this point, except for a few people who got some of the prototype chips. I would have liked to produce them in quantity; the nice Internet appliances we made at the iTV Corporation did demo some of the things one could do with inexpensive high-performance chips.

The company I am working for now should come out with a new generation in the family tree that could be very useful in some robotics applications calling for high speed and low power.

Best Wishes

Reply to
fox
