Developing a robot worth buying

DaveFowler wrote:


Seems like the gap in the market is emerging here. Now how big is that gap?

There is an alternative to Lego's interface. It is pbForth (I think the pb stands for "plastic brick").
<http://www.hempeldesigngroup.com/lego/pbForth/homePage.html>

Hmmmm. A very flashy site. Once I got past the thirty-odd demands for a Flash plug-in (personal choice not to have one) it seemed to have some interesting material on offer. You say that the programming sucks. I presume that you base this on personal experience. I could find nothing in a quick scan of the documents on their site that told me what the underlying programming environment was, and was thus unable to judge.

With the range of construction kits and other items around (Meccano, Lego, K'Nex etc.), was there something that you felt could be done better, apart from a decent series of controller modules?

Part of the robotics experience, I would have thought. However, if you consider that the actuator functions share a common interface then, yes, you are on the right track. Perhaps we could build on the fact that there are basically 28 types of control module needed to deal with any robotic actuator problem (see the re-usability section on the HIDECS page of my web-site for the table). To that list of modules we could add camera input modules and the relevant vision analysis. Building around this set, I can see that the higher intelligence functions would remain the most interesting part for many others.

There would be a need to set this up properly. WiFi would probably be the most effective link, and there are, I believe, some small modules that could provide it. If some pre-processing were done on board the robot you could even transmit stereo vision to the bigger "PC Intelligence" unit. I can imagine that a mobile bot would be the first thing to do. Later we could add manipulators so that it could accomplish some useful tasks. Don't forget to give it some means of self-navigation and a location where it could re-charge itself.

I actually prefer using at least one processor per actuator on a robotic system. This does assist in the long term by controlling the complexity of such a wide-ranging system. You can also minimise the data transfer rate by locally processing data that is only relevant to that actuator. With regard to the cameras, two cameras giving stereo vision, processed into a data stream that is more useful at the receiving end, would use less bandwidth than just transmitting the raw pictures.
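To illustrate the bandwidth point, here is a minimal C++ sketch of a per-actuator node; the packet layout and names are purely illustrative, not any real protocol. The raw samples stay on the node, and only a small filtered status packet ever crosses the serial link.

#include <array>
#include <cstddef>
#include <cstdint>
#include <numeric>

struct StatusPacket {       // the only thing that crosses the link
    uint8_t  node_id;
    int16_t  position;      // filtered position, not raw samples
    int16_t  velocity;      // left as a placeholder in this sketch
    uint8_t  fault_flags;
};

class ActuatorNode {
    std::array<int16_t, 32> window{};   // raw samples stay local
    std::size_t idx = 0;
public:
    void sample(int16_t raw) { window[idx++ % window.size()] = raw; }

    // Local averaging: 32 raw samples in, one value out, so the bus
    // carries a few bytes per cycle instead of the whole sample stream.
    StatusPacket status(uint8_t id) const {
        long sum = std::accumulate(window.begin(), window.end(), 0L);
        return StatusPacket{id, static_cast<int16_t>(sum / long(window.size())), 0, 0};
    }
};

The same principle scales up to the stereo cameras: extract the features you need on board, and ship those instead of the pictures.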

There were some good ideas in this post and I can see ways that the goals could become achievable. It will, of course, take some investment of time, energy and money to accomplish. However, with enough people chipping in, something may come of it.
Now Guy, are you getting any sense of a baseline specification yet?
--
********************************************************************
Paul E. Bennett ....................<email:// snipped-for-privacy@amleth.demon.co.uk>
I stated that the programming for the Vex Robotics set sucks because it doesn't exist yet. They aren't selling the programming interface until this August. When it comes out it may prove to be fairly easy to use. I've downloaded the default code they use to control it, and it's in C++. There's simply no way to edit the code that's on it currently.
The Vex is still somewhat low-level. It comes with a remote control and only some bump switches for sensors, though they have the goal of expanding to ultrasound. There is nothing with vision or WiFi control. I am still working on getting my laptop hooked up to control the bot.
Another thing. Guy mentioned that if we can program videogame AI as smart as we do, we should be able to program robot AI the same way. And we can, except the limiting factor is that in a video game the AI has quick and easy access to information on everything in its environment. Our robots aren't even close. If we had a simple AI that would shoot all the bad guys, how would a robot today be able to pick out a bad guy, or even just a person, or even just an object? We need to work on the software behind our sensors. If you created a robot that had a great set of sensors and a couple of web cams that were easy to get information from, you would have a serious product that would provide a huge stepping stone for intelligent robotics. No longer would people have to start from the beginning, spending years developing their robotic platform before they could get going on the intelligence.
(Paul, please don't set followups to take threads out of misc.business.product-dev. I am trying to rebuild a dead newsgroup into a resource that will be useful to product developers, and I can't do that if you redirect the threads out of misc.business.product-dev.)
Paul E. Bennett wrote:

Marvin Minsky's Society of Mind theory holds that this is how our minds work now. See [ http://en.wikipedia.org/wiki/Society_of_Mind_theory ].

One could still do the development work on a standard PC by having the fast PC processor emulate a number of smaller processors in various mesh configurations. If total CPU power becomes the limiting factor, this would be an excellent candidate for a Beowulf cluster.
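A quick C++ sketch of what that emulation might look like, with threads standing in for the small processors and queues for the mesh links (this only shows the structure; a real emulation would also model timing and instruction sets):

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// One directed link in the mesh: a blocking message queue.
struct Link {
    std::queue<int> q;
    std::mutex m;
    std::condition_variable cv;
    void send(int msg) {
        { std::lock_guard<std::mutex> lock(m); q.push(msg); }
        cv.notify_one();
    }
    int recv() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !q.empty(); });
        int msg = q.front();
        q.pop();
        return msg;
    }
};

int main() {
    const int n = 4;                      // four emulated processors in a ring
    std::vector<Link> link(n);            // link[i] feeds processor i
    std::vector<std::thread> cpu;
    for (int i = 0; i < n; ++i)
        cpu.emplace_back([&link, i, n] {
            int msg = link[i].recv();             // wait for the token
            std::printf("cpu %d received token %d\n", i, msg);
            link[(i + 1) % n].send(msg + 1);      // pass it round the ring
        });
    link[0].send(0);                      // inject a token to start things off
    for (auto& t : cpu) t.join();
    return 0;
}

Swap the ring wiring for any other mesh and give each thread real work, and the same skeleton serves; when one PC runs out of steam, the links map naturally onto sockets between Beowulf nodes.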
Guy Macon wrote:

Sorry Guy, I'll pay more attention to that in future (incidentally, I didn't do the setting of followups, but I will make sure to cancel them. Do you wish me to eliminate other newsgroups from the Groups list as well?). [%X]

Agreed. When I mentioned processors I was not distinguishing between hardware processors and software processors particularly. I tend to look at the overall architecture and structure first, then decide what has to be hardware and which parts software at a later stage of development than most others would consider.
Incidentally, did you ever pick up on the Mentifex thread at all?
--
********************************************************************
Paul E. Bennett ....................<email:// snipped-for-privacy@amleth.demon.co.uk>
Paul E. Bennett wrote:

I rather like the crossposting. It brings in fresh ideas. I have set the automoderation software to allow crossposts to one other group, and that group can't be a moderated group for technical reasons. I also monitor the groups that are crossposted to, to see if anyone objects.

I did, and am still thinking about it. It sure has spawned some interesting websites! Like these:
http://dev.null.org/psychoceramics/archives/1998.05/msg00018.html
http://mentifex.virtualentity.com/dsm-ai.html
http://www.nothingisreal.com/mentifex_faq.html
http://www.ifi.unizh.ch/ailab/aiwiki/aiw.cgi?Mentifex
I am not inclined to blindly believe Murray or his critics. I would rather see a robot running the code than evaluate claims and counter-claims.

The original configuration of that walking biped
http://www.amm.mw.tu-muenchen.de/Forschung/ZWEIBEINER/johnnie_e.html
was six C167 controllers linked via CAN bus to a Pentium III. They soon changed that to a single Pentium IV because the CAN bus turned out to be too slow. Obvious problems are:
* serial links available in low-cost controllers are too slow, and Transputers are no longer available.
* in experimental systems the workload is initially unclear, so it is hard to judge the right size for the slave controllers.
* software updates and debugging for the slave controllers are more difficult.
Regards, JRD
Rafael Deliano wrote:

The idea behind a processor-per-actuator architecture is that you make the effort to reduce the transmitted data to the minimum necessary, delivered at a sufficiently brisk pace to minimise hold-ups on the serial links. There are also other options besides CAN bus that would allow higher-speed communications where they were required.

There are always some very clearly essential parameters that would form the basis of the communicated data packets. It shouldn't be too hard to determine these in the initial analysis phase. Other parameters may just require the adjustment of values in variables, which is a more ad hoc communication need that should not happen too frequently.

This, I will concede, can be an issue. However, there are techniques that can be employed to simplify entering such updates. As for the vision sub-system and the brain unit: I consider that the intelligence module would rely on vision to such an extent that it would need to be fairly closely coupled to the cameras. This may demand that the link between the cameras and the central brain unit be extremely high speed, with massive data-processing capability.
I think I am beginning to see a way to partition the project such that a reasonable collaborative development regime may emerge from this.
--
********************************************************************
Paul E. Bennett ....................<email:// snipped-for-privacy@amleth.demon.co.uk>

That's an interesting biped, but the control algorithms used in it were designed in the wrong space.

I suspect that the problem was in the way things like the ankle motors operate (lead screws!), which made it hard to decompose the control methods...

The IEEE-1355 bus is still available - google for "Spacewire". I know DLR used it in their Hand for internal serial comms. The Spacewire website has the VHDL core for you to download and reuse...

CAN is pretty good if you put the partitions in the right place in the architecture. E.g. don't try to send high-speed PWM signals over it! Use it as distributed shared memory. This is what we do with the Dextrous Hand (see sig).
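To make that concrete, here is a minimal C++ sketch of the distributed-shared-memory pattern, assuming a made-up frame struct rather than any real CAN driver API: each identifier addresses one cell of a table, and every node updates its local copy from the frames it hears on the bus.

#include <cstdint>
#include <map>

// Hypothetical frame layout for illustration - a real node would be
// handed frames by its CAN controller driver.
struct CanFrame {
    uint16_t id;        // 11-bit identifier, used here as a cell address
    uint8_t  len;       // payload length, 0..8
    uint8_t  data[8];
};

class SharedMemoryMirror {
    std::map<uint16_t, int32_t> cell;   // local copy of the shared table
public:
    // Called for every frame heard on the bus: the bus itself is the
    // write path, so nodes never send read requests for routine data.
    void on_frame(const CanFrame& f) {
        int32_t v = 0;
        for (int i = 0; i < f.len && i < 4; ++i)
            v |= int32_t(f.data[i]) << (8 * i);   // little-endian payload
        cell[f.id] = v;
    }
    int32_t read(uint16_t id) const {
        auto it = cell.find(id);
        return it == cell.end() ? 0 : it->second;
    }
};

Since lower CAN identifiers win arbitration, the out-of-band configuration messages mentioned below would sit on high-numbered identifiers so they can never delay the control traffic.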

We use an out-of-band low-priority message to adjust configuration parameters: works pretty well. We thought about putting them in the shared memory space, but ran into the old bootstrap problem: how do you define where in the space without doing the configuration?

TBH, I find it's a valuable abstraction barrier. You *know* that the controller hasn't been reprogrammed, so the fault must be in the electronics or the PC software. You can also keep physical swap-out units with known-stable firmware versions on hand.

Bear in mind that there are a lot of embeddable systems now with direct camera-into-CPU support - and I don't mean Firewire. The ST STPC parts at
<http://mcu.st.com/mcu/modules.php?name=mcu&file=pfinder32&FAM=STPC>
have video-in ports on the CPU - although the CPU itself is a little slow. However, if you were to do "1980s-era" image processing, it would probably be sufficient.
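Even something as simple as the following would count as "1980s-era" processing: threshold a grayscale frame and report the bounding box of the bright region. A rough C++ sketch (frame size and threshold handling are arbitrary assumptions):

#include <cstddef>
#include <cstdint>
#include <vector>

struct Box { int x0, y0, x1, y1; };

// Scan a w-by-h 8-bit grayscale frame and return the bounding box of
// all pixels above a global threshold. x1 < x0 means nothing was found.
Box bright_region(const std::vector<uint8_t>& px, int w, int h, uint8_t thr) {
    Box b{w, h, -1, -1};
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (px[std::size_t(y) * w + x] > thr) {
                if (x < b.x0) b.x0 = x;
                if (y < b.y0) b.y0 = y;
                if (x > b.x1) b.x1 = x;
                if (y > b.y1) b.y1 = y;
            }
    return b;
}

One pass, no multiplies beyond the addressing, and a fixed small working set - exactly the sort of load a slow CPU with a direct video-in port could keep up with.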
The other option, of course, is just to stack up a small pile of mini-ITX boards as a cluster. One Firewire camera feeding each board, 100 Mbit Ethernet for clustering, a local 2.5" HD for storage - should work quite nicely.
cheers, Rich.

--
rich walker | Shadow Robot Company | snipped-for-privacy@shadow.org.uk
technical director 251 Liverpool Road |
Rich Walker wrote:

Perhaps. Before I decided to go into electronics I had wanted to be an architect. I guess that (since a fateful visit to a doctor's surgery - long story) I became a reasonably good one for electronic systems.

Looks like an interesting one. I have bookmarked it for reading when I get a little more free time.

I am sure that CAN is very good for a wide range of communications and I was not trying to denigrate it for lack of speed. The last sentence was just to indicate that it is not the only communication solution. There is a very rich basket full of communication solutions out there.

I usually keep the adjustable parameters in a group and have a simple communications protocol that manages them explicitly. I have also usually arranged that all other data passes as a response to the Status Enquiry request. Mostly I am down to about 10 cells of data in that response. Lately I have begun to look into IEEE1588, TTP and LXI as potential strategies for keeping very accurate time-frames for the data. This is, though, digressing a little.
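In case it helps anyone picture it, a minimal C++ sketch of that status-enquiry pattern, with entirely made-up layouts: the master polls, the node replies with a fixed block of ten data cells, and parameter changes travel in a separate, explicit message.

#include <array>
#include <cstdint>

enum class MsgType : uint8_t { StatusEnquiry, StatusReply, SetParam };

struct StatusReply {
    std::array<int16_t, 10> cell;   // the ~10 routinely reported values
};

struct SetParam {                   // infrequent, explicit parameter write
    uint8_t index;
    int16_t value;
};

class Node {
    std::array<int16_t, 10> cells{};    // updated by the node's own control loop
    std::array<int16_t, 8>  params{};   // the grouped adjustable parameters
public:
    StatusReply on_enquiry() const { return StatusReply{cells}; }
    void on_set_param(const SetParam& p) {
        if (p.index < params.size()) params[p.index] = p.value;
    }
};

Keeping all routine data in the one reply means the master's polling schedule alone bounds the bus load.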

Considering that you may sometimes need to re-programme controllers that are physically rather inaccessible, you need sound reprogramming techniques in your arsenal. However, for locally accessible controllers I agree that the direct attachment is simpler and more satisfactory.

ISTR that Ultra Technology's F21 chip had some capability for video input and processing. I must catch up on that chip's situation.

Silhouette recognition?

I should think that would do some pretty decent video analysis, especially if it were programmed with a language totally suited to the task. I wonder if Guy is getting any good vibes from this thread yet.
--
********************************************************************
Paul E. Bennett ....................<email:// snipped-for-privacy@amleth.demon.co.uk>
Paul E. Bennett wrote:

F21 product development ended in 2000; the last prototypes were made in 1998, and then the prototype house, MOSIS, shut down their .8u fab line. A couple of deals for high-volume production here in the US were almost negotiated after development funding ran out, and after the .8u fabs moved to the third world I looked into production there. But basically the situation is of historic interest only at this point, except for a few people who got some of the prototype chips. I would have liked to produce them in quantity; the nice Internet appliances we made at the iTV corporation did demo some of the things one could do with inexpensive high-performance chips.
The company I am working for now should come out with a new generation in the family tree that could be very useful in some robotics applications calling for high speed and low power.
Best Wishes
DaveFowler wrote:

I was waiting for someone to point out the bit about videogame AI having perfect "sensors". <grin>
The good news is that better sensors are a straightforward (but difficult) engineering challenge. While imitating human intelligence is a fool's game, imitating something like the "sensor package" a bee uses just might be feasible. I am keeping an eye on the DARPA Grand Challenge as a likely place to see better sensors in action.
The first person who develops a robot good enough to help an elderly or handicapped person will no doubt become rich and famous. The question in my mind is whether we are like the folks in 1900 trying to figure out heavier-than-air flight, or like the folks in 1700 trying to figure out heavier-than-air flight...

A common fallacy: "products for the elderly will boom".
* Another person is the cheapest and best way to help a handicapped person.
* Handicapped persons are not the sort of wealthy customers who buy gadgets.
* There may, in the medium-term future, be a market for "tools for elderly persons who want or have to work". But their brain is usually the part that's still working while the rest is falling apart. They do not need one big "smart" device, but rather simple stuff like aids for vision and hearing, or a manipulator/teleoperator to enhance their hand/arm.
Not too technical, but well illustrated: Schraft and Schmierer, "Service Robots", Peters 2000, lists in its index topics including:
* refuelling
* agriculture, forestry
* construction, renovating
* cleaning
* office
* surveillance
* firefighting
* sorting
* hotel and cooking
* marketing
* hobbies and recreation
* entertainment
* nursing care
* medicine
* underwater
* space
Under "nursing care" you have souped-up wheelchairs or experimental stuff that will never have any practical value. The whole index lists mostly applications where robots are impractical. The exceptions:
* environments unsuited for humans ("underwater", "space"); note that "firefighting" and "forestry" are unsuited for robots because the environment is uncontrolled.
* scale of the stuff handled unsuited for humans: "construction" uses cranes and other machines because it is too big; "medicine", like eye surgery, because it is too small.
Regards, JRD
