"Realistic" precision.

It occurred to me, over a few, um, discussions, where I think our arguments are coming from. It is sort of interesting, but think about it.

I come from a very strong engineering background; while I know the theory cold, I implement from a very practical perspective.

Take for instance a discussion about a robot moving at 4 miles per hour, and how 0.01 seconds represents about 3/4 of an inch of travel, or something like that. Yes, all these numbers are true, and they are 100% absolutely correct, but is it a practical and realistic expectation of precision?
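Just to put that number in context, here is the arithmetic as a quick Python sketch (same 4 mph and 0.01 second figures as above):

    # How far does a 4 mph robot travel in 0.01 seconds?
    MPH_TO_INCHES_PER_SEC = 5280 * 12 / 3600.0     # 1 mph = 17.6 in/s

    speed_in_per_s = 4 * MPH_TO_INCHES_PER_SEC     # 70.4 in/s
    distance_per_tick = speed_in_per_s * 0.01      # inches traveled in 10 ms
    print(round(distance_per_tick, 2))             # -> 0.7, about 3/4 of an inch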

I don't know about you, but my sub $500 robot is not a precision instrument. Chances are, your robot isn't either.

At Denning Mobile Robotics, we spent months trying different types of wheels made from rubber, urethane, and plastic, inflated and solid, different diameters, smooth or with tread, flat or conical, just to conclude that there is no reliable way to know the robot's true position based on wheel motion, even on a controlled surface.

Once the navigation team realized that there was no reliable precision in the position based on the wheels, they had to think differently. They could use the wheel position only as an approximation that degrades over time, with some periodic corrective process (beacons).
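A minimal sketch of that idea, in Python (the class name and the error model here are made up for illustration; this is not Denning's actual scheme):

    import math

    class OdometryEstimate:
        # Wheel odometry treated as a drifting estimate, periodically
        # snapped back by an external fix such as a beacon sighting.
        def __init__(self, drift_per_inch=0.02):
            self.x = self.y = self.heading = 0.0
            self.uncertainty = 0.0              # grows with distance traveled
            self.drift_per_inch = drift_per_inch

        def update_from_wheels(self, left_in, right_in, wheelbase_in):
            dist = (left_in + right_in) / 2.0
            self.heading += (right_in - left_in) / wheelbase_in
            self.x += dist * math.cos(self.heading)
            self.y += dist * math.sin(self.heading)
            self.uncertainty += abs(dist) * self.drift_per_inch

        def correct_from_beacon(self, x, y, heading):
            # A beacon fix resets the accumulated error to zero.
            self.x, self.y, self.heading = x, y, heading
            self.uncertainty = 0.0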

The "realistic" precision is quite low. So, in the end, you can only get "so" close no matter how much precision you apply to the problem. It turns out that you can get pretty darn close with a hell of a lot less. That is how I have approached my robot.

Take the dual wheel differential drive. Even if there were no variation in wheel diameter, and I could hold the motor speeds to within 0.001% of each other, the robot would still drift because of surface imperfections (dirt, rugs, dog, kid). How much better could it realistically get? Seriously, think about it.
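To put a rough number on the motor-speed part alone (wheelbase and distance are made-up example values, and the floor is ignored entirely):

    import math

    mismatch = 1e-5          # 0.001% relative left/right speed difference
    wheelbase_in = 12.0
    distance_in = 1200.0     # about 100 feet of straight-line travel

    heading_err_rad = mismatch * distance_in / wheelbase_in
    lateral_err_in = 0.5 * distance_in * heading_err_rad   # small-angle estimate

    print(round(math.degrees(heading_err_rad), 3), round(lateral_err_in, 2))
    # roughly 0.057 degrees and 0.6 inches -- a rug edge or a piece of dirt
    # does far worse than that in a single pass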

The motor control will work fine 99.999% of the time. Every once in a while it may miss a beat or so, but the code accounts for that and will compensate.

Reply to
mlw

Let me take the opportunity first to thank you for the contributions you're trying to make. I know some things started off on the wrong foot, but you seem to be genuinely interested in contributing to the "collective," as evidenced by your posting of pictures on your server, and the thread on PS/2 mouse PID.

(I have to say I was skeptical of any mouse interface being able to keep up with quadrature input where mouse encoders were placed on the internal shaft of the motor. Given, say, a 3000-4000 RPM motor, that's a fast-moving mouse! But I'm assuming it works reliably, and it looks like a nice piece of code.)

Anyway, you ask about precision, but to me, that's the wrong question. There is no true precision in mobile robotics, especially with low-cost hardware. As you noted, drive speed, imperfections in the floor, slightly different wheel diameters, and all the other things that contribute to drift will always be there. As Borenstein showed years ago, there are ways to minimize the drift, but it's impossible to remove it altogether unless a cooperative system is used for correction.

In my experience, it's not so much how to minimize error but how to compensate for it (example: as Borenstein showed, and as you mention by using landmarks).

IMO, designs need flexibility in their problem solving, not ultra-precision. Not that your system doesn't have some flexibility, or won't, but this is simply something to keep in mind as an overriding design principle. Not to beat a dead horse, but my objections to your not offloading processes to something like a microcontroller had much less to do with precision, and much more with flexibility. Will it cope with unknowns down the road? You may find it's not needed for your particular task-set, and that's fine. But -- and in my opinion -- if you're eyeing producing a how-to guide, you need to consider how other people use their robots. Will your platform support their needs too? (No need to answer; it's just a rhetorical question.)

By incorporating some flexibility in the design (again, not that yours doesn't already have it) precision becomes less of a problem if there are ways to work around inherent limitations.

Do note I'm not talking about simple hardware expansion. That's just a way of fobbing off the problem to someone else.

As an example of what some others are doing, there is a group of hobbyists in my local robot user's group that have been developing a laptop-based robot of about the same size as yours -- maybe a little taller. It has vision (true vision; it finds objects, moves to objects, recognizes faces), voice output, voice recognition, and other features. Their aim is eventually publishing -- for free -- their design, all code, etc. They took the HC12 subcontroller approach, but with a special slant: the hardware designer made a board that allows users to add whatever hardware they want. There are limitations to the number of sensors, motors, etc., but the platform is among the most versatile and flexible I have ever seen.

The end result is that any inherent deficiencies in the overall platform -- they use Windows! -- are compensated for by allowing each user to tailor-fit the robot to their particular needs. IOW, their design allows each user's robot to be unique. To me, this is the ultimate goal of robotics.

Your design goals may be different, and you may desire others to follow your lead as closely to your prototype as possible. This is certainly the approach to commercial products, which have a specific feature set. The approach has its advantages, and is also perfectly acceptable.

Again, thanks for your efforts.

-- Gordon

Reply to
Gordon McComb

Thanks.

It works reliably as long as you read one byte at a time. If you try to read more than that, you fall out of sync, but the code manages that.

As for the mouse not being fast enough, I thought about that early on:

There are about 34 teeth on the encoder.

3000 RPM (a high number for my motors) is 50 revolutions per second.
There are about 1700 counts per second at 3000 RPM.
The mouse has a 9-bit counter: 1 sign bit and an 8-bit count.
I need to read the mouse about every 0.15 seconds (absolute max) to avoid losing data.
A mouse ball is about 0.8 inches in diameter.
A mouse encoder shaft is less than 0.10 inches in diameter.
The mouse encoder shaft rotates about 8 revolutions per mouse ball revolution.
If a mouse moves 12 inches in one second (moderately fast), that is about 5 rotations of the mouse ball, which is about 40 rotations of the mouse encoder shaft.
Close enough!
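For anyone who wants to check the arithmetic, here it is as a quick Python sketch (values copied from above; the 8x ball-to-shaft ratio is the same estimate):

    import math

    teeth = 34
    motor_rps = 3000 / 60.0                  # 50 revolutions per second
    counts_per_sec = teeth * motor_rps       # about 1700 counts per second

    counter_max = 2**8 - 1                   # 8-bit magnitude plus sign bit
    max_read_interval = counter_max / counts_per_sec    # ~0.15 s before overflow

    ball_circumference = math.pi * 0.8       # about 2.5 inches
    ball_rps_at_12ips = 12 / ball_circumference          # about 5 rev/s
    shaft_rps = ball_rps_at_12ips * 8        # about 40 rev/s of the encoder shaft

    print(counts_per_sec, round(max_read_interval, 2), round(shaft_rps, 1))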
Reply to
mlw

[snip]

Exactly.

Again, exactly.

Again, exactly.

I think that's what my post was getting at, just not in such a concise manner.

Ironically, that's why I don't want to use a microcontroller.

There are no guarantees, either way.

A good one. One of the things I have been trying to convey is that too many tool sets are in themselves confusing. Tools to develop microcontroller code, tools to develop PC code. What about debugging? Profiling? Data processing routines?

Using a PC type system allows for one tool set. If you need more processing power, add another PC and use the parallel processing tools used for Beowulf clusters.

This is one of my fundamental objectives, and requiring microcontroller development is not allowed.

How you go about the limitations *is* the engineering, true?

Perhaps.

Very cool.

Sounds like a cool design.

Overcoming Windows' limitations is, in itself, an accomplishment.

Reply to
mlw

OTOH, separate PCs tend to raise the cost of the engineering more than an application-specific coprocessor, but to answer the toolset question, I wonder why there can't be one toolset. I wonder why the main PC itself can't be used to program (and reprogram) an embedded microcontroller. I don't see why the PC could not re-flash the controller with updates provided by a community working on the design (given permission from its owner, of course), for example. This would bring the design to more people, including those who do not have a strong programming background.

I am thinking an interesting design approach is along the lines of the Lego Mindstorms RCX, combining firmware as an OS, and user-provided programs in program space. This is showing my bent of bringing robotics to the masses, but -- and we can save this for future discussions -- I don't think adding and managing coprocessors need be unnecessarily complex if the overall system is designed for it. No one has done this yet (to my knowledge) but as you've said, that doesn't mean it can't be done.

Yes, but I was thinking more about how one allows for limitations, including the unknowns, that makes the difference. Engineering over some known limitation could be as simple as a fix for a hardware bug; engineering for accommodating things that are as yet unknown is the real trick. I think even the best designs achieve this only some of the time.

I have been working on creating robotics scripts using XML. It's an interpreted system, intended to be platform independent, but the idea is that the underlying interpreter is open (it can be written in any language), as is the scheme for the XML translation. It's designed to be a teaching tool mostly. While XML can't do everything, I've been amazed at the forethought put into it. Data management and data presentation are a hoary affair, but to me XML is a good example of striving to deal with the unknown. They couldn't have figured I would want to build a scripting language for robotics out of it, yet it's turned out to be perfectly capable of the job. Amazing stuff.
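To give a flavor of the idea (the tags, attributes, and interpreter here are invented for illustration; this is not the actual schema), a tiny script and a throwaway Python interpreter might look like this:

    import xml.etree.ElementTree as ET

    # Invented tag and attribute names, purely for illustration.
    SCRIPT = """
    <robot-script>
      <move distance="12" units="inches"/>
      <turn angle="90"/>
      <say text="hello"/>
    </robot-script>
    """

    HANDLERS = {
        "move": lambda e: print("move", e.get("distance"), e.get("units")),
        "turn": lambda e: print("turn", e.get("angle"), "degrees"),
        "say":  lambda e: print("say:", e.get("text")),
    }

    def run(script_xml):
        # Dispatch each element of the script to its handler.
        for element in ET.fromstring(script_xml):
            handler = HANDLERS.get(element.tag)
            if handler:
                handler(element)
            else:
                print("unknown command:", element.tag)

    run(SCRIPT)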

-- Gordon

Reply to
Gordon McComb

That depends on how you go about it. I'm thinking of developing the code from the beginning with the assumption that it is running on a computer cluster and that any slave computer systems will boot off the main computer. Adding computing power would be transparent.

That is still some time down the road.

Sure it can.

Perhaps.

That is an interesting idea, but it is a whole development project in and of itself. A lot of the old WinModems used to work this way. The Windows "driver" would download operating code into the modem.

My sub $500 robot should be simple and use off-the-shelf technology.

There are always changes that exceed the initial design.

The funniest part of XML is that it was an effort to document the various extensions people put into HTML early on. XML was not designed so much as it evolved before it became standardized.

Reply to
mlw

Gordon McComb wrote: [...]

What language are they using, Gordon?

- John Casey

Reply to
JGCASEY

Here's some scoop, from the horse's mouth(s) (though I see the page hasn't been updated in a while):

formatting link

Alex has some more pictures here:

formatting link

Some info for the vision software can be found here:

formatting link

Like the ER-1, they go from the assumption that many people already have a laptop, which doesn't have to be dedicated to the robot.

They use an amalgam of languages, basically, chosen for each team member's familiarity with them, plus suitability for the job. The AI stuff is in Lisp; the vision, mapping, pathfinding, and other PC-based components are in C++; the Motorola HC12 is programmed in C.

However, many of the routines (especially the vision) are just DLLs, so they can be called from most any Windows program that supports the DLL calling convention, including VB.

I'm not sure where it is on the page, but Alex's board design for the microcontroller is available for download. If anyone is interested, they like to "batch up" the boards when they're made, to save costs. They're done in sets of two, I believe. You can see Alex has opted to use RJ45 jacks for I/O, which is a great idea. The board itself has built-in options for things like accelerometer and gyro.

The Leaf project robots are very impressive in action. You can see that one of the design concepts is human feedback. The designers consider it important for the robot to be of a certain height and size to encourage human interaction. Note the location of the laptop LCD screen, about where a child's head would be.

-- Gordon

Reply to
Gordon McComb

Very cool. RJ45 looks nice. Good idea!

Rich

Reply to
aiiadict

They don't readily come out! (Unless you want them to.) Anyone who's been in a competition, and lost because one of those silly 0.100" header connectors came off, will appreciate the locking action of RJ45. The cables are easy to make, too.

-- Gordon

Reply to
Gordon McComb

My only issue with those is the amount of board space they take up. I prefer screw terminals because they are compact, easy to connect, and they make good, secure connections. I especially like the 0.1" screw terminals like the ones shown here:

formatting link
Their main disadvantage is that they are kind of expensive, especially when compared to 0.1" headers. On the plus side, the footprint is the same as for 0.1" pin headers, so you can opt for either one on the same PCB, as I do on my boards.

I do agree that the RJ45's are also easy to wire and connect, though.

-Brian

Reply to
Brian Dean

Gordon McComb wrote: > Here's some scoop, from the horse's mouth(s)

And for those that don't, I guess a Mini-ITX could substitute, with the advantage of being able to give the robot an LCD face separate from the keyboard?

Although the face has "cute" appeal and I guess is an easy way of expressing emotions, it is kind of fake in that the eyes are really the webcams and the mouth/ears are really the speaker and microphones. I kind of lean toward a Kismet face.

Thanks for the reply, Gordon. I have found what they are doing very much in line with the way I am thinking, and the platform is similar to my own.

The only thing I don't agree with, for stability reasons, is having the motorized wheels in the centre of the base. My robot base is a rectangle (almost square) with curved corners, with the free-running swivel wheels at one end and the two drive wheels at the other.

-- John

Reply to
JGCASEY

On Alex's version the latest face looks more like a robot. He did it with Flash. You're not limited to a specific face, and you can change it to a boy or a girl (or a robot or whatever). The face animation is tied to gestures, so you can model a happy or sad look, a mean or friendly look, etc. That's completely up to you, and it's connected to the AI engine. Since the animation is just bitmaps, you could even do Arnold ("call me Ahnald") Schwarzenegger.

The project design doesn't specify a platform, and besides, those large ball casters have become hard to find anyway. I used to carry them, and Alex bought one of my last. Those were very nice...made in Italy.

The thing you'd have to change is the dimensions for the turning circle of the robot (in code), so its mapping software knows what's where. And since the mapping software needs to know the dimensions of the robot, you'd have to key in the grid size if you used a larger or smaller base. (Not shown in those illustrations is a sweeping scanner that contains an ultrasonic sensor and an IR sensor. Other sensors can be added. The robot will map a room as it explores its surroundings.)

-- Gordon

Reply to
Gordon McComb

I agree on stability, but with drive wheels at one end, you don't spin on center. My first few bases were rectangular and square. I don't like that, because to turn you have to do some calculations to see what cells in the occupancy grid you are going to swing into (when turning and not moving forward).
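Something like this, as a rough sketch with made-up dimensions and cell size, is the kind of check I mean:

    import math

    # A rectangular base pivoting in place sweeps a circle whose radius is
    # the farthest corner from the pivot; every occupancy-grid cell inside
    # that circle has to be clear before turning.
    def swept_cells(pivot_cell, length_in, width_in, cell_in=2.0):
        radius = math.hypot(length_in / 2, width_in / 2)
        n = int(radius // cell_in) + 1
        cells = set()
        for i in range(-n, n + 1):
            for j in range(-n, n + 1):
                if math.hypot(i * cell_in, j * cell_in) <= radius:
                    cells.add((pivot_cell[0] + i, pivot_cell[1] + j))
        return cells

    # A round base pivoting on its own center never sweeps outside its
    # footprint, which is why spinning in place is so much simpler.
    print(len(swept_cells((0, 0), 16, 14)))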

My preference is a robot that can spin in circles and not have to worry about hitting anything. It simplified my software...

It isn't as stable as I'd like it to be. The offset of front and rear casters is too great. It doesn't take much force to tip it over when pushed diagonally.

I need to spend some time on the mill and make my own casters, or modify the ones I already have. The distance between the pivot of the caster, and the center of the axle on the wheel needs to be lessened. OR, I need to put two casters in front and in back. Current design is one in front, one in back, centered.

Rich

Reply to
aiiadict

Gordon McComb wrote: ...

Or possibly Norman Lovett (Holly - Red Dwarf) - remember* when Queeg took over and Holly was consigned to being night-watchman on a little mobile platform?

Cheers

M

* Remember - if you have watched Red Dwarf, that is.

Reply to
Matthew Smith

Max Headroom? Does it stutter?

-Brian

Reply to
Brian Dean

You b-b-be-b-bet!

-- Gordon

Reply to
Gordon McComb

Of course I don't use occupancy grids.

With a PC based robot you can have very sophisticated software.

I have a similar problem: with only one wheel at the front, the robot tips if either front corner is pushed down. I may solve that by using two front wheels.

Or perhaps with a sophisticated system you could get it to balance on two wheels and toss the casters away :)

- John

Reply to
JGCASEY

JGCASEY wrote: ...

And why not?* Dean Kamen did it with the Segway. This is the crowd that makes the solid-state gyroscopes used in the Segway:

Cheers

M

*Possible why not: Newark lists the CRS-03 at 299 USD :(

Reply to
Matthew Smith

I was thinking of this robot,

formatting link

Cheers,

J
Reply to
JGCASEY
