Navigation Software, considering wheel slippage, compass sensors

Hello group,

I've got my robot's new navigation software working. It does automatic mapping with efficient breadth-first exploration. Just set it down in a house and it will map the whole place out.

I want to take wheel slippage and sensor inaccuracies into consideration. What if the robot's wheels slip while it is traveling? Its internal record of where it is on its map will be incorrect.

I am getting a compass sensor to help keep track of how precise the turns are. I would also like to add a software routine to make sure the robot really is where its memory says it is.

I have done some research and see that some people use "landmark recognition". I think this will work for me: I move the robot around, and once every couple of minutes have him do a sonar scan. I store the points found in the scan and see whether they match the memory of his previous exploration. If they do, he is in the right spot. If they don't, he needs to correct his position in memory.

I may have trouble with the routine because the map in his memory may be rotated at a different angle than the angle at which his scan found the "landmarks". So to match a landmark in memory with the scanned points, I would have to figure out how to match "shapes", or groups of points, which are at different rotations from each other.

With a compass I will be able to turn the robot to a definite heading, scan, and then check the landmarks in memory against the scan points. This sounds like the best idea to me.

Anybody have any C or BASIC code that will do this? I'll figure it out eventually. I have some really SLOW BASIC code that I got from a book.

Are there any other techniques to navigate that take slippage/sensor inaccuracies into consideration? I do NOT want GPS or radio/IR beacons.

Where can I get a compass sensor with decent accuracy? I was thinking of maybe pointing a Game Boy camera at a real compass mounted on the robot and doing line detection to find out where the compass is pointing. It would cost me about $20, but lots of CPU time compared to how other compasses work.

Rich aiiadict AT hotmail DOT com


Reply to
Rich J.

The commercial Cybert robot would reset its position by making contact with two walls at right angles to each other.

You might structure its environment. Looking at your web site I see you used black markers along a white strip to locate a basket containing the desired object. You might use such markers (or reflectors) along the wall? Maybe use button magnets and a reed switch? Perhaps reflectors stuck to the ceiling?

Using a light source and mirrors you might make a set of "railway tracks" where the beams of light (laser?) replace the steel tracks?

===========================================

A shape can be made rotation independent by converting its absolute coordinates to relative coordinates.

=========================================

Or just keep the scanner base pointing in the same direction regardless of the orientation of the robot?

=========================================

Why not reflect light off a lightweight magnetic mirror? Then the scanner base could be rotated until the light is detected.

Are compass sensors all that accurate indoors?

John

=========================================

Reply to
JGCasey

I don't know. I did some research and some are accurate to within a few degrees. It looks like they're expensive (about $50 apiece).

I think I'll use my idea of digitizing the image of a real compass. I'll have to place it far enough away from the motors so I don't influence its direction with them. I can get that done for about $10. Or I could add a tilt function to the Game Boy camera that is on the robot, and just look DOWN at the compass!

I don't want to use beacons, etc, because I want to be able to bring the robot to any house and make him map it out.

I see how to rotate landmarks with the relative coordinate suggestion you made. Thanks! I'll check that out.

I appreciate your suggestions.

Rich aiiadict AT hotmail DOT com

formatting link

Reply to
Rich J.

What kind of functions are you using your Game Boy camera for?

========================================

Fair enough.

I have always thought that a blind, dumb robot could use touch sensors (whiskers) to feel its way around.

About the slippage issue: from the images I've seen of Cybert, its wheels look like oversized plastic cogs with rounded teeth. That seems like a good non-slip design?

========================================

Reply to
JGCasey

Rich,

What MCU are you going to use to interface the Game Boy camera? I am trying to interface it with a PIC, specifically a 16F877A at 20 MHz. Would it be a problem if I asked you a few questions about it? Thank you.

Best regards, Refik

Reply to
Refik Hadzialic

He can play chess on a real chess board (or checkers). I do searches through the squares to see which two squares have changed, and update the data of where each piece is accordingly. I've also done preliminary experimentation with some other vision software: line detection, etc.
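The "which two squares changed" search could be sketched like this. The 8x8 layout and per-square average brightness are the only assumptions; the function name is made up for illustration.

```c
/* Sketch of finding the two board squares whose average brightness
   changed most between two frames (i.e. the move's from/to squares).
   Illustrative names; squares are indexed 0..63. */
#include <stdlib.h>

/* before[64] and after[64] hold each square's average brightness.
   Writes the indices of the two squares with the largest absolute
   change into *sq_a and *sq_b (order carries no meaning here). */
void find_changed_squares(const int before[64], const int after[64],
                          int *sq_a, int *sq_b)
{
    int best1 = -1, best2 = -1, d1 = -1, d2 = -1;
    for (int i = 0; i < 64; i++) {
        int d = abs(after[i] - before[i]);
        if (d > d1)      { d2 = d1; best2 = best1; d1 = d; best1 = i; }
        else if (d > d2) { d2 = d; best2 = i; }
    }
    *sq_a = best1;
    *sq_b = best2;
}
```

Telling which of the two squares is the origin and which is the destination takes one more check (e.g. which square now matches the empty-square brightness).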

I've got sonar mounted on an RC servo (the sonar hacked from a Polaroid camera), bumper switches, optical encoders on the wheels to see how far we've traveled, and IR distance detection. The navigation software uses sonar and bumpers so far.

The encoders on the drive system aren't 100% accurate. I need to reorient the robot on his memory map once in a while, or he loses himself on the map: he thinks he's in the dining room when he has actually crossed over into the kitchen. I want to be able to pick the robot up, turn him manually, and have him figure out where he is. A compass would be nice.

Rich aiiadict AT hotmail DOT com

formatting link

Reply to
Rich J.

SNIP

Essentially I am working on the same thing using vision alone.

Have you tried collecting 360 degrees of sonar measurements to see if you can use them as unique signatures for the robot's position and orientation?
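One way that idea could be sketched: compare a new 360-degree sweep against a stored one at every circular rotation and keep the best fit. The best shift then doubles as an estimate of the robot's orientation offset. The number of readings and the error metric are assumptions here.

```c
/* Sketch of matching 360-degree sonar sweeps as signatures: try every
   circular shift of the new sweep against the stored one.  N and the
   squared-error metric are illustrative choices. */
#include <math.h>

#define N 36   /* one range reading every 10 degrees */

/* Returns the shift (in readings) that minimises the summed squared
   difference between the sweeps; *best_err receives that error so
   the caller can judge whether the match is good enough. */
int best_rotation(const double stored[N], const double scan[N],
                  double *best_err)
{
    int best_shift = 0;
    double best = HUGE_VAL;
    for (int s = 0; s < N; s++) {
        double err = 0;
        for (int i = 0; i < N; i++) {
            double d = stored[i] - scan[(i + s) % N];
            err += d * d;
        }
        if (err < best) { best = err; best_shift = s; }
    }
    if (best_err) *best_err = best;
    return best_shift;
}
```

This is a brute-force circular correlation; at 36 readings it is only about 1300 multiply-adds, cheap even on a small MCU.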

John

Reply to
JGCasey

You might be interested in a technique known as evidence grids, which I am intending to use for my upcoming project. The basic concept is that you store a map of probabilities rather than actual points, and each time a point is verified by the robot's sensors, the probability for that point is increased. This lets you take into consideration the fact that an object may not stay somewhere for ever (if you are planning to make a map persistent over multiple visits).

Additionally, that paper (at least, I think it's that one) discusses some mathematical formulae for things like comparing two evidence grids, which I intend to use to work out position information and align an existing map with the incoming sensor data when trying to determine the robot's location in a new session.

Reply to
Barnaby Mannerings

PolyTech Forum website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.