room recognition

Back at Denning (circa 1984/1985) we were playing with the idea of room recognition based on size and obstacles. The theory is that a robot roams around a room and "learns" the various readings it is likely to see while it is in that room. These measurements were stored in a database and searched when trying to identify position. Then, using the recorded readings, the robot can better estimate its position.
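To make the idea concrete, here is a rough sketch (not the original Denning code; the "DB" is just a Python list, and the distance metric and fixed-length reading vectors are assumptions for illustration):

import math

# Hypothetical "database": (room_name, reading_vector) pairs, where each
# vector is a fixed-length set of range readings recorded while the robot
# "learned" that room.
signature_db = []

def learn(room_name, readings):
    """Record a set of range readings seen while in a known room."""
    signature_db.append((room_name, list(readings)))

def recognize(readings):
    """Return the stored room whose signature is closest to the new readings."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(signature_db, key=lambda entry: distance(entry[1], readings))[0]

# Learn two rooms, then ask which one a fresh scan looks like.
learn("kitchen", [2.1, 1.8, 3.0, 2.9])
learn("hallway", [0.9, 4.5, 0.9, 4.4])
print(recognize([2.0, 1.9, 3.1, 2.8]))   # -> "kitchen"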

Anyway, does anyone know if anyone is doing this sort of technique lately?

Reply to
mlw

I think a recent poster to this newsgroup uses something like that:

formatting link

I remember some group using a 360-degree set of sonar measurements to scan a database for a "best fit" as a means to determine position.

I experimented with this idea "on computer" but not with a real robot. Often, given the same problem, people come up with the same set of possible solutions.

You don't mention how these measurements are made?

Sonar or laser as used by,

formatting link
or some other method/s?

Bouncing sound directly off a flat wall can give a very accurate reading. In practice, I think the collected data would have to be "processed" to extract useful readings.
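For example, one simple way to "process" raw sonar returns (just an illustration; the repeat count and the assumed 6 m sensor limit are made up):

import statistics

def clean_reading(pings, max_range=6.0):
    """Take several pings at the same heading and return one usable range.

    Out-of-range returns (no echo) are discarded, and the median of the
    rest rejects the occasional specular or multipath outlier.
    """
    valid = [p for p in pings if 0.0 < p < max_range]
    if not valid:
        return None              # nothing usable at this heading
    return statistics.median(valid)

print(clean_reading([2.31, 2.29, 5.98, 2.30]))   # -> 2.305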

There may be issues with open-plan layouts, where the kitchen, lounge, and dining areas are simply different parts of a space that is not a simple rectangle.

JC

Reply to
JGCASEY

Pattern matching in an occupancy grid.

The robot has to have been here before. Take a 360-degree sonar scan and store it in a temporary occupancy grid (called a "local" map).
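Something like this, as a rough sketch (the cell size and the "mark only the end of the beam" model are simplifications, not anyone's actual code):

import math

CELL = 0.1   # assumed grid resolution, metres per cell

def scan_to_local_map(scan):
    """scan: list of (bearing_radians, range_metres) from a 360-degree sweep.

    Returns a set of (dx, dy) cell offsets, relative to the robot, that the
    scan marks as occupied.  A fuller model would also mark the cells along
    each beam as free space.
    """
    occupied = set()
    for bearing, rng in scan:
        if rng is None:
            continue             # no echo at this heading
        dx = int(round(rng * math.cos(bearing) / CELL))
        dy = int(round(rng * math.sin(bearing) / CELL))
        occupied.add((dx, dy))
    return occupied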

You know how far the robot could have deviated on the map due to odometry accuracy. The robot thinks it is at X,Y according to the odometer, but could actually be at X+-Error, Y+-Error.

Error becomes larger the more the odometers turn (i.e., the more the robot turns its wheels). When you localize, you reset Error to zero.
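In code form, that bookkeeping could look like this (the growth rate is an assumed constant; real calibration would set it from measured drift):

class OdometryError:
    """Track a position-uncertainty bound that grows as the wheels turn."""
    GROWTH_PER_METRE = 0.05      # assumed: 5 cm of possible drift per metre driven

    def __init__(self):
        self.error = 0.0         # +- metres around the odometric X, Y estimate

    def wheels_turned(self, distance_travelled):
        self.error += abs(distance_travelled) * self.GROWTH_PER_METRE

    def localized(self):
        """Reset the bound after a successful map-match localization."""
        self.error = 0.0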

To localize, you consider all cells X+-Error, Y+-Error.

Take the 360-degree scan and store it in the temporary local map.

Now compare the local occupancy grid to the global occupancy grid.

Start at cell X-Error, Y-Error in the global map. Compare which cells in the local map match the global map relative to this reference point. Store the number of matches in an array.

The next cell is X-Error+1, Y-Error. Again compare which cells in the local map match relative to this reference point, and store the number of matches in the array.

Once done with all cells in the global map (X+-Error, Y+-Error), cycle through the array of match counts. The cell with the highest number of matches between the global and local maps is the cell where the robot is most likely located.
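A rough sketch of that search (building on the scan_to_local_map() sketch above; the names and the occupied-cell-set representation are assumptions, not the actual software):

def localize(global_occupied, local_occupied, x_est, y_est, error_cells):
    """Slide the local map over the global map within the odometry error bound.

    global_occupied: set of (x, y) occupied cells in the stored global map
    local_occupied:  set of (dx, dy) occupied cells relative to the robot
    x_est, y_est:    where odometry thinks the robot is (in cells)
    error_cells:     the +- search radius from the accumulated odometry error

    Returns the (x, y) reference cell with the most matching cells.
    """
    best_cell, best_matches = (x_est, y_est), -1
    for x in range(x_est - error_cells, x_est + error_cells + 1):
        for y in range(y_est - error_cells, y_est + error_cells + 1):
            matches = sum(1 for (dx, dy) in local_occupied
                          if (x + dx, y + dy) in global_occupied)
            if matches > best_matches:
                best_cell, best_matches = (x, y), matches
    return best_cell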

I don't think that this is exactly what you want, but it does compare room information (gathered on previous explorations) to a sonar scan, and will figure out if you are in a room....

You could turn the robot off, move it to another room, switch back on, and tell it to "localize"...

Error would be infinite, so all cells in the global map would be checked. It would take a much larger amount of time to localize without a specified Error, but it can be done (it works with my software, but I've only done it a few times for novelty).
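In terms of the sketch above, localizing "from scratch" is just the same search with the error radius widened to cover the whole stored map (names and numbers here are placeholders):

# global_occupied and the fresh 360-degree scan come from the sketches above.
MAP_CELLS = 200                  # assume a 200x200-cell global map
pose = localize(global_occupied, scan_to_local_map(fresh_scan),
                x_est=MAP_CELLS // 2, y_est=MAP_CELLS // 2,
                error_cells=MAP_CELLS // 2)      # window covers every cell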

Rich

Reply to
aiiadict

A while back, Vassilis Varveropoulos wrote a paper on localization using "feasible poses". That is, given measurements, determine positions that satisfy the values supplied. It's at

formatting link
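As a toy illustration of that one-line description (positions that satisfy the supplied measurements), rather than the paper's actual method; the landmark model, tolerance test, and all names here are assumptions:

import math

def feasible_poses(landmarks, measured, tolerance, candidates):
    """Keep only the candidate (x, y) positions consistent with the measurements.

    landmarks:  {name: (x, y)} positions of known features in the map
    measured:   {name: range} distances the robot actually measured to them
    candidates: iterable of (x, y) positions to test (e.g. every free grid cell)
    """
    feasible = []
    for cx, cy in candidates:
        if all(abs(math.hypot(lx - cx, ly - cy) - measured[name]) <= tolerance
               for name, (lx, ly) in landmarks.items()):
            feasible.append((cx, cy))
    return feasible

# Two known corners and the measured ranges to them narrow the robot down
# to the grid cells that satisfy both readings.
corners = {"a": (0.0, 0.0), "b": (4.0, 0.0)}
ranges = {"a": 2.83, "b": 2.83}
grid = [(x * 0.5, y * 0.5) for x in range(9) for y in range(9)]
print(feasible_poses(corners, ranges, tolerance=0.1, candidates=grid))   # -> [(2.0, 2.0)]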
Gary

mlw wrote:

Reply to
gwlucas

Reply to
Doctor Robotnik
