Indoor GPS

"Localization" is one of THE big problem in robotics applications. It may be more difficult for a robot running in indoor environment (where
ther is no GPS signal) to localize itself.
How a mobile robot can localize itself in a warehouse/factory type of indoor environment?

It's a hard problem to do it GPS-like, where you use some external navigation signal. One technique could be to use 3 or more transceivers at known fixed locations and measure the round-trip time of signal travel. This is how aircraft DME works, but that only needs accuracy in the 0.1-mile range. For a robot that needs to know its position to the nearest few inches it's harder, because the navigational signal travels about a foot every nanosecond. Your navigation hardware/software would have to be able to measure a time delay in the tens-of-nanoseconds range.
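To make the timing budget concrete, here is a minimal sketch of turning three such round-trip measurements into a 2D fix; the transceiver layout, constants, and function names are illustrative assumptions, not any real system's API.

    # Sketch: 2D position from round-trip times to three fixed transceivers.
    # All positions, times, and names here are illustrative.
    import numpy as np

    C_M_PER_NS = 0.2998   # speed of light, metres per nanosecond

    def trilaterate(anchors, rtt_ns):
        """anchors: 3 known (x, y) positions; rtt_ns: measured round-trip times."""
        r = C_M_PER_NS * np.asarray(rtt_ns, dtype=float) / 2.0   # one-way ranges
        (x1, y1), (x2, y2), (x3, y3) = anchors
        # Subtracting the three circle equations pairwise leaves a linear system.
        A = np.array([[2.0 * (x2 - x1), 2.0 * (y2 - y1)],
                      [2.0 * (x3 - x1), 2.0 * (y3 - y1)]])
        b = np.array([r[0]**2 - r[1]**2 + x2**2 - x1**2 + y2**2 - y1**2,
                      r[0]**2 - r[2]**2 + x3**2 - x1**2 + y3**2 - y1**2])
        return np.linalg.solve(A, b)   # (x, y) of the robot

    # One nanosecond of timing error already moves each range by ~15 cm,
    # which is why inch-level accuracy pushes toward sub-nanosecond timing.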
Or you could do it with bearings to 2 or more transmitters in the warehouse, which I think would be easier to implement but less accurate, and it gets worse with distance. The best solution is probably a combination of navigation by bearing plus the other usual stuff like short-range obstacle detection and odometry.
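A minimal sketch of the bearing idea, under the added assumption that the robot knows its own heading (say, from a compass) so the bearings can be expressed in world coordinates; all names and geometry are made up for illustration.

    # Sketch: position fix from world-frame bearings to two known beacons.
    import math

    def fix_from_two_bearings(b1, theta1, b2, theta2):
        """b1, b2: beacon (x, y); theta1, theta2: bearings (rad) from the
        robot toward each beacon, measured in the world frame."""
        u1 = (math.cos(theta1), math.sin(theta1))
        u2 = (math.cos(theta2), math.sin(theta2))
        # The robot P satisfies P = b1 - d1*u1 and P = b2 - d2*u2, so
        # d1*u1 - d2*u2 = b1 - b2: a 2x2 linear system in d1, d2.
        det = -u1[0] * u2[1] + u2[0] * u1[1]
        if abs(det) < 1e-9:
            raise ValueError("beacons and robot nearly collinear, no fix")
        rx, ry = b1[0] - b2[0], b1[1] - b2[1]
        d1 = (-rx * u2[1] + u2[0] * ry) / det
        return (b1[0] - d1 * u1[0], b1[1] - d1 * u1[1])

    # An angular error of e radians displaces the fix by roughly range * e,
    # which is the "gets worse with distance" effect mentioned above.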
Anybody else have ideas for navigation by external signals in a non-GPS environment?
<Hawk> wrote in message> "Localization" is one of THE big problem in robotics applications.

Place infrared beacons all over the place, each coded differently. The robot can then locate each one and, by angle and some simple trig, know its position. I use this scheme and it works pretty well.
Cheers!
Sir Charles W. Shults III, K. B. B. Xenotech Research 321-206-1840
Sir Charles W. Shults III wrote:

I'm thinking about using some IR beacons in an upcoming project. Do you find that reflected IR from the beacons causes errors?
I have had very few problems with it. I use a rotating slit sensor on the robot, and when it is not sure of its position, it stops, scans, and takes positions. The scheme works best when you overlay the robot's expected position (for instance, what it is calculating on the fly from its movements and last known position) with the calculated results from the sensor angles. Note that mathematically, for two beacons, there is a circular path or arc that will yield the same angle, so three beacons is the minimum for absolute position sensing.

But this system works very well even if you don't need absolute position, because you can have a robot look for and home on a beacon; then, once in motion, it can start looking at an expected angle for the next beacon. This generates a guaranteed path to a known destination automatically. And if each beacon is numbered, for instance, you can define your destinations generally by telling the robot to traverse the nodes. An example would be, "go to beacons 7, 4, 3, in that order." The robot will then end up in a known area very accurately. I played with this concept for years and have had very good success with it.
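A sketch of how the three-beacon fix could be computed, assuming the scanner reports bearings relative to the robot's unknown heading; the beacon layout and poses are fabricated for the demo, and the dead-reckoned pose seeds the solver, mirroring the overlay idea above.

    # Solve (x, y, heading) from bearings to three beacons at known map
    # positions. All coordinates and angles below are made up.
    import numpy as np
    from scipy.optimize import least_squares

    beacons = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])  # known map spots

    def predicted_bearings(x, y, th):
        return np.arctan2(beacons[:, 1] - y, beacons[:, 0] - x) - th

    measured = predicted_bearings(4.0, 3.0, 0.5)   # fabricated "scan" at a known pose

    def residuals(p):
        err = predicted_bearings(*p) - measured
        return np.arctan2(np.sin(err), np.cos(err))   # wrap angles to [-pi, pi]

    guess = [5.0, 2.0, 0.3]                    # e.g., the dead-reckoned pose
    print(least_squares(residuals, guess).x)   # recovers ~(4.0, 3.0, 0.5)

With only two beacons, every pose on a circular arc yields the same included angle, which is exactly the ambiguity the post notes; the third beacon breaks it.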
Cheers!
Sir Charles W. Shults III, K. B. B. Xenotech Research 321-206-1840
Thank you for sharing your experience with us.
Do you have some sort of document, code, etc. explaining the math/algorithm for how odometry, the last known position, and the detected IR beacon angles are combined/fused to update the robot's position and orientation?
Can you give us a bit more info about the optical properties of your IR beacons and receiver? How "narrow" are your beams? How accurately can you detect the IR beacon angles?

<Hawk> wrote in message> Thank you for sharing your experience with us.

The odometry is pretty typical trig and wheel-rotation counting. Any robot is considered to be at the charger when it resets; I use the charger as the reference point for everything, with standard Cartesian coordinates. The easiest way to handle everything is to use standard (x, y) in rectangular format and to convert "perceived" changes in angle and distance into the same rectangular format. You need to know only the position and orientation (and the details are for those who have not yet confronted the concept of making a robot figure out its own way).

Consider a sheet of graph paper, but with unclear markings. That will give you a first approximation of your map. As you navigate, errors accumulate, and the theoretical position and the real-world position become harder to reconcile. Add carpeting or a lawn and things get inaccurate rapidly. Now, overlay on this a system that is also rather inaccurate but always stable, unlike the dead-reckoning and odometry methods above. Between the two, you get some fairly stable reference points and can "reset" your known positions to some degree, and this keeps your known position a bit more accurate.

For example, imagine that you walk to the store but take a shortcut through a few acres of unfamiliar woods. When you emerge from the other side, you may have no good idea of where you are, just "emerging near your target." How do you reconcile your position? As organisms, we recognize landmarks; these allow us to reset our internal map references and get a real grip on where we are. We use a combination of experience and gut instinct. For a robot, we can get the same sort of performance by providing landmarks of a sort. The IR beacons do that.

So now we have a rough idea of where the robot is in code, but we need to add one more thing to make it all work: we must teach the robot to "visualize a map." The real test of a programmer's acumen is here, in the ability to translate logic and numbers into a map representation that matches the robot's world closely enough. Once you settle on a representation (and a good one is a "boundary map," where you represent impassable obstacles or boundaries as line segments), you have to program your map into the robot. As your robot navigates, it will test an extension of its path and see if it is expected to collide with anything. This is the first "layer" of the system. Then you add in any sensory data you have, for instance whiskers or proximity sensors, to register any true collisions and see how closely they correspond with your internal map. If you have GPS data, you would do likewise. You are essentially making an overlay of real sensory data and projected "mental" map data. Between the two, you can resolve quite well just where the robot happens to be.

Now for IR beams or beacons. These too have real-world coordinates, but now you deal with issues of visibility and range. My solution is to create conic zones that have known positions on the map and overlay them. Your robot will be able to see a particular beacon IFF (if and only if) it is within the conic zone and aimed at a specific angle. This means that you have new navigation data you can use: for one, you can predict when and where the beam should be visible as your robot moves around. These "appearance zones" can serve as verification in themselves. If they match your internal odometry and position as expected, then you know that you are on track. You have verified both the orientation and position of the robot.
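A rough sketch of the odometry bookkeeping described above (charger as origin, standard Cartesian pose), assuming a differential-drive base with wheel encoders; all the constants are hypothetical.

    import math

    TICKS_PER_REV = 512            # encoder counts per wheel revolution (hypothetical)
    WHEEL_DIAM    = 0.10           # metres (hypothetical)
    WHEEL_BASE    = 0.30           # metres between drive wheels (hypothetical)
    M_PER_TICK = math.pi * WHEEL_DIAM / TICKS_PER_REV

    x, y, heading = 0.0, 0.0, 0.0  # pose in charger-centred Cartesian coordinates

    def update_pose(left_ticks, right_ticks):
        """Fold one interval of wheel counts into (x, y, heading)."""
        global x, y, heading
        dl = left_ticks * M_PER_TICK
        dr = right_ticks * M_PER_TICK
        d   = (dl + dr) / 2.0                   # distance moved by the centre
        dth = (dr - dl) / WHEEL_BASE            # change in heading
        # Use the mid-interval heading: the usual small-step approximation.
        x += d * math.cos(heading + dth / 2.0)
        y += d * math.sin(heading + dth / 2.0)
        heading += dth

Each beacon fix then "resets" the drift this accumulates, which is the overlay-and-reset loop described above.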
For the simplest and most brain-dead types of robots that navigate by beacon, you place a rotating mirror or sensor on top of the robot that has a known home position (sort of like top dead center on the cylinder in a car engine), and then, while the robot is stationary, you rotate the sensor and map out which beacons you see as the sensor turns. This, too, in a complex robot, can provide a very good indicator of your absolute position. In a simple robot, you just tell it to home on the beacon and follow it in, and every so often you look for the next beacon. Then you may home in on its position to get to the next leg of the journey.

My best summary is for people to dispense with absolute position sensing, because it can introduce far too many complications. Make your robots more like organisms, and navigate by a combination of dead reckoning, internal odometry, and beacon or landmark sensing. It is very robust, and once you get the hang of overlaying your internal maps, it becomes extremely intuitive.
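The beacon-to-beacon traversal might look something like the following sketch; scan_for, drive_toward, and arrived are hypothetical helpers standing in for the stop-and-scan hardware described above.

    def traverse(route, scan_for, drive_toward, arrived):
        """Visit numbered beacons in order, e.g. traverse([7, 4, 3], ...)."""
        for beacon_id in route:
            bearing = None
            while not arrived(beacon_id):
                seen = scan_for(beacon_id)      # stop-and-scan when unsure
                if seen is not None:
                    bearing = seen              # refresh the homing bearing
                if bearing is not None:
                    drive_toward(bearing)       # home on the last good bearing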

Now, for the technical details on the IR beacons. I use high-intensity IR LEDs, and I modulate them with the simplest of signals: a brute-force 555 oscillator set to a unique frequency. A standard MOSFET driver runs the LEDs. It isn't necessary to use more than about a 20% duty cycle to achieve a good, usable signal, and that means that if you resort to battery operation for them, you can get reasonable life spans. The beacons are made to be either wide or narrow angle by your choice of how many LEDs and what dispersal pattern they have. You can make them any way you choose, and I typically make a really important beacon (like the battery charger) have a wider dispersal pattern and more LEDs to ensure greater visibility. More specific "landmark" beacons can be narrow to keep the robot on a track if it is meant to home in on them from a specific position; this also means that narrow-beam beacons typically have a greater range, as most of the light ends up in a narrow cone. I do not use lenses on the beacons themselves. I rarely use lasers as beacons because they end up being too narrow, although you can use a small lens to spread a laser beam so that you can get a beacon with maybe a kilometer of range or so. This might be useful on a farm or in a large warehouse.

The receiver is simply a plastic lens mounted so that it looks straight up, with an NPN phototransistor below it. A rotating mirror angled at 45° is spun above the lens, meaning that it can get a true 360° field of view. I use a ring gear to spin it so there is no central shaft, and the light reflected off the small mirror can proceed directly down to the lens. A small stepper motor can be used to rotate the ring gear (and of course, the mirror). The home position is known by an optical sensor, although a small metal flag can be used with a proximity sensor or a reluctance sensor. The mirror should have a collimator box around it (two opaque thin plates painted flat black), and this is how you ensure a narrow "view window" as it rotates. I completely ignore vertical data here and use only the horizontal data; the angle with respect to the ground position is the important thing, not the height. If you want to get sophisticated, you could add height as well and see if you can add one more measure to your data in confirming your position, but I was never in a position to need that.

So a typical beacon would have a beam angle set by your specific needs: ten degrees is a pretty good long-range beacon, and 90 to 180 degrees is pretty typical of short-range beacons. Remember that the beacon beam angle is not as important as your sensor's ability to locate the beam source!
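One plausible way to tell the uniquely modulated beacons apart on the receiver side (the post doesn't specify the demodulator) is to sample the phototransistor and compare the power at each beacon's tone, for instance with the Goertzel algorithm; the frequencies below are invented.

    import math

    BEACON_TONES = {1: 1200.0, 2: 1800.0, 3: 2400.0}   # Hz, one per beacon (invented)

    def goertzel_power(samples, tone_hz, sample_rate):
        """Signal power at a single frequency; cheaper than a full FFT."""
        w = 2.0 * math.pi * tone_hz / sample_rate
        coeff = 2.0 * math.cos(w)
        s_prev = s_prev2 = 0.0
        for v in samples:
            s = v + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

    def identify_beacon(samples, sample_rate):
        """Return the beacon id whose tone is strongest in the samples."""
        return max(BEACON_TONES,
                   key=lambda b: goertzel_power(samples, BEACON_TONES[b], sample_rate))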

The accuracy of angle measurement depends on the sensing of the home position of the rotating mirror, the width of the collimator, and the stepper motor step size (keep gear ratios in mind here). Once again, the beacons make it possible for a robot to navigate "softly" instead of with centimeter-precise location. It pays off greatly to make your robot able to detect its goal through other means, rather than expecting it to be absolutely positioned. Using absolute positioning would be like going to a restaurant, being a meter to the left of the door, and giving up because you cannot find the handle. "Soft" navigation and landmark sensing allow you to be a meter or more off target, yet able to recover because you have alternate sensing methods. Note that when you are programming your navigation map, you must treat mirrors or other reflective surfaces as extensions of your beam and its geometry. This sounds like a pain, but in reality it can be a help.
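A back-of-envelope check on that angular resolution, with hypothetical numbers for the stepper and gearing:

    import math

    STEP_DEG = 1.8 / 4.0      # 1.8-degree stepper geared 4:1 onto the ring
    RANGE_M  = 10.0           # distance to a beacon
    err = RANGE_M * math.tan(math.radians(STEP_DEG))
    print(f"{STEP_DEG:.2f} deg/step -> ~{err * 100:.0f} cm lateral error at {RANGE_M:.0f} m")
    # ~8 cm at 10 m: plenty for "soft" navigation, not for precise docking.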
Cheers!
Sir Charles W. Shults III, K. B. B. Xenotech Research 321-206-1840
THANK YOU VERY MUCH!
What a fantastic posting! Full of information, in an easy-to-read and understandable format.
Do you have images of your robotic work somewhere on the net?

<Hawk> wrote in message> THANK YOU VERY MUCH !..

I will be reposting my older robotics page shortly, but my Mars research has taken a great deal of my time. All will be back online as quickly as I can manage it. And thanks; I am thinking of writing a book on artificial intelligence that will show many of the things I have worked out and how the systems use an internal "visualization" to help a robot react to the real world. A big part of it will be overlaying sensory modes to create realistic internal theater models.
Cheers!
Sir Charles W. Shults III, K. B. B. Xenotech Research 321-206-1840
--=[ Thank you. ]=-- We (a small group of robotics fanatics of different ages and professional backgrounds) have followed some of your newsgroup postings. We enjoyed reading your posts and learned from you. --=[ Thank you. ]=--
We are looking forward to the publication announcement of your book(s).
Since you have mentioned that you are working on some Mars research, you may be able to enlighten us on another matter. For a rover simulation, we are searching for digital terrain elevation data of the Mars rovers' navigation field.
Do you know where we can find the digital terrain elevation data (in any format) of the region where the Mars rovers Spirit and/or Opportunity navigate?
Do you know of any link for this?

