It occurred to me, over a few, um, discussions, where I think our arguments are coming from. It is sort of interesting, but think about it.
I come from a very strong engineering background; while I know the theory cold, I implement from a very practical perspective.
Take, for instance, a discussion about a robot moving at 4 miles per hour, and how 0.01 seconds represents about 3/4 of an inch of travel. Yes, all those numbers are true, and they are 100% correct, but is that a practical and realistic expectation of precision?
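For what it's worth, the arithmetic does check out. Here's a quick sanity check in Python, using the 4 mph and 10 ms figures from that discussion:

```python
# Quick sanity check of the numbers above: 4 mph over a 10 ms interval.
speed_mph = 4.0
inches_per_mile = 5280 * 12       # 63,360 inches in a mile
speed_in_per_s = speed_mph * inches_per_mile / 3600

print(f"{speed_in_per_s:.1f} in/s -> {speed_in_per_s * 0.01:.2f} in per 10 ms")
# prints: 70.4 in/s -> 0.70 in per 10 ms, i.e. just under 3/4 of an inch
```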
I don't know about you, but my sub-$500 robot is not a precision instrument. Chances are, your robot isn't either.
At Denning Mobile Robotics, we spent months trying different types of wheels made from rubber, urethane, and plastic, inflated and solid, different diameters, smooth or with tread, flat or conical, just to conclude that there is no reliable way to know the robot's true position based on wheel motion, even on a controlled surface.
Once the navigation team realized that there was no reliable precision in the position based on the wheels, they had to think differently: they could use the wheel position as an approximation that degrades over time, with some periodic corrective process (beacons, in their case).
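Here's a minimal sketch of that idea, not Denning's actual code; the beacon_fix() helper and the error magnitudes are made up for illustration:

```python
import random

# Minimal sketch of "odometry as a degrading approximation", not Denning's
# actual code. Every move picks up a little unknown error; a periodic
# "beacon fix" (the hypothetical beacon_fix() below) resets the estimate.

def beacon_fix(true_pos):
    """Hypothetical absolute position fix, e.g. from a wall-mounted beacon."""
    return true_pos

true_pos = estimate = 0.0
for step in range(1, 101):
    commanded = 1.0                                     # inches per move
    true_pos += commanded * random.uniform(0.97, 1.03)  # slip, diameter error
    estimate += commanded                               # odometry trusts the command
    if step % 25 == 0:                                  # the corrective process
        print(f"step {step}: drift before fix = {abs(true_pos - estimate):.2f} in")
        estimate = beacon_fix(true_pos)
```

Run it a few times: the drift between fixes wanders up unpredictably, then collapses at each fix. That's the whole strategy in a nutshell.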
The "realistic" precision is quite low. So, in the end, you can only get "so" close no matter how much precision you apply to the problem. It turns out that you can get pretty darn close with a hell of a lot less. That is how I have approached my robot.
Take the dual-wheel differential drive. Even if there were no variation in wheel diameter, and I could hold motor speed variation to 0.001%, the robot would still drift because of surface imperfections (dirt, rugs, the dog, the kid). How much better could it realistically get? Seriously, think about it.
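To make that concrete, here's a toy simulation of a differential drive with perfect motors, where the only error is a small, made-up amount of per-wheel surface noise (the wheelbase and noise figures are assumptions, not measurements):

```python
import math, random

# Toy differential-drive model: the motors are assumed PERFECT (zero speed
# variation); the only error is a small random per-wheel surface effect
# (dirt, rug pile, dog hair). Wheelbase and noise level are made-up numbers.
WHEELBASE = 12.0   # inches between the two drive wheels
STEP = 0.5         # inches of commanded travel per update

x = y = heading = 0.0
for _ in range(2000):                          # ~1000 inches of travel
    dl = STEP * (1 + random.gauss(0, 0.002))   # left wheel, 0.2% surface noise
    dr = STEP * (1 + random.gauss(0, 0.002))   # right wheel
    heading += (dr - dl) / WHEELBASE           # tiny mismatch turns the robot
    d = (dl + dr) / 2.0
    x += d * math.cos(heading)
    y += d * math.sin(heading)

print(f"after {2000 * STEP:.0f} in of 'straight' driving: "
      f"lateral drift {y:.2f} in, heading off {math.degrees(heading):.2f} deg")
```

Notice the motors contribute nothing to the error here, and the robot still wanders off the line. More motor precision buys you exactly nothing against the floor.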
The motor control will work fine 99.999% of the time; every once in a while it may miss a beat or two, but the code accounts for that and compensates.
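As a sketch of what that kind of compensation can look like (one plausible scheme, with made-up numbers, not my exact code), consider a simple proportional loop on the encoder counts; a missed beat just shows up as error, and the error feeds back:

```python
# Sketch of "account for it and compensate", assuming a simple proportional
# loop on encoder counts. A missed control cycle grows the count error by one
# cycle's worth, and the feedback absorbs it over the next few cycles.

KP = 1.0
target = 0.0     # where the encoder count should be
actual = 0.0     # where it actually is (idealized, continuous model)
power = 50.0     # nominal motor command; 50 units ~ 10 ticks per cycle

for cycle in range(100):
    target += 10                    # desired ticks this cycle
    if cycle != 42:                 # cycle 42 "misses a beat" entirely
        actual += power / 5.0       # crude motor/encoder response
    error = target - actual
    power = 50.0 + KP * error       # feedback: the error drives the command

print(f"residual error after the missed beat: {error:.3f} ticks")
```

The missed cycle briefly leaves the count 10 ticks behind, and the loop works it back to essentially zero. No single beat matters; only the accumulated error does.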