Kalman filtering of video and IMU output at the same time?

Hi,
I have not seen this in the literature, but I may be looking in the wrong direction. Does anybody know of a Kalman-filtering type of scheme that
would take as input not only the usual IMU outputs, such as acceleration, gyro rates, heading and GPS coordinates, but also the output of the video?
Most of what I have seen in the vision community tries to decompose the images from the video (using very clever schemes), but I have seldom seen any part of the decomposition being used directly against other IMU data to enable good filtering.
Any ideas or leads?
Jake.
There are two parts to this. The hard part is getting attitude and heading info from the video. Including it in a Kalman filter is the easy part: it would be incorporated just like the other heading data. I think there has been discussion on the SourceForge rotomotion group about sensing the horizon optically, maybe with video. A good place to look after SourceForge would be to search Google for "horizon sensing".
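To make the "easy part" concrete, here is a minimal sketch (toy numbers of my own, not from any particular autopilot) of how a vision-derived heading would be fused exactly like any other heading measurement, in a scalar Kalman update:

```python
import math

def kalman_heading_update(theta_est, var_est, theta_meas, var_meas):
    """Fuse one heading measurement (e.g. from vision-based horizon
    sensing) into a scalar Kalman heading estimate.  Angles in radians."""
    # Innovation, wrapped to [-pi, pi) so 359 deg vs 1 deg fuses correctly.
    innov = (theta_meas - theta_est + math.pi) % (2 * math.pi) - math.pi
    gain = var_est / (var_est + var_meas)      # Kalman gain
    theta_new = theta_est + gain * innov       # corrected heading
    var_new = (1.0 - gain) * var_est           # reduced uncertainty
    return theta_new, var_new

# Gyro-integrated heading (drifty, variance 0.04) corrected by a
# vision heading (variance 0.01):
theta, var = kalman_heading_update(0.50, 0.04, 0.40, 0.01)
```

The same update would run for any other heading source; only `var_meas` changes to reflect how much you trust the camera.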
Mitch
Mitch,
Thank you very much.
"Horizon sensing" got me this on Google: http://www.ctie.monash.edu.au/hargrave/horizon_sensing_autopilot.pdf and http://www.isr.umd.edu/Labs/CSSL/horiuchilab/projects/horizon/horizonchip.html
as well as other answers using rotomotion as a keyword.
This is great, as it tells me I am not the only one to think along those lines. However, this is mostly for UAVs, whereas I am more on the land side of things. And while the horizon is definitely critical information, the real value of using video together with IMU-type data (acceleration, heading, GPS) is more along the lines of making sense of the scene in which I am navigating.
For instance, it would seem to me that, given all the information from the IMU plus some of the optic-flow computation, one should be able to segment the scene more easily than with current techniques, where the segmentation is done using ad-hoc criteria that do not take the dynamics of the robot into account. This segmentation would spot obstacles around which I should be navigating.
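As a toy sketch of that idea (hypothetical numbers and threshold, and assuming a pure yaw rotation for simplicity): the gyro predicts the optic flow that ego-motion alone would cause, and pixels whose measured flow deviates from that prediction are flagged as candidate obstacles:

```python
import numpy as np

def flag_obstacles(flow_x, gyro_yaw_rate, focal_px, dt, thresh=2.0):
    """Flag image columns whose horizontal optic flow deviates from the
    flow predicted by the gyro alone.  Under pure yaw every pixel should
    shift by roughly -omega * f * dt pixels; anything moving differently
    is likely an independent (or very near) object."""
    predicted = -gyro_yaw_rate * focal_px * dt   # pixels, same everywhere
    residual = np.abs(flow_x - predicted)
    return residual > thresh                     # boolean obstacle mask

# Toy example: 8 image columns, camera yawing at 0.1 rad/s with a
# 500 px focal length; columns 3-4 contain a close obstacle whose
# measured flow is much larger than the rotation alone predicts.
flow = np.array([-5.0, -5.1, -4.9, -12.0, -11.5, -5.0, -5.2, -4.8])
mask = flag_obstacles(flow, gyro_yaw_rate=0.1, focal_px=500, dt=0.1)
```

A real version would handle translation (where predicted flow depends on depth) and the full rotation, but the principle, IMU-predicted flow minus measured flow, stays the same.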
Jake.
Any other ideas?
Jake.
I'm not sure what your real goals are. If it's only to provide more data for the Kalman filter to give you better positioning information, one idea could be to do as optical mice do:
compare the current image with the last one in order to derive turn-rate or travel-rate information. If the camera is aligned with the travel direction, I believe it should mostly give you turn rate. If it's mounted sideways, it would likely give you a mix of turn and speed information. This could perhaps be compensated for by having one right-looking and one left-looking cam.
To be honest, I'm not sure it would be worth the effort just for improving the localization skills of your bot.
Map-building from camera images could be tricky. Stereo cams could be an idea.
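A minimal sketch of the optical-mouse idea (toy frames, assuming the image content simply translates between frames): estimate the horizontal shift by cross-correlating the column-intensity profiles of two consecutive frames:

```python
import numpy as np

def estimate_shift(prev, curr):
    """Optical-mouse style: estimate the horizontal pixel shift between
    two frames by cross-correlating their column-intensity profiles."""
    a = prev.sum(axis=0)
    a = a - a.mean()                   # zero-mean profile of frame 1
    b = curr.sum(axis=0)
    b = b - b.mean()                   # zero-mean profile of frame 2
    corr = np.correlate(b, a, mode="full")
    # Peak position gives the lag; positive = content moved right.
    return int(np.argmax(corr)) - (len(a) - 1)

# Two 4x16 frames: a bright vertical stripe that moves 3 px right.
f0 = np.zeros((4, 16))
f0[:, 5] = 1.0
f1 = np.roll(f0, 3, axis=1)
shift = estimate_shift(f0, f1)         # -> 3
```

With a known frame rate and focal length, that pixel shift converts directly into the turn rate (forward-looking cam) or speed (side-looking cam) you describe.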
/Leif
Thanks Leif,
I guess I need to be more specific, then.
I am thinking of using the video data to perform an analysis of objects that are in the way of the bot. I cannot use a SICK laser system or ultrasound; this is probably where my constraints differ from others'. It could be a cost issue, but actually it is not: I want a bot that behaves, and can be trained, in a manner that a human can "figure out" quite easily. We are not bats.
When you take a look at applications of advanced techniques such as FastSLAM (for the Simultaneous Localization and Mapping problem), or other similar techniques currently in use, you see sonar or a SICK laser used as the main detection mechanism, because it makes it "easy" to see objects in the way of the bot.
A stereo cam is definitely worth considering, because humans use stereo all the time; however, I have seldom seen a paper using, for instance, Bayesian inference on video data, hence my question to this group.
Jake.
How about 3D evidence grids (http://www.frc.ri.cmu.edu/users/hpm/project.archive/robot.papers/1996/9609.stereo.paper/SGabstract.html) with stereo cameras to get depth information (http://www.ptgrey.com/)? This would seem to be the extension of laser/sonar systems to 3D, using video.
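The core of an evidence grid is simple enough to sketch. Here is a toy log-odds update for one stereo depth ray (the grid size and evidence weights are my own made-up numbers; real systems use calibrated sensor models):

```python
import numpy as np

def update_evidence(grid, hit_idx, free_idxs, l_hit=0.85, l_free=-0.4):
    """Log-odds evidence-grid update for a single depth ray: the cell
    where the ray ends gains occupancy evidence, the cells it passes
    through gain emptiness evidence."""
    for idx in free_idxs:
        grid[idx] += l_free        # ray passed through -> likely empty
    grid[hit_idx] += l_hit         # ray terminated here -> likely occupied
    return grid

# Tiny 4x4x4 grid; one ray along x at (y=1, z=1), hitting cell x=2.
grid = np.zeros((4, 4, 4))
grid = update_evidence(grid, (2, 1, 1), [(0, 1, 1), (1, 1, 1)])
# Convert the hit cell's log-odds back to an occupancy probability.
p_hit = 1.0 / (1.0 + np.exp(-grid[2, 1, 1]))
```

Every stereo disparity pixel contributes one such ray, so the grid accumulates evidence over many frames rather than trusting any single noisy depth estimate.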
Mitch

What is a SICK laser?
I meant a ranging system that uses a laser, like: http://www.informatik.uni-freiburg.de/~gutmann/pioneer/map.html
Jake.
Sick is a manufacturer of laser products.
Tim

Other search words you might find useful are "Bayesian filtering localization". The book "Artificial Intelligence and Mobile Robots: Case Studies of Successful Robot Systems" by Kortenkamp et al. has a good description of localization by combining sonar data with other data using a Bayesian filter. I don't know how you'd substitute video data, but maybe you will.
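The measurement step of such a Bayesian filter is easy to sketch (toy five-cell world and a made-up sensor model; a real system would also include a motion-update step):

```python
import numpy as np

def bayes_update(belief, likelihood):
    """One Bayesian measurement update for grid localization: multiply
    the prior belief by the sensor likelihood and renormalize.  The
    likelihood could come from sonar, as in the book, or from matching
    a camera image against the view expected at each cell."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Robot somewhere in 5 cells, uniform prior.  A sensor reports
# "near a wall", which (in this hypothetical model) is twice as
# likely to be observed in the end cells 0 and 4.
belief = np.full(5, 0.2)
belief = bayes_update(belief, np.array([0.4, 0.2, 0.2, 0.2, 0.4]))
```

Substituting video only changes where the likelihood vector comes from; the update itself is sensor-agnostic.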
Mitch
Thank you Mitch,
This is exactly what I was thinking, but unfortunately I have looked at that literature and have not seen any of these filtering techniques using video data.
Jake.
How's your project going, Mitch?
Regards
Tim

Hi,
Here is our new, updated call for the contest. A PDF version is available here: http://roboka.org/call.pdf
Roboka contest: Call for participation
The Roboka contest is a wrestling robot programming contest held on the Internet. It uses a free version of the Webots mobile robot simulation software, which allows the robots to be programmed in the Java programming language. The contest is organized by Cyberbotics Ltd. and co-sponsored by the Robot-CH association, the BIRG research group (EPFL), K-Team SA and the EURON European Robotics Network of Excellence. It follows the four previous robot programming contests organized in 1998, 2000, 2002 and 2003 by Cyberbotics Ltd.
Video: http://roboka.org/video/judo.mpeg (7 MB)
Home page: http://roboka.org
Date: May 1st, 2004 - May 21st, 2005
Focus
Research and development in humanoid robotics has recently achieved spectacular results in both universities and industry. Humanoid robotics nevertheless remains a challenging research area, especially at the motor-control level and the artificial intelligence (AI) level. The most fascinating issues include generating efficient and robust walking gaits, coordinating servo motors with sensors, performing image processing, handling human interaction, etc. The goal of this programming contest is to investigate the control and AI techniques best suited to a humanoid robot engaged in a robot wrestling game.
Just as robot soccer has proved to be an interesting challenge for fairly simple mobile robots, with many contests organized worldwide, robot wrestling appears better suited to more complex humanoid robots. Robot wrestling involves two humanoid robots facing each other. As in real wrestling, the goal for each robot is to make the other robot fall down on the floor. This exercise requires the use of many interesting robotics techniques, including vision to locate the opponent, motor control to move towards the opponent, and AI to choose the best action to destabilize the opponent, or to fake, anticipate or avoid an attack.
As real humanoid robots are currently quite expensive, a model of a humanoid robot is provided in the Webots mobile robot simulator. This model uses real-time physics simulation to provide realistic movement and collision detection. Moreover, the robot model includes several simulated sensors, such as cameras, distance sensors, touch sensors and an inclinometer.
Caution
Although wrestling is still sometimes considered a martial art rather than a sport, we do not aim at developing warrior robots. Rather, we consider robot wrestling an ideal sport, and framework, for developing dexterous and clever bio-inspired humanoid robots that will prove useful and friendly to human beings. No roboticist should ever forget Isaac Asimov's three laws of robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Subscription
Subscription is free and open to anyone at any time until May 9th, 2005. However, early subscription is highly recommended. Subscribers will receive a special version of the Webots mobile robot simulator containing a couple of humanoid robot models on a wrestling tatami. Subscribe online at http://roboka.org
Schedule
Beginning of the contest: May 1st, 2004. End of the online contest: May 9th, 2005. Finals: May 21st, 2005 at Yverdon-les-Bains, Switzerland, during the Swiss Robotics Days.
Sponsors
Robot-CH association: http://www.robot-ch.org
Cyberbotics Ltd.: http://www.cyberbotics.com
BIRG research group (EPFL): http://birg.epfl.ch
K-Team S.A.: http://www.k-team.com
EURON European Robotics Network of Excellence: http://www.euron.org
Jake,
The only answer to this is SCAAT Kalman filtering, but it is a good answer. Normal Kalman filtering depends on working out an observable position (that is, an exact or over-specified measurement of the position). A SCAAT Kalman filter can take single measurements which do not completely define the position. This allows for data fusion.
I am working on open-source code for a Kalman filter, and once I finish it I will be very keen for people to test it to destruction.
Search for "SCAAT Kalman", and read the PhD thesis of Greg Welch.
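The heart of a SCAAT update is an ordinary Kalman correction applied to one scalar measurement at a time. A sketch (toy two-state example with my own numbers, not from Welch's thesis):

```python
import numpy as np

def scaat_update(x, P, z, h, r):
    """Single-Constraint-At-A-Time Kalman update: fold in one scalar
    measurement z = h.x + noise, even though h alone cannot observe
    the full state.  x: state (n,), P: covariance (n, n),
    h: measurement row (n,), r: scalar measurement variance."""
    s = h @ P @ h + r                  # innovation variance (scalar)
    k = (P @ h) / s                    # Kalman gain (n,)
    x = x + k * (z - h @ x)            # state correction
    P = P - np.outer(k, h @ P)         # covariance reduction
    return x, P

# State = [position, heading].  A single vision measurement observes
# only the heading component, yet still tightens the filter.
x = np.array([0.0, 0.3])
P = np.eye(2) * 0.5
h = np.array([0.0, 1.0])               # this measurement sees heading only
x, P = scaat_update(x, P, z=0.5, h=h, r=0.1)
```

Because each update needs only one partial constraint, IMU rates, GPS fixes and individual image features can all be fused as they arrive, which is exactly the video-plus-IMU fusion asked about above.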
Jack


Polytechforum.com is a website by engineers for engineers. It is not affiliated with any of manufacturers or vendors discussed here. All logos and trade names are the property of their respective owners.