Hi, I am working on developing a robotic arm with vision capabilities. The arm is modeled along the lines of the Lynxmotion arms (with base rotation, shoulder, elbow, and wrist motions). I have not added a gripper, because what I am trying to achieve is object tracking and teaching the robotic arm movements by having it observe a human arm. For this, I think a single webcam plus a proximity sensor, such as an IR distance sensor, should suffice.

On the image processing front, standard object tracking techniques like image subtraction will fail, because I am mounting the webcam on a platform on the robotic arm's wrist. Since the camera itself is moving, subtracting the current frame directly from the previous one will not isolate the moving object: every pixel changes, not just the ones belonging to the target. I need help developing the image processing algorithm for this object tracking application. Thanks
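To make the problem concrete, here is a small numpy sketch of the kind of thing I have in mind: it simulates the camera panning between two frames, estimates the global image shift by phase correlation, re-aligns the previous frame, and only then subtracts, so that the residual highlights the independently moving object. (This assumes purely translational ego-motion and a synthetic image; the frame sizes, shift values, and the small bright "object" patch are made up for illustration.)

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate the global (dy, dx) translation between two frames
    via phase correlation (peak of the normalized cross-power spectrum)."""
    F1 = np.fft.fft2(prev)
    F2 = np.fft.fft2(curr)
    R = np.conj(F1) * F2
    R /= np.abs(R) + 1e-12          # normalize to keep only phase
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx

rng = np.random.default_rng(0)
prev = rng.random((64, 64))

# Simulated camera pan: the whole scene shifts between frames.
true_shift = (5, 9)
curr = np.roll(prev, true_shift, axis=(0, 1))

# An independently moving object: a small bright patch only in the new frame.
curr[20:24, 30:34] += 1.0

# Compensate for ego-motion before differencing.
dy, dx = estimate_shift(prev, curr)
aligned_prev = np.roll(prev, (dy, dx), axis=(0, 1))
diff = np.abs(curr - aligned_prev)  # residual now highlights only the object
```

Naive subtraction (`curr - prev` without alignment) lights up the whole frame here, while the compensated difference is near zero everywhere except the object patch. A real implementation would need subpixel and rotational motion handling (or feature-based homography estimation), but this is the basic idea of ego-motion compensation.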
Arun India