My goal is to mount a web camera on a ground vehicle robot and have it recognize
lanes and drive between them. In order to look for lines (lanes) of the
appropriate size in an image, I am trying to learn how to convert the size of a
real-world object to a size in image pixels.
If I mount a USB camera at a known, fixed height above the ground, and the angle
between the camera's optical axis and the horizontal is also fixed, how do I
relate the size of a real-world object to the number of pixels it occupies in
the image?
Example: My camera is two feet above the floor. If I define the y axis as
pointing upward from the floor, my camera is on the y axis at real world
coordinates (x = 0, y = 2 feet, z = 0). It has an angle of 15 degrees below the
horizontal xz plane, meaning my camera is pointed downwards toward the floor.
I take an image of some bright green tape on the floor. The tape is 3 feet long
and 2 inches wide. How do I translate an object 3 feet long and 2 inches wide
into pixel dimensions? That is, how do I calculate how many pixels represent
3 feet in my image? I am trying to recognize lines of approximately that size.
Thank you in advance for your help.