Determining the angle

Does anyone know of some kind of symbol or sign that could be used in robot vision to determine the angle at which an object is turned? E.g., there is
an object in front of the robot, and through the camera we need to determine at what angle the sign is turned (full frontal, tilted, ...). I don't know if I was clear enough, but thanks anyhow.
P.S. I heard there are some specific symbols developed for this, but I can't seem to find them on the Internet.
Just me wrote:

Maybe a circle of lines radiating out at different angles, all lines appearing equal in length when viewed frontally with no rotation?
A crude ascii version :)
 \ | /
-- + --
 / | \
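The idea behind a target like this is foreshortening: a line lying along the tilt direction appears shortened by the cosine of the tilt angle. A minimal sketch of recovering the tilt, assuming an orthographic view and a known printed line length (the function name and numbers are mine, not from the thread):

```python
import math

def tilt_from_foreshortening(true_len, observed_len):
    """Estimate tilt angle (degrees) from the foreshortened length of a
    line that lies along the direction of tilt. Assumes an orthographic
    view and that true_len is known from the printed target."""
    ratio = max(-1.0, min(1.0, observed_len / true_len))
    return math.degrees(math.acos(ratio))

# A line printed 40 mm long that measures 20 mm in the image implies
# the target is tilted about 60 degrees away from frontal.
print(tilt_from_foreshortening(40.0, 20.0))  # 60.0
```

Note that cos() is flat near zero, so small tilts barely change the observed length; at low camera resolution this limits precision, as the next reply points out.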

Yeah, I was thinking of some very simple versions but it seems like my resolution will be kinda low so there won't be any room for fine calculations (length, distances and so forth). But maybe it works... Gotta think about it...

For the manipulators used in space, they use a 3" long black rod with a white tip, centered in a 1" dia white circle on the rod's platform. Viewed straight on, the white tip is centered in the circle, but even small amounts of pitch or yaw misalignment can be detected as it makes the white tip shift within the circle. Horizontal reference stripes give roll misalignment.
Mike Ross
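The geometry of that rod-and-circle target is easy to work out: the tip's apparent offset from the circle's center is roughly the rod length times the sine of the misalignment angle. A small sketch using the dimensions from the post (the function name is mine):

```python
import math

# Dimensions from the post: a 3-inch rod centered in a
# 1-inch-diameter circle on the platform.
ROD_LEN = 3.0        # inches
CIRCLE_RADIUS = 0.5  # inches

def misalignment_from_tip_offset(offset):
    """Pitch/yaw angle (degrees) implied by the white tip's apparent
    offset from the circle's center, from simple rod geometry."""
    return math.degrees(math.asin(offset / ROD_LEN))

# The tip reaching the edge of the 1" circle corresponds to roughly
# a 9.6-degree misalignment, so small errors are easy to see.
print(misalignment_from_tip_offset(CIRCLE_RADius := CIRCLE_RADIUS))
```

The sin() lever arm is what makes this target sensitive: unlike foreshortening-based targets, the tip displacement grows nearly linearly for small angles.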

Ok. If I got it right, it's a rod fixed in the centre of the circle, and we have the top-view (we see the full circle and just the tip of the rod - when it's straight). Right?
Speculating, not speaking from direct experience, I'd expect that a distinctively colored circle would work best, especially if low resolution and processing power are important, the visual target is reasonably large (at least 5 or 10 pixels high or wide), and high precision is not required.
Speaking from experience, use HSV (*1) or a similar color scheme to detect the circle. Threshold (separate object pixels from background pixels) based on a formula like object=[abs(hue-orange)<5 AND saturation>0.7 AND value>0.4]. Hunter's orange is easily identified in the average indoor setting. For microprocessor efficiency, rescale the HSV equations to values of 0-255.
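A minimal sketch of that per-pixel threshold, using Python's stdlib colorsys for illustration and assuming hunter's orange sits near hue 30 degrees (the hue value and function name are my assumptions, not from the post):

```python
import colorsys

ORANGE_HUE_DEG = 30.0  # assumed hue for hunter's orange; tune per camera

def is_target_pixel(r, g, b):
    """Threshold one RGB pixel (0-255 channels) using the rule from the
    post: hue within 5 degrees of orange, saturation > 0.7, value > 0.4."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue_deg = h * 360.0
    # Hue is circular, so measure error with wrap-around.
    hue_err = min(abs(hue_deg - ORANGE_HUE_DEG),
                  360.0 - abs(hue_deg - ORANGE_HUE_DEG))
    return hue_err < 5.0 and s > 0.7 and v > 0.4

print(is_target_pixel(255, 128, 0))    # saturated orange -> True
print(is_target_pixel(200, 200, 200))  # gray background  -> False
```

On a microcontroller you would precompute this as integer math over 0-255 ranges, as the post suggests, rather than calling a floating-point conversion per pixel.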
If such a circle is viewed "full frontal", then you will see a circle; if viewed at an angle, you will see an ellipse. You can easily calculate this ellipse using the mean and moments of the pixel coordinates (*2). Based on these axes, you can determine the axis of tilt (major axis of the ellipse).
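The mean-and-moments ellipse fit can be sketched in a few lines: the eigenvalues of the 2x2 covariance of the object pixels give the squared axis lengths, and the minor/major ratio gives the tilt (a circle viewed at tilt t projects to an ellipse with minor/major = cos t). A pure-Python illustration under those assumptions (function names are mine):

```python
import math

def ellipse_from_pixels(points):
    """Fit an ellipse to thresholded pixel coordinates via first and
    second moments. Returns (major, minor, orientation_deg), where
    major/minor are axis standard deviations and orientation is the
    major-axis direction."""
    n = len(points)
    mx = sum(x for x, y in points) / n
    my = sum(y for x, y in points) / n
    cxx = sum((x - mx) ** 2 for x, y in points) / n
    cyy = sum((y - my) ** 2 for x, y in points) / n
    cxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Eigenvalues of the covariance matrix = squared axis lengths.
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    disc = math.sqrt(max(0.0, tr * tr / 4.0 - det))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    angle = math.degrees(0.5 * math.atan2(2.0 * cxy, cxx - cyy))
    return math.sqrt(l1), math.sqrt(max(0.0, l2)), angle

def tilt_deg(major, minor):
    """Tilt of a circular target whose image has these ellipse axes."""
    return math.degrees(math.acos(max(-1.0, min(1.0, minor / major))))
```

Note the tilt is recovered only up to sign (tilted toward or away from the camera look the same), which is one reason the follow-up suggestion of extra colored circles helps.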
If you also need orientation, then a second or third uniquely colored circle can be used. Looking at the vectors between the centers of these circles will yield orientation information, in addition to the previously obtained tilt.
I've seen other algorithms which rely on a checker-board pattern. They use edge detection to fit lines across the board. These lines are then used to calculate the board's position and orientation. These also seemed more processor intensive.
Have fun, Daniel
Quickly searched links:
*1: http://en.wikipedia.org/wiki/HSV_color_space
*2: http://palantir.swarthmore.edu/maxwell/classes/e27/F03/E27_F03_Lectures.pdf
D Herring wrote:

There's source code for a checkerboard-based aligner in OpenCV on SourceForge. The math needs some work; it sometimes reports totally bogus alignments.
John Nagle
In terms of resolution...
There are 2x as many green pixels as red or blue in a Bayer color pattern. The blue pixels are noisy as well.
RGRGRGRGRGRGRG
GBGBGBGBGBGBGB


The solution to that is not to look at the Bayer pattern directly, but to convert it to an image format you know how to use. And, to severely oversimplify the noise and contrast problem, one of the best ways to clean up the data set is simply to drop the entire set of "damaged" information. The problem is identifying that the blue channel, for example, has more noise than useful signal. Image processing is a science that has already addressed every one of the issues we're likely to bring up. The work now is to put all those DSP MIPS to work and extract a useful image in real time. Useful is the key word; it doesn't have to resemble what we consider a good photograph to be useful for machine vision.
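To make the conversion step concrete, here is a deliberately crude sketch of turning an RGGB Bayer mosaic into RGB pixels (one output pixel per 2x2 cell, the two green samples averaged). A real pipeline would interpolate each channel to full resolution; this is just the "convert it to a format you know how to use" idea, with names of my choosing:

```python
def demosaic_rggb(mosaic):
    """Crude demosaic of an RGGB Bayer mosaic (list of rows of raw
    values): each 2x2 cell [R G / G B] becomes one RGB tuple, with the
    two green samples averaged. Halves resolution but avoids
    interpolation artifacts and gives downstream code plain RGB."""
    h, w = len(mosaic), len(mosaic[0])
    out = []
    for y in range(0, h - 1, 2):
        row = []
        for x in range(0, w - 1, 2):
            r = mosaic[y][x]
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2.0
            b = mosaic[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out

# One 2x2 mosaic cell with R=100, greens 80 and 90, B=40:
print(demosaic_rggb([[100, 80], [90, 40]]))  # [[(100, 85.0, 40)]]
```

Dropping the noisy blue channel, as the post suggests, would then be a one-line change: simply ignore `b` (or zero it) when building each tuple.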
It should be pointed out that emulating the human eye and visual perception is a much more complicated problem than the one we're actually trying to solve. To be precise, the problem is identifying our relationship to objects in the immediate environment. Vision is so immediately obvious as the "best" way to do this -- by virtue of our possessing a very well developed sense of sight -- that we ignore other possible solutions, conveniently forgetting that we don't fully understand how the brain processes what we "see". To a blind person, vision is a damned poor way to find his way around. Rather than reinvent eyes, it might do us better to find usable substitutes. My favorite alternative reality is that side-scanning Doppler ultrasound has promise. At the very least, I don't have to deal with a noisy blue channel and Bayer patterns.

Polytechforum.com is a website by engineers for engineers. It is not affiliated with any of manufacturers or vendors discussed here. All logos and trade names are the property of their respective owners.