imaging - radar

Hi. Is it possible to use two cameras to work as a radar? Our brain can calculate the distance of each object we see from the differences between the two eyes' views. If so, we don't need a radar.

thanks from Peter ( snipped-for-privacy@hotmail.com)

Reply to
cmk128

snipped-for-privacy@hotmail.com wrote:

i got it:

formatting link
but how much is it? The website doesn't say.

Reply to
cmk128

wrote in message news: snipped-for-privacy@m73g2000cwd.googlegroups.com...

snipped-for-privacy@hotmail.com wrote:

i got it:

formatting link
but how much is it? The website doesn't say.

Well, it's for sure not cheap (> $300):

formatting link
Greetz, Stef

Reply to
Stef Mientki

That's not using stereo vision to calculate distance. That's bouncing a light beam off the object and using the round-trip time, together with the speed of light in air, to calculate distance. That technique works much better than stereo vision. You can buy similar handheld units at sports stores for a couple hundred dollars; they are used for golf and hunting. They work fine, but the accuracy is limited to about 1 meter at that cost (which is fine for both golf and hunting). The high-end versions of the unit you found at least come with a computer interface, which is useful if you are going to try to use it on a robot.
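The round-trip arithmetic behind such a unit is simple; only the timing hardware is hard. A minimal sketch of the math (not a device driver; the 667 ns example figure is mine, not from the thread):

```python
# Time-of-flight ranging: bounce a light pulse off the target and
# convert the measured round-trip time to distance.

C_AIR = 299_702_547.0  # approximate speed of light in air, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to the target given the pulse's round-trip time in seconds."""
    # Halve the path length: the pulse travels out to the target and back.
    return C_AIR * round_trip_s / 2.0

# A target ~100 m away returns the pulse in about 667 nanoseconds:
print(tof_distance_m(667e-9))  # ~99.95 m
```

The 1-meter accuracy mentioned above corresponds to resolving the echo time to within roughly 7 nanoseconds, which is why tighter accuracy costs more.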

You can also buy scanning laser range finders that work much better for an application like robotics. Just about all of the DARPA Grand Challenge cars used them. But they cost thousands of dollars.

Eyes set 6 inches apart can't calculate distance very accurately at all. It's been done using video cameras, but correctly identifying matching objects in the two images, so the data can be used to calculate distance, is a complex problem.
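The geometry itself shows why a 6-inch baseline is so inaccurate at range. A sketch of the standard depth-from-disparity formula (the 800-pixel focal length is an assumed example value; the hard matching step the text describes is not shown):

```python
# Stereo depth: Z = f * b / d, where f is focal length in pixels,
# b is the camera baseline in meters, and d is the disparity in pixels
# between the matched points in the left and right images.

def stereo_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters from a pixel disparity between two matched points."""
    return f_px * baseline_m / disparity_px

# Cameras ~6 inches (0.15 m) apart, assumed focal length ~800 px.
# At 10 m range the disparity is only 12 px, so a 1-pixel matching
# error shifts the estimate by almost a meter:
z_at_12px = stereo_depth(800, 0.15, 12)  # 10.0 m
z_at_13px = stereo_depth(800, 0.15, 13)  # ~9.2 m
```

At close range the disparity is large and a 1-pixel error barely matters, which is why stereo works well up close and poorly far away.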

Humans get most of their distance information in other ways, not from raw stereo data. When we see a familiar object, we know how far away it is just by the size of the image compared to our past experience of seeing that type of object millions of times. When we look around our environment, it is normally filled with tons of familiar objects, all of which our brain can give us rough distance estimates for, even with just one eye open.

Second, the brain makes heavy use of all the information in the temporal data: how the picture changes from second to second. As you move your head from side to side, the various objects in your view move, but they move at different rates relative to each other based on their distance from you. This lets the brain use triangulation with only one eye, by comparing the speed of motion of objects. The same is true for how the size of an object changes as it moves toward us or away from us. How quickly that size changes tells us a lot about its distance and its direction of movement relative to us, again using only one eye. The extra stereo data we get from two eyes just gives the brain that much more to work with.

The brain is very good at picking up all these types of clues, and others, to give us a very good sense of our surroundings. Doing all that with computers is still very hard.

Reply to
Curt Welch

Curt Welch wrote:

Hi Curt Welch. You said our brain's "distance calculation" is based on our past experience. Yes, I agree, but babies can catch things very accurately with their hands, so I think past experience is just auxiliary. Without it, our brain still works. I saw the "DARPA Grand Challenge" in a magazine; those cars use laser beams to detect roadblocks, but that is too expensive. Is there any cheap solution? I only need to detect an object's distance within 5 meters. thanks from Peter

Reply to
cmk128

Ultrasound is the way to go for that distance; there are many inexpensive units available. For shorter distances, Sharp makes numerous inexpensive IR sensors.
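The distance math for an ultrasonic module is the same round-trip idea as the laser rangefinder, just with the speed of sound. A sketch of the timing arithmetic only, not hardware code (the 29 ms example and the temperature model are my assumptions, not from the thread):

```python
# Ultrasonic ranging: emit a ping, measure the echo's round-trip time,
# and convert using the speed of sound, which varies with temperature.

def sonar_distance_m(echo_round_trip_s: float, temp_c: float = 20.0) -> float:
    """Distance in meters from the echo's round-trip time in seconds."""
    # Common linear approximation for the speed of sound in air (m/s).
    speed_of_sound = 331.3 + 0.606 * temp_c
    # Halve the path: the ping travels out and back.
    return speed_of_sound * echo_round_trip_s / 2.0

# A target ~5 m away (Peter's required range) echoes back in ~29 ms:
print(sonar_distance_m(0.029))  # ~4.98 m
```

Because sound is about a million times slower than light, the timing is easy to do with a cheap microcontroller, which is why sonar units cost so much less than laser ones.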


Reply to
Mark

In addition to stereo vision for fast 3D perception, most people still have a strong sense of 3D when looking through only one eye.

This is because the brain runs a size-independent feature recognition and association algorithm. There was a discussion of an implementation of a size-independent visual feature detection and recognition algorithm used in robots that gave them the ability to identify all the objects in a room AND tell you how far away each one is. It even automatically stitches together different views of the front, back, and sides of objects, so that as the robot moves around it still recognizes individual objects.

It could also recognize objects when only part of the object was visible and much of it was obscured behind other things. This is what humans and animals do.

The algorithm seems to be much better at recognizing real objects in the real world, and at understanding, navigating, and dealing with whatever individual objects in its environment it needs to recognize.

The human brain also applies a Gabor transform that fills in visual information the eye never actually saw. So what one 'sees' in the brain is not necessarily what the eye saw. This is not just visual illusions, but actual details of the image that the brain synthesizes.

So binocular vision is not as important for most things as the ability to visually identify objects and remember them so they can be recognized from different angles, at different distances, and with different parts of the image obscured, the way we do.

Your robot can tell you whether your tennis shoes are next to it or across the room, because it will remember how big they are and what they look like from different angles, once recognizing them has become a reflex.

If you have radar or sonar as well as vision, then you will need more sensor fusion.

Reply to
fox

Babies have basically no hand-eye coordination at birth. It can take them 6 months to learn to put food in their mouth. They don't have the vision or the fine motor skills needed to do these things at birth. It takes many months of learning before a baby develops enough of a sense of the things around them, and enough of a sense of control over their own body, to reach out and grab something they see.

Sonar sensors are cheap.

Reply to
Curt Welch

Here is the link to those improved sensors.

formatting link
Although I have doubts about several claims on that page, judging from the user responses they must be very good. I'll have to try these tricks myself sometime ;-)

Stef Mientki

Reply to
Stef Mientki

True enough for a human baby, but many baby animals have both the vision and motor skills for basic functioning within minutes of birth, and some immediately upon birth. Dolphins are a good example of a fairly high-order creature whose babies can immediately swim, follow their mother, and even find where to feed. Oddly enough, birds, being far dumber, need a lot more parental care during their first week or so.

The comparison to humans is natural, as that's what we are, but we're nowhere near having an artificial brain that compares with humans -- most lack the speech centers, opposable thumbs, and other evolutionary developments that mandate an increased learning curve. Since we can hardly build a robot with the smarts of a cockroach, let alone a human, I think it's more realistic to concentrate on the processes of those animals where instinct (basically preprogramming) supplants progressive learning. The robot can still learn, but is at least minimally self-reliant when the power switch is thrown.

-- Gordon

Reply to
Gordon McComb

It's not learning. It's brain development. Human brains aren't fully grown at birth.

We know this because there are animals that are ready to go at birth. Horses typically stand within an hour of birth and can run with the herd the day they're born. This is significant; it indicates that the key locomotion and visual systems don't start out blank.

Some other animals are ready to go at birth. Guinea pigs are, but mice and rats are not.

John Nagle

Reply to
John Nagle

That's splitting hairs. Brains aren't fully grown until at least 16-18 years old, and even then, learning produces constant development of the brain. It may not always be physical growth, and the development may not be as profound in later years as in infancy or childhood, but the brain is not a static device even in adults.

Just today there was an article in the paper (San Diego Union Tribune, but it's probably elsewhere) about how seniors who exercise their brains have a better chance of avoiding the worst of Alzheimer's. Researchers don't fully understand the mechanism, but they noted that being mentally active, even by doing something as simple as crossword puzzles, appears to *rewire* the brain around the Alzheimer-afflicted portions. On autopsy, they found that many of the mentally stimulated seniors had Alzheimer's calcification significant enough that it would typically render a person disabled, yet these men and women had only the occasional "senior moment." That's clearly brain development from learning, and this in 70-80 year olds.

-- Gordon

Reply to
Gordon McComb

Auto-focus still cameras use only one "eye" and clever (polarizing) optics.

Reply to
the Wizard

Yeah, a lot of people like to talk about it that way; however, brain development based on sensory data is commonly called learning.

The fact that many lower animals have systems that develop without the need for sensory interaction doesn't prove anything about what happens in humans. Humans need sensory interaction with an environment to develop hand-eye coordination, as well as many other skills. That's why this brain development in humans is commonly called learning.

Reply to
Curt Welch

"babies can catch things very accurately with their hands" (Peter)

No they can't. I've thrown dozens of things at infants, and so far they just weep or look surprised.

This explains my experimental data!

IMO, 'lower' creatures, say, for example, spiders, come 'hardwired' with a few skillsets which allow them to flourish in their intended environment. Whether this is by evolution or design is largely irrelevant. It is highly unlikely, however, that spiders will ever develop beyond these skillsets, at least in the foreseeable future. That is, the spiders of today will largely be the spiders of your great-grandchildren.

I would venture to say that humans of today are somewhat different creatures than humans of one or two hundred years ago, by virtue of culture and communication. 12-year-olds of today have mastered skills, technologies, and concepts which were scarcely dreamt of 100 years ago. In some environments, the human learning process appears to be accelerated by the rapid proliferation of information, and the rapidity with which new ideas and concepts can be tried and tested.

Thus, in my opinion, roboticists face two broad paths they can travel down: adaptive learning systems, or hardwired reactive systems. There are blends of the two, but the distinction still exists. 'Classic' BEAM walkers are, IMO, a perfect example of the latter; experimental bipedal walkers are an example of the former.

At one time, I read of an individual who had worked out some sort of 'bitnet' score-based learning algorithm, implementing it on a PIC that also mimicked a BEAM core, coupled with a BEAM-style walker. That was an example of a blend.

Just my 2 cents, Tarkin

Reply to
Tarkin

You can do it, but it's not easy. Look for 'stereo vision' or 'machine vision' and you'll find that it's a well-researched area. Check out

formatting link
for one solution. I'm also working on my version for my mobile robot, but I haven't published images yet.

Regards, Andras Tantos

Reply to
Andras Tantos

Actually, it's not that hard to do, but it doesn't work very well in most visual situations. There's code for doing it in the OpenCV computer vision library on Sourceforge. There's even code to correct for camera misalignment, but about 10% of the time that code produces a totally bogus correction.
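The matching step that all this code wrestles with can be shown in miniature. A toy 1-D sketch of sum-of-absolute-differences window matching on a single scanline; real stereo code such as OpenCV's block matcher does this per pixel on rectified 2-D images with many robustness tricks this sketch omits (the example scanlines are made up):

```python
# For each window in the left scanline, search leftward in the right
# scanline for the best-matching window; the horizontal shift with the
# lowest sum of absolute differences (SAD) is the disparity.

def best_disparity(left, right, pos, win=3, max_disp=8):
    """Disparity at index `pos` of the left scanline via SAD window matching."""
    patch = left[pos:pos + win]
    best_d, best_sad = 0, float("inf")
    for d in range(max_disp + 1):
        start = pos - d           # a feature shifts left in the right image
        if start < 0:
            break                 # window would fall off the image edge
        cand = right[start:start + win]
        sad = sum(abs(a - b) for a, b in zip(patch, cand))
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d

# Synthetic scanlines: the right view is the left view shifted by 4 px.
left  = [0, 0, 9, 9, 0, 0, 0, 0, 5, 5, 0, 0]
right = left[4:] + [0, 0, 0, 0]
print(best_disparity(left, right, pos=8))  # 4
```

Textureless regions (long runs of equal values) make many shifts match equally well, which is one concrete reason stereo "doesn't work very well in most visual situations."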

John Nagle Animats

Reply to
John Nagle

OK, ok. What I meant was it's hard to do it right and fast.

Andras

Reply to
Andras Tantos

Depends on the bird. A robin is born blind and helpless, and needs to have food stuffed into its mouth.

But a chicken can run around and feed itself within an hour of hatching. That is a good thing too, because many commercial breeds of chicken no longer have any mothering instinct.

Reply to
Bob
