autonomous robot, path planning by webcam

Hi everybody,

Does anybody have experience with path planning using images taken by a webcam mounted on the robot? Any links would help too.

Thank you all, zoltan

Reply to
Zoltan Nagy

Lol! Your question says so much! You've never worked with commercial or custom hardware or software having anything to do with image processing, or with robot navigation based on image processing.

First, forget the web cam. You would need the cam software source code to bridge with your other software that uses the web cam images, specifically the starting pixel address of the frame data. That is obtainable for a few brands of web cam, but then you also need to be able to control the web cam outside the canned software, and that means knowing the calling procedure for each of the web cam functions and being able to make software calls, since your robot navigation software will be doing the controlling. If you can just plug a web cam into, for example, your USB port and actually write your own software to use it... I know some companies that will pay you $100K a year to do just that in their new-product development departments... but you wouldn't be posting here if you could do that. So no web cam.

What you want is a memory addressed frame grabber board from PixelSmart. You plug the grabber into a computer slot, plug any of the little cheap monochrome or color NTSC video cameras into the grabber board, plug a video monitor into the grabber board, then write your software using whatever programming language you are comfortable with. Using the memory addressed function calls provided with the frame grabber (just make a read instruction to the appropriate address) you can control all aspects of image acquisition, and thus you have a complete imaging system that can be software controlled either by you or your robot to provide the images it uses to "see" the surroundings.

Now the fun begins... the robot needs to "see".. and that is in the software, not the hardware. All the hardware does is "provide the images it uses".

No problem, but you will need the hardware set up to bring the pixel data of the images into your program matrix so your software can do the processing of the pixel data that results in the robot being able to "see" where it is going.
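To make that concrete, here is a minimal sketch of viewing a raw grabbed frame as a matrix your software can process. (Python and NumPy are used here for brevity, and the 320x240 8-bit monochrome format and the all-grey test buffer are illustrative assumptions, not PixelSmart specifics.)

import numpy as np

WIDTH, HEIGHT = 320, 240  # hypothetical frame dimensions

def frame_to_matrix(raw: bytes) -> np.ndarray:
    """View a raw 8-bit grayscale frame as a HEIGHT x WIDTH matrix."""
    if len(raw) != WIDTH * HEIGHT:
        raise ValueError("unexpected frame size")
    return np.frombuffer(raw, dtype=np.uint8).reshape(HEIGHT, WIDTH)

# Example: a fake all-grey frame standing in for grabber output.
frame = frame_to_matrix(bytes([128] * (WIDTH * HEIGHT)))
print(frame.shape, frame[0, 0])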

Writing the various subroutines that perform different aspects of image processing is where the smarts are made for your robot. In my own hobby doing exactly this, I presently have 214 subroutines that, when placed under robot control, enable it to "see" well enough to provide the necessary information for the navigation software to move it from any point A to any point B, based on image data. (Not to mislead: this is for flat terrain, within the confines of a building. The software is presently designed to find and move the robot to a wall outlet so the robot can recharge its battery.)

This may all sound mundane to the more experienced robot builders, and maybe others here can give you better advice on how to proceed with your project. But what I've described works for me. I have found it very satisfying to learn how to design and build a robot that performs even the simple task of navigating autonomously, based on acquiring and processing image data from a camera, to keep itself powered up.

So get the hardware set up and have fun.

Reply to
Robert Mockan

Actually, I see I forgot to mention that I have a predefined environment: 3 x 7 m, flat and brown, containing only some grey stones. The positions of these stones are unknown, however. In addition, there are two neon lights in the diagonally opposite corners to help navigation. So I want to navigate, for instance, from some unknown point A to one of these neon lights. Thanks for your answer anyway; I see there is a lot going on in this field and some really hard tasks to solve ;-) zoltan

Reply to
Zoltan Nagy

"Zoltan Nagy" wrote

environment that

Since you have such a simple visual environment, you may be able to use this:

formatting link
Cheaper and simpler than Robert's suggestion, but it's also a lot more limited.
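With just two bright neons over a brown floor, the detection itself might be as simple as a threshold plus a centroid. A hedged sketch (Python/NumPy; the threshold value and function name are guesses, not anything from the linked product):

import numpy as np

def beacon_bearing(gray: np.ndarray, thresh: int = 200):
    """Horizontal offset (pixels from image centre) of the brightest
    blob, or None if nothing exceeds the threshold."""
    ys, xs = np.nonzero(gray > thresh)      # pixels bright enough to be neon
    if xs.size == 0:
        return None
    return xs.mean() - gray.shape[1] / 2.0  # positive = beacon to the right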

Reply to
Anthony Fremont

Baloney. Open-source software is available to read from both USB and FireWire cameras. Look up the Open Computer Vision Library on SourceForge. There's FireWire camera support for Linux.

I've written a FireWire camera driver for QNX myself.

Few people use analog cameras for computer vision any more.
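For instance, with that library's current Python binding, grabbing a frame from a USB camera takes only a few lines. (Sketch only; the cv2 interface shown here is the modern one, and device index 0 assumes the first attached camera.)

import cv2

cap = cv2.VideoCapture(0)           # first attached USB camera
if not cap.isOpened():
    raise RuntimeError("no camera found")
ok, frame = cap.read()              # frame is a NumPy BGR image array
cap.release()
if ok:
    print("grabbed frame:", frame.shape)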

John Nagle Team Overbot

Reply to
John Nagle

So what you're saying is that I cannot access the images from my Logitech Webcam in VB or something like that?

And that I also cannot make some software that tracks my face?

I wonder if I was dreaming, then. Time to wake up.

Peter

Reply to
Peter

What I described worked for me. You may recall a few years ago I asked some questions similar to Zoltan Nagy's, about how to get a robot to find the wall outlet. I appreciated your reply then as I do now. The reason I went the way I did with the frame grabber is that PixelSmart provided first-class tech support for their product. I wasn't interested so much in using a web cam as in building a hardware system for image acquisition where I could have pixel access, in an 8-bit grayscale PGM matrix format, within the language I was working with (PowerBasic for DOS), with a minimal learning curve to actually obtain the image data access. The PixelSmart approach accomplishes this within a couple of days of plugging the board in. (As you are probably aware, PBDOS can also call C DLLs and has a structured format, so I didn't lose any programming power using it, and it suits the older computers I buy cheap on the surplus market.)
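For anyone wondering what "pixel access in a PGM matrix format" amounts to, here is a sketch of loading a binary 8-bit PGM into a matrix. (Python/NumPy for brevity; my own code is PBDOS, so treat this as illustration only.)

import numpy as np

def read_pgm(path: str) -> np.ndarray:
    """Read a binary (P5) 8-bit PGM file into a height x width matrix."""
    with open(path, "rb") as f:
        data = f.read()
    tokens, i = [], 0
    while len(tokens) < 4:                    # magic, width, height, maxval
        while data[i:i+1].isspace():          # skip header whitespace
            i += 1
        if data[i:i+1] == b"#":               # skip comment lines
            i = data.index(b"\n", i) + 1
            continue
        j = i
        while not data[j:j+1].isspace():
            j += 1
        tokens.append(data[i:j])
        i = j
    if tokens[0] != b"P5":
        raise ValueError("not a binary PGM")
    w, h = int(tokens[1]), int(tokens[2])
    return np.frombuffer(data, dtype=np.uint8,
                         count=w * h, offset=i + 1).reshape(h, w)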

As I said in my earlier reply to Nagy, I suspect the more experienced robot enthusiasts can offer better advice than mine. But even today, finding drivers for web cams such that one can control their operation, and gaining access to the starting pixel of the frame data on software command (as required by the robot), is still problematic for a newcomer to the field, regardless of the platform and OS (Win32 API and MFC skills, VC++ experience, or Linux programming knowledge if that is one's choice, etc. all have steep learning curves). The system I describe can enable someone with just BASIC programming experience to successfully obtain image pixel data and process it so the robot can use image data to navigate. The biggest problem, IMO, that people face when they decide to integrate computer vision into their robot (and that includes image element recognition, not just obtaining images) is that the hardware and software system they start with is over their heads in terms of the proficiency required to use it. Then they get discouraged and frustrated as time goes by and they still don't have an operational system, and then they just give up.

Reply to
Robert Mockan

Hi Zoltan: Your solution can readily be built using the Evolution ER1 robot, on sale now for $199 from

formatting link
See their spring sale, or two for $338.

Next, download a mapping software package called "Strabo Pathfinder" at

formatting link
It is free for 30 days or $49 to buy. This software, written by Dave Evartt, is probably the best mapping software available to us hobbyists.

Next, use Strabo to draw a map of your 3 x 7 m territory, and include the lights, markers, etc. as waypoints. Strabo will drive you to an approximate location based on odometry. Then the onboard camera and software in the Evolution robot will zero in on the exact location of your beacons.

Use the Localize function in Strabo 1.4.0 to get to the final location.
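That is not Strabo's actual API, but the two-stage idea looks roughly like this hedged sketch (hypothetical names, Python): dead-reckon toward the mapped waypoint, then switch to camera homing once a beacon is sighted.

import math

def drive_step(pose, waypoint, see_beacon, tol=0.1):
    """pose = (x, y, heading in radians); returns (turn, forward)."""
    bearing = see_beacon()                    # pixel offset, or None if unseen
    if bearing is not None:
        return (-0.002 * bearing, 0.2)        # visual homing: centre the beacon
    dx, dy = waypoint[0] - pose[0], waypoint[1] - pose[1]
    if math.hypot(dx, dy) < tol:
        return (0.0, 0.0)                     # arrived, per odometry
    turn = math.atan2(dy, dx) - pose[2]       # odometry-based steering
    turn = (turn + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return (turn, 0.2)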

P.S. I'm not affiliated with either vendor.

have fun, I am

Steve


Reply to
steve vorres

(snip)

You can, I could, but can Nagy? I commented in my post to Nagy that more experienced robot builders could probably offer better advice. If you describe to Nagy a system that will enable the project to get done, I will be as interested as Nagy is.

Reply to
Robert Mockan

We have a camera, so I am looking mainly for algorithms, and I am trying to find out whether this kind of navigation is possible (i.e., whether we should try to implement it) or not. If not, we would use some Sharp IR and ultrasonic sensors.

Reply to
Zoltan Nagy

Additionally, I should mention that I have some (very little) experience in this field, as I implemented a Canny edge detector and a wavefront planner in Matlab. Are these algorithms suitable?
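For reference, the wavefront part works roughly like this sketch (re-expressed in Python; grid values 0 = free, 1 = stone; the names are mine): flood distances outward from the goal, then descend the gradient from the start.

from collections import deque

def wavefront(grid, goal):
    """Flood BFS distances outward from the goal cell."""
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]
    dist[goal[0]][goal[1]] = 0
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 0 \
                    and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def plan(grid, start, goal):
    """Descend the distance field from start to goal."""
    dist = wavefront(grid, goal)
    if dist[start[0]][start[1]] is None:
        return None                           # start cut off from goal
    path, (r, c) = [start], start
    while (r, c) != goal:
        r, c = min(((r+1, c), (r-1, c), (r, c+1), (r, c-1)),
                   key=lambda p: dist[p[0]][p[1]]
                   if 0 <= p[0] < len(grid) and 0 <= p[1] < len(grid[0])
                   and dist[p[0]][p[1]] is not None else float("inf"))
        path.append((r, c))
    return path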

Reply to
Zoltan Nagy

Here is a project which uses an airborne camera for tracking.

formatting link


Reply to
Ingo Cyliax

Do a Google search on "Video for Windows". I use it to get images in VB from my webcam. Ringo

Reply to
Ringo

I believe that when you install the webcam, VB can get access to the OCX (installed by the webcam) to view the image. I installed a webcam and noticed that the ActiveX driver can be added to a project, but I haven't used it yet.

On the issue of path planning: You may want to read some of the early papers by Brooks and Horsewill (spelling). They implemented several visually guided robots during the late 80's using very simple visual processing. The system detects the floor area and then returns the best vector forward. I believe the robot was called Polly.

Best regards,

Geoffrey

formatting link

Reply to
Geoffrey Swales
