Robots: sharing ideas

mlw wrote:

Can you elaborate on what you mean by wanting "to see some more action on this group"?

Have you scanned over the subject titles of this newsgroup to see what it is currently mainly used for?

I think it is mostly hardware questions like "what do you think of X" and "where can I find Z" with the occasional "look at my robot web site". I suspect the most often asked question is about H-bridges.

Because there is such a diversity of interests in terms of how complex your robot system is and what language/hardware/OS configuration you use, we all fall into groups that are incompatible at the detailed level.

Also I suspect that 90% of robot projects never get beyond the robot base stage as the real work is (IMHO) in the software.

John

Reply to
JGCASEY

: Cool pics, how are those wheel/motor combos? Where did you buy them?

Motors were from Diverse Electronics

formatting link
are long sold out. $10 each at the time for surplus DeWirt motors. They look similar to what I see sold for a bit more now as surplus wiper motors. Wheels are lawnmower wheels from the hardware store. The TheOneSpot.com site I mention had the complete method he used to attach them. The site is gone - but AH, the Wayback Machine has it. I'll have to pull this down:

formatting link
My base started as a copy of Lazlo, with plywood instead of plexi.

: > Linux. 200MHz processor, with 800x600 LCD, runs on 5v, and takes a laptop
: > battery. It has 2 RS-232 ports that run at TTL level, so I can connect it

DOH -- I mean laptop HARD DRIVE.

: You know, that is a neat idea, but I'm actually doing something different.
: Web services! How about this:

: http://myrobot/forward?amount=10
: http://myrobot/sonar/read?sensor=1

I would wonder if you can get real-time response from web services.

I'm not convinced it's going to work the way I envision it either, but figure I can always combine all the small programs into a big one if it doesn't.
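The URL scheme quoted above can be sketched as a small dispatcher: each path maps to a handler, so commands like /forward?amount=10 become function calls. This is only an illustration of the idea, assuming hypothetical handler names; a real version would sit behind something like Python's http.server.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical handlers for the command URLs quoted above; the names
# and return strings are made up for illustration.

def forward(amount):
    return "moving forward %d" % amount

def read_sonar(sensor):
    return "sonar %d" % sensor

# Route table: URL path -> handler taking the parsed query string.
ROUTES = {
    "/forward": lambda q: forward(int(q["amount"][0])),
    "/sonar/read": lambda q: read_sonar(int(q["sensor"][0])),
}

def dispatch(url):
    """Parse a robot-command URL and invoke the matching handler."""
    parts = urlparse(url)
    return ROUTES[parts.path](parse_qs(parts.query))
```

Wrapping `dispatch` in an HTTP request handler would give the web-services interface described; the routing logic itself is the interesting part.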

: BTW: Where are you? I'm in Boston.

New York, Westchester.

Reply to
Christopher X. Candreva

Judging from all the other posts about what language to program in, I think it is obvious that pseudo-code is the best way to share ideas. Then you share the ideas, not the program structure. Everyone can then program the pseudo-code in whatever language suits their hardware, OS, skill level, etc.

Matthew

Reply to
Matthew Gunn

Matthew Gunn wrote:

In practice an algorithm is explained in English (if that is your native language), maybe some equation fragments, and some sketches. A book I have on vision called "Digital Image Processing" by Gregory A. Baxes doesn't have any code and it explains everything in a way that I have no problem at all implementing in my own programs in whatever language or OS I want.

So you use whatever language is appropriate to your particular situation. The professional programmer has no monopoly on good ideas but they do have an advantage when it comes to implementing them in a complex OS.

John

Reply to
JGCASEY

Hi,

Thanks for the name/author of the book.

Does anyone else have a good book to recommend?

I cannot recommend any because I haven't found any. I have a list of papers I've found on the net that were very helpful.

Are there any good books concerning autonomous navigation? I am curious because I wrote one while I was doing net research, and I wonder if I would have any competition if I were to publish.

How about machine learning books? AI?

Rich

Reply to
aiiadict

Well, I have been thinking about a number of issues along the lines of modularity. For the moment, I believe, the single PC has enough CPU and I/O to handle the robot, but I have been thinking that I may separate the system into two computers: a low level computer and a higher level one. (Similar to the brain, no?)

Anyway, my EPIA is getting old; it is sub-1GHz, but should be sufficient to handle the routine functions of the robot. If I want to get more sophisticated, I may need to add computational horsepower in the form of a faster computer.

I started thinking about how to do it, and how to develop the system such that any piece could run any where I wanted it to. This sounds like what you are planning.

Fortunately, this problem has already been solved!! Are you familiar with any of the cluster computing projects? Basically, a loose computer cluster works with a messaging API. (Like you are working on.) One of the more common ones is MPI (Message Passing Interface).

Basically, I will construct the "higher" level "main" program as a controller program, using MPI, and code the sub-tasks as MPI subroutines. When coded as such, I can start a virtual computer across a number of computers on the robot. I can even run parts remotely on faster computers. (Assuming good R/F connectivity)
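Real MPI needs a cluster runtime (mpirun and an installed MPI library), but the controller/worker shape described above can be sketched with Python's multiprocessing as a stand-in for MPI ranks and messages. The task names and computations here are invented for illustration; with MPI proper, the queues would become MPI_Send/MPI_Recv calls and the processes would become ranks, possibly on different machines.

```python
import multiprocessing as mp

def worker(tasks, results):
    """Plays the role of an MPI rank running sub-tasks; exits on None."""
    for name, arg in iter(tasks.get, None):
        if name == "sonar":
            results.put(("sonar", arg * 2))      # fake sensor computation
        elif name == "odometry":
            results.put(("odometry", arg + 1))   # fake pose update

def controller():
    """The 'higher level main program' that farms out sub-tasks."""
    tasks, results = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(tasks, results))
             for _ in range(2)]
    for p in procs:
        p.start()
    tasks.put(("sonar", 21))       # send work, like MPI_Send to a rank
    tasks.put(("odometry", 41))
    out = dict(results.get() for _ in range(2))  # gather replies
    for _ in procs:
        tasks.put(None)            # shutdown signal to each worker
    for p in procs:
        p.join()
    return out
```

The point is the structure: the controller only passes messages and gathers results, so any worker could later be moved to another machine without changing the controller's logic.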

Take a look at

formatting link

Reply to
mlw

The cool thing about C and C++ is that while they can be used as if they are high level languages, there is a direct correlation between the statements used and the machine code that is generated. This is not true with Java, .NET, perl, etc.

C and C++, are "low level" in that they produce directly executable machine code in a predictable fashion.

Assembler is good, and I used to love writing in it, but I find that 99% of the time, a good optimizing C compiler can generate faster code than hand-done assembler. This isn't to say that C is faster than assembler, because, obviously, it can't be. The problem is readability and maintainability. When you write assembler, you have to be able to read it and maintain it. The C compiler doesn't need to. Because of this, spaghetti code that you would never write in a million years gets generated automatically. It takes shortcuts you would never take, and makes assumptions across function boundaries that you never would. In short, it generates fast, unmaintainable, poorly written code. The beauty is that it doesn't need to be maintained, and it's only poorly written if you have to read it.

Reply to
mlw

formatting link

"Realtime" is so subjective. Obviously you would not need sub-millisecond responses, but if you can tolerate 100ms, it should be possible depending on the infrastructure and the size of the messages.

I was thinking the higher level interfaces would work as web services. The internal parts would be MPI.

As I said in another part of this thread, I'm also thinking about MPI. I can envision the robot as a cluster. Take a look at:

formatting link
If all the self-contained logical blocks are written as MPI functions, then your robot software can grow and spread out over any number of computers.

Reply to
mlw

And why can't C be faster than assembler? The speed of assembler programs very often depends on the skill, or lack thereof, of the coder. You are 100% correct on maintainability.

LB

Reply to
LB

mlw wrote: [...]

As you wrote in another post, this is a computer science debate and we are here on comp.robotics.

We are not all advanced computer programmers and never will be. I have not used AVL trees or even a hash table and haven't found myself handicapped as a result. At this point issues such as cluster computing, messaging APIs or MPIs are not on my radar screen.

Unfortunately for people like me this great big bureaucratic monolith called Windoze or Linux has been placed between me and the hardware (camera, i/o port, or graphics card). The only access I can afford (as a non-professional programmer) is Java.

In fact all the robotic projects I have done so far, including vision, have worked fine using C or even the QBasic interpreter (robotic arm). If they ever had commercial value I could always hire an expert to type up a Windoze/Linux interface :)

John

Reply to
JGCASEY

If you read the paragraph carefully, you'll see that I say that most of the time C/C++ code is faster than assembler.

It is *always* possible (if you know how) to code something more efficiently, or at least as efficiently, in assembler; the issue is whether or not it can be done in a way that is maintainable.

Reply to
mlw

While robotics is not computer science, there is a great deal of computer science in it.

Well, there is an interesting paradox in this statement. Robotics is by its very nature a pursuit that requires some very advanced conceptual thinking. Knowledge of computer science is not necessary to build kits or assemble parts, but if you intend to do anything interesting once you do, you'll find such knowledge very useful.

Windows does have a huge barrier to entry. The cost of tools, the complexity of the APIs, etc.

With Linux (and FreeBSD and most other x86 unix), if you run the programs as root, the layer between the OS and the program goes away. I wouldn't suggest this in a production environment, but for a hobby it works great. You get the accessibility of old DOS and the multitasking, memory, networking, etc. of unix. What's not to love?

Reply to
mlw

"mlw" wrote

Exactly. Seems like a good idea, but I've been thinking about implementing a few basic behaviors in hardware, in this "lower substrate". For example, collision detection. If the rover is about to collide, then the microcontrollers would command it to stop right away. A few milliseconds later, the "brain" would realize why the rover stopped and then trigger some behaviors to avoid that obstacle, but I still don't know.
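The two-layer idea above can be sketched as a fast "reflex" layer that stops the rover on its own and a slower "brain" that notices afterwards and plans around the obstacle. The threshold and class names here are invented for illustration, not from the post; on real hardware the Reflex part would live in the microcontroller firmware.

```python
STOP_DISTANCE_CM = 20   # assumed reflex threshold, made up for the sketch

class Reflex:
    """Stands in for the microcontroller: stops immediately, no planning."""
    def __init__(self):
        self.stopped = False

    def check(self, range_cm):
        # Fires within milliseconds; needs no input from the brain.
        if range_cm < STOP_DISTANCE_CM:
            self.stopped = True
        return self.stopped

class Brain:
    """Stands in for the PC: sees the stop later and chooses a behavior."""
    def react(self, reflex):
        return "plan_avoidance" if reflex.stopped else "continue"
```

The loose coupling is the point: the reflex never waits on the brain, and the brain only ever reads state the reflex has already committed.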

Loose coupling is almost always a good idea

Great, I will take a look at that. I hate reinventing the wheel.

Thanks! I will look into it.

Reply to
Padu

"mlw" wrote

Interesting, you may also want to take a look in

formatting link
(I guess that's the address; if not, google CORBA). I've used CORBA in the past for systems interoperability. It is really nice since it is completely independent of the platform, unlike COM/DCOM/COM+. Just recently I've heard that the OMG also has interoperability solutions for embedded and real-time systems. I don't know if it is CORBA (the OMG offers many solutions), but the good thing about it is that their technologies are very mature, and you can scale up as big as you want very easily.

Reply to
Padu
[...]

So you don't consider my programs interesting?

Nothing I have read as regards AI, robots, NNs requires expertise in the Windoze or Linux OS or the use of APIs or MPIs etc.

But I do understand the power of these systems as an environment for multitasking and integrating multiple programs, giving a standard user-friendly interface, and so on...

[...]

I had thought of loading Linux onto a cheap old computer to see how hard it would be to learn to use but just haven't got around to it.

John

Reply to
JGCASEY

Never did I say they weren't interesting, but I don't think anyone could disagree with the statement that a good foundation in some of the basics of computer science would be helpful.

I remember in the early '80s, I was learning about opamps. Out of nowhere came this idea about how to make a pseudo-ramp generator using the opamp and an RC circuit. I was so excited as I thought this was a new idea. Later on, after I had built and proved to myself that the circuit worked, I saw it published in a "Using opamps" cookbook.

Many of the things a beginner struggles with are covered quickly in some of the first computer science books. Learning about trees, hash tables, lists, recursion, and so on equip you with some really nifty concepts that make figuring out how to do things much easier.
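As a small instance of that point, here is the same lookup written first as a linear scan and then with a hash table (a Python dict). The sensor names and values are invented for illustration; the concept, not the data, is what the CS books teach.

```python
# Invented example data: (sensor name, last reading) pairs.
readings = [("sonar", 42), ("bumper", 0), ("compass", 270)]

def find_linear(name):
    """O(n): may touch every pair before finding the one we want."""
    for key, value in readings:
        if key == name:
            return value

table = dict(readings)   # one build pass, then O(1) average lookups

def find_hashed(name):
    return table.get(name)
```

For three sensors it makes no difference, but knowing the hash-table concept means the faster version costs nothing extra to write when the list grows.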

I was talking about computer science, not the trivia of OS and API.

Yes, a good OS is helpful.

Well, for starters, you could download "knoppix," which is a Linux that need not even be installed. It boots off the CD. Get your feet wet and decide what to do.

Also, you can install a second hard disk on your computer and install Linux there.

Reply to
mlw

Your mention of DOS inspires me to mention there is something called "Free DOS" which is just that. Go here for more info

formatting link
LB

Reply to
LB

I followed that project a while back. DOS, like CP/M before it, was awesome for its time. These days, I find that I am less than enthusiastic about DOS. Sure, for nostalgia purposes it's great, but it is of little value outside of a few vertical applications.

Most computer systems from the PC-104 to nano-ITX to full blown PCs are conceptually HUGE computer systems compared to what DOS was designed to run on. Gigabytes of memory, many more of virtual memory, memory protection, 32-bit addressability (64-bit becoming popular), process isolation, instruction pipelining, more RAM than DOS 3.x could handle on a hard disk.

For my time and money, an open source unix is the way to go. Linux, FreeBSD, NetBSD, etc. all have a cleaner simplicity. DOS started life as (more or less) a port of CP/M (Z80/8080) to the Intel 8086 (8088) under the name QDOS, by Seattle Computer (Microsoft bought the original DOS). Everything added to it was shoehorned in with the requirement that it ran in 640K (and could still run programs). UNIX, the bastard child of Multics, inherited a design that expected to run on hardware that hadn't even been designed (or even conceived). In short, the design was carefully done. CP/M and DOS were designed to run on little microprocessors; Multics was designed to run on large hardware with lots of memory and lots of disks and peripherals.

I know this sounds like a dis on DOS and CP/M, it isn't at all. The first micro computer I ever built was based on an RCA 1802 ELF. An 8 bit computer with 16 general purpose registers. It was real fun to work on. I miss burning eproms, debugging with a scope, etc. Those times are, however, gone.

Reply to
mlw
