Linux router as embedded computer for robots

Hello, I am wondering if it is possible to use a device with embedded Linux (such as a wifi router, NAS, etc.) as a real embedded computer for a robot. Wifi routers in particular are cheap today, often run an embedded Linux variant, and have some I/O ports (LAN and wifi, of course, plus sometimes a serial console and USB).

Reply to
gidesa

The hard part is how to get enough IO -- the routers really don't have much. If you've got a platform that will funnel its IO into a serial or USB port, then it's probably feasible. That's how I'm running an iRobot Create from a 486 SBC. Of course, that means there's really a lower-level controller on board too.
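
The host side of that funnel is not much code, by the way. A minimal sketch, assuming Linux and a hypothetical /dev/ttyUSB0 link to the lower-level controller (the command bytes are made up for illustration):

/* Minimal sketch: host (router/SBC) side of a serial link to a
 * lower-level controller.  Device path, baud rate and command bytes
 * are assumptions -- adjust for your hardware. */
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

static int open_controller(const char *dev)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return -1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);              /* raw 8N1, no echo, no line editing */
    cfsetispeed(&tio, B57600);
    cfsetospeed(&tio, B57600);
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}

int main(void)
{
    int fd = open_controller("/dev/ttyUSB0");
    if (fd < 0) return 1;

    /* hypothetical "set motor speeds" command: opcode + left + right */
    unsigned char cmd[3] = { 0x10, 100, 100 };
    write(fd, cmd, sizeof cmd);

    close(fd);
    return 0;
}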

Reply to
Joe Pfeiffer

I was looking at those netbooks (e.g. EEE PC or Dell Mini) with SSD hard drives and having the same thought. These are potentially game-changing for moderately-sized mobile robots, if you can just get the I/O.

For the first time, video input looked like the easy part!

- Daniel

Reply to
D Herring

There was an article on using the FTDI FT245R USB/parallel interface and a PIC processor in the August '09 Circuit Cellar, which probably has what somebody would need to get the interface from a netbook to a robot going. Unfortunately, it was buried under a title about automatic Homeland Security threat-level indication, so it was easy to miss.
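
If anyone wants to try that kind of interface without digging up the article, libftdi will drive an FT245R-style part in bitbang mode from the Linux side. A rough sketch (the vendor/product IDs are the usual FTDI defaults, not checked against any particular board):

/* Rough sketch: using an FT245R-style chip as 8 bits of parallel
 * output via libftdi bitbang mode.  Error handling trimmed. */
#include <stdio.h>
#include <ftdi.h>

int main(void)
{
    struct ftdi_context *ftdi = ftdi_new();
    if (!ftdi) return 1;

    if (ftdi_usb_open(ftdi, 0x0403, 0x6001) < 0) {  /* default FTDI IDs */
        fprintf(stderr, "open failed: %s\n", ftdi_get_error_string(ftdi));
        ftdi_free(ftdi);
        return 1;
    }

    ftdi_set_bitmode(ftdi, 0xFF, BITMODE_BITBANG);  /* all 8 pins as outputs */

    unsigned char pattern = 0xA5;                   /* something to drive */
    ftdi_write_data(ftdi, &pattern, 1);

    ftdi_usb_close(ftdi);
    ftdi_free(ftdi);
    return 0;
}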

Though embedded single-board computers still look like the easy and slightly cheaper way, to me.

Reply to
Joe Pfeiffer

Indeed, I just built a robot controlled from a netbook, using a home-made PCB with a PIC micro connected to the netbook by a USB-serial converter. See my site:

formatting link
But a netbook is a bit expensive, and of course it's not suited to controlling the little robot that I would like to build now.
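
For anyone building something similar: the protocol over such a USB-serial link can be very simple. A sketch of one workable framing convention (start byte, opcode, two payload bytes, additive checksum -- invented for illustration, not gidesa's actual protocol):

/* Sketch of a simple framed command for a netbook-to-PIC serial link.
 * The packet layout is hypothetical -- just one workable convention. */
#include <stddef.h>
#include <stdint.h>

#define PKT_START 0x7E

/* Build a 5-byte packet into buf; returns the packet length. */
size_t build_packet(uint8_t *buf, uint8_t opcode, uint8_t a, uint8_t b)
{
    buf[0] = PKT_START;
    buf[1] = opcode;
    buf[2] = a;
    buf[3] = b;
    buf[4] = (uint8_t)(opcode + a + b);   /* trivial additive checksum */
    return 5;
}

On the PIC side, the same five bytes can be parsed in the UART receive interrupt, which keeps the protocol cheap enough for an 8-bit part.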

Reply to
gidesa

Several of us at the Home Brew Robotics Club (San Jose, CA) have repurposed a Linksys WRTSL54GS router. I'm really busy today, but I'll post a more detailed response in the next couple of days. Here are my notes for bringing one up:

In short, it is a Broadcom MIPS chip with Wi-Fi, 5 Ethernet ports, a USB port, 2 serial ports (you need to hack the board to access them), some RAM, some flash, etc. It runs Linux just fine. Alas, I think they are hard to obtain now, so a newer router would have to be hacked to repurpose for a robot.

-Wayne

Reply to
waynegramlich

formatting link

formatting link

Reply to
technologiclee

You have plenty of network bandwidth available, and more motor controllers are moving to Ethernet. See

formatting link
But those will cost more than the router.
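
For the curious, talking to an Ethernet motor controller from Linux is just socket code. A sketch of sending a UDP setpoint packet -- the address, port and payload layout are invented for illustration, since every controller defines its own protocol:

/* Sketch: sending a velocity setpoint to a hypothetical Ethernet motor
 * controller over UDP. */
#include <arpa/inet.h>
#include <string.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return 1;

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5000);                 /* hypothetical port */
    inet_pton(AF_INET, "192.168.1.50", &dst.sin_addr);

    uint32_t setpoint = htonl(1500);            /* e.g. encoder counts/sec */
    sendto(sock, &setpoint, sizeof setpoint, 0,
           (struct sockaddr *)&dst, sizeof dst);

    close(sock);
    return 0;
}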

John Nagle

Reply to
John Nagle

I have used the FTDI chips and associated driver to do parallel I/O. The latency for USB scheduling and transmission means that 1 ms is as fast as it could get, and around 3 ms is what I have observed. The platform I was running under was Windows XP, but I doubt that makes a lot of difference in the scheduling latency. 480 Mbit/s USB 2.0 still only ships a frame per millisecond. This much latency would be fine for whisker sensors, but you could not close a servo loop for motion control with it.
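
That 1-3 ms floor is easy to reproduce: time a one-byte round trip through the USB-serial driver with the far-end MCU echoing whatever it receives. A sketch, assuming the port is already configured raw and blocking:

/* Sketch: measuring round-trip latency over a USB-serial link by
 * echoing one byte off the far-end MCU. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    unsigned char out = 0x55, in;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    write(fd, &out, 1);
    read(fd, &in, 1);                  /* blocks until the echo comes back */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("round trip: %.3f ms\n", ms);

    close(fd);
    return 0;
}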

I have messed with the MikroTik router boards. They have a mini-PCI interface for up to 3 add-in boards. That seems like the path for I/O to me.

Good Luck, BobH

Reply to
BobH

A *lot* more, and you've still got the sensor problem. That parallel USB adapter still seems a better bet....

Reply to
Joe Pfeiffer

I can't imagine trying to close a servo loop with a processor running a "real" OS (Linux, XP, or anything else worthy of the name), no matter what hardware was doing the details of control and sensing. Things like that are what $1 specialized MCUs are for.
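
The loop those little MCUs run is nothing exotic, either. An integer PID update of roughly this shape fits in a timer interrupt on a $1 part (gains, scaling and limits are placeholders):

/* Sketch of a fixed-point PID update as it might run in an MCU timer
 * interrupt.  Gains, scaling and output limits are placeholders. */
#include <stdint.h>

#define KP 40                 /* gains in 1/256 units */
#define KI 2
#define KD 10
#define OUT_MAX 1000          /* e.g. PWM full scale */
#define INTEGRAL_MAX 100000L  /* crude anti-windup clamp */

static int32_t integral;
static int32_t prev_error;

int16_t pid_update(int16_t setpoint, int16_t measured)
{
    int32_t error = setpoint - measured;

    integral += error;
    if (integral >  INTEGRAL_MAX) integral =  INTEGRAL_MAX;
    if (integral < -INTEGRAL_MAX) integral = -INTEGRAL_MAX;

    int32_t derivative = error - prev_error;
    prev_error = error;

    int32_t out = (KP * error + KI * integral + KD * derivative) >> 8;

    if (out >  OUT_MAX) out =  OUT_MAX;   /* clamp to the output range */
    if (out < -OUT_MAX) out = -OUT_MAX;
    return (int16_t)out;
}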

Reply to
Joe Pfeiffer

I basically agree. I am a huge fan of using dedicated MCUs for tight real-time control. However, the EMC2 project over at:

is basically real-time Linux being used to control CNC machines. This just shows that some people are pushing hard on real-time Linux.

-Wayne

Reply to
waynegramlich

It depends on how complex your servo control is. The BigDog robot closes its position and force servo loops at 1 kHz, and the balance and locomotion servo loops at 200 Hz. All of that runs on a ruggedized Pentium-based PC/104 machine running QNX.

The LIDAR tilt head on the CMU Grand Challenge vehicle closed its servo loop at 1 kHz. Again, this was running on QNX.

If you're just doing simple PID position control, some little MCU can do the job. But if you're doing adaptive feedforward control or coordinating multiple flexible actuators, you need more than that. Modern robot control systems are using machine learning. This stuff is compute and memory intensive.
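
The scheduling side of a 1 kHz loop is straightforward once the OS gives you deterministic wakeups. A POSIX sketch of the usual pattern on QNX or an RT-patched Linux (the control math itself is omitted):

/* Sketch: a fixed-rate 1 kHz control loop using an absolute-time sleep.
 * run_one_cycle() is a placeholder for sensor read / compute / output. */
#include <time.h>

#define PERIOD_NS 1000000L            /* 1 ms -> 1 kHz */

static void run_one_cycle(void) { /* read sensors, compute, write outputs */ }

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        run_one_cycle();

        next.tv_nsec += PERIOD_NS;    /* advance to the next 1 ms boundary */
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec  += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}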

John Nagle

Reply to
John Nagle

Joe, I wasn't commenting on the sanity of using a standard OS for running a servo loop. I agree completely with your comment on that. I was commenting on the long and variable latency that I/O done over USB gets. Measurement data in + control data out could not happen in less than 2 ms, even with no scheduling or calculation latency.

There are certainly plenty of things that this would not be a problem for, but it is a limitation.

Regards, BobH

Reply to
BobH

I hadn't looked at their code... they're trying to do the acceleration and deceleration ramps under Linux? They're braver than I am.

Reply to
Joe Pfeiffer

Joe:

Yes, they do the accel/decel, kinematics, etc. all under a real-time variant of Linux. The documentation is pretty good too. They have a pretty general-purpose architecture for hooking into the real-time Linux, including some GUI visualization tools. All in all, it is pretty slick. The HAL User Manual is worth a quick read:
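
To give a flavor of what hooking into it looks like, here is a bare-bones HAL realtime component in the style that manual describes. The pin and function names below are my own invention; the API calls (hal_init, hal_pin_float_newf, hal_export_funct, ...) are from the HAL headers as I recall them, so treat the exact signatures as illustrative rather than authoritative:

/* Sketch of a minimal EMC2/HAL realtime component that doubles its
 * input pin onto its output pin every servo-thread cycle. */
#include "rtapi.h"
#include "rtapi_app.h"
#include "hal.h"

typedef struct {
    hal_float_t *in;      /* pin: doubler.in */
    hal_float_t *out;     /* pin: doubler.out */
} doubler_t;

static doubler_t *data;
static int comp_id;

static void update(void *arg, long period)
{
    doubler_t *d = arg;
    *(d->out) = *(d->in) * 2.0;
}

int rtapi_app_main(void)
{
    comp_id = hal_init("doubler");
    if (comp_id < 0) return comp_id;

    data = hal_malloc(sizeof(doubler_t));   /* pins live in HAL shared memory */
    hal_pin_float_newf(HAL_IN,  &data->in,  comp_id, "doubler.in");
    hal_pin_float_newf(HAL_OUT, &data->out, comp_id, "doubler.out");
    hal_export_funct("doubler.update", update, data, 1, 0, comp_id);

    hal_ready(comp_id);
    return 0;
}

void rtapi_app_exit(void)
{
    hal_exit(comp_id);
}

Once loaded, its pins get wired to other components with halcmd, and its update function gets added to the servo thread.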

My general feeling is that a real-time Linux variant plus a bunch of $1 MCUs can provide a very flexible and cost-effective control platform for robotics.

-Wayne

Reply to
waynegramlich

Thanks, I'll take a look at that -- when you say real-time Linux, do you mean RTLinux? That seems a bit less wild, since in that system the real-time part isn't Linux (Linux is a task under the real-time part). Or do I just need to go RTFM?

Reply to
Joe Pfeiffer

Joe:

Reading the manual would help.

From section 2.1 of the HAL User Manual:

The RTAPI environment

RTAPI stands for Real Time Application Programming Interface. Many HAL components work in realtime, and all HAL components store data in shared memory so realtime components can access it. Normal Linux does not support realtime programming or the type of shared memory that HAL needs. Fortunately there are realtime operating systems (RTOSs) that provide the necessary extensions to Linux. Unfortunately, each RTOS does things a little differently.

To address these differences, the EMC team came up with RTAPI, which provides a consistent way for programs to talk to the RTOS. If you are a programmer who wants to work on the internals of EMC, you may want to study emc2/src/rtapi/rtapi.h to understand the API. But if you are a normal person all you need to know about RTAPI is that it (and the RTOS) needs to be loaded into the memory of your computer before you do anything with HAL.

The following link gives an overview of EMC2:

Here are a few quotes to whet your appetite:

It can simultaneously move up to 9 axes and supports a variety of interfaces.

Support for non-Cartesian motion systems is provided via custom kinematics modules. Available architectures include hexapods (Stewart platforms and similar concepts) and systems with rotary joints to provide motion such as PUMA or SCARA robots.

The following quote answers your question:

EMC runs on Linux using real time extensions. Support currently exists for version 2.4 and 2.6 Linux kernels with real time extensions applied by RT-Linux or RTAI patches.

For those of you who are interested, there are actually *two* common real-time extensions to Linux -- RTLinux and RTAI. EMC2 works with both:

I do not recall which variant is the *preferred* one at this time.

-Wayne

Reply to
waynegramlich

Hello Everyone,

Modern computers use dedicated processors for subsystems such as video and sound; it would seem to make sense to follow this pattern for I/O-intensive functions too. Using USB for the interface isn't as clean as having it embedded, but it would work pretty well if the MCU handling I/O is programmed to be as self-sufficient as possible. Think of the I/O MCU as your robot's autonomic nervous system, monitoring sensors and controlling motors, while the netbook is your robot's brain. Instead of the MCU sending a message to the brain that there are steps right in front of us and waiting for the brain to stop the robot, the MCU knows that steps are bad, stops the robot itself, and sends a status update to the brain saying we stopped because of the stairs.
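
That reflex sketches out naturally in the I/O MCU's firmware. Something like this (thresholds, opcodes and hardware hooks are invented placeholders) runs in the MCU's main loop and only tells the brain about it afterward:

/* Sketch of the "autonomic" reflex on the I/O MCU: stop immediately on
 * a step/cliff detection, then report to the brain over the serial link.
 * The three helpers are hardware-specific stubs -- replace with real
 * ADC/PWM/UART code. */
#include <stdint.h>

#define CLIFF_THRESHOLD 200          /* invented ADC threshold */
#define MSG_STOPPED_CLIFF 0x31       /* invented status opcode */

static uint16_t read_cliff_sensor(void) { return 0; }   /* ADC read stub */
static void motors_stop(void) { }                       /* kill drive PWM */
static void uart_send(uint8_t byte) { (void)byte; }     /* link to the brain */

void io_mcu_poll(void)
{
    if (read_cliff_sensor() > CLIFF_THRESHOLD) {
        motors_stop();                     /* reflex: act first... */
        uart_send(MSG_STOPPED_CLIFF);      /* ...then tell the brain why */
    }
}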

I'm sure that a lot of lower level I/O functions could be done by the brain, but does it make sense to do that?

Eddy Wright Wright Hobbies Robotics

formatting link

Reply to
Eddy Wright

That sums it up pretty well :-)

Eddy

Reply to
Eddy Wright
