For a number of reasons, I am in the process of redesigning a lot of the $500 robot software. Most of the pseudo-realtime code will remain for motor control, but it will be isolated in its own OS-level task.
What will change is the structure of the control system. In another thread, "Standardized distributed robotics," I talked about how adding a networked joystick started me re-thinking the plumbing.
One poster suggested "player/stage." To tell you the truth, I looked at it and was not very impressed. I think the abstraction model is too low level. There were too many API functions. It seemed like the learning curve exceeded the benefit.
Another person also mentioned CORBA, or real-time/embedded CORBA. I'm not so sure I like that either. CORBA has always seemed bloated to me, but then again, maybe I'm old school, who knows.
But here's what I'm thinking, and I bet you microcontroller dudes will like it too:
The system will be message based and object oriented. There will be two types of objects, senders and receivers. Each receiver object will have a single message handler, and messages will be standardized and extensible. The best analogy I have is "DefWindowProc" in the MS Windows API. A sender object may be a sensor or an input mechanism.
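Roughly what I'm picturing, in C++. All the type and message names here are made up for illustration, not a final design:

    #include <stdint.h>

    // A standardized, extensible message: a type code plus an optional
    // payload. New message types just extend the list.
    enum MessageType {
        ROBOT_MOVE_FORWARD = 1,
        ROBOT_MOVE_BACKWARD,
        ROBOT_STOP
        // ... user-defined messages can start above some reserved range
    };

    struct Message {
        MessageType type;
        uint32_t    length;   // payload length in bytes
        const void* payload;  // optional message-specific data, may be NULL
    };

    // Every receiver object has exactly one message handler, analogous
    // to a WindowProc. Returning false means "not handled," like
    // falling through to DefWindowProc.
    class Receiver {
    public:
        virtual ~Receiver() {}
        virtual bool handleMessage(const Message& msg) = 0;
    };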
Communications between objects will be done via "channels." A channel can be Ethernet, USB, serial, a function call, or even carrier pigeon. One interface class per type of channel should work. USB may be a bit problematic if we start defining messages that can have variable-length packets, but that shouldn't be a problem at first; all we need to do is chunk out the data.
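One interface class per channel could look roughly like this, building on the Message and Receiver sketch above (one-way only to keep it short; a real channel would obviously need a receive side too):

    // One abstract channel; each transport (Ethernet, USB, serial,
    // in-process function call...) gets its own subclass.
    class Channel {
    public:
        virtual ~Channel() {}
        virtual bool send(const Message& msg) = 0;
    };

    // The degenerate "function call" channel: delivers straight to a
    // receiver's message handler in-process, no wire format needed.
    class LocalChannel : public Channel {
    public:
        LocalChannel(Receiver& target) : target_(target) {}
        virtual bool send(const Message& msg) {
            return target_.handleMessage(msg);
        }
    private:
        Receiver& target_;
    };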
Each sender object will send its messages to the central core "robot." Each receiver object will be layered upon the main "robot" object. A robot object defines the core of what a robot does. A robot object can be a library, a shared library (or DLL), or a process.
Take a keystroke sender object. It will be implemented as its own process or thread. It will open a channel to the main robot object and wait for keystrokes from the user. When meaningful keystrokes arrive, they are decoded and turned into a set of messages to send to the robot. For example, if the user presses "f," that could mean forward, so a ROBOT_MOVE_FORWARD message can be sent to the robot. (Yes, very simplified for discussion.)
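Something like this, using the sketches above (keystrokeSenderLoop is a made-up name, and a real sender would read raw, unbuffered keys rather than line-buffered stdin):

    #include <cstdio>

    // Hypothetical keystroke sender: runs as its own thread or process,
    // opens a channel to the robot, and translates keys into messages.
    void keystrokeSenderLoop(Channel& toRobot) {
        for (;;) {
            int c = std::getchar();      // wait for a keystroke
            Message msg;
            msg.length  = 0;
            msg.payload = NULL;
            switch (c) {
                case 'f': msg.type = ROBOT_MOVE_FORWARD;  break;
                case 'b': msg.type = ROBOT_MOVE_BACKWARD; break;
                case 's': msg.type = ROBOT_STOP;          break;
                case EOF: return;        // input closed, shut down
                default:  continue;      // not a meaningful key
            }
            toRobot.send(msg);           // the channel hides the transport
        }
    }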
The main robot object may or may not know how to deal with this message. If it does, it will act; if it does not, it will return a failure code.
The "main robot object" is a generic receiver that can be replaced by chaining through the API. The default robot object may do nothing, but your whizbang robot object may know how to move. You call into the core and replace the default robot object with your robot object. Then your robot will get all the calls from all the various sender objects. A later module may chain your robot object and further augment the robot "whole."
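The chaining could look like this (again, just a sketch with invented names):

    // The default robot object: knows nothing, handles nothing.
    class DefaultRobot : public Receiver {
    public:
        virtual bool handleMessage(const Message&) { return false; }
    };

    // A whizbang robot chains whatever robot object was there before:
    // it acts on the messages it understands and passes the rest along.
    class WhizbangRobot : public Receiver {
    public:
        WhizbangRobot(Receiver* next) : next_(next) {}
        virtual bool handleMessage(const Message& msg) {
            switch (msg.type) {
                case ROBOT_MOVE_FORWARD:
                    // ...drive the motors here...
                    return true;
                default:
                    // not ours; let the chained robot object try it
                    return next_ ? next_->handleMessage(msg) : false;
            }
        }
    private:
        Receiver* next_;  // the robot object we replaced in the core
    };

A later module wraps WhizbangRobot the same way, so the robot "whole" is just the top of the chain.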
Other receiver objects could be implemented, but I'm not sure that is necessary. Maybe things like encoders and other sensors should also be receivers; that's one of the things we should think about.
For you microcontroller dudes, this now works out nicely. Suppose your robot is sitting on a USB port. A USB channel can send messages to your robot receiver object. As long as you respond to the messages as defined, it makes no difference how or where your robot is implemented. As long as there is a channel to it and you implement a message handler, you are a robot.
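On the wire, a stream channel like serial or USB just needs a simple framing convention so the microcontroller can pick messages apart. A rough sketch (the header layout is pure assumption on my part, not a proposal for the actual format):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    // Write a 32-bit value little-endian, regardless of host byte order.
    static void put32(uint8_t* p, uint32_t v) {
        p[0] = (uint8_t)(v);
        p[1] = (uint8_t)(v >> 8);
        p[2] = (uint8_t)(v >> 16);
        p[3] = (uint8_t)(v >> 24);
    }

    // Hypothetical framing for stream channels: [type:4][length:4][payload].
    // Variable-length payloads get chunked by the transport underneath.
    // Returns the number of bytes to push down the pipe, 0 if buf is too small.
    size_t packMessage(const Message& msg, uint8_t* buf, size_t bufSize) {
        size_t need = 8 + msg.length;
        if (bufSize < need) return 0;
        put32(buf,     (uint32_t)msg.type);
        put32(buf + 4, msg.length);
        if (msg.length)
            memcpy(buf + 8, msg.payload, msg.length);
        return need;
    }

The micro on the other end reads the 8-byte header, then length bytes, fills in a Message, and calls its one handler.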
All in all, this is totally location agnostic, and creating functionality is simply responding to a set of messages. Your robot could be a simple radio-link microcontroller turtle but have the brains of a huge supercomputer, and be coded in exactly the same way as a big stand-alone robot with an SMP motherboard.
What do you think? Is anyone interested in working on the development of it? I have a networked CVS repository that I can set up.
Anyone?