I've been thinking about requirements for a protocol as described in the Subject. The idea is to use it in robots etc as a flexible interconnect for actuators and sensors. The requirements as I see them are:
few (2-4) wires,
support up to say 50 devices on one bus,
data rate say 100Kbps, bidirectional (polling with interrupts?)
supports enumeration of devices,
able to provide a few mA to unpowered sensors,
peripheral end of the protocol implementable in small micros, having, say, 1K ROM.
no need for external transceiver chips (clamping diodes only)
maximum of say 10 metres, in any topology,
not required to handle extremely noisy environments.
I've looked at SNAP, CAN, etc., but want something simpler, perhaps like the Dallas/Maxim One-Wire (tm) bus, and not patented like I2C. The basic idea is to allow a move up from the R/C servo pulse protocol, supporting distributed motor control, encoders and sensors. If a suitable protocol were created, I could see multiple manufacturers producing devices and brains for it, all interchangeable. Plug in a device, and the brain recognises it, asks what it is and does, e.g.: "temp sensor", "PWM H-bridge max 5A and 28V", "encoder yielding position or velocity", etc.
Is this an application for RS-485 (though that needs drivers)? What other hardware and software protocols would suit? Otherwise is anyone interested in helping design one?
Honestly, I've been pretty happy with I2C, patents or no -- it does just about everything you'd like at rates up to 1.2 Mbps. Whatever protocol you choose, having dedicated hardware support on a microcontroller is a major plus, which is probably why designing an entirely new protocol from scratch isn't really all that appealing to me.
As to I2C patent issues, it's probably worth looking at when the patent expires -- it may be that in a few years your patent objections would be moot (maybe not, though -- I don't know which patents are relevant, and I'm too lazy to check the Philips docs for the patent numbers).
Several of us have hashed out a simple RS485 protocol for this purpose. See:
I've implemented it on a micro as small as an ATmega8 with plenty of code to spare for the application (which was a smart dual H-bridge w/encoder feedback, pid algorithm, etc). You can see this particular implementation here:
I've also more recently used the ROBIN protocol for a remote control wireless connection using a 900 MHz radio to control a full sized truck (1987 Chevy Suburban) - it's kind of fun driving a large 4WD SUV with a Playstation II controller :-)
If you want to go that way, most of the work is done for you as the code above is reasonably well tested and implements most of the protocol (does not implement the configuration commands, but that's not too hard to add).
It would actually be nice if more people adopted ROBIN for their projects. The original hope was to encourage folks to use it, especially hardware vendors so that we (consumers) can buy ready-made ROBIN-enabled devices to attach to our distributed systems :-)
I will probably be coming out with an h-bridge soon with ROBIN as one of its communication capabilities. I'll perhaps add some other devices as well. My MAVRIC-II and MAVRIC-IIB controllers both support ROBIN directly with their on-board RS485 interface and the sample code from my web site. I find it very convenient because distributing the sensing and control provides a great deal of modularity not to mention it greatly simplifies wiring, especially on larger robots and applications where running wires back to a single central controller is sometimes inconvenient, or even impossible.
1) Separate the hardware layer from the protocol itself in how you think of this thing.
2) Look at your requirements for: a) baud rate, b) no transceivers, c) line run of 30', d) variable number of devices on the network. They conflict.
3) You state that you wish to have motors and other high-frequency generators on the same network. This makes noise a very relevant issue.
4) A dedicated protocol like CAN or I2C will likely be supported as an on-chip peripheral, or require an ancillary processor to handle specific communications, as "bit banging" at any baud rate is unpleasant.
One of the issues here is that you wish to have an auto-detect capability. Do you mean "hot swap", or merely the ability to initially poll the network? I prefer a scheme where every device has a unique address. This is pretty straightforward if you transmit ASCII data, in that you can leave 0x80 to 0xFF available as unique identifiers. In this way, you can use RS-485. LTC485's are not that expensive, and simple to use. Alternately, you can adopt RS-422 for higher throughput.
Get yourself 30' of twisted pair cable and put 2 dozen LTC-485's on it. Plug them in at intervals, add some pull-down on the "-" line, some pull-up on the "+" line, put a terminator in at the end, and start screaming into one end at 115200 with the packet 0xAA or 0x55 at a periodic rate, and see what you get on the other receivers. Listen with some micros programmed to wait only so long for a correct packet, and count how many packets you get. Put up a warning LED if things go astray. Look at the lines with a scope to see how clean they are. If you are adventurous, wrap the thing around a running hairdryer and see whether you have trouble.
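The per-micro counting logic can be sketched in a few lines. This is a host-testable sketch with invented names; a real listener would also enforce the receive timeout and drive the warning LED from the error figure:

```c
#include <stddef.h>

/* Count how many bytes in a received window match the expected test
 * pattern (0xAA or 0x55), and report the error fraction in percent.
 * On a real micro this would run over the UART RX stream. */
typedef struct {
    unsigned good;
    unsigned bad;
} link_stats;

void tally_window(const unsigned char *rx, size_t n, link_stats *s)
{
    for (size_t i = 0; i < n; i++) {
        if (rx[i] == 0xAA || rx[i] == 0x55)
            s->good++;
        else
            s->bad++;          /* corrupted byte: light the warning LED */
    }
}

int link_error_percent(const link_stats *s)
{
    unsigned total = s->good + s->bad;
    return total ? (int)(100u * s->bad / total) : 0;
}
```

Running this at each tap point while you move the hairdryer around tells you exactly which stretch of the bus is marginal.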
Yup, looks mostly like what I meant. I figured the idea wouldn't be original. More peripherals and some standard messages for them would be the next step. Would the code fit in the 4K ROM of a small MSP430? Wait, I can try that :-)
I had thought to glue an MSP430 to an LMD18200/245 H-bridge driver, or a HIP4081+mosfets for higher power. For sensors, it'd be great to have a single-chip solution on a half-square-inch PCB, but I guess an LTC485 driver chip would fit in that size with the MCU.
Code that works with other MCUs and with ITX m'boards will help with adoption. I acquired a PC104 embedded PC running Linux which has RS-485, so I'd use that for a brain, for example. ARM is the other main candidate for a controller.
Exactly my thought. Ability to enumerate devices and assign addresses to new devices dynamically would be a bonus, see my other msg.
I believe you, but haven't the experience to know how badly they conflict. What would make the biggest difference in practicality, supposing an open-drain one-wire type interface (noise-prone)? Would it be feasible by reducing the devices to 16, the baudrate to 1k, ...?
Yes, the RS485 is probably necessary. Assuming an LTC485, how much can the ground at the end-points bounce without causing signalling problems? The data sheet says +-7V, but is that realistically achievable in the kind of network I've described?
Do the devices need to be in a single row, or can the network have short branches or a star topology? How much would that affect the capability? How much must the data rate drop to use distributed termination?
The latter. Preferably to enumerate a unique serial number (like the one-wire 64-bit ID) but then to assign each device a single-byte address. That sort of enumeration is easy with an open-drain architecture, but I guess RS485 could do it too. Just send a message saying "speak if your serial# starts with X", and if you hear noise, whether a single transmission or a collision of many, someone's there and you can refine the query. Once you have each one, you can assign addresses. You could also periodically send a "speak if unassigned" message to allow hot-plugging and handle devices that accidentally reset themselves. Sort of a mini DHCP :-). I'm assuming that the host has a bit more grunt than the peripherals here...
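That prefix-refinement search can be simulated on the host. The sketch below is hypothetical (the serial numbers and function names are invented), but it shows how each "speak if your serial starts with X" query narrows the search one bit at a time until every 64-bit ID is isolated:

```c
#include <stdint.h>

/* Host-side simulation of prefix-query enumeration. The bus is
 * modelled as a single question: does ANY device serial match the
 * given prefix? Real hardware would instead detect a reply (or a
 * collision of several replies) on the open-drain line. */

#define NDEV 3
static const uint64_t serials[NDEV] = {
    0x0123456789ABCDEFULL, 0x0123456700000001ULL, 0xFEDCBA9876543210ULL
};

/* "Speak if your serial's top 'bits' bits equal 'prefix'." */
static int anyone_matches(uint64_t prefix, int bits)
{
    for (int i = 0; i < NDEV; i++)
        if (bits == 0 || (serials[i] >> (64 - bits)) == prefix)
            return 1;
    return 0;
}

/* Refine the prefix one bit at a time; each complete 64-bit path is
 * one device, which the host would then assign a one-byte address. */
static int enumerate(uint64_t prefix, int bits, uint64_t *found, int nfound)
{
    if (!anyone_matches(prefix, bits))
        return nfound;                  /* nobody here: prune branch */
    if (bits == 64) {
        found[nfound++] = prefix;       /* isolated a single device  */
        return nfound;
    }
    nfound = enumerate(prefix << 1, bits + 1, found, nfound);
    nfound = enumerate((prefix << 1) | 1, bits + 1, found, nfound);
    return nfound;
}
```

Each pruned branch costs one bus query, so the query count scales with the number of devices present rather than the 2^64 address space, which is the same trick the Dallas one-wire ROM search uses.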
NACKs are not really very useful. Garbled packets will probably have a bad checksum, so they won't be answered. If you answer packets with bad checksums, you'll send back phony NACKs for garbled messages. Mostly you get timeouts.
If you're going to allow multiple masters and collisions, you're going to get lots more garbage packets. You'll need a 32-bit CRC. For single masters, you can probably get away with what you've got.
Commands that aren't idempotent (doing it twice has the same result as doing it once) are a no-no, since there's no sequencing.
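A tiny illustration of the distinction, with hypothetical motor-command handlers:

```c
/* "Set position to X" is idempotent: a retransmitted duplicate of
 * the same packet leaves the state unchanged. "Move by delta" is
 * not: a duplicate moves the motor twice. */
static long position = 0;

void cmd_set_position(long x) { position = x;  }  /* idempotent     */
void cmd_move_by(long dx)     { position += dx; } /* NOT idempotent */
```

With no sequence numbers in the protocol, only the first style is safe to retry.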
> I've been thinking about requirements for a protocol as described
Wires are relatively cheap. Category-5 cable is 4 twisted pairs -- 8 wires total. It is also quite cheap, with inexpensive RJ-45 connectors that can be easily crimped on. Run signals up and down on a pair of them and use the remaining two pairs for power and ground.
Be careful. 50 devices on one bus introduce all sorts of issues -- addressing, debugging, polling vs. some collision detection, less bandwidth for all devices, bus termination. These issues can be solved, but it is not simple.
Polling is easier, but interrupts are nice. The fewer devices on the bus, the more palatable polling is. Remember you can have multiple buses.
Very useful, and a bit tricky to do on a simple multi-drop bus.
Wires are your friends. It is easy to pump a few hundred mA down some extra non-signal wires.
If it is kept simple, it can be done.
Transceiver chips are cheap. Only go without transceiver chips if you can keep your distances really short (6"-18") and your data rates low. High data rates and long distances mandate transceiver chips.
Use RS-422 (point-to-point differential signalling) or RS-485 (multi-drop differential signalling). In general, either pick a star topology or a bus topology and do not try to mix them. Personally, I think star topologies are more robust, so I always pick them over bus topologies. Ethernet started out as a bus topology and has largely evolved into a star topology.
RS-422 and RS-485 work well in noisy environments.
I've actually got two designs out there. The first is RoboBRiX(tm). See:
The RoboBRiX design uses TTL signalling over short distances and fairly slow baud rates (2400 baud). It works just fine.
The other design is not as far along, but I do have some initial working hardware. I call it Simplicinet. Here's a URL:
It is basically a star shaped network that uses RS-422 up and down signalling. It runs at 115,200 baud. For this network, each node has to have a hardware UART. By the way, I use microcontrollers like popcorn in my designs. Somebody else in our robot club has come up with a very similar design that he uses for his robot creations.
I'm sure that there are other systems out there.
I hope this helps,
P.S. To contact me via E-mail, use Wayne -at- Gramlich -dot- Net. The snipped-for-privacy@PacBell.Net is just a SPAM trap.
In my opinion... Actually there is no real simple, easy-to-use solution; even if you try to make one yourself, it begins to get complicated and unwieldy right from the start. I'd take the hint that CAN is a very good way to go. It works in the very noisy automobile environment. Cars can now have more than 100 computers or microcontrollers in them. Some cars now have a microcontroller issue an error when a tail light bulb burns out and notify you about it by voice or on an LCD dashboard display indicator. That's one MCU per light bulb just to monitor whether the bulb is good or bad and then communicate it over a CAN bus. Then you have an MCU per seat position, so that the MCUs can monitor the person's weight and/or whether the seat is empty for air bag deployment purposes.
blueyedpop has successfully built and run the "robopede", which was featured in the April 2003 issue of Nuts and Volts magazine. Of course, Dr. Brian Huff is now the owner of this magnificent robot. A pretty good book about CAN to help get you going is "CAN System Engineering: From Theory to Practical Applications" by Lawrenz.
CAN is a lot of fun, and it's really neat how it works. I almost went 485 for my centipede, having done some of that, but CAN took care of a lot of issues right off the bat.
Have a look at bb-elec. There is a tech notes section that can answer a lot of your questions.
Is auto enumeration that important to you? Even auto enumeration requires that you know what each module is doing, unless you are trying to play with emergent behavior.
CAN can be implemented with the addition of an external CAN processor. The advantage here is that many of these processors have deeper RX buffers, which makes them suitable for use as a master. The PIC micros, best as I understand it, are not as suitable for handling a large bus.
Just as a matter of note, you talk about 50 nodes at 100k+ speeds. This is a lot of data. What were you planning on using as a central processor? That is 11,520 bytes a second at 115,200 baud. These of course are "intelligent bytes" in that, beyond an address, they require processing behind them. Wrangling that much data is a task.
DECnet can be carried over a variety of link layers including Ethernet, token ring, FDDI, frame relay, and a bazillion others.
I generally agree with this. I admit that I did do a no-no in the protocol which led to this, and that was to allow the NACK bit in the flags to be set by the "application". Remember, we are talking about small 8-bit micros where code space is sometimes at a premium, so some corners were cut for the ease of implementation.
Specifically where I broke the rules a little with the NACK is that I found it very convenient to return a NACK from the application if it received an otherwise good packet, but one that it didn't understand, i.e., in my h-bridge, it would set a NACK bit in a response packet if it did not understand the packet's data content - such as bad command or parameter. Certainly, no amount of resending that packet would help that situation - I used it as an application response, not a protocol response. Shame on me.
What I _should_ have done is either send a specific response code as a normal data packet and not mix the protocol layer with the application layer, or use a spare bit in the flags byte and let the application determine its meaning.
The topic of ROBIN came up again a few months ago on the mailing list where it was conceived and hashed out originally, and we then defined a few of the unused bits as "application bits" that could be used for this purpose. D. Jay Newman who posted earlier on this thread was part of that discussion as well (might have been his idea, can't remember for sure).
That's also where the idea of a "semi-automated" method of line configuration was proposed. This is the "CONFIG" or "COMMAND" bit in the flags register - which define the meaning of the data payload to be a "configuration command" as opposed to application data. I don't think I've updated my document yet to say this, but I think that we've agreed that this "extension" is optional and not required. Again, the target is small microcontrollers where flash and RAM space may be tightly constrained. While nice to have, the on-line configuration is not crucial and can be optionally implemented. Devices should at least recognize the CONFIG bit and silently ignore these packets. Any master device should soon get the idea that its requests are not being understood.
Anyway, with those two application bits available, I could do away with the use of NACK in my implementation, as the "protocol layer" currently doesn't do much, if anything, with it. Basically, in my implementation above, the "protocol layer" handles basic interrupt-driven transmission and reception of a packet, handles checksum calculations for both sending and receiving, constructs packets for shifting out the wire, and performs automatic collision detection and random back-off and retransmission (when requested).
But when packets are received, they are not automatically ACK'd or NACK'd - they are passed into a buffer and the application is notified of packet reception. It should then handle the packet, as well as any ACK's that may be required to the sender. This allows for an optimization where a return packet can both serve as a query response to the sender as well as a reception ACK (an ACK packet is just a regular packet with the ACK bit set in the flags).
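A rough sketch of that reply-doubles-as-ACK idea is below. The field layout, flag values, and checksum here are invented for illustration; they are not the actual ROBIN wire format:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative framing only -- NOT the ROBIN wire format. */
#define FLAG_ACK 0x01
#define MAX_DATA 59   /* 64-byte max packet minus 5 bytes of overhead */

typedef struct {
    uint8_t dst, src, flags, len;
    uint8_t data[MAX_DATA];
    uint8_t sum;                  /* simple 8-bit checksum for the sketch */
} packet;

static uint8_t checksum(const packet *p)
{
    uint8_t s = (uint8_t)(p->dst + p->src + p->flags + p->len);
    for (uint8_t i = 0; i < p->len; i++)
        s += p->data[i];
    return s;
}

/* Build a reply that is both the query response and the reception
 * ACK: one packet on the wire instead of two. */
void make_ack_reply(const packet *req, packet *rep,
                    const uint8_t *payload, uint8_t n)
{
    rep->dst = req->src;          /* back to the sender      */
    rep->src = req->dst;
    rep->flags = FLAG_ACK;        /* reply doubles as an ACK */
    rep->len = n;
    memcpy(rep->data, payload, n);
    rep->sum = checksum(rep);
}
```

The application owns this step, so a device that has nothing to say can still ACK with a zero-length payload, while a query response costs no extra packet.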
Because our packets are only 64 bytes, I don't think we need to go to a 32-bit CRC. And in practice, with 3 or 4 masters, I have a real hard time actually causing collisions in the first place, especially at higher baud rates when the time on the wire is small. In fact, I had to resort to some automated and rather contrived test procedures to generate collisions in order to test and debug the collision detection and retransmission code in my implementation.
But I certainly would not be opposed to a 16-bit checksum. I do think a 32-bit checksum would be a bit overkill. But believe me, there were actually some who opposed a checksum at all. Some vehemently opposed it and dropped out of the discussion because of it - I'm still not sure that I fully understand their reasons. But since I did want this to be a collaborative effort and not become the "Brian Dean" or "BDMICRO" protocol, I, and all of us involved, made concessions regarding our "pet features". I think it is more valuable to the community to keep it a collaborative effort, with some give and take, and come up with something that the majority of folks can buy into. And thus, hopefully, build some momentum behind it, and maybe one day we will be able to purchase ROBIN-enabled sensors and robot peripherals. A number of people have contributed ideas for the protocol - myself, Dennis Clark, Jay Newman, and lots of others I can't recall at the moment.
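If a 16-bit checksum were adopted, a table-free CRC-16/CCITT along these lines fits even tightly flash-constrained parts. This is the standard bitwise algorithm, not code from the ROBIN sources:

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-16/CCITT-FALSE (polynomial 0x1021, initial value
 * 0xFFFF). No lookup table, so it costs essentially no RAM and only
 * a few dozen bytes of flash -- at 8 shift/XOR steps per byte. */
uint16_t crc16_ccitt(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)(*buf++) << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}
```

The standard check value for this variant is 0x29B1 over the ASCII string "123456789", which makes it easy to verify an implementation against another node's.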
We were not looking to reinvent TCP/IP. Things like fragmentation and sequencing were pretty much off the table since those tend to imply rather significant buffering on the part of the receiver. Some of the target micros might have only a few hundred bytes of RAM.
When you must be sure that the packet was received and understood, the sender should set the ACK bit so that it gets the response, and the receiver should be tolerant of duplicates. Not 100%, but pretty darn good. Devices should be designed to have robust command sets - ROBIN does not guarantee packet reception - but it does provide a high probability of packet reception and decent error detection - that is what we've asked of it and it does that very very well.
Another design criterion was cheap hardware. My MAVRIC-IIB boards currently ship with a Texas Instruments SN65176 transceiver - an industrial temperature rated RS485 bus transceiver that costs less than a dollar from Digikey in quantity. It is very simple to use - far simpler than the common MAX232 RS232 level shifters that folks use for RS232 connections. And when you drive it with a standard UART, your program doesn't even need to do anything different than normal other than enable the transmitter when it is getting ready to send data. My MAVRIC-II and MAVRIC-IIB boards also incorporate on-board terminators, further simplifying the bus. And my MAVRIC-IIB uses screw terminals for the bus connectors - it just doesn't get much simpler to connect to the bus. The screw terminals also make adding biasing resistors extremely easy if they are needed.
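That "enable the transmitter before sending" step is about all the software has to add over plain UART output. Here is a generic sketch with the I/O injected through function pointers so it runs anywhere; the names are invented, and on a real AVR these would poke the PORT and UART registers directly:

```c
#include <stdint.h>
#include <stddef.h>

/* Half-duplex RS485 send: assert the transceiver's driver-enable
 * (DE) pin, shift the bytes out, wait for the last byte to fully
 * leave the shift register, then release the bus back to receive. */
typedef struct {
    void (*set_de)(int on);        /* drive the transceiver DE pin */
    void (*put_byte)(uint8_t b);   /* blocking UART write          */
    void (*wait_tx_done)(void);    /* wait for shift reg empty     */
} rs485_io;

void rs485_send(const rs485_io *io, const uint8_t *buf, size_t n)
{
    io->set_de(1);                 /* take the bus                  */
    for (size_t i = 0; i < n; i++)
        io->put_byte(buf[i]);
    io->wait_tx_done();            /* don't cut off the last byte   */
    io->set_de(0);                 /* release: back to listening    */
}
```

The wait-for-shift-register-empty step matters: dropping DE as soon as the last byte is queued truncates it on the wire, a classic half-duplex RS485 bug.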
And finally, an important design decision was that it be _simple_. If it gets bogged down in esoteric issues that are peripheral to the main goal of easily moving data from point A to point B, we will have a wonderfully academic protocol that no one implements. There were many naysayers who spoke out against Ethernet many years ago - folks saying that it would never catch on because it is not 100% deterministic, it has collisions as a fundamental premise of its operation, throughput will suffer, etc. We see how that turned out.
ROBIN is simple to implement. I did my first implementation in just a few hours - starting from scratch! Now when I need a ROBIN device, I just copy the code exactly as it is on my web site and modify it. I can create entire devices that solve real-world problems in just a few hours - for example, the Playstation II controller that controls the 4WD SUV. I literally built that in just a few hours using hardware I had on hand. It's got two end nodes: on the transmit side, a MAVRIC-IIB + Playstation II controller + MaxStream 900 MHz radio module; on the other end, a MAVRIC-IIB + MaxStream 900 MHz radio module. The over-the-air protocol is ROBIN - exactly the code from my web site. I send 20 packets per second at 9600 baud and get about 1000 feet of range. I used ROBIN even though I had only 2 endpoints because the packet protocol and data checksums made it robust for an otherwise unreliable radio medium. Control is very solid and robust.
Anyway, I hope this helps to shed some light on ROBIN. I hope to see some more implementations in the future. I'll be offering a few from BDMICRO. I believe that Jay Newman is working on a Java implementation. I will eventually do a PC ("C") implementation that will work on something like the mini-ITX boards using an RS485 adapter.
It should - in my ATmega8 implementation, which has 8K flash, ROBIN took around 10%, if I recall. So you might be looking at 20% of your space, assuming similar code density. Note that my implementation is entirely "C" using GCC, which does pretty well in terms of optimizing code size and speed. An expert assembly language programmer could probably make that even smaller and faster.
Yep. Implementation isn't hard. I don't have an ARM controller in which to implement it. I need to go ahead and do a PC implementation; I just haven't needed it myself, so that has been lower on the priority list for me.
Well, ROBIN sort of has this facility. The way it works is that a device that needs to be configured assumes an address of 0xFE, 9600 baud, 8N1, perhaps by installing a jumper or something, and the firmware automatically assumes those settings. The master controller then assumes those same line settings (not the node id, of course - it keeps whatever it had). Then configuration commands are issued to the new device to set the desired node id, line speed, and other settings. When the new device is reset and the jumper removed, it automatically assumes the new settings. Note that the jumper does need to be manually controlled - any free I/O line could be used to indicate that the device should assume the default "configuration line settings".
At that point, the device is configured to talk on the bus.
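That boot-time decision might be sketched like this. The names and the settings structure are my assumptions, not the actual ROBIN firmware:

```c
#include <stdint.h>

/* If the config jumper is installed, come up at the well-known
 * configuration identity (address 0xFE, 9600 baud); otherwise use
 * whatever settings were previously assigned and stored in EEPROM. */
#define CONFIG_ADDR 0xFE
#define CONFIG_BAUD 9600UL

typedef struct {
    uint8_t  addr;   /* one-byte node id on the bus */
    uint32_t baud;   /* line speed                  */
} line_settings;

line_settings boot_settings(int jumper_installed,
                            const line_settings *stored)
{
    line_settings s;
    if (jumper_installed) {
        s.addr = CONFIG_ADDR;    /* well-known config identity    */
        s.baud = CONFIG_BAUD;
    } else {
        s = *stored;             /* previously assigned settings  */
    }
    return s;
}
```

The master mirrors the same two constants when it wants to talk to an unconfigured node, which is what makes the scheme work without any bus-level negotiation.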
Another feature of ROBIN along these lines is the "Id Request" bit. When a node receives a packet with this bit set, it is compelled to respond to the sender with a human readable ascii string describing the node. Thus, you might see in response: "BDMICRO:MAVRIC-IIB:ROBIN Test Node" in response to one of my test nodes.
Several folks have mentioned in other threads about noise. RS485 is designed specifically with this in mind. RS485 is probably the most common bus used in industrial automation and data collection. It uses a pair of balanced differential transmission lines such that the _difference_ between the two lines defines the state. This gives _very_ high noise immunity because any noise that affects one of the lines also affects the other, and since RS485 looks at the difference between the lines, the noise just cancels out and goes unnoticed.
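The cancellation can be stated in one line of arithmetic (values in millivolts, purely for illustration):

```c
/* The receiver decodes only the difference between the A and B
 * lines, so identical (common-mode) noise induced on both lines
 * cancels exactly: (a + n) - (b + n) == a - b. */
int receiver_sees_mv(int line_a_mv, int line_b_mv)
{
    return line_a_mv - line_b_mv;
}
```

This is why the twisting matters: it keeps the two conductors physically close so that any induced noise really is common to both.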
Regarding transmission rate, I've personally run the bus with a baud rate of 460.8 kbps. In my test program on my web site you will find a "ping" command that quantifies the throughput and reports any errors or timeouts. Here is the result of transmitting the largest ROBIN packet size of 64 bytes to node B, with B echoing the exact same data payload back to node A:
----------------------------------------------------------------------
ROBIN> baud 460800
RS485 baud rate set to 460800
baud rate register set to 1
ROBIN> ping B 01234567890123456789012345678901234567890123456789012345678
packet send size = 64 bytes, packet recv size = 64 bytes
1000 packets sent, 1000 recvd, 0 errs, 0 timeouts, total of 3943 ms
253.614014 transactions / second
3.943000 ms for round trip
1.971500 ms one way
----------------------------------------------------------------------
As you can see, both the throughput and turnaround time are pretty respectable for the protocol itself. The total time for transferring 64K of data was 3.9 seconds. Removing the overhead (5 bytes of overhead per packet), we get 59,000 bytes of data in 3.943 seconds, or about 15 KBytes of data per second. Almost twice that would be possible for a one-way transmission - for example, if a ROBIN node were controlling a camera on the bus and transferring the image to the master, that would be a one-way transmission from the camera node to the master, less any ACKs used for flow control and data acknowledgements.
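For reference, the arithmetic behind those figures, with the numbers taken straight from the ping output above:

```c
/* 1000 round trips of a 64-byte packet (5 bytes of per-packet
 * overhead) completed in 3943 ms. */
enum { PACKETS = 1000, PKT_BYTES = 64, OVERHEAD = 5, TOTAL_MS = 3943 };

/* Round-trip transactions per second. */
double transactions_per_sec(void)
{
    return PACKETS * 1000.0 / TOTAL_MS;
}

/* Net payload throughput in bytes per second, overhead removed. */
double payload_bytes_per_sec(void)
{
    return PACKETS * (double)(PKT_BYTES - OVERHEAD) * 1000.0 / TOTAL_MS;
}
```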
Note that the above is for the maximum packet size on the bus. Most packets will be much smaller than that. At the other extreme, sending the smallest packet of 5 bytes results in:
----------------------------------------------------------------------
ROBIN> baud 460800
RS485 baud rate set to 460800
baud rate register set to 1
ROBIN> ping B
packet send size = 5 bytes, packet recv size = 5 bytes
1000 packets sent, 1000 recvd, 0 errs, 0 timeouts, total of 555 ms
1801.801758 transactions / second
0.555000 ms for round trip
0.277500 ms one way
----------------------------------------------------------------------
So in practice, you will see something somewhere in between.
But the actual data rate you are able to achieve will depend on the transceiver chip you use (some are capable of higher rates than others) and how much care you use in constructing your bus to help eliminate grounding problems, etc, as well as the overall length of your bus.
This is fun stuff - an area in which I enjoy working!
For my projects, it's not important at all. But for a protocol that we might hope to be implemented by a number of vendors, and supported by some smart development software (my main area of expertise), I think it counts for a lot. I think robotics has a long way to go in the development of smart software tools.
I proposed the high data rate to decrease latency and collisions, not to support sustained throughput - though the ability to send a lo-res video frame occasionally would be an advantage. Not every app will need it, but my brain is a P133+32MByte running Linux, so it's capable enough.
Re noise, I'll take another look at CAN. It isn't supported by any of the processors I've used, but I can change that :-). Brian noted that RS485 is differential, but if there's a possibility of more than 7V of ground bounce it will still come unstuck, no?
"EIA-RS 485 defines maximum voltage range at the receiver input (ground potential difference + alternating signal voltage) from -7V to +12V."
So it is quite tolerant in that regard. Note that most RS485 references suggest the use of a 100 Ohm resistor for connecting the grounds of devices. This mainly applies to long runs (maximum of 4000 ft). There are some nice diagrams on that web page as well.
RS485 is _designed_ for this. When your bus can potentially span kilometers, it gives a whole new meaning to "ground reference" :-)
ROBIN itself is fairly new, but its concepts are as old as the hills. One of the goals with ROBIN is to give RS485 what I2C and CAN already have - a common underlying protocol. You generally can't go out and purchase third-party RS-485 "commodity" items, not like I2C anyway. Each RS485 network runs a different protocol - vendors can't support them all. ROBIN is an attempt to make RS485 an option for vendors of certain classes of devices who like the virtues of the underlying physical and electrical characteristics of the bus (relatively high speed, high noise immunity, cheap hardware, simple wiring), but for whom the lack of a common agreed-upon communication protocol has made creating "for sale" devices otherwise impractical.
But I2C is really not appropriate for Clifford's application due to the length of the bus. I2C is rated for 1 meter, if I recall. Repeaters have been introduced to deal with this limitation, though. Also, I2C is not "noise tolerant" in the way RS485 and CAN are, which seems to be important for Clifford's application.
CAN is still an excellent option, but I hear it has its quirks (as does RS485 as well). I admit that I'm not nearly as familiar with CAN, though I'm changing that.
Sorry if you think I got a little wordy about ROBIN and RS-485, but it seemed to directly answer Clifford's question, i.e.:
Seemed like a pretty good fit. But only Clifford can decide what is best for his particulars.
You're absolutely right, and in fact though I have a particular project in mind, I generalised the requirements to almost exactly the goals that Brian had set for ROBIN, and for exactly the same reasons. All that seems necessary now is to design an enumeration protocol (like DHCP) on top of ROBIN, perhaps some device-class support (standard messages for H-bridges, sensors, etc.) so software can drive devices by type, and a stronger checksum (16-bit is probably enough), and all my goals are met. I guess there'd need to be a standard connector too - what do you use? An RJ-12 would be a nice small form factor, with power on 2 pins.
It's been an informative discussion, and I thank all of you. Brian, keep me in touch with your developments, and if I get far enough to actually implement enumeration, etc, I'll let you know how I fared. The project is likely to move fairly slowly though.
Before leaving the topic entirely, the Rinnai hot water heaters use a two-wire system to connect temperature consoles. Typical runs might total 30-ish metres, all in parallel, and feed data and power down the one pair - I don't even think it's polarised. Interesting system, designed to be installed by builders without even an electrician, and often in parallel with house wiring.
CAN is really, really nifty, but it requires specific hardware chips to implement, and I am 90% sure there are licensing issues (could be wrong).
RS-485 is great because it is relatively simple, requires a couple of bucks of simple hardware, and a micro with a UART or UART emulation.
It just seems that if the OP decides to do what it is he intends to do, he will be diluting all the efforts that have gone into ROBIN if he is at all successful, since in my understanding of it, ROBIN covers most of what he is looking to do. Perhaps I misunderstand the ROBIN implementation.