Re: Using USB with Real-time Linux



Most LANs nowadays are 100 Mbps full duplex and operate via a switch. There are no collisions in a switched LAN environment.
LAN hardware is also cheaper than any competing platform. Ethernet has the further advantage of simplified debugging: you can use Ethereal or any other packet sniffer to monitor links.
Sandeep
--
http://www.EventHelix.com/EventStudio
EventStudio 2.0 - Real-time and Embedded System Design CASE Tool

Oh, never any collision? Are you really sure about that? ;-)

If the switch and network interface cards are all set to full duplex, there will be no collisions. The switch buffers what would otherwise be conflicting traffic. That said, if one overloads the switch, random things will happen, so keep the load below 25%. Ethernet is so cheap that the simple solution is to overprovision.
Even with collisions, if one runs at 5% load, the collisions are pretty rare, and I have personally participated in building billions of dollars worth of realtime systems based on ethernet and UDP, despite all heated claims that this cannot be done.
Joe Gwinn
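A minimal sketch of the fire-and-forget UDP style described above, in Python. The port number and packet layout are my own illustrative choices, not anything from these systems. Each datagram carries a sequence number so the receiver can notice, and simply skip over, lost samples rather than stalling on a retransmission:

```python
import socket
import struct
import time

TELEMETRY_PORT = 5005  # hypothetical port, for illustration only
FMT = "!Qdd"           # sequence number, send timestamp, sample value

def send_sample(sock, seq, value, addr=("127.0.0.1", TELEMETRY_PORT)):
    # Fire-and-forget: one datagram per sample, no ACK, no retransmit.
    sock.sendto(struct.pack(FMT, seq, time.time(), value), addr)

def recv_sample(sock, last_seq):
    # Block for the next datagram; a gap in sequence numbers means
    # samples were lost, and a soft-realtime receiver just carries on.
    data, _ = sock.recvfrom(struct.calcsize(FMT))
    seq, sent_at, value = struct.unpack(FMT, data)
    lost = last_seq is not None and seq != last_seq + 1
    return seq, value, lost
```

The design choice here is that freshness beats completeness: the next sample supersedes a lost one, so retransmission would only add latency.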

Did you inform your customers that you have redefined the term "real-time" from "will respond before deadline" to "very likely to respond before deadline?" Were any lives at stake if, by random chance, the unlikely (but not impossible) happens? Would you advise someone who builds nuclear reactors or commercial aircraft to go with your engineering philosophy or with mine?
--
Guy Macon, Electronics Engineer & Project Manager for hire.
Remember Doc Brown from the _Back to the Future_ movies? Do you
Guy Macon <http://www.guymacon.com> wrote:

As you no doubt know, the term "realtime" is defined many ways, some more absolutist than others. In any event, my customers are perfectly aware that UDP/IP over ethernet is used. In fact, it would be a surprise to them if something else were used, as ethernet and UDP/IP over ethernet have been used for many years, on many such systems, without any difficulty.
If something other than ethernet were proposed, my customers would greatly fear proprietary lockin, and it would take a whole lot of proving to convince them of the necessity of abandoning ethernet and internet protocols.

I prefer my philosophy. The problem with your philosophy is that if some random bit of software or network takes a microsecond too long, or breaks, the reactor blows up and ruins the neighborhood. This doesn't seem like a good idea to me.
Seriously, your definition of "realtime" is also known as "hard realtime", but there are other kinds. Specifically, "soft realtime". The difference is that in hard realtime, deadlines are absolute, and it's a sin to stray over a deadline, even by a nanosecond. In soft realtime, it's expected that there will be a distribution of response times, rather than a hard deadline, so the software is designed to deal with the distribution.
The problem with hard realtime is that it's too hard, and so (when taken with the vicissitudes of the real world) leads to fragile behaviour, especially in large systems. So, in the systems I build, one always designs for soft realtime, unless cornered. The resulting large-scale architecture is soft realtime, while some hardware controllers deep in the system are hard realtime. The overall system response behaviour is soft realtime.
If the system is distributed, with a diameter exceeding a few hundred meters, all systems are soft realtime, because message transport delays are somewhat random.
Airplanes are a different issue. Safety-of-flight computers are typically triplicated and run in hardware lockstep, with voting at every step. Soft realtime would be too hard to coordinate in the three computers. Nor is safety-critical code allowed to use such unpredictable things as interrupts. So hard realtime is the rule in such avionics.
And, yes, lives are at stake. That's why fragile behaviour is unacceptable.
Joe Gwinn

That buffering delay is the problem.
With a single-switch network, the worst-case delay (with a 1500-byte offending frame) is 1.5 ms for a 10 Mbit/s network and 0.15 ms for 100 Mbit/s, in addition to any internal switch delays. Whether this is acceptable depends on the application, but it may for instance require that timestamping be done in distributed fashion at each transmitting node. Note that quite a few embedded chips containing an ethernet port only support 10 Mbit/s, so trying to run a 1 ms cycle time would not be realistic.
If the traffic goes through two or more switches, the worst case delay becomes even harder to predict.
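The store-and-forward figures above come from the time needed to clock one maximum-size frame onto the wire. A quick check (the framing-overhead constants are standard Ethernet values; the result lands near the 1.5 ms / 0.15 ms numbers quoted above, the small difference being rounding and overhead assumptions):

```python
def serialization_delay_ms(payload_bytes, link_mbps):
    """Time to serialize one Ethernet frame, in milliseconds.
    Assumes 18 bytes of header+FCS, an 8-byte preamble/SFD and a
    12-byte inter-frame gap on top of the payload."""
    frame_bytes = payload_bytes + 18 + 8 + 12
    return frame_bytes * 8 / (link_mbps * 1e6) * 1e3

# Worst case added by one maximum-size frame ahead of yours:
print(serialization_delay_ms(1500, 10))    # ~1.23 ms at 10 Mbit/s
print(serialization_delay_ms(1500, 100))   # ~0.123 ms at 100 Mbit/s
```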

Ethernet is definitely an interesting transmission medium (replacing RS-485) when used in dedicated single master multidrop slave systems, using traditional master-slave protocols. In these cases, the timing is perfectly deterministic.
However, in practice, the hard part seems to be keeping these networks completely dedicated: as soon as someone finds out that there is an ethernet connection to a desired location, they want to hang their own, unrelated devices (e.g. a PC and a remote ethernet/serial converter) on the network :-). So even if you are using ethernet hardware for your dedicated network, at least invent some fancy name for it, which will hopefully keep those people away.
Paul
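The determinism of the master-slave scheme above comes purely from the protocol: only the master ever transmits unsolicited, so there is never contention for the medium. A small sketch, where the `bus.exchange` interface, the addresses and the cycle budget are stand-ins of my own, not any real fieldbus API:

```python
import time

SLAVE_ADDRESSES = [1, 2, 3]   # illustrative node addresses
CYCLE_BUDGET_S = 0.010        # assumed 10 ms bus cycle

def poll_cycle(bus, budget_s=CYCLE_BUDGET_S):
    """One master-slave cycle: query each slave in fixed order.
    Because only the master initiates traffic, there are no collisions
    and the cycle time is bounded by construction."""
    start = time.monotonic()
    replies = {addr: bus.exchange(addr) for addr in SLAVE_ADDRESSES}
    if time.monotonic() - start > budget_s:
        raise RuntimeError("bus cycle overran its budget")
    return replies

class LoopbackBus:
    """Trivial stand-in for a real transceiver, for demonstration."""
    def exchange(self, addr):
        return addr * 10  # pretend each slave reports a reading
```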

With a 1 millisecond cycle time, the switch would need to support cut-through routing as well. Some do.
Does one really run the embedded chips that support only 10 megabit enet at 1,000 Hz? Sounds like it would be a self-inflicted problem.

Agree.
Use non-standard connectors, "for robustness/reliability"? Non-standard cable is even better.
In my world, this is not a problem. Ethernet is cheap, so we just provide a separate network for the non-realtime stuff. The only cost is the added switches, basically. The realtime networks are most often dual-redundant as well, for fault (or even damage) tolerance.
The big problem with mixing realtime and non-realtime is not so much bandwidth and determinism of the network, it's the fact that the non-realtime net computers will be running various office productivity software products, none of which are even remotely reliable enough, so the only practical alternative is airgap separation.
The US Navy found this out the hard way in September 1997, when the Yorktown, a billion-dollar Aegis cruiser, was rendered helpless, drifting in the Atlantic just off the coast of Virginia, by a Windows NT crash. No propulsion, steering, or weapons. They were lucky. Control was recovered in about three hours, without loss of life or ship. The Navy project to replace all those expensive embedded realtime systems (and programmers) with Windows boxes subsequently died of injuries received in this incident. For details, google on "Yorktown Windows NT". The US Navy's latest standards effort, OACE, is completely POSIX-based.
Joe Gwinn

In my opinion, anyone who builds a ship control system that shuts down and can't be restarted when a data entry clerk makes a typo has much bigger problems than the fact that he chose Windows NT Terminal Server for his OS.
--
Guy Macon, Electronics Engineer & Project Manager for hire.
Remember Doc Brown from the _Back to the Future_ movies? Do you
Guy Macon <http://www.guymacon.com> wrote:

It's hard to disagree with this. But it is indicative. And, while admirals usually aren't very technical, they understand "drifting" quite well. That's what it took to overcome all the viewgraph engineering that led to the mandated use of a desktop operating system in a safety-critical application.
The problem was that they also used Microsoft networking, so the blue-screened box took the whole network down, crippling the uninvolved. If communications had been fire-and-forget UDP messages, nobody would have known or cared that MS Access crashed when a zero was entered in the wrong field, taking the box down with it.
Joe Gwinn

As I understand it, the NT Terminal Server crashed. Because the network had nothing on it except the one NT Terminal Server that ran everything from propulsion to weapons to Microsoft Office and a bunch of NT terminals, once the server went down everything stopped working.
Try Googling on Yorktown Windows NT Terminal http://www.google.com/search?&q=Yorktown+Windows+NT+Terminal
Key quotes from the first few results:
"The ship had to be towed into the Naval base at Norfolk, Va., because a database overflow caused its propulsion system to fail"
"The Yorktown lost control of its propulsion system because its computers were unable to divide by the number zero... The Yorktowns Standard Monitoring Control System administrator entered zero into the data field for the Remote Data Base Manager program. That caused the database to overflow and crash all LAN consoles and miniature remote terminal units"
"If you understand computers, you know that a computer normally is immune to the character of the data it processes. Your $2.95 calculator, for example, gives you a zero when you try to divide a number by zero, and does not stop executing the next set of instructions. It seems that the computers on the Yorktown were not designed to tolerate such a simple failure."
"It still boggles the mind that any divide by zero error on NT would cause a system to crash, let alone 27 end-user terminals. I dont care what operating system, computer or application Im using, I should be able to type in a zero and expect the computer not to crash, especially if that zero is to represent a closed valve."
"What if this happened in actual combat?"
--
Guy Macon, Electronics Engineer & Project Manager for hire.
Remember Doc Brown from the _Back to the Future_ movies? Do you

Yes, of course, but that doesn't mean there are no collisions. It means concurrent packets are buffered so as to maximize the average throughput.
The latency due to potential collisions is of about the same magnitude, sometimes worse, because switching hubs are designed with throughput in mind, not latency.
It is possible to achieve relatively low latency with an ethernet link between only two devices: roughly 1-5 ms maximum latency has been shown on a 100 Mbps network, with no more than two devices accessing the network at any given time and an appropriate underlying OS.
That being said, ethernet networks cannot qualify as real-time communication, because there is no guaranteed bounded latency. We can only get an idea of the average latency in a given network, and only provided that it is "simple enough". Which is, of course, a totally different thing.
Hi Cristian

Perhaps you could use ARCNET. ARCNET is deterministic and "fast" (up to 10 Mbps).
Michael

Polytechforum.com is a website by engineers for engineers. It is not affiliated with any of the manufacturers or vendors discussed here. All logos and trade names are the property of their respective owners.