Can you state what a desktop system has that a microcontroller
system usually doesn't have, that makes it relatively easy for these
apps to get their jobs done? Whoops, I wasn't going to do this, I was
only going to ask the question below...
I've actually written two other longish responses, perhaps against
my better judgement because I think Gordon and others are probably
right, this topic HAS been beaten to death, but sometimes I'm a bit
optimistic (perhaps Pollyannaish), and/or in denial that I'm practicing...
But I've narrowed my ramblings down to this question (or these questions),
which I hope you and/or someone else can answer:
For a typical general-purpose OS (Linux/Unix, MS Windows - I'd say
MS-DOS too, but I think that's so old it's cheating for this question)
or even a RTOS, running on modern 1GHz+ desktop systems, what is the
maximum time that interrupts are disabled? And less pertinent but
still interesting, what's the average time interrupts are disabled?
And if you can, please answer the same questions for any foreground
or background 'application' task that might be running on such a system.
The question isn't what a desktop has that a microcontroller doesn't; it is
more like what a microcontroller can do that a PC can't. As I've said over
and over, there are applications for microcontrollers, but the point I'm
trying to make is that many of the "reasons" why you must use one are not
real requirements at all.
If "requirements" drive the decisions, then certainly having the correct
facts helps you understand the requirements better.
A couple of things pissed me off about Gordon, but the "done to death"
statement is pure ignorance by the very definition of the word. To say the
PC vs. microcontroller argument is just "preference" is a little, shall we
say, insulting.
Again, and this is the difficult part of the debate, people have some deeply
personal attachment to microcontrollers that I never expected. They would
rather use a microcontroller than a PC, and my suggestion that you don't
*need* a microcontroller has initiated a religious war.
The problem is that this is *NOT* a religious argument, it is an engineering
discussion. Claiming it is merely preference is the sign of a bad engineer.
Sure, there are plenty of times when something is merely preference, but in
those cases when it is not, understanding the characteristics of the
application is very important.
Therein lies the problem. As I tried to explain, you don't always need the
precise predictability of an RTOS so much as you need an accurate
measurement of elapsed time and confidence that the system will stay
generally close to its deadline and never, or hardly ever, exceed a certain
maximum latency.
I have added a histogram to my motor control software to track latency, and
it is pretty interesting. I'm going to be putting together some charts for
my site this week.
What I've seen is that a process running at high priority exhibits low
variability, while a process running at normal priority exhibits a
higher degree of variability. That much is to be expected; what I didn't
expect is how well it would behave running at high priority.
My Linux 2.4 kernel system, running my motor control process on a system
with 0% idle, reading a USB quick cam, converting it to mpeg, and streaming
it over the wireless network, with a 20ms loop, responded between 19ms and
22ms 98% of the time; the longest delay was 30ms (seen twice). This far exceeds
my expectations, and blows away similar tests I've done on Windows NT.
"Foreground" and "background" applications are pretty much a
"Windowsism" (with respect to performance); what you are really asking
about is process priority. Even though it is pretty well hidden from the
user, Windows does provide a usable range of process priority levels.
Generally speaking, the higher the priority of a process, the more reliable
the timing performance.