We've been debating whether motor control needs a "real time" response. In
the right context, I don't believe it does:
A mobile robot with non-zero mass.
A mobile robot moving at a non-zero velocity.
The motor system is based on servo motors, not steppers.
Now, the robot cannot stop infinitely quickly. It must stop with a measured
deceleration. If it attempts to stop too quickly it can skid or fall over.
What is the "real" stopping period of a robot? It depends on the surface,
the wheels, the shape of the robot, the speed of the robot. The truth is
that it can easily be at least one second.
Now, a real time system is nice, but it is not required to control the motor
speed. As long as you can predict trends (error increasing or decreasing),
factor in the time between samples, and respond reasonably within the motor's
ability to accelerate, the hard timing guarantee of an RTOS is not
necessary. This is not to say that the system can go without responding for
too long, but a well balanced system's response fluctuates but does not
stop. A properly configured Linux or a BSD running on a PC can operate
within these parameters.
As for one person's argument about safety, I'm not sure if .01 seconds makes
a huge difference in stopping distance, but I don't think it is a real concern.
Moving at just 4 mph, in just .01 seconds that robot moves about 3/4 of
an inch (.704 inches to be exact). To get something moving that fast,
you'd have to drop it from ~1.5-2 feet from the ground. Take a 50 pound
something or other with a nice edge (assuming a 50 pound robot, with
batteries and motors and all) and drop it onto your shin from ~1.5-2
feet above it. See if it moves your shin 3/4 of an inch. I guarantee
those bones are not 3/4 of an inch thick. In other words, it will take
much less than .01 seconds for whatever you dropped to travel through them.
If you have bumper switches, say, 3 inches in front of your robot, they
can compress a total of 3 inches if needed. As soon as they touch
something, they send a signal that takes only a nanosecond or so (a few
at the most) to trigger emergency braking and reverse the motors. Even if
it can't stop inside that .01 seconds, and can only halve its speed
in the time it takes to travel those three inches of bumper,
you take its impact force down to almost 1/4. Possibly even more,
depending on your braking system. So putting that into the above
experiment, now drop a 12 pound thing onto your other leg (since your
first one is broken) and commence screaming and swearing and possibly
bleeding, then limp away on leg #2.
Now do you think .01 seconds matters in safety?
Assuming the robot goes 4 miles per hour, that's a pretty fast clip for a
robot, but OK.
Not sure I know why this is here.
Yes, very dramatic, but a pretty silly argument. You are using a higher
precision than you really have.
0.01 seconds, in a real world environment, is beyond practical measurement.
What triggers the event that causes the robot to stop? Surely some
real-world event, be it a bumper or a human hand hitting a switch, could be
shifted by 0.01 seconds by wind currents from an open window.
Again, your bumper material, what is it? If it is compressible, it will need
to compress some amount before it can trigger some sort of switch. How long
does that take? Does temperature affect this substance? If it is colder,
will the bumper be less compressible and thus have a faster response time?
How about on a hot day? Will the material be less rigid and thus take
longer to compress enough to trigger the switch?
If you put the switch on the outside of the bumper, how much travel does it
have before a connection? The bumper will still need to compress some
amount before it provides enough force to trip the switch. What sort of
debounce capacitor do you have?
You are working with precisions that are not realistic for the environment.
I still think it's fine.
Wouldn't any mathematical analysis be pointless
until you have built your robot so you can plug
in the actual figures?
Start with your requirement that it *must* all be
controlled from your Mini-ITX board and design the
robot and its actions around any limitations that imposes.
I would suggest you build the robot and see what
happens. If it fails to stop in time worry about
it then. The solution might be simply to move
the "whiskers" or "bumpers" out a bit further
from the robot. Maybe when contact is made a little
extra spring in the bumpers will do the stopping.
It may be a matter of simply setting your sensors
to trigger at a safe distance. This could even be
velocity dependent. As you go faster the bumpers
could move out in front. Or, as a more practical method
using light, have the angle of the beam change with
velocity to detect an obstacle at a safe distance
for that velocity?
With some IC logic you could arrange it so that
*any* combination of contacts desired would
disconnect the motor(s) automatically until
overridden by the main program.
Just go for it, forget about the doubters. If it
turns out to require a uC you can always add that later.
Just an aside, um, wouldn't it make sense to design the system before you
build it?
While I disagree with the conclusions drawn by the post, I appreciate the
work done. It is exactly the sort of up front work one should do.
If this were a business, I would call that a product requirement.
It is in the process, but it never hurts to do the design up front.
Maybe if you have a robot that is only an inch long and could
actually stop in .704 inches it might make sense. Scale this up
to a robot railroad locomotive moving at 4mph and .01 sec is
probably plenty of time. You must be thinking of only those
little things that skitter about on the floor.
Yes, you do need "real time" for servo motors. The sample period of a
servo system is an implicit part of the math. If the period varies, you
must compensate the measurements. Even with a hardware-assisted sample
period, a variable delay in updating the motor drive will cause loop
instability.
There are two aspects to consider. The one you mentioned is a
response time issue of the robot. One obvious example is for safety.
The problem with non-real time control here is that response is
unpredictable. Is the response to bumper switch activation .000005
seconds or 2.5 seconds? Shutting down the motors in 5 microseconds
might mean you stop in half a second, but more importantly, not
shutting down the motors instantly means you collide with the obstacle
under full power. That does a lot more damage than colliding under no
power. Think of bumping into a chair and it moves a half-inch or so.
Now think of running into the chair and pushing it across the floor
and right through the china cabinet.
The other is control signal generation. These have varying demands
depending on the drive scheme. Lots of small robots use R/C servos. The timing of the
pulse needs to be within the range of 1ms to 2ms with around
microsecond resolution, and repeated with a period of around 20 ms.
The repeat period is not so critical, but the pulse duration is. If
the timing constraints are not met, you get jitter.
Direct h-bridge control requires the ability to generate the PWM
waveform - typically a square wave of fixed period and varying duty cycle.
The duty cycle determines the power across the motor. The frequency
is pretty important - there's no one-size-fits-all frequency, but
in general, higher is better, both from a control point of view and
because lower frequencies tend to make the motors whine and can be
quite noisy and irritating. Failure to meet these timing requirements
is also bad - your motors don't do what they're told.
Typical microcontrollers handle this latter with specialized hardware
support to generate the PWM signals. Typically it involves
configuring a hardware timer, configuring pins to do the control, and
then plugging the duty cycle into an "output compare" register. After
that, the signal is totally hardware generated and the MCU spends 0
processor cycles generating the waveform. PCs typically don't have
this hardware - you won't find it on your parallel port or general
purpose I/O card. Your choice is to add hardware to do it, or
generate the signals using software. Generating them in software, it is
going to be difficult to meet the timing requirements to control the motor.
Both are real-time requirements, just at different levels of control.
I think you've been thinking of mostly the former, but I think a lot
of the folks responding to you about microcontrollers have been
thinking of the latter. At least I have, but I can't speak for the others.
Up to a point. Things never work for me as I plan
them anyway as there is always something I didn't
take into account, like the rocking tank base I
mention in another thread.
It really depends on how good your model is. Ultimately
you will have to build and test the real thing. Why
not just go ahead and do just that, particularly as
you feel it will work?
My impression is your requests for input result in
people saying you can't and you saying you can...
Often though you might be planning for things that
never eventuate and neglect things you never thought
of. Those that know, never do, to find out how to do it.
you were wrong and the post was right. Maybe there is
a way around it anyway. Necessity is the mother of
invention. If the time to stop is unknown, then give
yourself a system that is adaptable to that. Just as
you whiz along the road in your car your obstacle
detection system is planning well ahead. You don't
wait until your touch sensors signal your head going
through the windscreen after hitting a tree
before you take evasive action. This whole .01 second
thing may not be relevant if the software is smart
and the obstacle sensors long distance.
That was the point I made in another post as regards
an intelligent system working around the limitations
of the hardware.
Well, there are always unplanned circumstances, but a well engineered
product isn't hit by many.
Well, I like to know something is going to work before I spend the money on
parts and time building.
I never asked anyone "if something could work," as I've done the engineering
first. When I have asked questions, they have been specific "which would be
better" types of questions, but alas, people have ignored the question and
again said it couldn't or shouldn't be done.
A good engineer tends to have fewer unplanned circumstances.
The post was not right. I proved it wasn't. Numbers don't lie. In the case of
a mobile robot, 0.01 seconds is not a "real" number. If your robot is
moving at 4 miles per hour and makes contact with a wall or barrier, 0.01
seconds isn't going to make one bit of difference, because in the physical
world there is so much slop that if it didn't come from the robot, it
would come from the temperature, the angle at which the robot hits the
barrier, or whatever. There are too many factors involved to properly calculate.
The real issue is to not hit the barrier.
Time to stop can be known only to a point. You have an unpredictable
surface and tire wear, just to name a couple of factors, that could easily
affect the stopping distance.
This debate isn't about the system being adaptable or not; it is about whether
or not you need a "real time" OS to control the motors.
2.5 seconds is unacceptable. Assume no greater than two quanta, say 0.02
seconds, which means on average you can expect 0.01 seconds response time.
In real life on a system without competing high priority processes, it is
usually MUCH less.
Again, how much precision are you using? Microseconds? How thick is your
bumper? What's it made of? Actual physical switch response time? The nature of
the surface on which you are running. Tread wear of the wheels. There are so
many factors, 0.01, hell, even 0.05 seconds isn't going to help you.
This isn't a small robot; it's 25-40 lbs.
What makes you think I'm timing the PWM?
I am using an analog signal from a D/A converter and sending that to a PWM
generator.
Don't need it.
Or use a ramp generator and some comparators for $3.00.
A lot of people responding have strong feelings about the subject, but not a
lot of strong information. They make assumptions and work from their
assumptions.
Take, for instance, ".000005 seconds or 2.5 seconds?" Clearly you know that
2.5 seconds is out of the question, but you don't think that I would have
thought about that. It would have been better for you to ask "How will you
manage response time? What is your maximum response time?"
To which I would reply, "well, I've been doing some tests on the PC, and
under load, a single high priority task has never seen more than 0.02
seconds of latency variation." Which is about what is to be expected.
The real problem is device driver interrupt routines; these need to be
inspected for quality and for how long they disable interrupts.
I think there is a slight misconception here: real time doesn't mean fast.
It means predictable worst-case behavior. I.e. you have an upper bound on
the delay in which you react to input. I think in that sense all robot
control systems *have* to be real-time.
Now, specifically for control loops for servos: if you go with traditional
control loop theory and implementation (read: PID loop), your loop delay is
very critical. In the complex frequency domain that delay turns into a
phase shift that grows steadily along the frequency axis. As you probably
know, all stable control loops must have less than unity gain at the 180 deg.
phase point. Since the loop delay adds an additional phase shift that grows
as the frequency grows, it can make your control loop unstable pretty
quickly. And even if it does not, since your 180 deg. point moves to a lower
frequency, your ability to control the device will degrade. I mean the
control-loop error will be bigger for step-function input signals. In this
sense, just sampling the inputs and the outputs (implementing a PID loop in
SW) adds a loop delay of at least one sample period.
My experiments show that if you want an agile PID control loop with
a small DC motor (540-type R/C car motor) you need a control-loop
frequency of at least 100 Hz.
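The phase cost of that loop delay is a standard control-theory result, sketched here (the numbers are an illustrative example, not specific to the poster's setup):

```latex
% A pure delay of $T_d$ seconds has transfer function $e^{-s T_d}$;
% at frequency $f$ it contributes a phase lag of
\[
  \phi(f) \;=\; 2\pi f T_d \ \text{rad} \;=\; 360^\circ \, f \, T_d .
\]
% Example: a one-sample delay at 100 Hz sampling ($T_d = 10$ ms)
% costs $360 \times 10 \times 0.01 = 36^\circ$ of phase at a 10 Hz
% loop crossover -- a large bite out of a typical stability margin.
```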
Another important factor for control loops is that the periodicity of the
updates must be fairly precise. You might be able to control with aperiodic
sampling, but the theory for that is significantly more complex. If you just
run a PID loop under the assumption of periodic sampling when it is in fact
aperiodic, you introduce additional noise to your control signal. In other
words, clock jitter in the sampled control loop transfers directly to output
noise.
That is exactly right.
Not so. PID has nothing to do with a fixed delta T.
That assumes that the motor is more responsive than the worst case sample
period.
OK, no problem: I have a bigger electric motor that is much less responsive
than a small DC motor, and my average sample rate is 100 Hz.
A lot of people are confusing PWM generation with PID calculation. I have an
external PWM generator, and it is updated at about 100 Hz, +/- 0.01 seconds.
Snipped much content...
Every control loop has a delay from when the measurement was taken to when
the action upon that measurement reaches the controlled device. This delay
time can be critical for the stability of the control loop. You're right it
has nothing to do with PID loops. It has to do with *all* control loops,
including PID loops. For continuous-time control loops the delay is usually
rather small compared to the other time constants in the loop, and that's why
it's often disregarded. For digital control loops, however, the delay must be
at least one sample period, which can in fact be of the same order of
magnitude as the other time constants in the loop. If that is the case, then the
additional phase shift from the sampling delay can be rather substantial. In
other words: your assumption that the discrete-time implementation of the
control loop is a good approximation of the continuous-time equivalent
circuit is not true any more. And digital PID control loops have this problem.
Yes. See above.
You can do non-periodic sampling, but the customary theory assumes periodic
sampling. You would probably have to invent your own control loops for
that. You almost certainly cannot use a traditional PID loop there. But if
you have references to the contrary, I would be interested to read about them.
No, I'm not. PWM generation is just a way of going from the digital to the
analog domain - a D/A converter, if you wish, with poor anti-aliasing
characteristics. I'm talking about the whole control loop, from beginning to
end.
If it exceeds a duration which can't be controlled or is unaccounted for.
Yes, my argument is against the assumption that a fixed delta T is required.
More or less that is true.
Why? If you could control a motor accurately with a 50 Hz sample rate, why
would a variable rate between 400 Hz and 50 Hz be a problem?
Your argument assumes an acceleration response that exceeds the worst case
latency. If this is not the case, then this argument does not apply.
Not "customary theory," so much as "customary implementation."
Just scale the encoder counts based on the actual change in time. Here are some
snippets. The code works, but I am ironing out some kinks, and last night I
smoked the power transistors with a careless test lead.
int ScaleMovement(int ticks, int elapsed)
{
    /* Normalize encoder ticks by the actual elapsed time between samples. */
    return (ticks * SCALE_FACTOR) / elapsed;
}

int PIDControl::CalcPID(int actual)
{
    int error = m_target - actual;
    double ecur = error * m_gain_error;
    double eint = m_error_total * m_gain_int;
    double edif = (error - m_error_last) * m_gain_dif;
    m_error_total += error;
    m_error_last = error;
    m_pos_last = actual;
    return (int) round(ecur + eint + edif);
}

/* In the control loop: */
int encvLeft = enc.GetEncoderValue(LEFT_WHEEL);
int encvRight = enc.GetEncoderValue(RIGHT_WHEEL);
int valLeft = ScaleMovement(encvLeft, elapsed);
int valRight = ScaleMovement(encvRight, elapsed);
int speedLeft = pidLeft.CalcPID(valLeft);
int speedRight = pidRight.CalcPID(valRight);
The only difference between the above code and a classic PID implementation is
that the elapsed time is used to factor the PID correctly.
Sure you can.
I couldn't point you to a specific book, as a lot of the knowledge comes from
many sources: some really old and dusty motor control books, an old Galil
motion controller manual, and lots of hands-on experience.
Well, if 50 Hz is enough, why do it sometimes at 400 Hz? In other words: if
your loop is stable with a 20 ms delay and with whatever filtering you have
in place, what does a 400 Hz update rate buy you? But putting that aside for a
minute, let's examine your solution:
From your example code below you seem to assume that the rotation speed
(input) was constant during the whole of the unsampled period. You can
model it like this: you have your regularly sampled input (let's say at
400 Hz, and let's call it 'S') and then you randomly replace some input
samples with their previous values (sometimes maybe more than one, up to 8, so
that you can get down to 50 Hz). Let's call the modified sample sequence 'W'.
Then you run your regular PID loop on the (now periodically sampled) input.
This model should give the same set of outputs as your original code, but
now using periodic sampling, which can be attacked by traditional methods.
You can further modify this model: instead of replacing some input samples
with the previous values, you modify them such that they match the previous
value. You can rephrase this even further, saying that your new input (W) is
your original input (S) plus another stream of values, where some samples
are non-0 - their values are selected such that you get the effect of
duplicating some values in the stream. Let's call this new stream of data 'N'.
So, now your modified input (W) is your original input (S) plus this
additional signal (N):
W = S+N
If you had a regular 400 Hz PID loop, your control signal would be its
response to S, which is PID(S), but instead you're feeding it this
modified signal (W), so your control signal will be PID(W). Since your PID
loop is an otherwise linear system, you can separate its output into
individual responses to each input:
PID(W) = PID(S + N) = PID(S) + PID(N)
Here you can see that the control signal you will be giving your motor
will contain the control that you would give it were you to sample the
original input periodically (PID(S)), plus the PID response to that
additional signal (N) that you've introduced by not sampling periodically
(PID(N)). Thus you can look at this second input stream (N) as a disturbance
to your original signal (S), also called noise.
If you knew the transfer function of your motor, you would also be able to
calculate the noise that you've introduced to the rotation speed of your
output shaft like this:
Rotation_noise = Motor(PID(N))
You feed this noise back to the input of your control loop (closing the loop
on the noise as well) giving your final noise-floor for your solution on
your output shaft:
Total_Rotation_noise = Motor(PID(N)) / (1+Motor(PID(N)))
if we disregard the transfer function of your feedback circuit.
Whether that effect is important in your scenario, you are the only one to
judge, but it will degrade your performance to a certain degree.
In other words: your minimum sampling rate is determined by the acceleration
response of your system. Which (since it is a mechanical constant) says that
there *is* a minimum acceptable sample rate, in other words a *maximum*
acceptable response time, which is - by definition - a requirement for a
real-time system - to answer your original question in the subject line. And
if you can meet the minimum requirements, doing the extra work of evaluating
your loop more often occasionally wouldn't buy you much.
Well, I guess I have to correct myself here: you can, but you have to tune
it for the worst case, in which case, again, doing the extra work wouldn't
buy you much.
Well, the 400 Hz was a strawman, but there is a real need to have a higher rate
than the worst case.
If 20 ms is my worst case deadline and my OS can typically spaz out for some
limited period of time not likely to exceed 15 ms, I can't depend on any
specific 20 ms period. Right? I can, however, depend on the system not
going away for more than 15 ms. If I try to sample every 5 ms I will never miss
my 20 ms deadline.
What "unsampled" time? There is a sample that covers an elapsed time, and
the speed over that time is assumed constant, yes. We know this to be false,
but given the nature of the gearbox, wheel, and motor, it is well within
error margins.
What? No, I read the encoder values each time. I'm not sure where you think
I am reusing encoder readings or using previous values.
Which I never do. Where do you think you saw that?
A "real-time" system has one very important quality: A deterministic and
predictable response time.
Linux or a BSD does not have this, but they do have fairly reliable
behavior. You can be fairly confident that you will be called within,
worst case, 50 ms, and usually much sooner.
By coding around the variable response time of the OS, you can do it without
a real-time system.
Not needing the cost of a micro-controller comes to mind.
You can't assume any such thing. Your process might be swapped out to
disk. It could be many hundreds of milliseconds or even seconds
before your process gets control. Or ... you reference an unmapped
page of virtual memory which causes a page fault or multiples of page
faults. There are lots of code paths in non-real-time systems that
can result in unpredictable timing.
Just because you ran it for a while and the greatest you saw was 20 ms
doesn't mean that is the largest it can be. You aren't seriously
calling that a proof, are you? Surely you know better.
I'm talking about your response time being non-deterministic. See
above. If you are unable to respond to a bumper switch because your
process is not in memory and takes a long time to swap in, you collide
with the obstacle while your motors are fully powered, rather than
braking. There's a big difference. Just to illustrate what I'm
referring to, golfers know this as follow-through on their swing -
means the difference between the ball going 400 yards vs 50 yards.
Golf swing: follow-through = good; out of control robots:
follow-through = bad.
I never said you were. I was answering your question about why motor
control is a real-time process - which you did ask, did you not? You
mentioned only one aspect of motor control - the macro level of
stopping the base or platform. There's also the actual control aspect
of driving the motor.
Yes, I remember.
In case your other method doesn't work out for some reason, you'll
need a "Plan B."
You are too.
That's not clear at all. See, there you go making assumptions that
you are accusing everyone else of making.
You've only shown that your system is operating reasonably as expected
under conditions that you have anticipated. I would certainly expect
it to be reasonably well behaved under these conditions. But what
about conditions that you haven't anticipated? Do you really think
you've exercised even a fraction of the available code paths? Have
you really loaded up several memory hungry processes to see what
happens when your systems starts swapping? Or what happens if you
cause a packet storm - i.e., a flood ping or similar? You did mention
your system will be networked. Does your kernel rate limit responses
so that it doesn't suffer from DoS? All these things can and do affect
your response time. And not just by a few milliseconds. We are
literally talking seconds in some cases.