This is probably a really stupid question. I am finishing four years of
electrical engineering, so I'm familiar with Bode plots; we've seen them
in many courses, including control systems. However, never once in my
student life has anyone told me how they're actually supposed to be used
or what they tell me about a system.
I can look at a magnitude plot and I know that a peak means there's a pole
near that frequency, but how does this plot help me in stabilising a control
system? What does the phase plot tell me? What's the difference between two
systems with identical magnitude plots but different phase plots?
I know phase difference means time delay, so does this mean a control system
with a phase lag at a certain frequency will respond slower than another
system with less phase lag?
Looking through Matlab documentation and case studies, I often see such Bode
plots and they are supposed to tell me the system is good, but I have no
idea what I'm supposed to be seeing in them.
Control systems often depend on negative feedback. Time delay or
other kinds of phase shift that increase with frequency will turn
negative feedback into positive feedback once the phase shift reaches
180 degrees, compared to the low-frequency phase. If the loop gain
has not fallen below 1 by the frequency at which that additional 180
degrees of phase shift occurs, the resultant positive feedback will
sustain or amplify an infinite series of echoes (oscillations). Bode plots
of loop gain and phase shift help you determine whether the loop gain
has fallen below 1 before the phase shifts that extra 180 degrees.
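To make that concrete, here is a plain-Python sketch (rather than Matlab, so anyone can run it); the loop transfer function K/(s(s+1)(s+2)) is just an invented textbook example, not anything from the thread:

```python
import math

def loop_mag_phase(w, K=1.0):
    """Magnitude and unwrapped phase (degrees) of an invented example
    loop gain L(s) = K / (s (s+1) (s+2)) evaluated at s = jw."""
    mag = K / (w * math.sqrt(w**2 + 1) * math.sqrt(w**2 + 4))
    # integrator contributes -90 deg; each real pole adds -atan(w/p)
    phase = -90.0 - math.degrees(math.atan(w) + math.atan(w / 2.0))
    return mag, phase

# Bisect for the frequency where the phase reaches -180 degrees ...
lo, hi = 0.1, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    _, ph = loop_mag_phase(mid)
    if ph > -180.0:
        lo = mid
    else:
        hi = mid
w180 = 0.5 * (lo + hi)

# ... and check how far the gain has fallen below 1 there (the gain margin).
mag180, _ = loop_mag_phase(w180)
gain_margin_db = -20.0 * math.log10(mag180)
print("phase crossover: %.3f rad/s, gain margin: %.1f dB" % (w180, gain_margin_db))
# prints: phase crossover: 1.414 rad/s, gain margin: 15.6 dB
```

Since the gain at the -180 degree frequency is well below 1 (about 1/6), this particular loop is stable with room to spare; raise K past 6 and the echoes start.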
Note that you can also use Bode plots with discrete-time systems:
instead of scanning s along the stability boundary (by calling it omega
and adding a j, etc., etc.), scan z around the stability boundary by
setting it equal to exp(j theta) and sweeping theta from 0 to pi.
This tells you everything about a discrete-time system that a "j omega"
Bode plot does for a continuous-time system. The only real difference
is that with a continuous-time system you can construct a Bode plot by
hand; with a discrete-time system you're better off just plotting it with a computer.
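For instance, here's a rough Python sketch of that z = exp(j theta) sweep, using an invented one-pole low-pass filter (the pole location a = 0.9 is made up for illustration):

```python
import cmath
import math

a = 0.9  # pole of an invented one-pole low-pass H(z) = (1-a)/(1 - a z^-1)

def H(z):
    """Frequency response sample of the example filter at a point z."""
    return (1 - a) / (1 - a / z)

# Walk z around the upper half of the unit circle: z = exp(j theta)
for k in range(9):
    theta = math.pi * k / 8.0
    z = cmath.exp(1j * theta)
    mag_db = 20.0 * math.log10(abs(H(z)))
    phase_deg = math.degrees(cmath.phase(H(z)))
    print("theta = %5.3f rad: %7.2f dB, %7.2f deg" % (theta, mag_db, phase_deg))
```

theta = 0 (DC) gives 0 dB, and the gain falls off toward theta = pi (the Nyquist frequency), exactly the picture a continuous-time Bode plot gives along j omega.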
Boy, this would have been a good question to ask one of your
instructors in school, during class. What do you think you were
paying them for?
You have to distinguish the type of system that you are looking at.
If you are looking at a closed-loop system, the plot typically gives
you performance information; for example, the closed-loop Bode plot of a
servo control system shows bandwidth, steady-state error (if any), and
high-frequency roll-off.
An open-loop Bode plot can be used for stability information, phase/gain
margin, and can tell you where compensation is needed.
Because there is a one-to-one relationship between the amplitude and
phase response of a minimum-phase system, you won't have identical
magnitude plots but different phase plots, unless you consider pure phase
additions, like the Pade approximation for a delay.
You have to distinguish between steady state frequency response,
and transient time response. In the steady state frequency response,
the lag appears as a delay, or shift, of the output sine relative to the
input sine. For transient time response, it depends where
the lag appears in the Bode plot and what is causing it.
To gain an understanding you'll have to go through case studies. Find
a good set of lecture notes or publications that are applied.
Things you get from a Bode plot are:
Performance--the bandwidth indicates how fast the system can move.
Accuracy--the low frequency gain indicates settling accuracy.
Stability--the margins (gain and phase) are a measure of stability.
Compensation--you can easily apply networks which alter the margins.
No. It means you have an integrator in the loop. For a sinusoidal input it
means the output sine will lag behind. If you have a derivative in the
loop, a sinusoidal input will create an output that leads the input. This
does NOT mean you are predicting the future.
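The +/-90 degree lag/lead of an integrator/differentiator falls straight out of evaluating 1/s and s on the j-omega axis; a tiny Python check (the test frequency is chosen arbitrarily):

```python
import cmath
import math

w = 2.0  # an arbitrary test frequency, rad/s

integrator = 1.0 / (1j * w)   # 1/s at s = jw: gain falls with w, phase lags
differentiator = 1j * w       # s at s = jw: gain rises with w, phase leads

print(math.degrees(cmath.phase(integrator)))      # -90.0
print(math.degrees(cmath.phase(differentiator)))  # 90.0
```

The same holds at any frequency: an ideal integrator lags the input sine by exactly a quarter cycle in steady state, and a differentiator leads it by a quarter cycle.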
When a sine voltage is applied to an inductor, the current lags. When a
sine current is applied to an inductor the voltage leads. Does this mean
that the voltage is somehow anticipating the current just like my dog runs
ahead of me? Nope. The dog only leads me when I am going in a straight
line. When I change direction she has to recover and change and then speed
up to get ahead of me again. In the same way, the voltage only leads during
steady state when terms like frequency and phase have meaning. If you
suddenly apply a sine input, the voltage will appear as a spike the instant
you apply the current. It will NOT 'lead' the current until steady state
occurs. Similarly, when the current is abruptly stopped, the voltage will
show a strong negative spike and then decay. It will not 'lead' the current
during the transition. It is not a predictor.
No, it certainly is NOT a stupid question, but a very important one.
This does not surprise me about the poor state of affairs in control
systems teaching. I'll bet your head is crammed full of all the
"useful" state space stuff like linear full state variable feedback
and observer theory though! No wonder fresh young graduate engineers
want to design adaptive/fuzzy/neural controllers using digital
anti-aliasing filters in a noise free environment when they have
missed all the basics and have never even heard of concepts like
classical loop shaping. It's high time academics roll up their sleeves
on the shop floor and actually do some practical work instead of
sitting in their offices theorizing about pole-zero cancellations,
observability etc. that they have only read about or spoken about at
conferences. If they are so smart why aren't they tackling the really
relevant problems like controlling real nonlinear MIMO physical
systems? Perhaps the real reason is that it's very difficult and not
easy to publish voluminously and quickly - no, I'm sure it's not that!
Actually it is useful. But only if you also know the down & dirty
classical stuff. I've found that in a DSP chip a state-space
implementation takes advantage of the DSP's MAC instruction and executes
very fast, so that's how I do it. But the actual _design_ for a SISO
system is all classical block diagrams & bode plots.
You're being harsh. Yes, using an adaptive, fuzzy or neural controller
where a plain ol' PID is optimal is just stupid, and so is using noise
reduction techniques in quiet environments, but such techniques _do_
have their places. One of my favorite control texts is Astrom's
"Adaptive Control", in part because he has one whole chapter examining
when adaptive control _isn't_ appropriate. Every advanced-technique
book should have such a chapter, in my opinion.
If you're complaining about kids out of grad school wanting to use the
most sophisticated methods on the simplest problems then you're just
railing against the natural tendency of Youth. Do you mean that when
_you_ were fresh out of school _you_ weren't proposing op-amps and FET
drivers when a limit switch and a relay wouldn't do?
Actually much of the current research literature _is_ about nonlinear
control, and you can publish voluminously and quickly. It's just that
the math is involved, difficult to understand, computationally intense,
and somewhat divorced from real-world effects. So it doesn't bubble
down to the real practitioner very well.
That doesn't mean that there shouldn't be more practical instruction in
control. Control theory is all well and good, the math is absolutely
beautiful (at least that's what I think of the parts that I can
understand), but real control system design has more to do with
understanding the behavior and limitations of the plants and controllers
than it does about highfalutin math.
I agree with what you say, but I think the point I was trying to make
was missed, perhaps due to poor exposition on my part.
Fuzzy/neural/adaptive control or whatever is fine and the work of
people like Karl Astrom, Graham Goodwin etc is excellent, but many new
grads are going to these advanced methods for absolutely the wrong
reasons because they have been so poorly instructed in the basics. My
experience has been that once I sit down with the new grad and explain
how classical control theory is supposed to work, then the "wow"
factor kicks in and they get enthusiasm to follow what I think is the
right path, namely start with classical and then move onto the exotic
stuff later on when and if needed.
About the latest research - the problem is that the researchers are
usually mathematical types with no practical control engineering
experience and as such they don't really understand the engineering
process nor its problems. A lot of control and signal processing
research is being hijacked by mathematicians that can't compete in the
math journals, so they look for other topics to publish in. Most of
the time their research, while very interesting, doesn't really help
the engineer battling with nasty servo nonlinearities etc.
Perhaps my harshness should rather be interpreted as a "cry for help"
on behalf of engineers working on difficult problems. We need a Henri
Poincare type of person who has a handle on nonlinear mathematics and
physical reality to come to our aid.
Hear hear. The best nonlinear control theory I have comes out of books
written in the '50s. Since it's designed to be graphical it's easy to
understand, and since it's from before computers they don't even try to
use the super-zoot math -- they just show you how to make it work from
graphical constructions on paper.
Part of the struggle with servo systems is that you can't make a silk
purse out of a sow's ear. Bode proved a theorem (in the '50s) that you
couldn't improve the overall sensitivity of a system to disturbance, you
could only push the sensitivity around. Since we usually want to push
the sensitivity _down_, and since everything conspires to limit the
control system bandwidth, this puts an upper limit on what you can do in
the real world with any given system.
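(For reference, the theorem being alluded to is usually written as Bode's sensitivity integral; as I recall it, for an open-loop stable loop gain L(s) that rolls off with at least two more poles than zeros, the sensitivity S = 1/(1+L) satisfies

```latex
\int_0^{\infty} \ln \left| S(j\omega) \right| \, d\omega = 0
```

so any dip of |S| below 1 in one band must be paid for with |S| > 1 somewhere else - hence "pushing the sensitivity around", the so-called waterbed effect.)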
Didn't Bode publish his sensitivity theorem in his 1940 paper? I know
his book of 1945 contained it.
I agree, the most useful practical stuff comes out of the ~40's servo
and circuit theory (pushed by the war effort) by many great engineers
and mathemageers (like Bode and Nyquist) working at MIT labs and "the
telephone company" etc. Good ol' describing functions and the like. I
think part of the problem is the communication gap between academia
and industry that now exists in publications due to the
"theorem-proof" style of paper that was introduced by the hijacking
mathematicians. Practicing engineers often get good papers rejected
because the journals are controlled by these "theorem-proof" guys (I
have heard some fascinating anecdotes about this!).
The important thing about control is the concept of bandwidth and, as
pointed out by Horowitz in 1959 (the first QFT paper and the
introduction of 2DOF systems) it is the "cost of feedback" in terms of
noise beyond crossover. He showed that arbitrarily large parameter
variations can be handled by LTI compensators once the price in terms
of bandwidth and its associated requirements on modelling and noise
are understood. He also threw the cat among the pigeons with his
challenge to the adaptive guys to prove that LTI controllers could not
do what adaptive ones were supposed to do - this sparked a great
debate and lots of useful research leading to the proper understanding
of "robustness" - a concept that Bode figured out years previously.
...for reminding me of the root of the reason for my present dedication
to classical methods for teaching undergrads! Your treatise could have
been an excerpt of a Control II lecture given to me in 1992 (perhaps by
you!).
In full support of the views expressed in the foregoing dialogue - my
firm belief remains, that no matter how advanced the theory and analysis
becomes, "plant is plant" and remains eternally so, as do classical
methods remain eternally valid.
My favourite final approach for a first course in control is the use of
the Nyquist criterion and Bode plots to find the simplest form of
control of all kinds of (possibly unstable) system, for various
performance/cost objectives. By all accounts, the (very old) classical
approach has worked - judging by the "ah ha's" detected, and the
resurgent interest in control-orientated 4th year projects in our School.
This has been an interesting discussion. Having myself experienced the
transition from control theory researcher to control engineering
practitioner, I fully understand the strong feelings of Fred, and
appreciate the viewpoint of Tim even more. It seems to me that "controls",
when solving real world problems, involves a lot of the field it
is being used in. This sounds obvious, but is quite deep. For example,
I know a lot of people in this newsgroup come from the field of
process control, where PID is dominant and a major control work seems
to be tuning the gains. But in automotive, a big part of the control
work is to determine the status of the system dynamics, where observer
and estimation techniques have been instrumental. I can imagine that in
some aerospace control problems, state-space methods are more feasible
for dealing with the multi-input/multi-output stuff. Also, as control algorithms are mostly
implemented as embedded software, computational aspects become more
and more important. For problems with large dimensions, it has been
proven that state-space format is more efficient and better conditioned
than transfer functions. All I am trying to say is that "control" is so
multi-faceted: it involves domain knowledge (plant), software, hardware (sensors,
actuators, processors, electronics), communication (CAN etc), and
a lot of maths (linear algebra, numerical analysis, etc). It is both a
blessing and a curse in the sense that you can work on (learn) a lot of stuff,
but have to know a lot of stuff to do your job right. Finally, I sense
the control community has been trying to "rescue" control from the "hijacking
by mathematicians" by establishing application oriented journals like IFAC's
Control Engineering Practice and IEEE's Transactions on Control Systems
Technology. Just hope the "gap" between theory and application of the
wonderful field of controls becomes smaller and smaller - by working toward
the same goal from both sides!
When I was getting IEEE's "Transactions on Control Systems Technology" I
felt that it merited "hijacked by mathematicians" status and discontinued
it (2-3 years ago). I still subscribe to "Transactions on Automatic
Control", but I probably get through one article a year.
How is the IFAC journal?
The point I am trying to argue pertains to the cognitive science
aspect of design.
When doing a home improvement project, one uses screwdrivers for screws
and hammers for nails. It is most efficient to use the tools most
appropriate for the job.
This is where I have a problem with the overemphasis of state space
methods in design. I have no problem with state space ideas
themselves, only the way they are misapplied in the design context.
State space models are descriptions of dynamical systems where the
inner structure of the system is exposed. They are very useful for
simulation purposes and have important numerical advantages over
transfer functions in analysis - for example in the computation of
balanced realizations etc.
It can be argued that having a more detailed model is a good thing and
indeed, this is true if you are a physicist, in which case a quantum
mechanical model is even better than a state space one.
While a state space model is good for ANALYZING a complex dynamical
system in order to understand its behaviour, it is normally too
detailed for the purposes of DESIGN.
This is because engineering design is all about tradeoffs and can be
regarded as the artful application of the laws of physics to
accomplish a desired goal. The laws of physics are the engineer's
raw material. However, for an engineer, there is the important idea of
model fidelity - use the model which has just the right amount of
detail to give you the answer you are looking for. This is the
"economy of representation" concept.
Because tradeoffs are so important in design, a good design technique
requires good indicators of the design quality.
Humans have incredibly powerful pattern recognition capabilities and
are able to rapidly assess tradeoffs given good visual indicators (for
example, that's why analog instruments are used in aircraft cockpits
rather than digital ones - pilots can quickly scan them, especially in
an emergency).
In control system design, a good visual indicator of performance is a
frequency response plot, as it summarizes all the salient features of
(good or bad) loop behaviour.
I think that control system design can be accurately described by
saying that the performance of a feedback loop depends entirely on the
"shape" in magnitude and phase of the loop transmission function L(s).
I believe that this statement is the heart of control system design.
Therefore, designing a good control system requires changing the shape
(the energy-frequency profile) of L(s) - a process known as "loop
shaping". The frequency domain lends itself particularly well to this
strategy, because it allows "point-by-point" design, at each frequency,
around the simple phasor equation 1 + L(s) = 0, without requiring extensive
manipulation and subsequent detailed interpretation.
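As a small worked instance of loop shaping (numbers invented, plain Python rather than Matlab): a classic lead network C(s) = (1 + s/wz)/(1 + s/wp) buys phase near crossover, and you can read its maximum boost straight off the phasor arithmetic:

```python
import cmath
import math

wz, wp = 1.0, 10.0   # invented zero/pole corner frequencies, rad/s

def lead(w):
    """Lead compensator C(s) = (1 + s/wz)/(1 + s/wp) evaluated at s = jw."""
    s = 1j * w
    return (1 + s / wz) / (1 + s / wp)

wm = math.sqrt(wz * wp)   # the maximum phase boost occurs at the geometric mean
boost = math.degrees(cmath.phase(lead(wm)))

# classical textbook result: sin(phi_max) = (wp - wz) / (wp + wz)
expected = math.degrees(math.asin((wp - wz) / (wp + wz)))

print("max phase boost: %.1f deg at %.2f rad/s" % (boost, wm))
# prints: max phase boost: 54.9 deg at 3.16 rad/s
```

Placing wm at the intended gain crossover adds about 55 degrees of phase margin there (at the cost of extra high-frequency gain, i.e. noise), which is exactly the kind of point-by-point tradeoff loop shaping makes visible.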
An important aspect of this technique is that it provides a highly
transparent indicator of the important tradeoff between complexity and
performance. The number of poles and zeros required to approximate a
desired loop shape is clearly visible in the frequency domain. (How
many people can take a state space controller matrix and evaluate its
performance at a glance?)
In addition, the vitally important concept of bandwidth, so important
with respect to noise, cost of feedback and saturation is clearly
highlighted. Such indicators are opaque or nonexistent in the state
space formulation unless a lot of extra analysis and interpretation
is performed.
State space methods have the following disadvantages in design:
*) They are strongly dimensionally dependent - a second order state
space model and a 50th order one are vastly different in complexity.
*) Convolution integrals and differential equations are the basic
tools - not easy to work with for most people.
*) NMP zeros are not readily apparent - additional analysis and
calculation is required to discover them.
*) Tradeoff indicators between performance and complexity are not
readily visible.
*) Costs of feedback, noise bandwidths etc. are absent or not
highlighted.
*) Too much extraneous detail - poor economy of representation.
Frequency domain techniques have the following advantages:
*) Point-by-point design using algebraic phasor analysis and simple
graphical constructions.
*) No strong dimensional dependence - a 1st order frequency response
and a 50th order one reveal the same information at a glance.
*) Complexity/performance tradeoffs are highly transparent, with simple
visual indicators.
*) Good graphical indicators of T(s) and S(s).
*) The right amount of detail for successful design.
State space is good for:
*) Numerical simulation of complex processes.
*) Implementation of controllers in firmware/hardware.
*) Nonlinear systems analysis.
*) Robust mathematical computation.
*) Analysis and understanding of complex behaviours.
I have met many recent grads that leave college with the idea that
state space is somehow superior to classical transfer function
methods. Also, I have seen them completely stumped with a simple
problem that becomes trivial with a simple frequency domain analysis.
Many of their professors have NEVER designed and implemented a
controller in their entire lives!
Both techniques have their place as mentioned above. However, in the
design context, state space methods obscure many important details on
the one hand, while supplying too many extraneous unimportant details
on the other. I would go so far as to say that state space methods
combined with faulty logic (especially in the use of observers) have
actually delayed the progress of research in control system design,
but that is another chapter in the saga!
State space was originally conceived to handle nonlinear systems,
where differential equations are naturally the main workhorse and
where it is the most appropriate theory.
History has vindicated the original proponents of the frequency domain
in the face of severe criticism by state space workers during the
early days. Statements by eminent researchers saying that "The Laplace
transform is dead and buried" have done much damage to the frequency
domain cause, but the critics have been proved wrong as control system
design has come full circle from the early state space LQR days in the
1960s back to the frequency domain with techniques like H_infinity.
Perhaps the great attraction of state space is the elegant mathematics
and "neat" ideas it contains. Like a siren, it draws the sailors onto
the rocks!
Now I know why more people (especially young guys as Fred rightfully
pointed out) like the state-space method than the transfer function
format. It's a siren! :-)
Control design basically involves two parts: the plant and the controller.
The better we know either part, the better we can achieve the control
objective.
Thanks, Fred and Tim, for the long and insightful posting.
Huh? Good for Nonlinear? That's a new one on me.
State space is purely a linear technique. There are some
instances where it is strained into nonlinear analysis, but I've
never seen any that were very useful. For nonlinear analysis,
they just do a robust design at one operating point, and then
schedule the results over multiple operating points.
Huh? Again you lost me. State space is just linear algebra using
vectors and matrices to solve constant coefficient, linear ordinary
differential equations. No nonlinearities there.....
I've found the UK publication: Control Engineering Practice
to be a good combination of theory and practice.
The ASME's Journal of Dynamic Systems, Measurement, and Control is
another reference with a good combination of theory and practice.
IEEE Transactions on Control Systems Technology went the way of IEEE's
Transactions on Automatic Control... no fun to read.
I have to admit that I am puzzled at the unjust bashing of the state
space representation of dynamic systems in favor of transfer functions.
BTW, while Fred outlined some disadvantages of state space approaches,
he failed to present the disadvantages of transfer functions or, as he puts
it, frequency response methods.
I don't get this one. How else can one implement it? The only two things
missing are paper and software. The latter, I guess, is
implicitly mentioned in "Numerical simulation of complex processes" above.
So, what is the point?
Here is a state space representation:
    dx/dt = f(x, u, t) + v
    y = h(x, t) + w
x is the state vector, u is the input, t is time, v and w are disturbances
or noise sources. How can you be so sure that f() and h() are linear
functions?
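To make the point concrete, here is a toy instance of that form (everything about the plant is invented): a damped pendulum, nonlinear through the sin() term, stepped with plain forward Euler:

```python
import math

def f(x, u):
    """Invented nonlinear state derivative: a damped pendulum.
    x[0] = angle (rad), x[1] = angular rate (rad/s), u = applied torque."""
    return [x[1], -math.sin(x[0]) - 0.5 * x[1] + u]

# forward-Euler simulation from a 1 rad initial swing, zero input
x, dt = [1.0, 0.0], 0.01
for _ in range(2000):          # 20 seconds of simulated time
    dx = f(x, 0.0)
    x = [x[0] + dt * dx[0], x[1] + dt * dx[1]]

print("angle after 20 s: %.4f rad" % x[0])  # damping brings it near zero
```

No linearization was needed to write down or simulate the model; it only becomes "purely linear" if you choose to replace sin(x) with x around an operating point.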
Whoever "they" are, they have the luxury of working on easier systems
than I have. While some of the nonlinearities I have dealt with can be
linearized away, friction, backlash, and actuator saturation cannot be
ignored or linearized if you want to come close to an optimum trajectory
from point A to point B -- and using a low-order nonlinear state-space
representation as shown by the OP is often the clearest way to represent
them.