It is *very* useful to specify "dimensionless" numbers with their
underlying dimensions, e.g. voltage regulation in volts per volt or, if
really good, in millivolts per volt; the latter gives a clue to a
sensitivity or gain in the system.
Also rather useful in error analysis.
No wonder. The PhDs without the PIDs are much more common than the PIDs
without the PhDs.
BTW, in theoretical physics, they use dimensionless units to
avoid the heavyweight dimensional constants:
e = c = h = 1
How about that?
No. It crashed because it is impossible to be perfect in all but the
very smallest projects. This problem is much wider than just the agreement
about the dimensions.
The beloved Microsoft style is using the so-called 'Hungarian notation'
to avoid that sort of mistake: lpSTR, HANDLE, DWORD and such.
Charles Simonyi, who is the inventor of this style, just recently made
it safely to space and back :)
C++ allows you to define explicit types like "VOLTAGE", "CURRENT" and
such, so the dumb mistakes are avoided. However, this approach is seen
by many as counterproductive and as resulting in inefficient code.
Dimensional analysis is just a trivial check to avoid a class of
simple mistakes at the low level.
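The typed-quantity idea can be sketched in any language; here is a minimal, hypothetical Python version (the class names are made up for illustration, not from any real library):

```python
# Minimal sketch of unit-bearing types: mixing up a voltage and a
# current becomes a type error instead of a silent numerical bug.

class Voltage:
    def __init__(self, volts):
        self.volts = volts

class Current:
    def __init__(self, amps):
        self.amps = amps

class Resistance:
    def __init__(self, ohms):
        self.ohms = ohms

def voltage_across(i, r):
    """Ohm's law V = I * R; rejects arguments of the wrong type."""
    if not isinstance(i, Current) or not isinstance(r, Resistance):
        raise TypeError("expected a Current and a Resistance")
    return Voltage(i.amps * r.ohms)

v = voltage_across(Current(0.003), Resistance(1000.0))
print(round(v.volts, 9))  # 3.0
```

Whether the run-time cost of the checks is worth it is exactly the dispute in the post above.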
"Wescott" would better be reserved as the name of the not yet discovered
radioactive chemical element of the halogen group. "Avins" is a
parameter of a Markov source. What could be "Grise" ?
Because it is generating so much traffic! It is trivial, so anyone can
add his two cents.
There is a method of thermodynamic potentials, which derives useful
formulae from dimensional considerations by means of differentiation
and integration. However, if you divide your phone number by your SSN, it
is not going to be very useful.
This is a basic thing, not worth mentioning, which every professional
should do automatically.
It would have crashed for some other trivial or non-trivial reason. Or at some
other time. Somebody else would have been sacrificed as a scapegoat. That's the
DSP and Mixed Signal Design Consultant
: Tim Wescott wrote:
: > So why is dimensional analysis so cool?
: Because it is generating so much traffic! It is trivial, so anyone can
: add his two cents.
: Vladimir Vassilevsky
Yes, this made me laugh. Everyone has an opinion but few have answers.
Fewer yet have right ones.
I knew someone who claimed to be a professor of theoretical nuclear
physics--- oh no, he really was --- who wildly objected
My scientific origins are in theoretical physics but I mostly worked
as a software engineer--- now lurking here to learn about DSP. What
baffled me again and again is how engineers (in Germany) approach a
problem: their question is ``where is the applicable formula''. I was
trained to ask: ``what is the mathematical model that allows me to
handle this kind of problem.''
From studying mathematical models, the physicist is probably just
much more trained to handle long symbolic calculations than the average
engineer, and from this he has a stronger ability to analyze such
formulas ``at a glance''--- w/o inserting units. To him the essential
thing is the physical quantity that is measured, like length or time,
not the units that someone happens to use.
Measured values just tell us how often some well defined object will
fit ``into'' the measured object; the sole purpose of units is to pass
along what object the one who measured happened to use for reference.
Thus I like to claim: all those units would be completely superfluous,
hadn't things been hopelessly messed up, starting at the beginning of the
world, by merchants and engineers.
Comparing any speed to the speed of light in vacuum is perfect---
might even have some advantages, when on speed-limit signs a non-zero
digit does not appear until the 9th (or so) position after the
decimal point ...
Strongly typed languages were designed to support this style of
programming. They come in two flavours: ``static typing'' and
``dynamic typing''. If you want to see powerful representatives of
both paradigms, just look at ML and Scheme.
Static typing gives you strict compile-time type checks at no run-time
overhead; dynamic typing gives you strict run-time type checks.
why would a theoretical nuclear physicist have a problem with that?
these are one example of "Natural units" (look that up in Wikipedia)
and i think these are called "electronic units" if the "h" is replaced
with "hbar". but qualitatively, they are electronic units. they can be
compared to Stoney units (but i think Stoney normalizes G instead of
hbar), Planck units (but Planck normalizes G instead of e), or Atomic
units (which normalize the mass of the electron instead of c).
personally i like Planck units the best because they have no prototype
particle, object, or "thing" that needs to be sorta arbitrarily chosen
as a base unit. it's better, in my opinion, to normalize these
parameters of free space, before introducing any special particles or
objects, to define your base Natural units. i wish Planck had
normalized 4*pi*G instead; that would have been more natural.
other good Wikipedia articles that are related are:
Dimensionless physical constant
i had something to do with those articles, including Dimensional
That is cool you had some input on those articles. If you recall I use
atomic units all of the time. There are two variations. One uses a
Rydberg and the other uses a Hartree for the fundamental energy unit.
The main reason for using these is the Schrodinger equation becomes
quite simple in appearance. A factor of two shows up in the energy
term. So that is why there are two variations. The distance becomes
scaled to Bohr radii - the mean distance to the electron in the ground
state of Hydrogen. And 1 Ry is the binding energy.
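For reference, the factor of two the poster mentions is visible when the hydrogen Schrödinger equation is written in each unit system (standard textbook forms, not taken from the post):

```latex
% Hartree atomic units (hbar = m_e = e = 4*pi*epsilon_0 = 1):
\left( -\tfrac{1}{2}\nabla^2 - \frac{1}{r} \right)\psi = E\,\psi
% Rydberg atomic units (energies in Ry, lengths in Bohr radii):
\left( -\nabla^2 - \frac{2}{r} \right)\psi = E\,\psi
```

The hydrogen ground-state energy is -1/2 in Hartree units and -1 in Rydberg units, consistent with 1 Hartree = 2 Ry.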
what i recall Clay, is that it was YOU that first told me the name for
Planck units, and i thank you profusely. because you did that, i was able to do web
searches, found papers/web_sites/books by Michael Duff, Gabriele
Veneziano (pioneers of string theory), Lev Okun, John Baez, and John
Barrow and i've had several really neat email conversations with ALL 5
of these guys about the nature of fundamental physical constants (the
only ones that count are the dimensionless ones, any notions of a
"varying c" or "varying G" are not even wrong, they're meaningless,
and for those who can't see that, just think of everything measured in
Planck units). and then later i got into Gravito-electro-magnetism
(GEM) a little and some interest in the Gravity Probe B (which *still*
hasn't been able to conclusively say that frame-dragging or
gravitomagnetism or gravity waves can be measured, they are behind
none of that fun would have happened if you hadn't done that for me
nearly a decade ago. thanks.
discovered a word for that, too: "Nondimensionalization." there's a
wikipedia article on that also. i may have done a minor edit to that.
doesn't normalizing the Rydberg constant have a similar effect to fixing
the Bohr radius (with another dimensionless alpha tossed in)?
I'm glad my giving a name for Planck units put you on an interesting
course. I recall reading many years ago about Paul Dirac exploring the
ideas behind Planck units. And that is how I knew about them.
Studying Hydrogen has yielded some very interesting info. Balmer
empirically put together a formula that fits the spectral lines of
Hydrogen, but it offered no theory.
Bohr with his theory of the atom gave a theoretical basis for Balmer's
formula. But spectroscopists soon discovered a slight flaw in that
some of the energy levels had finely spaced details. A spectral "line"
under close observation turned out to be 2 or more lines very close
together.
Sommerfeld found by adding special relativity and elliptical orbits to
the theory, that he was able to explain the fine splitting (fine
structure) of the spectral lines. Bohr was of course ecstatic that his
theory was saved. Sommerfeld introduced the notion of the fine
structure constant, which in many older books is called the
"Sommerfeld fine structure constant", but now many have dropped
Sommerfeld's name - what a shame.
Alpha, as the constant is usually denoted, was discovered to be a
dimensionless number which may be expressed as (e*e)/(2*epsilon_0 *
h*c) and "e" is the charge of the electron, h is Planck's constant, c
is the speed of light in a vacuum, and epsilon_0 is the permittivity
of free space. We know alpha to be basically = 1/(137.0359895...). The
fact that it is much smaller than one allows perturbations to simple
systems to be expanded in series involving successive powers of alpha.
These series may be found by using Feynman diagrams.
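Plugging CODATA-style SI values into the formula quoted above reproduces that number; a quick Python check:

```python
# Check alpha = e^2 / (2 * epsilon_0 * h * c) with SI values of the
# constants (2018 CODATA / SI-redefinition values).
e = 1.602176634e-19           # elementary charge, C
epsilon_0 = 8.8541878128e-12  # permittivity of free space, F/m
h = 6.62607015e-34            # Planck's constant, J*s
c = 299792458.0               # speed of light in a vacuum, m/s

alpha = e**2 / (2 * epsilon_0 * h * c)
print(round(1 / alpha, 3))  # 137.036
```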
Even though Sommerfeld discovered alpha with his work on Hydrogen, it
has turned out to be one of the most important fundamental constants
with consequences far beyond atomic physics. It shows up in many
places such as in the interaction of E-M fields with electrons and in
nuclear and particle physics.
In terms of atomic theory, the Bohr radius may be written as
(4pi*epsilon_0*h_bar^2)/(m*e^2), where m is the mass of the electron,
e is its charge, epsilon_0 the permittivity of free space, and h_bar
is Planck's constant h divided by 2pi.
A Hartree is simply alpha^2 * m*c^2 (27.2116 electron volts) and a
Rydberg is half of a Hartree. Again "m" is the mass of the electron
and c is the speed of light. Douglas Hartree did a lot of work in
computational atomic physics, and he gave us "atomic units." He even
built analog computers from a kid's toy called "Meccano" to solve
atomic physics calculations back before WW2. [Meccano (UK) is a toy
very much like the Erector Set in the U.S.]
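The Bohr-radius and Hartree formulas given above can likewise be checked numerically (CODATA-style values; a sketch, not from the post):

```python
import math

# Numerically check a0 = 4*pi*eps0*hbar^2/(m*e^2) and
# Hartree = alpha^2 * m * c^2 from the formulas quoted above.
h = 6.62607015e-34            # Planck's constant, J*s
hbar = h / (2 * math.pi)
c = 299792458.0               # speed of light, m/s
e = 1.602176634e-19           # elementary charge, C
epsilon_0 = 8.8541878128e-12  # permittivity of free space, F/m
m = 9.1093837015e-31          # electron mass, kg

a0 = 4 * math.pi * epsilon_0 * hbar**2 / (m * e**2)   # Bohr radius, m
alpha = e**2 / (2 * epsilon_0 * h * c)
hartree_eV = alpha**2 * m * c**2 / e                  # joules -> eV

print(round(a0 * 1e11, 4))   # 5.2918  (i.e. a0 ~ 5.2918e-11 m)
print(round(hartree_eV, 4))  # 27.2114
```

Halving the Hartree gives the 13.6 eV Rydberg, the hydrogen binding energy mentioned earlier.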
I've been working on a new algorithm to solve the Hartree-Fock
equations. So far the results have been quite encouraging. I.e., more
accuracy with less computation.
p.s. A book you may find interesting is "Universal Constants in
Physics" by Gilles Cohen-Tannoudji, 1993 McGraw-Hill. The parts on "h"
& "k" are interesting with "h" being the quantum of action and "k"
being the quantum of information.
i remember studying this in 3rd semester physics. from the NIST site,
they were saying that the very first occurrence of alpha that
Sommerfeld had was that it came out to be the ratio of the speed of the
electron in the lowest shell of the Bohr atom to the speed of light in
a vacuum.
check NIST, there are new 2006 CODATA:
1/alpha = 137.03599956 +/- something
2002 CODATA had it at 137.03599911.
this mathematician, James Gilson (i had an email conversation with him,
too) says he has some theory that calculates alpha to be
alpha = cos(pi/137)/137 * tan(pi/(29*137))/(pi/(29*137)) .
it comes out almost within one standard uncertainty of the latest
accepted value. it might be just numerology. i dunno.
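Gilson's expression is easy to evaluate; a quick check of the formula as quoted above:

```python
import math

# Evaluate Gilson's closed-form expression for alpha:
# alpha = cos(pi/137)/137 * tan(pi/(29*137))/(pi/(29*137))
x = math.pi / (29 * 137)
alpha_g = math.cos(math.pi / 137) / 137 * math.tan(x) / x
print(round(1 / alpha_g, 4))  # 137.036
```

Indeed, 1/alpha_g lands within about 1e-5 of the CODATA values quoted in the post.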
it turns out that sqrt(alpha) is the ratio of the elementary charge to
the Planck charge and that's how i like to look at it. i like to
think that alpha takes on the value it does because of the amount of
charge, measured in Natural units, that nature has bestowed upon the
electron, proton, and positron. (what are the other charged
because i think that it would be more natural to normalize 4*pi*G and
epsilon_0 (instead of what Planck did, normalizing G and
4*pi*epsilon_0), then the elementary charge measured in these more
natural Planck units would be
sqrt(4*pi*alpha) = 0.30282212
and THAT is the number i think that theoretical physicists should be
putting up on their walls. that dimensionless number is the charge of
the electron measured in the most natural units that are defined solely
by normalizing the parameters of free space, without any use of a
prototype object, particle, or "thing". and alpha results from that.
at least this is my armchair physics opinion.
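That number follows directly from the CODATA value of alpha:

```python
import math

# Elementary charge in "rationalized" Planck-style units:
# q = sqrt(4*pi*alpha), with alpha from CODATA 2018.
alpha = 7.2973525693e-3
q = math.sqrt(4 * math.pi * alpha)
print(round(q, 6))  # 0.302822
```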
sure, given a geometry or constellation of charges, all made up from
some given integer number of fundamental charged particles, the force
between any pair of charges, measured in natural units, is
proportional to e^2 which is proportional to alpha. increase alpha by
5% and the EM force has also increased by 5% (relative to the other
sounds like a real physics text since it is McGraw-Hill. i have the
Barrow book for "light" reading.
This reminds me of something that has occurred to me in the past, and
that I would like to see if people here agree.
It seems to me that in calculations physicists usually give variables
quantities with dimensions, where engineers usually factor out the
dimensions. For example,
A physicist might say:
F = m a, where the variable m might have the value 3kg or 5g.
An engineer might say:
F(Newtons) = m(kg) * a (m s**-2), such that m has the value 3 or 0.005.
That is, the dimensions belong to the equation, but not to variables.
It might be because most programming languages don't keep units with
variables, so that one must factor them out before assigning a value
to a variable.
I would be interested to see if others agree or disagree.
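A language certainly *can* keep units with variables; a minimal, hypothetical sketch of a value that carries its SI dimension exponents along with its number:

```python
# A value that carries its SI dimensions as exponents of (kg, m, s).
# Multiplying values adds the exponents, so F = m * a comes out in
# kg*m/s^2 (newtons) automatically. (Hypothetical minimal sketch.)

class Qty:
    def __init__(self, value, dims):
        self.value = value
        self.dims = dims          # (kg_exp, m_exp, s_exp)

    def __mul__(self, other):
        return Qty(self.value * other.value,
                   tuple(a + b for a, b in zip(self.dims, other.dims)))

mass = Qty(3.0, (1, 0, 0))        # 3 kg
accel = Qty(9.8, (0, 1, -2))      # 9.8 m/s^2
force = mass * accel
print(round(force.value, 6), force.dims)  # 29.4 (1, 1, -2)
```

With this style the dimensions belong to the variables, the physicists' way, rather than being factored out into the equation.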
I think that's true--engineers are typically taught to pull out all
those fundamental factors of hbar, 4*pi/c**2, and so on, crunching them
all into some anonymous constant to save labour and blunders. When you
do that, you have to keep track of the units by hand, whereas in the
physicists' method, the units get carried along automatically.
In fact, people doing relativistic field theory usually use c=1 style,
in which all the calculations are done with just numbers, and the proper
conversion factors get figured out at the end, by reverse dimensional
analysis--sticking in the right powers of c, hbar, G and so on to make
the units come out right. It turns out that this is a well-defined
procedure, so it saves labour and you still get the right answer at the
end. (I'm not a field theorist, so I've never done this.)
In that example, you showed kilograms or grams -- I grew up with thousands
prefixes, so I change them on the fly as it suits me. A 1k resistor has
always been, and always will be, a 1000 ohm resistor as well, and not a 1 *
(1000 / 1k) resistor. :^P
"Librarians are hiding something." - Stephen Colbert
Website @ http://webpages.charter.net/dawill/tmoranwms
i disagree, i guess. when i was teaching EE classes (circuits) and as
a grad student in EE, i would be pretty hard on any engineering
student that did it the latter way. i think Jerry's little proverb
said it best: "Mathematicians routinely ignore units, but engineers do
so at their peril."
the cumbersome method you ascribed to engineers, Glen, is almost as
bad as ignoring the units (because your root algebraic equation is
dimensionless). in the latter case, F = ma is actually expressed as
the dimensionless F is the product of the dimensionless m and the
dimensionless a, as long as you express all three in their base SI
units.
It isn't ignoring units, but more like tunneling then. One must
convert given quantities to the appropriate units before applying
the equation. One can then use the numbers with a slide rule or
calculator, and apply the appropriate units to the result.
When writing down the process, what is usually called "show your work"
all units will still be shown.
I probably overemphasized the difference, but yes, the algebraic
equation is dimensionless, but so are all calculators that I know of.
The problem with the way I called the physics way is that it can
result in unusual or inconsistent units. One might end up with
an acceleration in meters/second/hour, for example, or worse.
For the engineer way, one converts all given units to those
specified, does the algebra keeping the units (except when
entering them into a calculator), and then converts the result
to the desired unit.
Though the division probably isn't all that strict, and it wouldn't
surprise me if many EE's used the physics way.
sure. as noted by sombuddy else, you can program computers
(especially with C++ or some other OOP) to attach a unit from a known
list to the numerical quantity. that way we conceptually have a
dimensionful quantity in the computer.
this is what conversion factors are for. these conversion factors,
like (0.3048 m/ft) are dimensionless (even though they have units
inside the expression), in fact should be the dimensionless 1 so that
multiplying or dividing by it changes nothing.
but the requirement is that all units chosen must be consistent. the
so-called "physics" way you can actually accelerate a pound mass by
(mi/hr)/s and have a meaningful equation (and an unusual unit of
force) naturally pop out.
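A numerical illustration of the pound-mass example above, with the conversion factors (each a dimensionless 1) applied explicitly; both factors are exact by definition:

```python
# A 1-lb mass accelerated at 1 (mi/hr)/s yields a force in the unusual
# unit lb*(mi/hr)/s; multiplying by two conversion factors -- each
# equal to the dimensionless 1 -- converts it to newtons.

LB_TO_KG = 0.45359237    # (0.45359237 kg / 1 lb), exact by definition
MPH_TO_MPS = 0.44704     # (0.44704 (m/s) / 1 (mi/hr)), exact by definition

force_lb_mph_per_s = 1.0 * 1.0   # 1 lb times 1 (mi/hr)/s
force_newtons = force_lb_mph_per_s * LB_TO_KG * MPH_TO_MPS
print(round(force_newtons, 6))  # 0.202774
```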
i guess i'm a partisan for that method. almost to the point of
i'm getting less tolerant in my middle age. (Jerry, does it get worse
or better as we get older?)
> dunno what that means.
A large calculation can be made up of many small steps.
If one computes intermediate results on a calculator,
one can attach the appropriate units onto the intermediate
result from the calculator. The numbers go through the
calculator, the units go around and are attached appropriately
onto the result. Usually the steps will be small enough not
to lose track of the units.
I used to know of a computer based physics teaching system
using PLATO: http://en.wikipedia.org/wiki/PLATO
For physics problems the user was expected to enter an answer
with units. As I understand it, each unit was given a numerical
value, and the resulting expression was evaluated and expected
to be close to the correct answer. As an example, m (for meter)
might be 123.45, cm would then be 1.2345, in (inch) 3.13563, etc.
Once for a problem expecting velocity units I entered erg**0.5 g**-0.5
(I forget the actual exponential operator), and the answer was
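The PLATO scheme described above can be sketched like this (all names and values hypothetical): each base unit gets a numerical value, so a velocity answer entered as erg**0.5 * g**-0.5 evaluates to the same number as cm/s and is accepted.

```python
import math
import random

# Sketch of the PLATO-style check: each base unit is assigned a
# (secret) numerical value, derived units are built from them, and a
# student's expression is accepted if it evaluates close to the
# expected number.

random.seed(1)
m_val = random.uniform(1, 100)   # value standing in for "meter"
s_val = random.uniform(1, 100)   # ... "second"
g_val = random.uniform(1, 100)   # ... "gram"

cm = m_val / 100.0
erg = g_val * cm**2 / s_val**2   # erg = g * cm^2 / s^2

# erg**0.5 * g**-0.5 has the dimensions of cm/s, so it evaluates to
# the same number as cm/s and the checker accepts it:
assert math.isclose(erg**0.5 * g_val**-0.5, cm / s_val)
```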
It is ChE that uses pound for mass, and adds another constant into the
equations to make them consistent. I used to work in a lab with ChE
people, with many experiments related to absorption or emission spectra.
I once did an emission spectrum in BTU/pound mole, a unit that only
ChE would use. (A pound mole, similar to the more common gram mole,
is the amount of some substance such that its mass in pounds equals
its molecular weight in Daltons.)