Motorized knee" on the mill works great now!

When I was programming process control systems, I avoided using "=" as a condition for any action. Much more reliable to use ">". Like "close tank filling valve 1 when level > x". Make the test "=" and the tank's going to run over. Probably same reason as above.
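A minimal sketch of that rule in C, with hypothetical names (LEVEL_SETPOINT and close_valve are placeholders, not from the post):

    #include <stdio.h>

    static void close_valve(int id) { printf("valve %d closed\n", id); }

    #define LEVEL_SETPOINT 80.0   /* tank level, percent */

    /* Trip on crossing the threshold, not on exact equality: a sampled
       level can step right past the setpoint without ever equaling it,
       and "level == LEVEL_SETPOINT" would let the tank run over. */
    static void control_step(double level)
    {
        if (level > LEVEL_SETPOINT)
            close_valve(1);
    }

    int main(void)
    {
        control_step(80.0001);    /* just past the setpoint: valve closes */
        return 0;
    }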

Pete Keillor

Reply to
Pete Keillor

Yes, it is obnoxious. Much better to provide a floating-point format with sufficient precision that can be stored to memory and read back exactly. If double isn't enough, then make a quad available. Later VAXes and the DEC Alpha had that.

Well, this is all due to decisions made years ago by Intel and Microsoft, and now all the X86 systems are saddled with this junk.

Jon

Reply to
Jon Elson

Not so fast there. In those days, memory was *very* expensive, and you just quadrupled the cost of the computer.

I don't think any major computer platform offers larger than double precision in hardware. It just wasn't worth it.

The reason to have larger arithmetic registers than memory sizes is simple:

For integer, the product of two 32-bit values is 64 bits wide. This is a mathematical property, not a bit of corner-cutting in the hardware. If one has an integer divide that takes a 64-bit numerator, you can perform rational arithmetic without truncation.
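A quick C illustration (mine, not from the post) of why the full product needs 64 bits:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Widen before multiplying so no bits of the product are lost:
           (2^32 - 1)^2 needs the full 64 bits. */
        uint32_t a = 0xFFFFFFFFu, b = 0xFFFFFFFFu;
        uint64_t p = (uint64_t)a * b;       /* 0xFFFFFFFE00000001 */
        printf("%llx\n", (unsigned long long)p);
        return 0;
    }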

For floating point, it allows one to sum a large number of multiplication results without small values being nulled out by larger results.

The fundamental problem is that floating point is inexact - given a finite number of bits, only a finite if large number of real numbers can be represented, so one rounds real numbers to the nearest representable value and proceeds, committing a small error in the process. As one performs arithmetic on such approximate values, those little errors accumulate, slowly eating away at the lower bits of the answer. So, to preserve the full accuracy of the final result as stored, extra bits are provided, so the eating away happens largely in bits that will later be discarded.
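A small C demonstration of that erosion, assuming IEEE-754 doubles:

    #include <stdio.h>

    int main(void)
    {
        /* 0.1 has no exact binary representation, so every addition
           commits a tiny rounding error, and the errors accumulate. */
        double sum = 0.0;
        for (int i = 0; i < 10; i++)
            sum += 0.1;
        printf("%.17g\n", sum);         /* 0.99999999999999989, not 1 */
        printf("%d\n", sum == 1.0);     /* 0: the comparison fails */
        return 0;
    }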

They had very little to do with it. Not even they can make finite-precision math act like infinite precision math.

There is a lot more to this than meets the eye. See "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David Goldberg, published in the March 1991 issue of ACM Computing Surveys. Copyright 1991, Association for Computing Machinery, Inc.

Joe Gwinn

Reply to
Joseph Gwinn

That's not the issue. (this is a comment for both Joe and Jon).

The issue is that, depending on exactly how the compiler optimizes the code, the result can differ substantially depending on whether a value was kept in an (80-bit) register or stored to a (64-bit) memory location.

This is gratuitous and very messy. It adds no value whatsoever.
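A sketch of that surprise in C - whether it fires depends on target, compiler, and flags (say, gcc -m32 -mfpmath=387 versus -mfpmath=sse), so treat it as illustrative, not guaranteed:

    #include <stdio.h>

    int main(void)
    {
        double a = 1.0, b = 3.0;
        double c = a / b;   /* may be rounded to 64 bits when stored to memory */
        if (c != a / b)     /* the recomputed quotient may still carry 80 bits */
            printf("same expression, two answers\n");
        return 0;
    }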

i

Reply to
Ignoramus10412

What's not the issue?

FP arithmetic *is* messy, especially if one wishes to get it *exactly* right. Programmers don't get to choose not to play - the arithmetic really does work that way. I've been having this debate with people for 30 years, most often when somebody discovers that x==y isn't reliable if x and y are floats, and claims that the computer is busted. Sure. If you say so.
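One common workaround, sketched in C with names of my own choosing (nearly_equal is not a standard function) - and picking the tolerance is itself a numerical-methods question:

    #include <math.h>

    /* Compare within a relative tolerance instead of testing x == y.
       Scaling by the larger magnitude makes the tolerance relative,
       so it works for big and small values alike. */
    int nearly_equal(double x, double y, double rel_tol)
    {
        double scale = fmax(fabs(x), fabs(y));
        return fabs(x - y) <= rel_tol * scale;
    }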

War story. I was running a 6-man Compiler group in the 1980s. One of the compilers we supported was Fortran, targeted on a military computer. The team came to me one day saying that the computer's floating-point arithmetic unit must be broken. Hmm. Really? Why do you say so? Well, the math library (Sin(), Cos(), Tan(), et al) was flunking the standard test suite by reason of inadequate accuracy, but the code looked OK. Looked OK?

Anyway, the Sin() routine, for instance, was implemented as a rational polynomial approximation, which involved lots of multiplying and adding, and the team had traced the problem down to a FP multiply that they claimed the hardware got wrong. Now this was the product of two 32-bit floats, and nobody knew who was right. The VAXes did their FP arithmetic somewhat differently (this was pre-IEEE-754), so a direct comparison wasn't available, although it could come close. What to do?
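For flavor, here is a minimal polynomial sine in C - not the team's actual code, and using plain Taylor coefficients where a production library would use minimax ones:

    #include <stdio.h>

    /* sin(x) on roughly [-pi/2, pi/2] as a polynomial, evaluated by
       Horner's rule: x - x^3/3! + x^5/5! - x^7/7! + x^9/9!. Every
       multiply and add commits its own rounding error, which is why
       accuracy testing of such routines is so fussy. */
    static double sin_poly(double x)
    {
        double x2 = x * x;
        return x * (1.0 + x2 * (-1.0 / 6.0 + x2 * (1.0 / 120.0
                   + x2 * (-1.0 / 5040.0 + x2 * (1.0 / 362880.0)))));
    }

    int main(void)
    {
        printf("%.15f\n", sin_poly(0.5));   /* sin(0.5) = 0.479425538604203 */
        return 0;
    }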

Well, FP formats and math are not magic, and one can do the arithmetic manually, if needed. So, I got a large piece of gridded paper, and did longhand multiplication following the algorithm we all learned in grade school, but with a twist - the "digits" being multiplied were four digit hexadecimal numbers, and my trusty hex calculator served as the multiplication-table crib. Simple enough, but laborious. Turned out that the computer hardware was not broken, and the team soon found the true cause.

Modern computers are far better behaved than even 20 years ago. The classic gotcha on 68000-series computers was overlapping reads and writes. The problem was that while the 680x0 CPU was 32-bit, the backplane bus was 16-bit. This meant that the unit of atomic (indivisible) update was the 16-bit halfword, not the expected 32-bit longword, so if two CPUs were accessing the same longword, one writing and the other reading, and they overlapped, one could end up with a hybrid (old lower half spliced to new upper half) in a register, but by the time you got there to read it with a debugger, the longword was all new. One can chase one's tail on this for quite some time.
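The hazard, sketched in C (illustrative only - the torn read happened at the bus level, below anything the C language itself promises):

    #include <stdint.h>

    volatile uint32_t shared;   /* written by one CPU, read by another */

    /* On hardware whose atomic unit is the 16-bit halfword, this
       single 32-bit read can take two bus cycles, so a concurrent
       writer can leave it returning the old low half spliced to
       the new high half. */
    uint32_t reader(void)
    {
        return shared;
    }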

There is far more to this stuff than meets the eye. If one wants to know more, a course in Numerical Methods will be an eye opener.

Joe Gwinn

Reply to
Joseph Gwinn

IIRC the Motorola 68881 coprocessor also performed calculations at 80-bit extended precision, as the Intel coprocessors do.

Reply to
David Billington

80-bit R* floating point was implemented in 1977 by Interdata. DEC, SEL, Harris, Microdata, IBM, & others did the same at about the same time. Intel didn't have anything with floating point 'til later, & Microsoft was still a startup.
Reply to
Gary A. Gorgen
