Motorized knee" on the mill works great now!

Ignoramus1280 wrote:


Yes, it is obnoxious. Much better to provide a floating-point format with sufficient precision that can be stored exactly and read back from memory. If double isn't enough, then make a quad available. Later VAXes and the DEC Alpha had that.
Well, this is all due to decisions made years ago by Intel and Microsoft, and now all the x86 systems are saddled with this junk.
Jon

Not so fast there. In those days, memory was *very* expensive, and you just quadrupled the cost of the computer.
I don't think any major computer platform offers larger than double precision in hardware. It just wasn't worth it.
The reason to have larger arithmetic registers than memory sizes is simple:
For integers, the product of two 32-bit values is 64 bits wide. This is a mathematical property, not a bit of corner-cutting in the hardware. If one also has an integer divide that takes a 64-bit numerator, one can perform rational arithmetic without truncation; a C sketch follows below.
For floating point, it allows one to sum a large number of multiplication results without small values being nulled out by larger results.
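To make the integer point concrete, here is a minimal C sketch (my illustration, not code from any machine discussed here). Widen before multiplying and the full 64-bit product survives; multiply in 32 bits and the upper half is silently discarded:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t a = 0xFFFFFFFFu;          /* largest 32-bit value */
        uint32_t b = 0xFFFFFFFFu;

        uint64_t wide   = (uint64_t)a * b; /* full 64-bit product */
        uint32_t narrow = a * b;           /* upper 32 bits are lost */

        printf("wide   = %llu\n", (unsigned long long)wide);
        printf("narrow = %lu\n",  (unsigned long)narrow);
        return 0;
    }

This prints wide = 18446744065119617025 and narrow = 1; the cast before the multiply is what requests the 64-bit product, which is exactly the job a double-width arithmetic register does in hardware.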
The fundamental problem is that floating point is inexact: given a finite number of bits, only a finite (if large) number of real numbers can be represented, so one rounds each real number to the nearest representable value and proceeds, committing a small error in the process. As one performs arithmetic on such approximate values, those little errors accumulate, slowly eating away at the lower bits of the answer. So, to preserve the full accuracy of the final result as stored, extra bits are provided, and the eating away happens largely in bits that will later be discarded.
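A small standard-C illustration of that slow eating away (my sketch, nothing machine-specific): summing 0.1 a million times in single precision drifts visibly, while the same sum carried in double precision keeps the damage far below the digits most people look at.

    #include <stdio.h>

    int main(void) {
        float  fsum = 0.0f;
        double dsum = 0.0;

        /* 0.1 has no exact binary representation, so every addition
           rounds, and the rounding errors accumulate over a long run. */
        for (int i = 0; i < 1000000; i++) {
            fsum += 0.1f;
            dsum += 0.1;
        }

        printf("float  sum = %.4f\n", fsum);
        printf("double sum = %.4f\n", dsum);
        return 0;
    }

On typical IEEE hardware the single-precision total comes out off by hundreds, while the double-precision total is still correct to the printed digits; same algorithm, just more guard bits absorbing the rounding.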

They had very little to do with it. Not even they can make finite-precision math act like infinite-precision math.
There is a lot more to this than meets the eye. See "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David Goldberg, published in the March 1991 issue of ACM Computing Surveys. Copyright 1991, Association for Computing Machinery, Inc.
<http://download.oracle.com/docs/cd/E19422-01/819-3693/ncg_goldberg.html>
Joe Gwinn

That's not the issue. (This is a comment for both Joe and Jon.)
The issue is that the result can differ substantially depending on exactly how the compiler optimizes: whether a value was kept in a register or stored to a memory location.
This is gratuitous and very messy. It adds no value whatsoever.
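A minimal C sketch of the effect (my illustration; whether the two copies agree depends on the compiler, the optimization level, and whether x87 or SSE arithmetic is in use, which is precisely the complaint):

    #include <stdio.h>

    int main(void) {
        double a = 1.0, b = 3.0;

        double x = a / b;           /* may be held in an 80-bit x87
                                       register by some compilers */
        volatile double y = a / b;  /* volatile forces a round trip
                                       through a 64-bit memory slot */

        if (x == y)
            printf("register and memory copies agree\n");
        else
            printf("register and memory copies differ\n");
        return 0;
    }

With SSE arithmetic, or with gcc's -ffloat-store on x87, the copies typically agree; let the optimizer keep x in an extended-precision register and they may not.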
i


What's not the issue?

FP arithmetic *is* messy, especially if one wishes to get it *exactly* right. Programmers don't get to choose not to play - the arithmetic really does work that way. I've been having this debate with people for 30 years, most often when somebody discovers that x==y isn't reliable if x and y are floats, and claims that the computer is busted. Sure. If you say so.
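For anyone who hasn't hit it yet, the classic demonstration in C (a sketch, not anyone's production code):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double x = 0.1 + 0.2;   /* each literal rounds to binary,
                                   and so does the sum */
        double y = 0.3;

        if (x == y)
            printf("equal\n");
        else
            printf("not equal, x - y = %g\n", x - y);  /* ~5.6e-17 */

        /* The usual remedy: equality to within a tolerance, with the
           epsilon chosen for the problem at hand. */
        if (fabs(x - y) < 1e-12)
            printf("equal to within epsilon\n");
        return 0;
    }

Neither answer is wrong; x and y are simply two different rounded approximations of the same real number, and the tolerance test admits that.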
War story. I was running a six-man compiler group in the 1980s. One of the compilers we supported was Fortran, targeted at a military computer. The team came to me one day saying that the computer's floating-point arithmetic unit must be broken. Hmm. Really? Why do you say so? Well, the math library (Sin(), Cos(), Tan(), et al.) was flunking the standard test suite by reason of inadequate accuracy, but the code looked OK. Looked OK?
Anyway, the Sin() routine, for instance, was implemented as a rational polynomial approximation, which involved lots of multiplying and adding, and the team had traced the problem down to an FP multiply that they claimed the hardware got wrong. Now this was the product of two 32-bit floats, and nobody knew who was right. The VAXes did their FP arithmetic somewhat differently (this was pre-IEEE-754), so a direct comparison wasn't available, although it could come close. What to do?
Well, FP formats and math are not magic, and one can do the arithmetic manually if needed. So, I got a large piece of gridded paper and did longhand multiplication following the algorithm we all learned in grade school, but with a twist: the "digits" being multiplied were four-digit hexadecimal numbers, and my trusty hex calculator served as the multiplication-table crib. Simple enough, but laborious. Turned out that the computer hardware was not broken, and the team soon found the true cause.
Modern computers are far better behaved than even 20 years ago. The classic gotcha on 68000-series computers was overlapping reads and writes. The problem was that while the 680x0 CPU was 32-bit, the backplane bus was 16-bit. This meant that the unit of atomic (indivisible) update was the 16-bit halfword, not the expected 32-bit longword. So if two CPUs were accessing the same longword, one writing and the other reading, and the accesses overlapped, the reader could end up with a hybrid (old lower half spliced to new upper half) in a register; but by the time you got there to read it with a debugger, the longword was all new. One can chase one's tail on this for quite some time.
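The same hazard is guarded against today with atomic types; a C11 sketch (C11 obviously didn't exist in the 68000 era, and the names here are mine):

    #include <stdatomic.h>
    #include <stdint.h>

    /* Plain 64-bit variable: if the hardware's unit of indivisible
       update is narrower than the value, a concurrent reader can see
       half-old, half-new bits, just like the 680x0 longword case. */
    uint64_t plain_counter;

    /* C11 atomic: loads and stores are indivisible regardless of the
       width of the bus underneath. */
    _Atomic uint64_t atomic_counter;

    void writer(void) {
        plain_counter = 0x1111111122222222ull;        /* may tear */
        atomic_store(&atomic_counter, 0x1111111122222222ull);
    }

    uint64_t reader(void) {
        return atomic_load(&atomic_counter);          /* never torn */
    }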
There is far more to this stuff than meets the eye. If one wants to know more, a course in Numerical Methods will be an eye opener.
Joe Gwinn

Jon Elson wrote:

IIRC, the Motorola 68881 coprocessor also performed calculations at 80-bit extended precision, as the Intel coprocessors do.
Jon Elson wrote:

80-bit, R* floating point was implemented in 1977 by Interdata; DEC, SEL, Harris, Microdata, IBM, and others did so at about the same time. Intel didn't have anything with floating point until later, and Microsoft was still a startup.

--
Gary A. Gorgen | "From ideas to PRODUCTS"
snipped-for-privacy@comcast.net | Tunxis Design Inc.
Ignoramus1280 wrote:

I think everyone who programs should read this. It does a good job of explaining the problem. I've worked on Fortran programs in which every "if" statement contained an epsilon. Some dealt with dimensions in light-years, others in X units. This was in the '60s.

--
Gary A. Gorgen | "From ideas to PRODUCTS"
snipped-for-privacy@comcast.net | Tunxis Design Inc.
