I do not think I have ever been to your house... <Smile> If that is an
invite, then sure I would like to see one.
BTW: Windows 3.0 came out when AT 286's were in vogue. I did not have a
chance to notice anything other than the OS.
Art, one of my clients is running his 2 engraving machines by using 2 Tandy
1000 machines. I just had to try and find a 5 1/4" disk drive and install it
for him. They are still going and still running his machines.
I learned BASIC on a Tandy 1000. And Star Trek too...
Yep, if something works don't screw with it. Not to mention that he would
have to spend some REAL money to switch out the Interface if he switches to
another computer controller. Might be cheaper to purchase a new engraving machine.
Well, Dan, if that is where you are stuck, you definitely have the wrong
software for your system. If you are on a Mac, your options are limited; if
you are on a PC, drop me an email and I will help get you the correct
software so that you can do your necessary tasks without your current frustration.
It is just a question of the correct tools. You may know what you want to
do, but not HOW to do it. The tools on the Windows platform run the full
range, from cheap/free utilities that may or may not do what they claim to
do, through the consumer level and pro-sumer level, all the way to full
professional packages.
No worries, Dan, I can help get you productive.
My point is that 'clock rate' is a very poor measure of computer
performance, but that's all that most people ever look at. It doesn't matter
how fast it runs if it can't do the job efficiently, or at all.
Many things affect overall speed. Bus speed, hard drive speed, various
latencies, and, above all, how well the software is written (not to
mention the manuals). That especially includes the operating system.
"Ease of use" of both hardware and software is critical to meaningful
use of the computer. It's not how many 'cycles' the computer is
clocking, but how much work is actually getting done.
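To put rough numbers on that, here is a back-of-the-envelope sketch (plain C, with made-up machine names and figures purely for illustration) using the usual execution-time relation, time = instructions x cycles-per-instruction / clock rate. The point is that the lower-clocked machine can still finish first if it wastes fewer cycles per instruction:

    #include <stdio.h>

    /* Illustrative only: execution time = instructions * CPI / clock rate.
     * The machines and the numbers below are invented for the example.     */
    int main(void) {
        double instructions = 1.0e9;          /* same program on both machines  */

        double clock_a = 3.0e9, cpi_a = 2.0;  /* faster clock, more stalls      */
        double clock_b = 2.0e9, cpi_b = 1.0;  /* slower clock, better per cycle */

        printf("Machine A: %.3f s\n", instructions * cpi_a / clock_a);  /* ~0.667 s */
        printf("Machine B: %.3f s\n", instructions * cpi_b / clock_b);  /* ~0.500 s */
        return 0;
    }

And that still leaves out everything else mentioned above: bus and disk speed, latencies, crashes, and the time lost fighting the machine.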
If the computer isn't running properly, for whatever reason, it's NOT
'producing' efficiently, if at all. Dependability is a BIG factor in
computer 'speed'. A poor power supply that causes intermittent crashes
and glitches definitely affects overall computer 'speed'. Both hardware
and software 'support' also enter into the overall picture. Time spent
on the phone, or on line, seeking assistance, is WASTED time.
Software issues, other than those imposed by the operating system, are
not really the fault of the hardware. Those imposed BY the operating
system ARE the fault of the hardware, IF the hardware requires use of a
poor or inappropriate operating system. Much existing hardware can run
more than one operating system, within limits.
And some software is either unique to particular platforms, or runs much
better on those platforms. The hardware that will run that software
effectively will beat ANY other machine at that task, regardless of
clock speed. That's why certain machines, running certain software
(Mac, Windows, Unix, Cobol, Fortran, or whatever), virtually OWN certain
markets.
" snipped-for-privacy@CreditValley.Railway" wrote:
On 4/6/04 8:46 AM, in article email@example.com, "Daniel A.
Speaking of which, I have heard (rumors?) that Intel's new series of processors
will de-emphasize clock speed, as they are moving towards a RISC-based design.
RISC processors do not require the same clock speeds to perform the same
work as CISC processors; a point Apple has tried to make for many years now.
If true, this means that Macs and PCs can be compared even more closely, point by point.
That's way/weigh/whey over simplified. You are still doing the same work,
but you are doing it with a set of simpler instructions. Usually, the
memory references are decoupled from the arithmetic and logical operations.
You end up loading and storing as independent operations and do a series
of register to register operations in between. ... ok, that's still over
simplified and just one aspect of the appeal of RISC.
It puts a greater burden on the compilers to generate reasonable code.
In any case, pipeline length has a much more direct impact upon the
potential clock speed than whether the instruction set is simple or complex,
as does the type of instruction set somewhat upon the pipeline (ergo the
How, exactly, does it do that? I have spent a fair amount of time working
in assembly for certain critical sections of code and I can assure you that
you're not likely to beat a modern compiler no matter how good you are (the
Intel compiler, for example). As for RISC allowing a coder/compiler to
optimize for a pipeline more directly, that doesn't seem to make sense since
the optimization process would be roughly the same between the two competing
instruction set types.
People seem to think that Intel embraces CISC more than RISC because they're
either pigheaded or stupid; they're neither. Both routes have merits, and
neither is an 'all-encompassing' better approach than the other.
I completely agree, but it doesn't stop people from trying.
How do you optimize the complex instruction that:
adds one to a memory location,
tests the memory location's value,
and branches if a condition is met?
It's a lot easier to optimize:
load memory location Y to register X
add one to register X
store register X back to memory location Y
test register X for condition (like zero/non-zero)
You can intermix that with other instructions (inside what might be a for
loop, for instance) far more freely than you can with the single
increment-and-test instruction.
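To make that concrete, here is a small C sketch of the kind of loop being discussed. The instruction sequences in the comments are rough, hand-written illustrations (an x86-style increment-memory instruction versus a generic load/store sequence), not the output of any particular compiler:

    #include <stdio.h>

    int main(void) {
        int count = 0;            /* "memory location Y" from the example above */

        for (int i = 0; i < 10; i++) {
            count++;              /* CISC-ish (x86): inc dword ptr [count]                  */
                                  /* RISC-ish (load/store), roughly:                        */
                                  /*   load  r1, [count]   ; load Y into register X         */
                                  /*   add   r1, r1, 1     ; add one, register to register  */
                                  /*   store [count], r1   ; store X back to Y              */
            if (count == 5)       /* test and branch; on the load/store side the value      */
                break;            /*   is already sitting in r1 for the test                */
        }

        printf("count = %d\n", count);   /* prints 5 */
        return 0;
    }

The compiler (or the hardware scheduler) can slip other independent instructions between that load and store; with the single increment-memory instruction there is nothing to interleave.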
Memory accesses probably take longer than the inter-register operations,
and if your register bus is available while that memory reference is
taking place, you can exploit any inherent parallelism in the pipelining
architecture ... if it exists.
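For instance, here is a minimal sketch (again plain C, illustrative only) of the sort of thing a compiler does to exploit that overlap: keeping two independent running sums so the add for one element can go through the registers while the load for the other is still outstanding:

    #include <stdio.h>

    /* Illustrative only: two independent accumulators give the compiler and
     * the pipeline independent register work to overlap with pending loads. */
    static long sum_array(const int *a, int n) {
        long s0 = 0, s1 = 0;
        int i;
        for (i = 0; i + 1 < n; i += 2) {
            s0 += a[i];        /* while this add is in flight...                  */
            s1 += a[i + 1];    /* ...the load for the other accumulator can issue */
        }
        if (i < n)             /* odd-length tail */
            s0 += a[i];
        return s0 + s1;
    }

    int main(void) {
        int data[] = {1, 2, 3, 4, 5, 6, 7};
        printf("sum = %ld\n", sum_array(data, 7));   /* prints 28 */
        return 0;
    }

Whether that actually buys you anything depends on the particular pipeline, which is the "... if it exists" part.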
Uhhh ... ok?
Agreed, but I believe the RISC approach is more general and more easily
applicable to a wider range of problems. It certainly isn't a magic bullet, though.
Engineers designing a chip, in my experience, seem to favor the RISC
approach since the instructions are simpler and easier to verify.
Lol, sadly true. Usually young people thinking "I've got to write code like
I saw in DOOM's source..."
You do realize that the pipeline isn't just a pipe that stuff goes through,
right? I mean, there are branch target buffers, look ahead adders, PATH
optimization hardware, et cetera. All for precisely that reason.
Not in the last 10 years ;). Then add in the fact that RISC means more
instructions per operation. Basically, RISC and CISC are comparable overall
in efficiency for now. Ultimately, Intel will prove that CISC will prevail
(given longer pipelines.)
Things don't work this way anymore unless you're tuning a game for a
specific processor as a one off (and potentially a particular L1/L2 setup.)
You realize that Alpha went one way and Intel another, right?
But it isn't. RISC and CISC are the same semantically, except that one uses
smaller words than the other (using language as a metaphor). The words
ultimately band together to say the same thing. The reason RISC was so hot
when it was being initially pushed (and why it isn't a big deal now) is
because all of that secondary hardware (such as BTBs, PATH hardware,
intelligent cache loaders/unloaders, vftbl optimizers [sometimes just
another BTB]) was relatively crude or didn't yet exist which made longer
pipelines much more liable to stall and made RISC operate faster in many
situations. Ironically, optimizing your Intel CISC code back then would be
done to approach RISC-like performance, but that was a looong time ago.
I'm sure they do, because the associated hardware is easier and cheaper to
produce; however, for pure performance reasons, CISC is currently the king.