Re: When You Hear The Heavy Accent & The Poor Phone Connection... HANG UP!!

What you prefer to believe is your own business. However, I'm sure it was a combination of things, with competition and IBM's market reputation being two of the factors.

Reply to
Brian Paul Ehni

My point is that 'clock rate' is a very poor measure of computer performance, but that's all that most ever look at. It doesn't matter how fast it runs, if it can't do the job efficiently, or at all.

Many things affect overall speed. Bus speed, hard drive speed, various latencies, and, above all, how well the software is written (not to mention the manuals). That especially includes the operating system. "Ease of use" of both hardware and software is critical to meaningful use of the computer. It's not how many 'cycles' the computer is clocking, but how much work is actually getting done.

If the computer isn't running properly, for whatever reason, it's NOT 'producing' efficiently, if at all. Dependability is a BIG factor in computer 'speed'. A poor power supply that causes intermittent crashes and glitches definitely affects overall computer 'speed'. Both hardware and software 'support' also enter into the overall picture. Time spent on the phone, or on line, seeking assistance, is WASTED time.

Software issues, other than those imposed by the operating system, are not really the fault of the hardware. Those imposed BY the operating system ARE the fault of the hardware, IF the hardware requires use of a poor or inappropriate operating system. Much existing hardware can run more than one operating system, within limits.

And some software is either unique to particular platforms, or runs much better on those platforms. The hardware that will run that software effectively will beat ANY other machine at that task, regardless of clock speed. That's why certain machines that run certain software (Mac, Windows, Unix, Cobol, Fortran, or whatever) virtually OWN certain job markets.

Dan Mitchell ==========

" snipped-for-privacy@CreditValley.Railway" wrote:

Reply to
Daniel A. Mitchell

(snip)

Speaking of which, I have heard (rumors?) that Intel's new series of processors will de-emphasize clock speed, as they are moving toward a RISC-based design.

RISC processors do not require the same clock speeds to perform the same work as CISC processors; a point Apple has tried to make for many years now.

If true, this means that Macs and PCs can be compared point by point even more directly.

Reply to
Brian Paul Ehni

That's way/weigh/whey over simplified. You are still doing the same work, but you are doing it with a set of simpler instructions. Usually, the memory references are decoupled from the arithmetic and logical operations. You end up loading and storing as independent operations and do a series of register-to-register operations in between. ... ok, that's still over simplified and just one aspect of the appeal of RISC.

It puts a greater burden on the compilers to generate reasonable code.

Paul

Reply to
Paul Newhouse

Oh, goody-goody! Does that mean that I will soon be able to get a Motorola CPU for my PC? That would be great, because then I could take it out of the walk-in freezer and put it back in my office. I have an AMD Athlon now that only works properly at sub-arctic temperatures.

Reply to
Froggy

As I said, rumor.

Reply to
Brian Paul Ehni

There are other factors, too, like data block size, etc.

Reply to
Brian Paul Ehni

In any case, pipeline length has a much more direct impact on potential clock speed than whether the instruction set is simple or complex, though the type of instruction set does somewhat affect the pipeline (ergo the confusion).

WTH

Reply to
WTH

And you expect this from Microbloat? I would say that constitutes another triumph of hope over experience.

Reply to
Froggy

You do realize that most serious developers use the Intel compiler regardless of their IDE, right?

WTH

Reply to
WTH

That's not necessarily a RISC/CISC difference. RISC might make the changes easier. As I said, "way over simplified".

Paul

Reply to
Paul Newhouse

The simpler instruction set allows the compiler/coder to optimize for the pipeline more directly. You have to take the entire architecture into account.

Paul

Reply to
Paul Newhouse

I find gcc(++) doing a pretty fair job.

Paul

Reply to
Paul Newhouse

How, exactly, does it do that? I have spent a fair amount of time working in assembly for certain critical sections of code, and I can assure you that you're not likely to beat a modern compiler no matter how good you are (the Intel compiler, for example). As for RISC allowing a coder/compiler to optimize for a pipeline more directly, that doesn't seem to make sense, since the optimization process would be roughly the same between the two competing instruction set types.

People seem to think that Intel embraces CISC more than RISC because they're either pigheaded or stupid; they're neither. Both routes have merits; neither is an 'all encompassing' better approach than the other.

WTH

Reply to
WTH

Even that is oversimplified. RISC is nothing new.

Remember that MIPS translates roughly to "meaningless indicator of processor speed" and the same is roughly true of clock speeds of dissimilar chips.

The only true measure of computer system speed is to measure the applications YOU will be running using real data. Then decide if it is fast enough for your purposes. A heavy gamer will want a fast cpu, fast memory, and an ultra-fast graphics adapter to achieve high frame rates, etc. Someone who just surfs the internet and reads mail and newsgroups needs a fast network connection; mediocre cpu, disk, and display adapters will do just fine.
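That advice can be sketched as a tiny timing harness: run the actual job a few times and keep the best wall-clock time, rather than comparing clock speeds on paper. Everything below (the function names, the stand-in workload) is illustrative, not from the thread; substitute the application and data you actually care about.

```python
# Minimal sketch of "benchmark your own workload": time the real task,
# repeat it, and keep the best run to reduce noise from other processes.
import time

def benchmark(workload, repeats=5):
    """Run `workload` several times; return the best wall-clock time."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        elapsed = time.perf_counter() - start
        best = min(best, elapsed)
    return best

def sample_workload():
    # Stand-in job: sum a million integers. Replace with your real task.
    return sum(range(1_000_000))

fastest = benchmark(sample_workload)
print(f"best of 5 runs: {fastest:.4f} s")
```

Then the decision is simply whether that measured time is fast enough for your purposes, per machine, per workload.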

For example, there is no particular change in internet speeds between my G4/733, G4/500 cube, G4/500DP ProTools Mac, and my daughter's G3/500 iMac or G3/600 iBook. All with the same exact OS and all sharing the same network connection (Earthlink DSL). (The exception being complicated Java or FLASH applications, which depend upon CPU speed.)

But when I use Word or Excel on those computers, the G4/733 is the clear winner (MS Word/Excel don't multi-process yet); multi-threaded stuff runs better on the G4/500DP.

So, though I'm a Mac user and make occasional sarcastic swipes at Wintel, there is no basis in fact for my prejudice. When I recommended computers for the SF Zoo in 1998, it was Windows NT and servers; it would be the same today, except for the OS: Windows XP Pro servers. For that sort of industrial-strength-on-a-budget application and user community, the service base, application availability, and the availability of consultants and skilled employees call out for a Microsoft OS.

If I were to set up an audio or video studio, it would be Mac, for similar reasons, for now. That will probably change in the next couple of years. I don't personally see how Mac can survive when Intel gets 64-bit chips, the new better interrupt and I/O structure, etc. Maybe we'll get lucky and Linux will get the Mac GUI ported to it (it's possible; Mac OS X is just Berkeley Unix with a pretty face) so we can have the robustness of Unix without X Windows or Motif (yikes).

Ed.

in article s2Acc.78494$gA5.964969@attbi_s03, Paul Newhouse at snipped-for-privacy@pimin.rockhead.com wrote on 4/6/04 8:30 AM:

Reply to
Edward A. Oates

You've got to be kidding. GCC produces, in general, the worst code of all the compilers you can get. I used it on my Mandrake box, but that's because I don't have much choice. I'm not trying to slam GCC because I do use it extensively on Linux; however, if you're writing code and worried about size or performance, you do not want to use GCC.

WTH

Reply to
WTH

I completely agree, but it doesn't stop people from trying.

How do you optimize the complex instruction that:

adds one to a memory location, tests the memory location's value, and branches if a condition is met

It's a lot easier to optimize:

load memory location Y into register X
add one to register X
store register X back to memory location Y
test register X for condition (like zero/non-zero)

You can intermix those instructions with other instructions (inside what might be a for loop, for instance), which you can't do with the single increment-and-test instruction. Memory accesses probably take longer than the inter-register operations, and if your register bus is available while that memory reference is taking place, you can exploit any inherent parallelism in the pipelining architecture ... if it exists.
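To make the contrast concrete, here is a tiny simulation sketch in Python rather than any real instruction set; the function names and the "register X" framing are made up for illustration, not taken from any actual ISA. The CISC-style version fuses the read-modify-write and the test into one step, while the RISC-style version spells out the load, add, store, and test as separate operations.

```python
# Simulation sketch (illustrative names, no real ISA) of the two styles.

def cisc_inc_and_test(mem, y):
    """One complex operation: add one to mem[y] and test it, fused."""
    mem[y] += 1              # read-modify-write and test in one step
    return mem[y] == 0

def risc_inc_and_test(mem, y):
    """The same work as four simple steps a compiler can interleave."""
    x = mem[y]               # load memory location Y into register X
    x = x + 1                # add one to register X
    mem[y] = x               # store register X back to location Y
    return x == 0            # test register X (zero/non-zero)

# Both forms produce identical results; the RISC form just exposes the
# intermediate steps, which is where the scheduling freedom comes from.
a, b = [-1, 5, 0], [-1, 5, 0]
results = [(cisc_inc_and_test(a, i), risc_inc_and_test(b, i))
           for i in range(3)]
```

The point of the decomposition is that other, unrelated instructions can be slotted between the load and the store, which a single fused increment-and-test instruction does not allow.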

Uhhh ... ok?

Agreed but, I believe the RISC approach is more general and more easily applicable to a wider range of problems. It certainly isn't a magic bullet.

Engineers designing a chip, in my experience, seem to favor the RISC approach since the instructions are simpler and easier to verify.

Paul

Reply to
Paul Newhouse

That sounds familiar:

The CDC 6000/7000 series (mid-1960s) and the Crays were/are all RISC'ish. Implementations of the concept have been around a long, long time.

Paul

Reply to
Paul Newhouse

HA! HA! HA! That's a good one. Let's change the example just a bit:

In my driveway I have a motorcycle, a car, and a mach+ jet. No matter how fast my mode of transportation is, the time it takes me to get from flat on my back in bed into one of the vehicles is approximately the same. The trip from the bed to the driver's seat is the internet.

I used to run a 486 as my firewall/router. I have many friends who still do. The 486 is plenty powerful enough to drive the internet (cable + DSL) as long as I don't do anything else on it, like run a web server or an MTA or ... .

A 486 can route packets between a dozen NICs faster than the NICs can soak up, or supply, the data (as long as they aren't Win-NICs). Ok, you need a decent motherboard so you can handle interrupts quickly.

Paul

Reply to
Paul Newhouse

The point here, to paraphrase Einstein, is to get the fastest computer you need for the job, but not faster.

Ed

in article HBDcc.79146$gA5.973237@attbi_s03, Paul Newhouse at snipped-for-privacy@pimin.rockhead.com wrote on 4/6/04 12:32 PM:

Reply to
Edward A. Oates

PolyTech Forum website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.