Re: When You Hear The Heavy Accent & The Poor Phone Connection... HANG UP!!

In any case, pipeline length has a much more direct impact on the potential clock speed than whether the instruction set is simple or complex, and the type of instruction set in turn has some impact on the pipeline (hence the confusion).

WTH

Reply to
WTH

And you expect this from Microbloat? I would say that constitutes another triumph of hope over experience.

Reply to
Froggy

You do realize that most serious developers use the Intel compiler regardless of their IDE, right?

WTH

Reply to
WTH

That's not necessarily a RISC/CISC difference. RISC might make the changes easier. As I said, "way over simplified".

Paul

Reply to
Paul Newhouse

The simpler instruction set allows the compiler/coder to optimize for the pipeline more directly. You have to take the entire architecture into account.

Paul

Reply to
Paul Newhouse

I find gcc (and g++) does a pretty fair job.

Paul

Reply to
Paul Newhouse

How, exactly, does it do that? I have spent a fair amount of time working in assembly for certain critical sections of code, and I can assure you that you're not likely to beat a modern compiler no matter how good you are (the Intel compiler, for example). As for RISC allowing a coder/compiler to optimize for a pipeline more directly, that doesn't seem to make sense, since the optimization process would be roughly the same between the two competing instruction set types.

People seem to think that Intel embraces CISC more than RISC because they're either pigheaded or stupid; they're neither. Both routes have merits; neither is an 'all encompassing' better approach than the other.

WTH

Reply to
WTH

Even that is oversimplified. RISC is nothing new.

Remember that MIPS translates roughly to "meaningless indicator of processor speed" and the same is roughly true of clock speeds of dissimilar chips.

The only true measure of computer system speed is to measure the applications YOU will be running using real data. Then decide if it is fast enough for your purposes. A heavy gamer will want a fast cpu, fast memory, and an ultra-fast graphics adapter to achieve high frame rates, etc. Someone who just surfs the internet and reads mail and newsgroups needs a fast network connection; mediocre cpu, disk, and display adapters will do just fine.
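To make the "measure the applications YOU will be running" point concrete, here is a minimal, made-up C sketch (assuming a POSIX system with clock_gettime) of the only benchmark that really counts: wall-clock timing your own workload. The work() function is just a placeholder; drop in the code and data you actually care about.

#include <stdio.h>
#include <time.h>

/* Placeholder for the real workload -- replace with your own
 * application code and real data.                              */
static double work(void)
{
    double acc = 0.0;
    for (long i = 1; i <= 10000000L; i++)
        acc += 1.0 / (double)i;
    return acc;
}

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    double result = work();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec)
                + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("result = %f, elapsed = %.3f s\n", result, secs);
    return 0;
}

If the number that comes out is fast enough for what you do, the machine is fast enough, whatever its clock speed or instruction set.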

For example, there is no particular difference in internet speeds between my G4/733, G4/500 cube, G4/500DP ProTools Mac, and my daughter's G3/500 iMac or G3/600 iBook, all with the exact same OS and all sharing the same network connection (Earthlink DSL). (The exception is complicated Java or Flash applications, which depend upon CPU speed.)

But when I use Word or Excel on those computers, the G4/733 is the clear winner (MS Word/Excel don't multi-process yet); multi-threaded stuff runs better on the G4/500DP.

So, though I'm a Mac user and make occasional sarcastic swipes at Wintel, there is no basis in fact for my prejudice. When I recommended computers for the SF Zoo in 1998, it was Windows NT and servers; it would be the same today, except for the OS: Windows XP Pro servers. For that sort of industrial-strength but on-a-budget application and user community, the service base, application availability, and the availability of consultant and employee skills call out for a Microsoft OS.

If I were to set up an audio or video studio, it would be Mac, for similar reasons, for now. That will probably change in the next couple of years. I don't personally see how the Mac can survive when Intel gets 64-bit chips, the new and better interrupt and I/O structure, etc. Maybe we'll get lucky and Linux will get the Mac GUI ported to it (it's possible; Mac OS X is just Berkeley Unix with a pretty face) so we can have the robustness of Unix without X Windows or Motif (yikes).

Ed.

in article s2Acc.78494$gA5.964969@attbi_s03, Paul Newhouse at snipped-for-privacy@pimin.rockhead.com wrote on 4/6/04 8:30 AM:

Reply to
Edward A. Oates

You've got to be kidding. GCC produces, in general, the worst code of all the compilers you can get. I used it on my Mandrake box, but that's because I don't have much choice. I'm not trying to slam GCC, because I do use it extensively on Linux; however, if you're writing code and worried about size or performance, you do not want to use GCC.

WTH

Reply to
WTH

I completely agree, but it doesn't stop people from trying.

How do you optimize the complex instruction that:

adds one to a memory location, tests the memory location's value, and branches if a condition is met?

It's a lot easier to optimize:

load memory location Y into register X
add one to register X
store register X back to memory location Y
test register X for a condition (like zero/non-zero)

You can intermix that sequence with other instructions (inside what might be a for loop, for instance), which you can't do with the single increment-and-test instruction. Memory accesses probably take longer than the inter-register operations, and if your register bus is available while that memory reference is taking place, you can exploit any inherent parallelism in the pipelining architecture ... if it exists.
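To put that in concrete terms, here's a minimal C sketch of the kind of loop we're talking about (the array, its size, and the running sum are made up purely for illustration). A CISC compiler is free to fold the counter update, test, and branch into a compact read-modify-write plus conditional branch; a RISC compiler emits the explicit load/add/store/compare/branch sequence, which it can then interleave with the loads in the loop body:

#include <stdio.h>

int main(void)
{
    /* Made-up workload purely for illustration: 'data' and 'sum' stand
     * in for the "other instructions" a real loop body would contain. */
    enum { N = 1000000 };
    static int data[N];
    long sum = 0;

    for (int i = 0; i != N; i++) {  /* add one, test, branch         */
        sum += data[i];             /* work that can be interleaved  */
    }
    /* CISC: the i++ / i != N / branch can become a compact
     * read-modify-write plus conditional branch.
     * RISC: load i, add 1, store (if i lives in memory), compare,
     * branch -- separate instructions the scheduler can overlap with
     * the loads of data[i] to hide memory latency.                   */

    printf("sum = %ld\n", sum);
    return 0;
}

Whether the compiler actually keeps the counter in a register or in memory depends on register pressure, of course; the point is only that the RISC-style sequence gives the scheduler more pieces to shuffle.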

Uhhh ... ok?

Agreed but, I believe the RISC approach is more general and more easily applicable to a wider range of problems. It certainly isn't a magic bullet.

Engineers designing a chip, in my experience, seem to favor the RISC approach since the instructions are simpler and easier to verify.

Paul

Reply to
Paul Newhouse

That sounds familiar:

The CDC 6000/7000 series (mid-1960s) and the Crays were/are all RISC-ish. Implementations of the concept have been around a long, long time.

Paul

Reply to
Paul Newhouse

What does any of this have to do with the original message???????

Ed

Reply to
ZEMSKI

HA! HA! HA! That's a good one. Let's change the example just a bit:

In my driveway I have a motorcycle, a car, and a Mach+ jet. No matter how fast my mode of transportation is, the time it takes me to get from flat on my back in bed and into one of the vehicles is approximately the same. The trip from the bed to the driver's seat is the internet.

I used to run a 486 as my firewall/router. I have many friends who still do. The 486 is plenty powerful enough to drive the internet (cable + DSL) as long as I don't do anything else on it, like run a web server or an MTA or ... .

A 486 can route packets between a dozen NICs faster than the NICs can soak up, or supply, the data (as long as they aren't Win-NICs). Ok, you need a decent motherboard so you can handle interrupts quickly.

Paul

Reply to
Paul Newhouse

The point here, to paraphrase Einstein, is to get the fastest computer you need for the job, but not faster.

Ed

in article HBDcc.79146$gA5.973237@attbi_s03, Paul Newhouse at snipped-for-privacy@pimin.rockhead.com wrote on 4/6/04 12:32 PM:

Reply to
Edward A. Oates

Lol, sadly true. Usually young people thinking "I've got to write code like I saw in DOOM's source..."

You do realize that the pipeline isn't just a pipe that stuff goes through, right? I mean, there are branch target buffers, look-ahead adders, PATH optimization hardware, et cetera, all for precisely that reason.
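If you want to see that hardware earning its keep, here's a small made-up C demo (names and sizes invented for illustration): the same loop runs over data whose branch is perfectly predictable and over data whose branch is a coin flip, and the timing gap is largely the branch predictor / BTB at work. Compile without aggressive optimization; a compiler that turns the branch into a conditional move or vectorizes the loop will hide the effect.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

enum { N = 1 << 22 };

/* Sum only the elements above a threshold; whether the branch is
 * taken depends on the data, so its predictability drives the time. */
static long sum_above(const int *a, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++)
        if (a[i] > 127)          /* the branch the predictor sees */
            s += a[i];
    return s;
}

static void time_it(const char *label, const int *a)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    long s = sum_above(a, N);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1e3
              + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("%s sum = %ld, elapsed = %.2f ms\n", label, s, ms);
}

int main(void)
{
    static int predictable[N], random_ish[N];

    for (int i = 0; i < N; i++) {
        random_ish[i]  = rand() % 256;           /* branch outcome is random  */
        predictable[i] = (i < N / 2) ? 0 : 255;  /* branch flips exactly once */
    }

    time_it("predictable:", predictable);
    time_it("random:     ", random_ish);
    return 0;
}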

Not in the last 10 years ;). Then add in the fact that RISC means more instructions per operation. Basically, RISC and CISC are comparable overall in efficiency for now. Ultimately, Intel will prove that CISC will prevail (given longer pipelines).

Things don't work this way anymore unless you're tuning a game for a specific processor as a one-off (and potentially a particular L1/L2 setup).

You realize that Alpha went one way and Intel another, right?

But it isn't. RISC and CISC are semantically the same, except that one uses smaller words than the other (using language as a metaphor). The words ultimately band together to say the same thing. The reason RISC was so hot when it was initially being pushed (and why it isn't a big deal now) is that all of that secondary hardware (such as BTBs, PATH hardware, intelligent cache loaders/unloaders, vftbl optimizers [sometimes just another BTB]) was relatively crude or didn't yet exist, which made longer pipelines much more liable to stall and made RISC operate faster in many situations. Ironically, optimizing your Intel CISC code back then would be done to approach RISC-like performance, but that was a looong time ago.

I'm sure they do, because the associated hardware is easier and cheaper to produce; however, for pure performance reasons, CISC is currently the king.

WTH

Reply to
WTH

AH!! You do understand; CISC - expensive, RISC - less so.

"Pure performance"??? What machine, what application?

As for your "it's more complicated than that" argument, remember we started out with "way over simplified". If you want to have a religious argument about CISC, have at it, but you do understand why it will lose.

Paul

Reply to
Paul Newhouse

Naughty consumer! Naughty, naughty consumer! *8-D

Reply to
Paul Newhouse

in article YpGcc.84680$K91.185430@attbi_s02, Paul Newhouse at snipped-for-privacy@pimin.rockhead.com wrote on 4/6/04 3:45 PM:

It's worse than you thought: my newest computer is over two years old. My fastest computer has just gone off AppleCare (three years) and since it still edits video with FCE and writes DVDs with DVD SP just fine, I see no particular reason to replace it.

I'll take myself out behind the woodshed (actually, I don't have one, maybe I'll build one on my HO layout...yay! back on topic).

Reply to
Edward A. Oates

Hehe, yes, but not expensive on a scale that would really matter to you and me. In the embedded world, yes. In the PC world, not at all.

All current benchmarking software for the PC. You realize that the Opteron is more powerful than the PowerPC, right?

Not at all, and there's nothing religious about it. I don't have a preference one way or the other. I just want whatever is fastest; my compiler takes care of the rest (except in increasingly rare cases). It is plain and simple: CISC is much faster, especially in CPU-heavy applications such as real-time 3D graphics (games, visual simulation, et cetera). Ironically, the PowerPC is supposed to be a RISC processor, but it supports complex instructions as extensions, meaning it isn't actually a RISC processor.

WTH

Reply to
WTH
