The CAD Multi-Core Processor Problem

Very shortly, 12-, 24-, and 48-core processors will be commonplace. Most CAD systems can only use one core.

I cover this issue extensively on the CADCAM Technology Leaders blog.

Reply to
jon_banquer

Same thing happened with RAM decades ago, so they invented paged (expanded) RAM. Soon someone will make paged CPU usage. But does it really matter?

It's one chip plugged into one bus. It can only do one thing at a time. A better setup would be like SLI graphics: two giant CPUs on two separate buses with two separate memory pipes, both doing the exact same thing, and when the system gets to a wait state, it goes to the other CPU for the data. This multi-core crap is nonsense. It's a troll. Two processors will never, ever work at the same time. Two physical cores could, if the whole motherboard design were changed.

Reply to
vinny

Paged RAM was a workaround for a limited address space. The issue here is not that one needs a workaround to allow a processor with limited capability to handle a task; it is that there are processors with capabilities of which the software cannot avail itself.

Where have you been for the past decade? A 48-core chip is 48 separate processors that can do 48 separate things at a time, all built into a single carrier.

So you have 96 cores instead of 48, but the software nonetheless only uses one of them, leaving the other 95 idle.

Yep, you're definitely out of touch. Multicore processors have multiple physical cores.

You seem to be conflating multiple core processors with hyperthreading.

Reply to
J. Clarke

No, I'm not. The chip is plugged into one bus. ALL the cores go through that bus. The motherboard design needs to be changed to allow multiple cores to do things at the same time. As long as it's a "central" processing unit, only one thing at a time can be thrown through the bus. The answer is localized processing - kinda like when you put your hand on a hot frying pan and your arm, without a signal from your brain, pulls away. A fix would be two cores, both fed like a RAID system, both running together, but if a wait state is reached the data is read off the other CPU, eliminating the bus bottleneck. An even better idea is to redesign the 1990s centralized motherboard design.

Reply to
vinny

Your viewpoint is not even wrong. You clearly have no clue what a CPU does or how it works, and I doubt that you actually want to know.

Reply to
J. Clarke

Well, even if I did, you're clearly not the guy to school me on it. All you're gonna reply with is lines of text saying "you're stupid." If you have no clue, just don't reply to posts like this.

Reply to
vinny

No one can help you because you refuse to study the material you are presented with.

*****

Shit... you can't even drill a friggen hole without crying across half the net's social media that the software is broken. You're worried about support for multiple cores, but what you really need is support for idiots. Fuck off, ya know-nothing d*****ad.

Reply to
vinny

This Dell Optiplex 531S computer has a dual-core processor. Currently, one core is running at around 80% use and the other is below 10%. Not all activity requires bus access, and it is the operating system that decides which core to send a task to. High-end video cards have a processor as well, and all access the same data & control bus. Standard video uses part of the system RAM for video.
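
As a side note, the one-busy-core pattern is easy to reproduce. Here is a minimal Python sketch (the busy-work function is arbitrary filler, nothing CAD-specific): run the same work serially, then through a process pool, and watch the OS place one worker on each core.

import multiprocessing as mp
import time

def busy_work(n: int) -> int:
    # Arbitrary CPU-bound filler, standing in for real work.
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 20_000_000

if __name__ == "__main__":
    cores = mp.cpu_count()

    start = time.perf_counter()
    for _ in range(cores):
        busy_work(N)              # serial: one core does everything
    serial = time.perf_counter() - start

    start = time.perf_counter()
    with mp.Pool(cores) as pool:  # parallel: the OS spreads workers over cores
        pool.map(busy_work, [N] * cores)
    parallel = time.perf_counter() - start

    print(f"{cores} cores: serial {serial:.1f}s, parallel {parallel:.1f}s")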

Before you spout off: the CPU in the Ku-band voice/video/data uplink for the space station was one of mine. It's an older design, based on the Motorola MC68340 MPU and support ICs.

BTW, your attitude needs a _lot_ of work. A complete off-chassis rebuild looks to be long overdue.

Reply to
Michael A. Terrell

Uhhhh, la-ti-da. Everyone on Usenet is a rich genius with a big dick... good for you. Seriously...

My viewpoint was that the current decades-old motherboard design is not good for multi-core processors. Since I'm not as learned as you, I'll just quote someone who is:

"While multiple processor cores on a single die communicate with each other directly, the approach of building multi-core processors by combining distinct processor dies creates the necessity to communicate via the processor interface, which, in the case of the desktop and server mainstream armada, is the Front Side Bus. This has been criticized as a huge bottleneck for multi-core configurations ever since Intel released its first dual-core Pentium D 800, aka Smithfield. As one core accesses data located in another processor's L1 or L2 cache, it must use the Front Side Bus, which eats away at the available bus bandwidth.

For this very reason, the Core 2 generation implements a large, unified L2 cache, which means it is shared by two cores. However, as soon as you pack two dual-core dies onto a physical processor to build quad cores, the FSB bottleneck issue is back again - and it is probably even worse, as there are more cores fighting over more data in larger L2 caches. Intel's countermeasure consists of a bus clock speed upgrade. The server platform already runs 333 MHz (FSB1333 quad-pumped); the desktop platform will probably receive the upgrade by the time the first quad-core product hits the market.

The second bottleneck is the system's main memory. It is not a part of the processor, but resides in the chipset northbridge on the motherboard. Again, the Front Side Bus is used to interconnect the processor(s) with the motherboard core logic, which makes two or more cores fight over memory access. AMD integrated the memory controller into its processors as early as 2003, which minimizes the memory access path and improves performance due to faster operation at full CPU core clock speed. The real advantage of on-die memory controllers becomes obvious in multiprocessor environments, where each CPU can access its own memory at maximum bandwidth."

It's all about the bottlenecks.
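
A rough way to see the shared-bus effect that quote describes: run a memory-bound job on more and more workers at once, and aggregate throughput flattens long before you run out of cores. A crude Python sketch (buffer size and repeat counts are arbitrary):

import multiprocessing as mp
import time

BUF_MB = 100   # arbitrary buffer size
COPIES = 20    # arbitrary repeat count

def copy_loop() -> None:
    # Repeatedly copy a large buffer; this is memory-bandwidth
    # bound, not compute bound.
    buf = bytes(BUF_MB * 1024 * 1024)
    for _ in range(COPIES):
        _ = buf[:-1]   # slicing a bytes object forces a full copy

if __name__ == "__main__":
    for workers in (1, 2, 4):
        procs = [mp.Process(target=copy_loop) for _ in range(workers)]
        start = time.perf_counter()
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        elapsed = time.perf_counter() - start
        mb_moved = workers * COPIES * BUF_MB
        print(f"{workers} workers: {mb_moved / elapsed:,.0f} MB/s aggregate")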

So you pick up some speed with a quad-core 3.5 GHz relative to a single 3.5 GHz core.

But what about two physical SLI-type 14 GHz chips? That's the equivalent clock speed of a quad core: 14 GHz. Both get fed the same stuff, and when a bottleneck is reached, or temperature gets out of hand, it reads off the other core.

How much power is lost trying to manage all those requests? How much fatter is code due to all this nonsense?

I'd take the SLI-type 14 GHz chip and software written to dominate memory and CPU usage.
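
To put numbers on that "equivalent GHz" idea: 4 x 3.5 GHz only behaves like 14 GHz if the work splits perfectly across the cores. A minimal Amdahl's-law sketch in Python (the parallel fractions are illustrative, not measured):

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Speedup over one core when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for p in (0.50, 0.90, 0.99, 1.00):
    s = amdahl_speedup(p, cores=4)
    print(f"{p:.0%} parallel: 4 x 3.5 GHz acts like {3.5 * s:.1f} GHz")

# 100% parallel -> 14.0 GHz equivalent; 50% parallel -> only ~5.6 GHz.
# Mostly-serial work, like the one-core CAD case above, gets far less.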

Reply to
vinny

The biggest bottleneck at GHz speeds is not bus capacity; it's the timing delay that would be incurred if the processors were physically separated by more than a few centimeters. The latency created is way more of a loss than multiple cores on the same chip sharing the same bus.

The wavelength at 3.0 GHz is 10 centimeters. Losing one clock cycle every cycle to keep the two processors in sync effectively cuts the processor speed in half.
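
For anyone who wants to check the arithmetic, a quick free-space calculation (signals in copper traces travel slower, roughly half of c):

C = 299_792_458  # speed of light, m/s

for ghz in (3.0, 14.0):
    wavelength_cm = C / (ghz * 1e9) * 100
    print(f"{ghz:4.1f} GHz: wavelength ~{wavelength_cm:.1f} cm, "
          f"i.e. light covers ~{wavelength_cm:.1f} cm per clock cycle")

# 3.0 GHz -> ~10 cm, matching the figure above; 14 GHz -> ~2.1 cm,
# so chips a few centimeters apart can't stay in lockstep within a cycle.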

There are numerous other issues, but this is the biggest.

Others with more electronic experience may have more detailed answers, and may have other reasons.

Reply to
Steve Walker

I'm sure distance is the biggest factor, but only because it's a central processor. All roads lead to Rome... What if there's no central processor? What if the processor didn't control the hard drive, but worked with the CPU in the hard drive? We are starting to see a little of that in graphics cards.

Reply to
vinny

Genius, hard drives have had onboard processors since IDE came out about 25 years ago (on mainframes they had them long before that). Graphics cards have had onboard processors about as long. You go on about "14 GHz". Please tell us who makes a 14 GHz GPU.

As for your cut-and-paste, note that the Pentium D is about three generations old and was playing catch-up with AMD. Find similar statements about Sandy Bridge or Magny-Cours.

Reply to
J. Clarke

Are you friggen retarded? Can you read English? I was suggesting that instead of a 3.5 GHz quad, they make a 14 GHz single. Which would be faster for the same total GHz? Damn, you're dumber than a box of rocks; even jb can read and reply semi-intelligently.

Reply to
vinny

I just experienced what happens when the processor DOES control the IDE hard drive. It's called PIO mode, it transfers at less than 2 MB/s, and it's the failsafe when too many drive errors take XP out of UDMA mode, which is a high-speed block transfer controlled by dedicated hardware. My laptop's module-bay 2nd hard drive sometimes triggers PIO mode when it's plugged in hot.

In UDMA mode 5 the hard drive transfers at ~66 MB/s (measured) while the CPU is occupied elsewhere, or idle.
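
Rough arithmetic on that gap, using an illustrative 700 MB file:

FILE_MB = 700  # illustrative file size

for mode, rate_mb_s in (("PIO", 2.0), ("UDMA mode 5", 66.0)):
    minutes = FILE_MB / rate_mb_s / 60
    print(f"{mode}: {minutes:.1f} minutes")

# ~5.8 minutes vs. ~0.2 minutes - and PIO burns CPU cycles on every
# word moved, while UDMA transfers run on dedicated DMA hardware.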

You can see the IDE bus mode with the Device Manager.

I haven't had enough trouble (yet) with SATA drives and Windows 7 to learn their details.

jsw

Reply to
Jim Wilkins

When do you think those 14 GHz processors will be available? How about RAM that's fast enough to work with them, and how do you plan to route the data & control lines? Tell us all, what is the wavelength of a 14 GHz signal, and what variations are allowable in each signal path to maintain system timing? That is why you use multiple cores: to actually add performance, rather than a half-assed theory of what might be, some day. Like I said, the OS assigns the tasks, and the work is done in the core. They access memory and I/O as needed, but the bus still sees few conflicts. Sorry to hear that you have no dick.

Reply to
Michael A. Terrell

If only you could rise to semi-intelligence. Current semiconductor technology can't produce a 14 GHz processor, or the needed support ICs.

Reply to
Michael A. Terrell

The A/D converter and its support circuits in digital storage scopes exceed 14 GHz easily.
jsw

Reply to
Jim Wilkins


At what price? A lot of tricks went into that design, like multiple parallel converters. If the CPU were capable of 14 GHz operation, it would need terabytes of RAM to store a waveform. Some early scopes like that downconverted the signal before processing. You don't have to do brute-force sampling to display a waveform.
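
One of those tricks, time-interleaving several slower converters to hit a high aggregate rate, sketches out roughly like this in Python (the rates are illustrative, not taken from any real scope):

import math

N_ADCS = 4                          # illustrative converter count
PER_ADC_RATE = 3.5e9                # each ADC samples at 3.5 GS/s
AGGREGATE = N_ADCS * PER_ADC_RATE   # 14 GS/s overall

def sample(signal, n_samples: int):
    """Interleave samples: ADC k takes sample slots k, k+N, k+2N, ..."""
    dt = 1.0 / AGGREGATE
    return [(i % N_ADCS, i * dt, signal(i * dt)) for i in range(n_samples)]

# Digitize a 1 GHz sine for 20 aggregate sample slots.
for adc, t, v in sample(lambda t: math.sin(2 * math.pi * 1e9 * t), 20):
    print(f"ADC{adc}: t={t * 1e12:6.1f} ps  v={v:+.3f}")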

Vital Industries' 'Squeeze Zoom' used a Z80B for real-time, broadcast-quality NTSC video. It was the first studio-grade DVE. The video A/D converter module was $1400, used. It had 768 KB of 12-bit RAM (1K x 1, 1000-nanosecond TI RAM that was obsolete by the mid-'80s). It was interleaved into blocks that allowed the slow CPU to read or write video to multiple groups, improving the apparent access time. It processed two video frames at once so it could perform framestore as well as video effects. That allowed a non-synced video source to be imported into the system without external hardware. It also simplified the process of generating the special effects. It was probably the only Z80 system that had people lined up to pay $250,000 for a full rack of hardware. It had a 1000 A, 5 V linear power supply, fed from three-phase 480 V, for the RAM, and a dozen other DC supplies for various analog & digital circuits.

The RCB-2000 had a pair of D/A converters doing 90 million samples per second, but the CPU was a lot slower.

None of these even approach a 14 GHz CPU, or RAM that would support it. Look at the speed of RAM used in a PC compared to the processor speed: 400 MHz vs. 3.0 GHz. The biggest bottleneck is moving data in and out of the CPU. That's why you have a small prefetch cache and other tricks to speed up the PC.
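
A crude way to see that bottleneck even from Python: walk the same data sequentially and then in random order. The random walk defeats the caches and prefetcher, so it runs noticeably slower (sizes are arbitrary):

import random
import time

N = 10_000_000
data = list(range(N))

orders = [("sequential", range(N))]
shuffled = list(range(N))
random.shuffle(shuffled)
orders.append(("random", shuffled))

for name, order in orders:
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]   # same work, different memory access pattern
    print(f"{name}: {time.perf_counter() - start:.2f}s (sum={total})")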

Reply to
Michael A. Terrell

I seem to recall that the fastest non-helium-bath processor or video processor is the one developed by
formatting link
I was a beta site, as was Intel - we had the first versions of their scopes. Since then, they never looked back.

They need very high processing speed to handle real-time scope probe samples.

My personal processor in GaAs ran at 400 MHz. We never found the top end, since our need was a stable 200 MHz. The IC dissipated 78 watts, and we liquid-cooled it top and bottom. The board-size Freon-chilled plates were started before the boards were.


Reply to
Martin Eastburn

OK, last post for me on this thread; you're a friggen whacko. It's like you're screaming. I think the whole multi-core design is a way to get more out of current designs. You just admitted (I friggen think?) that multi-core chips are the only way to get more out of the current configuration. The current configuration needs to be radically changed. But look... don't have a heart attack. I'm nobody worth impressing.

Reply to
vinny
