The same thing happened with RAM decades ago, so they invented paged RAM (expanded memory). Soon someone will make paged CPU usage.
But does it really matter?
It's one chip plugged into one bus. It can only do one thing at a time.
A better setup would be like SLI graphics: two giant CPUs on two separate buses with two separate memory pipes, both doing the exact same thing. When the system gets to a wait state, it goes to the other CPU for the data.
This multi-core crap is nonsense. It's a troll. Two cores in a multi-core chip will never ever work at the same time. Two physical cores could, if the whole motherboard design was changed.
No, I'm not.
The chip is plugged into one bus. ALL the cores go through that bus.
The motherboard design needs to be changed to allow multiple cores to do things at the same time.
As long as it's a "central" processing unit, only one thing at a time can be thrown through the bus.
The answer is localized processing. Kinda like when you put your hand on a hot frying pan and your arm, without a signal from your brain, pulls away on its own.
A fix would be two CPUs, both fed like a RAID system, both running together, but if a wait state is reached the data is read off the other CPU, eliminating the bus bottleneck.
An even better idea is to redesign the 1990s centralized motherboard design.
No one can help you, because you refuse to do any studying of the material you are presented with.
Shit... you can't even drill a friggin' hole without crying across half the net's social media that the software is broken. You're worried about support for multiple cores, but what you really need is support for idiots.
Fuck off, ya know-nothing dickhead.
This Dell Optiplex 531S computer has a dual-core processor. Currently, one core is running at around 80% use and the other core is running below 10%. Not all activity requires bus access, and it is the operating system that decides which processor to send a task to. High-end video cards have a processor as well, and they all access the same data & control bus. Standard onboard video uses part of the system RAM.
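For what it's worth, on Linux you can watch, or even override, that scheduling decision from user space. A minimal sketch, assuming Linux and gcc (the busy loop is only there so one core visibly loads up in top):

    /* Pin the calling process to core 0, then spin.
       A sketch only - build with: gcc -o pin pin.c */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);       /* clear the affinity mask */
        CPU_SET(0, &set);     /* allow core 0 only       */

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        for (;;)              /* burn cycles on that core */
            ;
    }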
Before you spout off, the CPU in the Ku-band voice/video/data uplink for the space station was one of mine. It's an older design, based on the Motorola MC68340 MPU and support ICs.
BTW, your attitude needs a _lot_ of work. A complete off-chassis rebuild looks to be long overdue.
Uhhhh, la ti da.
Everyone on Usenet is a rich genius with a big dick... good for you.
My viewpoint was that the current decades-old motherboard design is not good for multi-core processors.
Since I'm not as learned as you, I'll just quote someone who is...
While multiple processor cores on a single die communicate with each other directly, the approach of building multi-core processors by combining distinct processor dies creates the necessity to communicate via the processor interface, which, in the case of the desktop and server mainstream arena, is the Front Side Bus. This has been criticized as a huge bottleneck for multi-core configurations ever since Intel released its first dual-core Pentium D 800, aka Smithfield. When one core accesses data located in another processor's L1 or L2 cache, it must use the Front Side Bus, which eats away at the available bus bandwidth.
For this very reason, the Core 2 generation implements a large, unified L2 cache, which means it is shared by two cores. However, as soon as you pack two dual-core dies onto a physical processor to build a quad core, the FSB bottleneck issue is back again - and it is probably even worse, as there are more cores fighting over more data in larger L2 caches. Intel's countermeasure consists of a bus clock speed upgrade. The server platform already runs at 333 MHz (FSB1333, quad-pumped); the desktop platform will probably receive the upgrade by the time the first quad-core product hits the market.
The second bottleneck is the system's main memory. It is not part of the processor; its controller resides in the chipset northbridge on the motherboard. Again, the Front Side Bus is used to interconnect the processor(s) with the motherboard core logic, which means two or more cores fight over memory access. AMD integrated the memory controller into its processors as early as 2003, which minimizes the memory access path and improves performance due to faster operation at the full CPU core clock speed. The real advantage of on-die memory controllers becomes obvious in multiprocessor environments, where each CPU can access its own memory at maximum bandwidth.
It's all about the bottlenecks.
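(Aside: that bottleneck is easy to see for yourself. A crude sketch in C, nothing authoritative - the buffer sizes and the clock() timing are my own arbitrary choices: sum a buffer small enough to live in L2, then one that has to stream from main memory across the FSB, and compare throughput.

    /* Crude memory-bandwidth probe: cache-resident vs. RAM-resident.
       Build with: gcc -O2 -o bw bw.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double mb_per_sec(long *buf, size_t n, int passes)
    {
        volatile long sum = 0;          /* volatile: keep the loads */
        clock_t t0 = clock();
        for (int p = 0; p < passes; p++)
            for (size_t i = 0; i < n; i++)
                sum += buf[i];
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        return (double)n * sizeof(long) * passes / secs / 1e6;
    }

    int main(void)
    {
        size_t small = 32 * 1024 / sizeof(long);        /* ~32 KB: fits in cache    */
        size_t big   = 64 * 1024 * 1024 / sizeof(long); /* ~64 MB: streams from RAM */
        long *a = calloc(small, sizeof(long));
        long *b = calloc(big, sizeof(long));
        if (!a || !b) return 1;

        printf("cache-resident: %.0f MB/s\n", mb_per_sec(a, small, 20000));
        printf("RAM-resident:   %.0f MB/s\n", mb_per_sec(b, big, 10));
        return 0;
    }

The first number should come out several times larger than the second; the gap is the bus.)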
So you pick up some speed with a 3.5 GHz quad core relative to a single 3.5 GHz chip. But what about two physical SLI-type 14 GHz chips? That's the equivalent clock rate of a quad core: 14 GHz. Both get fed the same stuff; when a bottleneck is reached, or the temperature gets out of hand...
it reads off the other chip.
How much power is lost trying to manage all those requests? How much fatter is code due to all this nonsense?
I'd take the SLI-type 14 GHz chip and software written to dominate memory and CPU usage.
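A quick aside on why the two aren't interchangeable: only the parallel fraction of a job scales with core count (Amdahl's law), while a faster single core speeds up everything. A back-of-the-envelope sketch in C; the 80% parallel fraction is a made-up number for illustration:

    /* Amdahl's law: speedup = 1 / ((1 - p) + p / n)
       p = parallel fraction of the work, n = core count.
       Build with: gcc -o amdahl amdahl.c */
    #include <stdio.h>

    int main(void)
    {
        double p = 0.80;    /* assumption: 80% of the job parallelizes */
        for (int n = 1; n <= 4; n *= 2) {
            double s = 1.0 / ((1.0 - p) + p / n);
            printf("%d cores @ 3.5 GHz -> %.2fx over one core\n", n, s);
        }
        /* A single 14 GHz core would be a flat 4.0x on everything;
           four 3.5 GHz cores top out at 2.5x when p = 0.80. */
        return 0;
    }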
On 6/21/2012 17:21, vinny wrote:
The biggest bottleneck at GHz speeds is not the bus capacity; it's the timing delay incurred if the processors are physically separated by more than a few centimeters. The latency created is way more of a loss than multiple cores on the same chip sharing the same bus.
The wavelength at 3.0 GHz is 10 centimeters. Losing one clock cycle every cycle to keep the two processors in sync effectively cuts the processor speed in half.
There are numerous other issues, but this is the biggest.
Others with more electronic experience may have more detailed answers,
and may have other reasons.
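The arithmetic behind that figure is easy to verify. A sketch in C assuming free-space propagation (signals in FR-4 board traces travel at roughly half that speed, so the real penalty is worse):

    /* Flight time between two separated processors vs. one clock period.
       Build with: gcc -o flight flight.c */
    #include <stdio.h>

    int main(void)
    {
        double c    = 3.0e8;   /* free-space propagation, m/s  */
        double f    = 3.0e9;   /* 3.0 GHz clock                */
        double dist = 0.10;    /* 10 cm between the processors */

        double period = 1.0 / f;      /* one clock cycle, seconds */
        double delay  = dist / c;     /* one-way flight time      */

        printf("clock period: %.3f ns\n", period * 1e9);
        printf("flight over %.0f cm: %.3f ns (%.1f cycles)\n",
               dist * 100.0, delay * 1e9, delay / period);
        return 0;
    }

At 3.0 GHz and 10 cm the flight time is exactly one full cycle, which is where the halved effective speed comes from.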
I'm sure distance is the biggest factor, but only because it's a central processor. All roads lead to Rome.
... What if there's no central processor? What if the processor didn't control the hard drive, but worked with the CPU in the hard drive? We are starting to see a little of that in graphics cards.
Genius, hard drives have had onboard processors since IDE came out about
25 years ago (on mainframes they had them long before that). Graphics
cards have had onboard processors about as long. You go on about "14 GHz". Please tell us who makes a 14 GHz GPU.
As for your cut-and-paste, note that the Pentium-D is about three generations old and was playing catch-up with AMD. Find similar statements about Sandy Bridge or Magny-Cours.
Are you friggin' retarded? Can you read English?
I was suggesting, instead of a 3.5 GHz quad, make a 14 GHz single.
Which would be faster for the same total GHz?
Damn, you're dumber than a box of rocks; even jb can read and reply semi-coherently.
The A/D converter and its support circuits in digital storage scopes
exceed 14 GHz easily.
At what price? A lot of tricks went into that design, like multiple parallel converters. If the CPU were capable of 14 GHz operation, it would need terabytes of RAM to store a waveform. Some early scopes like that downconverted the signal before processing. You don't have to do brute-force sampling to display a waveform.
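Those parallel converters work by time interleaving: N slow ADCs take turns, each triggered 1/N of the aggregate sample period after the previous one, and the outputs are merged in order. A toy model in C; everything here (the 1 MHz test tone, the 4-way split at 250 MS/s each) is invented for illustration:

    /* Time-interleaved sampling: four slow ADCs in rotation look like
       one ADC running four times as fast.
       Build with: gcc -o ilv ilv.c -lm */
    #include <stdio.h>
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define N_ADC 4

    /* stand-in for one slow converter digitizing the input at time t */
    static double adc_sample(double t)
    {
        return sin(2.0 * M_PI * 1.0e6 * t);   /* pretend 1 MHz input */
    }

    int main(void)
    {
        double slow_rate   = 250e6;                      /* each ADC: 250 MS/s */
        double fast_period = 1.0 / (slow_rate * N_ADC);  /* aggregate: 1 GS/s  */

        for (int i = 0; i < 16; i++) {
            int which = i % N_ADC;        /* converters take turns       */
            double t  = i * fast_period;  /* offsets of 1/N of a period  */
            printf("sample %2d from ADC%d: %+.4f\n", i, which, adc_sample(t));
        }
        return 0;
    }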
Vital Industries' 'Squeeze Zoom' used a Z80B for real-time, broadcast-quality NTSC video. It was the first studio-grade DVE. The video A/D converter module was $1400, used. It had 768 KB of 12-bit RAM (1K*1, 1000-nanosecond TI RAM that was obsolete by the middle '80s). It was interleaved into blocks that allowed the slow CPU to read or write video to multiple groups, to raise the apparent access speed. It processed two video frames at once so it could perform framestore as well as do video effects. That allowed a non-synced video source to be imported into the system without external hardware. It also simplified the process of generating the special effects. It was probably the only Z80 system that had people lined up to pay $250,000 for a full rack of hardware. It had a 1000 A, 5 V LINEAR power supply for the RAM, fed from 3-phase 480 V, and a dozen other DC supplies for various analog & digital circuits.
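That block interleaving is presumably plain low-order interleaving: consecutive addresses land in different RAM groups, so while one group's 1000 ns cycle finishes, the next access is already under way in another group. A sketch of the address-to-bank mapping in C; the bank count of 8 is my assumption, since the actual grouping isn't stated:

    /* Low-order memory interleaving: spread consecutive addresses
       across banks so one slow bank cycle doesn't stall the next access.
       Build with: gcc -o banks banks.c */
    #include <stdio.h>

    #define N_BANKS 8   /* assumed; the real machine's grouping isn't given */

    int main(void)
    {
        for (unsigned addr = 0; addr < 16; addr++) {
            unsigned bank = addr % N_BANKS;  /* low bits select the bank  */
            unsigned row  = addr / N_BANKS;  /* high bits index within it */
            printf("addr %2u -> bank %u, row %u\n", addr, bank, row);
        }
        return 0;
    }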
The RCB-2000 had a pair of D/A converters doing 90 million samples
per second, but the CPU was a lot slower.
None of these even approaches a 14 GHz CPU, or the RAM that would support it. Look at the speed of RAM used in a PC compared to the processor speed: 400 MHz vs. 3.0 GHz. The biggest bottleneck is moving data into and out of a CPU. That's why you have a small prefetch cache and other tricks to speed up the PC.
True, my 1980s-vintage HP 54111D digital scope uses four 250 MSa/s A/D converters with analog sample-and-holds operating sequentially to capture 1 GSa/s. The TI flash A/Ds I used in the radio were rated to ~800 MSa/s.
We built a dual-channel, DSP-based telemetry system that sampled at up to 90 MHz. The design could have gone higher, but customers wanted to be able to phase them into existing analog-based earth stations with a 70 MHz IF. The 70 MHz IF output was regenerated with a D/A converter. That fed their wide-band data recorders so the data could be analyzed at a later date. All of the signal processing was controlled by a fairly slow embedded controller running custom C-based software. That was my last design group before I ended up on disability, and things were getting interesting. I had over 30 pages of D-sized schematics on my bench while I worked on the test procedures. :)
Some pages were filled with FIR filters & DSP ICs. All had over 200 pins with cryptic assignments. I had to fight to get the D-size prints. The idiot in the print room refused to give me anything but A-size sheets. I had to collar a VP and send him to get them for me. :(