Scaling beyond 130 nm: dead or alive?

In two related articles from

formatting link
they seem to claim completely different things:

One is here "Intel Begins $2 Billion Conversion Of Arizona Factory to Start

65 nm"
formatting link
another "Scaling dead at 130-nm, says IBM technologist"
formatting link
I'm really confused!!! One say we will never go over 90nm, and at the same time Intel spends a lot of money to build a factory for 65nm!

Your comments.

Reply to
Andrew

I think the point is that simple scaling - "die shrinks" - is no longer feasible the way it was in the past. Time was, when you got better process resolution, you could just crank the optics to reduce an existing mask set. The physics has changed nonlinearly, so new device designs are needed to exploit sub-100 nm features. Several people have 90 nm stuff in production, but it ain't easy. At 50 nm and down, whole new devices, FinFETs or something, will be needed.

John

Reply to
John Larkin

Not to mention that smaller feature size is not the major driver of so-called "Moore's Law"; crystal defect density has been the major driver in recent years. Moore's Law says the number of transistors per device doubles every two years. Scaling down the linear dimension by 2 increases the transistor density by 4, but that's hard to do. Decreasing the defect density instead allows you to build larger chips that can still be manufactured with acceptable yield. Feature size may hit a law-of-physics stumbling block, but that will not happen to advances in reducing defect density. Until we have single chips the size of dinner plates, there will be room for improvement based on defect density alone.
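A quick way to see why defect density matters so much: under a simple Poisson yield model (my own back-of-the-envelope assumption here, not anything from the articles), yield falls off exponentially with defect density times die area, so cutting the defect density in half buys roughly the same yield as halving the die.

# Toy Poisson yield model: Y = exp(-D0 * A), where D0 is defects per cm^2
# and A is die area in cm^2. Illustrative numbers only, not any fab's data.
import math

def poisson_yield(defect_density_per_cm2, die_area_cm2):
    """Fraction of dice expected to be defect-free."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

for d0 in (1.0, 0.5, 0.25):            # defects per cm^2
    for area in (0.5, 1.0, 2.0):       # die area in cm^2
        y = poisson_yield(d0, area)
        print(f"D0={d0:4.2f}/cm^2  A={area:3.1f} cm^2  yield={y:6.1%}")

At D0 = 1.0 and A = 2.0 cm^2 the model predicts about 14% yield; drop D0 to 0.25 and the same big die yields about 61%, which is the sense in which defect density, not feature size, sets how large a chip you can afford to build.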

Reply to
Mark Thorson

Good point. It's easier to cool a bigger chip, too.

Of course the other problem is: how much engineering does it take to design a working billion-gate chip, and then who wants it?

Electronics is just about 100 years old, approaching a mature industry.

John

Reply to
John Larkin

Various sources cite one year, 18 months, or two years as the doubling period, and cite number of devices, "complexity", power consumption, and reliability as the factors and objectives.

formatting link
Moore's paper appeared in 1965, when he was with Fairchild Electronics, in Electronics Magazine, back when that was still a meaningful publication.

Reply to
Richard Henry

Right, I agree that Moore's Law can be driven by factors other than scaling. But then I don't understand what they mean by "65 nm technology (process)", or even "45 nm"; look here

formatting link
- "East Fishkill, N.Y. and Seoul, Korea - March 5, 2004 - Samsung Electronics joins a strategic semiconductor technology development partnership with IBM, Chartered Semiconductor Manufacturing and Infineon. Initially, the four companies will focus on 65 nanometer (nm) technology and will expand, over time, to include 45 nm process development."

Do they mean something like an "equivalent device size", the way it's done for gate dielectric thicknesses? Or is 45 nm a real FET dimension?

Reply to
Andrew

Except that you can't cool them, and they'll tear the solder balls right out of the module as they heat up. Maybe we can use elastic silicon ;-)

Cheers,

Phil Hobbs

Reply to
Phil Hobbs

Long before we reach that point, we will probably be forced to transition from dissipative logic to reversible logic. (The only computing operations that _have_ to generate heat are I/O operations.)

Nanotech will probably also require reversible logic, in order to avoid cooking itself...

For links to reversible computing and its relation to nanotechnology, see .

For one group that has performed some excellent experimental research into reversible computing, including building some prototype devices, see: .

-- Gordon D. Pusch

perl -e '$_ = "gdpusch\@NO.xnet.SPAM.com\n"; s/NO\.//; s/SPAM\.//; print;'

Reply to
Gordon D. Pusch

Maybe it's not so obvious from the eye-catching title, but the EE Times text moderates it a little to say that "_traditional_ scaling" is dead. So it depends on what you mean by traditional scaling.

What do we mean by scaling anyway? Dennard at IBM set up some "rules" way back in the 20th century saying CMOS designs can be "scaled" by a simultaneous, correlated reduction in things like oxide thickness, length, width, supply voltage, junction depth, and doping. The idea being that you can take any CMOS VLSI chip, multiply everything by the magic scaling number, run it through newer tools and {POOF} like a cookie recipe: you get the same design at lower cost per chip. The magic number is traditionally set at a 71% shrink per generation (e.g. 65 nm / 90 nm), so that area shrinks by half a la Moore's law.

Truth is, it hasn't worked out quite so neatly. And each generation it's getting harder to pretend that "scaling" applies at all. Parameters "scale" unevenly -- at "65nm" maybe only one parameter is actually equal to or less than 65 nm -- in spite of what the "rules" say it should be. I get the sense that the IBM guy is talking about these rules being dead rather than CMOS being dead.
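Just to make the arithmetic concrete, here's what one generation of "ideal" Dennard-style scaling by the magic factor would look like. The starting values below are hypothetical, picked only to show the bookkeeping, not taken from any real process:

# One generation of ideal Dennard-style scaling by a linear factor k ~ 0.71.
# Starting values are hypothetical, for illustration only.
k = 0.71

params = {
    "gate length (nm)":     90.0,
    "oxide thickness (nm)":  1.8,
    "supply voltage (V)":    1.2,
}

for name, value in params.items():
    print(f"{name:21s} {value:6.2f} -> {value * k:6.2f}")

# Area scales as k^2 ~ 0.5, so transistor density roughly doubles.
print(f"relative die area for the same circuit: {k**2:.2f}x")

The point of the thread, of course, is that real 65 nm processes don't actually shrink every one of those parameters by 0.71; the node name survives even when the individual numbers no longer follow the recipe.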

Just FYI, these 90/65/45 nm "node" values are becoming something of a marketing gimmick... like CPU clock speed. For example, read:

formatting link
But you'd better believe that if Intel spends $2B, there is still some economic sense to it. *Something(s)* about the new process will be 65-ish nm or smaller. To make economic sense it will also be faster or cheaper or more complex or satisfy some other definition of "better". In the end, the rules that count {for this scope of discussion anyway} are economic rather than physical.

-Lee

Reply to
lee_posts04

Yeah, I know--I work in the building where reversible computing was invented, many moons ago. If CMOS is really running into a brick wall, though, there'll be enough blood on the landscape that we may all be out of work before reversible computing becomes practical (if that ever happens). There's a _lot_ of work going on just now on how to cool next-generation CMOS without circulating water right to the chip level. People are even talking about cutting channels into the back surface of the chips, to run cooling water. I don't think things are quite that desperate, but we're clearly in a new ballgame.

Cheers,

Phil Hobbs

Reply to
Phil Hobbs

This is exactly my question. Are those 65 or 45 nm real CMOS dimensions, or are they "equivalent" numbers that only stand for higher computing power? If it's the latter, then some definition of the "equivalence" must already have been published, but I've never heard of one yet.

Reply to
Andrew

That's Fairchild Camera and Instrument, not Fairchild Electronics.

Reply to
Mark Thorson

That's like asking "Who would ever need more than 64K of RAM?".

Reply to
Mark Thorson

Nothing grows exponentially forever. The problem in today's electronics industry is increasingly the difficulty of finding a "killer app" to absorb the incredible amount of compute and storage capacity now available. Most people haven't bothered to install 2 GB of RAM in their PCs, even though it's now cheaper than 64K was a decade or so ago. My 700 MHz Dell has 128 MB of RAM and works fine, even for circuit simulation and CAD use.

Huge-capacity nanotech data storage (if it ever works) may just be in time for the tail end of a boom, when nobody has a really good use for it. Storing lots of violent movies might be cool, but is hardly any sort of boon to humanity.

John

Reply to
John Larkin

I wish somebody would start mass-producing cheap bulk monocrystalline diamond, preferably the single-isotope kind.

John

Reply to
John Larkin

You will need it to run a) the latest release of Windows, and b) to be able to run any two Adobe tools simultaneously. :-)

Reply to
Mark Thorson

Close. The connections will be compliant, as in packages pioneered by Tessera

formatting link
To connect to a whole wafer, you might use spring-based probes, like FormFactor does
formatting link
Of course, why would a wafer the size of a dinner plate need to have any large number of connections? If it has a video output, stereo output, a couple of USB ports, and a few other things, that can be done in a small number of pins. You could put them in the center, as is done now for flip-chip, to minimize solder ball stress caused by temperature cycling and CTE mismatch. By then, of course, we won't be using lead-based solder anymore. Raising this issue is like proving airplanes are impossible because physical laws dictate a minimum power-to-weight ratio for steam engines.

Reply to
Mark Thorson

We obviously don't live in the same world. People where I work actually have to do this stuff for real, and it isn't anything like as simple as you make out. Even with perfectly matched CTEs, just the *gradients* due to a 100 W/cm**2 power dissipation level make everything want to roll up into a ball. Talking as though the interconnect were just a matter of plugging a keyboard and USB cable into a wafer-scale device shows very little understanding of the problems of the day.

Interconnect densities are currently above 3000 leads per module, and heading up--one high-performance design study I'm aware of needs over 7000, just to get all the terabits per second off the module. (Current products are around 1 Tb/s total off-board I/O for a board with one module on it, and it's going to go much higher.) Just powering the thing takes 100 amps per square centimetre of chip surface, from a 1-V power supply, whose total impedance has to be well below *1 milliohm* at all frequencies of interest. Power, ground, and bypass caps have to be sprinkled *very* uniformly across the face of the chip, just to avoid logic errors due to power and ground bounce.
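To put rough numbers on the power-delivery point (my own back-of-the-envelope arithmetic with a made-up die size, not figures from the design study mentioned above):

# Hypothetical 1 V part drawing 100 A/cm^2 over a 2 cm^2 die.
# Illustrative budget only.
vdd = 1.0                      # supply voltage, V
current = 100.0 * 2.0          # total current, A (100 A/cm^2 * 2 cm^2)

ripple_budget = 0.05 * vdd     # allow ~5% supply noise
z_max = ripple_budget / current

print(f"current draw:          {current:.0f} A")
print(f"allowed ripple:        {ripple_budget * 1000:.0f} mV")
print(f"max supply impedance:  {z_max * 1000:.2f} milliohm")

That works out to roughly a quarter of a milliohm across the whole frequency range of interest, which is why the bypass network has to be spread so uniformly over the face of the chip.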

Spring clips and so on lead to scrubbing action at connector surfaces, which is a reliability headache. You can't simply suspend a dinner-plate-sized module on legs near the centre, because it won't stand shock testing, and has to hold up a big copper plate heat sink. And so on and so on.

These things can be overcome--comparable ones in the past were--but in real engineering, it has to be cheap and reliable as well as everything else. Huge chips are probably not the most cost-effective solution.

Cheers,

Phil Hobbs

Advanced Optical Interconnect, IBM T.J. Watson Research Center, Yorktown Heights, NY

Reply to
Phil Hobbs

(I posted this earlier on comp.arch, but perhaps someone here may know the answer...)

On thermodynamic grounds it's expected that reversible computing could reduce heat dissipation - very relevant today since heat dissipation is becoming an important limiting factor on the performance of computers.

At first glance, though, it would seem that running most algorithms on a reversible computer would just replace the power consumed erasing N words of memory with the consumption of N words of memory - which means you quickly run out of memory and have to switch to irreversible mode and erase used memory cells after all, so no gain.

But on

formatting link
I found the following claim:

"The actual limitations of reversible computing are small:

The number of bits input to the computation must be the same as the number of bits that the computation outputs. Call this N. The number of bits that a reversible computation needs to remember at any point is also N. Given a irreversible computation with Ni input bits and No output bits, it is possible to produce a reversible computation with N not greater than Ni + No." Does anyone know of any examples of how this might be done with practical algorithms? (Google shows me lots of articles on how to reversibly do the equivalent of NAND, but I'm interested in how to reversibly do things like sorting, matrix multiplication or alpha-beta minimax.)
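The standard trick is Bennett's "compute, copy, uncompute" construction: run the algorithm while recording every otherwise-destructive step, copy the answer out, then replay the record backwards so all the scratch space returns to its initial state instead of being erased. Below is a toy, purely classical illustration of the idea applied to sorting; it's my own sketch under that model, not code from the linked page, and it only models reversibility at the level of swaps and a history list.

# Toy model of Bennett's compute / copy / uncompute scheme, applied to sorting.
# "Reversible" here means every step is later undone, so the scratch history
# ends up empty rather than being erased irreversibly.

def reversible_sort(data):
    state = list(data)
    history = []                       # scratch record of every swap

    # 1. Compute: bubble sort, logging each swap so it can be undone later.
    n = len(state)
    for i in range(n):
        for j in range(n - 1 - i):
            if state[j] > state[j + 1]:
                state[j], state[j + 1] = state[j + 1], state[j]
                history.append(j)

    # 2. Copy: the only step that commits bits to the output.
    result = list(state)

    # 3. Uncompute: replay the swaps in reverse, restoring the input and
    #    draining the scratch history to empty.
    while history:
        j = history.pop()
        state[j], state[j + 1] = state[j + 1], state[j]

    assert state == list(data)         # input restored, scratch empty
    return result

print(reversible_sort([3, 1, 4, 1, 5, 9, 2, 6]))

Note that the history in this naive version grows with running time (the number of swaps), which is far more than the Ni + No bound in the quoted claim; getting the live scratch space down toward Ni + No requires the more elaborate checkpoint-and-uncompute tradeoffs Bennett and others later worked out, which is presumably the practical question being asked here.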

Reply to
Russell Wallace

You're right about the movies. But if you try to install and run some 3D software, or just some of the graphics tools commonly used on the market, you'll see the reason for installing some extra RAM, etc.

Also, I bet a new Windows would kill your machine, if its "system requirements" let you run the installation at all. Ordinary word-processing software shouldn't take this much of your CPU, right? Try to find a version of MS Word that could be launched on your old trusty 386 machine. Were those versions bad? Raising our hardware requirements is part of the game. The days when developers cared about the memory used by the code they wrote are all gone.

Uytkownik "John Larkin" napisa w wiadomoci news: snipped-for-privacy@enews3.newsguy.com...

Reply to
Paweł Kasprzak
