Scaling beyond 130 nm: dead or alive?

In two related articles from www.physnews.com, they seem to claim completely different things.
One is "Intel Begins $2 Billion Conversion Of Arizona Factory to Start 65 nm": http://www.physorg.com/news52.htm
The other is "Scaling dead at 130-nm, says IBM technologist": http://eetimes.com/semi/news/showArticle.jhtml?articleId=502091
I'm really confused! One says we will never get past 90 nm, while at the same time Intel is spending a lot of money to build a factory for 65 nm!
Your comments?

I think the point is that simple scaling - "die shrinks" - is no longer feasible the way it was in the past. Time was, when you got better process resolution, you could just crank the optics to reduce an existing mask set. The physics has changed nonlinearly, so new device designs are needed to exploit sub-100 nm features. Several people have 90 nm parts in production, but it ain't easy. At 50 nm and below, whole new devices - FinFETs or something - will be needed.
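A back-of-the-envelope sketch, in Python, of what a plain optical shrink used to buy; the node numbers are just the ones from this thread, and the calculation ignores yield and edge effects:

# Die-shrink arithmetic (illustrative sketch only).
old_node = 130e-9  # metres, old feature size
new_node = 90e-9   # metres, new feature size

linear_shrink = new_node / old_node   # ~0.69
area_shrink = linear_shrink ** 2      # ~0.48: each die needs about half its old area

print(f"linear shrink factor: {linear_shrink:.2f}")
print(f"area shrink factor:   {area_shrink:.2f}")
print(f"dice per wafer gain:  ~{1 / area_shrink:.1f}x (ignoring yield and edge effects)")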
John
John Larkin wrote:

Not to mention that smaller feature size is not the major driver of so-called "Moore's Law"; crystal defect density has been the major driver in recent years. Moore's Law says the number of transistors per device doubles every two years. Scaling the linear dimension down by 2 increases transistor density by 4, but that's hard to do. Decreasing the defect density instead lets you build larger chips that can still be manufactured with acceptable yield. Feature size may hit a law-of-physics stumbling block, but advances in reducing defect density will not. Until we have single chips the size of dinner plates, there will be room for improvement from defect density alone.
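To see why defect density matters so much, here is a toy calculation using the simple Poisson yield model Y = exp(-A*D0), where A is die area and D0 is defect density; the numbers are invented for illustration, not figures for any real process:

import math

def poisson_yield(die_area_cm2, defect_density_per_cm2):
    """Simple Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Invented numbers: two defect densities, two die sizes.
for d0 in (1.0, 0.2):                 # defects per cm^2
    for area in (1.0, 4.0):           # die area in cm^2
        print(f"D0={d0} /cm^2, A={area} cm^2 -> yield {poisson_yield(area, d0):.1%}")

With these made-up numbers, cutting D0 from 1.0 to 0.2 defects/cm^2 takes a 4 cm^2 die from roughly 2% yield to roughly 45%, which is the sense in which defect density sets how big a chip you can afford to build.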

Good point. It's easier to cool a bigger chip, too.
Of course the other problem is: how much engineering does it take to design a working billion-gate chip, and then who wants it?
Electronics is just about 100 years old, approaching a mature industry.
John
John Larkin wrote:

That's like asking "Who would ever need more than 64K of RAM?".

Nothing grows exponentially forever. The problem in today's electronics industry is increasingly the difficulty of finding a "killer app" to absorb the incredible amount of compute and storage capacity now available. Most people haven't bothered to install 2 GB of RAM in their PCs, even though it's now cheaper than 64K was a decade or so ago. My 700 MHz Dell has 128 MB of RAM and works fine, even for circuit simulation and CAD use.
Huge-capacity nanotech data storage (if it ever works) may just be in time for the tail end of a boom, when nobody has a really good use for it. Storing lots of violent movies might be cool, but is hardly any sort of boon to humanity.
John
John Larkin wrote:

You will need it to run (a) the latest release of Windows, and (b) any two Adobe tools simultaneously. :-)
You're right about the movies. But if you try to install and run some 3D software, or just some of the graphics tools commonly used on the market, you'll see the reason for installing extra RAM and so on.
Also, I bet a new version of Windows would kill your machine, even if the "system requirements" let you run the installer at all. Ordinary word-processing software shouldn't take that much of your CPU, right? Try to find the version of MS Word that could be launched from your old trusty 386 machine. Were those versions bad? Raising the hardware requirements is part of the game. The days when developers cared about the memory used by the code they wrote are long gone.
The previous poster wrote:


Various sources cite 1 year, 18 months, or two years as the doubling period, and cite the number of devices, "complexity", power consumption, and reliability as the factors and objectives.
http://www.google.com/search?sourceid=navclient&ie=UTF-8&oe=UTF-8&q=moore%27s+law
Moore's paper appeared in 1965 in Electronics Magazine, back when that was still a meaningful publication; he was with Fairchild Electronics at the time.
Richard Henry wrote:

That's Fairchild Camera and Instrument, not Fairchild Electronics.
Right, I agree that Moore's Law can be driven by factors other than scaling. But then I don't understand what they mean by "65 nm technology (process)", or even "45 nm"; look here: http://www.physorg.com/news74.html - "East Fishkill, N.Y. and Seoul, Korea - March 5, 2004 - Samsung Electronics joins a strategic semiconductor technology development partnership with IBM, Chartered Semiconductor Manufacturing and Infineon. Initially, the four companies will focus on 65 nanometer (nm) technology and will expand, over time, to include 45 nm process development."
Do they mean something like an "equivalent device size", as with gate dielectric thicknesses? Or is 45 nm a real FET size?

These numbers are characteristic dimensions of features (DRAM half-pitch, etc.). Essentially, you can think of these numbers as the gate width.
There does not seem to be any reason we won't be able to make things this small. The big problem is the on-chip wiring. Right now there are about nine layers of thin, skinny copper wiring on top-of-the-line chips (top-of-the-line in terms of manufacturing achievement, not performance).
At some point you would be making tiny transistors buzzing away like crazy that sit farther apart from each other than in the previous generation, because the wiring would limit performance too much if they were spaced any closer. The cost advantage of putting more transistors per unit area would disappear, and the cost would go up fast.
This is why people are looking at 3D ICs and such: interconnection.
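A crude way to see why the wiring, rather than the transistors, becomes the bottleneck: the RC delay of a fixed-length wire grows as its cross-section shrinks, while gate delay keeps falling. The Python sketch below uses textbook resistance and parallel-plate capacitance formulas with invented dimensions, not data for any real process:

# Rough RC estimate for a long on-chip copper wire (illustrative numbers only).
rho_cu = 1.7e-8   # ohm*m, bulk copper resistivity (real scaled wires are worse)
eps0 = 8.85e-12   # F/m, permittivity of free space
k_ild = 3.0       # assumed relative permittivity of the inter-level dielectric

def wire_rc_delay(length_m, width_m, thickness_m, spacing_m):
    """Very rough distributed-RC delay (0.38*R*C); ignores fringing and layers above/below."""
    r_total = rho_cu * length_m / (width_m * thickness_m)
    c_total = 2 * k_ild * eps0 * thickness_m * length_m / spacing_m  # sidewall C to both neighbours
    return 0.38 * r_total * c_total

# A 1 mm run at two (made-up) generations of wire cross-section.
for w in (200e-9, 100e-9):
    d = wire_rc_delay(length_m=1e-3, width_m=w, thickness_m=2 * w, spacing_m=w)
    print(f"width {w * 1e9:.0f} nm -> ~{d * 1e12:.0f} ps for a 1 mm run")

With these numbers, halving the wire width roughly quadruples the delay of a 1 mm run, which is why long wires get fat upper-level metal and repeaters instead of simply scaling along with the transistors.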
---> Insert shameless plug for the article I wrote in the March 2004 issue of IEEE Spectrum. The money keepers need to understand this stuff before the engineers have a chance to do what they need to do. <---
John Baliga snipped-for-privacy@triton.edu

Dear John,
Thank you for that answer. I do agree with you about the interconnect problem.
But from your post it also sounds as if the interconnect problem is the only problem that exists, since you write that "there does not seem to be any reason" we won't be able to make things this small.
I cannot agree with that.
I looked at the International Technology Roadmap for Semiconductors web site, http://public.itrs.net/ . They have a list of Grand Challenges for the near term (through 2007) and the long term (2008 and beyond). In first place they put low gate leakage current, saying that high-k dielectrics must already be implemented in 2005 (to me they already are; look, for instance, here: http://www.physorg.com/news80.html ). Then they mention large-area substrates (300 mm). Then several problems with lithography are listed, such as mask making and process control. And finally we come to interconnect, which is also a great challenge.
See the whole list of challenges here http://public.itrs.net/Files/2002Update/2002Update-GrandChallenges.pdf
Best regards, Andrew
Mark Thorson wrote:

Except that you can't cool them, and they'll tear the solder balls right out of the module as they heat up. Maybe we can use elastic silicon ;-)
Cheers,
Phil Hobbs

Long before we reach that point, we will probably be forced to transition from dissipative logic to reversible logic. (The only computing operations that _have_ to generate heat are I/O operations.)
Nanotech will probably also require reversible logic, in order to avoid cooking itself...
For links to reversible computing and its relation to nanotechnology, see <http://www.zyvex.com/nanotech/reversible.html>.
For one group that has performed some excellent experimental research into reversible computing, including building some prototype devices, see <http://www.elis.rug.ac.be/ELISgroups/solar/projects/computer.html>.
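For scale, the thermodynamic floor this argument rests on is Landauer's bound: erasing one bit must dissipate at least kT*ln(2). A quick Python sketch of what that comes to at room temperature, compared against an assumed (purely illustrative) 1 fJ per CMOS switching event:

import math

k_B = 1.380649e-23   # J/K, Boltzmann constant
T = 300.0            # K, roughly room temperature

landauer_per_bit = k_B * T * math.log(2)   # minimum energy to erase one bit
print(f"Landauer limit at 300 K: {landauer_per_bit:.2e} J per erased bit")

assumed_cmos_switch_energy = 1e-15   # J, illustrative assumption, not a measured figure
print(f"An assumed 1 fJ CMOS switching event is "
      f"~{assumed_cmos_switch_energy / landauer_per_bit:.0f}x above that limit")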
-- Gordon D. Pusch
perl -e '$_ = "gdpusch\@NO.xnet.SPAM.com\n"; s/NO\.//; s/SPAM\.//; print;'
Gordon D. Pusch wrote:

Yeah, I know--I work in the building where reversible computing was invented, many moons ago. If CMOS is really running into a brick wall, though, there'll be enough blood on the landscape that we may all be out of work before reversible computing becomes practical (if that ever happens). There's a _lot_ of work going on just now on how to cool next-generation CMOS without circulating water right to the chip level. People are even talking about cutting channels into the back surface of the chips, to run cooling water. I don't think things are quite that desperate, but we're clearly in a new ballgame.
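For perspective on the water-cooling talk, a rough energy balance (P = m_dot * c_p * dT) with assumed numbers, not anything from IBM; the total flow needed is modest, and the hard part is getting the water close enough to the junctions and spreading it uniformly:

# Coolant-flow estimate from an energy balance (illustrative assumptions only).
power_density = 100.0   # W per cm^2 of chip
chip_area = 4.0         # cm^2, assumed die size
c_p_water = 4186.0      # J/(kg*K), specific heat of water
delta_T = 10.0          # K, allowed coolant temperature rise

power = power_density * chip_area          # total heat load, W
m_dot = power / (c_p_water * delta_T)      # kg/s of water
litres_per_min = m_dot * 60.0              # ~1 kg of water per litre

print(f"Heat load: {power:.0f} W")
print(f"Required water flow: ~{litres_per_min:.2f} L/min for a {delta_T:.0f} K rise")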
Cheers,
Phil Hobbs
On 8 May 2004 06:00:23 GMT, Phil Hobbs wrote:

I wish somebody would start mass-producing cheap bulk monocrystalline diamond, preferably the single-isotope kind.
John
On 8 May 2004 03:21:19 GMT, g_d_pusch_remove snipped-for-privacy@xnet.com (Gordon D. Pusch) wrote:

(I posted this earlier on comp.arch, but perhaps someone here may know the answer...)
On thermodynamic grounds it's expected that reversible computing could reduce heat dissipation - very relevant today, since heat dissipation is becoming an important limiting factor on the performance of computers.
At first glance, though, it would seem that running most algorithms on a reversible computer would just trade the power consumed in erasing N words of memory for the consumption of N words of memory - which means you quickly run out of memory and have to switch to irreversible mode and erase used memory cells after all, so there is no gain.
But on http://www.kuro5hin.org/story/2003/9/8/14125/70302 I found the following claim:
"The actual limitations of reversible computing are small:
The number of bits input to the computation must be the same as the number of bits that the computation outputs. Call this N. The number of bits that a reversible computation needs to remember at any point is also N. Given a irreversible computation with Ni input bits and No output bits, it is possible to produce a reversible computation with N not greater than Ni + No." Does anyone know of any examples of how this might be done with practical algorithms? (Google shows me lots of articles on how to reversibly do the equivalent of NAND, but I'm interested in how to reversibly do things like sorting, matrix multiplication or alpha-beta minimax.)
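One standard answer is Bennett's compute-copy-uncompute trick: run the computation while recording every step, reversibly copy the answer into a clean output register, then run the record backwards so all the scratch space returns to its initial state, leaving only the Ni input bits and No output bits. The toy Python sketch below illustrates only the bookkeeping - the "reversible machine" is simulated and the example function is arbitrary - and the intermediate history can still be large, which is exactly the time-space trade-off Bennett's later pebbling constructions address:

# Toy illustration of Bennett's compute / copy / uncompute pattern (not an efficient scheme).
def reversible_run(step, n_steps, x):
    """Run n_steps of `step`, logging every intermediate state so the run can be reversed."""
    history = [x]
    for _ in range(n_steps):
        x = step(x)
        history.append(x)
    return x, history

def uncompute(history):
    """Walk the history backwards; each pop undoes one logged forward step."""
    while len(history) > 1:
        history.pop()
    return history[0]   # back to the original input, scratch fully cleared

step = lambda v: (3 * v + 1) % 1024   # arbitrary iteration standing in for a real computation

x_in = 123                                      # the Ni input bits
result, hist = reversible_run(step, 50, x_in)
peak = len(hist)                                # scratch grows with the number of steps
output = result                                 # reversible copy of the answer (the No output bits)
x_back = uncompute(hist)                        # run the log backwards

print(f"input={x_in}, output={output}, peak history length={peak}, "
      f"input restored after uncompute: {x_back == x_in}")

For something like sorting, there is also a cheaper route: emit the permutation indices along with the sorted data, which makes the map a bijection and hence reversible without keeping any history at all.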
--
"Sore wa himitsu desu."
Phil Hobbs wrote:

Close. The connections will be compliant, as in packages pioneered by Tessera (http://www.tessera.com ). To connect to a whole wafer, you might use spring-based probes, like FormFactor does (http://www.formfactor.com ).
Of course, why would a wafer the size of a dinner plate need to have a large number of connections? If it has a video output, stereo output, a couple of USB ports, and a few other things, that can be done in a small number of pins. You could put them in the center, as is done now for flip-chip, to minimize solder-ball stress caused by temperature cycling and CTE mismatch. By then, of course, we won't be using lead-based solder anymore. Raising this issue is like proving airplanes are impossible because physical laws dictate a minimum power-to-weight ratio for steam engines.
[ Sci.nanotech moderator's note: The relevance of this thread to nanotechnology, while initially thin, appears to have run its course, so I am setting the "Followup-to" to exclude sci.nanotech. Posters are welcome to re-include sci.nanotech if they think their response is topical here. Now on with Phil Hobbs' interesting reply... -JimL ]
Mark Thorson wrote:

We obviously don't live in the same world. People where I work actually have to do this stuff for real, and it isn't anything like as simple as you make out. Even with perfectly matched CTEs, just the *gradients* due to a 100W/cm**2 power dissipation level make everything want to roll up into a ball. Talking as though the interconnect were just a matter of plugging a keyboard and USB cable into a wafer-scale device shows very little understanding of the problems of the day.
Interconnect densities are currently above 3000 leads per module, and heading up--one high performance design study I'm aware of needs over 7000, just to get all the terabits-per-second off the module. (Current products are around 1 Tb/s total off-board I/O for a board with one module on it, and it's going to go much higher.) Just powering the thing takes 100 amps per square centimetre of chip surface, from a 1-V power supply, whose total impedance has to be well below *1 milliohm* at all frequencies of interest. Power, ground, and bypass caps have to be sprinkled *very* uniformly across the face of the chip, just to avoid logic errors due to power and ground bounce.
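That milliohm figure falls straight out of Ohm's law once you pick an allowable droop; a quick sketch with assumed numbers (a 5% noise/droop budget, not IBM's actual one):

# Power-delivery impedance budget from Ohm's law (illustrative assumptions only).
supply_voltage = 1.0    # V
current = 100.0         # A per cm^2 of chip, as in the post above
allowed_droop = 0.05    # assume 5% of the supply as the total droop/noise budget

z_max = (allowed_droop * supply_voltage) / current   # ohms
print(f"Maximum power-distribution impedance: {z_max * 1e3:.2f} milliohm")
print(f"I^2*R loss in that impedance alone: {current ** 2 * z_max:.1f} W")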
Spring clips and so on lead to scrubbing action at connector surfaces, which is a reliability headache. You can't simply suspend a dinner-plate-sized module on legs near the centre, because it won't stand shock testing, and has to hold up a big copper plate heat sink. And so on and so on.
These things can be overcome--comparable ones in the past were--but in real engineering, it has to be cheap and reliable as well as everything else. Huge chips are probably not the most cost-effective solution.
Cheers,
Phil Hobbs
Advanced Optical Interconnect, IBM T.J. Watson Research Center, Yorktown Heights, NY
