On Mon, 10 Nov 2008 22:48:27 -0500 snipped-for-privacy@aol.com wrote:
| On 11 Nov 2008 02:01:39 GMT, snipped-for-privacy@ipal.net wrote:
|
|>On Sun, 09 Nov 2008 22:28:58 -0500 snipped-for-privacy@aol.com wrote:
|>| On 10 Nov 2008 02:18:03 GMT, snipped-for-privacy@ipal.net wrote:
|>|
|>|>If you are building a data center for a large number of computers (lots of PCs
|>|>and server sized PCs, with all the network infrastructure), and this involved
|>|>a large amount of power, and this were in a location where the standard user
|>|>line-to-neutral voltage is 120V, but you could get a line-to-neutral voltage
|>|>of 240V by special order, would you choose the latter to reduce wiring costs
|>|>and I2R losses? If the size matters, where would the point of indecision be?
|>|
|>| No question, get 240v wired L/L. That eliminates noise on the neutral
|>| and harmonic problems. Mainframe computer rooms didn't even bring a
|>| neutral into the panels back in the big iron days. Everything was
|>| 208/240..
|>| Virtually all PC system units will switch to 240 and you can order
|>| 240v monitors and other equipment.
|>
|>But not 240V wired L-N (as in 416Y/240)?
|>
| Why would you want to introduce the noise and harmonic problems you
| get with a neutral load?
These are not really eliminated by L-L loads. Going L-N just concentrates the issues wherever you have a shared neutral. You need panels that handle large neutral current, and wiring upstream from there sized accordingly. Branch circuits would have separate, non-shared neutrals.
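To put a number on the shared-neutral problem, here's a quick sketch (Python; the 10 A fundamental and 3 A third-harmonic figures are just assumed for illustration, not anything measured). In a balanced wye the 60 Hz fundamentals cancel in the neutral, but the third-harmonic current that switch-mode PC supplies draw is in phase on all three legs, so it adds:

import math

# Illustration only: assumed per-phase currents, amps peak.
I_FUND = 10.0   # 60 Hz fundamental
I_H3 = 3.0      # 3rd harmonic (typical of switch-mode supplies)

def phase_current(t, shift):
    wt = 2 * math.pi * 60 * t - shift
    return I_FUND * math.sin(wt) + I_H3 * math.sin(3 * wt)

N = 10000
acc = 0.0
for n in range(N):
    t = n / (N * 60.0)  # sample one 60 Hz cycle
    # A shared neutral carries the sum of all three phase currents.
    i_neutral = sum(phase_current(t, k * 2 * math.pi / 3) for k in range(3))
    acc += i_neutral ** 2
print("neutral RMS: %.2f A" % math.sqrt(acc / N))
# Fundamentals cancel, triplens add: prints ~6.36 A (= 3 * 3.0 / sqrt(2)),
# even though the phase loads are perfectly balanced.

So the harmonic current doesn't cancel in a shared neutral, it piles up, which is why those panels and feeders need full-size (or oversize) neutrals.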
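And on the original I2R question: for the same delivered power over the same copper, doubling the voltage halves the current and quarters the conductor loss. Same kind of back-of-envelope sketch (the 10 kW load and 0.05 ohm loop resistance are made-up figures):

LOAD_W = 10000.0   # assumed load per circuit, watts
R_LOOP = 0.05      # assumed round-trip conductor resistance, ohms

for volts in (120.0, 240.0):
    amps = LOAD_W / volts          # current drawn at this voltage
    loss = amps ** 2 * R_LOOP      # power burned in the wiring
    print("%3.0f V: %5.1f A, loss %6.1f W" % (volts, amps, loss))
# 120 V:  83.3 A, loss  347.2 W
# 240 V:  41.7 A, loss   86.8 W  -- a quarter of the 120 V figure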
|>Many mainframes I worked on back in the day used delta wired motor-generators
|>and produced 400 Hz (I never found out the voltage at this point) for the CPU
|>power systems.
|>
| The voltage on IBM systems was 208 on the 400 Hz supply.
Nice to know. Since it was a derived system, they could have made it whatever voltage they wanted to design their power supplies around.
|>For in home use, where the options are 120 L-N or 240 L-L, I'd like to go with
|>the latter. Proper surge protection and UPS systems are hard to find for that
|>configuration at home usage scale.
|
| They are available in the commercial market but not normally in the
| residential market.
And probably for 208 instead of 240.
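For anyone keeping score on where the numbers come from: on a 208Y/120 commercial wye service the phases are 120 degrees apart, so line-to-line is 120 x sqrt(3) = 207.8, or about 208V. Residential split-phase legs are 180 degrees apart, so you get the full 120 x 2 = 240V. The 416Y/240 I mentioned earlier is the same wye geometry scaled up: 240 x sqrt(3) = 415.7, about 416V line-to-line.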
| These days data centers are not particularly big power hogs. The Ramac
| racks were the biggest load (a buttload of 3.5" hard drives in a rack).
| Now that multi-T-byte drives are in the marketplace, they don't need
| that many drives. A pretty good size "data center" will go in a
| closet. That is one reason I got out of the business. You don't have
| to be an engineer to design a room with two or three racks in it and
| you don't have to be very skilled to fix a machine that has a blinking
| red light on the bad card.
The largest one I worked on had 3600 machines when I left in 1997, and has reportedly broken the 10000-machine barrier since. I'm sure it's mostly blade-type technology these days, so it could well draw a lot less power than before. But it's no closet operation. Other places I worked included ISPs with as many as 20 rows of racks (lots of routers, dial-in boxes, servers, etc.).
When I say "big", I don't mean a closet. Most businesses can put their "data center" in a closet if they are pressed for space (and many do even when they aren't). I'd rather spread things out when I can for the smaller ones, especially if *I* am the one who has to get into the machines to do work :-)