OT: Fahrenheit

Maintaining RH at 50% is mostly to make the paper (cards in the olden days) more machine-friendly. At lower RH static is a problem, and at higher RH the paper stiffness suffers and it swells, impairing feeding.

Reply to
gfretwell

Not true at all. A high RH contributes to failures in electronics as well. Even recent equipment is specified from 40-60% RH, over a fairly narrow temperature range.
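
For illustration, a minimal sketch of that kind of spec check. The 40-60% RH window is the figure quoted above; the temperature band is an assumed placeholder, since the post only says "a fairly narrow temperature range":

```python
# Sketch of an environmental spec check -- not any vendor's actual limits.
# The 40-60% RH window comes from the post above; the 64-80 F temperature
# band is an assumed stand-in for "a fairly narrow temperature range".

RH_MIN, RH_MAX = 40.0, 60.0            # percent, per the post
TEMP_MIN_F, TEMP_MAX_F = 64.0, 80.0    # degrees F, assumed for illustration

def in_spec(temp_f: float, rh_pct: float) -> bool:
    """True if a temperature/RH reading falls inside the assumed window."""
    return TEMP_MIN_F <= temp_f <= TEMP_MAX_F and RH_MIN <= rh_pct <= RH_MAX

if __name__ == "__main__":
    for temp_f, rh in [(72, 50), (72, 16), (94, 45)]:
        status = "in spec" if in_spec(temp_f, rh) else "OUT of spec"
        print(f"{temp_f} F / {rh}% RH -> {status}")
```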

Reply to
krw

A production laser printer will be wadding up paper long before the electronics start complaining. If the paper is too wet it will curl when it goes through the fuser. A big printer shoving that paper out at 3 pages a second will turn the stacker into something that looks like a carnation.

Reply to
gfretwell

That may be true, but it doesn't mean it's only the printers and card readers that are in controlled environments.

Reply to
krw

I think people are far too concerned with the rest of the electronics. DASD in a data center is just the same drives you have in your PC, piled in a big box these days, and the processors are not that much different from your PC's. It is certainly similar packaging. I have PCs running in totally unconditioned space in SW Florida with no problems. In fact, one survived a fire. Three are running in vehicles that see 130-140 F in the daytime and wide swings in RH. IBM started saying in the 80s that if the people could handle the environment, the computer could. 4300 mainframes and AS/400 midrange machines were "office environment" machines. It was really the big paper pushers that needed conditioned space.

Reply to
gfretwell

Mainframes are *not* specified for an office environment (rather "Class A"), though. There is a difference between a "departmental server" and a data center mainframe.

Reply to
krw

I am not sure what machines you are talking about, but 4300s and AS/400s were rated for office space. These were around before most people had ever heard of a server or a LAN.

Reply to
gfretwell

OK, let me try again, slower. AS/400s and 4300s are/were what we now call "departmental servers". /370s and ES/9000s were relegated to data centers and are rated for a "Class A" environment only. Note that an "office space" rating isn't exactly harsh either.

Reply to
krw

I wouldn't exactly call 4331, 4341, and 4381 class machines departmental servers. They were the replacement for the 370 Model 138-158 class machines. The AS/400 actually outperformed that series in black-box form. The word "mainframe" became fairly ambiguous anyway when they became nothing more than a rack of RISC cards. It is one reason I left. The computer business got very boring for a hardware guy. When the CPUs pumped water and the disk drives pumped oil it was fun to do. The hardware job became pluck and chuck. The physical planning rep job pretty much just went away too. What passes for mainframes these days would run fine in a warehouse.

BTW, offices are still FCC Class A environments; Class B is residential.

Reply to
gfretwell

That's exactly how they were used. BTW, the replacement for the 3138-3158 class was the 3031.

Is an xSeries a "mainframe"? Is it a "rack of RISC cards"?

You were a CE? Hardware development is still interesting.

I don't believe I said anything about the FCC. I didn't even know they cared about temperature or humidity.

Reply to
krw

You guys are in semi-violent agreement. Keith's first response was: "Not true at all. A high RH contributes to failures in electronics as well. Even recent equipment is specified from 40-60% RH, over a fairly narrow temperature range."

I call the "not true at all" part complete bullshit. What Greg said was 100% true. And the gratuitous "let me try again, slower" is another detractor.

Bottom line: human comfort and "equipment comfort" are roughly the same, with the "equipment comfort" range being wider than the human comfort range. Think about it - humans operate the equipment, and would not be willing to work in the thousands upon thousands of "normal" datacenters if the machinery could not function in office-like temperature and humidity. (Sorry - if you're in the military, you work where they tell you - but even then, if it's in a datacenter, it's likely to be comfortable.) In fact, humans usually get uncomfortable outside the 68-72 F range, on average. Datacenter machinery functions well outside of that range. The farther you depart from that 68-72, the more extensive the steps a human needs to take. Machines can't take those steps, so they will fail when the conditions are too far from nominal. What would be interesting is some real discussion of the specific numbers.

I'll give you five examples:

1) Peat Marwick Mitchell datacenter, early 70's. An air conditioner failure caused DASD (2314) data errors at exactly 94 degrees on their wall thermometer. It ran fine at 93.

2) Manufacturers Hanover Trust datacenter began losing equipment (power down) when the temperature went above 90 during a blackout (early 80's). They had emergency power to keep the data processing equipment running, but nothing to power the conditioners.

3) Bloomingdales (now part of Federated) datacenter, mid-late 70's. Red-light checks on the CPU (3138) whenever a metal cart carrying cards touched the CPU; random red-light CPU checks when loading paper in the 1403. Relative humidity was 16%. Raising it to 40% fixed the problem. No hardware was damaged. Interesting - with the lights off, when a new box of 1403 paper was opened and fanned out, you could see the discharge.

4) Divco Wayne had a building heat failure over the weekend (early-mid 70's). On Monday morning, the computer room was 30 degrees F. The damn system powered up and ran with no problems - but the 1416 print train ran audibly slow.

5) IBM datacenter, early 80's. A disk pack was transported in the trunk of a car, properly packed, but in sub-zero temperature. Upon arrival it was immediately placed in a 2314. The idiot who did it moved the pack to subsequent drives when it didn't work. 180 heads, 5 VCMs, and several days later, full service was restored. I guess by the 6th pizza oven he moved the pack soon enough that the VCM was not destroyed.

Specifically, the relative humidity spec is for static/paper "fatness". The equipment couldn't care less. It will run happily outside the range. But if the RH is too low, static discharge can occur, and that discharge can interfere with equipment operation. The equipment does not mind the low humidity, but it does mind the discharge. "Wet" paper, due to high humidity, does not do well in paper handling machinery in the datacenter. Feed the equipment "dry" paper & it performs flawlessly. I do not have statistics on "wet" paper - perhaps one of you can discuss that in more detail.
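
To put that in concrete terms, here is a rough sketch that maps a humidity reading to the failure modes described above. The 40% and 60% thresholds are the window quoted earlier in the thread, used only as illustrative boundaries, not a hard spec:

```python
# Rough illustration of the point above: the RH window guards against static
# (too dry) and paper-handling trouble (too wet), not the electronics itself.
# The 40/60 thresholds are the figures quoted earlier in the thread.

def humidity_risk(rh_pct: float) -> str:
    """Classify an RH reading by the failure mode discussed in the thread."""
    if rh_pct < 40:
        return "static-discharge risk: dry paper and carts build charge that can upset logic"
    if rh_pct > 60:
        return "paper-handling risk: swollen, curling paper jams printers and bursters"
    return "fine for both the paper and the people"

if __name__ == "__main__":
    # 16% is the Bloomingdales example above; 85% is roughly Florida outdoor air
    for rh in (16, 50, 85):
        print(f"{rh}% RH -> {humidity_risk(rh)}")
```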

Ed

Reply to
ehsjr

I remember the S/36s and the RS/6000s. Never got to deal with either of the above, but I did like the RS/6000s.

Reply to
T

Ah - we run something comparatively smaller in our office, with a pretty even mix of *nix and Windows servers. In total there are roughly 50 servers.

The room is supplied with power by an APC Symmetra that gives us nominally 15 minutes of backup power. That Symmetra also has an emergency kill switch and it's wired into the fire alarm system, so that when the sprinklers go off, all power to the room is cut.

The Symmetra also powers the cubes in the IT space. Right now we get 40 minutes of runtime out of it, but that's only because two of our employees like to have their heaters going full tilt. Otherwise it's over an hour.
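
As a back-of-the-envelope illustration of why the runtime swings that much, here is a sketch that scales runtime inversely with load. Every wattage figure is a made-up placeholder chosen only so the output roughly matches the numbers above; they are not the Symmetra's actual ratings or this site's measured loads, and real batteries deliver less than a linear model suggests at heavy load:

```python
# Toy UPS runtime estimate: runtime ~ stored energy / load.
# All wattages are hypothetical placeholders picked so the output lands near
# the figures in the post; they are not actual Symmetra ratings or site loads.

NOMINAL_RUNTIME_MIN = 15.0     # minutes at full rated load, per the post
RATED_LOAD_W = 20_000.0        # assumed full rated load (placeholder)

STORED_ENERGY_W_MIN = NOMINAL_RUNTIME_MIN * RATED_LOAD_W  # watt-minutes

def runtime_minutes(load_w: float) -> float:
    """Estimated runtime at a given load, using a simple linear model."""
    return STORED_ENERGY_W_MIN / load_w

if __name__ == "__main__":
    servers_and_cubes_w = 4_500.0   # hypothetical steady server + cube load
    heaters_w = 3_000.0             # two space heaters at ~1500 W each
    print(f"heaters on : {runtime_minutes(servers_and_cubes_w + heaters_w):.0f} min")
    print(f"heaters off: {runtime_minutes(servers_and_cubes_w):.0f} min")
```

With those placeholder loads the model gives about 40 minutes with the heaters on and roughly 67 minutes without, in line with the figures above.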

Overhead lighting and air conditioning are not on the UPS. However, there is a 125 kW natural-gas-fired generator out back that backs up the UPS and also supplies power not only to the overheads but to the HVAC system. We even ran a line out to the MDF in the building so Cox could take advantage of our generator in the event of a building-wide power failure. We weren't being altruistic; we just wanted to make sure our network connection stays up.

We also do quarterly tests of the power system, and the generator is set to do regular exercise runs.

That data center was my baby. And the redundancy built in shows it.

Reply to
T

Those were all designed for office environments. Mainframes most certainly were not, and it had nothing to do with paper (I/O was seldom in the same room).

Reply to
krw

I was region support for the 4300 and the 138/148. I don't know of ONE 370 138/148 customer who went for the 3031. It basically WAS a 158 (as were the service directors), so there would be no advantage in going from a 158 to a 3031. I was also trained on both the 158 and the 3031.

After my time but I bet it is.

CE, Support Specialist then later IPR and Contract Services.

Reply to
gfretwell

If you bring a pallet of paper in from outside in Florida (80-90% RH) and put it in a 3800, it will wad up so badly you can't stack more than about 200 pages without taking it out. Forget trying to run it through the burster. They usually tried to keep it in the A/C for several days before using it.

Reply to
gfretwell

We had a bunch of them but I tried to stay away from them. When they merged GSD and FE it got harder to do. I ended up working on the 3x and was trained on the RS/6000 and AS/400. I was the 7800 (TP support) guy, so mostly I did the communication end. All of those boxes were basically solid except the DASD, and that was just a software nightmare, not a hardware problem. Once they started using RAID 5 they were a no-brainer. My boss had a real sense of humor and sent me to Series/1 school the week after I got back from 3090 support school. It was the only school I walked out of.

Reply to
gfretwell

Any of the air-cooled machines could run damn near anywhere. When I got to Florida (from the glass-house data centers of DC) I saw it happening. A "computer room" was a bay in a strip mall or industrial center. That was also the first time I ran into red-leg delta power, and the first time I saw "no raised floor" since the 1401 and Mod 30 days.

Reply to
gfretwell

^^^^^^^ channel

The channel directors offloaded all the I/O microcode. The 3031 was significantly faster than a 3158 because of the director. IIRC they were pretty cheap too.

It's not after mine. ;-) Nope. /360 is hardly RISC. The processor complex is an MCM.

Reply to
krw

ehsjr wrote: [...]

Reminds me of an incident that occurred in the late 80s/early 90s when I worked for the Navy. I managed a Tandem TXP system that shared a computer room with a Honeywell 66. One holiday weekend, the air conditioning system failed in the wee hours of Saturday morning after the second shift operators had gone home. (There was no third shift.) Monday being a holiday, the problem wasn't discovered until the first shift operators arrived at about 6am Tuesday to find the data center at about 110 degrees. The Honeywell had gone down only about three hours after the air conditioning did... but the Tandem was still up. The DASD cabinets were painfully hot to the touch, and one of the drives had gone down -- but since Tandem uses mirrored drives, and the mirror was still ok, it did no harm. I measured the exhaust air at the back of the processor cabinet at 134 degrees... but the Tandem was still up.

Reply to
Doug Miller
