OT: hard drive crash

No, it won't. The power supply usually dies, and we turf out the working parts because the new machines are so much better that it isn't worth the pain of fixing the one resistor in the centre of the power supply that has vaporised. The number of working electronic parts that get junked each year is amazing.

Stealth Pilot

Reply to
Stealth Pilot

Oh, yes, it will, by decades.

"Power supply"? "Working parts"?

Hard drives don't have power supplies; failed hard drives usually have no working parts remaining to be "turfed out"; and their internal components are definitely not field-serviceable.

Ahh, now I think I see. You apparently misunderstand what "hard drive" means.

The thing sitting on your desk, with a power switch, that your monitor and power cables plug into, that you feed CDs or diskettes into, is *not* a hard drive. That's your computer.

The hard drive is the data storage unit *inside* your computer.

*This* is what a hard drive looks like:
formatting link
Reply to
Doug Miller

On Tue, 05 Jun 2007 08:37:18 -0500, with neither quill nor qualm, Ignoramus20900 quickly quoth:

I have yet to experience a hard drive which got even ten percent of its MTBF. Only one I've ever owned is still working. The other two or three dozen have all died MUCH earlier than they were supposed to, and most were unrestorable with low-level reformatting.

-- Smokey the Bear's rules for fire safety should apply to government: Keep it small, keep it in a confined area, and keep an eye on it. --John Stossel in _Myths, Lies, and Downright Stupidity_

Reply to
Larry Jaques

Of course that doesn't mean that the drive will actually last even 20 years. They have a funny way of calculating MTBF... and bearings only last so many hours at 7200 or 10,000 or 15,000 RPM.
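To put numbers on that "funny way of calculating": under the constant-failure-rate assumption that usually sits behind vendor MTBF figures, an MTBF in hours converts to an annualized failure rate like this (a rough sketch; the function name is mine, and real vendor numbers come from accelerated testing, not field data):

```python
import math

HOURS_PER_YEAR = 8766  # average year, including leap years

def annualized_failure_rate(mtbf_hours):
    """Probability a drive fails within one year, assuming a constant
    (exponential) failure rate -- the usual simplification behind MTBF."""
    return 1.0 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

# A vendor-quoted 1,000,000-hour MTBF sounds like ~114 years per drive,
# but it really means roughly 0.9% of a large population failing per year:
print(f"{annualized_failure_rate(1_000_000):.2%}")  # → 0.87%
```

In other words, MTBF describes a fleet's failure rate during the design life, not how long any one drive will last.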

I seem to have had some that were not average. One had a chip that popped its lid (probably due to latchup).

That said, HDD failures *are* rare. If you keep a backup, just in case the entire computer disappears when you're away one night, then that's probably good enough.

Best regards, Spehro Pefhany

Reply to
Spehro Pefhany

I walk through a high school field several times a week, and seeing the bits of modern technology kicking around on the ground, and thinking of what it took to manufacture them, is a bit mind-boggling. That's presumably the logic behind the RoHS type of attempt to minimize lead, since this stuff is going into garbage cans probably at a rate of tons per second. All those container loads full of disposable electronic crap (and even some stuff you don't think of as disposable, such as computers and televisions) are going into landfills at pretty much the same rate.

Best regards, Spehro Pefhany

Reply to
Spehro Pefhany

Yup, I run a couple of dozen 24/7 and usually lose at least one a year. Lifetime on IDEs is about 2-3 years; SCSIs are about 4-5 years. Sometimes a logic board swap will get one back to the point where I can pull current data, which is one reason for buying bunches of the same type.

I use complete drive cloning to another disk to back up; that way I can just plop the backup into the rig and keep going. I use removable drive bays for this.

Tapes are way too expensive to back up the larger hard drives these days. You can buy complete hard drives for what one tape of similar capacity runs. For the fractional-terabyte jobs, there ARE no tapes of similar capacity readily available. My 30G DLT drive just doesn't cut it anymore.

Stan

Reply to
stans4

Here's some real-world data for the high end "enterprise" drives:

formatting link
Usual failure rates of several percent per year up to 13%, so your experience is in line with those numbers (1 per year out of 24 is 4%).

Consumer drives have shorter life (3 years), and are often subjected to crappier power supplies and marginal cooling.

Maybe when 50G HD-DVD/Blu-ray recordable disks are down to a buck or so...

Best regards, Spehro Pefhany

Reply to
Spehro Pefhany

My consumer-grade hard drives, which I use in Linux servers and PCs, last about 6 years or so. And they get used quite a bit; I never shut computers down.

I make backups using rdiff-backup, which does them differentially (i.e., it copies only the data that changed).
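rdiff-backup itself works at the level of file deltas; as a minimal sketch of the differential idea only (copy files that are new or changed, judged by size and timestamp -- this is not rdiff-backup's actual rsync-style algorithm, and the function name is my own):

```python
import os
import shutil

def differential_backup(src, dst):
    """Copy only files that are new or changed since the last run,
    judged by size and modification time -- the cheap first-pass test
    many incremental backup tools use. Returns the list of files copied."""
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        out_dir = os.path.join(dst, rel)
        os.makedirs(out_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(out_dir, name)
            st = os.stat(s)
            if not os.path.exists(d):
                changed = True
            else:
                dt = os.stat(d)
                # Allow a couple of seconds of slack for filesystem
                # timestamp granularity.
                changed = (st.st_size != dt.st_size
                           or st.st_mtime - dt.st_mtime > 2)
            if changed:
                shutil.copy2(s, d)  # copy2 preserves the mtime
                copied.append(os.path.normpath(os.path.join(rel, name)))
    return copied
```

A second run over an unchanged tree copies nothing, which is the whole point: only the data that changed moves.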

i
Reply to
Ignoramus20900

Check this out also, this is a Google study on hard drive reliability. Google uses a variety of hard drives in its data centers, mostly crappy consumer grade drives.

formatting link
i

Reply to
Ignoramus20900

Another trick, if it is a case of the drive not spinning up, is to connect it without mounting it, keeping the computer open, and once power is applied, give it a quick jerk of a twist around the axis of the spindle -- in both directions if necessary. This can get the drive spinning if it is a case of heads stuck down to the platters. (This happens because lubricant from the spindle bearings evaporates from the heat in the drive and plates out on the platters when the system is shut down for long enough to let the platters cool. The first time, it plates everywhere but under the heads where they are stopped. The next time, the heads are likely somewhere else, so that area gets plated too. Eventually, it gets thick enough to glue the heads to the platters.)

Once you get it spinning, copy everything off it to a new drive, or to some other backup media.

The above applies to all styles of drives, not just those used in Windows systems. The software suggestions for Windows are out of my field, so pick that up from the others.
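The "copy everything off it" step is what tools like ddrescue automate far more carefully; a very rough Python sketch of the idea (read in fixed chunks, and pad any unreadable chunk with zeros so offsets in the copy still line up -- the function name and chunk size are my own choices):

```python
def salvage_copy(src_path, dst_path, chunk=64 * 1024):
    """Copy a device or file in fixed-size chunks, writing zeros in
    place of any chunk that cannot be read. Returns the number of
    unreadable chunks encountered."""
    bad = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            try:
                data = src.read(chunk)
            except OSError:
                # Unreadable region: count it, skip past it, and pad
                # the output so later data lands at the right offset.
                bad += 1
                src.seek(chunk, 1)
                dst.write(b"\x00" * chunk)
                continue
            if not data:
                break
            dst.write(data)
    return bad
```

On a healthy source this degenerates to a plain copy with zero bad chunks; on a dying drive it keeps going past the unreadable spots instead of aborting.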

Exactly.

The expensive -- but sometimes necessary -- route. The *really* expensive way to go involves unmounting the platters and treating each separately with specialized equipment -- the sort of thing done to recover data from a drive which has been erased and then captured in a spy case or the like.

Good Luck, DoN.

Reply to
DoN. Nichols

According to Doug Miller :

[ ... ]

Yes -- but that is the *Mean* Time Between Failures, not a certainty that no drive will fail long before that time. The actual failures fall on a bell curve, and just how far out the edges of that curve are could result in some failures within a year or two, and others which last over two hundred years -- assuming that all failure modes were actually taken into account in the MTBF calculations. :-)

I don't know about everybody else, but *I* use a lot of drives up to ten years old or more. I collect old systems (mostly Sun Workstations and Sun Enterprise servers), and buy used SCSI drives (and now Fibre Channel disks) to keep these systems running at an affordable price for a fixed-income retiree. :-)

Enjoy, DoN.

Reply to
DoN. Nichols
[ ... ]

That seems to match my experience -- but with my used drives, I usually have no idea how they were cared for, and how much service they have seen prior to my ownership. But I still typically get several years of service out of them.

Note that some drives, such as the higher-capacity Seagate drives, are set up so that they change the model number in the identification string from starting with "ST" to starting with "SX" when the drive has gone past its "use by" date -- and operating systems like Solaris 10 notify you of this when the system boots, but still continue to use the drive after warning you.

[ ... ]

It will be interesting to see how long the few IDE drives I have last in the Sun Ultra-5 and Ultra-10 (low budget) machines. :-)

[ ... ]

Well ... I'm currently using Exabyte Mammoth-2 drives, which get 60 GB on a single 225-meter tape *without* compression, and a claimed 150 GB with compression. That gets you a bit closer, but those tapes are *very* expensive, even on eBay, usually. Sometimes you luck out, however.

Enjoy, DoN.

Reply to
DoN. Nichols

When I was teaching this stuff a few years ago, the first lesson involved identifying the parts of a computer. I had a few old hard drives, some with the cover removed, so that students could see how they worked. It is surprising how many people thought that the whole computer was the hard drive!

Steve R.

Reply to
Steve R.

There are some good utilities on those disks. Something there may be able to help. Good luck. Dave

Reply to
dav1936531

There was some recent research this year (one study was by Google - and they have a bazillion disk drives), and it shows that the Annualized Failure Rate is from 1.7% to 8.6%.

Reference:

formatting link

Reply to
Maxwell Lol

That has to be an estimate derived using some hopeful statistics, unless they have a time machine.

I realize they can accelerate aging by power cycling and constant massive data in/out operations, but at best it is going to be a guesstimate.

Twain said it best, "There are lies, damned lies, and statistics".

Wes

Reply to
Wes

Methods similar to estimating "Sears Horsepower".

i
Reply to
Ignoramus20900

I'd love to see a current Western Digital drive live ONE TENTH that long. My experience has generally been less than 6 years, with over half of them I've dealt with in the last 3 years lasting less than 2.

And I deal with a lot of hard drives.

Reply to
clare at snyder.on.ca

Computers should come with Backup Alarms! ;-)

Reply to
Michael A. Terrell

On "important" computers I run dual SCSI controllers and mirrorred hard drives. When one drive of the mirror fails, I buy 2 new drives, rplace the bad one of the set, then pull the "good" one and replace it, stowing the "good" used one as a spare. This allows me tokeep the "old" one as an archive if I desire. Could swap mirrors monthly if I was anal about it, but just got a 200GB tape backup to backup the entire server on a daily basis (replacing a too-small 12GB unit that just backed up critical data). I also back up critical data to Zip files on another machine, daily, and burn them to DVD monthly. (on the server at the insurance office where I spend every morning).

The Seagate 36GB SCSI drive that just failed is almost 7 years old. Replacements cost me $95US plus shipping - the cost when installed was over $700 each!!!!!

Reply to
clare at snyder.on.ca

PolyTech Forum website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.