Mainstay
Hi All,
This is likely covered somewhere on the forums but I cannot find the right search pattern to yield a result.
I am curious: when using MHDD (remap off, loop/repair off, standard scan), what do you consider your threshold for declaring a drive "failed"?
I am also curious how you define "failed".
On business-critical systems I consider any slow access rates in the >500 ms range to be a fail. I clone to a new drive and move on. Any UNCs are an automatic fail. Under 500 ms is a bit of a judgment call, based on the age of the drive, the performance of the system, the total cost of repairs (i.e., cost-averse clients), the total cost of fixing the situation should the drive fail in the field, and so on.
For non-critical systems I use a little more discretion. So long as a backup system exists, I don't mind a few slow sectors (although I give the client fair warning that the drive is showing signs of deterioration).
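For what it's worth, here's a rough sketch of how I'd express that rule in code. It's purely illustrative: the classify_drive function, the per-block read times as input, the 150 ms "warn" bucket, and the "more than five slow sectors" cutoff for non-critical machines are all my own placeholders, not anything MHDD reports directly.

```python
# A minimal sketch of the triage policy described above, assuming hypothetical
# inputs: a list of per-block read times (in ms) taken from an MHDD-style scan
# and a count of UNC (uncorrectable) errors. Thresholds other than the
# >500 ms rule are illustrative assumptions.

def classify_drive(block_times_ms, unc_count, business_critical=True):
    """Return 'failed', 'degraded', or 'ok' for a scanned drive."""
    # Any uncorrectable sector is an automatic fail.
    if unc_count > 0:
        return "failed"

    slow_blocks = [t for t in block_times_ms if t > 500]          # >500 ms bucket
    warn_blocks = [t for t in block_times_ms if 150 < t <= 500]   # judgment-call bucket

    if business_critical:
        # Business-critical policy: any >500 ms block means clone and replace.
        if slow_blocks:
            return "failed"
        # Slower-but-under-500 ms blocks are a judgment call (drive age, cost, etc.).
        return "degraded" if warn_blocks else "ok"

    # Non-critical policy: tolerate a few slow sectors if backups exist,
    # but warn the client that the drive is deteriorating.
    if len(slow_blocks) > 5:   # arbitrary illustrative cutoff
        return "failed"
    return "degraded" if (slow_blocks or warn_blocks) else "ok"


# Example usage with made-up scan data:
if __name__ == "__main__":
    times = [3, 7, 12, 480, 620]
    print(classify_drive(times, unc_count=0, business_critical=True))   # failed
    print(classify_drive(times, unc_count=0, business_critical=False))  # degraded
```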
How do you judge drives based on MHDD results?