Unrecoverable read errors and RAID5


Discussion Overview

The discussion centers on the reliability of RAID5 configurations in light of unrecoverable read errors (URE) during the rebuild phase, particularly as data volumes increase. Participants examine the mathematical reasoning behind claims that RAID5 becomes impractical and explore the implications of URE rates on data integrity and system reliability.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Exploratory

Main Points Raised

  • One participant challenges the conclusion that RAID5 is impractical due to URE rates, suggesting that the underlying math may be flawed.
  • Another participant points out that the URE rate is defined per read, not per byte, and argues that this significantly alters the interpretation of reliability in RAID5 systems.
  • Concerns are raised about the interpretation of URE rates, with one participant suggesting that the actual reliability of hard drives and RAID5 systems is better than what the manufacturer's specifications imply.
  • A later reply references a study that shows a discrepancy between the manufacturer's URE spec and empirical data, indicating that actual error rates may be much lower than specified.
  • One participant proposes that smart RAID controllers could mitigate UREs by using error correction codes (ECC) to recover data from failed sectors.

Areas of Agreement / Disagreement

Participants express differing views on the reliability of RAID5 systems in the context of UREs, with no consensus reached on the validity of the original claims regarding RAID5's impracticality. Some participants agree that empirical evidence suggests better reliability than manufacturer specs, while others remain skeptical of the interpretations presented.

Contextual Notes

The discussion highlights potential limitations in the interpretation of URE rates, including the dependence on definitions and the distinction between worst-case scenarios and average performance. The mathematical reasoning presented is subject to further scrutiny and clarification.

joema
A number of popular-level articles have been published saying that RAID5 is increasingly impractical at larger data volumes due to the chance of an individual HDD having an unrecoverable read error (URE) during the rebuild phase. I think the underlying math which produced this conclusion may be in error, and I'd like someone to check it.

Published reasoning: typical HDDs have a URE rate of 1 in 10^14 reads (interpreted as bytes). If a 16TB 8-drive RAID5 array has a single URE, that HDD must be replaced and data rebuilt on the spare. During the rebuild no further errors can be tolerated else the entire array is bad. Apparent mathematical reasoning: 1 URE per 10^14 bytes / 8 HDD per array = 1 URE per 12.5 TB read from the 8-drive array. Conclusion: given that URE rate there is a nearly 100% chance of a 2nd HDD failing while rebuilding the 16TB array. Example article: http://www.zdnet.com/article/has-raid5-stopped-working/
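The published reasoning can be reproduced as a quick back-of-envelope sketch. Note this takes the 1-in-10^14 figure as a per-bit rate, which is how the articles use it; the numbers are illustrative, not an endorsement:

```python
import math

# The article's model: one URE per 10^14 bits read, applied to a
# 16 TB rebuild. This reproduces the published reasoning only.
URE_RATE_PER_BIT = 1e-14
REBUILD_BYTES = 16e12            # 16 TB read during the rebuild

bits_read = REBUILD_BYTES * 8
expected_ures = bits_read * URE_RATE_PER_BIT           # 1.28 expected UREs
p_at_least_one = 1 - math.exp(-expected_ures)          # Poisson approximation

print(f"expected UREs during rebuild: {expected_ures:.2f}")
print(f"P(>= 1 URE during rebuild):   {p_at_least_one:.0%}")  # ~72%
```

So under the article's per-bit reading of the spec, a 16 TB rebuild fails with high (though not "nearly 100%") probability, which is where the doom-and-gloom framing comes from.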

I think this is incorrect for several reasons:

(1) Common sense: if the chance of a URE is 1 per 12.5 TB read, large RAID0 and RAID5 arrays would be failing at an incredible rate.

(2) The specified URE rate is 1 per 10^14 *reads*, not bytes. Modern drives do reads in 4096-byte sectors, not individual bytes or 512-byte sectors. Translated to bytes, that URE rate becomes 1 per (10^14 × 4096) bytes, i.e. 1 URE per 409,600 terabytes read.
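Under that per-read interpretation (one 4096-byte sector per read), the arithmetic works out as follows. This is a sketch of point (2)'s interpretation, not the spec sheet's actual definition:

```python
# Point (2) under the "per read" interpretation: one URE per 10^14
# reads, each read covering one 4096-byte sector.
READS_PER_URE = 1e14
SECTOR_BYTES = 4096

bytes_per_ure = READS_PER_URE * SECTOR_BYTES   # 4.096e17 bytes
tb_per_ure = bytes_per_ure / 1e12

print(f"{tb_per_ure:,.0f} TB read per expected URE")  # 409,600 TB
```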

(3) While the number of drives in an array increases the chance of an individual failure per unit of *operating time*, it does not increase the failure chance per volume of data transferred versus a single HDD of equal capacity. E.g., when reading the *same* data volume from a single HDD vs. an n-drive RAID array, each drive in the array does only 1/nth of the reads, and hence has 1/nth the failure chance per aggregate data volume. Therefore, for a given number of reads, the URE probability is about the same for a single drive as for a multi-drive RAID array.
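Point (3) can be sketched numerically: under any fixed per-bit URE rate, the expected URE count depends only on the total volume read, not on how many drives it is striped across (the 1e-14 rate and 16 TB volume are just illustrative values):

```python
def expected_ures(total_bytes, n_drives, ure_per_bit=1e-14):
    """Expected UREs when total_bytes is read striped across n_drives."""
    per_drive_bytes = total_bytes / n_drives   # each drive reads 1/n of the volume
    per_drive_ures = per_drive_bytes * 8 * ure_per_bit
    return n_drives * per_drive_ures           # summed over the n drives

volume = 16e12  # 16 TB
single = expected_ures(volume, 1)
array8 = expected_ures(volume, 8)
print(single, array8)  # same expected URE count either way
```

The 1/n per-drive share and the n drives cancel exactly, which is the point being argued.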
 
Link to an article that may help explain this:

RAID 5 and URE

Also, if a URE is encountered, it would seem that a smart RAID controller would attempt to correct the error by using the array's parity/ECC to calculate the data needed for the failed sector and rewrite it, giving the hard drive a chance to recover from the error (the drive might remap the bad sector to a spare sector, which the rewrite would then populate).
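A toy sketch of that recovery path, using RAID5's XOR parity. The 4-byte "sectors" and their contents are made-up illustrations; a real controller works on full sectors and would rewrite the reconstructed sector so the drive can remap it:

```python
def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks (RAID5's parity operation)."""
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

# Three data "sectors" plus their parity, as one RAID5 stripe would hold.
data = [b"\x11\x22\x33\x44", b"\xaa\xbb\xcc\xdd", b"\x01\x02\x03\x04"]
parity = xor_blocks(data)

# Suppose drive 1 reports a URE on its sector: rebuild it from the
# surviving sectors plus parity, then rewrite it to the drive.
recovered = xor_blocks([data[0], data[2], parity])
print(recovered == data[1])  # True: the lost sector is reconstructed
```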
 
I'm still investigating this. I think my numbers and reasoning above are incorrect, because the HDD non-recoverable error spec is failed reads per total *bits* read, not per sector read. However, the fact remains that hard drives and RAID5 are much more reliable than the manufacturer's "non-recoverable error rate" indicates. That was the key point in the article you mentioned -- thanks!

If the spec is 1 failure per 10^14 bits read, then since 10^14 bits = 12.5 terabytes, by that spec you'd expect a failure on average every 12.5 TB read -- that's where Robin Harris, who wrote the "doom and gloom" ZDNet articles, got his number. However, we know from observation that HDDs and RAID systems do not fail anywhere near that often.

One answer is the spec is simply a "worst case" spec -- IOW a guarantee the HDD will be no *worse* than that. It is not an average failure rate, nor a predicted failure rate. It's more like an uptime or availability guarantee. If a vendor promises 90% availability, that doesn't mean the system is unavailable 10% of the time. It may well achieve 99% availability -- it's just a worst case guarantee.

If the HDD is really more reliable than, say, 1 error per 10^14 bits read, why don't they say that? We could just as well ask if the average engine in a Honda car will last 180,000 miles, why is the engine warranty only 60,000 miles? There are many reasons for that.
 
A key study (though several years old) covers this exact area: the disparity between the "non-recoverable error rate" spec published by HDD manufacturers and empirically observed results. A spec of one non-recoverable error per 10^14 bits read would equate to one error every 12.5 terabytes read. The study found four non-recoverable errors per two petabytes read, which equates to one error per 4×10^15 bits read -- about 40 times more reliable than the HDD manufacturer spec. Empirical Measurements of Disk Failure Rates and Error Rates (Jim Gray et al., 2005): http://research.microsoft.com/apps/pubs/default.aspx?id=64599
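The comparison above checks out arithmetically -- a quick verification, using the thread's reading of the spec as one error per 10^14 bits:

```python
# Spec: one non-recoverable error per 10^14 bits read.
SPEC_BITS_PER_ERROR = 1e14
spec_tb_per_error = SPEC_BITS_PER_ERROR / 8 / 1e12
print(spec_tb_per_error)                       # 12.5 TB per error by spec

# Gray et al.: four non-recoverable errors observed per ~2 PB read.
observed_bits_per_error = 2e15 * 8 / 4         # bytes read * 8 / error count
print(observed_bits_per_error)                 # 4e+15 bits per error

print(observed_bits_per_error / SPEC_BITS_PER_ERROR)  # 40x the spec
```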
 
