I need to replace my (degraded mode :-( ) 2 TB RAID 5 array ASAP. I'd like ~3 TB of usable space and RAID 1 is preferred over RAID 5/6. Would anyone like to suggest replacement drives?
Kevin
Seagate has had a bad few years in terms of both reliability and compatibility, but MAY be on the upswing again.
WD has been plugging along at about 10%-20% higher prices than Seagate, but without the major problems.
HGST has an on-again/off-again relationship with quality… but their enterprise drives seem solid.
Above all, make sure you carefully check out the MTBF and POH specs for any drive you’re considering – many new drives aren’t rated to *SPIN* 24x7 anymore.
Lastly, I now suggest buying retail from a local provider to whom you can (relatively) easily return them. The age of the reliable HDD seems to be over.
-Adam
From: roundtable-bounces@muug.mb.ca [mailto:roundtable-bounces@muug.mb.ca] On Behalf Of Kevin McGregor
Sent: Sunday, January 13, 2013 8:03 PM
To: Continuation of Round Table discussion
Subject: [RndTbl] Hard drives
I need to replace my (degraded mode :-( ) 2 TB RAID 5 array ASAP. I'd like ~3 TB of usable space and RAID 1 is preferred over RAID 5/6. Would anyone like to suggest replacement drives?
Kevin
How about the WD "Red" drives? They're marketed as small-NAS drives, suitable for 24x7 operation. 3 TB WD Red drives are available at Memory Express for $170 each.
Or would it be better to get four 1.5 TB drives in RAID 10?
On Sun, Jan 13, 2013 at 8:06 PM, Adam Thompson athompso@athompso.net wrote:
Seagate has had a bad few years in terms of both reliability and compatibility, but MAY be on the upswing again.
WD has been plugging along at about 10%-20% higher prices than Seagate, but without the major problems.
HGST has an on-again/off-again relationship with quality… but their enterprise drives seem solid.
Above all, make sure you carefully check out the MTBF and POH specs for any drive you’re considering – many new drives aren’t rated to *SPIN* 24x7 anymore.
Lastly, I now suggest buying retail from a local provider to whom you can (relatively) easily return them. The age of the reliable HDD seems to be over.
-Adam
From: roundtable-bounces@muug.mb.ca [mailto:roundtable-bounces@muug.mb.ca] On Behalf Of Kevin McGregor
Sent: Sunday, January 13, 2013 8:03 PM
To: Continuation of Round Table discussion
Subject: [RndTbl] Hard drives
I need to replace my (degraded mode :-( ) 2 TB RAID 5 array ASAP. I'd like ~3 TB of usable space and RAID 1 is preferred over RAID 5/6. Would anyone like to suggest replacement drives?
Kevin
Roundtable mailing list Roundtable@muug.mb.ca http://www.muug.mb.ca/mailman/listinfo/roundtable
I haven't used the WD Red, but Silent PC Review was impressed:
http://www.silentpcreview.com/WD_Red
I'm guessing that the Red will be your best option….
On 2013-01-13, at 9:33 PM, Kevin McGregor kevin.a.mcgregor@gmail.com wrote:
How about the WD "Red" drives? They're marketed as small-NAS drives, suitable for 24x7 operation. 3 TB WD Red drives are available at Memory Express for $170 each.
Or would it be better to get four 1.5 TB drives in RAID 10?
On Sun, Jan 13, 2013 at 8:06 PM, Adam Thompson athompso@athompso.net wrote:

Seagate has had a bad few years in terms of both reliability and compatibility, but MAY be on the upswing again.
WD has been plugging along at about 10%-20% higher prices than Seagate, but without the major problems.
HGST has an on-again/off-again relationship with quality… but their enterprise drives seem solid.
Above all, make sure you carefully check out the MTBF and POH specs for any drive you’re considering – many new drives aren’t rated to *SPIN* 24x7 anymore.
Lastly, I now suggest buying retail from a local provider to whom you can (relatively) easily return them. The age of the reliable HDD seems to be over.
-Adam
From: roundtable-bounces@muug.mb.ca [mailto:roundtable-bounces@muug.mb.ca] On Behalf Of Kevin McGregor
Sent: Sunday, January 13, 2013 8:03 PM
To: Continuation of Round Table discussion
Subject: [RndTbl] Hard drives
I need to replace my (degraded mode :-( ) 2 TB RAID 5 array ASAP. I'd like ~3 TB of usable space and RAID 1 is preferred over RAID 5/6. Would anyone like to suggest replacement drives?
Kevin
3 TB WD Reds are a great choice. Not only are they rated for 24x7 usage, but there is a bit extra in the firmware so you can trust that confirmed writes are not just accepted into the NCQ queue but are actually flushed to the disk proper. Most SATA drives fill the NCQ and flush it as drive head location and scheduling permit. If your machine fails before the drive queue is flushed, you've lost the entire drive queue (and for performance's sake, the queues are getting longer with each drive generation).

As if that weren't a happy enough feature, they also extend SMART to deal with drive reallocation and timing delays. Traditionally, drives just did what they had to do and reported back when operations completed. If the time delay between commit and completion is too long, the RAID controller/software may mark the drive as failed, or the commit as failed. The WD Red drives implement a commit path that is time limited for both reads and writes. The other failure mode is that a commit may take a VERY long time to return, and the OS/RAID controller could wait indefinitely for the response (emulating a drive failure). This read/write commit timeout allows the controller to take appropriate action: mark the commit as failed, re-schedule and put the data in another block, or whatever. The feature on WD Red drives is called TLER (time-limited error recovery). Equivalent behaviour is required in the SAS spec but optional for SATA (every vendor names it differently). I've only touched the tip of the iceberg here, and I'm sure Adam Thompson will fill in the bits I've missed. :)
I'd also recommend avoiding as much RAID as possible that's not controller-independent mirroring. That is, 2x3 TB in RAID-1 is the best option where reliability is the biggest concern, plus a third 3 TB drive to do periodic offline backup (i.e. store it in a box somewhere, not plugged in).
RAID-10 is better than RAID-5 only because it reduces the width of the stripe, but you still have to rebuild a full mirror on failure, so it's only marginally better than RAID-5 (unless you can take the two known good halves and use a software RAID-1 in read-only to recover, but that's still not a good plan).
Simply put, the more drives in the array, the higher the probability of failure relative to MTBF. If storage greater than the largest available disk is needed, my suggestion is either to go RAID-6 or minimal RAID-10 with both a cold and a hot spare.
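The more-drives-more-failures point can be put in back-of-the-envelope numbers. A minimal sketch, assuming independent failures and an illustrative 3% annual failure rate (the AFR is an assumption, not any vendor's spec):

```python
# Odds that at least one drive in an array fails within a year,
# assuming independent failures. The 3% annual failure rate (AFR)
# is an illustrative assumption, not a vendor spec.
def array_failure_probability(num_drives, afr=0.03):
    """P(at least one of num_drives fails in a year)."""
    return 1 - (1 - afr) ** num_drives

# More drives means more chances for a failure somewhere in the array.
for n in (2, 4, 8):
    print(f"{n} drives: {array_failure_probability(n):.1%}")
```

Doubling the drive count roughly doubles your chances of doing a rebuild in any given year, which is exactly why wide arrays want the extra parity of RAID-6 plus spares.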
I'm a bit on the paranoid side, so my personal NAS disk usage is RAID-1 in a D-Link DNS-323, and I've tested mounting the disks outside of the device to prove the RAID isn't controller-specific. Workstations and laptops get a bit different treatment, but that's a story for another day.
Another stupid pro-tip is not to buy all your drives from the same vendor, to ensure the highest probability that the drives are not from the same lot (i.e. produced on different dates). This is becoming less useful each year (because reliability seems to be getting a bit better), though it has been known in the past (with some vendors, like IBM/Hitachi) that some batches were more reliable than others, so ensuring all your drives are not from the same batch spreads that risk out a bit. While this is mostly hokum, the logic isn't entirely fallacious, even if it's not always worth the extra effort.
To cap off this rather long rant of trivia… remember, RAID is not by any stretch of the imagination a backup.
Some further comments on Sean's excellent notes:
On 13-01-14 02:22 AM, Sean Cody wrote:
I'd also recommend avoiding as much RAID as possible that's not controller-independent mirroring. That is, 2x3 TB in RAID-1 is the best option where reliability is the biggest concern, plus a third 3 TB drive to do periodic offline backup (i.e. store it in a box somewhere, not plugged in).
I've always preferred hardware RAID over software - mostly for historical reasons, reliability, etc.
RAID-10 is better than RAID-5 only because it reduces the width of the stripe, but you still have to rebuild a full mirror on failure, so it's only marginally better than RAID-5 (unless you can take the two known good halves and use a software RAID-1 in read-only to recover, but that's still not a good plan).
The largest value of RAID10 over RAID5 from an application standpoint comes from performance. For any application (primarily DB systems) with a large number of random updates, RAID5 can bring your application to a crawl. Table or batch updates can really suck on RAID5. It's not just the time for tables with millions of records; when you're processing on a timeline, those extra few minutes per update add up. (How much is everyone's time worth?)
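The RAID5 random-write pain comes from the classic write-penalty arithmetic. A minimal sketch, assuming the textbook penalty factors (2 back-end I/Os per random write for RAID-10, 4 for RAID-5: read data, read parity, write data, write parity) and an illustrative per-disk IOPS figure:

```python
# Classic random-write penalty arithmetic. Penalty factors are the
# textbook values; the per-disk IOPS figure is an illustrative
# assumption for a 7200 rpm drive, not a measured number.
def random_write_iops(num_drives, per_disk_iops, penalty):
    """Effective random-write IOPS the whole array can sustain."""
    return num_drives * per_disk_iops // penalty

disks, iops = 4, 150
print("RAID-10:", random_write_iops(disks, iops, 2))  # half the raw IOPS
print("RAID-5: ", random_write_iops(disks, iops, 4))  # a quarter of the raw IOPS
```

On the same four spindles, RAID-10 sustains twice the random-write rate of RAID-5, which is the gap a busy database feels.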
Simply put, the more drives in the array, the higher the probability of failure relative to MTBF. If storage greater than the largest available disk is needed, my suggestion is either to go RAID-6 or minimal RAID-10 with both a cold and a hot spare.
To cap off this rather long rant of trivia… remember, RAID is not by any stretch of the imagination a backup.
Agreed - RAID is no panacea. Guarantee your survival with proper backup strategies; at some point, hardware will fail, and it's up to us to bring things back to normal. With increased striping comes increased performance - you get what you pay for. Don't just take RAID5 because it's cheaper and someone says it will perform "just fine" - look at the applications hosted on the machine and the impact it will have, and weigh the consequences for your environment.
Dan.
I still haven't seen Brian's or Sean's reply in my Inbox, so I dug into the Mailman archive on the MUUG site and got the goods there.
Brian: Thanks for the link. I had seen the Reds previously, but hadn't seen any particular recommendation. They look much more appealing now.
Sean Cody wrote:
3 TB WD Reds are a great choice. Not only are they rated for 24x7 usage, but there is a bit extra in the firmware so you can trust that confirmed writes are not just accepted into the NCQ queue but are actually flushed to the disk proper. Most SATA drives fill the NCQ and flush it as drive head location and scheduling permit. If your machine fails before the drive queue is flushed, you've lost the entire drive queue (and for performance's sake, the queues are getting longer with each drive generation).

As if that weren't a happy enough feature, they also extend SMART to deal with drive reallocation and timing delays. Traditionally, drives just did what they had to do and reported back when operations completed. If the time delay between commit and completion is too long, the RAID controller/software may mark the drive as failed, or the commit as failed. The WD Red drives implement a commit path that is time limited for both reads and writes. The other failure mode is that a commit may take a VERY long time to return, and the OS/RAID controller could wait indefinitely for the response (emulating a drive failure). This read/write commit timeout allows the controller to take appropriate action: mark the commit as failed, re-schedule and put the data in another block, or whatever. The feature on WD Red drives is called TLER (time-limited error recovery). Equivalent behaviour is required in the SAS spec but optional for SATA (every vendor names it differently). I've only touched the tip of the iceberg here, and I'm sure Adam Thompson will fill in the bits I've missed. :)
That's good to know. I feel pretty comfortable shelling out the bucks for Reds now.
I'd also recommend avoiding as much RAID as possible that's not controller-independent mirroring. That is, 2x3 TB in RAID-1 is the best option where reliability is the biggest concern, plus a third 3 TB drive to do periodic offline backup (i.e. store it in a box somewhere, not plugged in).
I avoid hardware RAID where possible. I've been using software RAID 10 and 5 on this server under Linux, and been pretty happy with it.
RAID-10 is better than RAID-5 only because it reduces the width of the stripe, but you still have to rebuild a full mirror on failure, so it's only marginally better than RAID-5 (unless you can take the two known good halves and use a software RAID-1 in read-only to recover, but that's still not a good plan).
Yeah, 3 TB on one spindle makes for very long rebuild times. That's why I was wondering if it would be worth using more, smaller drives in RAID 10 (assuming that means a stripe set of mirrors), so if one drive goes down, only that mirrored pair needs to be rebuilt. However, since it doesn't happen often, it's probably not worth the increased overall failure rate:
Simply put, the more drives in the array, the higher the probability of failure relative to MTBF. If storage greater than the largest available disk is needed, my suggestion is either to go RAID-6 or minimal RAID-10 with both a cold and a hot spare.
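The rebuild-time difference between the two layouts is easy to ballpark: a mirror rebuild copies the entire surviving disk. A sketch, assuming ~100 MB/s sustained sequential throughput (a figure typical of 7200 rpm SATA drives of this era, not a measured number):

```python
# Rough mirror-rebuild times: rebuilding a mirror means copying the
# whole surviving disk end to end. The ~100 MB/s rate is an assumed
# sequential figure, not a spec for any particular drive.
def rebuild_hours(disk_tb, mb_per_sec=100):
    seconds = disk_tb * 1e12 / (mb_per_sec * 1e6)
    return seconds / 3600

print(f"3 TB mirror:   {rebuild_hours(3):.1f} h")    # ~8.3 h
print(f"1.5 TB mirror: {rebuild_hours(1.5):.1f} h")  # ~4.2 h
```

So the 4x1.5 TB RAID-10 halves the window during which a second failure in the degraded pair can kill the array, at the cost of having twice as many drives that can fail in the first place.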
I'm a bit on the paranoid side, so my personal NAS disk usage is RAID-1 in a D-Link DNS-323, and I've tested mounting the disks outside of the device to prove the RAID isn't controller-specific. Workstations and laptops get a bit different treatment, but that's a story for another day.
I considered this:
Another stupid pro-tip is not to buy all your drives from the same vendor, to ensure the highest probability that the drives are not from the same lot (i.e. produced on different dates). This is becoming less useful each year (because reliability seems to be getting a bit better), though it has been known in the past (with some vendors, like IBM/Hitachi) that some batches were more reliable than others, so ensuring all your drives are not from the same batch spreads that risk out a bit. While this is mostly hokum, the logic isn't entirely fallacious, even if it's not always worth the extra effort.
But it's easier just to buy two of the same drives and be done with it.
To cap off this rather long rant of trivia… remember, RAID is not by any stretch of the imagination a backup.
Noted and known. :-) Thanks for the rant.
--
Sean

P.S. I've largely forsaken spinning media and have moved to SSDs in my daily drivers (except my employer-supplied machines, because I don't have any choice there). No spinning metal where I can get away with it.
Any thoughts on the life span of SSDs? How many writes are they good for?
Kevin
On Mon, Jan 14, 2013 at 11:11 AM, Dan Keizer dan@keizer.ca wrote:
Some further comments on Sean's excellent notes:
On 13-01-14 02:22 AM, Sean Cody wrote:
I'd also recommend avoiding as much RAID as possible that's not controller-independent mirroring. That is, 2x3 TB in RAID-1 is the best option where reliability is the biggest concern, plus a third 3 TB drive to do periodic offline backup (i.e. store it in a box somewhere, not plugged in).

I've always preferred hardware RAID over software - mostly for historical reasons, reliability, etc.
RAID-10 is better than RAID-5 only because it reduces the width of the stripe, but you still have to rebuild a full mirror on failure, so it's only marginally better than RAID-5 (unless you can take the two known good halves and use a software RAID-1 in read-only to recover, but that's still not a good plan).

The largest value of RAID10 over RAID5 from an application standpoint comes from performance. For any application (primarily DB systems) with a large number of random updates, RAID5 can bring your application to a crawl. Table or batch updates can really suck on RAID5. It's not just the time for tables with millions of records; when you're processing on a timeline, those extra few minutes per update add up. (How much is everyone's time worth?)
Simply put, the more drives in the array, the higher the probability of failure relative to MTBF. If storage greater than the largest available disk is needed, my suggestion is either to go RAID-6 or minimal RAID-10 with both a cold and a hot spare.
To cap off this rather long rant of trivia… remember, RAID is not by any stretch of the imagination a backup.
Agreed - RAID is no panacea. Guarantee your survival with proper backup strategies; at some point, hardware will fail, and it's up to us to bring things back to normal. With increased striping comes increased performance - you get what you pay for. Don't just take RAID5 because it's cheaper and someone says it will perform "just fine" - look at the applications hosted on the machine and the impact it will have, and weigh the consequences for your environment.
Dan.
this differently). I've only touched the tip of the iceberg here and I'm sure Adam Thompson will fill in the bits I've missed. :)
Err... no, I think you've covered it quite sufficiently!
I'd also recommend avoiding as much RAID as possible that's not controller independent mirroring.
The obvious exception being ZFS arrays, where hardware RAID controllers are pretty much useless and just get in the way. Unless you meant to stay away from hardware RAID controllers altogether and use software RAID? That pendulum swings back and forth regularly.
RAID 6 warrants (in many cases) a dedicated processor... either by running a dedicated storage array with, e.g. ZFS or Linux MD in raid6 mode, or by using a hardware controller that does RAID 6 onboard. RAID 5 gives the best TB/$ but suffers a write performance penalty, and an everything penalty when in degraded mode. RAID 1 gives the worst TB/$ but the best performance all-round.
Whatever you do, a BBWC (battery-backed-up write cache) will enable much better performance. This is typically a hardware thing, but it can be simulated in software too if you have a UPS and you otherwise trust the solid state components of your motherboard, CPU, RAM, etc.
So, if you use software RAID, make sure you have ECC RAM and a good UPS (that gets tested regularly). If you use hardware RAID, shoot for RAID 6 native support and a BBWC.
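The TB/$ ranking Adam gives falls straight out of the standard usable-capacity formulas. A quick sketch (the drive counts and sizes are just examples, including the two layouts from Kevin's original question):

```python
# Usable capacity for the common RAID levels, given n identical drives
# of size_tb each. Standard formulas; the example drive counts and
# sizes below are illustrative.
def usable_tb(level, n, size_tb):
    return {
        "raid1": size_tb,            # mirror: one drive's worth, best performance
        "raid5": (n - 1) * size_tb,  # one drive's worth of parity, best TB/$
        "raid6": (n - 2) * size_tb,  # two drives' worth of parity
        "raid10": n // 2 * size_tb,  # stripe of mirrored pairs
    }[level]

# Kevin's ~3 TB target, two ways:
print(usable_tb("raid1", 2, 3))     # 2x3 TB mirror
print(usable_tb("raid10", 4, 1.5))  # 4x1.5 TB RAID-10
```

Both layouts hit the ~3 TB target; the difference is spindle count, rebuild scope, and how many ways each array can fail.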
[Yes, I know I'm getting off-topic from Kevin's original post.]
-Adam
I've seen responses from Dan and Adam so far, and got confused when I saw a reference in Dan's response to one from Sean -- which I haven't seen. So I checked the MUUG Roundtable archive on the MUUG website, and sure enough, there are ones from both Sean and Brian!
Does anyone have any theories why the other two responses didn't get to my inbox? And yes, I checked my Spam folder; they aren't there.
Kevin
On Mon, Jan 14, 2013 at 11:42 AM, Adam Thompson athompso@athompso.net wrote:
this differently). I've only touched the tip of the iceberg here and I'm sure Adam Thompson will fill in the bits I've missed. :)
Err... no, I think you've covered it quite sufficiently!
I'd also recommend avoiding as much RAID as possible that's not controller independent mirroring.
The obvious exception being ZFS arrays, where hardware RAID controllers are pretty much useless and just get in the way. Unless you meant to stay away from hardware RAID controllers altogether and use software RAID? That pendulum swings back and forth regularly.
RAID 6 warrants (in many cases) a dedicated processor... either by running a dedicated storage array with, e.g. ZFS or Linux MD in raid6 mode, or by using a hardware controller that does RAID 6 onboard. RAID 5 gives the best TB/$ but suffers a write performance penalty, and an everything penalty when in degraded mode. RAID 1 gives the worst TB/$ but the best performance all-round.
Whatever you do, a BBWC (battery-backed-up write cache) will enable much better performance. This is typically a hardware thing, but it can be simulated in software too if you have a UPS and you otherwise trust the solid state components of your motherboard, CPU, RAM, etc.
So, if you use software RAID, make sure you have ECC RAM and a good UPS (that gets tested regularly). If you use hardware RAID, shoot for RAID 6 native support and a BBWC.
[Yes, I know I'm getting off-topic from Kevin's original post.]
-Adam
On 2013-01-14 Kevin McGregor wrote:
I've seen responses from Dan and Adam so far, and got confused when I saw a reference in Dan's response to one from Sean -- which I haven't seen. So I checked the MUUG Roundtable archive on the MUUG website, and sure enough, there are ones from both Sean and Brian!
Does anyone have any theories why the other two responses didn't get to my inbox? And yes, I checked my Spam folder; they aren't there.
I think we're getting closer to solving this... I run my own mail server and have complete control. I don't use RBLs. There is no chance of "lost" mail or false, unrecoverable positives.
Anyhow, the long Sean reply was "missing" on mine too, but I checked and it was in my "possible spam" inbox with a .5 spamminess score (exactly the cutoff I use). So, if it got labelled as spammy by my spambayes, then I'd be willing to bet providers out there (gmail) might be just throwing messages like that away.
Next time this happens (someone mentions a missing email), I'll check the spamminess score, and if they are all in the spammy range, then we pretty much know for sure it's over-aggressive mail service providers.
PS: The possibly-missing Brian Doob one got a .13 spamminess score in my system - not very spammy, but still not zero. And it had a URL plus only 2 other lines added, so aggressive heuristics may decide it's spam.
My server uses ECC RAM and is plugged into a UPS, so I've taken reasonable precautions.
On Mon, Jan 14, 2013 at 11:42 AM, Adam Thompson athompso@athompso.net wrote:
this differently). I've only touched the tip of the iceberg here and I'm sure Adam Thompson will fill in the bits I've missed. :)
Err... no, I think you've covered it quite sufficiently!
I'd also recommend avoiding as much RAID as possible that's not controller independent mirroring.
The obvious exception being ZFS arrays, where hardware RAID controllers are pretty much useless and just get in the way. Unless you meant to stay away from hardware RAID controllers altogether and use software RAID? That pendulum swings back and forth regularly.
RAID 6 warrants (in many cases) a dedicated processor... either by running a dedicated storage array with, e.g. ZFS or Linux MD in raid6 mode, or by using a hardware controller that does RAID 6 onboard. RAID 5 gives the best TB/$ but suffers a write performance penalty, and an everything penalty when in degraded mode. RAID 1 gives the worst TB/$ but the best performance all-round.
Whatever you do, a BBWC (battery-backed-up write cache) will enable much better performance. This is typically a hardware thing, but it can be simulated in software too if you have a UPS and you otherwise trust the solid state components of your motherboard, CPU, RAM, etc.
So, if you use software RAID, make sure you have ECC RAM and a good UPS (that gets tested regularly). If you use hardware RAID, shoot for RAID 6 native support and a BBWC.
[Yes, I know I'm getting off-topic from Kevin's original post.]
-Adam
My 2c. I pretty much agree with everything Sean said.
I haven't used the Reds yet, as they are fairly new. But for sure stay away from the Greens. I mostly use the Blacks, as that's the only way to still get 7200 rpm performance. If you do want the Reds, it appears my prices are currently way better than Memory Express's, so maybe give me a shout in a private email.
Personally, for large arrays I would use nothing but RAID6. For a 2-drive setup, RAID1 is fine. I *always* go controllerless (software RAID). You have so many more options for disaster recovery that way.
Seagate did put out a bunch of junk drives and I do return a lot. I live in hope they will clean up their act. All the latest DM series are NOT rated for 24/7 and they only have a 1-year warranty. That said, they are the dirt-cheapest 7200 rpm drives around - 50% cheaper than 7200 rpm WDs. As they get cheaper, as long as the quality isn't too crappy, it may be cost-effective to just let them die frequently and keep some hot spares in the system for md to instantly rebuild to.
My current 16 TB RAID6 setup (2 TB drives) is all Hitachi 7200s. They are often hard to find in Wpg/Canada. I haven't had a single drive failure yet (2 years so far). For performance, I couldn't ever imagine using less than 7200 rpm. Performance is already a *little* less than I would have liked. And yes, it's just for data, no realtime or transaction-type usage, but you can never have too much performance.
Also, when doing RAID1 I almost always do as Sean suggested and use 2 separate drive vendors. Usually a cheapo Seagate 7200 plus a WD Black 7200. I've personally experienced too many cases of "bad batch" syndrome, where you lose both drives before the 1st one comes back from RMA.