I still haven't seen Brian's or Sean's reply in my Inbox, so I dug into the Mailman archive on the MUUG site and got the goods there.
Brian: Thanks for the link. I had seen the Reds previously, but no particular recommendation. They look much more appealing now.
Sean Cody wrote:
3TB WD Reds are a great choice. Not only are they rated for 24x7 usage, but there is a bit extra in the firmware so you can trust that confirmed writes are not just accepted into the NCQ queue but are actually flushed to the disk proper. Most SATA drives fill the queue and flush it as drive head location and scheduling permit; if your machine fails before the drive queue is flushed, you've lost the entire queue (and for performance's sake, the queues get longer with each drive generation). If that weren't a happy enough feature, they also extend SMART to deal with drive reallocation and timing delays. Traditionally, drives just did what they had to do and reported back when operations completed. If the delay between commit and completion is too long, the RAID controller/software may mark the drive, or the commit, as failed. The other failure mode is that a commit may take a VERY long time to return, and the OS/RAID controller could wait indefinitely for the response (emulating a drive failure). The WD Red drives implement a commit path that is time-limited for both reads and writes; this read/write commit timeout allows the controller to take appropriate action: mark the commit as failed, reschedule and put the data in another block, or whatever. On WD Red drives the feature is called TLER (time limited error recovery). It is required in the SAS spec but optional for SATA (and every vendor names it differently). I've only shown the tip of the iceberg of info here and I'm sure Adam Thompson will fill in the bits I've missed. :)
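For a concrete check on a Linux box with smartmontools installed, TLER shows up under smartctl as SCT Error Recovery Control (the `/dev/sdX` device name below is a placeholder):

```shell
# Query the current SCT Error Recovery Control (TLER) timeouts.
# WD Reds typically report around 7.0 seconds for read and write.
smartctl -l scterc /dev/sdX

# Set read and write recovery timeouts to 7 seconds (values are in
# tenths of a second). Desktop drives without TLER will reject this.
smartctl -l scterc,70,70 /dev/sdX
```

Note the setting is not persistent on all drives, so it's common to reapply it from a boot script.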
That's good to know. I feel pretty comfortable shelling out the bucks for Reds now.
I'd also recommend avoiding, as much as possible, any RAID that's not controller-independent mirroring. That is, 2x3TB in RAID-1 is the best option where reliability is the biggest concern, plus a third 3TB drive for periodic offline backups (i.e. stored in a box somewhere, not plugged in).
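The periodic offline backup part can be as simple as a sync-and-shelve script (the device and paths below are hypothetical):

```shell
# Mount the offline drive, sync the live data to it, then unmount
# and put the drive back in its box, unplugged.
mount /dev/sdd1 /mnt/offline
rsync -aHx --delete /data/ /mnt/offline/
umount /mnt/offline
```

The `--delete` flag keeps the copy an exact mirror; drop it if you'd rather keep files that were removed from the source.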
I avoid hardware RAID where possible. I've been using software RAID 10 and 5 on this server under Linux, and been pretty happy with it.
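For reference, a minimal sketch of that kind of Linux software RAID setup with mdadm (device names are placeholders, and --create destroys any existing data on them):

```shell
# Four-disk software RAID-10 (a stripe of mirrors) with mdadm.
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]

# Check sync progress and ongoing array health.
cat /proc/mdstat
mdadm --detail /dev/md0
```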
RAID-10 is better than RAID-5 only because it reduces the width of the stripe, but you still have to rebuild a full mirror on failure, so it's only marginally better than RAID-5 (unless you can take the two known-good halves and use a software RAID-1 in read-only mode to recover, but even that is not a good plan).
Yeah, 3 TB on one spindle makes for very long rebuild times. That's why I was wondering if it would be worth it to use more, smaller drives in RAID 10 (assuming that means a stripe set of mirrors), so if one drive goes down, only that mirrored pair needs to be rebuilt. However, since it doesn't happen often, it's probably not worth the increased overall failure rate:
Simply put, the more drives in the array, the higher the probability of failure relative to MTBF. If you need storage greater than the largest available disk, my suggestion is either to go RAID-6, or minimal RAID-10 with both a cold and a hot spare.
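To put rough numbers on that first point (the failure rate here is purely illustrative): if each drive independently has a 3% chance of dying in a given year, the chance that at least one of N drives dies is 1-(1-0.03)^N, which climbs quickly with array width:

```shell
# Probability that at least one of N drives fails in a year,
# assuming each fails independently with p = 0.03 (illustrative).
for n in 2 4 8; do
  awk -v n="$n" 'BEGIN { p=0.03; printf "%d drives: %.1f%%\n", n, (1-(1-p)^n)*100 }'
done
```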
I'm a bit on the paranoid side, so my personal NAS disk usage is RAID-1 in a D-Link DNS-323, and I've tested mounting the disks outside of the device to prove the RAID isn't controller-specific. Workstations and laptops get a bit different treatment, but that's a story for another day.
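Testing that kind of controller independence on another Linux box might look like this (the DNS-323 uses plain Linux md RAID-1 internally; the partition name below is an assumption):

```shell
# Plug one mirror member into another machine and inspect its md metadata.
mdadm --examine /dev/sdb1

# Assemble the array degraded and read-only so nothing gets modified,
# then mount it read-only to confirm the data is reachable.
mdadm --assemble --readonly --run /dev/md0 /dev/sdb1
mount -o ro /dev/md0 /mnt
```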
I considered this:
Another stupid pro-tip is not to buy all your drives from the same vendor, to ensure the highest probability that the drives are not from the same lot (i.e. produced on different dates). This is becoming less useful each year (because reliability seems to be getting a bit better), though it has been known in the past (with some vendors… like IBM/Hitachi) that some batches were more reliable than others, so ensuring all your drives are not from the same batch spreads that risk out a bit. While this is mostly hokum, the logic isn't entirely fallacious, though it's probably not worth the extra effort.
But it's easier just to buy two of the same drives and be done with it.
To cap off this rather long rant of trivia… remember RAID is not in any
sense of the imagination a backup.
Noted and known. :-) Thanks for the rant.
--
Sean P.S. I've largely forsaken spinning media and have moved to SSDs in my daily drivers (except my employer supplied machines because I don't have any choice there). No spinning metal where I can get away with it.
Any thoughts on the life span of SSDs? How many writes are they good for?
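For what it's worth, vendors rate SSD endurance in total bytes written (TBW) or drive-writes-per-day; for example, a drive rated 150 TBW written at 20 GB/day would last about 7,500 days, or roughly 20 years. Most drives also expose wear counters over SMART, though attribute names vary by vendor (device names below are placeholders):

```shell
# Dump SMART attributes; look for endurance counters such as
# "Wear_Leveling_Count", "Media_Wearout_Indicator" or
# "Total_LBAs_Written" (names differ between vendors).
smartctl -A /dev/sdX

# NVMe drives report a standardized "Percentage Used" field instead.
smartctl -A /dev/nvme0
```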
Kevin
On Mon, Jan 14, 2013 at 11:11 AM, Dan Keizer dan@keizer.ca wrote:
Some further comments on Sean's excellent notes:
On 13-01-14 02:22 AM, Sean Cody wrote:
I'd also recommend avoiding, as much as possible, any RAID that's not controller-independent mirroring. That is, 2x3TB in RAID-1 is the best option where reliability is the biggest concern, plus a third 3TB drive for periodic offline backups (i.e. stored in a box somewhere, not plugged in).
I've always preferred hardware RAID over software - mostly from a historical background, reliability, etc.
RAID-10 is better than RAID-5 only because it reduces the width of the stripe, but you still have to rebuild a full mirror on failure, so it's only marginally better than RAID-5 (unless you can take the two known-good halves and use a software RAID-1 in read-only mode to recover, but even that is not a good plan).
The largest value of RAID-10 over RAID-5, from an application standpoint, comes from performance. For any application (primarily DB systems) with a large number of random updates, RAID-5 can bring your application to a crawl. Trying to perform table or batch updates can really suck on RAID-5. It's not just the time for tables with millions of records; when you're processing on a timeline, those extra few minutes per update add up. (How much is everyone's time worth?)
Simply put, the more drives in the array, the higher the probability of failure relative to MTBF. If you need storage greater than the largest available disk, my suggestion is either to go RAID-6, or minimal RAID-10 with both a cold and a hot spare.
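That suggestion might be sketched with Linux mdadm as follows (drive names are hypothetical, and --create destroys data on them; the cold spare simply stays on the shelf, unplugged):

```shell
# RAID-6 across four disks: survives any two simultaneous failures.
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]

# Or minimal RAID-10 (four disks) plus a hot spare that mdadm will
# automatically pull in to rebuild when a member fails.
mdadm --create /dev/md1 --level=10 --raid-devices=4 \
      --spare-devices=1 /dev/sd[b-f]
```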
To cap off this rather long rant of trivia… remember RAID is not in any
sense of the imagination a backup.
Agreed - RAID is no panacea. Guarantee your survival with proper backup strategies - at some point, hardware will fail, and it's up to us to bring things back to normal. With increased striping comes increased performance - you get what you pay for. Don't just take RAID-5 because it's cheaper and someone says it will perform "just fine" - look at the applications hosted on the machine and the impact it will have, and weigh the consequences for your environment.
Dan.
Roundtable mailing list Roundtable@muug.mb.ca http://www.muug.mb.ca/mailman/listinfo/roundtable