3TB WD Reds are a great choice. Not only are they rated for 24x7 usage, but there's a bit extra in the firmware: you can trust that confirmed writes are not just accepted into the NCQ queue but are actually flushed to the disk proper. Most SATA drives fill the NCQ queue and flush it as drive head location and scheduling permit, and for performance's sake the queues are getting longer with each drive generation. If your machine fails before the drive queue is flushed, you've lost the entire queue.

If that wasn't a happy enough feature, they also extend SMART to deal with drive reallocation and timing delays. Traditionally, drives just did what they had to do and reported back when operations completed. If the delay between commit and completion is too long, the RAID controller/software may mark the drive, or the commit, as failed. The other failure mode is that a commit may take a VERY long time to return, and the OS/RAID controller could wait indefinitely for the response (emulating a drive failure). The WD Red drives implement a commit path that is time-limited for both reads and writes. This read/write commit timeout allows the controller to take appropriate action: mark the commit as failed, reschedule and put the data in another block, or whatever else. WD calls this feature TLER (time-limited error recovery). Something like it is required in the SAS spec but optional for SATA, and every vendor names it differently. I've only touched the tip of the iceberg here and I'm sure Adam Thompson will fill in the bits I've missed. :)
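For the curious: on Linux, smartmontools exposes this timeout as "SCT Error Recovery Control" (run `smartctl -l scterc /dev/sdX`). Here's a minimal Python sketch that parses that report; the sample text below is illustrative of the typical format, not captured from a real drive, and the parsing function is my own invention, not part of any library.

```python
# Hedged sketch: pull the SCT ERC (a.k.a. TLER) timeouts out of
# `smartctl -l scterc` output. Values are reported in tenths of a second,
# so 70 means the drive gives up error recovery after 7.0 seconds.
import re

SAMPLE_OUTPUT = """\
SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)
"""

def parse_scterc(text):
    """Return {'Read': deciseconds, 'Write': deciseconds}; empty if not reported."""
    return {mode: int(val)
            for mode, val in re.findall(r"(Read|Write):\s+(\d+)", text)}

timeouts = parse_scterc(SAMPLE_OUTPUT)
print(timeouts)  # {'Read': 70, 'Write': 70}
# A RAID-friendly drive reports a short, bounded recovery time like this,
# so the controller never sits waiting minutes on one bad sector.
```

The point of the bounded timeout is exactly what's described above: the drive answers "failed" quickly and lets the RAID layer recover from the other disk, instead of stalling the whole array.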
I'd also recommend avoiding as much RAID as possible that isn't controller-independent mirroring. That is, 2x3TB in RAID-1 is the best option where reliability is the biggest concern, plus a third 3TB drive for periodic offline backup (i.e. store it in a box somewhere, not plugged in).
RAID-10 is better than RAID-5 only because it reduces the width of the stripe, but you still have to rebuild a full mirror on failure, so it's only marginally better than RAID-5 (unless you can take the two known-good halves and use a software RAID-1 in read-only mode to recover, but that's still not a good plan).
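To put rough numbers on why rebuilds are scary: consumer drives are commonly specced at one unrecoverable read error (URE) per 10^14 bits read. A back-of-the-envelope sketch, assuming that spec and independent errors (real error behavior is messier, so treat these as illustrative orders of magnitude):

```python
# Probability of hitting at least one unrecoverable read error (URE)
# while reading `tb_read` terabytes during a rebuild. Assumes the common
# consumer-drive spec of 1 URE per 1e14 bits and independent errors.
import math

URE_RATE = 1e-14  # errors per bit read (typical consumer spec)

def p_ure(tb_read):
    bits = tb_read * 1e12 * 8               # decimal TB -> bits
    return 1 - math.exp(-URE_RATE * bits)   # Poisson approximation

# RAID-10 rebuild: re-read the one surviving 3 TB mirror half.
# RAID-5 rebuild (4x3TB): re-read all three surviving drives, 9 TB total.
print(f"RAID-10 rebuild, 3 TB read: {p_ure(3):.1%}")
print(f"RAID-5 rebuild,  9 TB read: {p_ure(9):.1%}")
```

Under those assumptions a 3 TB mirror re-read already carries a roughly one-in-five chance of tripping over a URE, and a wide RAID-5 rebuild is worse because it has to read every surviving drive end to end.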
Simply put, the more drives in the array, the higher the probability of failure relative to MTBF. If you need storage greater than the largest available disk, my suggestion is to go either RAID-6 or a minimal RAID-10 with both a cold and a hot spare.
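The "more drives, more failures" point falls straight out of the math. A quick sketch, assuming independent drives with exponential lifetimes; the 1,000,000-hour MTBF is an illustrative round number, not a quoted WD Red spec:

```python
# Probability that at least one drive in an n-drive array fails within
# `hours` of 24x7 operation, assuming independent exponential lifetimes.
# The MTBF figure is illustrative, not a specific vendor spec.
import math

def p_any_failure(n_drives, hours, mtbf_hours=1_000_000):
    p_one = 1 - math.exp(-hours / mtbf_hours)   # single-drive failure prob.
    return 1 - (1 - p_one) ** n_drives          # at least one of n fails

YEAR = 24 * 365  # 8760 hours of 24x7 runtime
for n in (2, 4, 8):
    print(f"{n} drives, 1 year: {p_any_failure(n, YEAR):.2%}")
```

Doubling the drive count roughly doubles the chance of seeing a failure in a given year (exactly doubles, in the small-probability limit), which is why wide arrays need the redundancy of RAID-6 plus spares.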
I'm a bit on the paranoid side, so my personal NAS disk usage is RAID-1 in a D-Link DNS-323, and I've tested mounting the disks outside of the device to prove the RAID isn't controller-specific. Workstations and laptops get a bit different treatment, but that's a story for another day.
Another stupid pro-tip is not to buy all your drives from the same vendor at once, to maximize the chance the drives aren't from the same lot (i.e. were produced on different dates). This is becoming less useful each year (reliability seems to be getting a bit better), though it has been known in the past, with some vendors like IBM/Hitachi, that some batches were less reliable than others, so making sure all your drives aren't from the same batch spreads that risk out a bit. While this is mostly hokum, the logic isn't entirely fallacious, though it's arguably no longer worth the extra effort.
To cap off this rather long rant of trivia… remember: RAID is not, by any stretch of the imagination, a backup.