[RndTbl] linux md RAID6 + XFS + add 1 drive
Adam Thompson
athompso at athompso.net
Wed Sep 10 12:46:41 CDT 2014
On 14-09-10 03:51 AM, Trevor Cordes wrote:
> Final note: After hearing ZFS (on Linux at least) cannot grow by 1 disk
> when using RAID5 or 6, and after nearly 10 years using XFS on md on huge
> arrays, I say give some hero cookies to md/XFS. It's withstood some
> pretty strange events on my server, and has never blown up. If I'm wrong
> about the sunit being a problem, then md/XFS is a great option for those
> who want to gradually add space as they need it.
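For context, the grow-by-one-disk path Trevor is describing for md + XFS usually looks like the sketch below. Device names, the disk count, and the mount point are assumptions; substitute your own.

```shell
# Sketch: growing an md RAID6 by one disk, then growing the XFS on top.
# /dev/md0, /dev/sdi, 9 devices, and /mountpoint are all assumptions.

# Add the new disk to the array, then reshape to use it as a member:
mdadm /dev/md0 --add /dev/sdi
mdadm --grow /dev/md0 --raid-devices=9 --backup-file=/root/md0-grow.bak

# Watch the reshape progress (can take many hours on large arrays):
cat /proc/mdstat

# Once the reshape finishes, grow the mounted XFS to fill the new space:
xfs_growfs /mountpoint
```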
The fixed-topology limitation is endemic to ZFS' design, not just the
Linux port.
However, there are many use cases where it's not a limitation, including
almost exactly the type of system we're trying to build. If you only
have (say) 8 drive bays, and you fill all 8 bays on day 1, you never
need to grow the array by a single drive; the only two growth scenarios
possible are dictated by your hardware:
a) replace all 8 drives with larger drives - ZFS supports this
b) add an external drive shelf with another (say) 8 drives - ZFS
supports this, as a second sub-volume, effectively creating a RAID n+0
(typically RAID60) volume.
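Both growth paths above can be sketched with zpool commands; the pool name and device names here are assumptions.

```shell
# Sketch of the two ZFS growth scenarios; "tank" and all device names
# are assumptions.

# (a) Replace each drive with a larger one, one at a time, waiting for
#     each resilver to complete. With autoexpand on, the pool grows
#     once the last drive has been replaced:
zpool set autoexpand=on tank
zpool replace tank sda sdi   # repeat for each of the 8 drives

# (b) Add a second raidz2 vdev (the new drive shelf); ZFS stripes
#     writes across both vdevs, giving the RAID60-like layout:
zpool add tank raidz2 sdj sdk sdl sdm sdn sdo sdp sdq
```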
ZFS has three major advantages for most people: 1) no RAID write-hole
behaviour; 2) automatic resilvering; 3) integrated, effectively
infinite, snapshots with built-in replication.
RHEL 7 (and thus CentOS 7, SL 7, etc.) defaults to XFS for the root file
system. Obviously you're not the only one who likes XFS!
I've generally found that the filesystem stripe width doesn't make much
difference on modern hardware; the worst case I can recall encountering
was ultimately due to block misalignment, not stripe width. I do recall
it making a measurable difference about 10 years ago, on slow 5400 RPM
IDE drives behind a controller that didn't do any useful caching.
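For reference, XFS stripe geometry (sunit/swidth) is normally set at mkfs time, and mkfs.xfs usually detects it from md automatically. The values below are assumptions for an 8-disk RAID6 (6 data disks) with a 512 KiB chunk size.

```shell
# Sketch: setting XFS stripe geometry explicitly on an md RAID6.
# su = md chunk size, sw = number of data disks (8 - 2 parity = 6).
# /dev/md0, the chunk size, and the mount point are assumptions.
mkfs.xfs -d su=512k,sw=6 /dev/md0

# Verify the resulting sunit/swidth on a mounted filesystem:
xfs_info /mountpoint
```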
--
-Adam Thompson
athompso at athompso.net
Cell: +1 204 291-7950
Fax: +1 204 489-6515