Okay, here's (maybe) the last word on my RAID issues at work, if anyone's still reading these. :)
For yet another comparison, my RAID10 6x 750 GB SATA XFS gives:

# dd if=/dev/zero of=bigfile bs=1M count=16384
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 79.5334 s, 216 MB/s
# dd of=/dev/null if=bigfile bs=1M
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 31.4869 s, 546 MB/s

Maybe I should switch to RAID6. ;-)

Kevin

On Fri, May 20, 2011 at 4:54 PM, Trevor Cordes <trevor@tecnopolis.ca> wrote:
My RAID6 8x 2TB-drive SATA XFS gives:
# dd if=/dev/zero of=/new/test bs=1M count=32768
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB) copied, 152.032 s, 226 MB/s
# dd of=/dev/null if=/new/test bs=1M
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB) copied, 57.7027 s, 595 MB/s
(wow!!)
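One caveat on dd numbers like these: the page cache can flatter them.
If anyone wants to reproduce this, here's roughly how I'd rerun it to
keep the cache honest (assumes GNU dd for conv=fdatasync and a
2.6.16+ kernel for drop_caches; run as root):

# sync; echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/zero of=/new/test bs=1M count=32768 conv=fdatasync
# sync; echo 3 > /proc/sys/vm/drop_caches
# dd of=/dev/null if=/new/test bs=1M

The conv=fdatasync makes dd time the final flush to disk instead of
declaring victory while data is still sitting in RAM, and dropping
caches before the read forces it to come off the platters.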
During the whole write, both CPU cores were at 90-100%, mostly
95-99%! Glad to see the RAID/XFS code is multi-core aware. For
reading it was one core at 100% and the other around 20%. The
limiting factor for writes appears to be the piddly Pentium D in my
file server. Still, this is 3-4x the speed my old array (1TB drives,
crappy PCI SATA cards) was giving me. The read result is quite
interesting: the 100% CPU suggests it is probably doing parity
checks on every read.
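If anyone wants to watch the same thing on their box, running this in
a second terminal while the dd goes should show the per-core split
(mpstat comes in the sysstat package):

# mpstat -P ALL 1

The md/XFS work is kernel-side, so it should mostly show up in the
%sys column, pinned high on both cores during the RAID6 writes.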
I think a big part of the good speed is my new 8-port SATA card, an
Intel PCI-Express x8 card in an x8 slot. If your SCSI card is plain
PCI, then the shared PCI bus limit of roughly 133 MB/s is what's
killing you. Even PCI-X may be limiting. And the Intel card was
pretty cheap, under $200.
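For the curious, the arithmetic: plain PCI tops out around 133 MB/s
shared across the whole bus, while PCIe 1.x gives about 250 MB/s per
lane per direction, so an x8 slot is good for roughly 2 GB/s. You can
check what link a card actually negotiated with something like this
(the 03:00.0 address is just an example; find your controller's
address with plain lspci first):

# lspci | grep -i sata
# lspci -vv -s 03:00.0 | grep -iE 'lnk(cap|sta)'

LnkSta shows the live link; an x8 card dropped into an x4 slot will
happily run at half speed without complaining.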
BTW, I got stuck with two spare SATA expander cables (one card port
to four SATA drives) if anyone wants them cheap. I can get the Intel
cards in too, if anyone wants a complete package.