Look back in dmesg output for the RAID module speed tests, notice which one was selected, and divide that number by 2. There's your theoretical bottleneck on the CPU.
Take the minimum sustained disk/channel/controller throughput, factor in interrupt latency, device driver efficiency, etc., and make a rough guess as to the overall throughput.
Consider that the md code seems to have a lot of write barriers for safety - so even a rebuild will spend much of its time waiting for the disk to sync().
All in all, I think your numbers are probably reasonable.
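A sketch of where to look, if you want to poke at it yourself (the paths are the standard md sysctls; the echo assumes root, and the defaults shown are the typical ones, so verify on your own kernel):

```shell
# Kernel log shows the parity-speed benchmark md ran at boot:
dmesg | grep -i 'raid6\|xor' | tail -n 12

# The resync rate is throttled by two sysctls (values are KB/s per device):
cat /proc/sys/dev/raid/speed_limit_min   # floor, typically 1000
cat /proc/sys/dev/raid/speed_limit_max   # ceiling, typically 200000
# Raising the floor gives the rebuild priority over normal I/O:
echo 50000 > /proc/sys/dev/raid/speed_limit_min
# Watch it take effect:
grep -A2 resync /proc/mdstat
```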
-Adam
Kevin McGregor <kevin.a.mcgregor(a)gmail.com> wrote:
>I installed Ubuntu Server 10.04.2 LTS AMD64 on a HP ProLiant ML370 G3 (4 x
>dual-core/hyperthreaded Xeon 2.66 GHz, 8 GB RAM) and I used the on-board
>SCSI controller to manage 8 x 300 GB 15K RPM SCSI drives in a software RAID
>5 set up as a 7-drive array with 1 hot-spare drive. All drives are the exact
>same model with the same firmware version.
>
>It's currently rebuilding the array (because I just created the array) and
>/proc/mdstat is reporting "finish=165.7min speed=25856K/sec". Does that
>sound "right" in the sense that it's the right order of magnitude? I thought
>it should be higher, but I haven't set up such an array before, so I don't
>have anything to compare it to.
>
>If it's slow, does anyone have a suggestion for speeding it up?
>
>Kevin
>
>_______________________________________________
>Roundtable mailing list
>Roundtable(a)muug.mb.ca
>http://www.muug.mb.ca/mailman/listinfo/roundtable
In any event, yes, a single U320 bus can only do 320 MB/s in aggregate, and its practical limit is around 80% of that. HP's external enclosures are not exactly renowned for being well-balanced equipment...
Only one model in any given product generation will be a single-bus design; that's the economy model.
On a Sun E450 with Ultra-II (40MB/sec) drives, the system backplane is engineered so that no more than 4 drives are connected to any one channel, for example.
Generally speaking, the 4-disk-per-channel limit is sound engineering.
There's also a curious effect that I've seen but can't explain, where more spindles actually slow down the array drastically... I suspect that if you recreate that array as RAID 50, you'd see performance gains.
-Adam
Trevor Cordes <trevor(a)tecnopolis.ca> wrote:
>On 2011-05-20 Kevin McGregor wrote:
>> When the SCSI controller BIOS is initialized, it lists all of the
>> drives on both channels, and most of them (as I recall) are described
>> as being set to 80 MB/s with a note that the OS will probably set
>
>Weird, see if the SCSI BIOS setup screens let you manually select the
>SCSI speed. You might have to drill down to the per-drive area. I
>know my U320 card gives me choices.
>
>80MB/s per drive isn't shabby, though your drives can probably push
>100-150MB/s.
>_______________________________________________
>Roundtable mailing list
>Roundtable(a)muug.mb.ca
>http://www.muug.mb.ca/mailman/listinfo/roundtable
>
I can tell you that those numbers are pretty much bang on. Writes would be faster if you enabled the drives' write caches; the HP controller leaves them at their default, which is disabled.
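Checking and flipping the write cache can be done from Linux with sdparm; a sketch (WCE is the SCSI "Write Cache Enable" mode-page bit; the device name /dev/sda is a placeholder for one of your array members):

```shell
# Show the current Write Cache Enable (WCE) bit on a SCSI disk
sdparm --get=WCE /dev/sda
# Turn the write cache on (current values only; add --save to persist
# across a power cycle, if the drive supports saved pages)
sdparm --set=WCE /dev/sda
```

Keep in mind the usual caveat: a volatile write cache on RAID members trades durability for speed if the box loses power mid-write.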
The RAID rebuild speed should have been about 90% of that 122 MB/s figure, which it obviously wasn't. I don't have any good theories why at the moment.
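A quick arithmetic check of the dd figures against the bus ceiling (the 0.80 efficiency factor is the rule-of-thumb from earlier in the thread, not a measured value; the read figure can legitimately exceed the bus ceiling because dd is reading back a file that is partly in the page cache):

```shell
# bytes / seconds -> MB/s, straight from the dd output above
awk 'BEGIN {
  write = 34359738368 / 281.034 / 1e6
  read  = 34359738368 / 126.21  / 1e6
  bus   = 320 * 0.80               # practical per-bus ceiling, ~80% of U320
  printf "write=%.0f MB/s read=%.0f MB/s bus~%.0f MB/s\n", write, read, bus
}'
# -> write=122 MB/s read=272 MB/s bus~256 MB/s
```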
-Adam
Kevin McGregor <kevin.a.mcgregor(a)gmail.com> wrote:
>So far I made this
>md_d0 : active raid10 sdn1[12](S) sdm1[13](S) sdl1[11] sdk1[10] sdj1[9]
>sdi1[8] sdh1[7] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
> 1757804544 blocks 512K chunks 2 near-copies [12/12] [UUUUUUUUUUUU]
>and got this
># dd if=/dev/zero of=/srv/d0/bigzerofile bs=1M count=32768
>32768+0 records in
>32768+0 records out
>34359738368 bytes (34 GB) copied, 281.034 s, 122 MB/s
># dd of=/dev/null if=/srv/d0/bigzerofile bs=1M
>32768+0 records in
>32768+0 records out
>34359738368 bytes (34 GB) copied, 126.21 s, 272 MB/s
>
>I'm wondering if 12 drives would over-saturate one Ultra-320 channel.
>Doesn't Ultra-320 suggest a maximum usable (or theoretical) capacity of 320
>MB/s? I could try setting up a stripe set/RAID0 of varying numbers of drives
>and compare. What do you think?
>
>I don't think the external enclosure (HP MSA30?) allows for splitting the
>drives into two groups. Only one cable can be connected to it, although
>there may have been an option for a second at purchase.
>
>On Thu, May 19, 2011 at 9:27 PM, Adam Thompson <athompso(a)athompso.net>wrote:
>
>> Well, I did say there was more than one variable involved! 25MB/sec is a
>> bit slow, but I don't know how efficient that LSI chip is at managing bus
>> contention, or how efficient the kernel driver for that chip is. I do know
>> from experience that 8-disk arrays are well into the land of diminishing
>> returns from a speed perspective: RAID-1 on two disks or RAID-10 on four
>> seems to be the sweet spot for speed.
>>
>>
>>
>> Once your rebuild has finished, I would recommend doing some throughput
>> tests (both reading and writing) on the array; something perhaps like:
>>
>> # sync; time sh -c 'dd if=/dev/zero of=/mnt/raidarray/BIGFILE.ZERO bs=1M
>> count=1024; sync'
>>
>> followed by
>>
>> # time dd if=/dev/md0 of=/dev/null bs=1M count=1024
>>
>> Those are both very naïve approaches, but should give you a feel for the
>> maximum read and write speeds of your array. I strongly suspect those
>> numbers will be much higher than the raid re-sync rate, again, mostly due to
>> write-flush barriers in the md code.
>>
>>
>>
>> I’m interested in knowing how this ends, personally… please let us know.
>>
>>
>>
>> -Adam
>>
>>
>>
>> (P.S. does anyone know how to avoid top-posting in Outlook 2010?)
>>
>>
>>
>>
>>
>> *From:* roundtable-bounces(a)muug.mb.ca [mailto:
>> roundtable-bounces(a)muug.mb.ca] *On Behalf Of *Kevin McGregor
>> *Sent:* Thursday, May 19, 2011 14:52
>> *To:* Continuation of Round Table discussion
>> *Subject:* Re: [RndTbl] RAID5 rebuild performance
>>
>>
>>
>> raid6: using algorithm sse2x2 (2883 MB/s). So, 1% of that is reasonable?
>> :-) Oh well, I guess I can wait until tomorrow for the rebuild to finish.
>>
>> On Thu, May 19, 2011 at 2:12 PM, Adam Thompson <athompso(a)athompso.net>
>> wrote:
>>
>> Look back in dmesg output for the RAID module speed tests, notice which one
>> was selected, and divide that number by 2. There's your theoretical
>> bottleneck on the CPU.
>> Take the minimum sustained disk/channel/controller throughput, factor in
>> interrupt latency, device driver efficiency, etc. and make a rough guess as
>> to the overall throughput.
>> Consider that md code seems to have a lot of write barriers for safety -
>> so even a rebuild will spend much of its time waiting for the disk to
>> sync().
>> All in all, I think your numbers are probably reasonable.
>> -Adam
>>
>>
>>
>> Kevin McGregor <kevin.a.mcgregor(a)gmail.com> wrote:
>>
>> >I installed Ubuntu Server 10.04.2 LTS AMD64 on a HP ProLiant ML370 G3 (4 x
>> >dual-core/hyperthreaded Xeon 2.66 GHz, 8 GB RAM) and I used the on-board
>> >SCSI controller to manage 8 x 300 GB 15K RPM SCSI drives in a software RAID
>> >5 set up as a 7-drive array with 1 hot-spare drive. All drives are the exact
>> >same model with the same firmware version.
>> >
>> >It's currently rebuilding the array (because I just created the array) and
>> >/proc/mdstat is reporting "finish=165.7min speed=25856K/sec". Does that
>> >sound "right" in the sense that it's the right order of magnitude? I thought
>> >it should be higher, but I haven't set up such an array before, so I don't
>> >have anything to compare it to.
>> >
>> >If it's slow, does anyone have a suggestion for speeding it up?
>> >
>> >Kevin
>> >
>> >_______________________________________________
>> >Roundtable mailing list
>> >Roundtable(a)muug.mb.ca
>> >http://www.muug.mb.ca/mailman/listinfo/roundtable
>>
>> _______________________________________________
>> Roundtable mailing list
>> Roundtable(a)muug.mb.ca
>> http://www.muug.mb.ca/mailman/listinfo/roundtable
>>
>>
>>
I installed Ubuntu Server 10.04.2 LTS AMD64 on a HP ProLiant ML370 G3 (4 x
dual-core/hyperthreaded Xeon 2.66 GHz, 8 GB RAM) and I used the on-board
SCSI controller to manage 8 x 300 GB 15K RPM SCSI drives in a software RAID
5 set up as a 7-drive array with 1 hot-spare drive. All drives are the exact
same model with the same firmware version.
It's currently rebuilding the array (because I just created the array) and
/proc/mdstat is reporting "finish=165.7min speed=25856K/sec". Does that
sound "right" in the sense that it's the right order of magnitude? I thought
it should be higher, but I haven't set up such an array before, so I don't
have anything to compare it to.
If it's slow, does anyone have a suggestion for speeding it up?
Kevin
You buy the bare drives from online resellers of OEM equipment and put them on the hot-swap trays yourself (after removing the failed drive from said tray, that is).
In other words, the cost of the hot-swappiness is $0 because you're re-using that part... You only have to pay for the bare drive.
-Adam
Kevin McGregor <kevin.a.mcgregor(a)gmail.com> wrote:
>Well... I wasn't sure what the best choice was, given the circumstances.
>This isn't an array for production, just development or (more likely)
>testing. And the big thing is, I'm re-using existing equipment, and I had 8
>300 GB 15K RPM drives, and no further replacements handy. One can probably
>get those drives if need be, but the City wouldn't likely pony up the cost.
>So I figured I'd use only 7 drives with one spare for RAID5. I could have
>gone RAID6 on eight drives for the same capacity, but when one drive fails
>I'll be back to RAID5 (sort of) anyway.
>
>Just out of curiosity, where would you get HP 300 GB 15K RPM "universal hot
>swap" drives from, and what do they cost these days? I see on eBay one
>listing for US$350/drive.
>
>Kevin
>
>On Thu, May 19, 2011 at 5:25 PM, Trevor Cordes <trevor(a)tecnopolis.ca> wrote:
>
>> On 2011-05-19 Kevin McGregor wrote:
>> > I installed Ubuntu Server 10.04.2 LTS AMD64 on a HP ProLiant ML370 G3
>> > (4 x dual-core/hyperthreaded Xeon 2.66 GHz, 8 GB RAM) and I used the
>> > on-board SCSI controller to manage 8 x 300 GB 15K RPM SCSI drives in
>> > a software RAID 5 set up as a 7-drive array with 1 hot-spare drive.
>> > All drives are the exact same model with the same firmware version.
>> >
>> > It's currently rebuilding the array (because I just created the
>> > array) and /proc/mdstat is reporting "finish=165.7min
>> > speed=25856K/sec". Does that sound "right" in the sense that it's the
>>
>> I got around 20-30MB/s or so on my RAID6 7200rpm 12TB 8-disk rebuild this
>> week. That was on an old Pentium-D but with a nifty zippy new 8-port
>> SATA card. Your speeds sound a touch slow, given the hardware. But
>> RAID5/6 does weird things behind the scenes.
>>
>> Note, if you're doing 8 drives anyhow, why not RAID6? Its
>> survivability is much higher and its performance is surprisingly nearly
>> that of RAID5 (there's some graphs somewhere I was recently looking
>> at). The only downside is degraded performance sucks, but hopefully
>> you will never be in that state (long). I've personally had/seen 2
>> RAID5 failures and will never use anything except RAID6 now.
>> _______________________________________________
>> Roundtable mailing list
>> Roundtable(a)muug.mb.ca
>> http://www.muug.mb.ca/mailman/listinfo/roundtable
>>
>
>_______________________________________________
>Roundtable mailing list
>Roundtable(a)muug.mb.ca
>http://www.muug.mb.ca/mailman/listinfo/roundtable
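Trevor's point that RAID6 on all eight drives matches the 7-drive-plus-spare RAID5 layout in usable capacity is easy to verify; a quick check using the 300 GB drives from this thread:

```shell
awk 'BEGIN {
  d = 300                  # GB per drive
  raid5 = (7 - 1) * d      # 7-drive RAID5: one drive of parity, spare sits idle
  raid6 = (8 - 2) * d      # 8-drive RAID6: two drives of parity, no spare needed
  printf "raid5=%dGB raid6=%dGB\n", raid5, raid6
}'
# -> raid5=1800GB raid6=1800GB
```

Same 1800 GB either way, but the RAID6 layout survives any two simultaneous drive failures instead of relying on a rebuild-to-spare window.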
FYI...
-------- Original Message --------
Subject: USENIX HotCloud '11 Program Now Available
Date: Mon, 16 May 2011 13:53:25 -0700
From: Lionel Garth Jones <lgj(a)usenix.org>
To: info(a)muug.mb.ca, gedetil(a)muug.mb.ca
We'd like to invite you to attend the 3rd USENIX Workshop on Hot
Topics in Cloud Computing (HotCloud '11), taking place in Portland,
OR, June 14-15, 2011.
HotCloud '11 will discuss challenges in the cloud computing paradigm,
including the design, implementation, and deployment of virtualized
clouds.
The program includes:
-- Refereed paper sessions on scheduling and resource management, economics,
security, OSes and frameworks, and more
-- Joint ATC, WebApps, and HotCloud Keynote Address: "An Agenda for
Empirical Cyber Crime Research," by Stefan Savage, UCSD
-- Joint ATC, WebApps, and HotCloud Invited Talk: "Helping Humanity
with Phones and Clouds" by Matthew Faulkner and Michael Olson,
Caltech
-- Panel
-- Poster Session: This session provides an opportunity to present
early-stage work and receive feedback from the community. Posters
provide a great way to have more in-depth conversations between
authors and audience--so much so that we are automatically granting
each accepted paper a poster slot. We welcome submissions from those
who are not paper authors also. The submission deadline is Monday,
May 30, 2011, at 3:00 p.m. PDT. Find out more at:
http://www.usenix.org/events/hotcloud11/poster.html
The full program can be found at
http://www.usenix.org/hotcloud11/proga
Don't miss this opportunity to engage in dynamic discussion on
key topics in the cloud computing community. Register today at
http://www.usenix.org/events/fcw11/registration/
HotCloud '11 is part of 2011 USENIX Federated Conferences Week,
which will take place June 14-17, 2011, in Portland, OR.
http://www.usenix.org/fcw11/
We look forward to seeing you in Portland!
Ion Stoica, University of California, Berkeley
John Wilkes, Google
HotCloud '11 Program Co-Chairs
hotcloud11chairs(a)usenix.org
----------------------------------
3rd USENIX Workshop on Hot Topics in Cloud Computing (HotCloud '11)
June 14-15, 2011
Portland, OR
Sponsored by USENIX in cooperation with ACM SIGOPS
Early Bird Registration Deadline: Monday, May 23, 2011
http://www.usenix.org/hotcloud11/proga
----------------------------------
I'm trying to set up SNMP and MRTG on my SheevaPlug computer, and I can't
quite get it to work. I've followed the guides like
http://www.debianhelp.co.uk/mrtg.htm and
http://jitamitra.blogspot.com/2009/02/snmp-and-mrtg-on-debian.html
...but the part that doesn't work is
root@sheeva:/etc/snmp# indexmaker /etc/mrtg.cfg --columns=1 --output
/var/www/mrtg/index.html
Use of uninitialized value $first in hash element at /usr/bin/indexmaker
line 353.
root@sheeva:/etc/snmp# snmpwalk -v 1 -c public localhost IP-MIB::ipAdEntIfIndex
MIB search path:
/root/.snmp/mibs:/usr/share/mibs/site:/usr/share/snmp/mibs:/usr/share/mibs/iana:/usr/share/mibs/ietf:/usr/share/mibs/netsnmp
Cannot find module (IP-MIB): At line 0 in (none)
IP-MIB::ipAdEntIfIndex: Unknown Object Identifier
root@sheeva:/etc/snmp# snmpwalk -v 1 -c public localhost system
system: Unknown Object Identifier (Sub-id not found: (top) -> system)
Any suggestions?
Kevin
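[Those "Cannot find module (IP-MIB)" and "Unknown Object Identifier" errors usually mean the MIB definition files themselves are missing, not that snmpd is broken; on Debian-based systems the standard MIBs are packaged separately as non-free. A hedged sketch of the usual fix (package name is the Debian one, so verify it exists on your release):]

```shell
# Install the IETF/IANA MIB files (downloads them, since they are non-free):
apt-get install snmp-mibs-downloader
# Enable MIB loading by commenting out the bare "mibs :" directive,
# which otherwise tells the tools to load no MIBs at all:
sed -i 's/^mibs :/#mibs :/' /etc/snmp/snmp.conf
# Retry with a symbolic OID:
snmpwalk -v 1 -c public localhost system
```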
FYI...
-------- Original Message --------
Subject: 2011 USENIX Annual Technical Conference Program Now Available
Date: Fri, 13 May 2011 15:09:26 -0700
From: Lionel Garth Jones <lgj(a)usenix.org>
To: info(a)muug.mb.ca, gedetil(a)muug.mb.ca
We're pleased to invite you to attend the 2011 USENIX Annual Technical
Conference (USENIX ATC '11): http://www.usenix.org/atc11/proga
USENIX ATC '11 is again part of the USENIX Federated Conferences Week.
Not only do you get a 3-day conference program filled with the latest
systems research, but you'll also have increased opportunities to
network with peers across multiple disciplines.
The technical program begins on Wednesday, June 15, and includes
refereed papers, invited talks, and a poster session.
http://www.usenix.org/events/atc11/tech/
The impressive slate of invited speakers includes:
- Keynote Address: "An Agenda for Empirical Cyber Crime Research," by
Stefan Savage, UCSD
- Plenary Talk: "Dead Media: What the Obsolete, Unsuccessful,
Experimental, and Avant-Garde Can Teach Us About the Future of Media,"
by Finn Brunton, NYU
- "Helping Humanity with Phones and Clouds," by Matthew Faulkner and
Michael Olson, Caltech
- "Linux PC Robot," by Mark Woodward, Mohawksoft
The USENIX ATC '11 Refereed Papers present the latest in groundbreaking
systems research. Be among the first to check out the latest innovative
work in the systems field.
A joint Poster Session and Happy Hour between USENIX ATC '11 and
WebApps '11 will be held on the evening of Wednesday, June 15. The
poster session is an excellent forum to discuss new ideas and get useful
feedback from the community. Poster submissions are due May 13, 2011.
http://www.usenix.org/events/atc11/poster.html
Finally, don't miss the opportunity to mingle with colleagues and
leading experts in the combined Birds-of-a-Feather sessions and at the
various evening social events, including the joint poster session,
vendor BoFs, and receptions.
http://www.usenix.org/events/fcw11/activities.html
The full program is available at:
http://www.usenix.org/events/atc11/tech/
Register by May 23, 2011, for the greatest savings.
http://www.usenix.org/events/fcw11/registration/
USENIX ATC '11 promises to be an exciting showcase for the latest in
innovative research and cutting-edge practices in technology. We look
forward to seeing you in Portland.
On behalf of the USENIX ATC '11 Program Committee,
Jason Nieh, Columbia University
Carl Waldspurger
USENIX ATC '11 Program Chairs
atc11chairs(a)usenix.org
P.S. Connect with other attendees, check out additional discounts, and
help spread the word!
Facebook: http://www.facebook.com/event.php?eid=176554335729285
Twitter: http://twitter.com/usenix #ATC11
Additional Discounts: http://www.usenix.org/events/fcw11/discounts.html
Help Promote: http://www.usenix.org/events/atc11/promote.html
----------------------------------------------
2011 USENIX Annual Technical Conference
June 15-17, 2011, Portland, OR
http://www.usenix.org/atc11/proga
Early Bird Registration Deadline: May 23, 2011
Part of USENIX Federated Conferences Week
http://www.usenix.org/fcw11
-----------------------------------------------
Unfortunately, no-one is willing to be the bad guy in that story... Not even a *country* can really pull it off.
Think about how many non-IPv6-capable devices there are out there: virtually every single home router, printer, modem, camera, etc.
Now as soon as a flag day is declared, the self-entitled of the world will rise up and say to their government, "who's going to pay for my new equipment?" Never mind that we've all known this day would come for over 10 years...
On the other hand, I might turn out to be the first who actually has to manage a dual-stack network... and be willing to talk about it, anyway. Assuming I'm not on powerful drugs as a result of doing so! Holy **** does it get complicated!
-Adam
Trevor Cordes <trevor(a)tecnopolis.ca> wrote:
>On 2011-05-11 Sean Cody wrote:
>> Anyone have an interest in, or is anyone implementing, IPv6 anywhere?
>>
>> An intro to ipv6 would be a great presentation topic so if you can
>> share your experience, please do!
>
>Seconded. But don't look at me.
>
>Does anyone know when home ISP's like Shaw will start to offer IPv6 to
>home users? I don't think v6 will go anywhere until the ISP's with
>their massive IP pools start switching end users to it. Correct?
>
>All of this 6-to-4 stuff seems stupid and overly complex. I would like
>to just see a day picked where 4 is shutoff and only 6 can be used.
>We'll all be !@$#%ing our pants for a few days/weeks but then it'll be
>done.
>_______________________________________________
>Roundtable mailing list
>Roundtable(a)muug.mb.ca
>http://www.muug.mb.ca/mailman/listinfo/roundtable
>