I've been trying to narrow down where my performance bottleneck(s) is (are). I just ran "iperf" from a Windows PC (XP SP3) to my Ubuntu 9.10 server. With iperf in "server" mode on the PC, and the client running on the Ubuntu machine, I get

  0.0-60.0 sec  4.23 GBytes  606 Mbits/sec
Reversed (iperf "server" running on the Ubuntu server, client running on the PC), I get

  0.0-60.0 sec  1.13 GBytes  162 Mbits/sec
Wha...?? Both the client and server runs on the PC report a TCP window size of 8 KB, while the Linux client reports 22.4 KB and the Linux server 85.3 KB. Increasing the window to 256 KB on both ends has little effect. Does anyone have any suggestions?
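(For reference, I was setting the window with iperf's -w flag on both ends, roughly like this; exact invocations from memory:

  iperf -s -w 256K                        # receiving side
  iperf -c 192.168.27.10 -w 256K -t 60    # sending side

with the address swapped for the other direction.)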
The Windows NIC is a Realtek RTL8169/8110 Family Gigabit Ethernet; the Ubuntu server is using the Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01). Time to replace a NIC or two?
Kevin
On a LAN, window sizes aren't going to make that much of a difference.
First look for the usual suspects - errors on the switch port or NIC.
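On the Linux side, something like:

  netstat -i
  ethtool -S eth0 | grep -i -e err -e drop

will show the interface counters (substitute your interface name; the exact counter names vary by driver), and on the XP box "netstat -e" reports errors and discards. Nonzero, climbing counters usually point at a duplex mismatch or a bad cable.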
If you can grab a copy of the traffic, such as with "tcpdump -w test.pcap tcp port 5001", you can pull it into Wireshark and look for TCP zero-window conditions, retransmits, and duplicate ACKs. The TCP performance graph will also show whether the transmission is stalling.
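From memory, the display filters for those are along the lines of:

  tcp.analysis.retransmission
  tcp.analysis.duplicate_ack
  tcp.analysis.zero_window

and the performance graph is under the Statistics menu.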
Is the performance similar using the UDP test? Does it show loss?
Sean
When I choose UDP, I get

  0.0-15.0 sec  1.88 MBytes  1.05 Mbits/sec  6.154 ms  0/ 1339 (0%)
No packet loss. 0.1% utilization!
IIRC UDP needs you to pass the desired bandwidth, otherwise it defaults to a megabit.
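Something like:

  iperf -s -u                           # server side
  iperf -c <server> -u -b 900M -t 15    # client side

should actually exercise the link (adjust the address and rate to taste).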
Sean
More results. Linux iperf client, Windows iperf server:

# iperf -i 5 -t 15 -c 192.168.27.23 -u -b 700M
------------------------------------------------------------
Client connecting to 192.168.27.23, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  126 KByte (default)
------------------------------------------------------------
[  3] local 192.168.27.10 port 56210 connected with 192.168.27.23 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 5.0 sec   428 MBytes   718 Mbits/sec
[  3]  5.0-10.0 sec   408 MBytes   684 Mbits/sec
[  3]  0.0-15.0 sec  1.22 GBytes   700 Mbits/sec
[  3] Sent 892738 datagrams
[  3] Server Report:
[  3]  0.0-15.0 sec  1.16 GBytes   663 Mbits/sec  0.015 ms 47569/892392 (5.3%)
[  3]  0.0-15.0 sec  1 datagrams received out-of-order
Reversed:

C:\Temp>iperf -u -t 15 -c 192.168.27.10 -b 750M
------------------------------------------------------------
Client connecting to 192.168.27.10, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 8.00 KByte (default)
------------------------------------------------------------
[1912] local 192.168.27.23 port 2100 connected with 192.168.27.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[1912]  0.0-15.0 sec   174 MBytes  97.5 Mbits/sec
[1912] Server Report:
[1912]  0.0-15.0 sec   174 MBytes  97.5 Mbits/sec  1.055 ms 0/124429 (0%)
[1912] Sent 124429 datagrams
And then, running the iperf client on the same hardware as was running Windows, but booted into Ubuntu 9.10 (dual boot):

$ iperf -t 15 -c 192.168.27.10 -u -b 750M
------------------------------------------------------------
Client connecting to 192.168.27.10, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  112 KByte (default)
------------------------------------------------------------
[  3] local 192.168.27.23 port 51363 connected with 192.168.27.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-15.0 sec   316 MBytes   177 Mbits/sec
[  3] Sent 225456 datagrams
[  3] Server Report:
[  3]  0.0-15.0 sec   316 MBytes   177 Mbits/sec  0.207 ms 0/225455 (0%)
[  3]  0.0-15.0 sec  1 datagrams received out-of-order
And with TCP:

$ iperf -t 15 -c 192.168.27.10
------------------------------------------------------------
Client connecting to 192.168.27.10, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.27.23 port 53251 connected with 192.168.27.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-15.0 sec   368 MBytes   206 Mbits/sec
The source hardware seems to be having a problem sending. Receiving, less so:

$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.27.23 port 5001 connected with 192.168.27.10 port 37472
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-15.0 sec   949 MBytes   531 Mbits/sec
$ iperf -t 15 -c 192.168.27.23 -u -b 750M
------------------------------------------------------------
Client connecting to 192.168.27.23, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  126 KByte (default)
------------------------------------------------------------
[  3] local 192.168.27.10 port 37815 connected with 192.168.27.23 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-15.0 sec  1.37 GBytes   783 Mbits/sec
[  3] Sent 999330 datagrams
[  3] Server Report:
[  3]  0.0-15.2 sec  1.09 GBytes   613 Mbits/sec  15.609 ms 204247/999327 (20%)
[  3]  0.0-15.2 sec  1 datagrams received out-of-order
...although if I push the (UDP) bandwidth too high, a lot of packets get lost.
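If I get ambitious I may step through rates to find exactly where the loss starts, something like (untested):

  for bw in 300M 400M 500M 600M 700M; do
    iperf -c 192.168.27.23 -u -b $bw -t 10
  done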
Any further thoughts?
Kevin
Actually, I was mainly hoping to verify that it was, indeed, a hardware problem. One person (Trevor) reporting similar results has fairly decent-quality GigE NICs on both sides – or at least what I *assumed* to be fairly decent-quality NICs!
In your case, I’d say yes, it’s time to test different NICs. Obviously one of your NICs is OK – although I wouldn’t want to put much money on which one until I tested thoroughly.
I’d also not be willing to put money on whether it’s the NIC hardware or the software (i.e. device driver) – even though you tested under two OSes, the Linux drivers and the Windows drivers share a lot of code for both Intel and Realtek NICs nowadays.
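At minimum I'd record which driver and version each side is using before swapping anything; on the Linux side something like:

  ethtool -i eth0

will report the driver name, version and firmware (eth0 being whatever interface the server uses), and running "modinfo" on that driver name gives more detail.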
-Adam Thompson
(204) 291-7950
Before you go switching hardware, try swapping cables and switch ports.
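And confirm both ends actually negotiated gigabit/full duplex; on the Linux box something like:

  ethtool eth0 | grep -i -e speed -e duplex

should show Speed: 1000Mb/s and Duplex: Full. A mismatch could produce exactly this kind of one-direction slowness.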
Sean
On 2010-04-07 Adam Thompson wrote:
> Actually, I was mainly hoping to verify that it was, indeed, a hardware problem. One person (Trevor) reporting similar results has fairly decent-quality GigE NICs on both sides – or at least what I *assumed* to be fairly decent-quality NICs!
I should hope so; my NICs are Intel server-grade gigabit on the server and Intel high-end workstation-grade gigabit on the client ($100-$300 NICs, retail).
Kevin, I didn't have time to scan your exact results; is it mostly pc->server that's slow, or server->pc? And your pc is Windows, I gather (XP?).
My big problem has always been windows->linux performance (but never linux->windows). I've given up on it for now, but one thing that made a HUGE difference was turning OFF jumbo packets. I instantly got 5X better performance with jumbo OFF. Yes, my switch is jumbo capable, and it was enabled, and set properly on the pc and linux. Go figure. I blame the Linksys WebSmart switch, but who knows.
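(For anyone wanting to try the same thing, on linux it's just a matter of putting the MTU back to 1500, e.g.:

  ifconfig eth0 mtu 1500

or the equivalent in your distro's network config; on XP the jumbo setting lives in the NIC driver's Advanced properties.)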
Well. I plugged my iMac (Intel, Core 2 Duo T7200, 2.00 GHz) into my gigabit switch and ran iperf on it in server and client mode. The only executable copy I could find for download was 1.7.0, and it was compiled for PowerPC Macs. Here are the results between the iMac and my server:
$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.27.10 port 5001 connected with 192.168.27.29 port 49371
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-20.0 sec  2.17 GBytes   930 Mbits/sec
$ iperf -t 20 -c 192.168.27.29
------------------------------------------------------------
Client connecting to 192.168.27.29, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.27.10 port 44201 connected with 192.168.27.29 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-20.0 sec  2.18 GBytes   936 Mbits/sec
The server and the iMac seem pretty happy to talk to each other -- that's twice the performance of any other TCP result I've had! Just as a check, I plugged the PC into the same cable as the iMac had been plugged into:
$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.27.10 port 5001 connected with 192.168.27.23 port 4701
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-20.0 sec   386 MBytes   162 Mbits/sec
Much lower results. It seems to me that the problem is with the network hardware or TCP/IP stack on the PC side. Does anyone else want to venture an opinion? Or another test to run?
Kevin
And finally (I'm kinda done messing around with this): I took out all other cards save for the AGP video card and re-ran iperf, and got 162 Mb/s with the XP PC sending to the Linux server and 612 Mb/s sending from the server to the PC. Unpleasantly asymmetric.
An abbreviated "lspci" (from the server) for those interested:

03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)
        Subsystem: ASUSTeK Computer Inc. Device 81aa
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 26
        Region 0: I/O ports at de00 [size=256]
        Region 2: Memory at fdfff000 (64-bit, non-prefetchable) [size=4K]
        ...
        Kernel driver in use: r8169
        Kernel modules: r8169
Numerically, the PCI IDs are 10ec:8168 (rev 01); the PC's card reports as 10ec:8169.
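(For anyone comparing: on Linux the numeric IDs show up with

  lspci -nn | grep -i ethernet

and on Windows they're in Device Manager under the device's hardware IDs.)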
I'll wait a bit to see if any new AMD CPUs are coming out, and then buy a quad- or hex-core CPU/motherboard/RAM and hope for a decent network interface.
Kevin
On 2010-04-14 Kevin McGregor wrote:
> And finally (I'm kinda done messing around with this): I took out all other cards save for the AGP video card and re-ran iperf, and got 162 Mb/s with the XP PC sending to the Linux server and 612 Mb/s sending from the server to the PC. Unpleasantly asymmetric.
Finally had a mo to test my systems, which have similar problems.
my comps:
  pog   : Fedora 12, Core2Q, PRO/1000 MT Server NIC
  piles : Fedora 10, PD 3G, PRO/1000 MT Server NIC
  peecee: XP, Core i7, PRO/1000 PT Desktop NIC (PCIe)
  switch: Linksys WebSmart SRW2016, jumbo OFF
All NICs have as much offloading turned on as possible.
iperf results in order of slow to fast:
  -s      -t      Mbit/s
  piles   peecee  300
  pog     peecee  351
  peecee  pog     400
  peecee  piles   401
  piles   pog     743
  pog     piles   744
Now, I can't seem to figure out whether it's the -s side that's sending or the -t side, so I just list them as above. Very consistent results, with anything involving the PC (my fastest hardware!) being much slower than what I get linux->linux. Sure, the peecee has a "lesser" NIC, but it's still an expensive one.
The above results mirror what I see in daily life going from piles (file server) to peecee using samba (the only thing I care about from peecee).
When I have another mo, I'll boot a live linux CD and test peecee with that to see if it's the hw to blame or simply XP. I've seen lots of reports about XP having braindead TCP and regedit tweaks to make it faster. I've tried many of them with little success.
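The usual suspects are the Tcpip\Parameters values, something like this (from memory; the exact numbers people recommend vary all over the place):

  reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v Tcp1323Opts /t REG_DWORD /d 3 /f
  reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpWindowSize /t REG_DWORD /d 262144 /f

(window scaling/timestamps plus a bigger default window, reboot required). Didn't do much for me.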
If anyone wants to also guess as to why my network speed about doubles when I turn OFF jumbo packets(!!!), please be my guest! (Yes, I'm pretty sure I had jumbos configured properly on all NICs/switch, etc.)
> I'll wait a bit to see if any new AMD CPUs are coming out, and then buy a quad- or hex-core CPU/motherboard/RAM and hope for a decent network interface.
Or buy a nice NIC now :-) The onboard NICs are almost always substandard and often crippled (no jumbo, etc.).
I don’t have much constructive to offer here, but I’ve heard of several people encountering much the same thing – asymmetric network performance between a Windows PC and a Linux file server. Your observations with iperf, however, exclude Samba – which means it’s not a Samba-specific problem, it’s a Linux-TCP-stack-to-Windows-TCP-stack problem.
Can you tell us what kernel version, and what Windows version, you’re running?
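e.g.:

  uname -r                                             # on the Ubuntu box
  systeminfo | findstr /C:"OS Name" /C:"OS Version"    # on the Windows box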
Also, do you happen to have a third system from which you could provide comparative results?
-Adam Thompson
(204) 291-7950