
Gigabit Ethernet Performance Issues (Solved: The cause was SSHFS.)


Evening ladies and gents,

We have recently rewired our apartment because the previous owners' concept left

a bit to be desired (Cat 5 cable instead of Cat 5e, and the wiring scheme was

not very suited to our purposes).

For the most part it is now a star topology, with the exception of our living room setup (it doesn't make sense to lay a separate cable for every device there) and the one room which is too far away from the central switch to justify separate wires for the printer and PC.

Besides devices dropping in and out via WiFi, there's the occasional laptop

which would get connected at various spots, but for the most part, our network

looks something like this:

[Network diagram: aw--network--2014-01-17--01.png]

The E4200 has served us quite well so far, but at the moment the central switch is just an unmanaged Netgear GS108. It is a perfectly fine device for hooking up a printer or a laptop, etc., but for this job it is (in my opinion) not very well suited.

Therefore, I have been contemplating upgrading to something a bit more beefy

(for example something from the Cisco SG300 or SG200 series, though I have

not yet made a decision).

Questions

  • At the moment, the transfer rates between two machines on the network

    hover around the 250 Mbit/s mark, which is obviously not all that gigabit-y.

    None of the machines I have available have great network controllers, so

    I don't expect utterly phenomenal performance even with the best networking

    infrastructure. But still, would an upgrade to something like an SG200 or

    SG300 yield (noticeable) performance benefits?

  • At the moment, the DHCP server function is being performed by the E4200.
    Would it make sense (again, performance-wise) to buy a switch with an
    integrated DHCP server and relegate the E4200 to basically just a WiFi
    hotspot (I might still have the internet uplink come in through it for
    wiring reasons, though I don't expect that to affect internal network
    performance)? Or is it not very relevant where the DHCP server sits in
    the network, as long as the main switch is good enough?

  • Note that we're not going to be permanently pushing massive amounts of

    data from several machines criss-cross around the network, but I do have

    a few terabytes of data to keep in sync with my backups, for which it would

    be nice to have a bit more bandwidth than a quarter gigabit per second.

Yes, I am aware that there are many other functions that come along with a better switch (the SG300's manual has ~550 pages, after all), and they are not irrelevant to me. However, most of that can be looked up in the manuals, unlike the questions above. I might upgrade for those features alone even if there aren't any tangible performance benefits, but it would be nice to get a better picture before making the decision.

Apologies for the wall of text, and thanks for any help. :)


I really like the SG200 series; I have installed them in multiple locations for many of my clients.

 

1. It probably wouldn't increase the speeds between your computers, but it depends on the computers, etc.

2. It doesn't really matter where your DHCP server is; it won't affect performance.

3. Being able to transfer only around 250 Mbit/s, you may be hitting other bottlenecks on your network, such as the read/write speed of the disks.


a) For more than 10 years now, Cat 5 cable has followed the stricter Cat 5e standard; all Cat 5 cables manufactured after 2002/2003 should be Cat 5e cables.

b) 250 Mbit/s seems low. I could transfer files at >400 Mbit/s between a file server powered by a single Intel Atom core and a notebook (Netgear consumer router + HP switch + long Cat 5(e) cables). And those 400 Mbit/s (50 MByte/s) are basically the max read/write speed of the drives in that specific server.


I really like the SG200 series; I have installed them in multiple locations for many of my clients.

Good to know. :)

 

1. It probably wouldn't increase the speeds between your computers, but it depends on the computers, etc.

Yeah, I have definitely been considering that, but unfortunately I don't have that many machines to tinker around with, so I'll never be able to say for sure until I actually buy a new switch and try things out.

 

2. It doesn't really matter where your DHCP server is; it won't affect performance.

That's what I was thinking, makes sense. But I'm not that well-versed in optimizing

network performance, so I wasn't quite certain.

 

3. Being able to transfer only around 250 Mbit/s, you may be hitting other bottlenecks on your network, such as the read/write speed of the disks.

I have considered that, and indeed the ZFS pool of the machine I'm copying from is

very full and therefore performance has degraded significantly. However, the pool

can still read at ~70 to 80 MB/s (sustained), and the machine I'm copying to (a laptop)

can copy a file internally at around 100 MB/s, so I don't think the disks are the

bottleneck.

 

a) For more than 10 years now Cat-5 follows the stricter Cat-5e standards. All Cat5 cables manufactured after 2002/2003 should be Cat-5e cables

The house was built in 2002/2003. They had the ridiculous idea of running the cables down to a switchboard in the basement and then back up (we live on the 2nd floor), which IMO is an almost hilarious security oversight: the basement is a more or less publicly accessible area and the cupboard is not really locked at all, so anyone can basically just go down there, open the cupboard and hook themselves into our network (yeah, I know that's a bit paranoid, but that's who I am ;)).

In any case, whenever we hooked a computer up to their cables we got 100 Mbit/s, not gigabit. It could of course have been something else besides the cables, but the infrastructure only delivered 100 Mbit/s, that's for sure.

Leaving that aside, we needed to rewire anyway because their network topology didn't

match our needs at all.

But thanks for the heads up, good to know. :)

 

b ) 250Mbit/s seams low. I could transfer files with >400Mbit/s between a single Intel atom core powered file server and a notebook (Netgear consumer router + HP switch + long cat5(e) cables). And those 400Mbit / 50 Mbyte basicaly are the max read/write speed of the drives in that specific server.

Hm, indeed my speeds seem rather low compared to yours. See above for my remarks regarding

disk speeds. But thanks for the numbers, at least that gives me a frame of reference to

compare to.

I looked up what NIC my server uses (MSI Z77A-GD65 motherboard), and it's an Intel 82579V,

and the laptop (a Dell XT2) uses an Intel 82567LM. I haven't been able to find

any performance numbers on them though.

Thanks to the both of you for your help so far. :)


I looked up what NIC my server uses (MSI Z77A-GD65 motherboard), and it's an Intel 82579V,

and the laptop (a Dell XT2) uses an Intel 82567LM. I haven't been able to find

any performance numbers on them though.

 

How did you measure that speed? Did you copy a large file (movie, etc.) or did you use any tools?


How did you measure that speed? Did you copy a large file (movie, etc.) or did you use any tools?

Copied a ~1.7 GB file, measured transfer speeds on the interface with nload

and actual file copy speed by piping the file through pv.

Since that's what I'll actually be using most of the network bandwidth for anyway

it seemed an apt test, at least to get a rough idea of things.
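For reference, the commands looked roughly like this (interface name and paths here are just placeholders):

    # watch live throughput on the network interface
    nload eth0

    # copy the test file through pv to get a running transfer rate
    pv /mnt/share/bigfile > /tmp/bigfile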

Of course, I am not averse to alternative suggestions; the OS on both machines is Linux (Arch, to be specific).


Since your current switch is already full gigabit with a non-blocking switching fabric, you should already be able to transfer at full gigabit rate. I think you should first figure out why that isn't happening before you think of buying a better switch. Try connecting two computers directly to one another (no switch in between) and see what speeds you get then. If they are able to transfer at gigabit without the switch, then your switch is probably malfunctioning. If they can't reach full gigabit in that scenario, you know that you need to look at something other than the switch.
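In case it helps, setting up such a direct link on Linux is just a matter of giving both ends a static address in the same subnet; a minimal sketch (interface names and the 192.168.50.0/24 subnet are arbitrary, and gigabit NICs generally handle the crossover automatically, so a normal patch cable should be fine):

    # machine A (as root)
    ip addr add 192.168.50.1/24 dev eth0
    ip link set eth0 up

    # machine B (as root)
    ip addr add 192.168.50.2/24 dev eth0
    ip link set eth0 up

Then copy a large file between the two and watch the rate as before.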

 

The DHCP server, as long as it's functioning correctly, won't have a noticeable impact on your throughput. The only time it's active is when a device first connects to the network or when an IP lease needs to be renewed.

 

For your needs, any full gigabit switch with jumbo frame support and a non-blocking fabric from a reputable brand should be enough. Just make sure it has enough ports ;)

 

EDIT: as long as you don't require traffic shaping or something like that, you don't need a managed switch.


Just make sure it has enough ports ;)

Hehe, yeah, that's another reason I'm considering an upgrade; ideally I'd like to have a few more ports than 8.

I shall investigate more and report results.

Thanks! :)


Haha!

I think I figured it out: the problem was SSHFS (which, in hindsight, should have been an obvious suspect). While it might be very convenient to use (and nicely secured thanks to SSH, at least AFAIK), you take a hard performance hit when copying from a share mounted via SSHFS. I recall reading something about this a while back, but had unfortunately forgotten about it.

Anyway, the numbers:

  • SSHFS, direct connection between machines: sustained about
    200 Mbit/s to 230 Mbit/s from server to laptop, and about 90 Mbit/s
    from laptop to server. I can only guess that this is because traffic
    needs to be encrypted before sending, and the laptop has a much
    weaker CPU than ZEUS. I'm not an expert on the inner workings of
    SSH though, so feel free to correct me if I'm wrong on that one.
    Interestingly, CPU usage does not really spike during this, so I'm
    still not 100% certain what's going on. To be sure, I also tested
    with different cables; the result was always the same.

  • NFSv4, connection via the switches: sustained a breezy
    600 Mbit/s to 650 Mbit/s from server to laptop, which is actually
    about the current read limit of my ZFS pool, so I think the
    network is OK.

To summarize...

SSHFS is awesome, but you take a serious performance hit when copying

large amounts of data (presumably due to the encryption). With NFS I

can at least reach the limit of my disk array for the time being, so I

consider my network performance issues resolved.

Now the only thing to consider is if I really need more ports. :D

Thanks for the help everyone, much appreciated! :)

EDIT: Renamed topic to something more useful in case anyone else

ever comes across this.


Haha!

I think I figured it out: the problem was SSHFS (which, in hindsight, should have been an obvious suspect)....

 

 

haha :D

 

No real need for encrypted file transfer in a home LAN though :P


haha :D

 

No real need for encrypted file transfer in a home LAN though :P

Need to make sure my family can't see the pr0n going between my machines. :lol:

Seriously though, of course you're right.

I mostly use SSHFS because it's very convenient and has always just worked out of the box, very reliably. And the performance is perfectly fine for streaming movies and such. But once I have my new server up and running I shall definitely switch to NFS shares, especially considering we also have a Windows machine in our network, and SSHFS performance on that is just abysmal (as in, <70 Mbit/s :lol: ).

I just don't want to use Samba if it's in any way avoidable; I've always had trouble with it (and besides, now that I think about it, the benchmarks I read about SSHFS's comparatively poor performance also showed that NFS was faster than Samba, if my memory is not deceiving me).


I just don't want to use Samba if it's in any way avoidable; I've always had trouble with it

Weird, I find it the most straightforward and flexible way of setting up LAN shares.

 

Hadn't heard of SSHFS, nice to know, was actually looking for something similar about a year ago :)


I just don't want to use Samba if it's in any way avoidable; I've always had trouble with it (and besides, now that I think about it, the benchmarks I read about SSHFS's comparatively poor performance also showed that NFS was faster than Samba, if my memory is not deceiving me).

 

I only use AFP locally (Mac clients only :P) and SFTP/SSHFS for remote access


Weird, I find it the most straightforward and flexible way of setting up LAN shares.

 

There were times when I could get it to work as intended, but it was always an uphill battle. Then again, I'm no network guru (especially not on the Windows side of things); it might well be that it's super-easy if you actually know what you're doing.

But sharing a Linux directory with a Windows machine and having all the permissions work correctly has always been a little hit-or-miss for me, despite being thorough in reading up on things beforehand.

Of course, it could just be that the problem was Windows, not Samba. In any

case, I need to do a reinstall of my dad's Windows machine soon anyway (its

behaviour has been a little hinky lately), and will be installing Win7 Ultimate,

which has NFS support, so I'll be trying that before resorting to Samba.

I must say getting NFS to work between my two Linux machines today was very easy; it took about 20 minutes of reading and tinkering and then it was working. I'm wondering if getting it to work with Windows will be as easy...
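For reference, the whole setup boiled down to something like this (pool path, subnet, and hostname below are placeholders):

    # on the server, export the dataset in /etc/exports:
    /tank/data  192.168.1.0/24(rw,sync,no_subtree_check)

    # reload the export table
    exportfs -ra

    # on the laptop, mount it via NFSv4:
    mount -t nfs4 zeus:/tank/data /mnt/zeus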

 

Hadn't heard of SSHFS, nice to know, was actually looking for something similar about a year ago :)

Yeah, I don't know how I originally came across it, but it's extremely convenient IMO. The Windows client is still a bit buggy when trying to mount multiple shares at the same time (and, as said, performance is pretty bad), but for sending the occasional file between my Linux machine and my dad's PC it's very nice and hassle-free, as is easily mounting shares between Linux machines without a lot of setup work; see the sketch below.
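A typical mount really is a one-liner (user, host, and paths here are made up):

    # mount a remote directory over SSH; only sshd is needed on the server side
    sshfs user@zeus:/tank/data /mnt/zeus

    # unmount when done
    fusermount -u /mnt/zeus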

EDIT:

I only use AFP locally (Mac clients only :P) and SFTP/SSHFS for remote access

Never worked with Macs before, but when accessing my server from my school I

always go through a VPN connection and mount via SSHFS (and this time for the

encryption, not just the convenience ;) ).


it might well be that it's super-easy if you actually know what you're doing.

Just going to leave this little link here :P

 

 

 

But sharing a Linux directory with a Windows machine and having all the permissions work correctly has always been a little hit-or-miss for me, despite being thorough in reading up on things beforehand.

Yeah, I've never really had particularly complicated permission requirements, but the basic stuff (deciding who may and who may not access the share, for example) is fairly easy.
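For what it's worth, a minimal share definition along those lines looks something like this (share name, path, and user are made up):

    # excerpt from /etc/samba/smb.conf
    [data]
        path = /srv/data
        read only = no
        valid users = alice

    # give the user a Samba password, then restart smbd
    smbpasswd -a alice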


Just going to leave this little link here :P

 

 

Yeah, I've never really had particularly complicated permission requirements, but the basic stuff (deciding who may and who may not access the share, for example) is fairly easy.

It's been a few years since I last used Samba. I'll try it out and see if my luck has improved. Thanks! :)

