Increasing network transfer speed

My main rig has 6 case fans + 1 PSU fan + 2 GPU fans, and the 1 HDD makes more noise than all of them together. Since I bought a cheap server with 8GB of RAM, an i3 6310T and a 120GB Kingston Fury SSD, I decided to put the hard drive in the server and access my games through the network. At first I got 10MB/s, but after I followed these guides I managed to achieve 100MB/s (it still drops to 10MB/s sometimes):

http://www.howtosolutions.net/2013/06/fixing-slow-sending-or-receiving-of-files-through-lan-network-using-windows/

http://www.makeuseof.com/tag/9-quick-ways-to-instantly-speed-up-your-home-pc-network/

I have a really crappy rented modem/router/switch combo from my internet service provider. Unfortunately I can't just trade it for something else, as it's the only equipment that will work with my ISP, but what I can do is add a router in bridge mode or a switch. (Got a photo of this crap below.)

Since both PCs are using Cat6 cables, I thought maybe using a switch like this might speed things up. What do you think? Will it help at all? Right now I have =/=80% of the performance I would have if the HDD was directly connected to the gaming rig. But it doesn't always work, as sometimes the speed drops to 10MB/s, and if possible I would like to achieve 100% performance.

http://www.asus.com/pt/Networking/GXD1081/

 

 

Server specs:

Motherboard ASUS H110M-K D3

CPU i3 6310T

RAM Kingston 2x4GB DDR3L

SSD 120GB Kingston HyperX Fury

HDD 1TB Western Digital Black

 

Gaming rig specs:

Motherboard ASUS ROG Hero 7

RAM 2x8GB Kingston Fury (OC/XMP 1866)

CPU i5 4690K (OC 4.5GHz)

SSD 2x240GB Kingston HyperX Fury (RAID 0)

 

ZonHub.png


Have you done any benchmarking on the server/workstation to make sure none of the disks are dying? Also, you shouldn't have to make so many changes to achieve 100+ MB/s. Even cheap gigabit switches can handle a single gigabit transfer.

 

The server motherboard uses a Realtek NIC, which isn't ideal but should still hit the lower end of a gigabit connection at 90-ish MB/s.

 

What operating systems are the server and desktop running?

 

Also worth noting: different types of file transfer will dip to 10 or even 3MB/s. Say you're transferring a lot of tiny files, or a zip that contains a large number of little files - it is going to go much slower.
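
If you want to see the small-file penalty for yourself, here's a rough Python sketch (mine, not from the thread) that writes the same 100MiB once as a single file and once as 100 small files. The destination path is a placeholder - point it at a folder on your mapped network share:

```python
import os
import time

# Placeholder path -- change this to a folder on your mapped network share.
DEST = r"\\SERVER\games\transfer-test"
PAYLOAD = os.urandom(1024 * 1024)  # 1 MiB of incompressible data

os.makedirs(DEST, exist_ok=True)

# Pass 1: one 100 MiB file -- should approach the link's sequential limit.
start = time.time()
with open(os.path.join(DEST, "big.bin"), "wb") as f:
    for _ in range(100):
        f.write(PAYLOAD)
big_secs = time.time() - start

# Pass 2: 100 files of 1 MiB each -- per-file overhead drags this down.
start = time.time()
for i in range(100):
    with open(os.path.join(DEST, f"small_{i}.bin"), "wb") as f:
        f.write(PAYLOAD)
small_secs = time.time() - start

print(f"1 x 100 MiB:  {100 / big_secs:.1f} MiB/s")
print(f"100 x 1 MiB:  {100 / small_secs:.1f} MiB/s")
```

With truly tiny files (a few KB each) the gap gets far bigger.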


I have been trying to play Fallout 4 on the workstation, but with the files on the server.

The workstation OS is Windows 10 Home 64-bit.

The server is Windows Server 2012 R2 Standard 64-bit.


Also, I've transferred 2.9GB of compressed videos (WinRAR) from one to the other.

Untitled.png


So you're transferring at ~108MB/s, which is around 864Mb/s and about what you can expect from a cheap gigabit switch. Mechanical hard drives have a limit of around 120MB/s, so you're pretty much there. The drop down to around 10MB/s is probably when you are transferring a lot of small stuff.



Well, I did try to transfer from SSD to SSD and the result was the same as from SSD to HDD, but since what I want is SSD to HDD, that doesn't matter now. With that HDD I can achieve =/=150MB/s if directly connected, so I know it isn't bottlenecked just yet; it's close, but I can still get a little bit more performance out of the HDD.


=/= usually means "not equal", but your context makes me think you mean "almost" or "close to"... "~" is what's usually used to mean almost.

 

Single large files will give you the best performance; small files will give terrible performance. The 108MB/s looks good. Going from one hard drive to another locally is not the same as going over the network - there is a LOT more going on than just a file move, which is why a quality NIC, and the CPU, will affect speeds as well.

 


What's a NIC? Is that the network card? If so, I assume the one on the workstation is good enough, but could a PCIe NIC in the server improve things?

Also, will the switch help or not? (From everything I read here, I'm starting to assume it wouldn't.)


How much improvement do you want? Yes, NIC means network interface card. 108MB/s is very good; ~120MB/s is the most possible, and that assumes there is no other traffic on that NIC.


Well, I just plugged that HDD directly into the workstation and tried to transfer the same file, and something weird happened: it gave me 900MB/s for half the transfer and then 150MB/s for the other half.

So my question now (this would be mainly for gaming): is 108 to 120MB/s gonna be enough?

 

To answer your question, I want as much improvement as I can get for less than 100 euros.

 

Both that switch and that NIC are 35 euros.


108 to 120MB/s is more than great for gaming. Gaming usually requires less than 20Mbps - how fast is your internet? I feel like your LAN is already a lot faster than what your internet can do.

 

Windows has a write cache, so sometimes when you transfer files, the initial speed you see is the file going into the cache; as soon as that fills up, the transfer stabilizes at its real speed.

 

For gaming, your internal network speeds are very good.


All right, but I still want to improve it a little bit. You said the NIC on the server could be better, right?

Could the PCIe NIC from Intel, or the switch, help out even if just by a couple of MB/s?


In relation to my internet speed, it's 30Mb/s, so yeah, you're right about that.


To be honest, I'm kind of missing a PCIe slot cover and wanted to buy some card to fill the empty hole.


For 1:1 transfers, I'd only replace the NIC with an Intel NIC. You said it's Cat6 cable, so I would leave that alone - I normally only tell people with 100ft+ runs to consider a shorter run or to upgrade to Cat6.

 

Upgrading the switch normally helps with multiple simultaneous transfers between various devices, and those switches start at around US$80.

 

If you have the option of eBay, you could just grab a gigabit Intel NIC from there too. Make sure the bracket size is correct; a lot of the ones on there are half-height.


As a follow-up: I did buy the NIC, and it made things worse - with it I can't get past 20MB/s.


This thread is just.... Put the HDD back in your desktop. Using a NAS to access files in place of direct access is just silly and will do nothing but add overhead to the process and make it slower than a local drive. For running GAMES, this makes zero sense.


I see a lot of confusion over Mbps and MBps in this thread. Mbps is megabits per second, while MBps is megabytes per second. Please make sure you are using the correct one in your replies, because 108MBps is much, much better than 150Mbps.
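
If it helps, the conversion is just a factor of 8, since 1 byte = 8 bits. A quick Python illustration (mine, just for clarity):

```python
# Mb/s (megabits/s) vs MB/s (megabytes/s): the difference is a factor of 8.
def mbps_to_MBps(megabits_per_sec):
    """Megabits per second -> megabytes per second."""
    return megabits_per_sec / 8.0

def MBps_to_mbps(megabytes_per_sec):
    """Megabytes per second -> megabits per second."""
    return megabytes_per_sec * 8.0

print(MBps_to_mbps(108))  # 864.0  -> 108MB/s is ~864Mb/s, close to gigabit's ceiling
print(mbps_to_MBps(150))  # 18.75  -> 150Mb/s is under 19MB/s, far below a modern HDD
```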

 

Based on all of the posts above, I don't see a problem until you replaced the onboard NIC with a PCIe card. You shouldn't need to do that unless the onboard NIC dies or you need a special feature or more ports. I don't know why it's slower, but remove it and call it a day. 108MB/s is really good for an ISP-provided router/modem and more than you need for playing games off remote storage. If you want faster storage, then install an SSD in your desktop, since SSDs use little power, are silent, and SATA will be faster than 1Gbps all day long.

 

I just ran a bunch of tests and the best I could get on any of my devices was 108MB/s with a dedicated switch and no routing involved. I don't have any servers available at my house right now, so I could only test using consumer hardware and a $20 TP-LINK switch. I might factory-default this Dell PowerConnect 5324 I have lying around to see if this $40 switch does any better, but I doubt you want to run something so loud and power-hungry just for a few more MB/s (you'd be better off doing the sub-$100 10Gbps project Linus made a video about).

 

 

-KuJoe


ashleyashes, it's perfectly plausible - HDDs make a lot of noise and heat, and if I can get 100MB/s it's worth it.

 


zzzz

 

To troubleshoot the issue:

 

-Check that both network cards are connected at 1Gbps/1000Mbps.

-Check that you don't have any third-party software installed that can affect your network - this includes firewalls, antivirus software, traffic monitors, etc...

-If it's still slow, run an iperf transfer to test the network speeds - iperf runs from memory, so it takes the drive speeds out of the equation (see the sketch after this list).

        If that's fast, it's your drive performance on one of the machines.

-If it's slow and you have a crossover cable, directly connect the computers together and run the test again between the two.

       If that's fast, then it's either your router or one of the network cables.

-If it's still slow, then that's when you move on to alternate NICs, etc...
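
If you don't have iperf handy, here's a minimal memory-to-memory TCP throughput test in Python that does the same basic job. The port number and the 1GiB transfer size are my own arbitrary choices; treat this as a rough sketch, not a replacement for a proper iperf run.

```python
# Usage: "python nettest.py server" on one machine,
#        "python nettest.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 5201                    # arbitrary; any free port works
CHUNK = b"\x00" * (64 * 1024)  # 64 KiB send buffer
TOTAL = 1024 ** 3              # send 1 GiB in total

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        received, start = 0, time.time()
        while True:
            data = conn.recv(64 * 1024)
            if not data:       # client closed the connection
                break
            received += len(data)
        secs = time.time() - start
        print(f"{received / secs / 1e6:.1f} MB/s from {addr[0]}")

def client(host):
    with socket.create_connection((host, PORT)) as conn:
        sent = 0
        while sent < TOTAL:
            conn.sendall(CHUNK)
            sent += len(CHUNK)

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

If this reads somewhere around 110-118MB/s, the network itself is fine and the drives are the bottleneck.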


  • 2 weeks later...

If you're married to the idea of running this drive over the network, then you're not going to want to use SMB for it.

 

SMB adds quite a bit of overhead, and unlike locally-attached disks, it can't properly use NCQ to queue up multiple small files/blocks at once - something games do quite a bit, particularly newer games, which often stream textures from disk. This drawback is compounded by the fact that hard drives are inherently bad at random transfer, which is why even a slow SSD is much, much faster than an HDD (and why NVMe SSDs are not appreciably faster in day-to-day use - overall throughput is not the issue). You can get pretty good sustained transfer rates from an HDD via SMB, with some overhead, but random transfer is not going to go so well - think USB 2.0, except full-duplex. Unfortunately for your use case, games do a lot of random transfer. The odds are not in SMB's favour here.
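
To put rough numbers on that, here's a small Python sketch (mine, not the poster's) comparing sequential and random 4KiB reads from a single file. The file path is a placeholder, and OS caching will flatter both numbers, but on a hard drive the gap is usually enormous:

```python
import os
import random
import time

TEST_FILE = r"D:\games\some_large_file.bin"  # placeholder -- any multi-GB file
BLOCK = 4096   # 4 KiB, a typical small-read size for asset streaming
READS = 2000

size = os.path.getsize(TEST_FILE)
with open(TEST_FILE, "rb") as f:
    # Sequential: read blocks back to back from the start of the file.
    start = time.time()
    for _ in range(READS):
        f.read(BLOCK)
    seq_secs = time.time() - start

    # Random: seek to a random offset before every read.
    start = time.time()
    for _ in range(READS):
        f.seek(random.randrange(0, size - BLOCK))
        f.read(BLOCK)
    rnd_secs = time.time() - start

print(f"sequential: {READS * BLOCK / seq_secs / 1e6:.1f} MB/s")
print(f"random:     {READS * BLOCK / rnd_secs / 1e6:.1f} MB/s")
```

Running it against the drive locally and then against the SMB share shows why sequential benchmark numbers don't predict game loading behaviour.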

 

You'll want to use iSCSI instead. If you look around at NAS reviews, you'll find that they often get more consistent speeds using iSCSI, and there are a few reasons for this:

  1. iSCSI is dedicated. It's not a share, it's a connection. You're getting sole access to the device.
  2. iSCSI has less overhead. There are fewer TCP connections, fewer handshakes, and less "chattiness".
  3. iSCSI is a block-level protocol, meaning the filesystem is handled directly by the client, not the server. This eliminates further overhead.
  4. iSCSI supports command queuing, while SMB does not. It's not quite the same thing as the NCQ found on locally-attached disks, but it does improve performance when requesting many files/blocks at once.
  5. iSCSI, being a block-level protocol, allows the client to open files for reading and writing directly as if they were on a local disk. SMB, being a file-level protocol, requires the client to fully download a file before it can be opened for reading/writing, and then fully save it again. As you can imagine, many modern games that store their data in large archive files will seriously slow down over SMB as a result, no matter how fast the connection is.

OK, sounds good. The downside? iSCSI isn't widely supported as a "share" in desktop operating systems. Windows 7 and above can connect to an iSCSI device, but they don't have an easy way to share one. To do that, you'll need to be running Linux or a dedicated NAS distribution like FreeNAS, or else buy Windows Server. iSCSI is very much an enterprise-level protocol.

 

Assuming you've got the requirements for iSCSI met, from there, you want both your client and server running at least a twin-gigabit LAN connection (a newer WD Black drive can nearly saturate 2GbE sequentially, but you're still better off than with 1GbE). For that, you'll need NICs capable of teaming + link aggregation (Intel-based NICs are generally fine, so you'll want to purchase 3 in total, or just purchase 2 2-port cards), and a managed switch with a webUI that can handle trunking (Netgear's ProSafe lineup is relatively inexpensive at around $100 for an 8-port). The exact setup for that will vary depending on which NIC and switch you go with, but you're going to want to make sure that, in the case of twin GbE NICs on the client and server, you trunk two switch ports per machine (2 trunks of 2 ports each), and ensure you plug them into those ports specifically and no other combination.

 

With iSCSI set up and your client and server both running at 2GbE or better, you're looking at the best possible scenario for network storage performance. The unfortunate part about this is that it introduces overhead that can cause an increase in latency, which is another factor and one that, for games at least, you're probably familiar with. There's no fix for that, unfortunately, unless you purchase 10GbE NICs and a 10GbE switch.

TL;DR - This is all a giant waste of time and money for your use case: just buy a new hard drive, a new case, or both, to reduce the noise. The Fractal Define series is excellent for noise isolation and cooling, and if your drive is especially loud, it may be defective, since hard drives are not typically loud.
