
Slow 10Gb network transfer speed with SFP+

Kodey

Hey guys,

 

So I have a FreeNAS server and a main PC, and I'm trying to set up a point-to-point (PTP) SFP+ link for a specific folder. I have an SMB share configured to run through the 10Gb SFP+ NIC on both sides, but when I transfer large files, the transfer slows down quite a bit. When the transfer begins it hits 1 GB/s (so it's maxing out the 10Gb bandwidth), but then it quickly drops and eventually settles around 200 MB/s. The larger the file, the slower the transfer rate.

 

My main NIC (1Gb, 192.168.1.34) is where the NAS gets internet from. The server's SFP+ NIC is 10.10.10.1, and the SFP+ NIC on my PC is configured as 10.10.10.2. I know (am pretty sure?) I'm transferring between the two SFP+ NICs because of the yellow lights that flash on both cards when I start a transfer on that share. Before using the SFP+ link I was maxing out at 112 MB/s, and like I said, I now see 1 GB/s initially before it quickly drops to around 200 MB/s. I've tried switching shares and using 7200RPM 2TB drives and 5400RPM 10TB drives, in both stripe and mirror configurations, so I don't know what I'm doing wrong here OR why I'm not actually getting higher sustained transfer rates.
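In case it helps anyone checking the same thing: one way to be sure SMB is really using the SFP+ link, rather than trusting the blinking lights, is to address the share by its 10Gb IP directly (a minimal sketch; the drive letter and share name are placeholders):

ping -S 10.10.10.2 10.10.10.1      # Windows: force the ping out the SFP+ NIC by source address
net use Z: \\10.10.10.1\yourshare  # map the share via the 10Gb IP so SMB can't fall back to the 1Gb NIC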


Any help is greatly appreciated. I'm just an enthusiast, so this is just for fun, but I can't figure this out. Thanks in advance.


I think either the controller can't keep up or the disks just can't do more than that.

Don't forget 1 GB per second is a LOT. If you want to actually reach those write speeds consistently, you need an SSD cache in RAID 0 on both sides, or NVMe SSDs, because anything else just won't be able to sustain 1 GB per second.
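Rough numbers to put that in perspective (ballpark figures, ignoring SMB/TCP overhead):

10 Gb/s / 8 bits per byte     = 1.25 GB/s of raw bandwidth
one 7200RPM HDD, sequential   ~ 150-200 MB/s sustained
1250 MB/s / ~175 MB/s         ~ 7 HDDs striped just to keep up
one SATA III SSD              ~ 550 MB/s, so 2-3 in RAID 0 get close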

 

That said, the connection itself does work; if it didn't, you'd be stuck around 100 megabytes per second.


Well, it's hitting 1 GB/s, but only for the first few seconds of the transfer. Then it dips to 200 MB/s and frequently below that. In my PC I have an NVMe drive, two HDDs, one SSD, a graphics card, and a 6700K CPU... is it a limit of the PCIe lanes? I'm very lost on why it's so slow. It's not much faster than when I was using just my switch/Ethernet.


1 hour ago, Kodey said:

Well, it's hitting 1 GB/s, but only for the first few seconds of the transfer. Then it dips to 200 MB/s and frequently below that. In my PC I have an NVMe drive, two HDDs, one SSD, a graphics card, and a 6700K CPU... is it a limit of the PCIe lanes? I'm very lost on why it's so slow. It's not much faster than when I was using just my switch/Ethernet.

Just checking: do you have jumbo frames enabled? And hardware offload?
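If you'd rather verify than trust the settings pages, something like this works (a sketch; the adapter and interface names are guesses you'd swap for your own):

# Windows (PowerShell) - list the jumbo-frame setting on the SFP+ adapter
Get-NetAdapterAdvancedProperty -Name "Ethernet 2" | Where-Object DisplayName -Match "Jumbo"

# FreeNAS/FreeBSD - set and confirm a 9000-byte MTU on the Mellanox interface
# (mlxen0 is the usual ConnectX-2 name, but check ifconfig output first)
ifconfig mlxen0 mtu 9000
ifconfig mlxen0 | grep mtu

Both ends need the same MTU, or transfers can stall.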

 

What NIC are you using?


Try running iperf?

 

Since it runs at max speed for a bit, something tells me it's limited by the drives, not the network.

 

What does the disk usage look like while it's copying?

 

Did you try copying between ramdisks?


Jumbo frames are enabled, as well as hardware offload. The NICs I'm using are both Mellanox MNPA19-XTR ConnectX-2 cards.

 

So I tried running iperf. Going from my PC to my FreeNAS I'm getting the full 10Gb; however, when I try to connect from my FreeNAS to my PC, the connection simply times out.

 

I'm using JPerf 2.0.2 on my PC and the shell on FreeNAS. Tell me if I'm doing something wrong; here's what I'm doing:

 

PC to FreeNAS - I open JPerf running as a client, type in the NIC IP on my FreeNAS (10.10.10.1), and click start. On FreeNAS I typed "iperf -sD"; FreeNAS then says "running server as daemon" and prints a process ID. My PC then reports the full 10Gb. I ran this test for 2 minutes straight, maxing out the 10Gb connection the entire time.

 

FreeNAS to PC - on the PC I set JPerf to server mode, listening on port 5001. Then on FreeNAS I open the shell in the GUI and type "iperf -c 10.10.10.2", which is my PC's NIC IP address. The connection then times out.

 

Where have I gone wrong here? Any suggestions are welcome; thanks in advance for the help!
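For reference, here are both tests as plain commands (IPs as above), plus a guess at the usual culprit: Windows Firewall blocks inbound connections by default, so the PC-side iperf server may never see the FreeNAS client at all. The netsh rule below is a hedged suggestion, not a confirmed fix:

# Test 1: PC -> FreeNAS (works)
iperf -s -D           # FreeNAS shell: run the server as a daemon
# JPerf on the PC: client mode, target 10.10.10.1, default port 5001

# Test 2: FreeNAS -> PC (times out)
# JPerf on the PC: server mode, listening on port 5001
iperf -c 10.10.10.2   # FreeNAS shell: connect to the PC

# Allow inbound port 5001 through Windows Firewall (elevated prompt; rule name is arbitrary)
netsh advfirewall firewall add rule name="iperf" dir=in action=allow protocol=TCP localport=5001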


How many HDDs do you have in the NAS, and in what configuration?

I almost never see a sustained 1GB/sec transfer rate to my NAS. It will hover in the several-hundred-MB/sec range, though. The HDDs just can't keep up. If iperf from your system to the NAS is reporting 10Gb/sec throughput, you're set up properly. Now that doesn't mean there aren't improvements you can make.

 

That'll depend in part on how the HDDs in the NAS are connected. Do you have them plugged straight into the mainboard, or through an HBA controller card? If the former, consider changing to the latter. My HDDs are connected through a SAS controller card with a SAS expander board to allow for more than 8 HDDs. Integrated controllers tend not to be the greatest, and you'll see some performance improvement going with a SAS HBA card.

 

But if you're expecting 10Gb throughput to and from only a few HDDs, you won't achieve it, even with a SAS card. The HDDs just aren't capable of that unless you're spreading the reads and writes across enough of them.


I have them plugged directly into the motherboard. Which SAS controller do you recommend? The SMB share I'm writing to is on 2 drives in RAID 1. I created another pool with a single 7200RPM HDD and attempted writing to that as well, and can only get around 200 MB/s.

 

The total number of drives in the NAS is 6, although I'm not using 4 of them at the moment. I have another 8TB drive and three 2TB 7200RPM drives not in use. The two I'm actually using are 5400RPM 8TB drives. I'll try putting in an SSD, using it as an SMB share, and seeing if that improves things; I think that'll narrow it down to the drives being the bottleneck, short of trying a SAS controller. Also, I'm only running 16GB of RAM; could that affect it?


8 hours ago, Kodey said:

I have them plugged directly into the motherboard. Which SAS controller do you recommend? The SMB share I'm writing to is on 2 drives in RAID 1. I created another pool with a single 7200RPM HDD and attempted writing to that as well, and can only get around 200 MB/s.

 

The total number of drives in the NAS is 6, although I'm not using 4 of them at the moment. I have another 8TB drive and three 2TB 7200RPM drives not in use. The two I'm actually using are 5400RPM 8TB drives. I'll try putting in an SSD, using it as an SMB share, and seeing if that improves things; I think that'll narrow it down to the drives being the bottleneck, short of trying a SAS controller. Also, I'm only running 16GB of RAM; could that affect it?

 

RAID 1 mirrors every write to both drives, which will slow down the transfers. You are probably seeing caching for a few seconds, then the actual disk performance.

 

10 GB/s requires 40-50 traditional disk spindles but only a few NVMe PCIe drives. You will have a hard time maxing out 10 GB/s on a SAS controller inside a traditional server.

 

If this is for testing, get a motherboard that supports NVMe PCIe RAID and just test using that.

Or get a couple of these, as many as your motherboard supports:

https://www.amazon.com/Adapter-advanced-solution-Controller-Expansion/dp/B07JKH5VTL/ref=pd_cp_107_1?pd_rd_w=nx63h&pf_rd_p=ef4dc990-a9ca-4945-ae0b-f8d549198ed6&pf_rd_r=5D9MM0RC4J32KEQZPEEZ&pd_rd_r=3a7123bb-1903-4079-97a6-42fb8397b2d8&pd_rd_wg=w66d0&pd_rd_i=B07JKH5VTL&psc=1&refRID=5D9MM0RC4J32KEQZPEEZ

 

That's the only way you're going to saturate 10 GB/s.


2 hours ago, tech.guru said:

10 GB/s requires 40-50 traditional disk spindles but only a few NVMe PCIe drives.

It isn't 10 gigabytes per second, but 10 gigabits per second, which is only about 1 gigabyte per second of throughput. You don't need 40 to 50 HDDs to get that, but you do need quite a few. Two SATA III SSDs in RAID 0 will saturate a 10Gb/s connection on reads or writes as well.

 

11 hours ago, Kodey said:

The SMB share I'm writing to is on 2 drives in RAID 1. I created another pool with a single 7200RPM HDD and attempted writing to that as well, and can only get around 200 MB/s.

Right, because HDDs individually can't transfer faster than that in typical throughput. And in RAID 1, the write speed is never going to be more than what you'd see with a single drive. Read speeds will be faster, since reads can be spanned across the two drives, but even that won't saturate a 10Gb connection.
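In rough numbers for a two-drive mirror of 5400RPM 8TB disks (ballpark figures):

one 5400RPM 8TB HDD, sequential write   ~ 180 MB/s
RAID 1 write = slowest single drive     ~ 180 MB/s  (both disks write every block)
RAID 1 read  = up to ~2x a single drive ~ 360 MB/s  (reads can alternate between disks)
10Gb link    = 1250 MB/s                -> out of reach in either direction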

 

11 hours ago, Kodey said:

Which SAS controller do you recommend?

The one I use is the IBM M1015, which is an LSI rebrand, reflashed as an HBA card ("IT mode"). You can find instructions on how to do that online. The card can be found for cheap on eBay.


2 hours ago, brandishwar said:

Right, because HDDs individually can't transfer faster than that in typical throughput. And in RAID 1, the write speed is never going to be more than what you'd see with a single drive. Read speeds will be faster, since reads can be spanned across the two drives, but even that won't saturate a 10Gb connection.

In the testing I've been doing, I'm finding this to be accurate. Transferring FROM the server is around 10-15% faster than writing to it.

 

While I don't expect to saturate the entire 10Gb PTP connection, I would like to be getting more than just 200 MB/s transfer speeds. I went from 113 MB/s to 200 MB/s, and sometimes lower, around 180 MB/s. I was expecting around 400 MB/s. Is 200 MB/s generally the maximum write speed for most HDDs?

 

Also, will adding an SSD cache to the pool increase the speed?


45 minutes ago, Kodey said:

Is 200 MB/s generally the maximum write speed for most HDDs?

For the 8TB drives you're using, that's about the limit. Most will probably be down around 150MB/s sustained, with the random-access seek penalty dropping it down to 100MB/s or lower for most use cases.

48 minutes ago, Kodey said:

Also, will adding an SSD cache to the pool increase the speed?

It'll kick the can down the road as far as when the transfer speed drops off, depending on what you're copying over. Whether you saturate the connection will depend on whether you use an NVMe SSD or a SATA SSD. Most NVMe drives, even the less expensive ones, should be able to saturate the 10Gb connection on writes without breaking a sweat; in which case, the read speed from the source to the NAS will be the limiting factor. If you use a SATA SSD as the cache, it'll max out at about 550-600MB/sec simply due to the interface.

 

On reads, however, that will depend A LOT on how you're using the NAS. If you're continually accessing the same files over and over again, within the storage limit of the cache, it'll help a lot; for example, if you're editing pictures or video you just copied over, since those should still be sitting in the cache. I've considered adding a cache to my NAS for that since I'm a photography hobbyist (or hobbyist photographer, if you will). But since I have 4TB of storage space on my desktop, and I copy the photos from the card both to the NAS and my desktop before I start editing (from the copy on the desktop), I ultimately don't really need it.
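On FreeNAS that read cache is an L2ARC device. A minimal sketch of adding one from the shell, where the pool name "tank" and device "ada3" are placeholders (the Storage section of the GUI does the same thing more safely):

zpool add tank cache ada3   # attach the SSD as an L2ARC (read cache) device
zpool status tank           # the SSD should now show up under a "cache" heading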

 

If you're almost never reusing the same files, it won't make any significant difference to read speeds from the NAS. But whether that actually matters will ultimately depend on what you're storing there, since there really isn't much content where that transfer speed ultimately matters.

 

At the same time, if you're going to add a cache to the NAS, make sure it's connected to a UPS as there is a delay between something being written to the cache and the cache being flushed to the drives. And if the power glitches or you suffer an outage, you could risk losing data. Then again, even without a cache, if your NAS isn't currently on a UPS, I'd say to invest in one before deciding on adding an SSD cache.


@brandishwar Thanks for all the help. I'm slowly realizing that 10Gb transfer speeds are a myth and that I wasted $150 trying to achieve them. Instead I got 200 MB/s (sometimes) vs my previous 113 MB/s.


4 hours ago, Kodey said:

@brandishwar Thanks for all the help. I'm slowly realizing that 10Gb transfer speeds are a myth and that I wasted $150 trying to achieve them. Instead I got 200 MB/s (sometimes) vs my previous 113 MB/s.

Dude, 10Gbps is not a myth if your storage can actually deliver that speed. Here is an example of transferring a big file from NAS to PC over a 10Gbps network with an Intel X520-2 and a MikroTik CRS328, via an SMB share on a bare-metal Synology (Xpenology) with 4x2TB WD Red drives plus a 2x250GB Samsung 960 EVO SSD cache tier, to a PC with an NVMe Samsung 970 EVO:

 

[screenshot: NAS-to-PC transfer speed]

And here is a transfer from PC to NAS:

[screenshot: PC-to-NAS transfer speed]

It goes up to 670 MB/s while writes land in the SSD cache tier, and after that it stabilizes at around 300 MB/s.


9 hours ago, Kodey said:

@brandishwar Thanks for all the help. I'm slowly realizing that 10Gb transfer speeds are a myth and that I wasted $150 trying to achieve them. Instead I got 200 MB/s (sometimes) vs my previous 113 MB/s.

You haven't wasted anything. You've at least gained the ability to saturate the HDDs on your NAS, whereas the Gigabit connection was holding you back. That's the single main reason to go 10GbE. And if you made the leap and bought a 10GbE switch (though I doubt that, since you said you've spent only $150), everything on your network will be able to talk without running into Gigabit caps all over the place.

 

I have a Gigabit Internet connection, so having a 10GbE backbone to my home network means I can do whatever I want on the home network and not throttle whatever my wife is doing online. We've actually tested that - she downloaded a game from Steam while I copied several large video files from the NAS, and neither of us was throttled. For me the movies copied at about 300MB/sec, but again, that's because you can't saturate 10Gb with only a few HDDs. And my wife's game download nearly saturated our Internet connection. I call that a WIN, even if I'm not getting 1 gigabyte per second to or from the NAS.

 

And I could probably copy video files from my HDD storage on my desktop to the NAS while downloading a game from Steam and see a combined throughput of several hundred megabytes per second. I haven't tried that, but I'm sure I could do that without issue.

 

And that, again, is the main reason to go 10Gb. Even with just HDD storage, the fact you're not capped at gigabit speeds means you're still coming out ahead, even if you're not saturating the network connection to your NAS. Now you just need to figure out how to expand your NAS storage.


You're right for sure! Thanks a lot for the reply; I'm definitely happy to hear that. Relieving the bottleneck on my home network is an outlook I like a lot. So I hooked up 2 of my 7200RPM drives with a 480GB SSD cache and was getting almost 400 MB/s transfer speeds moving 200GB folders around. But I had the two HDDs in RAID 0... so not the safest way to store data. I could at least move my Plex library over to the faster pool, I guess!

 

I have concluded, though, that my drives and their RAID 1 ("mirror") configuration are the single limiting factor on my 10GbE PTP connection.
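For anyone weighing the same trade-off, here's the difference in ZFS terms (a sketch with placeholder pool and disk names; the FreeNAS GUI is the supported way to build pools):

zpool create fastpool ada1 ada2          # striped pair ("RAID 0"): both drives' write speed, no redundancy
zpool create safepool mirror ada1 ada2   # mirrored pair ("RAID 1"): one drive's write speed, survives a drive failure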

 

PS - I was swapping my SSD into my NAS and accidentally bent the SATA cable connected to my 10TB HDD. It broke the plastic off the pins on the actual hard drive, and now just the bare pins are left. FML. She still works if I use the same SATA cable. How long do you think she'll last, lol?


11 hours ago, Kodey said:

I was swapping my SSD into my NAS and accidentally bent the SATA cable connected to my 10TB HDD. It broke the plastic off the pins on the actual hard drive, and now just the bare pins are left. FML. She still works if I use the same SATA cable. How long do you think she'll last, lol?

Keep an eye on the logs in the NAS software in case it starts reporting errors about that HDD, and replace the HDD and cable as soon as possible. Post a picture as well so I can see what you're talking about, as your description is a little unclear.

