
10Gb network, NAS drive bottleneck

orb101

Hi all.

I currently have two Synology NAS units, both with 8x 8TB Seagate ST8000NE0004 IronWolf Pro drives running RAID 6.

One is on-site, and the other is off-site running Hyper Backup.

I bought and installed 10Gb cards in my workstations, plus the 10Gb network card for my on-site NAS. Everything's wired up, and transfer speeds between machines are good.

However, transfer speeds between workstations and the on-site NAS remain slow, somewhere in the 30-50MB/s range.

This is probably down to my RAID setup and drive speed, so I have a few questions.

1: For a NAS of this size, what's the optimal drive type to buy to increase performance?

2: If I swap out all the drives on-site with better ones, do I need to do the same with my off-site or will that backup run fine with different drives?

3: If I change my RAID config on-site, say to RAID 0, can I restore data from the RAID 6 off-site?

Apologies for the noob queries.


 


6 minutes ago, orb101 said:

1: For a NAS of this size, what's the optimal drive type to buy to increase performance?

I very much doubt the issue is with your drives. I can get better speeds with decade-old, craptastic HDDs in a RAID6.

 

8 minutes ago, orb101 said:

somewhere in the 30-50MB/s range

Are you talking about read or write? Read speeds from the NAS should be a lot better than write speeds. Have you tried any other file-transfer protocols to see if they perform better? Also, have you checked the NAS's CPU usage during transfer?



Speeds this low are absolutely not caused by the drives; even a single drive should easily be able to push higher speeds.

 

It could be a configuration thing or a bad cable. Have you checked what the Synology says regarding the network? Does it show that it detected the full 10Gbit link?


20 minutes ago, WereCatf said:

Are you talking about read or write?

Read and write average 25-45 MB/s, tested with multiple file types. On the test machines' internal drives, moving the same files shows 1.5 GB/s+.

 

 

21 minutes ago, WereCatf said:

Have you tried any other file-transfer protocols

Not something I have tried or know about yet.

 

 

3 minutes ago, Pixel5 said:

have you checked what the Synology says regarding the network

Synology shows 10000 Mbps, full duplex, MTU 1500 under the network tab. Cables are all Cat6a, bought for this upgrade; none of them run farther than 5 metres.


5 minutes ago, orb101 said:

Read and write average 25-45 MB/s, tested with multiple file types. On the test machines' internal drives, moving the same files shows 1.5 GB/s+.

 

 

Not something I have tried or know about yet.

 

 

Synology shows 10000 Mbps, full duplex, MTU 1500 under the network tab. Cables are all Cat6a, bought for this upgrade; none of them run farther than 5 metres.

Are you running multiple point-to-point connections, or do you also have a 10Gbit switch?

Try setting the MTU to 9000 on all your interfaces (NAS and workstations) first, although that shouldn't make the difference between 50 MB/s and 1000 MB/s.
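
If you want to sanity-check that jumbo frames actually make it end to end after the change, here's a minimal sketch, assuming Windows or Linux workstations, that the NAS answers ICMP, and a placeholder address of 192.168.1.10 for it (swap in your own). It pings with the don't-fragment flag and a payload just under 9000 bytes; if anything in the path is still at MTU 1500, the pings fail.

import platform
import subprocess
import sys

# Placeholder address -- replace with your NAS or workstation IP.
NAS_IP = "192.168.1.10"

# 9000-byte MTU minus 20-byte IPv4 header and 8-byte ICMP header.
PAYLOAD = 9000 - 28

def jumbo_ping(host: str, payload: int) -> bool:
    """Ping with the don't-fragment flag set; fails if any device in the
    path is still limited to a 1500-byte MTU."""
    if platform.system() == "Windows":
        cmd = ["ping", "-n", "3", "-f", "-l", str(payload), host]
    else:  # Linux-style ping
        cmd = ["ping", "-c", "3", "-M", "do", "-s", str(payload), host]
    return subprocess.run(cmd).returncode == 0

if __name__ == "__main__":
    ok = jumbo_ping(NAS_IP, PAYLOAD)
    print("Jumbo frames pass end to end." if ok
          else "Fragmentation needed -- something in the path is still at MTU 1500.")
    sys.exit(0 if ok else 1)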



1 minute ago, FloRolf said:

or do you also have a 10gbit switch? 

Everything is running through one of these - NETGEAR 16-Port 10G Ethernet Smart Managed Pro Switch (XS716T)

 

 

7 minutes ago, FloRolf said:

Try setting MTU to 9000 in all your interfaces

Set MTU to 9000 on both ends, no noticeable difference.


8 minutes ago, orb101 said:

Everything is running through one of these - NETGEAR 16-Port 10G Ethernet Smart Managed Pro Switch (XS716T)

 

 

Set MTU to 9000 on both ends, no noticeable difference.

You have to set the MTU on the switch as well in that case.

 

Can you run iperf on the NAS and the workstation to see if you can at least reach the theoretical throughput of 10Gbit?
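
In case it helps, here's a rough sketch of how I'd script that check, assuming iperf3 is installed on both ends, that iperf3 -s is already running on the NAS, and using a placeholder address of 192.168.1.10 for it (the name and IP are only examples):

import json
import subprocess

# Placeholder address -- replace with the NAS (or other workstation) IP.
SERVER = "192.168.1.10"   # start iperf3 -s on that machine first

def iperf_gbit(server: str, seconds: int = 10) -> float:
    """Run an iperf3 client test and return the received throughput in Gbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

if __name__ == "__main__":
    print(f"Throughput to {SERVER}: {iperf_gbit(SERVER):.2f} Gbit/s")

On a clean 10Gbit path you'd expect somewhere around 9+ Gbit/s; numbers closer to 1 or 2.5 Gbit/s usually point at a link that negotiated below 10Gbit, a bad cable, or a CPU/driver limit rather than the disks.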


What kind of files are you testing with, and how big are they?

If you test by moving many small files, that could be the reason for the slow transfer speed.
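
To take small-file overhead out of the equation, one quick way is to generate a single big test file and copy just that. A minimal sketch, assuming Python is available on a workstation; the output path and size are made up, so adjust them to your setup:

import os

# Hypothetical output path and size -- adjust to your setup.
OUT_PATH = "testfile.bin"
SIZE_GIB = 20
CHUNK = os.urandom(1024 * 1024)  # 1 MiB of random data, reused for speed

with open(OUT_PATH, "wb") as f:
    for _ in range(SIZE_GIB * 1024):  # SIZE_GIB * 1024 chunks of 1 MiB
        f.write(CHUNK)

print(f"Wrote {SIZE_GIB} GiB to {OUT_PATH}")

Copying that one file to the NAS gives a much cleaner sequential number than a folder full of small files.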


If the speeds between PCs are good (900+ MB/s) and only the speed to/from the NAS is terrible, then the problem lies with the NAS. Are you sure it is actually using the 10Gbit connection to transfer the files? If you still have the Gbit connection active, pull the cable from it.

Since it is supposed to be a 10Gbit connection, 2GB files are too small; they would be transferred in about two seconds. Get bigger files (at least 10 to 20GB). Transferring files from my workstation to my server sometimes also starts slow (still ~150 MB/s, though) and quickly speeds up; it only reaches 900+ MB/s when it's halfway done.

If you are using something like the Windows file-transfer window to judge the speed, open Task Manager and check the real speed there. The transfer window only shows the average speed.

As said above, the MTU on the managed switch should be changed as well. However, since the PCs can already transfer at higher speeds, I doubt it will make much of a difference.



@Helly Speeds between the PCs are also very slow. I'm getting the same speed no matter where I send the files.

I've tried getting into my switch to set the MTU as @FloRolf suggested, but I've had no luck. Limited time this week, and the box is refusing to connect / I can't see it from the NETGEAR ProSafe utility.

I reset it to factory defaults, but no change.


1 hour ago, orb101 said:

@Helly Speeds between the PCs are also very slow. I'm getting the same speed no matter where I send the files.

I've tried getting into my switch to set the MTU as @FloRolf suggested, but I've had no luck. Limited time this week, and the box is refusing to connect / I can't see it from the NETGEAR ProSafe utility.

I reset it to factory defaults, but no change.

Have you done the between-PC testing with files or with iperf?

You should definitely try iperf to see where your bottleneck is. 


As above, you really need to use iperf to test pure network bandwidth, ruling out other areas of the system. Most of the time it's a NIC driver issue, or the storage just isn't fast enough.


Try RAID 10 (1+0); it could be that the controller cannot keep up with the parity calculations for RAID 6. RAID 6 and RAID 5 can have slower writes due to the need to calculate parity information for blocks of data.

 

Yes, RAID 10 will give you less usable capacity, but the performance will be much, much better.
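
For a rough sense of the trade-off with an 8x 8TB set, here's a back-of-the-envelope sketch using only the textbook capacity rules and the commonly quoted random-write penalties (6 I/Os per logical write for RAID 6, 2 for RAID 10); real-world throughput depends heavily on the controller, cache, and workload:

# Back-of-the-envelope comparison for an 8 x 8 TB array.
# Textbook rules only; actual performance depends on the controller and cache.

DRIVES = 8
DRIVE_TB = 8

layouts = {
    # name: (usable TB, random-write penalty, guaranteed failures tolerated)
    "RAID 6":  ((DRIVES - 2) * DRIVE_TB, 6, 2),
    "RAID 10": ((DRIVES // 2) * DRIVE_TB, 2, 1),  # can survive more if failures land in different mirrors
}

for name, (usable_tb, penalty, failures) in layouts.items():
    print(f"{name}: {usable_tb} TB usable, {penalty}x write penalty, "
          f"tolerates {failures} drive failure(s) guaranteed")

# RAID 6:  48 TB usable, 6x write penalty, tolerates 2 drive failure(s) guaranteed
# RAID 10: 32 TB usable, 2x write penalty, tolerates 1 drive failure(s) guaranteed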


  • 2 weeks later...

Happy New Year, all.

I ran an iperf test between two workstations. I get average speeds of 933 Mbit/s.


I got iperf installed on the Synology RackStation, extracted it, and ran it over SSH.

Transfer speed between a workstation and the Synology: 2.11 Gbit/s.


Tried the workstation-to-workstation test again and noticed I had a bad cable attached.

iperf now shows 2.52 Gbit/s between workstations.


Hi, after reading all of this: does that 2.52 Gbit/s mean you configured your disks as RAID 10 (1+0), or is it still RAID 6?


@SrSuri The NAS is still RAID 6

iperf says the available bandwidth between a workstation and the NAS is 2.11 Gbit/s.

A Windows transfer between workstation and NAS runs at ~50 MB/s.


Are you using encryption on the RAID or on the file transfers, and have you checked what the NAS says about CPU usage during a transfer? Encryption needs a lot of CPU power and, if used, may cause slower transfer speeds, so checking CPU usage might be worth it. Even if the CPU has hardware support for encryption, it still affects transfer speeds.


On 1/7/2020 at 6:32 PM, Sugadaddy said:

What type of cables are you using?

Cat 6a

 

 

On 1/7/2020 at 7:17 PM, Rusted said:

Are you using encryption on the RAID or on the file transfers, and have you checked what the NAS says about CPU usage during a transfer? Encryption needs a lot of CPU power and, if used, may cause slower transfer speeds, so checking CPU usage might be worth it. Even if the CPU has hardware support for encryption, it still affects transfer speeds.

No encryption


On 1/7/2020 at 7:17 PM, Rusted said:

Are you using encryption on the RAID or on the file transfers, and have you checked what the NAS says about CPU usage during a transfer? Encryption needs a lot of CPU power and, if used, may cause slower transfer speeds, so checking CPU usage might be worth it. Even if the CPU has hardware support for encryption, it still affects transfer speeds.

34% CPU during a Windows transfer.

