
10GB NIC so slow vs 1GB NIC (still slow but better)

So, I kind of created a topic about this a while back and had some excellent responses, which I appreciated and tried to implement. But now I have results comparing the two NICs that make it worth bringing up again.

 

I have a NAS I bought a few months ago.

 

Spec:

I put a 10GbE NIC in that NAS.

I also put a 10GbE NIC in my desktop computer and wired the two directly together (desktop OS: WS2008 R2 Enterprise, or Win7 for you non-server folks!).

The desktop also has a 1GbE NIC wired into my router; the NAS is connected to the same router via its integrated 1GbE NIC.

 

Situation:

I am running (or was) weekly full backups of my system (OS and data, each on separate SSDs; the data is RAID0) to the NAS.

 

I would think doing a backup over the 10GbE connection would be super fast. Well, no. It ran at around 1 MB/s.

Doing a backup over the 1GbE NIC turned out to be faster (rates of around 70 MB/s, sometimes dropping to around 30 MB/s after a few minutes).

 

Just to give you an idea: after 1 minute of "warmup" during a transfer, I took the stats. Transferring an 11.3GB folder (5 files in total) from the desktop to the NAS gives the following:

 

10GbE NIC:

[screenshot: transfer stats over the 10GbE link]

 

 

1GbE NIC:

[screenshot: transfer stats over the 1GbE link]

 

 

WTH!?

 

Can someone explain what the heck is going on?

And above all - how do I get the 10GbE NIC doing a 10GbE NIC job?

 

Thank you for any insights!


What are your PC's specs? What sort of NAS are we talking about?

 

There are many things that can cause this, including limited PCI-E lanes, drive timeout issues, bad BIOS configuration, bad cable, drivers not installed or wrong drivers installed, NIC incompatible with the PC/NAS, link negotiation issues, et cetera.

Ryzen 5 2600 3.9Ghz all cores 1.175V | MSI X470 Gaming Pro | 16GB ADATA Gammix D10 @ 3000C16 | Sapphire RX 5700 XT Pulse | Samsung 970 EVO Plus 250GB & 2x Samsung 860 EVO 500GB | Super Flower Leadex II 650W | Phanteks P350X

Asus VG245HE 24" 1080p 75hz | Logitech X-540 5.1 | Logitech G710+ MX Brown | Logitech G502 Hero | Logitech G440


If you can, try iperf3 to test the actual bandwidth you can achieve.

Also check the negotiated speed on the interface.

Do you have a proper cable that supports 10G (Cat 6a)? What's the length?
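For reference, a minimal iperf3 run looks something like this, assuming you can get iperf3 onto both ends (on the NAS side via SSH, for instance); the address below is just a placeholder for the NAS's 10GbE IP:

    iperf3 -s                            # server end, e.g. on the NAS
    iperf3 -c <nas_10g_ip> -t 30         # client end on the desktop, 30-second test
    iperf3 -c <nas_10g_ip> -t 30 -P 4    # same test with 4 parallel streams

That takes the disks and SMB out of the equation and tests the raw TCP path.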


1 minute ago, Nick7 said:

Do you have a proper cable that supports 10G (Cat 6a)? What's the length?

Cat 5e can also do 10Gbps over short distances - I believe up to 45 meters. I tested it myself with a 2-meter cable and saw speeds way over gigabit.

Ryzen 5 2600 3.9Ghz all cores 1.175V | MSI X470 Gaming Pro | 16GB ADATA Gammix D10 @ 3000C16 | Sapphire RX 5700 XT Pulse | Samsung 970 EVO Plus 250GB & 2x Samsung 860 EVO 500GB | Super Flower Leadex II 650W | Phanteks P350X

Asus VG245HE 24" 1080p 75hz | Logitech X-540 5.1 | Logitech G710+ MX Brown | Logitech G502 Hero | Logitech G440


Thanks!

 

So it is a Cat 6a cable; I even tried 5e with no difference. Cables are 3-5 metres in length.

I'm unable to use iperf - it does not seem to work for me.

 

PC specs are great: it's an Intel Core i7 Extreme 990X @ 3.47GHz (I built this myself about 5 years ago) on an Asus mobo (P6X58D).

The 10GbE NIC on the desktop is an Asus XG-C100C (PCIe).

 

The NAS I have is a Synology DS1819+: https://www.amazon.com/Synology-Bay-Diskstation-Diskless-DS1819/dp/B07KMKDW42/ref=sr_1_3?keywords=Synology+8+Bay+NAS+Diskstation+(Diskless)+(DS1819%2B)&qid=1580901237&s=electronics&sr=1-3

 

 

 

 


6 hours ago, AhmedIlyas said:


A few questions:

 

1) For the 10GbE NIC on your NAS, is that configured with a static IP or is that configured using DHCP?

 

2) For the 10GbE NIC on your computer, is that configured with a static IP or is that also configured using DHCP?

 

3) When you, say, copy a file, to test the 10 GbE interface, do you disconnect the 1 GbE cable so that it will force the data through the 10 GbE interface (or disable the 1 GbE interface in your network adapter settings)?

 

4) Are the latest drivers for your 10GbE NIC installed in Windows Server 2008R2?

5) Are you able to confirm that it has auto-negotiated to the 10 GbE speed it should be running at?

 

6) How many drives do you have in your Synology NAS/pool/array?

 

7) How full is your NAS?

 

I'm asking these questions because if you have both cables plugged in at the same time, Windows might not know how to send the data unless the NICs have a /24 (255.255.255.0) subnet mask and you put the 10GbE NIC on a different subnet. (Otherwise, if the 1 GbE NIC and the 10 GbE NIC are on the same system, on the same network, with the same subnet mask, Windows isn't necessarily intelligent enough to pick the fastest NIC first; it might just use the first NIC that it finds.)

 

My higher-speed networks have a different Class A IPv4 address range than my "normal" network, and the various high-speed networks (10 GbE vs. 100 Gbps IB) are also put onto different Class C IPv4 subnets, so that the subnets logically separate the traffic and push data over the intended NIC. The physical layers differ as well (my 100 Gbps IB uses QSFP28 DAC and QSFP28 fiber optic cables, whereas my 10 GbE uses SFP+ fiber).

 

If you're able to confirm that you've done everything you can to force the data onto the 10 GbE NIC and it's still slow, then it points to a potential configuration issue, most likely on the NAS.
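A quick way to check where the packets are actually going, from the desktop (the addresses below are placeholders for your own):

    route print                                    # which interface owns the route to the NAS's 10GbE subnet?
    ping -S <desktop_10g_ip> -n 10 <nas_10g_ip>    # pin the ping to the 10GbE source address

If the pinned ping fails while a normal ping to the NAS works, your traffic has been taking the 1 GbE path.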

IB >>> ETH


Thanks @alpha754293

 

Answers to your burning questions! :

 

1) For the 10GbE NIC on your NAS, is that configured with a static IP or is that configured using DHCP?

A: Static

 

2) For the 10GbE NIC on your computer, is that configured with a static IP or is that also configured using DHCP?

A: Static

 

3) When you, say, copy a file, to test the 10 GbE interface, do you disconnect the 1 GbE cable so that it will force the data through the 10 GbE interface (or disable the 1 GbE interface in your network adapter settings)?

A: I tried both; same result. Both plugged in at the same time, and also just the 10GbE plugged in directly to the NAS, with the same switcharoo on the NAS side, i.e. direct only without plugging into the router, and also through the router while having the 10GbE plugged in.

 

 

4) Are the latest drivers for your 10GbE NIC installed in Windows Server 2008R2?
A: Yup. The only one I can find is the one supplied on CD. I went to the website too - same version of drivers.


5) Are you able to confirm that it has been able to auto-negotiate the speed at the 10 GbE speeds that it should be running at?

A: Yes, auto-negotiated (from what I can see). I even set Large Send Offload V2 to Disabled, which helped a little at first but no longer does.

 

6) How many drives do you have in your Synology NAS/pool/array?

A: 6 drives

 

7) How full is your NAS?

A: 31.6TB free of a total of 36.6TB - so, pretty empty.

 

 

The NICs on the desktop have the following IP info:

 

10GbE: 10.168.1.4, subnet mask 255.0.0.0

1GbE: 20.168.1.2, subnet mask 255.255.255.0


1 hour ago, AhmedIlyas said:


I assume, then, that the 10 GbE connection on your NAS also has a Class A IPv4 address starting with 10?

 

You mentioned that you have a switch in between your NAS and your desktop, if I read your post correctly.

 

Would you mind sharing the model number of that switch so that I can look up the User's Guide for it?

 

Does the switch report 10 GbE connectivity?

 

Are you using jumbo frames?

I've also read that setting the MTU manually to 9216 can sometimes help.
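One end-to-end check that jumbo frames are actually in effect is a don't-fragment ping with an 8972-byte payload (a 9000-byte MTU minus 28 bytes of IP/ICMP headers; the address is a placeholder):

    ping -f -l 8972 <nas_10g_ip>

If Windows reports "Packet needs to be fragmented but DF set", jumbo frames aren't active along the whole path.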

Is your NAS running a virtual switch? Again, I am just checking to make sure that I understand your reply correctly. The direct connection - does that have a switch in between, or are you running something like a crossover cable? (Or do Cat 6a cables not need to be crossover anymore? Might sound like a stupid question, so my bad, but my 10 GbE stuff right now is all SFP+ fiber.)

 

I'm somewhat surprised that iPerf didn't work for you.

 

My two NAS units each have dual SFP+ 10 GbE ports, and I have a short cable that directly connects them, so I can make transfers between the NAS units at 10 GbE link speeds (although the eight hard drives in each, as part of the RAID5 array, can't really do much more than 150 MB/s, which apparently is a limitation of the 10 GbE NIC chip in the Qnap units).

 

Apparently, not all 10 GbE NIC chips are made the same.

IB >>> ETH


Thank you,

 

I have a Netgear Nighthawk R9000 router (it only supports 1GbE, not 10GbE).

I also tried setting the MTU, but that didn't make a difference, along with using jumbo frames. The jumbo frames worked for a very, very short period of time, then... back to square one!

 

The NAS is not running a virtual switch. I used pretty much all the defaults in the Synology setup. The cable (Cat 6a) is a direct connection between the desktop's 10GbE NIC and the 10GbE NIC on the NAS (I installed that NIC as well - the "approved"/"compatible" one, which is also Synology-branded).


20 hours ago, AhmedIlyas said:

Okay...so I'm going to ask even dumber questions now -

 

What's the make and model of your 10 GbE NIC?

 

And what is the make and model of your hard drives?

 

Is the array in some kind of a RAID? If so, what's the RAID level? (RAID0? RAID5? etc.)

 

Is the array thin provisioned?


Can you confirm the number of drives? Are all of the drives the same capacity or are they different capacities?

 

I'm just trying to get some more basic information about your set up and how you have configured your stuff so that I can try and see if we can figure this thing out together.

 

(And to help speed things up a little bit, so that I don't have to look up the specs of the drives myself: if you have multiple makes and models of drives, please state their advertised capacity, their rotational speed (if known and/or explicitly specified by the manufacturer), and whether they're SATA 6 Gbps (a.k.a. SATA 3) drives or SATA 3 Gbps (a.k.a. SATA 2) drives.)

 

Also, what I am eventually going to ask you to do is run a bunch of "simpler" connectivity tests between your desktop and your NAS, so be prepared for that. One of them is going to be taking the biggest file that you want to copy over, and timing it or reporting what Windows says the transfer rate is.
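When we get to that point, a simple way to time it and work out the rate is PowerShell (2008 R2 ships with PowerShell 2.0); the file path, NAS address, and share name below are placeholders:

    $file = Get-Item "D:\backups\bigfile.bin"
    $t = Measure-Command { Copy-Item $file.FullName "\\<nas_ip>\<share>\" }
    "{0:N1} MB/s" -f ($file.Length / 1MB / $t.TotalSeconds)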

 

I know that some models of 10 GbE NICs really aren't that great, from what I've been told, but I also think that if you are writing data to a NAS that is set up with RAID5 and its processor isn't particularly fast at the parity calculations, that can also slow down your speeds.

 

My Qnap TS-832X has an Annapurna Labs quad-core ARM processor in it, and my array is configured as RAID5. I have two of these NAS units, tied together via two separate 10 GbE SFP+ direct-attach cables, and each 10 GbE link is only really good for up to around 150 MB/s (1.2 Gbps). So if I have a lot of data to move, I actually break it up into two separate streams, and that way I've been able to push my array to around 200 MB/s (one unit has eight HGST 6 TB 7200 rpm SATA 6 Gbps HDDs and the other has eight HGST 10 TB 7200 rpm SATA 6 Gbps HDDs). So I can't even quite hit 2 Gbps between them.
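(On Windows, robocopy can do the multi-stream trick in one command; the /MT switch, which needs the Windows 7 / Server 2008 R2 version of robocopy, splits the copy into parallel threads. The paths here are placeholders:

    robocopy D:\backups \\<nas_ip>\<share> /E /MT:8

/E includes subdirectories; /MT:8 runs eight copy threads.)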

 

If I put SSDs in it, I probably could, but I'd also probably burn through the write endurance limit (which I've already done to at least five of my SSDs and had to RMA them), so I don't use SSDs for bulk storage for that reason.

 

So between the drives, the processor, and the NIC -- that's about as fast as I am going to be able to get my NAS to go, which, for the most part, is fine.

 

So, I'm going to run you through a similar kind of diagnostic, where I ask a bunch of really dumb questions about your configuration, then have you test with copying a really large file, and then probably run a ping/packet flood on the 10 GbE NIC itself to see what kind of throughput you might be getting (in terms of packets), etc.

 

You'll also want to get iPerf up and running as well, so if you can work towards that, that will also help.

 

Thanks.

IB >>> ETH


@alpha754293

 

Quote

What's the make and model of your 10 GbE NIC?

 

 

On the desktop: Asus XG-C100C

On the NAS: Synology 10Gb (E10G18-T1)

 

Quote

And what is the make and model of your hard drives?

 

On the NAS, I have 6x WD Red drives: a combination of 10TB (WD101KFBX, 2 of these, 7200RPM) and 8TB drives (WD8003FFBX, 3 of these, plus 1x WD80EFAX; the 8TB ones are 5400RPM, but I still don't expect that to explain a 10GbE transfer speed of around 1 MB/s!).

 

On the desktop, Samsung SSDs (850 EVO, 2x 2TB in RAID0) and 1 boot SSD, which is an OCZ Vector 150.

All the drives are SATA 6Gb/s.

 

Quote


Is the array in some kind of a RAID? If so, what's the RAID level? (RAID0? RAID5? etc.)

 

Desktop is RAID0.

On the NAS it is configured as SHR (with data protection for 1-drive fault tolerance) using Btrfs.

 

Quote

 

Is the array thin provisioned?

 

I am not sure, but I suspect so (it shows as 1 big drive when I map it), and 1 storage pool:

 

[screenshot: Synology storage pool overview]

 

I hope this helps! Thanks again


Sorry...just getting back to this now.

 

Were you able to get iPerf installed on your system okay?

 

I take it that on both your Windows machine and your NAS, you are able to confirm that the link auto-negotiated to 10 Gbps?
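On the Windows side, one quick way to read the negotiated link speed is WMI, which reports Speed in bits per second, so a healthy 10 GbE link should show 10000000000:

    wmic nic where NetEnabled=true get Name,Speed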

 

I might also check the MTU and frame size, and whether both NICs support jumbo frames.

 

But nothing jumps out at me in terms of configuration issues that would indicate a problem.
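If the link itself tests clean, it might also be worth measuring what the array can absorb with the network out of the picture. A rough sketch over SSH on the NAS (this assumes SSH is enabled, your volume is /volume1, and your DSM's dd accepts these options; delete the test file afterwards):

    dd if=/dev/zero of=/volume1/ddtest.bin bs=1M count=2048 conv=fsync    # write 2GB of zeros, flushing at the end
    rm /volume1/ddtest.bin

dd prints an average write rate when it finishes; if that number is far below expectations, the bottleneck is the array rather than the network.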

IB >>> ETH


Did you assign a different, unique subnet to your point-to-point 10G connection, without defining a gateway?

PC : 3600 · Crosshair VI WiFi · 2x16GB RGB 3200 · 1080Ti SC2 · 1TB WD SN750 · EVGA 1600G2 · Define C 


Thank you

 

@alpha754293 - yes to all your questions. The NICs do support jumbo frames.

I do have iperf but have not run it yet. The last time I tried, it didn't work because the connection was way too slow for it to run - seriously.

@beersykins - I don't think so? There is no default gateway configured on the PC's 10Gb NIC.


3 hours ago, AhmedIlyas said:

@beersykins - I don't think so? There is no default gateway configured on the PC's 10Gb NIC.

Did you reuse the same subnet, or did you use a different one?

PC : 3600 · Crosshair VI WiFi · 2x16GB RGB 3200 · 1080Ti SC2 · 1TB WD SN750 · EVGA 1600G2 · Define C 


1. Leave jumbo frames on for both 10GbE devices.

2. What applications is the Synology running? Stop them before trying any speed tests. Next, how much RAM does your Synology have? Perhaps add more RAM to the Synology.

3. Also, did you wire the patch cable between the devices yourself? If so, try a pre-made Cat 6a patch cable of at least 6 ft in length - longer is better, so if you've got a 25-footer, try it. Do not use a very short (less than 5 ft) patch cable.

4. Add an M.2 drive to your Synology to provide a cache buffer layer for writes/reads.

Reasons:

1. Jumbo frames are better for 10GbE connections. You said that after enabling jumbo frames, transfers were good for a VERY short period of time; because of that, your problem can most likely be solved by the solutions in #2 and #4.

2. Certain apps running on the Synology could overload its local capabilities. The Synology needs free RAM to cache the received data before it can write it to disk. The Synology can take up to 32GB of RAM, and any not used by the OS and running apps will be used as cache (see the quick check after this list).

3. For direct connects to work well on any form of Ethernet, the entire packet needs to be physically 'on the wire' for the collision sensing to operate correctly; if the patch cable is too short, this can't happen, and the collision detection portion of the adapters doesn't work well. This isn't an issue if you have a switch in between, as the switch buffers packets and makes collision detection moot.

4. If the Synology has an M.2 drive, it will use it as a cache for the rotational drives and speed up transfers substantially.
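To watch memory during a transfer, over SSH on the Synology (assuming SSH is enabled on it):

    free                 # totals for used/free memory
    cat /proc/meminfo    # more detail, if 'free' isn't available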

 


16 minutes ago, Mwoody said:

3. Also, did you wire the patch cable between the devices yourself? If so, try a pre-made Cat 6a patch cable of at least 6 ft in length - longer is better, so if you've got a 25-footer, try it. Do not use a very short (less than 5 ft) patch cable.

3. For direct connects to work well on any form of Ethernet, the entire packet needs to be physically 'on the wire' for the collision sensing to operate correctly; if the patch cable is too short, this can't happen, and the collision detection portion of the adapters doesn't work well. This isn't an issue if you have a switch in between, as the switch buffers packets and makes collision detection moot.

Until you can show me a paper on this proving these claims, I will call them bullcrap.



IEEE 802.3 & IEEE 802.3a. Just read the spec. Odds are you're too young to have ever heard about such things, because you've always had switches to take care of the collision-sense issues, but the old rules apply when you don't.
 


7 minutes ago, Mwoody said:

IEEE 802.3 & IEEE 802.3a. Just read the spec. Odds are you're too young to have ever heard about such things, because you've always had switches to take care of the collision-sense issues, but the old rules apply when you don't.

Look, just because an old spec required something doesn't mean a later spec like 802.3ab or 802.3an has to build on it and keep the same rules. Specs evolve and change over time, and obsolete requirements are dropped. I damn well know my 802.3 specs, and I know that 802.3a is NOT for twisted pair but for coax.

https://en.wikipedia.org/wiki/Ethernet_over_twisted_pair

https://en.wikipedia.org/wiki/Ethernet_physical_layer#Minimum_cable_lengths



44 minutes ago, Mwoody said:


 

 

1) OK, I will leave them on.

2) No applications. Seriously. None. The Synology has 4GB RAM; it is only using 12% at idle.

3) Nope, I bought them. They are 5 ft cables.

4) I do not want to spend any more money than I already have, only to arrive at the same issue/conclusion.


So again, enabling jumbo frames gives me 45-55 MB/s to begin with. This happened when I restarted my computer, but I will report back on what happens after about an hour or so, since "later in the day" is when the issues I am seeing tend to appear.

 

 

 

 


Here's what Fluke has to say on the side topic:

For Category 5e and 6, there is no minimum length requirement. ANSI/TIA/EIA-568-B.2-1, in Annex K, does give a warning about reflected FEXT on shorter links with minimally compliant components. The obvious solution is not to purchase minimally compliant components. In the early days of Cat 6, when vendors were struggling to do better than marginally compliant, short links were an issue. Today, this is not an issue if you stay with a mainstream vendor.

Within this same standard, there is also advice on distance when using a consolidation point. It advises a minimum distance of 5 m between the CP and TO. ISO/IEC is a little clearer, specifying 15 m between the DP and CP. This is all for Category 6/Class E.

With regards to Category 6A, there is a minimum length requirement - kind of. Annex J of ANSI/TIA-568-B.2-10 describes worst-case modeling using a 10 m link. The suggestion therefore is that you should not go below 10 m. But again, that is with minimally compliant components. As with Category 6 above, there are now components available that will give you passing field tests below 10 m. HOWEVER, even vendors with good components may still have a minimum length requirement in their design specifications. The only way to know where you stand is to talk to the vendor AND test it to see.

If you are talking specifically about patch cords, then 0.5 m is the implied minimum length in ANSI/TIA/EIA-568-B.2-1 for a certified patch cord. That's because the math for the limit lines really does not work below this. In fact, getting a certified patch cord of 0.5 m is going to be tricky; many vendors only offer certified patch cords of 1.0 m or longer. I suspect that this may be the most useful information with regards to your question.

Kind regards

Adrian Young
Sr. Customer Support Engineer

Fluke Networks Technical Assistance Center
6920 Seaway Blvd, Everett, WA 98203
Toll Free 1 800 283 5853
International + 1 425 446 4519

 

Now for the primary topic:

I expect that you are running out of RAM on the Synology to buffer the data before it can calculate the parity and write it off to disk. That is the most likely issue, and the most likely solution is to add RAM. I've used Synology units before, and they always performed MUCH better after the RAM was increased.


Thanks @Mwoody - actually no, the RAM stays pretty consistent, using less than 30% when copying large files.

So far it is staying in the 45-55 MB/s range, which is better than before for sure. Any faster? I would expect so from a 10GbE direct connection!

 

 

