Getting rid of the hard disk bottleneck on a 10G network

I have a

dual X5430

28 GB of RAM

7x 4 TB hard drives

Ubuntu 18.08

I am maxing out my 10 gig NIC at 300 MB/s, or 30% of its possible throughput, and I want to get my full speed. Can I set up a hard drive cache on my SSD, or create a RAM disk for it? I also want to put in a dual 10 gig or even 40 gig InfiniBand card. Anyone have advice? I am about to upgrade to some MikroTik 10 gig switches.
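Before buying hardware it's worth confirming the disks, and not the network or the client, are actually the bottleneck. A minimal sketch in Python; the file path is hypothetical, and you need to drop the page cache first or the OS will serve the reread from RAM:

```python
import time

def sequential_read_mb_s(path, chunk_size=1 << 20):
    """Read `path` front to back and return throughput in MB/s.

    Run `sync; echo 3 > /proc/sys/vm/drop_caches` as root beforehand,
    otherwise the page cache reports RAM speed, not disk speed.
    """
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total / elapsed / 1e6

# e.g. sequential_read_mb_s("/tank/some-big-file.bin")
```

If this reports well above 300 MB/s locally, the array is fine and the bottleneck is the network path or the machine on the other end.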


You need faster storage: either go all-SSD (though even SATA SSDs won't saturate a 10G link on their own) or use a cache tiering system.


A RAM disk is dangerous as hell, but it would work. You'd have to SSD-cache the hell out of that thing to get 10G writes. You'd be better off using a U.2 NVMe RAID cache if you want 10G writes. If you're looking for 10G reads and writes... full U.2 storage.


Yeah, an SSD cache will help with write performance but not read performance. If you want faster speeds in both directions you flat out need an array of SSDs. Don't forget both ends need fast storage: a single SATA SSD will cap out around 500 MB/s, so you need more than one.

 

Unless you have some crazy RAID array, your 7 disks should be putting out around 500 MB/s for a single file transfer, so your desktop or whatever client you're using must have a slow SSD.

 

SSD read caches are meant for something along the lines of databases, where multiple servers request the same data. Even then, the initial transfer will be slow because the SSDs still have to pull the data from the spinners.

 

LACP / link aggregation does not just double your bandwidth; it creates more lanes for traffic to flow. Like a road with multiple lanes for cars: it doesn't increase the speed limit, just how many cars can travel at one time. So maybe you get 2 cars instead of 1 at the exit, but both were still going 60 mph.
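To make the lanes analogy concrete: LACP hashes each flow's addresses to pick one member link, so a single TCP stream stays on one link no matter how many you bond. A toy model (real hash policies vary by switch and bonding mode; using source/destination MACs here is just one common choice):

```python
def lacp_pick_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Map a flow to one member link, as LACP hash policies do.

    Every packet of a given flow lands on the same link, which is why
    one transfer can never go faster than a single link's speed.
    """
    return hash((src_mac, dst_mac)) % n_links

# The same flow always lands on the same link within a run:
a = lacp_pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", 2)
b = lacp_pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", 2)
```

Different flows may land on different links, which is how aggregate throughput goes up even though any one transfer does not.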

 

God I hate Fibre Channel, so somebody else will have to help / comment on that one.


Depending on how you use the HDDs, it could be a network-related issue (e.g. jumbo frames) and not the disks at all.

We kind of need to know if your setup is in a RAID, what kind of RAID setup, and how fast your disks are in that setup.


2 hours ago, kladzen said:

Depending on how you use the HDDs, it could be a network-related issue (e.g. jumbo frames) and not the disks at all.

We kind of need to know if your setup is in a RAID, what kind of RAID setup, and how fast your disks are in that setup.

I dunno what kind of HDDs he's using, but with 7 WD Red drives in FastRAID and sustained write speeds of about 150 MB/s each, that sounds right on the money.

 

150 MB/s per HDD × 8 bits/byte = 1,200 Mb/s per drive

 

1,200 Mb/s × 7 drives in FastRAID = 8,400 Mb/s

 

300 MB/s transfer = 2.4 Gb/s transfer.

 

10 Gigabit Ethernet = 10,000 Mb/s

 

Something is not right.


43 minutes ago, knightslugger said:

I dunno what kind of HDDs he's using, but with 7 WD Red drives in FastRAID and sustained write speeds of about 150 MB/s each, that sounds right on the money.

 

150 MB/s per HDD × 8 bits/byte = 1,200 Mb/s per drive

 

1,200 Mb/s × 7 drives in FastRAID = 8,400 Mb/s

 

300 MB/s transfer = 2.4 Gb/s transfer.

 

10 Gigabit Ethernet = 10,000 Mb/s

 

Something is not right.

7 × 150 MB/s is about 1 GB/s, assuming no parity; with single parity you're at roughly 850 MB/s. That's for single-file, uncompressed transfers. With parity you're also limited to a single disk's I/O, which is probably 100-200 IOPS, so 300 MB/s is very possible if he's trying to transfer a folder of files or an archive full of junk.

 

Isn't "FastRAID" just an HP controller? Where does 1,200 Mb/s come from?

 

A 10G card still needs to send ACK packets, so you might get around 9 Gb/s, or slightly over 1 GB/s.

 

Odds are whatever he's transferring to is the bottleneck, be it his personal computer or another server. Probably a slow SSD or a small disk array.
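Those estimates can be sketched as a quick model, with an efficiency factor as a knob since striped arrays rarely scale perfectly; the 150 MB/s per-drive figure comes from the thread, everything else is illustrative:

```python
def array_seq_mb_s(n_drives, per_drive_mb_s, parity_drives=0, efficiency=1.0):
    """Estimate sequential array throughput in MB/s: data drives times
    per-drive rate, scaled down by an efficiency factor for overhead."""
    return (n_drives - parity_drives) * per_drive_mb_s * efficiency

ideal = array_seq_mb_s(7, 150)                         # 1050 MB/s, ~1 GB/s
with_parity = array_seq_mb_s(7, 150, parity_drives=1)  # 900 MB/s ideal
```

Knock roughly 5-10% off `with_parity` for real-world overhead and you land near the 850 MB/s quoted above.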


3 minutes ago, Mikensan said:

7 × 150 MB/s is about 1 GB/s, assuming no parity; with single parity you're at roughly 850 MB/s. That's for single-file, uncompressed transfers. With parity you're also limited to a single disk's I/O, which is probably 100-200 IOPS, so 300 MB/s is very possible if he's trying to transfer a folder of files or an archive full of junk.

 

Isn't "FastRAID" just an HP controller? Where does 1,200 Mb/s come from?

 

A 10G card still needs to send ACK packets, so you might get around 9 Gb/s, or slightly over 1 GB/s.

 

Odds are whatever he's transferring to is the bottleneck, be it his personal computer or another server. Probably a slow SSD or a small disk array.

1,200 Mb/s comes from 150 MB/s translated into bits; that's the uncompressed sequential read throughput of the WD Red 4 TB drive. Writes, I have no clue.

 

A 10 Gb/s network translated to bytes is 1,250 MB/s. For simplicity's sake, I'll agree 10 Gb/s ≈ 1 GB/s.

 

I'm using FastRAID to mean a configuration (RAID 0), not a specific piece of hardware in this instance.

 

He should be getting double the throughput he is now if he's using RAID 0. If he's not, he shouldn't be surprised with what he's getting.


12 minutes ago, knightslugger said:

1,200 Mb/s comes from 150 MB/s translated into bits; that's the uncompressed sequential read throughput of the WD Red 4 TB drive. Writes, I have no clue.

 

A 10 Gb/s network translated to bytes is 1,250 MB/s. For simplicity's sake, I'll agree 10 Gb/s ≈ 1 GB/s.

 

I'm using FastRAID to mean a configuration (RAID 0), not a specific piece of hardware in this instance.

 

He should be getting double the throughput he is now if he's using RAID 0. If he's not, he shouldn't be surprised with what he's getting.

Ah, OK. When you threw out FastRAID I didn't realize you were converting units; I thought you were listing a different configuration of disks. Got you now. I agree, in a striped array he should be cruising.


SSDs are cheap enough that you could get a 512 GB one just for write caching.

 

Otherwise, one idea would be to upgrade your hard drives. An 8 TB HGST Ultrastar has a sustained rate of 195 MB/s; with 4 of them in RAID 0 you'd get close to 700 MB/s or so. But is it worth $1k? Probably not.
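Extending that math: to actually fill a 10 Gb/s link with these drives you'd need roughly seven of them striped, assuming ideal linear scaling (which, as noted elsewhere in the thread, real arrays don't quite reach):

```python
import math

link_mb_s = 10_000 / 8   # 10 Gb/s is 1250 MB/s
drive_mb_s = 195         # quoted Ultrastar sustained rate
drives_needed = math.ceil(link_mb_s / drive_mb_s)
```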

 


9 hours ago, knightslugger said:

1,200 Mb/s comes from 150 MB/s translated into bits; that's the uncompressed sequential read throughput of the WD Red 4 TB drive. Writes, I have no clue.

 

A 10 Gb/s network translated to bytes is 1,250 MB/s. For simplicity's sake, I'll agree 10 Gb/s ≈ 1 GB/s.

 

I'm using FastRAID to mean a configuration (RAID 0), not a specific piece of hardware in this instance.

 

He should be getting double the throughput he is now if he's using RAID 0. If he's not, he shouldn't be surprised with what he's getting.

RAID scaling isn't perfectly linear, and HDDs don't actually perform nearly as well as the spec sheets state. For WD Reds I assume 80 MB/s sustained per disk in an array and 100 MB/s standalone; bump up to a 7,200 RPM SAS drive and I assume 100 MB/s per disk sustained.

 

A long time ago I was running 4 WD VelociRaptors off a high-end SAS RAID card in RAID 0, and they would top out at 800 MB/s read and 700 MB/s write.


On 12/11/2018 at 1:01 AM, knightslugger said:

RAMDISK is dangerous as hell, but it would work. You'd have to SSD Cache the hell out of that thing to get 10G writes. You'd be better off to use u.2 NVMe RAID cache if you wanted 10G writes. If you're looking for 10G read/writes... full u.2 storage.

His server is an ancient Core 2-based thing, so SATA 2 is basically the limit anyway, maxing out around 300 MB/s.


6 hours ago, NelizMastr said:

His server is an ancient Core 2-based thing, so SATA 2 is basically the limit anyway, maxing out around 300 MB/s.

It pains me so much to think of Core 2 as a dinosaur, but good god. Looking it all up on Google, even a PCIe 2.0 lane can only handle about 500 MB/s.
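The larger point here: a transfer runs at the slowest hop in the chain, and on Core 2-era hardware that hop is the storage bus, not the 10G NIC. A sketch with rough, illustrative per-hop figures:

```python
# Approximate sequential bandwidth cap of each hop, in MB/s
caps_mb_s = {
    "SATA 2 port": 300,
    "PCIe 2.0 x1 lane": 500,
    "10GbE link": 1250,
}

# End-to-end throughput is bounded by the slowest hop
bottleneck = min(caps_mb_s, key=caps_mb_s.get)
```

With a SATA 2 port capping out around 300 MB/s, the OP's observed 300 MB/s lines up suspiciously well.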


1 minute ago, Mikensan said:

It pains me so much to think of Core 2 as a dinosaur, but good god. Looking it all up on Google, even a PCIe 2.0 lane can only handle about 500 MB/s.

We sure are spoiled with these shiny NVMe drives with direct PCIe links and even SAS 12Gbps being saturated easily by spinning rust.

