
Diskless System

Raylex

Can a diskless system surpass a 1 Gb LAN connection? I'm currently running a Linux enterprise and Windows-based diskless system, and I was wondering how I can increase the network speed beyond 1 Gb without spending a lot on a 10 Gb network.


Install a second 1Gbps NIC and set up load balancing between the two.


13 hours ago, Naeaes said:

Install a second 1Gbps NIC and set up load balancing between the two.

Load balancing is not the same as Link Aggregation.



14 hours ago, Raylex said:

Can a diskless system surpass a 1 Gb LAN connection? I'm currently running a Linux enterprise and Windows-based diskless system, and I was wondering how I can increase the network speed beyond 1 Gb without spending a lot on a 10 Gb network.

Oh my god my eyes.



What use is a system without disks and why would you need to increase speed if there is nothing there to access?


Link aggregation would be your only (cheaper) option, but as others have suggested, you need to make sure your switch supports it. Not every switch does; you'll generally need a managed switch, since unmanaged switches typically can't be configured for link aggregation, so check yours first.

 

On another note, cheap 4 Gbit NICs would only work in one of two scenarios:
1. You have a switch that can handle 4 Gbit connections (usually costly).

2. You connect the two PCs directly through their 4 Gbit NICs, which would probably not work for you unless your network consists of only your host and a single client.

"Rampage IV" - Gaming PC

Cooler Master HAF 932 Advanced    EVGA GeForce GTX 980                            ASUS VE278H 27in LED Monitor x 3

ASUS Rampage IV Black Edition         G.Skill Trident X 16GB DDR3 2400Mhz     Cooler Master Silent Pro Gold - 1000W

i7 4930k - Overclocked @ 4.5GHz     Samsung 850 SSD 250GB x2 RAID 0           Western Digital Blue 1TB

Logitech G930 Wireless Headset      Razer Naga 2012 MMO Gaming Mouse      Logitech G710+ Mechanical Keyboard

 

"EMCMS-ESXI" - Server

HPZ800 Workstation Chassis           Seagate 4TB NAS Drive x 4 RAID Z           48GB ECC Elpida DDR3 SDRAM

Xeon E5620 @ 2.66GHz x 2             PNY CS2211 240GB SSD                          HP 80 PLUS Silver APFC PSU - 1110W

LSI 9211-8i SAS in IT Mode


Go watch the video Linus did on the 10Gbps NIC with Twinax cable for under $150.

-KuJoe


On 4/12/2016 at 1:47 PM, Naeaes said:

Install a second 1Gbps NIC and set up load balancing between the two.

I did install additional NICs, but I didn't know about the load-balancing switch. I don't think load balancing makes the network faster, though. Thank you for your reply :D

 

On 4/12/2016 at 3:06 AM, Stefan1024 said:

There are some old but super-cheap 4 Gbit/s fiber NICs on eBay.

That means I'd have to buy fiber-capable switches and cables, so it's a no-go.

 

On 4/13/2016 at 10:22 AM, beavo451 said:

What use is a system without disks and why would you need to increase speed if there is nothing there to access?

I bet you haven't Googled what I'm talking about. Google is your best friend :D

 

12 hours ago, Sevilla said:

Link aggregation would be your only (cheaper) option, but as others have suggested, you need to make sure your switch supports it. Not every switch does; you'll generally need a managed switch, since unmanaged switches typically can't be configured for link aggregation, so check yours first.

 

On another note, cheap 4 Gbit NICs would only work in one of two scenarios:
1. You have a switch that can handle 4 Gbit connections (usually costly).

2. You connect the two PCs directly through their 4 Gbit NICs, which would probably not work for you unless your network consists of only your host and a single client.

Thanks, man. I'll look it up and see if I can do this link aggregation. I have 7 servers and 300 diskless nodes/clients, so I hope I can make it work. I still have to submit a proposal, but thanks again.

 

12 hours ago, KuJoe said:

Go watch the video Linus did on the 10Gbps NIC with Twinax cable for under $150.

That was between two PCs only, right? So it's a no-go, my friend :D


23 hours ago, beavo451 said:

What use is a system without disks and why would you need to increase speed if there is nothing there to access?

A diskless system is one that boots off the network.

There are a lot of diskless systems; the most common are thin clients for Citrix XenDesktop deployments.



56 minutes ago, Raylex said:

That was between two PCs only, right? So it's a no-go, my friend :D

According to your first post you have two systems, one Linux and one Windows, but I see from your recent post that this isn't the case. Your best bet is to invest in 10 Gbps switches if you really have 307 devices on your network and they really need disk I/O faster than ~110 MB/s. LACP is nice, but doubling your ports (to 614) becomes a much bigger hassle, and even more so once you start to expand. It does give you redundancy, though. I also haven't seen many thin clients with dual NICs, unless you're just using desktops, in which case you can add 300 PCI cards to the budget.
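To put some very rough numbers on that, here's a back-of-envelope sketch in Python. It assumes all 300 clients hit one server's uplink at the same time; real diskless traffic is bursty, so actual contention will be lower, but it shows why the server-side uplink is what matters.

clients = 300  # diskless nodes from the posts above (assumed worst case: all active at once)

# Server uplink options being discussed: single gigabit, 2x gigabit LACP, single 10 gigabit.
for label, uplink_gbps in [("1x 1 Gbps", 1), ("2x 1 Gbps LACP", 2), ("1x 10 Gbps", 10)]:
    per_client_mbps = uplink_gbps * 1000 / clients
    print(f"{label}: ~{per_client_mbps:.1f} Mbps per client under full contention")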

-KuJoe


52 minutes ago, KuJoe said:

According to your first post you have two systems, one Linux and one Windows, but I see from your recent post that this isn't the case. Your best bet is to invest in 10 Gbps switches if you really have 307 devices on your network and they really need disk I/O faster than ~110 MB/s. LACP is nice, but doubling your ports (to 614) becomes a much bigger hassle, and even more so once you start to expand. It does give you redundancy, though. I also haven't seen many thin clients with dual NICs, unless you're just using desktops, in which case you can add 300 PCI cards to the budget.

My bad for not clearing that up. I was referring to the servers. I have 5 Linux-based servers and 2 Windows-based servers, and the clients/nodes run Win7/WinXP. I just found out that multiple 64-bit NICs, compared to 32-bit ones (maybe Linus can make a video on this, since not a lot of people know about it), can increase my server network speeds, since 1 Gbps is good enough for the clients/nodes. The software I use can assign clients to NICs (e.g. NIC-1 handles clients 1-50, NIC-2 handles clients 51-100, and so on), so can LACP further increase my server network speeds?


28 minutes ago, Raylex said:

My bad for not clearing that up. I was referring to the servers. I have 5 Linux-based servers and 2 Windows-based servers, and the clients/nodes run Win7/WinXP. I just found out that multiple 64-bit NICs, compared to 32-bit ones (maybe Linus can make a video on this, since not a lot of people know about it), can increase my server network speeds, since 1 Gbps is good enough for the clients/nodes. The software I use can assign clients to NICs (e.g. NIC-1 handles clients 1-50, NIC-2 handles clients 51-100, and so on), so can LACP further increase my server network speeds?

From what I understand, it can increase the total bandwidth up to the number of 1 Gbps ports, but it will not increase the throughput of a single connection. For example, if you take two 1 Gbps ports and bond them together with LACP, you'll have 2 Gbps available across the two ports, but you will not get a single transfer at 2 Gbps (you can, however, get two concurrent transfers at 1 Gbps each).
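To make the single-flow limit concrete, here's a tiny Python sketch of how a layer-3+4 style transmit hash picks a bonded link. It's purely illustrative: the IPs, ports, and hash below are made up and are not the actual 802.3ad algorithm any particular switch or bonding driver uses, but the idea is the same: every packet of one TCP connection hashes to the same member port, so one flow can never use more than one 1 Gbps link.

import hashlib

links = ["eth0", "eth1"]  # two bonded 1 Gbps ports

def pick_link(src_ip, dst_ip, src_port, dst_port):
    # Hash the flow's addresses/ports and map the result onto a member link.
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return links[digest % len(links)]

# One big transfer = one flow = one link, no matter how many ports are bonded:
print(pick_link("10.0.0.5", "10.0.0.10", 50000, 3260))  # same answer every time
# A second client is a different flow and may land on the other link,
# which is how two concurrent transfers can use 2 Gbps in aggregate:
print(pick_link("10.0.0.6", "10.0.0.10", 50001, 3260))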

-KuJoe


1 hour ago, KuJoe said:

From what I understand, it can increase the total bandwidth up to the number of 1 Gbps ports, but it will not increase the throughput of a single connection. For example, if you take two 1 Gbps ports and bond them together with LACP, you'll have 2 Gbps available across the two ports, but you will not get a single transfer at 2 Gbps (you can, however, get two concurrent transfers at 1 Gbps each).

Thank you very much for the insights. Now I'll just have to buy the stuff and test it out. I'll post the results if it really increases network speeds.


I occasionally run one of my laptops completely diskless, PXE-booting to an iSCSI mount on my NAS machine. I'm able to monitor the throughput, and a faster network would not be of much, if any, advantage. Even with SSDs, throughput is rarely in excess of what Gig-E can deliver.

 

And yes, I have run "synthetic" benchmarks and pushed Gig-E to its limits, around 125 megabytes/second, so it's not a matter of my gear being unable to keep up.
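For anyone wondering where that ~125 MB/s figure comes from, it's just the Gig-E line rate converted to bytes per second. A quick sanity check in Python (the ~6% overhead deduction is a rough approximation that varies with MTU and protocol):

line_rate_bps = 1_000_000_000  # Gigabit Ethernet line rate in bits/s
theoretical_mb_s = line_rate_bps / 8 / 1_000_000
print(f"theoretical: {theoretical_mb_s:.0f} MB/s")  # 125 MB/s

# Knock off roughly 6% for Ethernet/IP/TCP framing to get a realistic payload ceiling.
print(f"realistic TCP payload: ~{theoretical_mb_s * 0.94:.0f} MB/s")  # ~118 MB/s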


2 hours ago, Mark77 said:

I occasionally run one of my laptops completely diskless, PXE-booting to an iSCSI mount on my NAS machine. I'm able to monitor the throughput, and a faster network would not be of much, if any, advantage. Even with SSDs, throughput is rarely in excess of what Gig-E can deliver.

And yes, I have run "synthetic" benchmarks and pushed Gig-E to its limits, around 125 megabytes/second, so it's not a matter of my gear being unable to keep up.

I do the same thing for the disk that hosts my Steam library on my desktop, but using two Intel X540-T1s between it and the server. The server has an array of 6 SSDs, and I can hit 4 Gbps+ loading games and 10 Gbps in benchmarks.

 

Generally speaking, yeah, not much needs more than 1 Gbps, and latency really kills the usable bandwidth; even directly connected copper 10 Gbps with jumbo frames doesn't come close to SATA bus latency.

 

I have switched from iSCSI to SMB 3 multichannel, though, as it is slightly faster.

