
10 Gbps Switch to 1 Gbps Switch


Posted · Original Poster (OP)

I'm planning to build an internet cafe. Will this network setup do? I'll be using a diskless system, so I'm wondering whether this setup will make my client PCs faster in terms of data transfer, since the clients won't have hard drives. With my current setup, without the 10 Gbps switch and NIC, the real-world speed I get is about 40-80 Mbps. Thanks.




Well, you're going to bottleneck the 48-port switches to 1 Gbps unless those switches have 10 Gbps uplink ports that you didn't include in the drawing. Also, why not get a single 8-port 10 Gb switch instead of two switches with 2x 10 Gb ports each?


I don't get the point of this setup: why are there two 10 Gbps switches at all if you're only connecting single 1 Gbps switches to them? They're completely redundant here; you could leave them out entirely and get exactly the same results.


Picture to demonstrate:




What you need is 48-port gigabit switches with 10G uplinks.

I'd recommend Ubiquiti 48-port switches: they have dual SFP+ 10G uplinks, so you can use one to link the switches together and the other to connect to the 10G computer. You can't really use 20G anyway, so it's better to have a redundant 10G link.

And if you're not setting up some kind of caching server on that 10G computer/server, you're not going to benefit from 10G, because most network traffic will need to travel through the WAN connection anyway.


Remember, the connection will only be as fast as the slowest link in the chain: in this case, the 1 Gb uplinks on the 48-port switches. Because of those, you will not benefit from the 10 Gb uplinks on the 8-port switch at all.


For an internet cafe, 10G really is not appropriate. You mentioned diskless systems, so the clients will not be transferring data anyway.

TP-Link makes great consumer-grade stuff, but for an internet cafe I would look at something like Ubiquiti, Cisco, or Ruckus, depending on the number of clients.

1 hour ago, skippytheturtle said:

You mentioned disk-less systems so clients will not be transferring data anyway.

Wouldn't that be the opposite? A client with no local storage has to acquire all of its data over the network.

As per the others, @Vexillio, at least get something with 10G uplinks. The extra switches in line make no sense.


It might help to know exactly how this diskless system is going to work. Why diskless in the first place?


You might want to look into this if you are using managed switches:
It might solve your bottleneck issues, and it will make your environment more resilient.
Remember, in networking it's always:
1 link is no link
2 links is 1 link

Also, consider your server(s): if they are the bottleneck, there is no use in having better switches. So you probably want to set up a failover cluster with a load balancer in front, with dual 10-gig connections between the cluster nodes and the load balancer, and a very fat connection to your core switches.
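The failover idea above can be sketched in a few lines. This is a toy model only, not any real load-balancer product's API; `Node` and `LoadBalancer` are illustrative names:

```python
# Toy round-robin load balancer that skips unhealthy nodes.
# Illustrates "2 links is 1 link": with two nodes, losing one still
# leaves you with service; with one node, you are down.

class Node:
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

class LoadBalancer:
    def __init__(self, nodes):
        self.nodes = nodes
        self._i = 0  # index of the next node to try (round robin)

    def pick(self):
        """Return the next healthy node, or None if every node is down."""
        for _ in range(len(self.nodes)):
            node = self.nodes[self._i]
            self._i = (self._i + 1) % len(self.nodes)
            if node.healthy:
                return node
        return None  # "1 link is no link"

a, b = Node("node-a"), Node("node-b")
lb = LoadBalancer([a, b])
a.healthy = False          # simulate one cluster node failing
print(lb.pick().name)      # node-b: traffic fails over to the survivor
```

A real deployment would replace the `healthy` flag with an actual health check (ping, TCP probe, application heartbeat), but the selection logic is the same.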


I've implemented a few diskless systems, and 1 Gbit is acceptable in most scenarios; most of the time, going diskless is done for security and/or ease of image management.


If the 48-port switches have a 10G uplink, that will ease the bottleneck for each individual client: clients will still be limited to 1G individually but will collectively have up to 10G of bandwidth. If you don't have 10G uplinks from the 48-port switches, there is absolutely no benefit to using 10G switching.
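That sharing argument can be put into a quick back-of-the-envelope sketch. This assumes an even split of the uplink across active clients and ignores protocol overhead, so treat the numbers as rough ceilings:

```python
# Per-client bandwidth when N clients share one switch uplink.
# Each client is capped both by its own NIC and by its fair share
# of the uplink, whichever is smaller.

def per_client_mbps(uplink_gbps: float, active_clients: int,
                    client_nic_gbps: float = 1.0) -> float:
    fair_share = uplink_gbps / active_clients
    return min(client_nic_gbps, fair_share) * 1000  # Gbit/s -> Mbit/s

print(per_client_mbps(1.0, 48))   # ~20.8 Mbit/s each: a 1G uplink is a severe bottleneck
print(per_client_mbps(10.0, 48))  # ~208 Mbit/s each with a 10G uplink
print(per_client_mbps(10.0, 8))   # 1000 Mbit/s: NIC-limited, the uplink has headroom
```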

On the systems I have implemented that use on-disk caching (an SSD in the device), disk IO and network 'thrashing' can be reduced significantly.


Things to consider if your uplinks are 10G:

  • You will need fast disks to max out 10 Gbit (~1180 MB/s theoretical); SSDs and some level of RAM caching would be sensible
  • Do not use cheap SSDs; make sure you use suitable drives with sufficient write endurance
  • Boot storms are a problem: booting all devices at once is very heavy on network throughput
  • Configure monitoring for any PXE and DHCP services; if they fail, clients will not be able to boot
  • 1 Gbit will provide approx. 115 MB/s of disk IO for each client (read OR write sequentially, *not both*)
  • A jumbo MTU of 9000 is a must when using the network for storage IO
  • If you plan on using iSCSI, I recommend redundant connections per machine and multi-path IO (MPIO)
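The per-link throughput figures in the list above come from simple arithmetic; the 7% overhead used here is an assumed round figure for TCP/IP framing with standard frames, not a measured value:

```python
# Convert a link rate into approximate usable payload throughput.
# 1 Gbit/s is 125 MB/s raw; protocol overhead eats a slice of that,
# which is where the oft-quoted ~115 MB/s per-client figure comes from.

def usable_mb_per_s(link_gbps: float, overhead: float = 0.07) -> float:
    raw = link_gbps * 1000 / 8        # line rate in MB/s
    return raw * (1 - overhead)

print(round(usable_mb_per_s(1)))   # ~116 MB/s: ceiling for each 1G client
print(round(usable_mb_per_s(10)))  # ~1160 MB/s: what the server disks must sustain
```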

Please quote or tag me if you need a reply
