Router for large LAN events

Definitely the Ubiquiti ERLite-3, and I don't think you need a subnet bigger than a /24 if you are only running wired connections. If you are also running WiFi, then to be safe you should go up to a /23.
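
For a quick sanity check on how many devices each prefix actually fits, here's a small sketch using Python's standard ipaddress module (the 192.168.10.0 base address is just a placeholder):

```python
import ipaddress

# Compare usable host counts for a /24 vs a /23 (example prefixes only).
for prefix in ("192.168.10.0/24", "192.168.10.0/23"):
    net = ipaddress.ip_network(prefix)
    # Subtract the network and broadcast addresses to get usable hosts.
    print(f"{prefix}: {net.num_addresses - 2} usable host addresses")
    # 192.168.10.0/24 -> 254, 192.168.10.0/23 -> 510
```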


Greetings.

 

We are planning to organize a LAN party and are expecting over 100 gamers to attend.

What router or other solution should I look for that is stable and can handle a large number of connections?

 

Last time we had around 50 people and used a TP-Link router that was supposedly better than a random household one; however, its DHCP failed to give out IPs and we had to assign them manually.

 

 

OP, everybody has been chiming in with their two cents, but nobody has asked you the most important question, and without the answer to it, it's very difficult to suggest the proper hardware.

 

For the LAN party: will the game server that the gamers are connecting to be hosted locally, or will users be connecting to a server on the internet? This makes a HUGE difference in the type of hardware you should be running.

 

Scenario one: the game server is hosted locally - most traffic stays on the network and internet usage is limited.

 

I would suggest getting a couple of these or something very similar:

 

http://www.ebay.ca/itm/Dell-POWERCONNECT-5548-48-x-10-100-1000-2-x-10-Gigabit-SFP-Switch-/161764719999?hash=item25a9edbd7f:g:bRIAAOSw~gRVpSNS

 

These switches have stacking capability, and you can use regular (high-quality) HDMI cables to connect two of them with a 10 Gb/s stacking link. What this means is that when you set up the switches, instead of managing two separate 48-port switches, it looks like you are managing a single 96-port switch.

 

The 10 Gb uplink speed between the switches also means there will be no bottleneck when computers hooked to one switch are talking to computers on the other. That can be an issue if you use 1 Gb ports to uplink switches: on a 48-port switch, you'd be forcing the aggregate traffic of 48 machines through a single 1 Gb link.
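
As a rough illustration of why the uplink matters, here is the back-of-the-envelope oversubscription math (a sketch; the assumption is that every client port could demand its full gigabit at once, which real game traffic never does):

```python
# Worst-case oversubscription on a switch-to-switch uplink:
# 48 gigabit client ports funnelled through a single uplink.
clients = 48
client_port_gbps = 1.0
aggregate_gbps = clients * client_port_gbps   # 48 Gb/s of potential demand

for uplink_gbps in (1.0, 10.0):
    ratio = aggregate_gbps / uplink_gbps
    print(f"{uplink_gbps:g} Gb/s uplink -> {ratio:.0f}:1 oversubscription")
    # 1 Gb/s uplink -> 48:1, 10 Gb/s uplink -> ~5:1
```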

 

From the specs I read, it also seems that the stacking ports and the 10 Gb SFP+ ports on that switch operate independently. That means you could have the 10 Gb link between the two switches and still connect the game servers to the stack over a 10 Gb connection.

 

For a router in that situation, I'd recommend something as simple as the EdgeRouter Lite or even the EdgeRouter X. With minimal outside traffic, either of those would be fine.

 

Scenario two: the game server is hosted on the internet - most traffic has to be routed out to the internet and very little traffic is machine-to-machine on the local network.

 

I'd suggest you get the full EdgeRouter. According to Ubiquiti, it'll route 2 million packets per second, and that's what you need. Most traffic from gaming sessions is small packets, so you can easily overload small, underpowered consumer gear with not very much total data but very high packet counts.
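
To put that 2 Mpps figure in context, here is a rough estimate of what 100 gamers might actually push through the router; the per-player packet rates and the overhead factor are assumptions typical of a fast-paced shooter, not measured values:

```python
# Very rough packets-per-second estimate for routed game traffic.
players = 100
pps_up = 128            # assumed client update rate (packets/s upstream)
pps_down = 128          # assumed server update rate (packets/s downstream)
overhead = 1.5          # assumed voice chat, downloads, background traffic

total_pps = players * (pps_up + pps_down) * overhead
print(f"Estimated load: ~{total_pps:,.0f} packets/s")   # ~38,400 packets/s

# Comfortably under the EdgeRouter's quoted 2,000,000 pps, but small
# packets at this rate can still choke consumer routers that forward
# every packet on a weak CPU.
```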

 

In this scenario, you could then just hook up four or five 24 port switches directly to the router - one to each port obviously. That would put each user only one hop away from the router and only two hops away from any other computer on the network.

 

Another thing that I'd suggest is this:

 

http://www.musiciansfriend.com/accessories/eurolite-12u-19-rack-mount-amp-case-w-casters

 

You can throw a bunch of rackmount gear in there: switches, router, your game servers, and even a rackmount UPS. LAN party in a box.

 

You keep all your gear in there: roll it into whatever venue you're hosting your LAN party at, pull off the front and back covers, hook the one power cable from the UPS up to the wall, and go. Having the UPS will save you from running a bunch of power cables and protect you from dodgy power in buildings you might not be familiar with.

3 weeks later...

We had local servers (I was using my own PC to run CS:GO game servers), but we also allowed players to connect to servers on the internet (1 Gbps connection in the building). We did not use the WiFi on the router. DHCP did indeed fail, though ping and internet speeds were fine after manually handing out local IPs. Technically, I could set my PC up as the DHCP server and turn off the router's.

 

I had the idea of using a PC as a router, but the idea came too late and I did not have enough time to get a proper NIC.

 

The switches were provided by our local university; they were 1 Gbps managed Cisco rackmount units.

 

Talking about the connection: when I connected my PC directly to the RJ45 port in the wall, I got about 950 Mbps down and up. When I connected with Cat6 cables from the wall to that router and then to my PC, I only got about 200 Mbps. The router seemed to be a major bottleneck.
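
Next time, a quick way to pin down where the bottleneck is would be to measure throughput between two PCs with a tool like iperf3, first through the wall port and then through the router. The sketch below is a much cruder stand-in using only Python's standard library, just to show the idea; the port number and the 1 GiB transfer size are arbitrary choices:

```python
import socket
import sys
import time

PORT = 5201               # arbitrary test port
CHUNK = 1 << 16           # 64 KiB send/receive buffer
TOTAL_BYTES = 1 << 30     # push 1 GiB per test run

def serve() -> None:
    """Receive bytes from one client and report the achieved rate."""
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received, start = 0, time.perf_counter()
            while data := conn.recv(CHUNK):
                received += len(data)
            elapsed = time.perf_counter() - start
            print(f"{addr[0]}: {received * 8 / elapsed / 1e6:.0f} Mbit/s")

def send(host: str) -> None:
    """Push TOTAL_BYTES to the server as fast as the path allows."""
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        sent = 0
        while sent < TOTAL_BYTES:
            conn.sendall(payload)
            sent += len(payload)

if __name__ == "__main__":
    # "python nettest.py server" on one PC,
    # "python nettest.py <server-ip>" on the other.
    serve() if sys.argv[1] == "server" else send(sys.argv[1])
```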

 

Can I ask what your LAN event is? Does it have a website or Facebook page etc?

 

We just create a Facebook event; it's not a major event.

 

I am quite fresh in this field and still trying to improve my networking knowledge and skills, but I'm still the most experienced in the organizing team.




I'm glad it worked out OK, and thanks for coming back to tell us about it. Now you know how to make it work better in the future. If you had the money and a use for it, building the complete setup into a wheeled flightcase would be awesome. You'd just need that and some plastic storage boxes of Cat5e cables with RJ45s attached.



I was talking about the first event we had; the new one is planned for March/April. :P



Honestly, if you need 100 computers on one network, your best bet is to go wired, get a couple of big switches, and give everybody manual IPs. That completely cuts down the broadcast traffic, and if you need internet connectivity, any router plugged from the internet into one of the switches will provide internet to the whole network (just turn off its DHCP).
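
If you do go the manual-IP route, writing the assignments out ahead of time goes a long way toward avoiding duplicates. Here is a small sketch that prints a seat-to-address sheet; the 192.168.20.0/23 prefix and the 20 addresses reserved for infrastructure are assumptions, not anything from this thread:

```python
import ipaddress

# Print a seat -> static IP sheet for a wired LAN party.
net = ipaddress.ip_network("192.168.20.0/23")    # placeholder subnet
hosts = list(net.hosts())

infra = hosts[:20]            # router, switches, game servers, ...
seats = hosts[20:20 + 100]    # one address per expected attendee

for seat_no, addr in enumerate(seats, start=1):
    # Hand each attendee their address, prefix length, and gateway on paper.
    print(f"Seat {seat_no:3d}: {addr}/{net.prefixlen}  gateway {hosts[0]}")
```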




Using manual IPs is a bad idea; it only takes one fat finger to cause you endless issues. DHCP does not generate any significant amount of traffic, especially on a gigabit network. Most likely the DHCP daemon crashed, or it ran out of leases, at the OP's previous LAN. If you make your subnet large enough, like a /22, and use an appropriate DHCP server, you should have no issues. If you want to limit broadcast traffic, increase the lease time: with a 24-hour lease, most PCs will only renew DHCP every 12 hours. This could waste some IPs, but a /22 network gives you 1022 hosts; minus a few addresses for equipment and the router, you'd still have a thousand or so, which is plenty.
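
To make the /22 suggestion concrete, here is a sketch of how the address space and lease timing break down; the 10.10.0.0/22 prefix, the 30 reserved addresses, and the 24-hour lease are illustrative assumptions:

```python
import ipaddress

net = ipaddress.ip_network("10.10.0.0/22")   # placeholder subnet
usable = net.num_addresses - 2               # 1022 usable host addresses
static_reserved = 30                         # router, switches, game servers
dhcp_pool = usable - static_reserved

lease_hours = 24
renew_hours = lease_hours / 2   # clients normally renew at 50% of the lease (T1)

print(f"Usable hosts: {usable}")             # 1022
print(f"DHCP pool:    {dhcp_pool} leases")   # 992
print(f"Renewal:      roughly every {renew_hours:.0f} h per client")
```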


How long will the party last? For a one-day party, any lease time over 18 hours shouldn't cause any issues.



 

I was working on the assumption that nobody had fat fingers when I suggested manual IPs. :P


