
10Gbps+ LAN Party *Sense Router/Server Build Suggestions/Recommendations

Hello.
I’m building a *BSD (pfSense/OPNsense) router/server for a 200+ player LAN party, and I need some suggestions from people who are a whole lot more in the know about such builds.

First some background:
My team and I have organised two big (100-150 player) LANs in the last two years.
Both times we used an X99 build with a 5820K @ 4.2GHz as the router.
It also contained three 4x 1Gbps Intel NICs as links to the switches and a 2-port 10Gbps QLogic card (HP 530T) as the main uplink.
We never even came close to maxing out the CPU, but we have decided to replace the X99 build with something more server-like, as we are expecting more and more players in the near future.

Requirements
We are looking to build a router/server that can handle at least 10Gbps and will stay with us for multiple years. That includes caching things like Steam, Microsoft updates, …
The machine would also run a couple of jails/VMs for game servers.

The current budget is €3000 (that’s for the motherboard, CPU and RAM only).
We plan on filling the machine up with:
2x 2-port 10Gbps Ethernet cards*,
2x 2-port 10Gbps fiber cards*,
n x 4-port 1Gbps Ethernet cards**.

So far I have found two motherboards that I assume would be good enough for our use case.

MW51-HP0 - Intel LGA2066 socket
I assume Intel CPUs are tried and tested in such a use case.
The CPUs boost to a higher clock than any AMD EPYC CPU currently on the market, but due to the budget I would be trading cores, memory and PCIe lanes for additional clock speed.***
The fact that the motherboard uses a PLX chip to provide some of the lanes also concerns me. I assume that if I used x4 PCIe cards in the last four slots there would be no issue, but upgrading down the line…
The board also only comes with 2x 1Gbps NICs (shared over DMI 3.0).

MZ31-AR0 - AMD SP3 socket
The board already has 2x SFP+ 10Gbps ports, eliminating one PCIe card.
Everything important is connected directly to the CPU, so no PLX chips.
It also has a bunch more RAM slots.
The problems I see are that AMD EPYC is clocked lower than Intel chips*** and, looking at the OpenBSD documentation, AMD EPYC is not mentioned anywhere (and yes, I know the CPU works with OpenBSD, judging by the Phoronix testing).
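As a rough sanity check on the lane question, here is a minimal Python sketch of the lane budget for the planned card loadout. The per-card lane counts (x8 for a dual-port 10G card, x4 for a quad-port 1G card) and the platform lane totals are assumptions on my part, not figures taken from either board's manual.

# Rough PCIe lane budget for the planned NIC loadout (all figures assumed).
cards = {
    "2-port 10GbE copper card (x8 each)": (2, 8),
    "2-port 10GbE SFP+ card (x8 each)": (2, 8),
    "4-port 1GbE card (x4 each)": (3, 4),  # 'n' cards; 3 used as a placeholder
}

lanes_needed = sum(count * lanes for count, lanes in cards.values())
platforms = {"LGA2066 (Xeon W)": 48, "SP3 (EPYC)": 128}  # commonly quoted totals

print(f"Lanes needed by NICs: {lanes_needed}")
for name, total in platforms.items():
    print(f"{name}: {total} CPU lanes -> {total - lanes_needed} left for storage and upgrades")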

Why not use a dual-socket server?
We tried using a dual-socket Intel server, but we had some problems: if one of the cards was attached to CPU1 and the rest to CPU0, they would sometimes drop packets or not even link at all.

Why not upgrade the X99 build?
The same issue as listed for the Intel platform above: PLX chips.

TL;DR:
More Cores (AMD) or More GHz (Intel) for a *Sense router build?

All recommendations are welcome. Just make sure they are reasonable.

* The cards would be used for incoming links from the school or facility hosting the party. Some can only provide high-speed fiber, others only Ethernet.
** The cards would be upgraded as the need for more connectivity grows. The reason we currently want multiple outgoing links is that most of our switches only support Gigabit; we also need multiple separate LANs.
*** Having only tested with high-clock-speed CPUs, I don’t really know whether pfSense/OPNsense prefers cores or speed. If anyone has done the testing and has some numbers to share, please let me know in the comments.

(I might edit this to add more information later, so I reserve this small section for edit notes. Also, IF this build becomes reality sooner rather than later, I don’t mind posting performance numbers and a build log here.)


I'm not entirely sure what the 4x 1Gb NICs are supposed to do?

Can you get a bunch of switches that take one 10Gbit uplink and split it into multiple 1Gbit ports? Or are you connecting the PCs directly to the router?

 

 

Edit: oh BTW this sounds really cool! I'd love to see a build log with some experiences around here if you don't mind :)

 



53 minutes ago, FloRolf said:

I'm not entirely sure what the 4x 1Gb NICs are supposed to do?

Can you get a bunch of switches that take one 10Gbit uplink and split it into multiple 1Gbit ports? Or are you connecting the PCs directly to the router?

 

 

Edit: oh BTW this sounds really cool! I'd love to see a build log with some experiences around here if you don't mind :)

 

Note that those are 4-port Gigabit cards.
They will be connected to the switches directly via a LAG/LACP interface.
The switch idea would work, but 10Gbps switches are expensive, so currently we only have one.
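For anyone wondering why a LAGG of 1G ports doesn't behave like one fat pipe: LACP hashes each flow onto a single member link, so any one client-to-server transfer is capped at 1Gbps. Here is a toy Python sketch of the idea; the hash and the interface names are invented for illustration, not a real switch algorithm.

# Toy model of LACP/LAGG flow distribution: each flow is hashed onto ONE
# member link, so a single transfer never exceeds one 1G port.
import hashlib

MEMBERS = ["igb0", "igb1", "igb2", "igb3"]  # hypothetical 1G LAGG members

def pick_member(src_ip, dst_ip, src_port, dst_port):
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return MEMBERS[int(hashlib.sha256(key).hexdigest(), 16) % len(MEMBERS)]

# Different flows may land on different links, but each flow stays on one.
print(pick_member("10.0.0.42", "10.0.0.1", 51515, 443))
print(pick_member("10.0.0.43", "10.0.0.1", 51515, 443))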


pfSense/OPNsense might not be the most sensible choice for 10Gbit/s line rate, as to this date I have yet to exceed 8Gbit/s WAN<>LAN throughput using far better equipment than you are suggesting.

 

The problem is not the CPU performance per se; the problem is the soft interrupts that even a 10G Intel X710 using the ixl driver cannot avoid.  Unfortunately it's a packet filter: it is inspecting traffic and tracking states, and those require CPU cycles.  Even with fastforward enabled and all the will in the world, pfSense will not do 10G line rate with small packet sizes. Large, yes; small, no (standard 1500 MTU / 1472 payload).
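To put the small-packet problem in numbers, here is a rough Python calculation of the packet rates a firewall has to sustain to fill 10Gbit/s at different frame sizes. The 20-byte per-frame wire overhead (preamble, start delimiter, inter-frame gap) is the standard Ethernet figure; treat the rest as a back-of-the-envelope sketch.

# Packets per second needed to fill 10 Gbit/s at different frame sizes.
# 20 bytes of wire overhead per frame: preamble (7) + start delimiter (1)
# + inter-frame gap (12).
LINE_RATE_BPS = 10_000_000_000
WIRE_OVERHEAD = 20

for frame_bytes in (1518, 512, 64):  # 1518 = full 1500-byte MTU frame, 64 = minimum
    pps = LINE_RATE_BPS / ((frame_bytes + WIRE_OVERHEAD) * 8)
    print(f"{frame_bytes:>4}-byte frames: ~{pps / 1e6:.2f} Mpps")
# Every one of those packets has to be inspected and state-tracked by the CPU.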

 

What kind of connectivity do you get from the upstream provider?  Is this a single 10G fiber link or are there options for multiple uplinks?



10 hours ago, Falconevo said:

pfSense/OPNsense might not be the most sensible choice for 10Gbit/s line rate, as to this date I have yet to exceed 8Gbit/s WAN<>LAN throughput using far better equipment than you are suggesting.

 

The problem is not the CPU performance per se; the problem is the soft interrupts that even a 10G Intel X710 using the ixl driver cannot avoid.  Unfortunately it's a packet filter: it is inspecting traffic and tracking states, and those require CPU cycles.  Even with fastforward enabled and all the will in the world, pfSense will not do 10G line rate with small packet sizes. Large, yes; small, no (standard 1500 MTU / 1472 payload).

 

What kind of connectivity do you get from the upstream provider?  Is this a single 10G fiber link or are there options for multiple uplinks?

That's what I was afraid of.
About the upstream connectivity: most of the time we get two links up with two separate IPs, or two links up with only one IP.


2 minutes ago, ULEES said:

That's what I was afraid of.
About the upstream connectivity: most of the time we get two links up with two separate IPs, or two links up with only one IP.

Edit:

Also, would you mind telling us what kind of hardware you use? Just so we can get an idea of the kind of power required to push something like that.

 


Am I the only one here who thinks that each computer at a LAN-based gaming party would only need 1-2Mbps up and down at most? What is with this 10Gbps stuff? So 200 players on a bunch of 24-port or larger switches would only consume about 500Mbps of throughput. A Xeon server is probably what you want to route all the traffic through. Networking is one of my weaker areas.


1 hour ago, Columbo said:

Am I the only one here who thinks that each computer at a LAN-based gaming party would only need 1-2Mbps up and down at most? What is with this 10Gbps stuff? So 200 players on a bunch of 24-port or larger switches would only consume about 500Mbps of throughput. A Xeon server is probably what you want to route all the traffic through. Networking is one of my weaker areas.

If the players were only gaming locally, then yes.
But with games such as Fortnite, LoL, ... that's just not possible.
Players also watch/listen to YouTube and Spotify, and some even stream to Twitch.
That's why we need all the bandwidth we can get.


pfSense is supposed to be getting 10Gig improvements in the next big release.  Unfortunately we have no date for that yet.



14 hours ago, ULEES said:

That's what I was afraid of.
About the upstream connectivity: most of the time we get two links up with two separate IPs, or two links up with only one IP.

If you have two uplinks which together provide up to 10Gbit/s of bandwidth with two external IPs (e.g. 5Gbit/s each), then 2x well-spec'd pfSense boxes can do the job.  This assumes that your ISP would be willing to allow for this.

 

Below is a layout suggestion; obviously select whatever IP addressing you want. I've simplified it with 2x VLANs and /24 subnets for the internal usage.
Tbh this may be way outside of your comfort zone, but it would do the job with minimal effort.
Ideally you would be using switches with 10G uplinks, which I have suggested in the diagram below.  Clients can be 1G, but I would recommend 10G uplinks, and avoid using LAGGs (LACP) with multiple 1G ports.

You could do something similar with a layer-2 bridge between two pfSense boxes; unfortunately you wouldn't be able to use DHCP for handing out addressing, as it would all default to one upstream gateway (even with two gateways specified in the config it will always use the first and/or lowest-octet gateway IP).  You could go down this simpler route with a single larger subnet, but you would need to assign IP addressing manually to users with their respective gateways, as DHCP won't do the job in this instance.

Regarding the pfSense spec, this is what was used for around 7.8Gbit/s during my testing, but it is massive overkill:
Chassis - Dell R630
CPUs - 2x E5-2643 v4
Memory - 32GB
Network interface - Intel X710-DA2 (this only has 2x 10G ports; they do a DA4 model with 4x 10G ports, as you will need 3x 10G ports per pfSense box for the diagram below)

You are probably wondering why I had this hardware: I work in this field, and one of the network engineers challenged me to a WAN<>LAN throughput contest against one of the Cisco 5585-X 2U dreamboats they rave about, which we have used for customers wanting 10G uplinks with inspection, Firepower, etc.
 

The below ASSUMES a lot just based on your post(s), so feedback is welcome.

 

[Diagram: suggested two-box pfSense layout with two VLANs and 10G uplinks]



17 hours ago, Falconevo said:

If you have two uplinks which together provide up to 10Gbit/s of bandwidth with two external IPs (e.g. 5Gbit/s each), then 2x well-spec'd pfSense boxes can do the job.  This assumes that your ISP would be willing to allow for this.

Below is a layout suggestion; obviously select whatever IP addressing you want. I've simplified it with 2x VLANs and /24 subnets for the internal usage.
Tbh this may be way outside of your comfort zone, but it would do the job with minimal effort.
Ideally you would be using switches with 10G uplinks, which I have suggested in the diagram below.  Clients can be 1G, but I would recommend 10G uplinks, and avoid using LAGGs (LACP) with multiple 1G ports.

You could do something similar with a layer-2 bridge between two pfSense boxes; unfortunately you wouldn't be able to use DHCP for handing out addressing, as it would all default to one upstream gateway (even with two gateways specified in the config it will always use the first and/or lowest-octet gateway IP).  You could go down this simpler route with a single larger subnet, but you would need to assign IP addressing manually to users with their respective gateways, as DHCP won't do the job in this instance.

Regarding the pfSense spec, this is what was used for around 7.8Gbit/s during my testing, but it is massive overkill:
Chassis - Dell R630
CPUs - 2x E5-2643 v4
Memory - 32GB
Network interface - Intel X710-DA2 (this only has 2x 10G ports; they do a DA4 model with 4x 10G ports, as you will need 3x 10G ports per pfSense box for the diagram below)

You are probably wondering why I had this hardware: I work in this field, and one of the network engineers challenged me to a WAN<>LAN throughput contest against one of the Cisco 5585-X 2U dreamboats they rave about, which we have used for customers wanting 10G uplinks with inspection, Firepower, etc.

The below ASSUMES a lot just based on your post(s), so feedback is welcome.

[Diagram: suggested two-box pfSense layout with two VLANs and 10G uplinks]


That would work, but it's a very convoluted system, and it's not doable due to the lack of DHCP; handing out IPs to 200+ users manually would be really slow.
I also think the entire network would be slower, since the boxes would also have to handle "LAN" traffic between machines/subnets.

Talking about hardware, from our testing:
Multi-CPU systems are problematic due to the added latency when talking over the QPI bus.
Multi-CPU breaks LAGG interfaces in some instances.
A single CPU performs way better when there are no PLX chips involved.

So we would just stick with a single system with a single CPU,
and hope for pfSense/BSD to improve its networking over time (since this is more of an investment than a real need ATM).


I would get rid of the 4-port gigabit cards; I don't see a need for them.

 

Assuming you have stacking switches, the final link should be a LAG'd 10Gb to the "router". The router should do nothing more than facilitate access to the internet. There should be no routing between clients and local servers - create a flat network. Extend it to a /16 if you have to.
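For context on the flat-network suggestion, a quick Python check of how many hosts different prefix lengths hold; 200+ clients plus servers, APs and switches gets tight in a single /24. The 10.10.0.0 addresses are just placeholders.

# How many hosts fit in a flat network of various sizes.
import ipaddress

for prefix in ("10.10.0.0/24", "10.10.0.0/22", "10.10.0.0/16"):
    net = ipaddress.ip_network(prefix)
    print(f"{prefix}: {net.num_addresses - 2} usable host addresses")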

 

If you -must- have separate networks, then hopefully your switches are L3, and I would route at the switch level to take the pressure off the router/firewall. Unless you have layer-7 filtering or other IPS/IDS needs, I would just route at L3.

 

The only reason I would LAG the 10Gb to the router/firewall is that I assume this same server would also host the game servers and whatnot.

 

As for a solution that can route 10Gb: VyOS, Untangle (should be able to), or other Linux distros using iptables.


Does your location even have a greater-than-10Gb/s internet connection?

 

I would do traffic shaping of some sort; even 200 users will not need 10Gb. Hell, 5,000 users shared a 1Gb connection at my previous job.

 

Internet-wise, the most a user would need is roughly 20Mbit/s for 1080p streaming; everything else would consume far less. 200 users at 20Mbit nets you 4Gbit. So internet-facing I would plan for that and rate-limit users to 20Mbit. That's more than a large portion of home users get.
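A worked version of that arithmetic, in case anyone wants to plug in their own numbers. The 20Mbit/s cap comes from the post above; the concurrency factor is purely an assumption.

# Aggregate internet demand if every user is capped at a per-user rate.
users = 200
per_user_mbit = 20
concurrency = 0.6  # assumed fraction of users pulling their full cap at once

worst_case_gbit = users * per_user_mbit / 1000
typical_gbit = worst_case_gbit * concurrency
print(f"Worst case: {worst_case_gbit:.1f} Gbit/s, typical: ~{typical_gbit:.1f} Gbit/s")
# -> 4.0 Gbit/s worst case, so a 10G uplink has headroom with shaping in place.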


1 hour ago, ULEES said:


That would work, but it's a very convoluted system, and it's not doable due to the lack of DHCP; handing out IPs to 200+ users manually would be really slow.
I also think the entire network would be slower, since the boxes would also have to handle "LAN" traffic between machines/subnets.

Talking about hardware, from our testing:
Multi-CPU systems are problematic due to the added latency when talking over the QPI bus.
Multi-CPU breaks LAGG interfaces in some instances.
A single CPU performs way better when there are no PLX chips involved.

So we would just stick with a single system with a single CPU,
and hope for pfSense/BSD to improve its networking over time (since this is more of an investment than a real need ATM).

The diagram I provided will allow the use of 2x DHCP pools; I said that a flat network with a layer-2 bridge would prevent DHCP from assigning gateways.  That would be an entirely different concept and diagram; I only provided one as I was short on time.   As for the pfSense boxes doing a lot of work for LAN<>LAN traffic between subnets, this will be next to nothing, as the devices at the bottom of the chain are 1G and will be limited by their respective switch ports further down the network.

With the diagram I provided above:
VLAN #1 would have a DHCP pool inside 172.16.1.0/24
VLAN #2 would have a DHCP pool inside 172.16.2.0/24
Machines in VLAN #1 can talk to machines in VLAN #2 via pfSense's routing and open ACLs.
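A sketch of that addressing plan using Python's ipaddress module; the gateway placement and the .50-.250 DHCP pool boundaries are my assumptions, only the two /24 subnets come from the post.

# The two-VLAN addressing plan above, expressed with Python's ipaddress module.
import ipaddress

vlans = {
    "VLAN #1": ipaddress.ip_network("172.16.1.0/24"),
    "VLAN #2": ipaddress.ip_network("172.16.2.0/24"),
}

for name, net in vlans.items():
    hosts = list(net.hosts())
    gateway = hosts[0]                            # e.g. 172.16.x.1 on the pfSense interface
    pool_start, pool_end = hosts[49], hosts[249]  # assumed DHCP range .50-.250
    print(f"{name}: gateway {gateway}, DHCP pool {pool_start}-{pool_end}, "
          f"{len(hosts)} usable addresses")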

There are no 'easy' ways of doing it if you want true 10Gbit connectivity to the firewall while using free software that does packet filtering.  You can sure as hell do it with plain routing, or with layer-3 routing on a 10G switch, for example, but that provides you no protection at your endpoints and no one-to-many NAT.

Recent multi-socket systems don't create latency or any other problematic issues. I don't know what experience you've had, but I use dual- and quad-socket systems for all sorts of workloads on a daily basis.
Multi-CPU doesn't 'break' a LAGG interface. PCIe devices are pinned to a specific CPU depending on the circuit layout of the motherboard and which PCIe slot you decided to use, and PLX chips don't help at all here.  A LAGG interface that spans multiple PCIe slots will not work properly if those devices sit on different CPU NUMA nodes; you should avoid using 1G cards in a LAGG for this reason, as it isn't suitable.  This isn't a multi-CPU 'problem', it's a misunderstanding of how resources are assigned in the system.  It's well known and well documented.
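If you want to check which NUMA node each NIC actually sits on before building a LAGG, something like the following works on a Linux test box (it reads sysfs; pfSense/FreeBSD exposes the same topology through different tools, so treat this purely as an illustration).

# Print which NUMA node each network interface is attached to (Linux sysfs).
from pathlib import Path

for dev in sorted(Path("/sys/class/net").iterdir()):
    numa_file = dev / "device" / "numa_node"
    if numa_file.exists():
        node = numa_file.read_text().strip()  # -1 means no NUMA info / single socket
        print(f"{dev.name}: NUMA node {node}")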

 

Here's what I would do using what you currently have and minimal investment:

Reuse the X99 setup and make sure above-4G decoding is supported by the mainboard (X99 should have this).

Replace the existing network interface(s) with an Intel X710 (DA4), making sure it's in an x16 slot, and disable all other network interfaces.

Perform WAN/LAN throughput testing on an internally created network using 10G interfaces on both sides (see the sketch at the end of this post).

Find out what the max throughput is for the system, then decide if you need more than one system and uplink to your ISP.

For example, if in testing you reach, say, 7Gbit/s, maybe you are happy with that and won't require a second system.

Also, don't try to put Steam caching on the same box; that would just be stupid. Make sure it's separate hardware.

I really don't think people understand how expensive firewalling/packet filtering is at 10G; routing is pretty cheap, but firewalling/packet filtering is not.
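For the throughput-testing step above, here is a minimal Python sketch of what that could look like, wrapping iperf3 from a client behind the firewall against a server on the far side. The server address is a placeholder and iperf3 is assumed to be installed on both ends.

# Run iperf3 through the firewall and report the aggregate rate.
import json
import subprocess

SERVER = "192.0.2.10"  # placeholder iperf3 server on the far side of the firewall
STREAMS = 8            # parallel streams help saturate a 10G path
DURATION = 30          # seconds

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-P", str(STREAMS), "-t", str(DURATION), "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"Aggregate throughput through the firewall: {gbps:.2f} Gbit/s")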


