
Building a 10G/40G fiber pfSense router

This summer, I started moving as much data as possible from a 2-bay NAS to an unRaid server built from a repurposed MSI X99S Gaming 7 running an i7-5930K, with two cache drives, two parity drives and five data drives.  Once that was done, I started planning a backup of that system; right now the backup machine runs on an ECS H55H-I motherboard with seven 3.5" drives.  My plan is to keep using WD SmartWare to make incremental backups to my little NAS and use it for nothing else.  Nightly, that data will be synchronized to my primary unRaid server, and I'll work out some kind of schedule to sync data to the backup machine, with integrity checks so that if file corruption takes place, I have some sort of window to copy the file back over from a known good copy.  A third tier of storage will be an array of SSD drives kept in a safe deposit box.  Cloud storage at the capacity I'd want (at least 5TB) is kind of expensive, and I don't really trust the privacy of it all.
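The integrity-check part of that plan can be as simple as hashing both trees and diffing the results. A minimal sketch, assuming plain file trees and SHA-256 (function names here are illustrative, not any particular tool):

```python
import hashlib
from pathlib import Path

def tree_hashes(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    hashes = {}
    for p in sorted(root.rglob("*")):
        if p.is_file():
            h = hashlib.sha256()
            with p.open("rb") as f:
                # Hash in 1 MiB chunks so large media files don't blow up RAM.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            hashes[str(p.relative_to(root))] = h.hexdigest()
    return hashes

def find_corruption(primary: Path, backup: Path) -> list:
    """Relative paths present in both trees whose contents differ."""
    a, b = tree_hashes(primary), tree_hashes(backup)
    return sorted(k for k in a.keys() & b.keys() if a[k] != b[k])
```

Run nightly after the sync, any path it reports is a candidate to restore from whichever copy still matches an older known-good hash list.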

 

After I started moving terabytes of data from my old NAS into unRaid, I decided to pick up a few Mellanox 10G fiber cards to make transfers between systems just a little faster.  On all of my desktop PCs, and anywhere it's financially practical, I've done away with rotational drives and run entirely from SSDs: one PCIe NVMe drive and one SATA drive in my main desktop.  I'm not bound so much by drive speed as by the limitations of the interface and, to some degree, of Windows 10.  My desktop PC has three network interfaces in use: one connecting to a segregated array of Raspberry Pis (behind pfSense), one connected peer-to-peer to my unRaid server, and one connected to my router to communicate with everything else.  I want to streamline things and am planning to build a new pfSense router using a motherboard/CPU combination robust enough to operate three 2-port Mellanox cards, each running on 8 PCIe lanes.  The most likely candidate at this point is an i7-5820K on an ASRock X99 Professional (LGA 2011-3).  For that setup, I'm looking to use all 28 PCIe lanes (8 × 3 for fiber + 4 for NVMe), using one of the onboard interfaces for WAN and the other to connect to a 24-port D-Link switch.
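The lane arithmetic above is worth sanity-checking against the CPU's budget (the i7-5820K exposes 28 CPU PCIe 3.0 lanes). A throwaway tally, with device names taken from the build described:

```python
# CPU PCIe lane budget for the planned build (i7-5820K: 28 CPU lanes).
CPU_LANES = 28

devices = {
    "Mellanox 2-port card #1": 8,
    "Mellanox 2-port card #2": 8,
    "Mellanox 2-port card #3": 8,
    "NVMe drive": 4,
}

used = sum(devices.values())
print(f"{used} of {CPU_LANES} CPU lanes used, {CPU_LANES - used} spare")
```

Note this only counts CPU lanes; the onboard NICs and SATA hang off the chipset, so they don't eat into the 28.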

 

I have two questions.  Have any of you run pfSense over fiber before, and if so, can it be made to run stably?  Second, if the machine is used for nothing but pfSense, what power supply rating would you expect to be adequate for this kind of build?  I don't know the power requirements of the Mellanox cards.  I'll set it up for 10G initially but will likely move to 40G within a year.  If possible, I'd like to drive the thing with a Corsair SF600 or an SF450.  This will be a wall-mounted server, and the only wires should be power and network.


Wow. I have no idea where to start with this, but would like to be helpful.

 

I'd start by taking a look at the pfSense appliance specs and their rated speeds.  Compare those specs/ratings with your planned build, and be sure you will have the capacity to move the bandwidth you are hoping for.

 

I see that their XG-1541 is rated for 10 gigabit speeds, with an 8-core Xeon at just over 2 GHz.

I'd pull the PassMark score for the CPU they are using and compare it to the one you have planned.  Be sure you can score much higher if you are really looking at a 40 gigabit upgrade soon.

 

Another thing to consider is the speed of your storage: can it really keep up with a 10 gigabit link, let alone 40?
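One way to put numbers on that question is a back-of-the-envelope transfer time. A rough calculator, assuming decimal units and a fudge factor for protocol overhead (the 1.5 Gbit/s figure for a single spinning disk is an illustrative ballpark, not a measurement):

```python
def transfer_hours(data_tb: float, rate_gbit_s: float, efficiency: float = 0.9) -> float:
    """Hours to move data_tb terabytes at rate_gbit_s gigabits per second,
    derated by an efficiency factor for protocol overhead."""
    bits = data_tb * 8e12                       # 1 TB = 8e12 bits
    return bits / (rate_gbit_s * 1e9 * efficiency) / 3600

# Compare a single spinning disk (~1.5 Gbit/s sequential) with the links
# being discussed, for a 5 TB backup run:
for rate in (1.5, 10, 40):
    print(f"{rate:>4} Gbit/s -> {transfer_hours(5, rate):.1f} h for 5 TB")
```

If the array can't source data faster than a couple of gigabits, the 40G link mostly buys headroom for parallel transfers rather than a faster single copy.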

How much data do you have to back up? Will you always need to back up all of that data? 

 

Just make sure that all of the time and money is well spent!


pfSense is currently software-bound for filtering speed, and I was unable to packet filter much higher than 7 Gbit/s**, depending on hardware.  So if you want more than that, you need to wait until they have done optimisations and added additional offloading, which I believe is due in the 3.0+ releases (supposedly).

 

**This was in my testing, using 2x 10G Intel X540s in a dual-Xeon Dell R720 with two E5-2695 v2s and all the hardware acceleration features enabled.**

 

Personally, I would advise you to look at VyOS rather than pfSense for this task, as it will give native switching speeds.

Please quote or tag me if you need a reply


Routing at 10 Gbit/s and up is costly, let alone filtering.  Most people who need to move data at 10 Gbit between VLANs or subnets get a layer 3 switch that can handle it.

 

FreeBSD packet filtering starts giving out around 4-7 Gbit/s, as falcon mentioned.  More than the pfSense software itself, it is a FreeBSD kernel issue.  I -believe- iptables on Linux can handle 10 Gbit, but it isn't plug and play.

 

Personally, I would create an isolated "storage" VLAN and put it all on the same subnet, especially for home use (which is what I actually do).  Bind only storage services to your 10Gb cards, and bind all other services to your gigabit.  Don't let anything bind to 0.0.0.0, and you've got some semi-form of security by restricting those interfaces to storage protocols.
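In code, the "bind to the storage NIC, not 0.0.0.0" advice is just a specific bind address instead of the wildcard. A generic TCP sketch (loopback stands in for the storage interface's IP, which you'd substitute):

```python
import socket

def storage_listener(addr: str, port: int = 0) -> socket.socket:
    """Listen only on the given address (e.g. the storage NIC's IP).

    Because the socket is bound to one address rather than the 0.0.0.0
    wildcard, the service is unreachable via the machine's other interfaces.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((addr, port))   # the insecure alternative: s.bind(("0.0.0.0", port))
    s.listen()
    return s
```

Most real daemons (Samba, NFS, SSH) expose the same idea as a listen/interfaces directive in their config rather than requiring code.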

 

I bought a Nexus 3048, which has four SFP+ ports, for $130 and couldn't be happier.  You don't need to use 10Gb cards to get out to the internet, so all gigabit connections can still be routed and filtered by pfSense.

It is way cheaper to buy a 12-port or larger SFP+ switch than to build out an i7 router...

Also this should be moved to networking ;-).


It sounds like a purpose-built switch is the better way to go.  Right now, I have two D-Link DGS-1210-28 switches; each has four 1G SFP sockets, currently unused.  I suppose they could provide low-latency links between the switches, but I have them serving different subnets.  I like them because they're fairly compact, wall-mountable and virtually silent.  Once I have my second unRaid server ready to go, perhaps I'll replace one of the two with a model that provides 10G.


13 hours ago, charlesshoults said:

It sounds like a purpose-built switch is the better way to go.  …

The Ubiquiti US-16-XG is a small switch, though it still has fans.

https://www.ubnt.com/edgemax/edgeswitch-16-xg/

 

