
10Gbit PFSense Router

Aelita Sophie
Solved by System Error Message
Just now, Aelita Sophie said:

So out of the 2 options, the Dell R620 with 2x E5-2609 and 64GB DDR3 would be the best, granted we spread the RAM evenly over the slots for effectively 8 channels. Do I understand that correctly?

that is correct
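For a rough sense of why spreading the DIMMs over all the slots matters, here's a back-of-envelope sketch of theoretical peak memory bandwidth. The DDR3-1066 figure is an assumption (the rated speed I'd expect for the E5-2609), so verify against the spec sheet for your exact SKU:

```python
# Theoretical peak memory bandwidth: channels x transfer rate x bus width.
# These are ceiling figures, not what you'll measure in practice.
def peak_mem_bw_gb_s(sockets, channels_per_socket, mt_per_s, bus_bytes=8):
    return sockets * channels_per_socket * mt_per_s * bus_bytes / 1e9

# 2x E5-2609 (4 DDR3 channels per socket), assuming DDR3-1066 DIMMs:
full = peak_mem_bw_gb_s(2, 4, 1066e6)  # all 8 channels populated
half = peak_mem_bw_gb_s(2, 2, 1066e6)  # only 2 channels per socket filled
print(full, half)  # populating every channel doubles the ceiling
```

The point of the sketch: leaving half the channels empty halves the theoretical ceiling, regardless of total GB installed.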

Are you planning on simply providing routing on the pfSense device, or are you going to be adding firewall/NAT rules to it as well?


Putting all your eggs in one basket with a single pfSense edge device doing lots of work isn't sensible, not when your business depends on it.  If it was simply doing routing and handing off to other 'firewall/routers' within the network, it would be pretty easy to spec something up.  If you are planning lots of FW rules and NAT'ing, though, you are going to need some serious equipment.

Please quote or tag me if you need a reply


Just now, Falconevo said:

Are you planning on simply providing routing on the pfSense device, or are you going to be adding firewall/NAT rules to it as well?


Putting all your eggs in one basket with a single pfSense edge device doing lots of work isn't sensible, not when your business depends on it.  If it was simply doing routing and handing off to other 'firewall/routers' within the network, it would be pretty easy to spec something up.  If you are planning lots of FW rules and NAT'ing, though, you are going to need some serious equipment.

Nah, basically the firewall will mostly only be deployed on 1 webserver and on the VPN-enabled network for IPMI. For the rest it will only be used to set up an internal network (so our machines can talk with each other) and assign WAN IPs to certain machines and virtual machines, that's it.

Main RIG: i7 4770k ~ 4.8Ghz | Intel HD Onboard (enough for my LoL gaming) | Samsung 960 Pro 256GB NVMe | 32GB (4x 8GB) Kingston Savage 2133Mhz DDR3 | MSI Z97 Gaming 7 | ThermalTake FrioOCK | MS-Tech (puke) 700W | Windows 10 64Bit

Mining RIG: AMD A6-9500 | ASRock AB350 Pro | 4GB DDR4 | 500GB 2.5 Inch HDD | 2x MSI AERO GTX 1060 6GB (Core/Memory/TDP/Avg Temp +160/+800/120%/45c) | 1x Asus Strix GTX 970 (+195/+400/125%/55c) | 1x KFA2 GTX 960 (+220/+500/120%/70c) | Corsair GS800 800W | HP HSTNS-PD05 1000W | (Modded) Inter-Tech IPC 4U-4129-N Rackmount Case

Guest RIG: FX6300 | AMD HD7870 | Kingston HyperX 128GB SSD | 16GB (2x 8GB) G.Skill Ripjaws 1600Mhz DDR3 | Some ASRock 970 Mobo | Stock Heatsink | some left over PSU  | Windows 10 64Bit

VM Server: HP Proliant DL160 G6 | 2x Intel Xeon E5620 @ 2.4Ghz 4c/8t (8c/16t total) | 16GB (8x 2GB) HP 1066Mhz ECC DDR3 | 2x Western Digital Black 250GB HDD | VMWare ESXI

Storage Node: 2x Intel Xeon E5520 @ 2.27Ghz 4c/8t (8c/16t total) | Intel ServerBoard S5500HCV | 36GB (9x 4GB) 1333Mhz ECC DDR3 | 3x Seagate 2TB 7200RPM | 4x Western Digital Caviar Green 2TB


1 minute ago, Aelita Sophie said:

Nah, basically the firewall will mostly only be deployed on 1 webserver and on the VPN-enabled network for IPMI. For the rest it will only be used to set up an internal network (so our machines can talk with each other) and assign WAN IPs to certain machines and virtual machines, that's it.

Not sure I follow correctly; can you create a diagram of what you are trying to accomplish? That should give an indication of what topology you are heading for.  I may be able to provide some advice on things not to do; I work in this industry, and trying to do it on the cheap needs some serious thought.



1 minute ago, Falconevo said:

Not sure I follow correctly; can you create a diagram of what you are trying to accomplish? That should give an indication of what topology you are heading for.  I may be able to provide some advice on things not to do; I work in this industry, and trying to do it on the cheap needs some serious thought.

Sure, but mind that the diagram is still unfinished. I'm my boss's only employee, I started just a few days ago, and I need to do everything at once xD

https://gyazo.com/4458fef1ad6372831d321c5486511750



So you are looking for the pfSense box to do all of the firewall rules and NAT'ing, which will need serious horsepower if you want the rules to get more elaborate over time and still provide full line speed @ 10Gbit.

 

The best I have seen pfSense pull off is around 6.2Gbit/s WAN<>LAN NAT'd in my own testing with serious equipment in use.   pfSense is software, without the 'hardware acceleration' component(s) and software stacks you will find in enterprise brands such as Cisco/Juniper etc.   Because of this, routing, NAT'ing and filtering at 10Gb is currently a little out of reach, at least in my testing; here's what I used when I did my testing back on the pfSense 2.4 release.  Unfortunately I don't have this hardware to re-use, as it came from a purchase by a customer that went under, so I borrowed it for a week to answer some of my own questions in my spare time.

 

pfSense box spec:

Dell R630

2x E5-2667 v4 (max perf BIOS / turbo active)

64GB RAM (overkill, just wanted both sockets populated)

HBA330 (pass-through) with a refurb 400GB Intel DC3610 SSD (I think, from memory)

2x Chelsio T520-CR (1 assigned to the WAN interface, 1 to the LAN interface)

 

1x Arista 10Gb switch (can't recall the model off the top of my head, as it was an Arista-provided test unit used for cross-DC VXLAN verification)

2x Dell R610 back-end devices, each with a T520 card


What I saw in my own testing was single-threaded bottlenecks (even on E5-2667s) when pushing serious packets per second through the pfSense box.  I also had a hard lock-up issue which appeared to be related to the Chelsio driver and BSD; I didn't get time to debug it, but when exceeding 6 million packets per second it would shit itself.   My work desktop, which had a bunch of screen dumps and other bits and bobs on it, got replaced when the internal IT dept decided everyone needed Windows 10 over a weekend. I was going to do a write-up on the pfSense forum, but buying/renovating my house put a massive stop to my spare time :(
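To put that 6 million packets per second figure in context: packet rate at 10GbE line rate depends entirely on frame size, and a quick calculation shows where 6 Mpps sits (the 20 bytes of per-frame wire overhead for preamble plus inter-frame gap is standard Ethernet):

```python
# Packets per second at line rate for a given frame size.
# Each frame carries 20 extra bytes on the wire (preamble + IFG).
def line_rate_pps(link_bps, frame_bytes, overhead_bytes=20):
    return link_bps / ((frame_bytes + overhead_bytes) * 8)

print(round(line_rate_pps(10e9, 64) / 1e6, 2))    # 14.88 Mpps (64B frames, worst case)
print(round(line_rate_pps(10e9, 1500) / 1e6, 2))  # 0.82 Mpps (1500B frames)
```

So a box that falls over at ~6 Mpps is comfortable at 10Gbit with large frames, but well short of the 14.88 Mpps small-packet worst case.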



2 minutes ago, Falconevo said:

1x Arista

BOO!!!!!!!! HISS!!!!!!! BOO!!!!!!!!



2 minutes ago, Falconevo said:

So you are looking for the pfSense box to do all of the firewall rules and NAT'ing, which will need serious horsepower if you want the rules to get more elaborate over time and still provide full line speed @ 10Gbit.

 

The best I have seen pfSense pull off is around 6.2Gbit/s WAN<>LAN NAT'd in my own testing with serious equipment in use.   pfSense is software, without the 'hardware acceleration' component(s) and software stacks you will find in enterprise brands such as Cisco/Juniper etc.   Because of this, routing, NAT'ing and filtering at 10Gb is currently a little out of reach, at least in my testing; here's what I used when I did my testing back on the pfSense 2.4 release.  Unfortunately I don't have this hardware to re-use, as it came from a purchase by a customer that went under, so I borrowed it for a week to answer some of my own questions in my spare time.

 

pfSense box spec:

Dell R630

2x E5-2667 v4 (max perf BIOS / turbo active)

64GB RAM (overkill, just wanted both sockets populated)

HBA330 (pass-through) with a refurb 400GB Intel DC3610 SSD (I think, from memory)

2x Chelsio T520-CR (1 assigned to the WAN interface, 1 to the LAN interface)

 

1x Arista 10Gb switch (can't recall the model off the top of my head, as it was an Arista-provided test unit used for cross-DC VXLAN verification)

2x Dell R610 back-end devices, each with a T520 card


What I saw in my own testing was single-threaded bottlenecks (even on E5-2667s) when pushing serious packets per second through the pfSense box.  I also had a hard lock-up issue which appeared to be related to the Chelsio driver and BSD; I didn't get time to debug it, but when exceeding 6 million packets per second it would shit itself.   My work desktop, which had a bunch of screen dumps and other bits and bobs on it, got replaced when the internal IT dept decided everyone needed Windows 10 over a weekend. I was going to do a write-up on the pfSense forum, but buying/renovating my house put a massive stop to my spare time :(

I'm currently looking at R630s. When going to 10Gbit we will probably use Intel X550-T2 cards. Thanks for your insight though! Very helpful



1 minute ago, Aelita Sophie said:

I'm currently looking at R630s. When going to 10Gbit we will probably use Intel X550-T2 cards. Thanks for your insight though! Very helpful

If the R630 is anything like its R620 brother, they are great units, and crazy quiet too :) 

Rock solid quality.



Just now, Aelita Sophie said:

I'm currently looking at R630s. When going to 10Gbit we will probably use Intel X550-T2 cards. Thanks for your insight though! Very helpful

I know the Chelsio cards have really good out-of-the-box driver support for 10Gb; I'm not sure about Intel 10Gb cards with pfSense, as I have never tried them personally.

We use a lot of the X520 and X540 cards but have recently switched to SolarFlare for cost and performance reasons.  Mainly performance, plus the added ability to program the hardware for certain use cases (DDoS filtering, for example) so the traffic never even touches the operating system's kernel or burns valuable CPU cycles.



1 minute ago, Falconevo said:

I know the Chelsio cards have really good out-of-the-box driver support for 10Gb; I'm not sure about Intel 10Gb cards with pfSense, as I have never tried them personally.

We use a lot of the X520 and X540 cards but have recently switched to SolarFlare for cost and performance reasons.  Mainly performance, plus the added ability to program the hardware for certain use cases (DDoS filtering, for example) so the traffic never even touches the operating system's kernel or burns valuable CPU cycles.

The datacenter will take care of most of the DDoS filtering, so we don't need to take that into account just yet. It might matter in the future, but that is a problem for when the time comes.



Sounds like you are just running colocation equipment then :) Does the provider have an API for black-hole addressing in the event a DDoS hits that's too large for the capacity you have?  It is a good idea to check, as a lot of them do, although not many offer it for colo upstream connections, depending on their network layout.

 

For example, if you are reselling to other companies and hosting their services, then if one external IP gets nuked you don't want it affecting all the public IP address space you have.  So get the IP black-holed upstream to prevent 1 customer affecting the many others you may be hosting.  This assumes you are reselling; you may just be hosting stuff for yourselves :) Either way, it's useful information I thought I should share.
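As a sketch of what triggering an upstream black hole could look like, here's a hypothetical request body. The endpoint idea, field names, and the duration parameter are all invented for illustration; every provider's RTBH (remote-triggered black hole) API is different, so check your colo provider's actual docs:

```python
import json

# Hypothetical payload asking the upstream provider to null-route a
# single attacked address (a /32) rather than your whole allocation.
# Field names are made up for illustration only.
def blackhole_request(victim_ip, duration_minutes=60):
    return {
        "action": "blackhole",
        "prefix": f"{victim_ip}/32",  # one host, not the whole block
        "duration_minutes": duration_minutes,
    }

print(json.dumps(blackhole_request("203.0.113.25")))
```

The design point is the `/32`: you black-hole only the victim address so the rest of your customers keep their connectivity.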



15 minutes ago, Lurick said:

BOO!!!!!!!! HISS!!!!!!! BOO!!!!!!!!

lol, our network team are very 'CISCO ONLY BRO', but they did love the feature set of the Arista stuff and it piqued their interest.  Plus the bastards had no spare Cisco stuff for me to use while testing.

We will have to see though; they are certainly cost-effective compared with brand-new Cisco stuff.  I dread to think how much the ASR 9K's cost that are just lying around being prep'd at the moment.



2 minutes ago, Falconevo said:

lol, our network team are very 'CISCO ONLY BRO', but they did love the feature set of the Arista stuff and it piqued their interest.  Plus the bastards had no spare Cisco stuff for me to use while testing.

We will have to see though; they are certainly cost-effective compared with brand-new Cisco stuff.  I dread to think how much the ASR 9K's cost that are just lying around being prep'd at the moment.

Had a few POCs lately where we were competing with Arista, so I have to hiss at them on principle :P 

 

ASR 9000 or 9900 series?

I know those 100Gbit line cards for the 9900 are upwards of 1 million each (list price, of course).

 

You should see if you can get any N9K-C93180YC-EX or -FX hardware, it's pretty beastly :) 



Arista make good stuff, well priced in the 10/40Gb area, which is where most 'enterprise' people are still sitting these days.

The trouble is getting people off what they are comfortable and familiar with, especially those that spent their own cash on vendor-specific exams in their youth.

 

I think they are the 9000 series, as I seem to recall they had 40G line cards.



4 hours ago, Aelita Sophie said:

An SLA will only be given on a paid basis. If a customer requests an SLA, we will definitely put them on separate hardware. But I can't answer more than that just yet, as my boss hasn't been too clear about pricing and services.

Wat.

 

if you're buying dedicated gear that can sustain 10Gbps, why not just put all of your users on it?


25 minutes ago, Mornincupofhate said:

Wat.

 

if you're buying dedicated gear that can sustain 10Gbps, why not just put all of your users on it?

Why do that when you can get people to pay more for a higher 'grade' service?  Welcome to shady business practices.



Looks like you're not getting the full 10Gb from FreeBSD, as others have pointed out:

https://forum.pfsense.org/index.php?topic=114270

 

I assume Sophos UTM (the free license stops at 50 nodes) probably has the same limitations, but it can't hurt to give them a gander; it's basically Linux + iptables. If you want to saturate 10Gb you might want to hit up eBay? I just don't know what to tell you to buy.

 

When I first read ISP I was imagining you were running internet door to door lol. I didn't think of a datacenter environment.


1 hour ago, Falconevo said:

Why do that when you can get people to pay more for a higher 'grade' service?  Welcome to shady business practices.

I wouldn't say shady, but where costs are incurred, money needs to be made from them. That is how this world generally works. The same goes for a phone: yes, they put a lot of money into R&D, but when you work that out per phone, the total cost of a phone is a LOT less than the actual price. Profit has to be made.

 

5 minutes ago, Mikensan said:

Looks like you're not getting the full 10Gb from FreeBSD, as others have pointed out:

https://forum.pfsense.org/index.php?topic=114270

 

I assume Sophos UTM (the free license stops at 50 nodes) probably has the same limitations, but it can't hurt to give them a gander; it's basically Linux + iptables. If you want to saturate 10Gb you might want to hit up eBay? I just don't know what to tell you to buy.

 

When I first read ISP I was imagining you were running internet door to door lol. I didn't think of a datacenter environment.

Well, I'm aware that FreeBSD caps out at about 9Gbit, but that is fine. We don't expect to saturate the line beyond 50%, but we like to be able to burst when needed.



some advice I can offer:

- Avoid Cisco. They are way too expensive if you want to be competitive and offer your customers a good price.

- You may want to hire some programmers and Linux guys. PacketShader is a good example; it lets you achieve 100Gb/s routing capacity easily.

- There are many ways to get 10Gb/s-capable routers cheap. For pfSense you need to make sure your NICs have enough lanes for full 10Gb/s; that means at least 5 PCIe v2 lanes per NIC. You can also get routers like the MikroTik CCR series, which handle 10Gb/s easily and come with SFP+ ports. They are much, much cheaper than the Cisco equivalent (ISPs are buying up CCRs for their low cost per performance, but you will need skilled people to set them up). High-end MikroTik CCRs have all their ports connected directly to the CPU rather than via PCIe.

- Implement another server for filtering. This is useful for mitigating botnets and attacks that originate from the internet and from your own customers. Do the same for detecting malware activity. The router should pass traffic to the server for analysis, which helps reduce CPU load on the router itself.

- Don't throttle P2P; a lot of applications rely on it, even Skype. If you need to manage traffic, use priorities to ensure important things get their bandwidth. Have the hardware needed to handle P2P connections, as they can number in the thousands per customer. Many ISPs hate P2P because of the extra resources required to keep track of everything (another reason not to buy Cisco: too pricey for the performance).

 

The only time to buy Cisco is if you are an internet exchange requiring many Tb/s of routing.
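The PCIe lane point above can be checked with quick arithmetic. This sketch uses the usual rounded per-lane usable rates (~500 MB/s per direction for PCIe 2.0, ~985 MB/s for 3.0, after encoding overhead) and reads the "5 lanes" claim as applying to a dual-port 10G card:

```python
import math

# Usable GB/s per lane, per direction (rounded rule-of-thumb figures
# after 8b/10b vs 128b/130b encoding overhead).
PCIE_LANE_GB_S = {2: 0.5, 3: 0.985}

def lanes_needed(link_gbit, ports, pcie_gen):
    """PCIe lanes required to run every port at line rate, one direction."""
    needed_gb_s = link_gbit * ports / 8  # bits -> bytes
    return math.ceil(needed_gb_s / PCIE_LANE_GB_S[pcie_gen])

print(lanes_needed(10, 2, 2))  # dual-port 10G on PCIe 2.0 -> 5 lanes
print(lanes_needed(10, 2, 3))  # same card on PCIe 3.0 -> 3 lanes
```

So a dual-port 10G card in an x4 PCIe 2.0 slot cannot do both ports at line rate, while even an x4 PCIe 3.0 slot has headroom.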

 


9 minutes ago, Aelita Sophie said:

I wouldn't say shady, but where costs are incurred, money needs to be made from them. That is how this world generally works. The same goes for a phone: yes, they put a lot of money into R&D, but when you work that out per phone, the total cost of a phone is a LOT less than the actual price. Profit has to be made.

 

Well, I'm aware that FreeBSD caps out at about 9Gbit, but that is fine. We don't expect to saturate the line beyond 50%, but we like to be able to burst when needed.

In that thread they're saying it starts capping out at 4Gbit, so you're not going to get 9Gbit. Maybe with 1:1 NAT it will achieve higher speeds, since it's just passing through.

https://forum.pfsense.org/index.php?topic=113862.msg634832#msg634832

 


2 minutes ago, System Error Message said:

some advice I can offer:

- Avoid Cisco. They are way too expensive if you want to be competitive and offer your customers a good price.

- You may want to hire some programmers and Linux guys. PacketShader is a good example; it lets you achieve 100Gb/s routing capacity easily.

- There are many ways to get 10Gb/s-capable routers cheap. For pfSense you need to make sure your NICs have enough lanes for full 10Gb/s; that means at least 5 PCIe v2 lanes per NIC. You can also get routers like the MikroTik CCR series, which handle 10Gb/s easily and come with SFP+ ports. They are much, much cheaper than the Cisco equivalent (ISPs are buying up CCRs for their low cost per performance, but you will need skilled people to set them up). High-end MikroTik CCRs have all their ports connected directly to the CPU rather than via PCIe.

- Implement another server for filtering. This is useful for mitigating botnets and attacks that originate from the internet and from your own customers. Do the same for detecting malware activity. The router should pass traffic to the server for analysis, which helps reduce CPU load on the router itself.

- Don't throttle P2P; a lot of applications rely on it, even Skype. If you need to manage traffic, use priorities to ensure important things get their bandwidth. Have the hardware needed to handle P2P connections, as they can number in the thousands per customer. Many ISPs hate P2P because of the extra resources required to keep track of everything (another reason not to buy Cisco: too pricey for the performance).

 

The only time to buy Cisco is if you are an internet exchange requiring many Tb/s of routing.

 

Great advice! Thanks. I already try to avoid Cisco due to the costs. Right now the company is at such an early stage that I can't make my boss pay for such equipment (the servers themselves are expensive as well).

As for hiring, it's not really my place to do so. I got hired because I've known my boss for quite a few years. He is an experienced programmer, though, and we are both Linux guys. I'll look into PacketShader though! It's always good to look at future upgrade paths.

This is the reason I will always lean towards a decent Xeon with a workstation/server motherboard to go with it. I'll look into the MikroTik CCRs as well; if they fit the budget, I'll start learning the hardware as fast as I can.

Server-side filtering is mostly done by the datacenter, though I do understand it is recommended to bring in your own equipment for it as well. As far as I am aware, that is going to be sorted.

P2P won't be throttled; as long as customers stay within the specs they pay for, we won't monitor their use or throttle them at all. We are trying to be as hassle-free as we can.

 

Thanks for your advice though!



5 minutes ago, Mikensan said:

In that thread they're saying it starts capping out at 4Gbit, so you're not going to get 9Gbit. Maybe with 1:1 NAT it will achieve higher speeds, since it's just passing through.

https://forum.pfsense.org/index.php?topic=113862.msg634832#msg634832

 

I've seen pfSense used by my fellow datacenter "friends"; one rack holds the entire backbone of a Spanish commercial radio station, on 10Gbit, god knows why when they have an 80Mbit stream rate. Anyhow, they are able to push the 10Gbit link to about 9Gbit (give or take a few Mbit). So if I'm unable to achieve this, I'll ask them for a bit of advice.

In the short term, though, we will only be using 1Gbit (well, we've got 2x 1Gbit lines we will use as fail-over) for the next half year or so. Granted, this might take longer or shorter depending on demand.



The MikroTik CCR you need is either the CCR1036 with 2 SFP+ ports or the CCR1072 with 8. They both have ample performance, cost around $1000 and $3000 respectively, and offer Cisco-like functionality as well.

 

Servers aren't so expensive, and it's not about whether it's a Xeon. The PacketShader tests were done with dual Xeons and 2 GTX 480s, the limiting factors being PCIe lanes and CPU-to-CPU data travel. RAM bandwidth is another limiting factor, so virtualisation will kill throughput. Because of this, LGA 1366 beats any newer mainstream i-series up until dual-channel DDR4 on RAM bandwidth and PCIe lanes, and you can overclock LGA 1366 Xeons to nearly double the speed with some decent cooling. So get platforms with more RAM channels and PCIe lanes (the Zen architecture is good here; go get yourself an Epyc or even a Threadripper).

 

You need a separate server for security via traffic analysis, and you can't have it at the datacenter. Essentially your customer traffic comes into the gateway, which is going to sit at your district office or some space you rent in the area to run the server. You can't rent a virtual server for this, so you're limited to renting real servers or providing your own.

 

Just because it's a Xeon doesn't make it good. Some Xeons are actually Intel Atoms, and many Xeons only use dual-channel memory too.


15 minutes ago, System Error Message said:

the mikrotik CCR you need is either the ccr1036 with 2 SFP+ or CCR1072 with 8 SFP+. They both have ample performance but cost either $1000 or $3000 and offer cisco functionalities as well.

 

servers not so expensive. Its not about whether its a xeon. Packetshader tests were done with dual xeon and 2 GTX 480s, the limitting factor here being the PCIe lanes and CPU to CPU data travel. Ram bandwidth is another limiting factor too so virtualisation will kill throughput. Because of thise the LGA 1366 is better than any newer mainstream iseries up till DDR4 dual channel for the ram bandwidth and PCIe lanes. You can overclock the LGA 1366 to nearly double the speed using xeons and some decent cooling. So get platforms that have more ram channels and PCIe lanes (zen architecture is good here, go get yourself an epyc or even threadripper).

 

You need a separate server for security via traffic analysis. You can't have them at the datacenter. Essentially, your customer traffic comes into the gateway, which is going to be at your district office or some space you rent somewhere in the area to run the server. You can't rent a virtual server, so you're limited to renting real servers or providing your own.

 

Just because it's a Xeon doesn't make it good. Some Xeons are actually Intel Atoms, and many Xeons only use dual-channel memory too.

Well, we are currently deciding between a Dell R620 with 2x Xeon E5-2609 or a self-built rack server with a Xeon E3-1220 v5. The only point in favour of the self-build is that it comes with a warranty.

Both will provide 16 lanes on the slot for a dual-port 10Gbit NIC, and both support PCIe 3.0, though the Xeon E3 has DDR4 (but again, only dual-channel).
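For what it's worth, an x16 PCIe 3.0 slot is far more than a dual-port 10GbE card needs. A rough sketch, using the nominal 8 GT/s per lane with 128b/130b encoding:

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s usable
# per lane per direction, before transaction-layer overhead.
PCIE3_GB_S_PER_LANE = 8 * (128 / 130) / 8  # ~0.985 GB/s

def slot_headroom(lanes, ports, port_gbit):
    """Ratio of slot bandwidth to NIC line rate, per direction."""
    slot_gb_s = lanes * PCIE3_GB_S_PER_LANE
    nic_gb_s = ports * port_gbit / 8  # 2x10 Gb/s -> 2.5 GB/s each way
    return slot_gb_s / nic_gb_s

# A dual-port 10GbE NIC in a PCIe 3.0 x16 slot has roughly 6x headroom.
print(round(slot_headroom(16, 2, 10), 1))
```

So on either box, the slot won't be the bottleneck; packet-processing capacity of the CPU and memory will be.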

 

I know the deal about Xeons, and the E3-1220 v5 isn't a great Xeon, but according to past advice in this topic, on this forum and elsewhere on the internet, it should be enough.

Main RIG: i7 4770k ~ 4.8Ghz | Intel HD Onboard (enough for my LoL gaming) | Samsung 960 Pro 256GB NVMe | 32GB (4x 8GB) Kingston Savage 2133Mhz DDR3 | MSI Z97 Gaming 7 | ThermalTake FrioOCK | MS-Tech (puke) 700W | Windows 10 64Bit

Mining RIG: AMD A6-9500 | ASRock AB350 Pro | 4GB DDR4 | 500GB 2.5 Inch HDD | 2x MSI AERO GTX 1060 6GB (Core/Memory/TDP/Avg Temp +160/+800/120%/45c) | 1x Asus Strix GTX 970 (+195/+400/125%/55c) | 1x KFA2 GTX 960 (+220/+500/120%/70c) | Corsair GS800 800W | HP HSTNS-PD05 1000W | (Modded) Inter-Tech IPC 4U-4129-N Rackmount Case

Guest RIG: FX6300 | AMD HD7870 | Kingston HyperX 128GB SSD | 16GB (2x 8GB) G.Skill Ripjaws 1600Mhz DDR3 | Some ASRock 970 Mobo | Stock Heatsink | some left over PSU  | Windows 10 64Bit

VM Server: HP Proliant DL160 G6 | 2x Intel Xeon E5620 @ 2.4Ghz 4c/8t (8c/16t total) | 16GB (8x 2GB) HP 1066Mhz ECC DDR3 | 2x Western Digital Black 250GB HDD | VMWare ESXI

Storage Node: 2x Intel Xeon E5520 @ 2.27Ghz 4c/8t (8c/16t total) | Intel ServerBoard S5500HCV | 36GB (9x 4GB) 1333Mhz ECC DDR3 | 3x Seagate 2TB 7200RPM | 4x Western Digital Caviar Green 2TB


52 minutes ago, System Error Message said:

Some advice I can offer:

- Avoid Cisco. They are way too expensive if you want to be competitive and offer a good price to your customers.

- You may want to hire some programmers and Linux guys. PacketShader is a good example, which lets you achieve 100Gb/s routing capacity easily.

- There are many ways to get 10Gb/s-capable routers cheap. For pfSense, you need to make sure your NICs have enough lanes for full 10Gb/s; that means at least 5 lanes of PCIe v2 per NIC if you plan to go with full 10Gb/s. You can also get routers like the MikroTik CCR series, which handle 10Gb/s easily and come with SFP+ ports. They are much, much cheaper than the Cisco equivalent (ISPs are buying up CCRs due to their low cost per performance, but you will need skilled people to set them up). High-end MikroTik CCRs have all their ports connected to the CPU directly rather than via PCIe.

- Implement another server for filtering. This is useful to mitigate botnets and attacks that originate from the internet and from your own customers. Make sure to do the same for detecting malware activity. The router should pass traffic to the server for analysis, which helps reduce CPU load on the router itself.

- Don't throttle p2p; a lot of applications rely on it, even Skype. If you need to manage traffic, use priorities to ensure important things get their bandwidth. Have the hardware needed to handle p2p connections, as they can number in the thousands per customer. Many ISPs hate p2p because of the extra resources required to keep track of everything (which is another reason not to buy Cisco: too pricey for the performance).
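The lane figure above can be sanity-checked. A rough sketch, assuming 500 MB/s usable per PCIe 2.0 lane per direction (5 GT/s with 8b/10b encoding) and ~80% transaction-layer efficiency for small-packet workloads; that efficiency number is my own assumption:

```python
import math

# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding -> 500 MB/s usable
# per direction per lane, before transaction-layer packet overhead.
PCIE2_MB_S_PER_LANE = 500
TLP_EFFICIENCY = 0.80  # assumed; small packets waste more on TLP headers

def lanes_needed(link_gbit, efficiency=TLP_EFFICIENCY):
    payload_mb_s = link_gbit * 1000 / 8            # 10 Gb/s -> 1250 MB/s
    usable_per_lane = PCIE2_MB_S_PER_LANE * efficiency
    return math.ceil(payload_mb_s / usable_per_lane)

print(lanes_needed(10))  # single 10GbE port: ~4 lanes with overhead
print(lanes_needed(20))  # dual-port card: ~7 lanes
```

Since slots come in x4/x8/x16, this works out to an x4 slot per port at minimum, or an x8 slot for a dual-port card, which is consistent with the "at least 5 lanes" rule of thumb above.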

 

The only time to buy Cisco is if you are an internet exchange requiring many Tb/s of routing.

 

Have you checked out the Nexus 9K lineup recently?

Prices have come down quite a bit and performance has only gotten better. It might not be exactly on par with everyone else, but it's far better than where it was.

I can't speak for the routing side of the house though as that isn't exactly my specialty. I tend to focus on Data Center and Security :) 

Not gonna say Cisco is the greatest vendor for price/performance ratio, but they are doing much better, even if it's only on switching.



This topic is now closed to further replies.
