DaAznKnight

Sanity Check Needed: Dropping $3000 for pfsense box and home server.


Posted · Original Poster

Hey guys (this is Austin),

 

I decided to start a new thread this time, because this is about the entire server system and not just the case.

I'm building 2 servers for my new home, one for pfsense and another for FreeNAS (which will run a variety of things via Docker/Plugins like Emby, openHAB, etc.).

 

I think I've narrowed down the specs quite a bit, but I can't help feeling that a sanity check is in order before I pull the trigger on thousands of dollars of gear.

In particular, I feel like the pfSense box is way overkill for home use, but I might also be wrong, and I have no idea how to build a cheaper machine. Intel's website does not make it easy for a noob like me to figure out how to step it down sensibly.

 

Can you guys take a look and tell me if I'm being an idiot?

 

The setup is for my new home. I will be running 1x CAT-6A and 1x CAT-6 to every room for the physical layer. There will be a rack in one of the rooms, with a patch panel for easy management. Connection-wise, both my main PC and the Media Box will be connected via the 10G uplink ports of the switch for fast file transfers. Everyone else should be on 1G.

 

The use case is primarily media serving via Emby, but I will also be using openHAB, plus an IP CCTV solution to be investigated later.

 

Here are the specs:

[attached image: proposed parts list]

 

(NO I am not Austin, but it just seems like a natural follow up to "Hey guys". Austin, please don't sue me for impersonation)


Well, you could throw an extra network card (Intel, 2 ports) into an old thin client and use that for pfSense; it doesn't need much hardware.

You can re-use an old PC for experimenting with FreeNAS and it should work fine.

But if I'm speaking to somebody as nuts as me, then I'd say go for it!

Here's my FreeNAS setup (I upgraded a couple things since the video):

 




TBH people building their own pfSense box for home use never made sense to me.  For 99% of home users the router only needs to handle traffic between your network and the internet, and function as a firewall and DHCP server.  You don't need tons of computing power for that. 

 

You can buy a nice Netgate router with pfSense for a lot less than what you plan on spending (the SG-1100 for example is $159). 

Put a switch behind the router (sounds like you plan to do that anyway) and connect whatever amount of machines and access points you need to the switch.  All your internal traffic (for example between your media box and your PC) will go through the switch only.

 

On my home network (ISP modem -> pfSense router -> Netgear 8-port gigabit switch -> Linksys 300Mbps wireless access point + 2 PCs with gigabit ethernet ports + a 33TB FreeNAS box with gigabit ethernet) I'm running the older and less powerful SG-1000 as my router and have zero complaints about it.  No hiccups whatsoever and AFAIK all the functionality that pfSense offers. 

I'm consistently getting 110MB/s file transfers inside my network, which is basically gigabit.  On the internet side of things my ISP's modem is the limiting factor.  (I only pay for 30 down / 6 up and that's exactly what their old modems can handle). 
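As a back-of-the-envelope check on those numbers (the ~6% protocol-overhead figure below is an illustrative assumption, not a measured value): gigabit Ethernet tops out at 125 MB/s raw, so ~110 MB/s of payload really is about as good as it gets.

```python
# Why ~110 MB/s is "basically gigabit": 1 Gbit/s = 125 MB/s raw,
# and TCP/IP + Ethernet framing eats a chunk of that. The 6%
# overhead here is an assumption; real overhead depends on frame
# size and protocol.

def usable_mb_per_s(link_gbit: float, overhead: float = 0.06) -> float:
    """Approximate usable file-transfer rate for a given link speed."""
    raw = link_gbit * 1000 / 8        # Gbit/s -> MB/s (decimal units)
    return raw * (1 - overhead)

print(usable_mb_per_s(1.0))    # gigabit: ~117 MB/s ceiling
print(usable_mb_per_s(10.0))   # 10G uplink: ~1175 MB/s ceiling
```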


If you don't get many more replies, it's because this topic has been beaten to death - there are 5-6 posts in the first 2 pages asking this same "what should I get" question.

 

Yours caught my eye because your budget was in the subject.

 

I would buy 10x 4TB disks instead of 4x 10TB disks (for speed)

I would ditch the pfsense box and add more RAM to the storage box (max out the mobo)

I would buy an HBA card (search eBay for "HBA" and look for ones listed as flashed to IT mode)

I would buy an additional NIC (for WAN)

I would install ESXi, then virtualize pfSense and FreeNAS
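To illustrate the 10x-4TB suggestion above: FreeNAS sits on ZFS, and a pool's random I/O scales with the number of vdevs, so ten smaller disks can be split into two striped vdevs. A hedged sketch (the pool name `tank`, the `da*` device names, and the RAIDZ level are placeholders, one reasonable choice among several):

```shell
# 4x 10TB as a single RAIDZ1 vdev: one vdev's worth of IOPS.
zpool create tank raidz1 da0 da1 da2 da3

# 10x 4TB as two 5-disk RAIDZ1 vdevs: ZFS stripes writes across
# both vdevs, so the pool gets roughly twice the random I/O.
zpool create tank \
    raidz1 da0 da1 da2 da3 da4 \
    raidz1 da5 da6 da7 da8 da9
```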


My one suggestion is about your physical cabling.

 

You said you're going to run 1x Cat 6a and 1x Cat 6 to every room.

 

I have to ask: Why? Why use different cable types? If you're going to run Cat 6a to every room anyway, you might as well just run Cat 6a x2 to every room, and drop the Cat 6 entirely.

 

I also suggest you ditch the pfSense server entirely, unless there's some specific reason, beyond what you've written, to use it.

 

As @Captain Chaos said, you can pick up a stupid cheap Router (pfSense or otherwise) that will definitely serve your needs, for far cheaper than the server.

 

Or as @Mikensan said, virtualize pfSense and just run it off of your Media Server (I also strongly suggest running ESXi as the base OS, and virtualize FreeNAS - I do this myself).


Posted · Original Poster
5 hours ago, RobbinM said:

Well, you could throw an extra network card (Intel, 2 ports) into an old thin client and use that for pfSense; it doesn't need much hardware.

You can re-use an old PC for experimenting with FreeNAS and it should work fine.

But if I'm speaking to somebody as nuts as me, then I'd say go for it!

Here's my FreeNAS setup (I upgraded a couple things since the video):

 

Yeah, no, I'm not gonna run used hardware. I've had an unRAID box for a while now, and this home server/networking stuff is like crack to me - endless tinkering, playing with nifty new Dockers, new VMs, etc.

 

It's also a minor passion for me. I used to do physical networking for banks. Having my own mini "datacenter" setup is a minor dream, and these two servers are a starting point for that.

 

(Sweet setup btw! I don't know what to do with all that space yet, but I wanna end up there some day.)

 

5 hours ago, Captain Chaos said:

TBH people building their own pfSense box for home use never made sense to me.  For 99% of home users the router only needs to handle traffic between your network and the internet, and function as a firewall and DHCP server.  You don't need tons of computing power for that. 

 

You can buy a nice Netgate router with pfSense for a lot less than what you plan on spending (the SG-1100 for example is $159). 

Put a switch behind the router (sounds like you plan to do that anyway) and connect whatever amount of machines and access points you need to the switch.  All your internal traffic (for example between your media box and your PC) will go through the switch only.

 

On my home network (ISP modem -> pfSense router -> Netgear 8-port gigabit switch -> Linksys 300Mbps wireless access point + 2 PCs with gigabit ethernet ports + a 33TB FreeNAS box with gigabit ethernet) I'm running the older and less powerful SG-1000 as my router and have zero complaints about it.  No hiccups whatsoever and AFAIK all the functionality that pfSense offers. 

I'm consistently getting 110MB/s file transfers inside my network, which is basically gigabit.  On the internet side of things my ISP's modem is the limiting factor.  (I only pay for 30 down / 6 up and that's exactly what their old modems can handle). 

From what I understand, the router processes data transfers between subnets (routing!). The pfSense box will eventually be processing data transfers between my home security network (estimated maybe 5-6 high res cameras), my smarthome network, maybe my own personal cloud accessible via VPN, etc. etc. I'd like to eventually kill off my Google Drive/MEGA subscription.
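To make the routing point concrete, here's a small sketch using Python's `ipaddress` module (the subnets and addresses are made-up examples, not the actual plan): traffic within one subnet stays on the switch, while cross-subnet traffic (say, cameras to LAN) must pass through the pfSense box.

```python
# Two hosts on the same subnet talk through the switch; hosts on
# different subnets must be forwarded by the router.
import ipaddress

# Hypothetical subnets for illustration only.
LAN     = ipaddress.ip_network("192.168.10.0/24")
CAMERAS = ipaddress.ip_network("192.168.20.0/24")

def crosses_router(src: str, dst: str, subnets) -> bool:
    """True if src and dst sit in different subnets (router must forward)."""
    def subnet_of(ip):
        addr = ipaddress.ip_address(ip)
        return next(net for net in subnets if addr in net)
    return subnet_of(src) != subnet_of(dst)

print(crosses_router("192.168.10.5", "192.168.10.9", [LAN, CAMERAS]))  # False: switch only
print(crosses_router("192.168.10.5", "192.168.20.7", [LAN, CAMERAS]))  # True: pfSense routes it
```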

 

Do you run multiple subnets?

 

4 hours ago, Mikensan said:

If you don't get many more replies, it's because this is a topic that's beat to death - there's 5-6 posts in the first 2 pages on this very topic "what should I get"

 

Yours caught my eye because your budget was in the subject.

 

I would buy 10x 4TB disks instead of 4x 10TB disks (for speed)

I would ditch the pfsense box and add more RAM to the storage box (max out the mobo)

I would buy an HBA card (search eBay for "HBA" and look for ones listed as flashed to IT mode)

I would buy an additional NIC (for WAN)

I would install ESXi, then virtualize pfSense and FreeNAS

I see two people suggested ESXi. Sounds like more tinkering, which is great!

 

But why? Is there any particular advantage to virtualizing pfsense/FreeNAS?

9 minutes ago, dalekphalm said:

My one suggestion is about your physical cabling.

 

You said you're going to run 1x Cat 6a and 1x Cat 6 to every room.

 

I have to ask: Why? Why use different cable types? If you're going to run Cat 6a to every room anyway, you might as well just run Cat 6a x2 to every room, and drop the Cat 6 entirely.

 

I also suggest you ditch the pfSense server entirely, unless there's some specific reason, beyond what you've written, to use it.

 

As @Captain Chaos said, you can pick up a stupid cheap Router (pfSense or otherwise) that will definitely serve your needs, for far cheaper than the server.

 

Or as @Mikensan said, virtualize pfSense and just run it off of your Media Server (I also strongly suggest running ESXi as the base OS, and virtualize FreeNAS - I do this myself).

My reason for a dedicated pfSense box is at the start of this post. There will be multiple subnets. Does that matter enough to justify a dedicated box?

 

The reason for using different cables is cost reduction, plus ease of passing a Fluke test (for the CAT-6). Back when I did physical networking it was a PITA to test/certify CAT-6A.

 

Also, again, you're the second person to mention ESXi. Is there any particular advantage to doing so?


Why are you using consumer-grade parts in your "datacentre"? Your FreeNAS build uses a motherboard with a server-grade chipset, but consumer-grade parts for everything else. That's inappropriate for the application and a waste of the chipset's features.



I ran pfSense on an Atom D525 with 4GB RAM and a 16GB SSD, through a 2-port PCI Intel NIC, with a sub-200W PSU. I didn't hammer it as hard as you're planning to, but I got nowhere near fully utilizing the CPU. I was able to tune it to use a lot of RAM for local caching to speed up browsing somewhat, but the difference was tough to notice. When the PSU died I switched to a better router/wifi AP and honestly don't notice much of a difference. Even running ClamAV barely strained the Atom, though it was single-threaded, so it did add a little extra latency on some large files. That CPU is total overkill. There's a company selling pfSense hardware built around an AMD Jaguar (I think) CPU that cools passively against the case and runs off a laptop power brick.

https://www.pcengines.ch/newshop.php?c=4

Something they offer would likely fit your needs while saving space and money.

8 hours ago, DaAznKnight said:

From what I understand, the router processes data transfers between subnets (routing!). The pfSense box will eventually be processing data transfers between my home security network (estimated maybe 5-6 high res cameras), my smarthome network, maybe my own personal cloud accessible via VPN, etc. etc. I'd like to eventually kill off my Google Drive/MEGA subscription.

 

Do you run multiple subnets?

Ah, yes.  I didn't see any mention of subnets in the first post. 

 

I don't use subnets myself.  All I have in terms of network separation is that mobile devices are on a guest SSID. But that's set up in my AP, not on the router.

Posted · Original Poster
7 hours ago, alex75871 said:

Why are you using consumer-grade parts in your "datacentre"? Your FreeNAS build uses a motherboard with a server-grade chipset, but consumer-grade parts for everything else. That's inappropriate for the application and a waste of the chipset's features.

 

6 hours ago, Bitter said:

I ran pfSense on an Atom D525 with 4GB RAM and a 16GB SSD, through a 2-port PCI Intel NIC, with a sub-200W PSU. I didn't hammer it as hard as you're planning to, but I got nowhere near fully utilizing the CPU. I was able to tune it to use a lot of RAM for local caching to speed up browsing somewhat, but the difference was tough to notice. When the PSU died I switched to a better router/wifi AP and honestly don't notice much of a difference. Even running ClamAV barely strained the Atom, though it was single-threaded, so it did add a little extra latency on some large files. That CPU is total overkill. There's a company selling pfSense hardware built around an AMD Jaguar (I think) CPU that cools passively against the case and runs off a laptop power brick.

https://www.pcengines.ch/newshop.php?c=4

Something they offer would likely fit your needs while saving space and money.

So having taken a quick look at ESXi, my fantasies started shifting.

I might just go with 1 server with virtualized FreeNAS/pfsense.

 

Here are the specs, mostly server grade I think:

[attached image: revised parts list]

Never played with Xeons before. No idea if the Scalable series is suitable - I just picked the maximum I could afford.

Also dropped some storage capacity, since I don't REALLY need 40TB.


Routing multiple subnets doesn't change anything, be it 2 or 20 - it's the packets per second that clog up a router. At 1Gb/s across 4 links you won't have any issues even with a 10-year-old i7. VLANs are generally created for "security", so the bulk of them don't even carry traffic - mostly just internet-bound packets - and since the destination and "clients" stay the same, how many segments you have really doesn't matter. The packet rate won't change.

 

Shit might hit the wall if you want to route at 10Gb - but why would you? If you need 10Gb, just add an interface and join the same VLAN. It's extremely rare that you'll route between 10Gb segments and actually need 10Gb speeds within a home lab.

 

Just to whet your appetite: I have 3 ESXi nodes, all with a 10Gb backplane. I trunk all my VLANs out of those 10Gb links into my switch. My pfSense non-WAN interfaces ride that 10Gb interface, and I have maybe 7 VLANs and 20-30 VMs on 24/7. On my firewall I don't run anything crazy - HAProxy / pfBlocker / OpenVPN to name a few packages - and unless I'm looking at a graph, I've never seen that CPU break 2%. I've only assigned it 2 vCPUs and 4GB of RAM.
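For anyone curious how trunked VLANs reach the VMs, here's a hedged sketch using ESXi's standard-vSwitch commands (the names `vSwitch0` and `cameras`, and VLAN ID 20, are placeholders): you tag a port group with a VLAN ID, and any vNIC attached to that port group rides the trunked uplink on that VLAN.

```shell
# Create a port group on an existing standard vSwitch.
esxcli network vswitch standard portgroup add \
    --portgroup-name=cameras --vswitch-name=vSwitch0

# Tag it with VLAN 20. Attach a pfSense vNIC to "cameras" in the
# vSphere client and pfSense sees that VLAN's traffic untagged.
esxcli network vswitch standard portgroup set \
    --portgroup-name=cameras --vlan-id=20
```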

 

I am also a VMUG subscriber (all-access to the vSphere suite) and run a Horizon lab - so I have 2x Windows 10 desktops and 1 RDS host, all of which work beautifully. I'm writing this post right now from my RDS host in a web browser (god bless HTTPS tunneling).

 

I've also started playing with Docker: I have Ubuntu Server 18 + ~5 containers (2 vCPUs, 4GB of RAM) - all work great. It runs my UniFi controller, Pi-hole, Portainer, Traefik (actually moving everything off HAProxy to Traefik), MySQL, and phpMyAdmin (whoops, that's 6).

 

Now the kicker: this is all on 9-10 year old server hardware (Dell R610 + IBM x3650 M2) running X5670s. Electricity use is rather good, all things considered, and performance is wonderful.

 

Now, if I could go back and change something, I would've gone with a Dell R210 II + JBOD enclosure instead of buying/building a NAS from all-new parts.

On 3/15/2019 at 10:35 PM, DaAznKnight said:

 

So having taken a quick look at ESXi, my fantasies started shifting.

I might just go with 1 server with virtualized FreeNAS/pfsense.

 

Here are the specs, mostly server grade I think:

[attached image: revised parts list]

Never played with Xeons before. No idea if the Scalable series is suitable - I just picked the maximum I could afford.

Also dropped some storage capacity, since I don't REALLY need 40TB.

The Xeon Silver 4116 supports a six-channel memory configuration. Are those DIMMs registered or regular ECC DIMMs? Also, with server motherboards it pays to check the official supported-memory list.



If this is just for the home, and you are only switching media around the house, have you considered just getting a Netgate SG-1100 pfSense appliance for a quarter of the cost of building that massive 2U thing? It can easily switch gigabit around a home LAN...


On 3/15/2019 at 8:35 PM, DaAznKnight said:

So having taken a quick look at ESXi, my fantasies started shifting.

I might just go with 1 server with virtualized FreeNAS/pfsense.

There have been many posts about ESXi, but the summary is that it's a hypervisor operating system that runs your virtual machines. It's the core of many VMware products and has a large enterprise install base, so information is readily available.

The main benefit is letting you consolidate onto one server, reducing the space, heat, power, etc. that you need.

And as you said, you'll have something more to tinker with. 

The main thing to look out for is that ESXi supports the NICs you are using. Depending on your usage, you may not even need server-grade hardware - my ESXi host runs on an old i7-5820K. RAM is what you'll want to max out before anything else.
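On the NIC-support point, a quick sanity check from the ESXi shell (a sketch; these list commands exist, but the HCL cross-check is manual):

```shell
# NICs ESXi actually loaded a driver for; a card absent from this
# list is unsupported until you add a driver VIB or swap it out.
esxcli network nic list

# Full PCI inventory, useful for cross-checking the exact device
# against VMware's hardware compatibility list.
esxcli hardware pci list
```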


That's a lot of money to drop on a lot of consumer-grade stuff. For about $700 I got 2 Opteron 6300-series 16-core CPUs, a Supermicro motherboard and 128GB of ECC RAM. I know you don't want used stuff, but new server hardware that's worth getting is $$$. Get something used for a lot cheaper and chances are you won't even outgrow it.

 

Like others have said, a pfsense box is overkill.  If you have a box laying around sure, but otherwise it's just needless money spent for the sake of having a pfsense box. 

 

Also, for a NAS, running it in a VM will degrade performance. I'm all for VMs, but high-I/O workloads don't do so hot in them. Yes, it can be done, but that doesn't mean you should. Get a normal Linux server, put ZFS or mdadm RAID on it for a NAS, then use that to host VMs/containers.

 

I'm all for home server stuff (been doing it for a while)  but you're looking to spend a chunk of change for not that much benefit.


18 minutes ago, bloodthirster said:

That's a lot of money to drop on a lot of consumer-grade stuff. For about $700 I got 2 Opteron 6300-series 16-core CPUs, a Supermicro motherboard and 128GB of ECC RAM. I know you don't want used stuff, but new server hardware that's worth getting is $$$. Get something used for a lot cheaper and chances are you won't even outgrow it.

Just because it's enterprise doesn't make it better or worse - depends on the context.

 

In your case, your Opteron 6300s are Bulldozer CPUs. They're not much better (and worse in many ways) than a 2010-era Xeon with 8 cores + HT.

 

Granted, you can get some good enterprise server deals on eBay, but you've got to be careful which generation of server you buy.

18 minutes ago, bloodthirster said:

Like others have said, a pfsense box is overkill.  If you have a box laying around sure, but otherwise it's just needless money spent for the sake of having a pfsense box. 

At this point, the OP has decided to look into virtualizing pfSense off of one Server.

18 minutes ago, bloodthirster said:

Also, for a NAS, running it in a VM will degrade performance. I'm all for VMs, but high-I/O workloads don't do so hot in them. Yes, it can be done, but that doesn't mean you should. Get a normal Linux server, put ZFS or mdadm RAID on it for a NAS, then use that to host VMs/containers.

Running a NAS as a VM will not degrade performance unless you do the setup wrong.

If you get an HBA and pass it directly to the VM, the VM has direct hardware-level access to the connected HDDs.
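A sketch of what that looks like on ESXi (the `grep` pattern is a placeholder; the passthrough toggle itself lives in the vSphere client, followed by a host reboot):

```shell
# Find the HBA's PCI address and device ID before enabling passthrough.
esxcli hardware pci list

# After toggling passthrough for that device in the vSphere client
# and rebooting, add it to the FreeNAS VM as a PCI device; the VM
# then talks to the disks through the HBA with no virtualization layer.
```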

18 minutes ago, bloodthirster said:

I'm all for home server stuff (been doing it for a while)  but you're looking to spend a chunk of change for not that much benefit.

Agreed - though if he has the disposable income, and wants to learn and/or just do it for fun, awesome!



Unless you're going to run a bunch of additional packages, pfSense needs very little CPU or RAM. I ran pfSense on an R410 with a single 4-core CPU and 4GB of RAM and it was complete overkill. I stuck it in a VM with 2 vCPUs and 4GB of RAM last June and it has been running fine - and saved about 65W of power to boot! It also greatly simplified how I route things for my VMs, since the physical layer is handled by the hypervisor (you can still pass through network cards, or hook physical ports to a vSwitch in the hypervisor).

 

Another upside of running it in a VM is that you can start small (1 core and 2GB of RAM) and increase resources later if you need to; you can't really do that with a bare-metal install. One downside is that when the host is down for any reason, so is your network - so make the box you virtualize it on as reliable as possible.


55 minutes ago, dalekphalm said:

Just because it's enterprise doesn't make it better or worse - depends on the context.

 

In your case, your Opteron 6300s are Bulldozer CPUs. They're not much better (and worse in many ways) than a 2010-era Xeon with 8 cores + HT.

 

Granted, you can get some good enterprise server deals on eBay, but you've got to be careful which generation of server you buy.

Oh please, those chips are still good. Are they great? No. But for workloads that care about cores rather than clocks, they still do well. They chew through most things I give them - much better than the X5670s they replaced.

 

55 minutes ago, dalekphalm said:

At this point, the OP has decided to look into virtualizing pfSense off of one Server.

Unless you have a NIC that can be virtualized (in hardware, not software), it's not the best idea. It would be more of an academic exercise.

 

55 minutes ago, dalekphalm said:

Running a NAS as a VM will not degrade performance unless you do the setup wrong.

If you get an HBA and pass it directly to the VM, the VM has direct hardware-level access to the connected HDDs.

Yes, but you also need the same thing for the NIC (some sort of PCIe passthrough, or something like Intel's ... forget the name) along with the disk controller. Also, if you're not using all the ports of the controller for the NAS, you could have used them for something else, but now they're tied to a VM. It's more trouble than it's worth to run a NAS/storage server from a VM, especially since you could use that storage pool to host other VMs etc. rather than just serving as a NAS. It's a lot easier to back up VMs to storage that isn't on a VM. Yes, you CAN do it, but that doesn't mean you SHOULD.

 

55 minutes ago, dalekphalm said:

Agreed - though if he has the disposable income, and wants to learn and/or just do it for fun, awesome!

-shrugs-

 


3 minutes ago, bloodthirster said:

Oh please, those chips are still good. Are they great? No. But for workloads that care about cores rather than clocks, they still do well. They chew through most things I give them - much better than the X5670s they replaced.

 

Unless you have a NIC that can be virtualized (in hardware, not software), it's not the best idea. It would be more of an academic exercise.

 

Yes, but you also need the same thing for the NIC (some sort of PCIe passthrough, or something like Intel's ... forget the name) along with the disk controller. Also, if you're not using all the ports of the controller for the NAS, you could have used them for something else, but now they're tied to a VM. It's more trouble than it's worth to run a NAS/storage server from a VM, especially since you could use that storage pool to host other VMs etc. rather than just serving as a NAS. It's a lot easier to back up VMs to storage that isn't on a VM. Yes, you CAN do it, but that doesn't mean you SHOULD.

 

-shrugs-

 

ESXi makes PCI passthrough unbelievably easy. If the OP is going to run ESXi, they'll need something like FreeNAS, since VMware didn't really put any "real" storage management in ESXi. Getting NICs is pretty easy too - you can get Intel Pro/1000 VT cards on eBay for a song now (around $10 USD), and they support VT-c.



Generally it's best practice not to virtualize your router, especially with a single hypervisor host. You'll have no network connectivity while you're doing maintenance on the ESXi host, and if it goes down, your entire network is down while you get ESXi and its VMs back up and running.

 

If you do virtualize it, you'll also want the pfSense VM to start up first, so that routes/VLANs/etc. are initialized before your other VMs come up.
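That start-order can be set from the ESXi shell with `vim-cmd`. A heavily hedged sketch - the VM ID (`1` here) is a placeholder, and the exact argument order of `update_autostartentry` varies between ESXi versions, so check `vim-cmd hostsvc/autostartmanager` on your build first:

```shell
# Find the pfSense VM's numeric ID.
vim-cmd vmsvc/getallvms

# Enable autostart on the host, then give pfSense start order 1
# with a delay so it's routing before the other VMs power on.
vim-cmd hostsvc/autostartmanager/enable_autostart true
vim-cmd hostsvc/autostartmanager/update_autostartentry 1 "PowerOn" 120 1 "guestShutdown" 60 "yes"
```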


12 hours ago, bloodthirster said:

Oh please, those chips are still good.   Are they great? No.   For most situations that deal with cores and not clocks, they still do good.   They chew through most things I give them.   Much better than the x5670s they replaced.

They're... okay. The lowest-end CPU the OP was considering - the i5-8400T - has the same PassMark score as your 6300-series CPUs (actually about 100 points higher, though that's insignificant). Granted, you've got a dual-socket setup, yes?

 

But in the end he changed his mind to a different CPU (and all his other choices are much faster than a 6300-series Opteron).

 

So, they definitely work fine. And yeah, if that's what you've got available, or you find them at a really good deal, then sure, go for that. But if the OP is specifically building a new system from scratch, there's absolutely zero reason to choose a Bulldozer/Piledriver-era Opteron. Even if he wants to go enterprise, there are far better, newer choices on eBay for dirt cheap.

Quote

Unless you have a NIC that can be virtualized (in hardware, not software), it's not the best idea. It would be more of an academic exercise.

I agree with you - personally I think pfSense altogether is overkill - though those cheap pre-made pfSense hardware appliances do look very interesting to me.

Quote

Yes, but you also need the same thing for the NIC (some sort of PCIe passthrough, or something like Intel's ... forget the name) along with the disk controller. Also, if you're not using all the ports of the controller for the NAS, you could have used them for something else, but now they're tied to a VM. It's more trouble than it's worth to run a NAS/storage server from a VM, especially since you could use that storage pool to host other VMs etc. rather than just serving as a NAS. It's a lot easier to back up VMs to storage that isn't on a VM. Yes, you CAN do it, but that doesn't mean you SHOULD.

PCIe passthrough is stupid easy - I'm doing it right now (not for a firewall/router, but for my FreeNAS VM). And who cares if you lose a few ports? The motherboard still has SATA ports that can be used for other things.

 

Plus, you can use iSCSI to map storage from the NAS back to the hypervisor to host other VMs (just make sure the NAS VM itself runs on direct storage provided by ESXi). It works very well.

 

Unlike a virtualized router, a virtualized NAS works amazingly well, and should definitely be considered if the OP decides to use VMs in general.
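The iSCSI loop-back described above can be sketched with ESXi's software initiator (the adapter name `vmhba64` and the NAS portal address are placeholders - yours will differ):

```shell
# Enable the ESXi software iSCSI initiator.
esxcli iscsi software set --enabled=true

# Point it at the FreeNAS VM's iSCSI portal, then rescan so the
# exported LUN shows up as a datastore-capable device.
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba64 --address=192.168.10.50:3260
esxcli storage core adapter rescan --adapter=vmhba64
```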

 

11 hours ago, Razor Blade said:

ESXi makes PCI passthrough unbelievably easy. If the OP is going to run ESXi, they'll need something like FreeNAS since VMware didn't really put any sort of "real" storage management in ESXi. Getting NIC cards is pretty easy also. You can get Intel Pro/1000 VT cards from eBay for a song now (around $10 USD) and they support VT-C

Agreed. If you're going to use ESXi itself for storage, it essentially relies on you running a hardware RAID card. It works, but it's nothing special.

10 hours ago, Jarsky said:

Generally it's best practice not to virtualize your router, especially with a single hypervisor host. You'll have no network connectivity while you're doing maintenance on the ESXi host, and if it goes down, your entire network is down while you get ESXi and its VMs back up and running.

 

If you do virtualize it, you'll also want the pfSense VM to start up first, so that routes/VLANs/etc. are initialized before your other VMs come up.

Agreed here too.



@dalekphalm "In your case, your Opteron 6300 CPU's are Bulldozer CPU's. They're not much better (worse, in many ways) than a 2010-era Xeon CPU w/ 8 Cores + HT."

Couldn't agree more. Bulldozer was such a huge letdown given the initial hype. A lot of applications are still single-thread oriented, and in a homelab where the bulk of VMs sit idle, that matters even more. HT handles the trickle of interrupts and low-intensity operations from idle VMs perfectly, while the faster core clock chomps through intensive operations.

 

@bloodthirster 

"Much better than the x5670s they replaced."

I'd argue negligible differences, more heat, and higher power draw at best.

"Unless you have a NIC which can be virtualized (in HW, not SW), not the best idea.  It would be more of a academic exercise."

What impact exactly have you personally observed from a router using vnics? I haven't seen mine ever perform at less than gigabit speeds (within the same hypervisor I've seen ~4Gbit/s). In a home lab or small office you're not exactly pumping large numbers of packets per second (the OP's use case in this thread). Not to mention Cisco's ASAv works great with vnics, as does VMware's NSX. Until you need 10Gbit/s speeds or high pps support you don't need to dedicate hardware NICs, and by then you're usually looking at storage operations, not routing. Storage/10Gb routing should be done at the switch anyway.
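If anyone wants to measure what their own vnics push rather than argue about it, iperf3 between two VMs on the same host is a quick test (the IP below is an example):

```shell
# On the first VM: run an iperf3 server
iperf3 -s

# On a second VM on the same host (10.0.0.10 is an example address):
# a 30-second test with 4 parallel streams
iperf3 -c 10.0.0.10 -t 30 -P 4
```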

It's more trouble than it's worth to run a NAS/storage server from a VM.

Not really... you give all the storage to the NAS and create an NFS/iSCSI target for the hypervisor. This is the premise of hyperconverged solutions. Some solutions bake the storage into the hypervisor OS, others run it as a VM. Nutanix, I believe, has it baked directly into their KVM-based hypervisor, while their ESXi solution uses a controller VM.

 

@Jarsky Now that is a valid point. I ultimately bought a Unifi USG for exactly this reason: so my home network stays up even if my lab is undergoing maintenance. Worth every penny. For the budget conscious, maybe an EdgeRouter X. Either way I agree, something physical outside the lab is a really good idea.

1 hour ago, dalekphalm said:

They're... okay. The lowest-end CPU the OP was considering, the i5-8400T, has the same Passmark score (actually about 100 pts higher, though that's insignificant) as your 6300-series CPUs. Granted, you've got a dual-socket setup, yes?

  

But in the end he changed his mind to a different CPU (all his other choices are much faster than a 6300-series Opteron).

  

 

Couple of points here:

#1 I got 2 6380s for 100 dollars

#2 most people don't need even that much processing power (or even a single socket)

#3 the perf/dollar ratio is a LOT better

#4 Intel security flaws (although a processor that doesn't have SMT avoids some of them)

 

Don't get me wrong, if you NEED perf, it's behind a firewall, and money isn't an object, Intel is your way to go.   But you can do a LOT better for the money.

 


1 hour ago, dalekphalm said:

PCIe passthrough is stupid easy. I'm doing it right now (not for a Firewall/Router, but for my FreeNAS VM). And who cares if you lose a few ports? The system motherboard still has SATA ports that can be used for other things.

 

Because ports (especially on a nice controller) are precious. 

1 hour ago, dalekphalm said:

Plus, you can use iSCSI to map storage from the NAS back to the VM to host other VM's (just make sure the NAS VM itself is running on direct storage provided by ESXi). It works very well. 

  

Unlike a virtualized router, a virtualized NAS works amazingly well and should definitely be considered if the OP decides to use VM's in general. 

  

Using iSCSI just so you can use a VM for a NAS seems like overkill. You can't use iSCSI and a network FS at once either... You could split the hard drives up, with some for iSCSI and some for a network FS, but that just gets even messier. It's a solution to a problem that doesn't exist.

 

Generally with dedicated VM hosts (like ESXi), you have a separate storage server for a reason. For a single-host solution, a "normal" OS with a storage pool running the VMs makes much more sense. Just because you can do something doesn't mean you should.

 

 

1 hour ago, Mikensan said:

I'd argue negligible differences, more heat, and higher power draw at best.

See above.

1 hour ago, Mikensan said:

What impact exactly have you personally observed from a router using vnics? Haven't seen mine ever perform any less than gigabit speeds (generally within the same hypervisor I've seen ~4gbit/s). In a home-lab and small office you're not exactly pumping large amounts of packets per second (OP's use case in this thread). Not to mention Cisco's ASAv works great with vnics as well as vmware's NSX. Until you need 10gbit/s speeds or high pp/s support you don't need to dedicate hardware NICs, normally then you're looking at storage operations, not routing. Storage/10GB routing should be done at the switch anyway. 

I run 10Gb at home. I wouldn't use virtualized NICs for a router. If it was passed through, so there wouldn't be a lot of world switching, then MAYBE. It's one of those "just because you can, doesn't mean you should" things.

 

1 hour ago, Mikensan said:

Not really... you give all storage to the NAS and create a NFS/iSCSI target for the hypervisor. This is the premise of hyperconverged solutions. Some solutions may bake the storage solution into the hypervisor O/S or run it as a VM. Nutanix I believe has it baked directly in if you're using their KVM based hypervisor vs their ESXI solution which I believe uses a VM (controller). 

See above. It's just another thing virtualized that doesn't need to be. I'm all for virtualizing stuff when it makes sense, but there's no need here, especially for someone who's not an expert in this sort of thing.


"Anger, which, far sweeter than trickling drops of honey, rises in the bosom of a man like smoke."

38 minutes ago, bloodthirster said:

 

Couple of points here:

#1 I got 2 6380s for 100 dollars

Not bad - but you can find newer-generation Xeons for the same or cheaper. For example, you can find Xeon E3-1220 v3's (that's the Haswell version) in the $50-per-socket ballpark. 2x 1220 v3's would smoke the 6380s in most scenarios.

38 minutes ago, bloodthirster said:

#2 most people don't need even that much processing power (or even a single socket)

True - but your CPU's are also very power hungry. If he doesn't need much CPU power, he's better off doing a low power build anyway.

38 minutes ago, bloodthirster said:

#3 the perf/dollar ratio is a LOT better

Depends on the specific builds. As I've shown above, if I can get Xeons for the same price as your old Opterons, that's a much better perf/dollar ratio than what you got.
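For what it's worth, perf/dollar is easy to put actual numbers on. A trivial sketch in Python - the scores and prices here are placeholders, not real Passmark results, so plug in whatever the benchmarks and eBay listings say:

```python
# Rough perf-per-dollar comparison. The scores and prices below are
# PLACEHOLDERS, not real benchmark numbers -- substitute real ones.
def perf_per_dollar(score: float, price: float) -> float:
    """Benchmark points per dollar spent."""
    return score / price

cpus = {
    "2x Opteron 6380 (used)": (2 * 8000, 100),  # (combined score, total price)
    "Xeon E3-1220 v3 (used)": (7000, 50),
}

for name, (score, price) in cpus.items():
    print(f"{name}: {perf_per_dollar(score, price):.0f} pts/$")
```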

38 minutes ago, bloodthirster said:

#4 Intel security flaws (although a processor that doesn't have SMT avoids some of them)

AMD's older CPU's also had security flaws, so that's a wash at best.

 

I have no problem with AMD - and their Zen-based server CPUs are kickass (Threadripper and EPYC). But their Bulldozer-derived architecture (including Piledriver, which yours is) was pretty terrible to begin with. It was an unfortunate mistake that AMD made, and it hurt them for a long time.

38 minutes ago, bloodthirster said:

Don't get me wrong, if you NEED perf, it's behind a firewall, and money isn't an object, Intel is your way to go.   But you can do a LOT better for the money.

Intel is definitely a valid option, and I've shown you can easily do very well with either Intel or AMD.

38 minutes ago, bloodthirster said:

Because ports (especially on a nice controller) are precious.

They can be, yes. But it entirely depends on what the OP wants to use said ports for. The SATA ports remain usable for the Hypervisor directly. I don't think it's as big a concern as you're implying. Especially since good used controllers are pretty cheap on eBay.

38 minutes ago, bloodthirster said:

Using iSCSI just so you can use a VM for a NAS seems like overkill. You can't use iSCSI and a network FS at once either... You could split the hard drives up, with some for iSCSI and some for a network FS, but that just gets even messier. It's a solution to a problem that doesn't exist.

Yes... yes you can. In ZFS, for example, you create your overall array, then you can easily portion some of it out to iSCSI and leave the rest for SMB/Samba or NFS, etc.
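e.g. on a ZFS pool it's a couple of commands - the pool/dataset names and size here are just examples:

```shell
# Carve a zvol out of the pool to back the iSCSI extent
zfs create -V 500G tank/vmstore

# Keep regular datasets on the same pool for SMB/NFS shares
zfs create tank/media

# Both live side by side on the one pool
zfs list -r tank
```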

 

And while iSCSI over a VM is overkill... so is this entire setup. If we wanted the non-overkill option, it'd be to buy an off-the-shelf QNAP or Synology and call it a day.

38 minutes ago, bloodthirster said:

Generally with dedicated VM hosts (like ESXi), you have a separate storage server for a reason. For a single-host solution, a "normal" OS with a storage pool running the VMs makes much more sense. Just because you can do something doesn't mean you should.

Except that other OSes that are more storage-focused tend to have shit VM support, such as FreeNAS. Sure, you can run VMs on FreeNAS, but it's not a great experience.

 

Running my FreeNAS VM on ESXi was the best decision I ever made with my home server. It runs flawlessly, with bare metal performance, and I don't have to deal with shit VM support.

 

Granted, ESXi is not the only choice. If your concern is storage support, then use Proxmox instead: it's Linux-based, has ZFS support out of the box, and is an excellent Type-1 hypervisor.

38 minutes ago, bloodthirster said:

See above.

I run 10Gb at home.  I wouldn't use virtualized  NICs for a router.  If it was passed through so there wouldn't be a lot of world switching then MAYBE.   It's one of those "just because you can, doesn't mean you should" things.

I don't think virtualizing a router is a terribly good idea to begin with, but in the vast majority of home user scenarios, performance won't be an issue. It could be a problem - but again, depends on the scenario.

38 minutes ago, bloodthirster said:

See above.   It's just another thing virtualized that doesn't need to be.  I'm all for virtualizing stuff when it makes sense, but there's no need and especially for someone who's not an expert in this sort of thing.

It can be useful, and it can also be a learning experience.

 

How much the OP wants to virtualize is entirely up to him. He may decide that it's not worth the trouble, and do a different setup. Or he may decide to learn the setup.

 

We've given him advice. If he has questions, we'll address those if they come up.

 

I'm not saying your suggestions are bad - they have merit - but you're dismissing things that are pretty standard procedure as if they're scary, unheard-of processes.


  • Bought two x5670s for $70 last year; they're $20/ea now. The cheapest 6380 I see is around $50/ea; the 6378 is better priced at $30/ea.
    • The market is flooded with Rx0's and D3x0's - they're crazy cheap and plentiful. I rarely come across anything as popular running Opterons.
  • In my environment the faster processors are making a difference, though I'm not exactly running just a couple of Apache and git servers...
  • Setting up iSCSI takes maybe five minutes, and you portion out only what you want for the extent, not the whole volume... so you can still easily create SMB shares.
  • I run 10Gb across 3 nodes connected to a switch at home as well - not sure how that changes anything?
  • You keep saying you wouldn't use vnics for a router, but you don't say why. Performance is great, and Cisco/VMware trust them enough for their virtual routers...
  • I don't see where any of this requires an expert; it's all well documented.

 

Overall I feel like you're quick to judge based on your own logic rather than having actually tried any of it. I think if you had (or do in the future) you'll find yourself surprised. Otherwise, people who actually have tried it wouldn't be replying to say so.

 

Anywho, you've made up your mind and the OP has chosen his path. He'll be happy and have extra funds for more stuff as a result; no use beating a dead horse, as they say.

