Hardware needs for a server newbie

I'm planning on building a server with roughly a $2,500-3,000 budget.

I already have a rack and a chassis. I'd like to get a single-RU KVM, but that's not in this budget.

This budget allocation is specifically for all of the stuff that goes into the chassis.

 

I'm planning to use this as a VM machine and a NAS.

The chassis I have will support up to E-ATX, with any standard ATX power supply.

I'd like to have redundant power supplies, but I know that's hard to do with the ATX form factor.

The chassis supports up to 12 SAS or SATA drives.

 

I want to run a VM for a NAS, a VM with general-purpose Windows 10 tools for my network and home control, and a VM for an NVR server for home cameras.

Ideally, I'd use an M.2 SSD for the hypervisor, and I'd fill the 12 SAS/SATA bays as I go, assigning drives as RAID 1 pairs to VMs as I expand in sets of two.

That way I can use more of the budget on performance hardware up front and get storage hardware on the back end.

 

Things that would be helpful to me:

  • 2-3 or more NICs
  • Lots of PCIe lanes for expanding NICs and adding GPUs for in-home video routing
  • 2 CPU sockets in case I need to expand later (and 8-16 RAM slots), if affordable
  • M.2 on the motherboard for the hypervisor/OS drive

 

I was thinking about getting a single-socket EPYC motherboard and an 8-core EPYC, but I may just consider an E-ATX workstation motherboard with a Threadripper or a higher-core-count Ryzen. A dual-socket EPYC board would only be an option if it's really affordable.

I know consumer workloads pretty well and have been building consumer machines for a while, and I currently do networked AV solutions professionally, but things like NAS OSes, hypervisors, and NVR servers are new to me, so I don't know what kind of hardware best suits my needs. I'd be willing to get last-gen parts at roughly the same performance (within 10% or so) if the cost difference is large enough.

 

If it matters, I planned on using Windows Server 2019 (or even Windows 10 Pro, really) as the base "hypervisor" OS.

Then I'd use VMware or vSphere as my real hypervisor, with FreeNAS as my NAS VM. I haven't figured out my NVR solution yet, though.

 

If anyone has a better solution for the machine's primary OS/hypervisor, please let me know. I don't have to have Windows Server or Windows 10 Pro, but I do need to be able to remote-desktop into it, and preferably into my VMs directly.


Does that budget include drives? I agree EPYC is probably a good choice here.

But nothing you listed really needs that much CPU power or RAM, so it seems kind of overkill. An AM4 system seems to fit your needs fine and would be much cheaper, or an older dual-socket LGA 2011/2011-v3 Xeon setup.

For the OS, look at Proxmox as well. It does ZFS, which would work well here, and it allows easy web access to control all the VMs.


5 minutes ago, Electronics Wizardy said:

Does that budget include drives? I agree EPYC is probably a good choice here.

But nothing you listed really needs that much CPU power or RAM, so it seems kind of overkill. An AM4 system seems to fit your needs fine and would be much cheaper, or an older dual-socket LGA 2011/2011-v3 Xeon setup.

For the OS, look at Proxmox as well. It does ZFS, which would work well here, and it allows easy web access to control all the VMs.

As far as I understand it, NAS servers are kind of RAM-hungry.

This budget includes some drives, but it doesn't have to be the full 12. A single M.2 and six storage drives would work for me, I think; then I get three RAID 1 pairs to assign to VMs.

Side note: I need 5 TB of storage out of the box in order to migrate storage I already have.
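
A quick back-of-the-envelope check (Python, just because that's what I had open) that three mirrored pairs would cover that 5 TB; the 4 TB drive size is a placeholder for whatever I actually end up buying:

```python
# Sanity check: do three RAID 1 pairs cover the 5 TB I need to migrate?
# The 4 TB drive size is hypothetical; RAID 1 usable space = one drive per pair.

DRIVE_TB = 4   # placeholder drive size
PAIRS = 3      # three mirrored pairs from six bays

usable_tb = DRIVE_TB * PAIRS  # each mirror contributes one drive's capacity
print(f"{PAIRS} RAID 1 pairs of {DRIVE_TB} TB drives -> {usable_tb} TB usable")
print("Covers the 5 TB migration:", usable_tb >= 5)
```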

 

One of the reasons I want to go a little overkill is that a lot of the storage I have is games. I'm not sure if I want to keep the file data there and move it back and forth as needed, or if I'll try to play those games from a VM, since I'll already have them on the machine and the machine will pass video all around the house. I'm just not sure where to start.

I currently have an old AM3 FX-9590 system with 32 GB of RAM lying around to do some hypervisor testing with, but I don't want to keep it forever, and I may need to get hardware RAID controllers in order to populate the full 12 drives on it. I was kind of hoping to use a hypervisor to software-RAID the disks for the VMs.


1 minute ago, VioDuskar said:

As far as I understand it, NAS servers are kind of RAM-hungry.

Not really; extra RAM can help with caching, but unless you need high speeds you can get away with very little RAM. How fast do you need your NAS to be?

 

1 minute ago, VioDuskar said:

This budget includes some drives, but it doesn't have to be the full 12. A single M.2 and six storage drives would work for me, I think; then I get three RAID 1 pairs to assign to VMs.

I'd use an SSD, maybe another M.2, for the VM boot drives; HDDs are just too slow to run VMs off of. For mass storage, I'd personally shuck those 8 TB externals.

 

I'd make a RAID 10 instead of separate RAID 1s.

 

2 minutes ago, VioDuskar said:

One of the reasons I want to go a little overkill is that a lot of the storage I have is games. I'm not sure if I want to keep the file data there and move it back and forth as needed, or if I'll try to play those games from a VM, since I'll already have them on the machine and the machine will pass video all around the house. I'm just not sure where to start.

Playing games off a NAS is slightly annoying, as some games don't like running off SMB shares, so iSCSI would work best; but I'd just put a big HDD in your desktop and that should be fine. I have tried using a NAS for games, and I have never found a reason to; internal drives work the best.

 

3 minutes ago, VioDuskar said:

I currently have an old AM3 FX-9590 system with 32 GB of RAM lying around to do some hypervisor testing with, but I don't want to keep it forever, and I may need to get hardware RAID controllers in order to populate the full 12 drives on it. I was kind of hoping to use a hypervisor to software-RAID the disks for the VMs.

I'd run software RAID on the hypervisor. That's why I like Proxmox: good support for ZFS. Then you don't have to worry about RAID cards.
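
For illustration, here's a minimal sketch of creating one ZFS mirror pair like the ones you're describing, driven from Python purely as an example; pool and device names are placeholders, so check lsblk first:

```python
# Illustration only: create a single ZFS mirror (one "RAID 1 pair") via zpool.
# "tank" and the device names are placeholders -- this wipes the named disks.
import subprocess

subprocess.run(
    ["zpool", "create", "tank", "mirror", "/dev/sdb", "/dev/sdc"],
    check=True,  # raise if zpool fails rather than continuing silently
)
# Verify the mirror afterwards with: zpool status tank
```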


4 minutes ago, Electronics Wizardy said:

Not really; extra RAM can help with caching, but unless you need high speeds you can get away with very little RAM. How fast do you need your NAS to be?

I'd like to see 100 MB/s read on the local network, and 35 MB/s read over the internet. Hashtag realistic goals.

 

8 minutes ago, Electronics Wizardy said:

I'd use an SSD, maybe another M.2, for the VM boot drives; HDDs are just too slow to run VMs off of. For mass storage, I'd personally shuck those 8 TB externals.

I'd run all the VMs off of one or two big M.2s then, and the spinning drives would all be for NAS or NVR at that point? Sounds like a good idea.

What do you mean, shuck the 8 TB externals? Most of my current storage is on internal 3.5" HDDs right now.

 

12 minutes ago, Electronics Wizardy said:

I'd make a RAID 10 instead of separate RAID 1s.

Playing games off a NAS is slightly annoying, as some games don't like running off SMB shares, so iSCSI would work best; but I'd just put a big HDD in your desktop and that should be fine. I have tried using a NAS for games, and I have never found a reason to; internal drives work the best.

So if I put the VMs on M.2s, then the general-purpose/mild-gaming VM can just pull its game files from a disk to the M.2 partition and play them locally off the M.2, migrating game data back and forth as needed from the disk.

 

If the disks are only used for NAS and NVR at that point, why would I need RAID 10? I shouldn't need the speed; they should hit 100 MB/s without a RAID 0.

That Proxmox looks neat. I'll try it out.


2 minutes ago, VioDuskar said:

If the disks are only used for NAS and NVR at that point, why would I need RAID 10? I shouldn't need the speed; they should hit 100 MB/s without a RAID 0.

 

RAID 10 would just make it easier to manage the drives, as it's all one big pool, but having one pool for the NVR and one for the NAS would work well.
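
Here's a rough sketch of the trade-off; the drive size is hypothetical, and the point is that usable capacity comes out the same either way:

```python
# Same six drives, two layouts: three separate RAID 1 pairs vs one RAID 10 pool.
# Usable space is half the raw capacity either way; drive size is hypothetical.

DRIVE_TB = 4
DRIVES = 6

raid1_volumes = [DRIVE_TB] * (DRIVES // 2)   # three independent 4 TB volumes
raid10_pool_tb = DRIVE_TB * DRIVES // 2      # one 12 TB striped-mirror pool

print(f"3x RAID 1: {raid1_volumes} TB volumes, {sum(raid1_volumes)} TB total")
print(f"1x RAID 10: {raid10_pool_tb} TB in a single pool")
# Both layouts survive one drive failure per mirror; RAID 10 adds stripe
# throughput and a single pool to manage, at the cost of mixing NAS and NVR I/O.
```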

2 minutes ago, VioDuskar said:

What do you mean, shuck the 8 TB externals? Most of my current storage is on internal 3.5" HDDs right now.

 

You can get 8 TB and bigger external drives for much cheaper than the internal drives, so get those if you need more drives, and remove the drives from inside them.

 

3 minutes ago, VioDuskar said:

I'd like to see 100 MB/s read on the local network, and 35 MB/s read over the internet. Hashtag realistic goals.

 

Yeah, that is super easy to hit; you don't really have to do anything special to see those speeds.
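
The back-of-the-envelope math, assuming plain gigabit Ethernet:

```python
# Converting the stated targets to line rate: 1 MB/s = 8 Mbit/s.

targets_mb_s = {"LAN read": 100, "internet read": 35}
for name, mb_s in targets_mb_s.items():
    print(f"{name}: {mb_s} MB/s = {mb_s * 8} Mbit/s")

# Gigabit Ethernet carries roughly 117 MB/s after framing overhead, so the
# 100 MB/s LAN goal fits on a single 1 GbE link; 35 MB/s over the internet
# just needs a ~300 Mbit/s uplink.
```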

3 minutes ago, VioDuskar said:

I'd run all the VMs off of one or two big M.2s then, and the spinning drives would all be for NAS or NVR at that point? Sounds like a good idea.

 

That's what I do; it works well.


1 minute ago, Electronics Wizardy said:

You can get 8 TB and bigger external drives for much cheaper than the internal drives, so get those if you need more drives, and remove the drives from inside them.

Ahh, yeah, that trick. I've done that before. Got any specific models in mind? I thought you meant shuck like trash, but now I see you mean shuck like removing the husk.

 

So do I get an 8-core EPYC, or a Threadripper with 16 cores for the same price?

I won't be able to find a dual-socket Threadripper, and I won't find a board with more than 1-2 NICs, but I may still get at least 8 DIMM slots.

 


Just now, VioDuskar said:

Ahh, yeah, that trick. I've done that before. Got any specific models in mind? I thought you meant shuck like trash, but now I see you mean shuck like removing the husk.

Normally the WD ones are the best bet, as they're not SMR.

 

Just now, VioDuskar said:

So do I get an 8-core EPYC, or a Threadripper with 16 cores for the same price?

I won't be able to find a dual-socket Threadripper, and I won't find a board with more than 1-2 NICs, but I may still get at least 8 DIMM slots.

You really don't seem to need that much compute power or I/O, so I'd probably just get an LGA 1151 system or an AM4 system. EPYC or Threadripper seems pretty overkill, and you don't need that much RAM.

 

What do you need all those NICs for? A single NIC should be fine here.


Just now, Electronics Wizardy said:

Normally the WD ones are the best bet, as they're not SMR.

You really don't seem to need that much compute power or I/O, so I'd probably just get an LGA 1151 system or an AM4 system. EPYC or Threadripper seems pretty overkill, and you don't need that much RAM.

What do you need all those NICs for? A single NIC should be fine here.

To give each VM its own NIC. I have a 48-port Cisco Catalyst in the same rack.

I'd like the NVR to at least have its own NIC, and the NAS too if I have space.

Or, barring that, link aggregation on the hypervisor if it supports it.


Just now, VioDuskar said:

To give each VM its own NIC. I have a 48-port Cisco Catalyst in the same rack.

I'd like the NVR to at least have its own NIC, and the NAS too if I have space.

Or, barring that, link aggregation on the hypervisor if it supports it.

I'd just run it all over a virtual switch; there's no reason to run multiple cables. Run VLANs if you want to keep the NVR separate, but I think a single NIC would be fine here.

 

 


1 minute ago, Electronics Wizardy said:

I'd just run it all over a virtual switch; there's no reason to run multiple cables. Run VLANs if you want to keep the NVR separate, but I think a single NIC would be fine here.

I have hundreds of RJ45 connectors and hundreds of feet of Cat6a cable. I want it.

 

Hashtag unnecessary goals.


If it's connectivity you're after, investigate the GIGABYTE C621-SU8 mainboard. It's a proper server board, socket 3647, with dual networking (possibly even 10Gb) and a management port. It has a bunch of PCIe slots as well as SATA connectors, plenty of RAM support (1TB, thank you very much!), and as an ATX form factor it'll fit in your case.

 

Alternatives include the ASUS P10S-E/4L (socket 1151; more RJ45, less RAM, PCIe, and SATA) and the ASUS Z11PA-D8 (dual socket 3647; less PCIe, no SATA, but 4x SAS and 5x RJ45). Regrettably, finding AMD EPYC-based server boards isn't so easy for consumers/prosumers, probably because they're really high-end OEM stuff with price tags to match.

 

As for the OS: FreeNAS, Proxmox, et al. do a fine or even great job, but they come at a cost. A decent, recent Linux distro will do the same thing if you're prepared to put in some work yourself, so it comes down to the eternal "how much is my time worth?" question. That choice is obviously yours. I do recommend using HBA cards over RAID cards and letting the Linux kernel take care of any RAID functions with its mdadm tools.
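
For instance, here's a minimal sketch of building that kind of array with mdadm, driven from Python purely for illustration; the device names are placeholders, so check lsblk before running anything like this:

```python
# Illustration only: create the 3-disk mdadm RAID 5 suggested below.
# Device names are placeholders -- this destroys whatever is on those disks.
import subprocess

DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # hypothetical: the 3x 4TB drives

subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=5", f"--raid-devices={len(DISKS)}", *DISKS],
    check=True,  # fail loudly instead of continuing with no array
)
# Watch the initial resync afterwards with: cat /proc/mdstat
```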

 

For your immediate storage needs I'd suggest getting 3x 4TB spinning disks in a RAID 5, with part of a 1TB boot NVMe as cache for that RAID 5 (realistically, you'd only need 64GB max for the OS, and that includes log files). This would leave the remainder of the boot drive, as well as any additional M.2 NVMe slots, available for future cache expansion or even super-fast storage.
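
A quick sanity check of that layout against your 5 TB requirement (sizes as suggested above, with rough 1000 GB/TB rounding):

```python
# RAID 5 usable capacity = (n - 1) drives; one drive's worth goes to parity.

DRIVE_TB = 4
N_DRIVES = 3
NVME_GB = 1000   # ~1 TB boot drive
OS_GB = 64       # the "64 GB max for the OS" estimate above

usable_tb = (N_DRIVES - 1) * DRIVE_TB
spare_gb = NVME_GB - OS_GB
print(f"{N_DRIVES}x {DRIVE_TB} TB in RAID 5 -> {usable_tb} TB usable (need >= 5 TB)")
print(f"~{spare_gb} GB of the boot NVMe left for cache or fast storage")
```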

 

HTH!

"You don't need eyes to see, you need vision"

 

(Faithless, 'Reverence' from the 1996 Reverence album)

Link to comment
Share on other sites

Link to post
Share on other sites
