
NVMe Based Epyc Server for NAS / Virtualization - OS + GPU Decisions?

Budget (including currency): ~$50,000 USD

Country: USA

Games, programs or workloads that it will be used for: High-volume NAS storage of large numbers of small files, mostly photo work documentation and productivity file management. Second, it will need to run 6-8 VMs in a mix of Linux (currently Debian with a GUI) and Windows. The Linux systems run database management systems and a high-volume mail server. The Windows VMs will be client access systems running simple 3D modeling software for construction projects, with minimum requirements of a 64-bit dual-core CPU, 8 GB RAM, and 1 GB of dedicated GPU memory. We need to run 4-6 VMs with this workload at a time.

 

Other details (existing parts lists, whether any peripherals are needed, what you're upgrading from, when you're going to buy, what resolution and refresh rate you want to play at, etc): We are tentatively planning to use a Supermicro 2U Hyper A+ server with up to 24 NVMe drives, internal M.2 NVMe boot drives, ~768 GB to 1 TB of RAM, and 40 Gb networking.

 

Currently need about 60 TB of data. Hoping to use ZFS RAIDZ2 with a hot spare, plus a dedicated NVMe special metadata vdev, since we manage millions of small files and metadata performance (index loading, search, etc.) is a HUGE issue.
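For what it's worth, a pool along those lines could be sketched like this. The device paths are placeholders for illustration, the vdev widths are one possible layout rather than a recommendation, and the special_small_blocks threshold is a tunable you'd want to benchmark against your file size distribution:

```shell
# Hypothetical device names -- substitute the actual NVMe paths.
# 8-wide RAIDZ2 data vdev, a mirrored special vdev for metadata,
# and a hot spare. The special vdev must be redundant: losing it
# loses the whole pool.
zpool create tank \
  raidz2 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 \
         /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 \
  special mirror /dev/nvme9n1 /dev/nvme10n1 \
  spare /dev/nvme11n1

# Optionally route small file data blocks (not just metadata) to
# the special vdev; the value must be below the dataset recordsize.
zfs set special_small_blocks=64K tank
```

With millions of small files, sizing the special vdev generously matters, since once it fills, metadata spills back onto the data vdevs.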

 

Not a super Linux expert, so I was originally thinking of using TrueNAS SCALE, but I'm running into concerns about its VM management: namely the stability of the hypervisor under a heavy VM workload, and splitting a GPU between multiple Windows VMs.
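On the GPU-splitting question: without NVIDIA vGPU licensing, the usual route is whole-GPU PCIe passthrough, one GPU per VM. A quick way to check what's feasible on a given Linux host is to inspect the IOMMU grouping, since each passed-through device must sit in its own group (or the entire group goes to the VM together). These are standard Linux tools; output will vary by hardware:

```shell
# List NVIDIA devices with their PCI IDs
lspci -nn | grep -i nvidia

# Print every device's IOMMU group (requires IOMMU enabled in
# BIOS and on the kernel command line, e.g. amd_iommu=on)
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/}; n=${n%%/*}
  printf 'IOMMU group %s: ' "$n"
  lspci -nns "${d##*/}"
done
```

If the GPU shares a group with other devices, ACS support on the platform determines whether it can be isolated cleanly.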

 

Also thinking about which GPU to include, given that the server has either 1x PCIe 5.0 x16 or 2x PCIe 5.0 x8 slots (single-height, 11.5" long). The server also supports another PCIe 5.0 x16 slot in the AIOM form factor. Looking at the RTX 4000 currently, mostly because it appears it will fit. We are not super restricted on rack space and have plenty of rack cooling, so I'm open to other platforms, but build quality, warranty, supportability, ease of management, hot-swap front drive bays, etc., are all important features.

 

All input appreciated!

 


On GPU: since I'm more limited by interior space than by PCIe slots, what about an eGPU of some sort? There must be an easy way to go from a PCIe 5.0 x8 slot to a rack-mount enclosure holding multiple GPUs? Then I could possibly give each VM a dedicated GPU and not deal with splitting a single GPU across multiple VMs.

