
Building a Proxmox Virtualization Server

Pietro95

Hello everybody, I am considering purchasing a virtualization server and using Proxmox.

We need a solution to host various development VMs; if possible, I would prefer an AMD EPYC processor.

 

I have seen the DELL PowerEdge R6515 (Server Standard) with:

 

- AMD EPYC 7402P 2.80GHz

- 4x 32GB RDIMM, 3200MT/s, Dual Rank

- 1x 480GB SSD SATA

- 2x 1.2TB Hard Drive SAS 12Gbps 10k 512n 2.5in Hot-Plug (RAID 1 setup)

 

For an estimated cost of 4500 EUR.
 
Considering that I am not an expert, what do you think and what do you recommend? 
Thank you :)

How many VMs?

How many users?

Budget?

Back-up solution?

 

While this is a decent base system, the storage solution is suboptimal.

You want a RAID1 for the operating system drive, and for VMs, ideally, you'll want a RAID10 or similar config made up of SSDs.

Hard disks are fine for bulk storage, archival data, etc.

 

Proxmox doesn't like to be installed on a pre-defined RAID though. Get a server with an HBA instead of a RAID controller.

 

 

 

PC Specs - AMD Ryzen 7 5800X3D MSI B550M Mortar - 32GB Corsair Vengeance RGB DDR4-3600 @ CL16 - ASRock RX7800XT 660p 1TB & Crucial P5 1TB Fractal Define Mini C CM V750v2 - Windows 11 Pro

 


16 minutes ago, NelizMastr said:

How many VMs?

How many users?

Budget?

Back-up solution?

 

While this is a decent base system, the storage solution is suboptimal.

You want a RAID1 for the operating system drive, and for VMs, ideally, you'll want a RAID10 or similar config made up of SSDs.

Hard disks are fine for bulk storage, archival data, etc.

 

Proxmox doesn't like to be installed on a pre-defined RAID though. Get a server with an HBA instead of a RAID controller.

 

 

 

Thank you for your reply.

 

Quote

How many VMs?

Currently 4 VMs:

 

N1: 16GB RAM

N2: 32GB RAM

N3: 8GB RAM

N4: 16GB RAM

 

Quote

How many users?

Users can vary; they connect to the VMs through their development tools.

 

Quote

Budget?

The budget is still to be defined; 4000 EUR already seems like a lot, so this is an unknown. If it is possible to save, we would be happy.

 

Quote

Back-up solution?

We do not currently plan a backup; we rely on the RAID array.

 

Quote

Proxmox doesn't like to be installed on a pre-defined RAID though. Get a server with an HBA instead of a RAID controller.

Can't I install Proxmox on the 480GB drive and RAID 1 the two 1.2TB drives?

 

 

Do you know of any pre-configured servers you could recommend?


7 minutes ago, Skar3 said:

Thank you for your reply.

 

Currently 4 VMs:

 

N1: 16GB RAM

N2: 32GB RAM

N3: 8GB RAM

N4: 16GB RAM

 

Users can vary; they connect to the VMs through their development tools.

 

The budget is still to be defined; 4000 EUR already seems like a lot, so this is an unknown. If it is possible to save, we would be happy.

In the server world, €4000 isn't much if you're buying new. Even the servers we sell to SMB customers cost over 5 grand all-in, and those are single-CPU, 64-128GB RAM servers with a bunch of SSDs and a handful of SAS disks.

7 minutes ago, Skar3 said:

 

We do not currently plan a backup; we rely on the RAID array.

RAID is not a backup. Take into consideration how fast you can get your system up and running again.

If that's more than a few hours, it's cheaper to get a couple of USB HDDs to back up to.

7 minutes ago, Skar3 said:

 

Can't I install Proxmox on the 480GB drive and RAID 1 the two 1.2TB drives?

Yes, but RAID1 means half the write performance and it's not scalable. You can't expand a RAID1.

7 minutes ago, Skar3 said:

 

 

Do you know of any hardware pre-configured servers to recommend?

A decent config would be something like this (see att.), but as you can see, it's not cheap.

And even that's not ideal as it comes with a RAID controller.

[attachment: screenshot of the suggested server configuration]

 


 


1 hour ago, Pietro95 said:

 

We do not currently plan a backup, we rely on the RAID system

RAID is not a backup; make sure to have a backup if the data is important.

 

 

1 hour ago, Pietro95 said:

Can't I install Proxmox on the 480GB drive and RAID 1 the two 1.2TB drives?

 

 

Do you know of any hardware pre-configured servers to recommend?

Don't get SAS HDDs, get SSDs here; HDDs are gonna be slow.

 

I'd get the BOSS card from Dell for boot.

 

 


Few things:

1) A 24-core CPU with 128GB RAM is overkill in 90% of cases. In the vast majority of cases, RAM is the issue, not CPU. You only need 24 cores alongside just 128GB RAM if you really are running CPU-intensive tasks simultaneously.

2) @NelizMastr 'Yes, but RAID1 means half the write performance and it's not scalable. You can't expand a RAID1.' -> this is false. RAID1 has about the same write performance as a single drive, but about 2x the read performance.

3) Proxmox works with HW RAID. Since this config has only 2 drives, you should use the onboard RAID controller for a RAID1 (mirror) on those drives. Install the OS and the VMs on them. The SSD should be used for caching and for the few VMs that require fast disk access. Keep in mind that the SSD has no redundancy at all.

4) If you do need HDDs (for price reasons), SAS drives, especially 10k RPM ones, are decent. SSDs are of course faster, but 10k SAS drives are quite fast, way faster than the usual 7200rpm SATA.

 

Overall, if you ask me, I'd just go with a CPU of up to 12 cores and use the extra cash for more drives (preferably SSDs), and possibly more RAM, depending on what you intend to run.
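To make the RAID trade-offs discussed above concrete, here is a rough back-of-the-envelope sketch (mine, not from the thread; it ignores filesystem and formatting overhead) of what the common RAID levels give you in usable space:

```python
def usable_tb(level, n_drives, drive_tb):
    """Approximate usable capacity for common RAID levels."""
    if level == "raid1":
        return drive_tb                    # all drives mirror one drive
    if level == "raid5":
        return (n_drives - 1) * drive_tb   # one drive's worth of parity
    if level == "raid10":
        return (n_drives // 2) * drive_tb  # striped mirrors, half the raw space
    raise ValueError(f"unknown level: {level}")

# Example: four 960GB (0.96 TB) SSDs, as in the configs discussed here
for level in ("raid1", "raid5", "raid10"):
    print(f"{level}: {usable_tb(level, 4, 0.96):.2f} TB usable")
```

With four drives, RAID10 trades roughly a drive's worth of capacity versus RAID5 for better rebuild behaviour and write performance.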


14 hours ago, Electronics Wizardy said:

RAID is not a backup; make sure to have a backup if the data is important.

 

 

Don't get SAS HDDs, get SSDs here; HDDs are gonna be slow.

 

I'd get the BOSS card from Dell for boot.

 

 

 

16 hours ago, NelizMastr said:

In the server world, €4000 isn't much if you're buying new. Even the servers we sell to SMB customers cost over 5 grand all-in, and those are single-CPU, 64-128GB RAM servers with a bunch of SSDs and a handful of SAS disks.

RAID is not a backup. Take into consideration how fast you can get your system up and running again.

If that's more than a few hours, it's cheaper to get a couple of USB HDDs to back up to.

Yes, but RAID1 means half the write performance and it's not scalable. You can't expand a RAID1.

A decent config would be something like this (see att.), but as you can see, it's not cheap.

And even that's not ideal as it comes with a RAID controller.

[attachment: screenshot of the suggested server configuration]

 

Thanks to both of you for the answers; we probably need to review our budget carefully and decide whether this is really worth it for us.

 


We are currently re-evaluating our needs and are considering a setup like this:

DELL server with 8x 2.5" slots, AMD EPYC 7302P 16C, 64GB RAM, 4x 960GB SAS SSD (RAID 5 setup) = 4500 EUR

Some points. Dell especially, but maybe HP too, often have much better pricing when you speak to a sales advisor. But you ideally want the spec nailed down so you're asking for exactly what you need.

 

You certainly want RAID1 for boot. Your storage is a bit odd though. Are you planning on running the VMs from the HDDs? I would tend towards all-SSD these days unless it's bulk storage. If you're using Proxmox, make sure the HBA supports passthrough of drives. With Dell, the lower-spec models (e.g. H3x0) will typically do this no problem, but the higher-end cards (e.g. H7x0) will not.

 

My VM servers here, for instance, have a pair of small drives for boot (some SSD, some HDD, typically 120-147GB). They then largely use SSD for VM storage, with the odd exception where large volumes of data storage are required. Even then, though, the main engineering file server is all SSD; it was simply too slow on mechanical drives with 30-40 users hitting it.

 

In the past, where money was tight, I've opted to buy a bare-bones config from Dell and then add extra RAM and storage myself. But this of course has warranty implications, and you occasionally run into weird quirks, like one Dell R540 which simply hated a particular model of Seagate hard drive. As soon as you slotted it in, the server ramped the fans to 100% and sat there screaming, with no way to stop it until you pulled the drive again.


46 minutes ago, Aragorn- said:

Some points. Dell especially, but maybe HP too, often have much better pricing when you speak to a sales advisor. But you ideally want the spec nailed down so you're asking for exactly what you need.

Agreed, list prices can actually be brought down quite a lot.

47 minutes ago, Aragorn- said:

You certainly want RAID1 for boot. Your storage is a bit odd though. Are you planning on running the VMs from the HDDs? I would tend towards all-SSD these days unless it's bulk storage. If you're using Proxmox, make sure the HBA supports passthrough of drives. With Dell, the lower-spec models (e.g. H3x0) will typically do this no problem, but the higher-end cards (e.g. H7x0) will not.

I disagree.

RAID1 is generally preferred when using HDDs.

RAID5 with 4 SSD drives is fine, for boot too. Just be sure to have a BBU, as without it performance will be quite impacted. Also be sure to have a controller which supports RAID5.

There is absolutely no need for passthrough of drives, even when using ZFS. Yes, it's preferable to give ZFS 'native' drives due to checksumming, but it's not a requirement. In this case, I'd say go with RAID5 on the controller.

 

@Pietro95 64GB RAM is quite low for a virtualization machine. But as stated previously, you know what you need. 64GB RAM with a 16-core CPU is not balanced.


54 minutes ago, Aragorn- said:

Some points. Dell especially, but maybe HP too, often have much better pricing when you speak to a sales advisor. But you ideally want the spec nailed down so you're asking for exactly what you need.

Yes, thank you. For now we have only used the online configurator; we plan to talk to a sales advisor.

3 minutes ago, Nick7 said:

Agreed, list prices can actually be brought down quite a lot.

I disagree.

RAID1 is generally preferred when using HDDs.

RAID5 with 4 SSD drives is fine, for boot too. Just be sure to have a BBU, as without it performance will be quite impacted. Also be sure to have a controller which supports RAID5.

There is absolutely no need for passthrough of drives, even when using ZFS. Yes, it's preferable to give ZFS 'native' drives due to checksumming, but it's not a requirement. In this case, I'd say go with RAID5 on the controller.

 

@Pietro95 64GB RAM is quite low for a virtualization machine. But as stated previously, you know what you need. 64GB RAM with a 16-core CPU is not balanced.

We planned to use a RAID 5 configuration for booting and for VM storage.

The server is a DELL PowerEdge R6515 with a PERC H330 RAID controller; I shouldn't have any problems, right?

 

Quote

64GB RAM is quite low for a virtualization machine. But as stated previously, you know what you need. 64GB RAM with a 16-core CPU is not balanced.

We are also planning to increase to 96GB; that should be better, right?

 

Thank you very much for the support, and sorry, I'm not very experienced on the server side.
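As a quick sanity check on the 96GB question above, a tiny sketch; the per-VM figures are the ones listed earlier in the thread, while the host overhead number is purely my assumption:

```python
# RAM requested by the four VMs listed earlier in the thread (GB)
vm_ram_gb = {"N1": 16, "N2": 32, "N3": 8, "N4": 16}

host_overhead_gb = 8  # assumed headroom for the Proxmox host itself

needed = sum(vm_ram_gb.values()) + host_overhead_gb
for installed in (64, 96, 128):
    print(f"{installed}GB installed -> {installed - needed}GB spare")
```

On those numbers, 64GB would already be oversubscribed once the host is accounted for, while 96GB leaves some room for growth.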


Why would you RAID 5 a boot drive? It's just a small/slow disk to get the OS up; a mirror is totally fine. I've seen USB sticks used in the past, but that's janky; just use a real drive.

 

Don't put your host OS on a VM drive; it just causes headaches in the future if you need to upgrade. Been there, done that.

 

ZFS itself might not particularly care about native drives (certainly not in the way the FreeNAS folks would have you think). HOWEVER, native drives mean the OS's monitoring utilities can read the drives' SMART data, which is invaluable for spotting failures before they bite you in the arse. RAID controllers often hide that sort of data, and you don't get the same visibility into what's going on with the underlying hardware. In any case, the H330 listed in the spec above will simply pass any unconfigured drives through to the OS, so it's all good on that front.

 

I would generally avoid controller-based RAID arrangements. If the controller dies, you're up shit creek. Use software-based RAID and you can stick those disks in any old machine and it'll boot right up.


25 minutes ago, Aragorn- said:

Why would you RAID 5 a boot drive? It's just a small/slow disk to get the OS up; a mirror is totally fine. I've seen USB sticks used in the past, but that's janky; just use a real drive.

OK, thanks. Then we plan RAID 5 storage for the VMs only; we will add two disks just for booting.


I was thinking about this final setup:
Quote

 

Smart Value PowerEdge R6515 Server Basic

2.5" Chassis with up to 10 Hard Drives, including up to 8 SAS/SATA or 9 NVMe Drives

AMD EPYC 7302P 3GHz, 16C/32T, 128M Cache (155W) DDR4-3200

96GB RDIMM, 3200MT/s, Dual Rank

4x 960GB SSD SATA Read Intensive 6Gbps 512 2.5in Hot-plug AG Drive, 1 DWPD, 1752 TBW

2x 240GB SATA SSD

PERC H330 RAID Controller

 

 

I was thinking of mirroring the two 240GB drives for the OS, and a RAID 5 setup for storage.

 

Does anyone know the difference between these two types of disks?
 

Quote

 

960GB SSD SATA Read Intensive 6Gbps 512 2.5in Hot-plug AG Drive, 1 DWPD, 1752 TBW

960GB SSD SATA Mix Use 6Gbps 512 2.5in Hot-plug AG Drive, 3 DWPD, 5256 TBW

 

 


Probably a different type of flash. The "Read Intensive" drives are, as the name suggests, likely better for applications that are heavy on reads. You will note the TBW rating is much lower; this refers to how much data can be written to the drive over its lifetime (TeraBytes Written).

 

The Mixed Use drives will be better suited to mixed read/write workloads.

 

It would be nice if they told you which actual drives they are using, but they tend not to, as they source from multiple suppliers.
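For what it's worth, the two endurance numbers in the Dell spec lines are consistent with each other: TBW is roughly capacity × DWPD × days in the warranty period. A quick check, assuming a 5-year warranty (the warranty length is my assumption, not stated in the spec):

```python
def tbw(capacity_tb, dwpd, warranty_years=5):
    """Terabytes written over the warranty period implied by a DWPD rating."""
    return capacity_tb * dwpd * 365 * warranty_years

print(round(tbw(0.96, 1)))  # Read Intensive drive; spec lists 1752 TBW
print(round(tbw(0.96, 3)))  # Mixed Use drive; spec lists 5256 TBW
```

So the two drives differ only in how many full-drive writes per day they are rated for, not in some fundamentally different spec.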


58 minutes ago, Pietro95 said:

I was thinking of mirroring the two 240GB drives for the OS, and a RAID 5 setup for storage.

 

Does anyone know the difference between these two types of discs?

Get a BOSS card for this use; it's made exactly for boot drives and has RAID built in.

 

4 hours ago, Pietro95 said:

We planned to use a RAID 5 configuration for booting and for VM storage.

The server is a DELL PowerEdge R6515 with a PERC H330 RAID controller; I shouldn't have any problems, right?

That's more of an HBA; I wouldn't use it for RAID 5. I'd just use ZFS in Proxmox for RAID then, or get a PERC H730P if you want hardware RAID.

 

 

Do you need new? I'd really look into something like a used Dell R730, then fill it up with 256GB of RAM and dual 14-cores. It should be faster and about the same price.

 


What is being done on the VMs? "Development" is a generic term.

I build VMs all day at work in vCenter, typically give them only 4GB of RAM, and they all work fine.

Also wondering why you are set on EPYC?

Your budget isn't really gonna get you something spectacular. With 1.2TB of data, it doesn't seem like you have a particularly large workload.

You should give more specifics about what you are doing on this server so we can see whether it's really worth it.

 

Also, remember the server is just part of the equation. What about connectivity? Do you have commercial-grade switches? Do you have a UPS for the server? What are the user endpoints? Sometimes a server doing the workload sounds cool but is ultimately impractical.

You can get laptops or desktops with good CPUs and RAM to match everyone's requirements for less than, or close to, your budget for the server alone.

 

Main Computer: CPU - Ryzen 5 5900x Cooler - NZXT Kraken x53 RAM - 32GB Corsair Vengeance Pro GPU - Zotac RTX 3070 Case - Lian Li LanCool II RGB (White) Storage - 1TB Inland Premium M.2 SSD and 2x WD 2TB Black.

Backup Computer: CPU - Ryzen 7 3700x Cooler - CoolerMaster ML240 V2 RAM - 32GB G.Skill RipJaws GPU - Gigabyte GTX 1070 FE Case - Cougar QBX Storage - 500GB WD Black M.2 SSD


20 minutes ago, Electronics Wizardy said:

Do you need new? I'd really look into something like a used Dell R730, then fill it up with 256GB of RAM and dual 14-cores. It should be faster and about the same price.

Unfortunately, it is difficult for us to buy used equipment.

 

9 minutes ago, TargetDron3 said:

What is being done on the VMs? "Development" is a generic term.

When I talk about development, I mean virtual machines that host the server side for the development of SCADA applications. These machines typically require 32GB of RAM; developers have clients on their PCs that connect to these development VMs. A virtual machine can serve multiple users at the same time.

 

15 minutes ago, TargetDron3 said:

Also wondering why you are set on EPYC?

I had one of these processors in mind because they are among the latest and best performing, but we are open to evaluating other options.

17 minutes ago, TargetDron3 said:

Also, remember the server is just part of the equation. What about connectivity? Do you have commercial-grade switches? Do you have a UPS for the server?

 
Yes, we have Cisco switches used throughout the office, and also a UPS ready for this eventual new server.

Thank you all for your interest and help.

1 minute ago, Pietro95 said:

I had one of these processors in mind because they are among the latest and best performing, but we are open to evaluating other options.

You might want to look at the R440 here. It's normally a cheaper chassis, and it seems you won't need the extra CPU power of EPYC. Then use the savings to buy more RAM and better drives.

 

 


2 minutes ago, Electronics Wizardy said:

You might want to look at the R440 here. It's normally a cheaper chassis, and it seems you won't need the extra CPU power of EPYC. Then use the savings to buy more RAM and better drives.

 

 

Thank you. Would you suggest a minimum of 12 cores?


Just now, Pietro95 said:

Thank you. Would you suggest a minimum of 12 cores?

It really depends on what those VMs need.

 

I'd get a Xeon Silver at least; they're much faster and not much more expensive, but it really depends on your needs.


17 hours ago, Pietro95 said:

 

960GB SSD SATA Read Intensive 6Gbps 512 2.5in Hot-plug AG Drive, 1 DWPD, 1752 TBW

960GB SSD SATA Mix Use 6Gbps 512 2.5in Hot-plug AG Drive, 3 DWPD, 5256 TBW

The main difference is the price.

Read Intensive drives are usually what the consumer world calls 'TLC' drives, and tolerate fewer writes.

The Mixed Use ones can be written more (3 drive writes per day vs. 1) and will be more expensive.

For the majority of users, RI drives are good enough. They are even used in low/mid/high-end storage systems too.


20 hours ago, Electronics Wizardy said:

It really depends on what those VMs need.

 

I'd get a Xeon Silver at least; they're much faster and not much more expensive, but it really depends on your needs.

[attachment: screenshot of the VM's resource requirements]

 

This is an example of the resources required by such a development machine (small/medium application size). I could have two machines of this type plus two more with much lower loads.

 

To contain the budget we are now evaluating a solution with this CPU:

PowerEdge R640 chassis (8x 2.5in hot-plug) + Xeon Silver 4210R + 96GB RAM


3 hours ago, Pietro95 said:

This is an example of the resources required by such a development machine (small/medium application size). I could have two machines of this type plus two more with much lower loads.

To contain the budget we are now evaluating a solution with this CPU:

PowerEdge R640 chassis (8x 2.5in hot-plug) + Xeon Silver 4210R + 96GB RAM

Yeah, that setup makes a lot of sense to me. Then probably get the BOSS card for boot, and put a few SSDs in for the VMs.


3 minutes ago, Electronics Wizardy said:

Yeah, that setup makes a lot of sense to me. Then probably get the BOSS card for boot, and put a few SSDs in for the VMs.

We have planned two of these disks for booting; are they wasted on just booting?

Quote

480GB SSD SATA Read Intensive 6Gbps 512e 2.5in Hot-plug Drive

 

If it doesn't change much, for simplicity's sake we would avoid adding another card.

 

For storage we have:

4x 960GB SSD SATA Read Intensive 6Gbps 512 2.5in Hot-plug AG Drive, 1 DWPD, 1752 TBW, in a RAID 5 setup (3+1 with hot spare)
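Assuming "3+1" means a three-drive RAID 5 array plus one hot spare (my reading, not confirmed above), the usable space works out as:

```python
drive_tb = 0.96      # 960GB SSDs from the config above
active_drives = 3    # drives in the RAID 5 array
hot_spares = 1       # standby drive that takes over on a failure

raw_tb = drive_tb * (active_drives + hot_spares)
usable_tb = drive_tb * (active_drives - 1)  # RAID 5 loses one drive to parity

print(f"raw {raw_tb:.2f} TB, usable {usable_tb:.2f} TB")
```

That is roughly half of the raw capacity ending up usable; a 4-drive RAID 5 without the spare would instead yield about three drives' worth of space, at the cost of no standby drive.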

