Advice for new Server needed - HP ML350p Gen8 vs custom Ryzen build

tik_1

Hi,

 

I want to buy a Homeserver for up to 1000 € (without storage).
I already have some SSDs and HDDs which I will use in ZFS RAIDs.

Since I live in a small apartment, it should be silent and not too large.


A Tower Server seems like the best option.

I already configured a system with refurbished components for 900€:

 

- HP ML350p Gen8 with 16x 2,5" SFF bays
- 2x Xeon E5-2697 v2 12-core
- 64 GB REG ECC DDR3 RAM
- 2x HP Smart Array P420(i) 8-port RAID controller
If the fans are too loud, I could replace them with quieter ones.



But I have 2 concerns:

 

- The HP RAID controllers seem to support an HBA mode, but according to some internet sources it is not really suitable for ZFS.
  What could I use instead for the 16x 2,5" drives?
 

- Power consumption and noise may be pretty high.
  Is there any chance I could configure a system with the new Ryzen series and ECC RAM with the same budget? Any suggestions for the Mainboard/RAM?
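On the first concern: a common alternative to the P420 controllers for ZFS is an LSI-based HBA flashed to IT mode (for example a used 9207-8i; two of them, or one plus a SAS expander, would cover 16 bays), so ZFS sees the raw disks. A minimal sketch of building a pool on passed-through drives; the pool name and device paths are hypothetical placeholders:

```shell
# Confirm the HBA exposes raw disks (no RAID abstraction in between);
# ZFS wants direct access so it can manage caching and read SMART itself.
lsblk -o NAME,MODEL,SERIAL,SIZE

# Build a pool on stable by-id paths (placeholders -- substitute your own):
zpool create tank raidz2 \
  /dev/disk/by-id/ata-SSD_1 \
  /dev/disk/by-id/ata-SSD_2 \
  /dev/disk/by-id/ata-SSD_3 \
  /dev/disk/by-id/ata-SSD_4

zpool status tank
```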


23 minutes ago, tik_1 said:

I already have some SSDs and HDDs which I will use in ZFS RAIDs.

What drives do you have? Normally I'd stay away from 2.5" HDDs for mass storage.

 

 

 

That system is probably gonna be pretty power hungry. I'd guess about 150 W at idle with a few VMs, probably peaking at over 300 W.

 

I'd personally get a DIY system here for the power savings. I'm a big fan of these ASRock Rack AM4 boards: https://www.newegg.com/asrock-rack-x470d4u-amd-ryzen-2nd-generation-series-processors/p/N82E16813140023?Description=asrock rack am4&cm_re=asrock_rack am4-_-13-140-023-_-Product&quicklink=true

 

Then get something like a 5600X and some ECC unbuffered DDR4 DIMMs, and you're idling sub-50 W with 8 SATA ports + 2 M.2 slots for boot.


10 minutes ago, Electronics Wizardy said:

What drives do you have? Normally I'd stay away from 2.5" HDDs for mass storage.

I have 4x 18 TB 3,5" from Toshiba which I will connect via an external HDD enclosure.
I will use the 2,5" bays only for SSDs. Right now I have 2x250 GB for the OS, but I will add 4x 1 TB in the future. And maybe some cheap refurbished SAS drives.

 

 

10 minutes ago, Electronics Wizardy said:

Looks good, but is there one with more PCIe slots? I plan to add a GPU in the future.


2 minutes ago, tik_1 said:

I have 4x 18 TB 3,5" from Toshiba which I will connect via an external HDD enclosure.
I will use the 2,5" bays only for SSDs. Right now I have 2x250 GB for the OS, but I will add 4x 1 TB in the future. And maybe some cheap refurbished SAS drives.

I'd go for the AM4 platform; then you can have the 3.5" HDDs internal, a much quieter system, and much less power draw.


9 minutes ago, Electronics Wizardy said:

I'd go for the AM4 platform; then you can have the 3.5" HDDs internal, a much quieter system, and much less power draw.

Yes, I think that would be the better option. Thanks for your Mainboard suggestion.
Can you recommend any other Mainboard with more PCIe slots for future upgrades?

 


19 minutes ago, tik_1 said:

Can you recommend any other Mainboard with more PCIe slots for future upgrades?

Unless you need a bunch of individual x4/x1 devices, more slots aren't useful on mainstream platforms, as you don't have the PCIe lanes to run all of them.

Intel HEDT and Server platform enthusiasts: Intel HEDT Xeon/i7 Megathread 

 

Main PC 

CPU: i9 7980XE @4.5GHz/1.22v/-2 AVX offset 

Cooler: EKWB Supremacy Block - custom loop w/360mm +280mm rads 

Motherboard: EVGA X299 Dark 

RAM:4x8GB HyperX Predator DDR4 @3200Mhz CL16 

GPU: Nvidia FE 2060 Super/Corsair HydroX 2070 FE block 

Storage:  1TB MP34 + 1TB 970 Evo + 500GB Atom30 + 250GB 960 Evo 

Optical Drives: LG WH14NS40 

PSU: EVGA 1600W T2 

Case & Fans: Corsair 750D Airflow - 3x Noctua iPPC NF-F12 + 4x Noctua iPPC NF-A14 PWM 

OS: Windows 11

 

Display: LG 27UK650-W (4K 60Hz IPS panel)

Mouse: EVGA X17

Keyboard: Corsair K55 RGB

 

Mobile/Work Devices: 2020 M1 MacBook Air (work computer) - iPhone 13 Pro Max - Apple Watch S3

 

Other Misc Devices: iPod Video (Gen 5.5E, 128GB SD card swap, running Rockbox), Nintendo Switch


26 minutes ago, Zando_ said:
46 minutes ago, tik_1 said:

 

Unless you need a bunch of individual x4/x1 devices, more slots aren't useful on mainstream platforms, as you don't have the PCIe lanes to run all of them.

I will need two slots:
- GPU

- RAID/Storage Controller (more SATA ports and eSATA for external enclosure)

So two slots would theoretically be enough.

 

I would like to have a hardware RAID 1 for my boot drives while using ZFS for the other drives.

What RAID Controller could I use that supports JBOD mode (suitable for ZFS)?

The ASRock Rack X470D4U seems to support RAID modes. Could I use RAID 1 for the boot drives and JBOD mode for the others (for ZFS)?
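As a software alternative to hardware RAID 1 for the boot drives: the Proxmox installer can put the OS on a ZFS mirror directly, and a mirror for other uses can be created by hand, with no controller involvement. A sketch with hypothetical pool and device names:

```shell
# Two-way ZFS mirror (the "RAID 1" equivalent); ashift=12 for 4K-sector SSDs.
# Device paths are placeholders -- substitute your own by-id paths.
zpool create -o ashift=12 bootpool mirror \
  /dev/disk/by-id/ata-BOOT_SSD_A \
  /dev/disk/by-id/ata-BOOT_SSD_B

zpool status bootpool
```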


1 hour ago, Electronics Wizardy said:

 

I'd personally get a DIY system here for the power savings. I'm a big fan of these ASRock Rack AM4 boards: https://www.newegg.com/asrock-rack-x470d4u-amd-ryzen-2nd-generation-series-processors/p/N82E16813140023?Description=asrock rack am4&cm_re=asrock_rack am4-_-13-140-023-_-Product&quicklink=true

 

Then get something like a 5600X and some ECC unbuffered DDR4 DIMMs, and you're idling sub-50 W with 8 SATA ports + 2 M.2 slots for boot.

Does this Mainboard even support Ryzen 5000 series?


Would the ASUS TUF Gaming X570-Plus Mainboard be compatible with "Kingston Server Premier DIMM 32GB, DDR4-3200" RAM?


1 hour ago, tik_1 said:

Does this Mainboard even support Ryzen 5000 series?

Yup, the 5000 series is supported on that board.

 

I'm a fan of those boards as they have a GPU built in (so you don't need one in a PCIe slot), and you get IPMI for remote management and optional 10GbE.

 

20 minutes ago, tik_1 said:

Would the ASUS TUF Gaming X570-Plus Mainboard be compatible with "Kingston Server Premier DIMM 32GB, DDR4-3200" RAM?

It should work, but the ECC function might not work correctly.


19 hours ago, tik_1 said:

Hi,

 

I want to buy a Homeserver for up to 1000 € (without storage).
I already have some SSDs and HDDs which I will use in ZFS RAIDs.

Since I live in a small apartment, it should be silent and not too large.


A Tower Server seems like the best option.

I already configured a system with refurbished components for 900€:

 

- HP ML350p Gen8 with 16x 2,5" SFF bays
- 2x Xeon E5-2697 v2 12-core
- 64 GB REG ECC DDR3 RAM
- 2x HP Smart Array P420(i) 8-port RAID controller
If the fans are too loud, I could replace them with quieter ones.



But I have 2 concerns:

 

- The HP RAID controllers seem to support an HBA mode, but according to some internet sources it is not really suitable for ZFS.
  What could I use instead for the 16x 2,5" drives?
 

- Power consumption and noise may be pretty high.
  Is there any chance I could configure a system with the new Ryzen series and ECC RAM with the same budget? Any suggestions for the Mainboard/RAM?

You probably don't need SSDs at all for this besides boot. The bottleneck is going to be networking way before it is the ZFS array. I wouldn't spend money buying new SSDs for this. Just use what you have for boot; no reason to make a flash-based vdev.
 

I wouldn’t mix SAS and SATA. Just use SATA. 
 

Why will you connect some externally? Why not plug them in to internal SATA? 
 

Remember, you can't just add more drives to a ZFS array. You need to build entirely new vdevs, which each require their own redundancy. You may want to consider Unraid if you plan to slow-roll drive purchases. ZFS is better for data integrity and performance, but it is not as flexible when it comes to adding drives.
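To make the vdev limitation concrete, here is a sketch of how an existing pool is expanded (pool and device names are made-up placeholders):

```shell
# A pool grows by whole vdevs, each carrying its own redundancy.
# Initial pool: one 4-disk RAIDZ2 vdev.
zpool create tank raidz2 disk1 disk2 disk3 disk4

# Later expansion: a second, identically redundant vdev is striped in.
# Adding a single disk to the existing RAIDZ2 vdev is not supported.
zpool add tank raidz2 disk5 disk6 disk7 disk8
```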
 

I would steer very clear of dual-socket systems. You don't need nearly as much CPU power (or RAM) as you think. I ran my homelab, with ESXi as the hypervisor, hosting TrueNAS with a 10x4TB Z2 array, a Home Assistant VM, multiple Ubuntu VMs, docker containers, a Windows LTSC VM, and a few other random things, on a Core i3 6100 and 28 GB of ECC RAM for years, and everything was perfectly happy and performed well. A dual-socket Xeon is just going to consume huge amounts of electricity for no reason, and the utilization will most likely be 5% tops, maybe with a few spikes up into the teens. Plus, new CPUs are much faster clock for clock; E5 v2s are not fast and are very power inefficient.

Rig: i7 13700k - - Asus Z790-P Wifi - - RTX 4080 - - 4x16GB 6000MHz - - Samsung 990 Pro 2TB NVMe Boot + Main Programs - - Assorted SATA SSD's for Photo Work - - Corsair RM850x - - Sound BlasterX EA-5 - - Corsair XC8 JTC Edition - - Corsair GPU Full Cover GPU Block - - XT45 X-Flow 420 + UT60 280 rads - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - LTT Deskpad

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/ Media Server: Proxmox VE host - - 512 NVMe Samsung 980 RAID Z1 for VM's/Proxmox boot - - Xeon e5 2660 V4- - Supermicro X10SRF-i - - 128 GB ECC 2133 - - 10x4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander - - TreuNAS + many other VM’s

 

iPhone 14 Pro - 2018 MacBook Air


On 10/18/2022 at 6:39 PM, LIGISTX said:

You probably don't need SSDs at all for this besides boot. The bottleneck is going to be networking way before it is the ZFS array. I wouldn't spend money buying new SSDs for this. Just use what you have for boot; no reason to make a flash-based vdev.

As I will be limited to 128 GB of RAM with a custom build, I will later add more SSDs for additional ZFS caching and for VMs.
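For reference, cache (L2ARC) and log (SLOG) SSDs can be attached to an existing pool at any later point; note that L2ARC only helps once RAM (the ARC) is exhausted, and it consumes some RAM for its own headers. A sketch with a hypothetical pool name and device paths:

```shell
# L2ARC needs no redundancy: losing it only loses cached copies of data.
zpool add tank cache /dev/disk/by-id/nvme-CACHE_SSD

# A SLOG only matters for sync writes and should be mirrored:
zpool add tank log mirror /dev/disk/by-id/ata-LOG_A /dev/disk/by-id/ata-LOG_B
```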

 

On 10/18/2022 at 6:39 PM, LIGISTX said:

 

I wouldn’t mix SAS and SATA. Just use SATA. 

I would not mix them, but create a separate, faster SAS volume for VMs, so I can use the SATA HDDs for data storage only.

 

On 10/18/2022 at 6:39 PM, LIGISTX said:

 

Why will you connect some externally? Why not plug them in to internal SATA? 

I would only do that with the mentioned HP server, since it has only 2,5" slots. With a custom build, I would use a case with at least 8 internal 3,5" slots.

 

On 10/18/2022 at 6:39 PM, LIGISTX said:

 

Remember, you can't just add more drives to a ZFS array. You need to build entirely new vdevs, which each require their own redundancy.

I can't add new vdevs, but I have solid backups and could simply recreate it.
I could also create another identical ZFS volume and stripe them for capacity.

 

On 10/18/2022 at 6:39 PM, LIGISTX said:

You may want to consider Unraid if you plan to slow-roll drive purchases. ZFS is better for data integrity and performance, but it is not as flexible when it comes to adding drives.

I will use Proxmox on a hardened Debian server for better security.

Also, integrity and performance are more important to me than super high flexibility.

 

On 10/18/2022 at 6:39 PM, LIGISTX said:

 

I would steer very clear of dual-socket systems. You don't need nearly as much CPU power (or RAM) as you think. I ran my homelab, with ESXi as the hypervisor, hosting TrueNAS with a 10x4TB Z2 array, a Home Assistant VM, multiple Ubuntu VMs, docker containers, a Windows LTSC VM, and a few other random things, on a Core i3 6100 and 28 GB of ECC RAM for years, and everything was perfectly happy and performed well. A dual-socket Xeon is just going to consume huge amounts of electricity for no reason, and the utilization will most likely be 5% tops, maybe with a few spikes up into the teens. Plus, new CPUs are much faster clock for clock; E5 v2s are not fast and are very power inefficient.

Yes, I would prefer a custom build.
Just want to make sure that ECC works correctly, as it's not always documented well for Ryzen CPUs.

 

On 10/18/2022 at 6:39 PM, LIGISTX said:

You don't need nearly as much CPU power (or RAM) as you think

I do.

I will run multiple game servers (Ark, Minecraft etc), Plex with transcoding, some deep learning tasks, Gitlab, Nextcloud, Windows, Firewall/VPN, some docker stuff etc.

Right now it is spread across multiple servers/vps, but I think a 5000 Ryzen with more than 20k Passmark score will handle that fine.

I also need a lot of RAM for solid ZFS caching, since there are lots of different small files.
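As background on the caching point: on Linux, OpenZFS caps the ARC at the `zfs_arc_max` module parameter, which defaults to roughly half of physical RAM; it can be pinned explicitly. A sketch of computing the value for a 64 GiB cap (the target size is an illustrative example, not a recommendation):

```shell
# 64 GiB expressed in bytes for the zfs_arc_max module option:
ARC_BYTES=$((64 * 1024 * 1024 * 1024))

# This is the line that would go into /etc/modprobe.d/zfs.conf
# (followed by an initramfs update and a reboot to take effect):
echo "options zfs zfs_arc_max=${ARC_BYTES}"
```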

 


How about this configuration?

 

CPU - Ryzen 7 5800X

Mainboard - ASRock X470D4U AMD X470 So.AM4

RAM - Kingston Server Premier ECC DDR4-2666 (32 GB module)

Power supply - be quiet! Pure Power 11 650 Watt

Case - Fractal Design Define R5 Midi Tower

 

Is everything compatible, so that the ECC would work fine?

Can anyone recommend any other cases?


30 minutes ago, tik_1 said:

I can't add new vdevs, but I have solid backups and could simply recreate it.
I could also create another identical ZFS volume and stripe them for capacity.

You probably don't need to keep recreating it from backup every time you want to add more space… if this is your actual plan, Unraid is a much better solution.
 

If you create more vdevs (which I think is what you're trying to say in the second part), you don't "need" to stripe them together. Just store different data on different vdevs. But if you do stripe them together, remember: if one of the vdevs has no redundancy at all, your entire pool has no redundancy at all.
 

32 minutes ago, tik_1 said:

I will use Proxmox on a hardened Debian server for better security.

Proxmox is Debian… don't virtualize Proxmox under yet another hypervisor. Proxmox is plenty secure and hardened. Don't turn on SSH, and use a password manager so no passwords are alike; that's really all there is to do. Don't mess with the underlying Debian code of Proxmox; there isn't anything you need to do to make it "safer" or more "hardened". The only way anyone will be able to gain access to your Proxmox box is if they gain entry into another device on your network and then leverage that device to SSH in or gain access via the webUI… but again, there is nothing to harden here except making sure SSH is off, since you likely don't need it to be on.
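The "leave SSH off" advice boils down to one service toggle on a Debian-based host; a minimal sketch:

```shell
# Stop sshd now and keep it off across reboots (Debian's unit is "ssh"):
systemctl disable --now ssh

# Verify nothing is listening on port 22 anymore:
ss -tln | grep ':22' || echo "no listener on 22"

# When access is genuinely needed, enable temporarily, then disable again:
# systemctl enable --now ssh
```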
 

37 minutes ago, tik_1 said:

I will run multiple game servers (Ark, Minecraft etc), Plex with transcoding, some deep learning tasks, Gitlab, Nextcloud, Windows, Firewall/VPN, some docker stuff etc.

Right now it is spread across multiple servers/vps, but I think a 5000 Ryzen with more than 20k Passmark score will handle that fine.

I also need a lot of RAM for solid ZFS caching, since there are lots of different small files.

The only thing here that will really need power is the game servers. Transcoding isn't that big a deal unless you're trying to transcode 4K, in which case "you're doing it wrong"; don't transcode 4K. On my i3 6100 build, Plex was given 2 threads of that 4-thread chip and I could transcode multiple movies at once while TrueNAS and all the other VMs hummed along just fine. I have (and had) multiple VPNs set up, docker containers, Windows LTSC, multiple Ubuntu VMs. All I didn't have was a firewall and game servers, and an i3 from 2015 was fine. Now I have a 28-thread Xeon, with pfSense virtualized in addition to everything else I previously mentioned, and it is extreme overkill. I don't think I have ever seen it go over 15% utilization.
 

RAM… I mean, sure. More RAM makes ZFS happy. But, also, gigabit networking is going to be the bottleneck. Are you going 10GbE?
 

Also, you're not planning to run DBs, game servers, or VMs off the ZFS array… right?



24 minutes ago, tik_1 said:

How about this configuration?

 

CPU - Ryzen 7 5800X

Mainboard - ASRock X470D4U AMD X470 So.AM4

RAM - Kingston Server Premier ECC DDR4-2666 (32 GB module)

Power supply - be quiet! Pure Power 11 650 Watt

Case - Fractal Design Define R5 Midi Tower

 

Is everything compatible, so that the ECC would work fine?

Can anyone recommend any other cases?

I am not sure. This is why I prefer used server gear… I know it's going to work. Hopefully someone can shed light on this setup; a 5800X would be a good option.



14 minutes ago, LIGISTX said:

Proxmox is Debian… don't virtualize Proxmox under yet another hypervisor. Proxmox is plenty secure and hardened. Don't turn on SSH, and use a password manager so no passwords are alike; that's really all there is to do. Don't mess with the underlying Debian code of Proxmox; there isn't anything you need to do to make it "safer" or more "hardened". The only way anyone will be able to gain access to your Proxmox box is if they gain entry into another device on your network and then leverage that device to SSH in or gain access via the webUI… but again, there is nothing to harden here except making sure SSH is off, since you likely don't need it to be on.

Proxmox is installed on top of a Debian installation. It uses KVM; no other hypervisor is involved. I will only make some configuration changes to the Debian base itself.

SSH will be enabled, but only reachable through a WireGuard VPN.
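One way to enforce "SSH only over WireGuard" is to bind sshd to the server's tunnel address, so it never answers on the LAN/WAN side. A sketch; the 10.8.0.1 address is a made-up example for the host's wg0 interface:

```shell
# In /etc/ssh/sshd_config, replace the default wildcard bind with the
# WireGuard interface address (example value):
#     ListenAddress 10.8.0.1

# Apply the change and confirm sshd only listens inside the tunnel:
systemctl restart ssh
ss -tlnp | grep ':22'
```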

 

14 minutes ago, LIGISTX said:

The only thing here that will really need power is the game servers. Transcoding isn't that big a deal unless you're trying to transcode 4K, in which case "you're doing it wrong"; don't transcode 4K. On my i3 6100 build, Plex was given 2 threads of that 4-thread chip and I could transcode multiple movies at once while TrueNAS and all the other VMs hummed along just fine. I have (and had) multiple VPNs set up, docker containers, Windows LTSC, multiple Ubuntu VMs. All I didn't have was a firewall and game servers, and an i3 from 2015 was fine. Now I have a 28-thread Xeon, with pfSense virtualized in addition to everything else I previously mentioned, and it is extreme overkill. I don't think I have ever seen it go over 15% utilization.

Let's just agree that there is nothing wrong with the Ryzen 7 5800X for under 300 € 😊

 

14 minutes ago, LIGISTX said:

RAM… I mean, sure. More RAM makes ZFS happy. But, also, gigabit networking is going to be the bottleneck. Are you going 10GbE?

Maybe in the future.

I just thought about the many small files (from Gitlab and Cloud) spread across the drives.

 

14 minutes ago, LIGISTX said:

Also, you're not planning to run DBs, game servers, or VMs off the ZFS array… right?

They will run on a dedicated SSD or SAS RAID 1.

The large ZFS array will only be used as storage for Plex, Nextcloud, network shares, GitLab, etc.


22 minutes ago, tik_1 said:

Proxmox is installed on top of a Debian installation. It uses KVM; no other hypervisor is involved. I will only make some configuration changes to the Debian base itself.

SSH will be enabled, but only reachable through a WireGuard VPN.

Correct, but there isn't much you would need to alter… again, it's pretty secure from the factory. It's like when people ask what settings they should change on pfSense to make it secure: the less you change, the more secure it is; it's secure by default.
 

What do you need SSH for on your hypervisor host? If you do need to use it, go into the webUI and turn it on, then back off. Again, just trying to limit the threat surface. Sure, you can give it a separate NIC, or I suppose you can likely do it with VLANs and bind SSH to a subnet only WireGuard is part of, but ¯\_(ツ)_/¯.
 

25 minutes ago, tik_1 said:

They will run on a dedicated SSD or SAS RAID 1.

The large ZFS array will only be used as storage for Plex, Nextcloud, network shares, GitLab, etc.

Use an SSD. No reason not to. Get an NVMe SSD and use that as your Proxmox boot drive and datastore for VMs and LXCs. No point in running OSes or applications on spinning rust.
 

FYI, by default Proxmox takes half of the drive it's installed on for VM storage, which is sorta stupid. I used this guide to help fix that:
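For context, the usual fix (roughly what guides on this topic describe) is to delete the default local-lvm thin pool and grow the root LV over the freed space. This destroys anything stored on local-lvm, so it is only sensible on a fresh install; a hedged sketch using the standard LVM tools:

```shell
# Remove the "data" thin pool that backs local-lvm (destructive!):
lvremove /dev/pve/data

# Grow the root logical volume into the freed space and resize the
# ext4 filesystem to match:
lvresize -l +100%FREE /dev/pve/root
resize2fs /dev/mapper/pve-root

# Finally, delete the now-empty local-lvm storage entry in the webUI
# (Datacenter -> Storage).
```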

 



On 10/21/2022 at 6:14 PM, LIGISTX said:

Correct, but there isn't much you would need to alter… again, it's pretty secure from the factory. It's like when people ask what settings they should change on pfSense to make it secure: the less you change, the more secure it is; it's secure by default.

I don't want to go too off-topic, but having an IT security background, I think there are a few things to consider.
For example, explicitly disabling KSM, which can otherwise enable side-channel attacks via memory pages shared between the VMs.
Also, AppArmor can be used to further restrict privileges.
I can minimize installed applications and kernel modules and implement log monitoring/IDS.
Additionally, I can restrict cryptographic functions and mitigate physical risks (like DMA attacks).
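As an illustration of the first item, disabling KSM on a Proxmox host takes two steps: stopping the tuning daemon, then unmerging the already-shared pages. A sketch:

```shell
# Stop and disable the KSM tuning daemon shipped with Proxmox:
systemctl disable --now ksmtuned

# Writing 2 to ksm/run stops merging AND unmerges existing shared pages:
echo 2 > /sys/kernel/mm/ksm/run

# pages_shared should drop to 0 afterwards:
cat /sys/kernel/mm/ksm/pages_shared
```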

 

On Debian with Proxmox, I know exactly what I'm doing, but Unraid seems opaque to me.
Working with an unknown environment usually results in more misconfigurations and security flaws.


On 10/21/2022 at 5:06 PM, tik_1 said:

How about this configuration?

 

CPU - Ryzen 7 5800X

Mainboard - ASRock X470D4U AMD X470 So.AM4

RAM - Kingston Server Premier ECC DDR4-2666 (32 GB module)

Power supply - be quiet! Pure Power 11 650 Watt

Case - Fractal Design Define R5 Midi Tower

 

Is everything compatible, so that the ECC would work fine?

Can anyone recommend any other cases?

Can anyone tell me more about this?

