Looking for NAS server mobo & CPU recommendations

Building an Unraid NAS server with some Docker containers like Jellyfin and Nextcloud, a bit of a home lab, and multi-server backup.
The mobo needs 6-8 (usable) SATA ports; ideally 2x M.2 slots, but 1 is OK; an x8 PCIe slot for a 10GbE NIC (additional PCIe slots are a bonus for expansion down the road); and ECC RAM would be nice, but it's not a deal breaker if not.
I don't mind if it's ATX or mATX.

There are some NAS-specific boards out there and some server-specific boards, which I have no experience with yet. So besides your standard home/office boards, what should I be looking at? Unless of course there are some standard boards that meet my requirements.

 


you're gonna need mATX minimum, if not ATX.

Almost no boards today have 6+ SATA ports; you'll need a PCIe card.

NIC card in PCIe

GPU in PCIe

+ expansion = ATX

 


42 minutes ago, tkitch said:

you're gonna need mATX minimum, if not ATX.

Almost no boards today have 6+ SATA ports; you'll need a PCIe card.

NIC card in PCIe

GPU in PCIe

+ expansion = ATX

 

Yeah, as I mentioned, I don't mind what size it is as long as it fits the requirements.
 


57 minutes ago, tkitch said:

you're gonna need mATX minimum, if not ATX.

Almost no boards today have 6+ SATA ports; you'll need a PCIe card.

NIC card in PCIe

GPU in PCIe

+ expansion = ATX

 

mATX
this little beauty covers everything except the x8 PCIe slot for the NIC, but it has 2x 2.5GbE, which would still do nicely
 

[screenshots: the suggested board's spec listing]

 

You could also sacrifice an M.2 slot for additional SATA storage.


6 hours ago, K1LLA_KING_KONG said:

Building an Unraid NAS server with some Docker containers like Jellyfin and Nextcloud, a bit of a home lab, and multi-server backup.
The mobo needs 6-8 (usable) SATA ports; ideally 2x M.2 slots, but 1 is OK; an x8 PCIe slot for a 10GbE NIC (additional PCIe slots are a bonus for expansion down the road)

CPU: Intel Celeron G6900 3.4 GHz Dual-Core Processor
Motherboard: ASRock B760M Pro RS Micro ATX LGA1700 Motherboard
Case: KOLINK Satellite MicroATX Desktop Case
 

6 hours ago, K1LLA_KING_KONG said:

ECC RAM would be nice, but it's not a deal breaker if not.

With the above components you can use DDR5 RAM modules that all have ECC functionality.

 

The motherboard gives you 3 Hyper M.2 + 4 SATA3

You have seven HDD/SSD slots in total.

 

You can use 1 M.2 slot for the operating system and you would still have 6 slots.

If you use HDDs/SSDs with large capacity, 6 (storage) slots is more than enough for any use case.

 

The GUI of an operating system is a very large attack surface, so it is always better not to use 'GUI operating systems' when you don't have to.

Also, Docker is one of the most insecure IT technologies, so it's better to use the more secure FreeBSD Jails.

 

Unraid setups with Docker are more insecure, and if you value security you have better options:

https://klarasystems.com/articles/building-your-own-freebsd-based-nas-with-zfs/

https://klarasystems.com/articles/part-2-tuning-your-freebsd-configuration-for-your-nas/

https://klarasystems.com/articles/part-3-building-your-own-freebsd-based-nas-with-zfs/

OS: FreeBSD 13.3  WM: bspwm  Hardware: Intel 12600KF -- Kingston dual-channel CL36 @6200 -- Sapphire RX 7600 -- BIOSTAR B760MZ-E PRO -- Antec P6 -- Xilence XP550 -- ARCTIC i35 -- 980 PRO 500GB


17 minutes ago, The Hope said:

CPU: Intel Celeron G6900 3.4 GHz Dual-Core Processor
Motherboard: ASRock B760M Pro RS Micro ATX LGA1700 Motherboard
Case: KOLINK Satellite MicroATX Desktop Case
 

With the above components you can use DDR5 RAM modules that all have ECC functionality.

Thanks for your input; it seems like a cheap setup for sure.

The CPU listing doesn't mention ECC capability; how do you know it supports it?
Also, dual core? I feel like 4 cores would be a minimum. 2 cores would be enough for a basic NAS, but if I ever want to run a VM...

The mobo does not support ECC RAM and only has 4 SATA ports. A bonus is that it has 3x M.2, which would allow me to use 1 as a SATA expander and still have 2 as my main mirrored cache pool (that has me thinking now...). I mean, the more M.2 the better.

Do you have experience with this combo?


41 minutes ago, K1LLA_KING_KONG said:

Thanks for your input; it seems like a cheap setup for sure.

The CPU listing doesn't mention ECC capability; how do you know it supports it?
Also, dual core? I feel like 4 cores would be a minimum. 2 cores would be enough for a basic NAS, but if I ever want to run a VM...

The mobo does not support ECC RAM and only has 4 SATA ports. A bonus is that it has 3x M.2, which would allow me to use 1 as a SATA expander and still have 2 as my main mirrored cache pool (that has me thinking now...). I mean, the more M.2 the better.

Do you have experience with this combo?

https://www.tomshardware.com/news/intel-enables-ecc-on-12th-gen-core-cpus

 

G6900  = Alder Lake

 

The ASRock B760M Pro RS (Micro ATX, LGA1700) is a very new motherboard with DDR5 support.

 

https://www.kingston.com/en/blog/pc-performance/ddr5-overview

 

On-die ECC (Error Correction Code) is a new feature designed to correct bit errors within the DRAM chip. As DRAM chips increase in density through shrinking wafer lithography, the potential for data leakage increases. On-die ECC mitigates this risk by correcting errors within the chip, increasing reliability and reducing defect rates. This technology cannot correct errors outside of the chip or those that occur on the bus between the module and memory controller housed within the CPU. ECC-enabled processors for servers and workstations feature the coding that can correct single or multi-bit errors on the fly. Extra DRAM bits must be available to allow this correction to occur, as featured on ECC-class module types such as ECC unbuffered, registered and load reduced.

 

It's a good question whether my exact build will give you working ECC support or not, but I suspect so, and it's entirely possible for Intel to make this work perfectly. It is best to have dual-channel DDR5 RAM, as this has higher bandwidth, but single-channel RAM will also work well. It is also best to go for as high a frequency as possible with the best timings; the differences are not big, but they do make a difference.

 

You don't have to worry about virtualization. I regularly run VMs on my Intel i3-3240 (about 2x slower than the advised Celeron CPU).

To give you an idea of the performance of FreeBSD + VirtualBox + i3-3240 + 850 EVO 500GB:

- Alpine Linux fully boots in ~8 seconds in VirtualBox

- Clear Linux scores over 100 in Speedometer 2.0 (via the Brave browser) and also boots (very) quickly in VirtualBox

- When I virtualize TrueNAS on this hardware via VirtualBox, I get download speeds in Nextcloud of ~35 MB/s, which is the maximum for our home network, so we never see higher download speeds on our network. TrueNAS (via VirtualBox) also boots (very) quickly on this old hardware.

- Windows 7 boots very fast. I didn't time it, but it boots incredibly fast and is very snappy and responsive in VirtualBox.

 

This is what I see in VirtualBox on FreeBSD + i3-3240 + 850 EVO 500GB.

When I use bhyve on FreeBSD I see higher performance than with VirtualBox.

OS: FreeBSD 13.3  WM: bspwm  Hardware: Intel 12600KF -- Kingston dual-channel CL36 @6200 -- Sapphire RX 7600 -- BIOSTAR B760MZ-E PRO -- Antec P6 -- Xilence XP550 -- ARCTIC i35 -- 980 PRO 500GB


19 minutes ago, The Hope said:

https://www.tomshardware.com/news/intel-enables-ecc-on-12th-gen-core-cpus

 

G6900  = Alder Lake

 

The ASRock B760M Pro RS (Micro ATX, LGA1700) is a very new motherboard with DDR5 support.

 

https://www.kingston.com/en/blog/pc-performance/ddr5-overview

 

On-die ECC (Error Correction Code) is a new feature designed to correct bit errors within the DRAM chip. As DRAM chips increase in density through shrinking wafer lithography, the potential for data leakage increases. On-die ECC mitigates this risk by correcting errors within the chip, increasing reliability and reducing defect rates. This technology cannot correct errors outside of the chip or those that occur on the bus between the module and memory controller housed within the CPU. ECC-enabled processors for servers and workstations feature the coding that can correct single or multi-bit errors on the fly. Extra DRAM bits must be available to allow this correction to occur, as featured on ECC-class module types such as ECC unbuffered, registered and load reduced.

 

It's a good question whether my exact build will give you working ECC support or not, but I suspect so, and it's entirely possible for Intel to make this work perfectly. It is best to have dual-channel DDR5 RAM, as this has higher bandwidth, but single-channel RAM will also work well. It is also best to go for as high a frequency as possible with the best timings; the differences are not big, but they do make a difference.

 

You don't have to worry about virtualization. I regularly run VMs on my Intel i3-3240 (about 2x slower than the advised Celeron CPU).

To give you an idea of the performance of FreeBSD + VirtualBox + i3-3240 + 850 EVO 500GB:

- Alpine Linux fully boots in ~8 seconds in VirtualBox

- Clear Linux scores over 100 in Speedometer 2.0 (via the Brave browser) and also boots (very) quickly in VirtualBox

- When I virtualize TrueNAS on this hardware via VirtualBox, I get download speeds in Nextcloud of ~35 MB/s, which is the maximum for our home network, so we never see higher download speeds on our network. TrueNAS (via VirtualBox) also boots (very) quickly on this old hardware.

 

This is what I see in VirtualBox on FreeBSD.

When I use bhyve on FreeBSD I see even higher performance than with VirtualBox.

Thanks again for the very informative response.

Just for discussion's sake (not an argument; as I mentioned, ECC or not is not an issue for me): this on-die ECC is not full-module ECC. It's helpful, but just not the same as actual ECC, as explained here.

And you mention TrueNAS 🤢 I'm sorry, I tried it and hated it. It's very difficult to get any kind of support and guides. For me, the main benefit of Unraid is that there is a multitude of YouTube guides for almost any Docker install and config. I mean, I couldn't even get Nextcloud working locally on TrueNAS, let alone a remote connection. Now I have everything working 100% on Unraid; it still takes some research and trial and error, but the process is not infuriating like TrueNAS. Also, I wouldn't stand a chance without the GUI 😂
But I do understand what you're saying, and it may be more ideal if I were experienced with all the shell commands.


3 hours ago, K1LLA_KING_KONG said:

Just for discussion's sake (not an argument; as I mentioned, ECC or not is not an issue for me): this on-die ECC is not full-module ECC. It's helpful, but just not the same as actual ECC, as explained here.

Well, I had heard a lot of people say that you had to have DDR5 RAM for the ECC feature. But after looking at your link, I understand that it is not at all the same as normal ECC memory. It is rather a method to sell defective RAM to people, and also to mislead people with the 'ECC' naming.

 

I think ECC is kind of needed for a decent NAS. W680 motherboards aren't cheap, so that's not an option.

Making a relatively cheap setup with recent hardware and full ECC support is not easy; maybe it's impossible.

The AMD Ryzen 5 PRO 5650G has ECC support and very low idle power consumption, but it is not cheap to buy separately.
The AMD Ryzen 3 PRO 3200G still has acceptable power consumption and is not expensive, but it is no longer available in most stores.

The above two CPUs seem to me to be the only viable options, unless you go for very old (and insecure) hardware.

 

3 hours ago, K1LLA_KING_KONG said:

And you mention TrueNAS 🤢 I'm sorry, I tried it and hated it. It's very difficult to get any kind of support and guides. For me, the main benefit of Unraid is that there is a multitude of YouTube guides for almost any Docker install and config. I mean, I couldn't even get Nextcloud working locally on TrueNAS, let alone a remote connection. Now I have everything working 100% on Unraid; it still takes some research and trial and error, but the process is not infuriating like TrueNAS. Also, I wouldn't stand a chance without the GUI 😂
But I do understand what you're saying, and it may be more ideal if I were experienced with all the shell commands.

Unraid is based on Slackware. Slackware was a very high-quality distro 10-15 years ago, but like most Linux distros, its quality has gone down a lot in the last 10-15 years due to the commercialization of Linux. The point I want to make is that FreeBSD in 2023 is a much more robust foundation than Slackware in many areas.

 

https://www.linuxquestions.org/questions/slackware-14/what-is-wrong-with-slackware-these-days-865294/

 

 

https://distrowatch.com/weekly.php?pollnumber=344&myaction=NewVote&issue=20220214&newvote=1#slackware

While I was working on this review I spent some time on social platforms like SlashDot and the Slackware Reddit forum where people were talking about the new 15.0 release. One thing which I kept noticing was people celebrating the new release kept talking about how they got their first start with Linux by installing Slackware from floppy disks. People remembered fondly running Slackware in school back in 1997 or seeing a boxed copy of Slackware for sale in 1995. Something eventually occurred to me: no one in any of these discussions mentioned having their first start with Slackware after the year 2001. It suggests to me not many new people have wandered into the Slackware community in the past 20 years and, given the project's apparent intent to avoid evolution, I suspect not many newcomers are going to try out Slackware and stick with it.

 

I know some people in the Slackware community will argue that not all change is progress. And I agree. However, I would also argue there can be no progress without change. Slackware refuses to accept almost all change and, while it side-steps a few problems this way, it also misses out on all the progress made in the past two decades.

 

https://www.linuxquestions.org/questions/slackware-14/my-slackware-crashes-frequently-4175717818/

 

https://www.linuxquestions.org/questions/slackware-14/daily-crashes-again-hardware-or-software-4175719195/

 

Unraid has only supported ZFS for a few months, while FreeBSD has had the best ZFS support of any operating system for over a decade.

 

I also think you haven't tested TrueNAS long enough. It may look overwhelming to newbies at first, but if you work with it for a while, you will find that it is user-friendly and also very well documented.

 

If you have specific problems you can also always find quick help on their forums:

https://www.truenas.com/community/

 

Most questions have already been answered there. As for getting Nextcloud to work on TrueNAS, that's only three or four mouse clicks' worth of work, as far as I remember. I could do all these kinds of things without ever consulting a tutorial.

 

 

 

OS: FreeBSD 13.3  WM: bspwm  Hardware: Intel 12600KF -- Kingston dual-channel CL36 @6200 -- Sapphire RX 7600 -- BIOSTAR B760MZ-E PRO -- Antec P6 -- Xilence XP550 -- ARCTIC i35 -- 980 PRO 500GB


1 hour ago, The Hope said:

Well I had heard a lot of people say that you had to have DDR5 RAM for the ECC feature. But after looking at your link I understand that it is not at all the same as the normal ECC memory. It is rather a method to sell defective RAM to people, and also to mislead people with the 'ECC' naming.

This is not at all the case. Consumer-facing DDR5 has ECC working on the DIMMs only. This is a good thing... not a bad thing.
 

Typical ECC has historically also covered data in flight, which is better, but that doesn't mean ECC only on the DIMMs themselves is bad; it just isn't as good from a corruption-mitigation standpoint. It's certainly better than none at all, which has been the norm for consumer systems forever.
 

TL;DR: no one would argue that some error correction is worse than no error correction at all, and that's what consumer DDR5 brings to the table. If you need true end-to-end ECC, you still need to go with server-grade gear.

 

1 hour ago, The Hope said:

Unraid has only supported ZFS for a few months, while FreeBSD has had the best ZFS support of any operating system for over a decade.

Most people running Unraid don't actually want ZFS. I agree TrueNAS is a better enterprise solution, and it's what I run myself. But a lot of folks want to be able to slowly add storage to their NAS by adding a drive at a time, and ZFS doesn't let you do this... If someone does want to use ZFS, as it is a far superior file system, they should use TrueNAS or roll their own, but I wouldn't suggest BSD... I do run the legacy BSD TrueNAS, but these days I would use Scale if I were starting over (I will migrate one of these days...). Being Debian-based, it is so much better at all things virtualization. KVM is fantastic, and probably the biggest reason to go with Scale over Core.

 

Also, I wouldn't call Docker insecure... Docker is perfectly fine.
 

5 hours ago, K1LLA_KING_KONG said:

Also, dual core? I feel like 4 cores would be a minimum. 2 cores would be enough for a basic NAS, but if I ever want to run a VM...

This always leads to misunderstanding. For reference, I ran my homelab on an i3-6100 for years; that's a 2-core/4-thread part. I ran ESXi as my hypervisor (I don't recommend this, but at the time it was a good solution for me), with VMs consisting of:

TrueNAS

3x Ubuntu Server (one held a Plex server)

Home Assistant

Windows LTSC

a few other random but lightweight VMs

a handful of Docker containers, including Pi-hole

 

The CPU was never the limiting factor; RAM eventually became the issue and prompted an upgrade to used server gear so I could run much more RAM and more PCIe devices.
 

That said, these days I think a current-gen i3 makes a perfect homelab CPU. PLENTY of power for a long time to come, and very well priced. If you need more PCIe or want true ECC (not required, but not a bad idea), used server gear on eBay is a great option. I got my mobo, CPU, RAM, and NVMe riser card, all used on eBay, for ~500 bucks for the homelab parts seen in my signature.
 

If you need more SATA ports than whatever mobo you find offers, an HBA is what you will want. Look for Dell H310s on eBay; specifically, you need one flashed to IT mode (this can be done yourself as well, but it's sort of a pain), and you can find them with SAS-to-SATA breakout cables, pre-flashed to IT mode, for about 50 bucks shipped.
 

Do you really need a 10GbE NIC? An Unraid box isn't going to saturate 10GbE networking anyway. Unraid reads and writes to a single drive at a time, since it isn't actually RAID. A hard drive these days does about 150 MB/s, which is just over gigabit. I can get about 5 Gbit/s to and from my TrueNAS box, and it has a 10-drive-wide Z2 array...
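To put rough numbers on that (a back-of-envelope sketch; the drive and link figures are typical assumptions, not measurements):

```python
# Rough throughput sanity check: single-drive Unraid vs. network links.
# All figures are nominal/typical assumptions.
hdd_mb_s = 150               # sequential throughput of a modern HDD, MB/s
gbe_mb_s = 1_000 / 8         # 1 GbE line rate = 125 MB/s
ten_gbe_mb_s = 10_000 / 8    # 10 GbE line rate = 1250 MB/s

print(f"single HDD:  {hdd_mb_s} MB/s")
print(f"1 GbE link:  {gbe_mb_s:.0f} MB/s")      # one HDD already edges past this
print(f"10 GbE link: {ten_gbe_mb_s:.0f} MB/s")  # far beyond a single drive
print(f"HDDs needed to fill 10 GbE: {ten_gbe_mb_s / hdd_mb_s:.1f}")
```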

Rig: i7 13700k - - Asus Z790-P Wifi - - RTX 4080 - - 4x16GB 6000MHz - - Samsung 990 Pro 2TB NVMe Boot + Main Programs - - Assorted SATA SSD's for Photo Work - - Corsair RM850x - - Sound BlasterX EA-5 - - Corsair XC8 JTC Edition - - Corsair GPU Full Cover GPU Block - - XT45 X-Flow 420 + UT60 280 rads - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - LTT Deskpad

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/Media Server: Proxmox VE host - - 512GB NVMe Samsung 980 RAID Z1 for VMs/Proxmox boot - - Xeon E5-2660 V4 - - Supermicro X10SRF-i - - 128 GB ECC 2133 - - 10x4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander - - TrueNAS + many other VMs

 

iPhone 14 Pro - 2018 MacBook Air


Even if you don't want to use TrueNAS software, you could consider their hardware. I have a TrueNAS Mini XL+ on which I installed my own OS, and it has nearly everything you want, I think. The main concern is a PCIe slot for further expansion: if you ever wanted to do hardware encoding of video or something, I'm not sure it has a slot, and even if it did, it would still be hard (if not practically impossible) given the form factor of the case. The hardware is nice for what it is, though.


11 hours ago, LIGISTX said:

If you need more SATA ports than whatever mobo you find offers, an HBA is what you will want. Look for Dell H310s on eBay; specifically, you need one flashed to IT mode (this can be done yourself as well, but it's sort of a pain), and you can find them with SAS-to-SATA breakout cables, pre-flashed to IT mode, for about 50 bucks shipped.

I have an H310 on my business server; it works great.

What are your thoughts on M.2-to-SATA adapters, where you have a spare M.2 slot?
[image: listing for a 5-port non-RAID SATA III 6Gb/s to M.2 B+M key adapter (PCIe 3.0 x2), from m.alibaba.com]
 

 

11 hours ago, LIGISTX said:

Do you really need a 10GbE NIC? An Unraid box isn't going to saturate 10GbE networking anyway. Unraid reads and writes to a single drive at a time, since it isn't actually RAID. A hard drive these days does about 150 MB/s, which is just over gigabit. I can get about 5 Gbit/s to and from my TrueNAS box, and it has a 10-drive-wide Z2 array...

No, I don't really. This is how I justify it: if the mobo comes with gigabit LAN, I would want to add a NIC, and if I'm going to add a NIC, it may as well be 10GbE. Also, my Unraid does not write straight to the HDDs; it writes to the NVMe cache first and moves the data to the HDDs later.

This is how I'm able to edit (Premiere Pro) off the server without compromise. It makes no difference whether the file is on the server or on the local machine's NVMe.

 

 

11 hours ago, LIGISTX said:

Most people running Unraid don't actually want ZFS. I agree TrueNAS is a better enterprise solution, and it's what I run myself. But a lot of folks want to be able to slowly add storage to their NAS by adding a drive at a time, and ZFS doesn't let you do this... If someone does want to use ZFS, as it is a far superior file system, they should use TrueNAS or roll their own, but I wouldn't suggest BSD... I do run the legacy BSD TrueNAS, but these days I would use Scale if I were starting over (I will migrate one of these days...). Being Debian-based, it is so much better at all things virtualization. KVM is fantastic, and probably the biggest reason to go with Scale over Core.

 

Yeah, I only tested Core, and as I did, I realised I should have been testing Scale. My options were Scale or Unraid. But honestly, I'm a big fan of Unraid now that I've got it all figured out. And you're also right about ZFS: I don't need it. I don't believe it's any faster when using NVMe anyway. If there's a bottleneck, it's going to be SMB.


22 minutes ago, K1LLA_KING_KONG said:

What are your thoughts on M.2-to-SATA adapters, where you have a spare M.2 slot?

I wouldn’t. I’d get a proper HBA. 
 

22 minutes ago, K1LLA_KING_KONG said:

If there's a bottleneck, it's going to be SMB.

100% this. You could set up iSCSI, I suppose, but it has its downsides as well.

Rig: i7 13700k - - Asus Z790-P Wifi - - RTX 4080 - - 4x16GB 6000MHz - - Samsung 990 Pro 2TB NVMe Boot + Main Programs - - Assorted SATA SSD's for Photo Work - - Corsair RM850x - - Sound BlasterX EA-5 - - Corsair XC8 JTC Edition - - Corsair GPU Full Cover GPU Block - - XT45 X-Flow 420 + UT60 280 rads - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - LTT Deskpad

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/Media Server: Proxmox VE host - - 512GB NVMe Samsung 980 RAID Z1 for VMs/Proxmox boot - - Xeon E5-2660 V4 - - Supermicro X10SRF-i - - 128 GB ECC 2133 - - 10x4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander - - TrueNAS + many other VMs

 

iPhone 14 Pro - 2018 MacBook Air


OK, so I liked the mobo suggested by @The Hope, although I think I will settle for the DDR4 version, as I already have 64GB of 3200 MT/s RAM on hand, and all this ECC talk is not really relevant without going for a full ECC server-grade setup anyway...

Motherboard: the ASRock B760M Pro RS/D4 only has 4x SATA ports, but it makes up for it with 3x M.2.

CPU: For only a bit more money I can go with an i3-12100, which is 4 cores & 8 threads with a PCIe configuration of 1x16+4 or 2x8+4; that's future-proofing and allows more performance for VMs and such.

Storage layout would be:

2x 1TB NVMe mirrored cache
4x HDD
2.5GbE LAN is enough for now
That leaves me with 2 PCIe slots: 1 for an H310 HBA card, which will give me an additional 8x SATA when needed, and 1 for a 10GbE NIC if needed in the future.
And I still have a spare M.2 slot, which has numerous expansion options.

I also found this 13-bay case https://enlight-indonesia.com/product/infinity-c-13/ which is way more than I need, but it's very good value and easy to work in.


 


2 hours ago, K1LLA_KING_KONG said:

CPU: For only a bit more money I can go with an i3-12100, which is 4 cores & 8 threads with a PCIe configuration of 1x16+4 or 2x8+4; that's future-proofing and allows more performance for VMs and such.

AFAIK the 2x8+4 PCIe configuration is limited to a few halo Z690/Z790 motherboards and is not present on B-series boards. Any PCIe x16 slots other than the one closest to the processor would be connected solely to the PCH, with up to 4 lanes. But still, they have plenty of bandwidth for a 10GbE NIC or anything else.

 

Such a configuration would be really decent anyway. 😋
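As a quick back-of-envelope check (nominal figures assumed, not measured), a PCH-connected x4 slot has bandwidth to spare for a 10GbE NIC:

```python
# Bandwidth budget of a chipset-connected PCIe 3.0 x4 slot vs. a 10 GbE NIC.
# ~985 MB/s per PCIe 3.0 lane after encoding overhead (nominal figure).
lane_mb_s = 985
lanes = 4

slot_mb_s = lane_mb_s * lanes   # ~3940 MB/s available through the slot
nic_mb_s = 10_000 / 8           # 10 GbE at line rate = 1250 MB/s

print(f"PCIe 3.0 x4 slot: ~{slot_mb_s} MB/s")
print(f"10 GbE NIC needs:  {nic_mb_s:.0f} MB/s")  # about a third of the slot's budget
```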


17 hours ago, LIGISTX said:

Most people running Unraid don't actually want ZFS. I agree TrueNAS is a better enterprise solution, and it's what I run myself. But a lot of folks want to be able to slowly add storage to their NAS by adding a drive at a time, and ZFS doesn't let you do this... If someone does want to use ZFS, as it is a far superior file system, they should use TrueNAS or roll their own, but I wouldn't suggest BSD... I do run the legacy BSD TrueNAS, but these days I would use Scale if I were starting over (I will migrate one of these days...). Being Debian-based, it is so much better at all things virtualization.

I have used all the Unix systems for a long time, including the most difficult Unix systems in existence (NixOS, OpenBSD, GNU Guix, Void Linux, DragonFly BSD, Slackware, Devuan, etc.).

 

And I also have very in-depth knowledge of file systems. Btrfs is not ready for professional use at the moment: only RAID10 works reliably on Btrfs, and all other RAID modes are currently buggy. It also has the problem that its current documentation completely contradicts many best practices. Red Hat recently said, quite literally, that they too believe Btrfs is not suitable for their professional customers, and they have completely discontinued their investment in Btrfs and started developing another file system.

 

XFS lacks many crucial basic functions of ZFS and cannot guarantee data integrity. Moreover, although it is known as a good performer, tests show that it underperforms ZFS in many situations, and it is not competitive with ZFS in terms of features either. The stability of XFS is also a joke compared to ZFS: I've seen many times in CentOS that you can get XFS error messages just during installation, and after installation I also see many XFS error messages that I never see with ZFS.

 

ReiserFS is a dead file system.

 

So what file system does Unraid have that is suitable for storing data? In my opinion, just ZFS. But I don't think its ZFS is competitive with FreeBSD's, since even Debian, NixOS, and Void Linux don't have a ZFS implementation on par with FreeBSD's.

 

Quote

KVM is fantastic, and probably the biggest reason to go with Scale over Core.

https://klarasystems.com/articles/virtualization-showdown-freebsd-bhyve-linux-kvm/

 

https://www.youtube.com/watch?v=uV61mVYsFM8

 

https://it-notes.dragas.net/2023/03/14/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-3/

Several Linux VMs are just the basis on which Docker runs. One of them (not even among the busiest) started, every 12/15 hours, to completely freeze. It stopped responding to ping, and it was impossible to give any type of command from the console. In a word: stuck.

I tried various solutions such as changing the storage driver, the number of cores, the distribution (from Alpine to Debian), etc., but none of these operations solved the issue. I also noticed that the problem occurs with all Linux VMs, but only those with a recent kernel (> 5.10.x) freeze, while the others continue to work. The problem does not occur, however, with the *BSDs.

The hardware specifications of the destination physical server are slightly better than the starting one, but the final performance of the setup has greatly improved. The VMs are very responsive (even those that were previously LXC containers running directly on bare metal) and, thanks to ZFS, I can make local snapshots every 5 minutes. In addition, every 10 minutes, I can copy (using the excellent zfs-autobackup) all the VMs and jails to other nodes both as a backup and as an immediate restart in case of disaster. I just need to map the IPs, and everything will start working very quickly. Proxmox also allows you to perform this type of operation with ZFS, but you still need to have Proxmox (in a compatible version) on the target machine. With the current setup, I only need any FreeBSD node that supports bhyve.

For setups like the one described, the new configuration based on FreeBSD has shown significantly interesting performance and greater management and maintenance granularity.

I'm totally satisfied with my migration and the result is far better than I expected.
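For what it's worth, the snapshot-and-replicate loop described there is conceptually very simple. A minimal sketch of the idea (dataset and host names are placeholders; a real tool such as zfs-autobackup would send incremental streams after the first full copy):

```python
import subprocess
from datetime import datetime

# Placeholders -- substitute your own pool/dataset and backup host.
DATASET = "tank/vms"
TARGET_HOST = "backup-node"
TARGET_DATASET = "tank/vms-backup"

# 1. Take a cheap, atomic local snapshot.
snap = f"{DATASET}@auto-{datetime.now():%Y%m%d-%H%M%S}"
subprocess.run(["zfs", "snapshot", snap], check=True)

# 2. Replicate it to another node over SSH (-F lets the receiver roll back
#    to the last common snapshot before applying the stream).
subprocess.run(
    f"zfs send {snap} | ssh {TARGET_HOST} zfs recv -F {TARGET_DATASET}",
    shell=True,
    check=True,
)
```

Run that from cron every few minutes and you have the same basic snapshot-plus-replication scheme, minus the bookkeeping a dedicated tool handles for you.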

 

Quote

Also, I wouldn't call Docker insecure... Docker is perfectly fine.

Technically, there is nothing about Docker that is properly secured, and that makes sense, because the security aspect was completely skipped during Docker's development. The images on the Hub are also full of malware. And finally, there is a problem with their patches: the developers often notice that after patching a problem they have introduced a new security vulnerability. Jails existed 10 years before Docker, so they have been audited for 10 more years by extremely skilled developers, which can't be said of Docker at all.

https://www.reddit.com/r/freebsd/comments/5vfj3w/comment/de1ujes/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

https://www.bleepingcomputer.com/news/security/docker-hub-repositories-hide-over-1-650-malicious-containers/

https://www.helpnetsecurity.com/2023/02/23/hidden-vulnerabilities-docker-containers/

https://www.cvedetails.com/vulnerability-list/vendor_id-13534/product_id-28125/Docker-Docker.html

https://www.csoonline.com/article/570137/half-of-all-docker-hub-images-have-at-least-one-critical-vulnerability.html

https://thehackernews.com/2022/09/hackers-targeting-weblogic-servers-and.html

https://labs.watchtowr.com/i-dont-need-no-zero-dayz-part-1-docker-containers/

https://medium.com/@azizulmaqsud/docker-container-revolted-first-but-now-requires-more-security-cab3eaf29091

https://www.crowdstrike.com/cybersecurity-101/cloud-security/exploitation-of-misconfigured-image-containers/

 

Quote

The CPU was never the limiting factor; RAM eventually became the issue and prompted an upgrade to used server gear so I could run much more RAM and more PCIe devices.

That's also my experience. You already have to virtualize a lot to go above 16 GB, and most motherboards now support 64 GB or 128 GB of RAM.

 

Quote

Do you really need a 10GbE NIC?

Most likely his network will be the bottleneck, and a 2.5GbE connection will give exactly the same performance as a 10GbE connection.

 

Quote

What are your thoughts on M.2-to-SATA adapters, where you have a spare M.2 slot?

They have become unnecessary, simply because M.2 storage has become just as cheap as HDD storage; M.2 storage can be produced very cheaply.

 

Take the Kingston NV2 2TB M.2 PCIe 4.0 x4, for example, which you can buy new for $40.59.

OS: FreeBSD 13.3  WM: bspwm  Hardware: Intel 12600KF -- Kingston dual-channel CL36 @6200 -- Sapphire RX 7600 -- BIOSTAR B760MZ-E PRO -- Antec P6 -- Xilence XP550 -- ARCTIC i35 -- 980 PRO 500GB


2 hours ago, The Hope said:

They have become unnecessary, simply because M.2 storage has become just as cheap as HDD storage; M.2 storage can be produced very cheaply.

 

Take the Kingston NV2 2TB M.2 PCIe 4.0 x4, for example, which you can buy new for $40.59.

Where I am (Indonesia), I found my best option was a 4TB PNY.

[screenshot: local price listing for the 4TB PNY NVMe drive]

 

So with 3 M.2 slots, that's 12TB of NVMe (un-mirrored), which is HEAPS.

But I wish I could get 2TB for $40 🙄


13 minutes ago, K1LLA_KING_KONG said:

Where I am (Indonesia), I found my best option was a 4TB PNY.

So with 3 M.2 slots, that's 12TB of NVMe (un-mirrored), which is HEAPS.

But I wish I could get 2TB for $40 🙄

I think that is a temporary deal.

You can see that M.2 storage is getting closer and closer to the price of normal HDDs.

 

[screenshots: M.2 SSD and HDD price listings]

OS: FreeBSD 13.3  WM: bspwm  Hardware: Intel 12600KF -- Kingston dual-channel CL36 @6200 -- Sapphire RX 7600 -- BIOSTAR B760MZ-E PRO -- Antec P6 -- Xilence XP550 -- ARCTIC i35 -- 980 PRO 500GB


4 hours ago, The Hope said:

So what file system does Unraid have that is suitable for storing data? In my opinion, just ZFS.

I don't think you understand why people pick Unraid...

 

I believe most folks pick XFS, and the sole reason for this is the flexibility of how Unraid lets you add drives to the array as you can afford them. This is a huge benefit to many home users, as they don't feel the need to, or cannot, purchase enough storage up front to build out a ZFS array. This is just a fact of life... and these arrays work fine enough for most people.
 

No one is saying Unraid and XFS are ready for enterprise; they aren't. But that is not the point. For many people, it'll get the job done.
 

I agree ZFS is the best file system we currently have, and that's why I run it. But we need to understand its limitations as they apply to other people's money and requirements... If someone can't build out a proper ZFS setup, sitting there and saying "I have used every Linux distro that exists and the only file system you should use is ZFS" is not actually helpful to them... That's similar to telling someone Windows is subpar to Linux when the software they use is only available on Windows; they sort of don't have a choice.
 

I'd also love to see which Docker containers have been found to be malware... a large portion of the world runs on Docker at this point. I'm sure there are containers that are nefarious, but it's like downloading any Windows software... do some due diligence before you spin up containers.

Rig: i7 13700k - - Asus Z790-P Wifi - - RTX 4080 - - 4x16GB 6000MHz - - Samsung 990 Pro 2TB NVMe Boot + Main Programs - - Assorted SATA SSD's for Photo Work - - Corsair RM850x - - Sound BlasterX EA-5 - - Corsair XC8 JTC Edition - - Corsair GPU Full Cover GPU Block - - XT45 X-Flow 420 + UT60 280 rads - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - LTT Deskpad

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/Media Server: Proxmox VE host - - 512GB NVMe Samsung 980 RAID Z1 for VMs/Proxmox boot - - Xeon E5-2660 V4 - - Supermicro X10SRF-i - - 128 GB ECC 2133 - - 10x4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander - - TrueNAS + many other VMs

 

iPhone 14 Pro - 2018 MacBook Air


My NAS has a Ryzen 5 Pro 2400G with 64GB of ECC DDR4 on an ASRock B450M Pro4. ECC is recognized and used by the OS, since it is a Pro chip.

 

This CAN be done on the cheap using modern components as long as you get creative.


13 hours ago, 2dfx said:

My NAS has a Ryzen 5 Pro 2400G with 64GB of ECC DDR4 on an ASRock B450M Pro4. ECC is recognized and used by the OS, since it is a Pro chip.

 

This CAN be done on the cheap using modern components as long as you get creative.

Please tell me more... Did you run into any issues along the way?

What OS are you running, what's on it, and how does it perform?

Looking at the mobo, how could one be sure it runs ECC, besides your anecdotal evidence?
[screenshot: the B450M Pro4's memory specifications]



Also, this board is quite old, with only Gen 3 & 2 PCIe.
This is where the negatives outweigh the benefits for an ECC-capable system: it either has to be expensive server-grade or old.
Whereas a non-ECC system can be Gen 5 PCIe with an Alder Lake CPU; this is performance...

I'd like to consider a system that meets in the middle...


  • 5 months later...
On 7/9/2023 at 6:06 AM, The Hope said:

CPU: Intel Celeron G6900 3.4 GHz Dual-Core Processor
Motherboard: ASRock B760M Pro RS Micro ATX LGA1700 Motherboard
Case: KOLINK Satellite MicroATX Desktop Case
 

With the above components you can use DDR5 RAM modules that all have ECC functionality.

 

The motherboard gives you 3 Hyper M.2 + 4 SATA3

You have seven HDD/SSD slots in total.

 

You can use 1 M.2 slot for the operating system and you would still have 6 slots.

If you use HDDs/SSDs with large capacity, 6 (storage) slots is more than enough for any use case.

 

The GUI of an operating system is a very large attack surface, so it is always better not to use 'GUI operating systems' when you don't have to.

Also, Docker is one of the most insecure IT technologies, so it's better to use the more secure FreeBSD Jails.

 

Unraid setups with Docker are more insecure, and if you value security you have better options:

https://klarasystems.com/articles/building-your-own-freebsd-based-nas-with-zfs/

https://klarasystems.com/articles/part-2-tuning-your-freebsd-configuration-for-your-nas/

https://klarasystems.com/articles/part-3-building-your-own-freebsd-based-nas-with-zfs/

This being said, should I skip trying out Unraid any longer and try to find the program you suggest, if I plan on trying to access my movies away from home?


2 hours ago, Quozl said:

This being said, should I skip trying out Unraid any longer and try to find the program you suggest, if I plan on trying to access my movies away from home?

There is nothing wrong with Unraid, nor with Docker.
 

If you want to access data while outside your LAN, regardless of what OS your server runs, you will want to set up a VPN. WireGuard is a great and easy-to-use option. There are guides online showing how to set this up for Unraid; personally I can't help, as I don't use Unraid, but I assume it'll be some Docker solution, as that's what Unraid has as a built-in add-on.
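For reference, a minimal WireGuard client config is only a handful of lines; something like the sketch below, where every key, address, and hostname is a placeholder to swap for your own values:

```
[Interface]
# The client's own private key and its address inside the tunnel (placeholders)
PrivateKey = <client-private-key>
Address = 10.8.0.2/24

[Peer]
# The home server's public key and its public endpoint (placeholders)
PublicKey = <server-public-key>
Endpoint = your-ddns-hostname.example.com:51820
# Route only the home LAN through the tunnel
AllowedIPs = 192.168.1.0/24
# Keeps the NAT mapping alive when the client sits behind carrier NAT
PersistentKeepalive = 25
```

The server side needs a matching [Peer] entry for the client's public key, plus a port forward for UDP 51820 on the router.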

Rig: i7 13700k - - Asus Z790-P Wifi - - RTX 4080 - - 4x16GB 6000MHz - - Samsung 990 Pro 2TB NVMe Boot + Main Programs - - Assorted SATA SSD's for Photo Work - - Corsair RM850x - - Sound BlasterX EA-5 - - Corsair XC8 JTC Edition - - Corsair GPU Full Cover GPU Block - - XT45 X-Flow 420 + UT60 280 rads - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - LTT Deskpad

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/Media Server: Proxmox VE host - - 512GB NVMe Samsung 980 RAID Z1 for VMs/Proxmox boot - - Xeon E5-2660 V4 - - Supermicro X10SRF-i - - 128 GB ECC 2133 - - 10x4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander - - TrueNAS + many other VMs

 

iPhone 14 Pro - 2018 MacBook Air


On 12/13/2023 at 8:28 PM, Quozl said:

This being said, should I skip trying out Unraid any longer and try to find the program you suggest, if I plan on trying to access my movies away from home?

You might just want to use Nextcloud if you're trying to access your movies away from home. Nextcloud works well on FreeBSD and TrueNAS, and it also works well on most Linux systems, although I would say the Nextcloud setup procedure for Ubuntu is unnecessarily cumbersome compared to the installation procedure for FreeBSD and TrueNAS.

OS: FreeBSD 13.3  WM: bspwm  Hardware: Intel 12600KF -- Kingston dual-channel CL36 @6200 -- Sapphire RX 7600 -- BIOSTAR B760MZ-E PRO -- Antec P6 -- Xilence XP550 -- ARCTIC i35 -- 980 PRO 500GB

