
NAS build recommendations for storage, torrents and Plex

I saw the LTT video on the JONSBO N1 and decided to finally build my own NAS. I was hesitant for a long time but finally decided to go ahead.

At the moment I have a Hetzner server running Cloudbox with basically Plex, torrents and a bunch of minor stuff.

 

The plan is to have the NAS at home, with the torrents going through a VPN, plus a bunch of services mostly for media but also for standard archiving for me and my family; I'm thinking about trying Nextcloud for that.

 

I've put together this list: https://it.pcpartpicker.com/user/giosann/saved/#view=pYky4D

 

All the HDDs would be in a pool with one drive for parity, the two 2TB SSDs would be in a separate "fast storage" pool (also with one for parity), and the 1TB one would be for cache.

I've picked a board with 3 M.2 slots so I can use 2 for SSDs and one for an M.2 to SATA adapter.

 

I'm not sure about a couple of things:

Since I'm planning to use a low-profile GTX 1650 (or the Quadro P1000, whichever I can find cheaper) for media encoding, I'm not sure which CPU to use. I've put in a Ryzen 5 5500, but I don't know when they will come out in my region; I'm open to other CPUs, or even to dropping the GPU in favor of an integrated one.

Are there any downsides to starting with 3 HDDs (1 for parity) and then adding up to 5 as time goes on? I could buy 5 or 6 (for cold storage) together, but it would be quite the expense. I was going to go with Unraid.

 

The GTX 1650 and Quadro P1000 both have a maximum of 3 encode streams (https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new). Since I would have at most 4 I think it would be enough, but it worries me a little. They would usually be 1080p->1080p, as most of my library is in 1080p, but sometimes they would be 4K->1080p.

 

Maybe some of the things here are too ambitious, like putting in 3 SSDs or trying to cram a GPU into an SFF case; any feedback is appreciated.

 

Thank you for your time.


A few things to consider for some cost and space/heat savings...

 

What is the reason for the SSDs?

Have you considered going with an Intel i5 11th or 12th gen with Intel graphics and using Quick Sync?

How many transcodes are you looking at on Plex? If it's just 1-2 at a time, CPU transcoding would be just fine as well on a modern CPU.


 



One SSD would be for cache and the other 2 for a "fast storage" pool. I don't have a specific idea in mind, but I thought a faster storage pool would come in useful if I need to have some data not on spinning metal.

Never mind... I saw that Unraid does not support multiple pools, so I guess I'll just keep the HDDs and one SSD for cache.

 

I'll look around at the i5s and see what their capabilities are. My transcoding needs would be, in the very worst case, 4 streams, of which one 4K->1080p and the others 1080p->1080p; usually only 1 or 2 transcodes, 1080p->1080p.

 

Thanks!



 

SSD cache will be pointless, especially in UnRAID, as it's purely a file-on-disk read cache only. If you're using UnRAID then I'd just use the NVMe to run your Docker containers or any VMs off, and not worry about a cache at all.

 

UnRAID doesn't support multiple pools, but you can cheat that in multiple ways.

1. You can use the ZFS plugin and create a ZFS pool (which you can configure through the terminal like on any Linux distro; see the sketch below) and assign the paths as needed; or

2. UnRAID does support multiple cache pools. You can create a cache pool completely independent of the array; this is what I do for my VMs, in a mirror. So you can see I have one cache which is the cache for my RAID6 pool, and I have a second cache which is a mirrored pool, e.g.:

 

[Screenshot: two cache pools shown in the UnRAID UI]
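For option 1, the pool itself gets created from the terminal; a minimal sketch of what that could look like scripted in Python (pool name and device paths are placeholders, and it assumes the ZFS packages from the plugin are already installed):

```python
# Minimal sketch: create a mirrored ZFS pool from a script (run as root).
# Pool name and device paths are placeholders; assumes the ZFS tools are installed.
import subprocess

def create_mirror_pool(pool: str, dev_a: str, dev_b: str) -> None:
    # ashift=12 aligns the pool to 4K sectors, which suits most modern SSDs/HDDs
    subprocess.run(["zpool", "create", "-o", "ashift=12", pool, "mirror", dev_a, dev_b], check=True)
    # Carve a dataset out of the pool; its mount point can then be handed to Docker/VMs
    subprocess.run(["zfs", "create", f"{pool}/fast"], check=True)

if __name__ == "__main__":
    create_mirror_pool("fastpool", "/dev/disk/by-id/nvme-example-1", "/dev/disk/by-id/nvme-example-2")
```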

 

 

An 11th/12th gen i5 will easily transcode 4 streams, especially if you use Quick Sync (hardware acceleration). The newer Intel UHD 700 series is excellent for transcoding. The 700 series will also support all the encoding/decoding options you'd probably want to use, and is equivalent to an RTX 2000 series as far as supported features go. It's better than a GTX 10 series or Quadro P1000.
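If you want to sanity-check Quick Sync outside of Plex, something along these lines works as a rough test (a sketch only: it assumes ffmpeg was built with QSV support, the file names are placeholders, and exact flags vary between ffmpeg versions and drivers):

```python
# Rough sketch: run a Quick Sync (QSV) transcode with ffmpeg from Python.
import subprocess

def transcode_qsv(src: str, dst: str, height: int = 1080, bitrate: str = "8M") -> None:
    cmd = [
        "ffmpeg", "-y",
        "-i", src,
        "-vf", f"scale=-2:{height}",  # software downscale, keeps the aspect ratio
        "-c:v", "h264_qsv",           # Intel Quick Sync H.264 encoder
        "-b:v", bitrate,
        "-c:a", "copy",               # pass the audio through untouched
        dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    transcode_qsv("sample-4k.mkv", "sample-1080p.mkv")
```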


 


58 minutes ago, Jarsky said:

SSD cache will be pointless, especially in UnRAID, as it's purely a file-on-disk read cache only. If you're using UnRAID then I'd just use the NVMe to run your Docker containers or any VMs off, and not worry about a cache at all.

Sorry, but I don't quite understand: isn't the cache SSD used as a write cache for the RAID5 array? You also have one, no?

I can "copy" your config and have one cache pool with a single SSD for the RAID5, and one with 2 SSDs in RAID1 for VMs/Docker and whatnot.

 

I've put an i5-12500 in the list; dropping the GPU would also allow me to use the PCIe slot for a PCIe to SATA expansion card, which seems easier to find than an M.2 to SATA card.

That leaves me with:

- 5 HDDs in RAID5 (4 connected to the motherboard, one to the expansion card)

- 1 SATA SSD as cache for the array, connected to the expansion card

- 2 M.2 M-key SSDs in a cache pool for VMs/Docker

- 1 M.2 E-key slot, which does not seem very useful

 

As for the HDDs, is it fine if I buy 3 now and the others as time goes on, or are there drawbacks like complicated rebuilding after adding a drive?

 

Thanks


1 hour ago, giosann said:

Sorry but I don't quite understand, isn't the cache SSD used as a writing cache for the RAID5 array? You also have one no?

I can "copy" from your config and have 1 cache pool with one SSD for the RAID5 and one with 2 SSDs in RAID 1 for VMs/Docker and whatnot.

 

Sorry, I meant write cache... it's late here 😂

You can do that if you want, but yeah, it is overkill... the cache adds no benefit, and I only have the SSDs in RAID0 because I had a few left over after upgrading to better NVMe drives.

 

1 hour ago, giosann said:

I've put an i5-12500 in the list; dropping the GPU would also allow me to use the PCIe slot for a PCIe to SATA expansion card, which seems easier to find than an M.2 to SATA card.

That leaves me with:

- 5 HDDs in RAID5 (4 connected to the motherboard, one to the expansion card)

- 1 SATA SSD as cache for the array, connected to the expansion card

- 2 M.2 M-key SSDs in a cache pool for VMs/Docker

- 1 M.2 E-key slot, which does not seem very useful

 

As for the HDDs, is it fine if I buy 3 now and the others as time goes on, or are there drawbacks like complicated rebuilding after adding a drive?

 

Thanks

 

You could just run a single M.2 NVMe, and use the lower one for an M.2 SATA adapter like this https://www.aliexpress.com/item/1005003580987667.html

That would still leave your PCIe slot free for future expansion. 


 


Thanks for your valuable feedback, I think I'll go with your suggestion and do:

  • 5 HDDs, of which one for parity (SATA)
  • 2 SSDs in a mirror for VMs/Docker/etc. (one in an M.2 slot and one on SATA)
  • M.2 to SATA expansion card
  • PCIe slot free for the future

I read some discussions on the Unraid forums regarding issues with 12th gen CPUs, but from what I gathered it was the higher-end CPUs that have both P and E cores; the i5-12500/12600 only have P-cores.

 

As for the HDDs, I guess I'll go for 3 now and expand later.

 

An unexpected problem has arisen with the case, and at this point I'm not even sure I'll buy the JONSBO N1.

I was looking for an AliExpress seller that ships from the EU to avoid Russia and customs issues, but 2 different sellers shipping from France and Spain have already cancelled my order for "lack of stock"...

I'll see if I want to buy from China from a seller that has already sold many, or pivot to a SilverStone DS380, which is actually sold in my area.

 

Thanks again


UnRaid seems like overkill for this if the only way they get ZFS is via a plugin and the CLI. However, if you have ZFS you have the ability to put a cache in front of everything.

 

Not all caches are created the same. Linux's typical cache is an LRU, whereas in ZFS you have the ARC. (The ARC is an adaptive LRU/MFU with 2 ghost lists that control how much is dedicated to LRU or MFU.) The ARC is extremely proficient in real-world workflows; a vanilla LRU, not so much.
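For reference, a plain LRU, the policy being contrasted with the ARC here, is only a few lines; a toy sketch:

```python
# Toy LRU cache, for contrast: the ARC additionally tracks frequency (an MFU side)
# and tunes the balance between the two with ghost lists.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data: OrderedDict[str, bytes] = OrderedDict()

    def get(self, key: str):
        if key not in self.data:
            return None                    # miss
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key: str, value: bytes) -> None:
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(2)
cache.put("a", b"1"); cache.put("b", b"2")
cache.get("a")                             # "a" is now the most recent
cache.put("c", b"3")                       # evicts "b", not "a"
print(list(cache.data))                    # ['a', 'c']
```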

 

You don't really need 2 pools either, unless you want that for some reason. ZFS's design is based around the idea of putting all your disks into a single pool, adding more as you need them, and separating things out with "datasets". Of course it can be done if you want. (Ubuntu does that by default to ensure boot compatibility with GRUB, but Linux on a ZFS root is the only OS that does that.)

RAID5 (raidz1) is deprecated and I'd advise against using it, depending on how much you like your data. The reason is that you have 1 parity block for each group of data blocks throughout your pool (ZFS does not use a single parity disk; it stripes the parity throughout the array for performance reasons), so when you lose a device you can calculate from the existing and parity blocks and restore the lost blocks.

So far so good, you now have data on 4 drives.
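To make the reconstruction step concrete, here is a toy single-parity example using plain XOR over equal-sized blocks, which is the basic idea behind RAID5-style parity (real raidz is considerably more involved):

```python
# Toy single-parity example: XOR parity lets you rebuild any one missing block.
# Real raidz stripes parity across all devices, but the reconstruction idea is the same.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]    # four data "disks"
parity = xor_blocks(data)                      # the parity block for this stripe

lost = 2                                       # pretend disk 2 died
survivors = [blk for i, blk in enumerate(data) if i != lost]
rebuilt = xor_blocks(survivors + [parity])     # XOR of survivors + parity = the lost block

assert rebuilt == data[lost]
print(rebuilt)                                 # b'CCCC'
```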

The reason it's not used is because... what happens if you get an uncorrectable disk error in one of the sectors while you are restoring (resilvering)? Ah, well, that is easy for RAID5: you throw the thing out, because it's hosed. 😄 As we expand storage we have more and more blocks, yet the per-block failure rate has not improved; the manufacturing tolerances are at or close to the physical limit. So as we add more and more blocks to a storage pool, the likelihood of encountering one of those blocks being bad rises. It's like rolling dice: the more rolls you make, the more chances you have to hit snake eyes, and in this case hitting snake eyes a single time means your data is gone. For home use I'd recommend double parity (raidz2) at a minimum, and enterprise has been at triple parity (raidz3) since 2017.
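The dice-rolling argument can be put into rough numbers; a back-of-the-envelope sketch, treating the often-quoted 1-per-10^14-bits consumer URE rate as an assumption:

```python
# Back-of-the-envelope: chance of hitting at least one unrecoverable read error (URE)
# while re-reading the surviving data during a rebuild. The 1-per-1e14-bits rate is the
# figure commonly quoted on consumer drive spec sheets, so treat it as an assumption.

def p_ure_during_rebuild(tb_read: float, ure_per_bit: float = 1e-14) -> float:
    bits_read = tb_read * 1e12 * 8
    return 1.0 - (1.0 - ure_per_bit) ** bits_read

# Rebuilding a 5 x 8TB single-parity array after one failure means reading ~32TB back.
print(f"{p_ure_during_rebuild(32):.0%}")   # roughly 92% chance of at least one URE
```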

 

Remember to scrub yourself (and your pool)

"Only proprietary software vendors want proprietary software." - Dexter's Law


Thanks for all the info @jde3, but I think that some of what I meant got lost in translation.

 

I don't plan to use ZFS; the plan is to have a normal Unraid installation with one pool for all the HDDs to store data and one pool with 2 SSDs to run VMs and Docker.

 

When I said RAID5/RAIDZ1 before I meant "RAID5/RAIDZ1-like", meaning the Unraid approach of having 1 parity drive.

 

If I go with TrueNAS Core and a RAIDZ2, I would need to buy all the drives together and have only 3 of 5 contribute to capacity from the get-go.

If I go with Unraid I can buy 3 drives now, of which one for parity, and with time I can either get to 5 with one parity drive or decide to have 2 parity drives.

 

I'll consider TrueNAS and ZFS RAIDZ2 only if I decide to buy all the disks together.

 

Thanks



Yeah, it's Linux MD RAID and it's primitive. The best I can say is: it's your data... GL.

 

I'm really not a fan of "home NAS products", UnRaid especially. They pay Linus to sell their software to novices... 90% of it (the stuff that does all the work) is free.

"Only proprietary software vendors want proprietary software." - Dexter's Law


I'll see what to do in the next few weeks. Thanks for the insight.

 

As for Linus, I don't really remember Unraid ever being among the sponsors, so... maybe a long time ago?



I'm sure it was. I've been a storage admin for years and I go to conferences; I've never heard of anyone using it except LTT people. TrueNAS markets at conferences, UnRaid markets on social media.

"Only proprietary software vendors want proprietary software." - Dexter's Law


I use UnRAID only because it's a convenient amalgamation of tools, with QEMU, Docker, storage, SMB and monitoring all under a nice UI, making it quick and easy to manage.

 

It's worth noting that while UnRAID is based on MD, it is its own thing.

As for building out an entire array on ZFS, the problem is that they currently do not support that natively, and at least 1 disk needs to be sacrificed to the primary pool.

 

I'm not sure LTT were ever sponsored by UnRaid, but it was the most suitable OOTB solution for the skillset of their staff at the time for doing their 2 Gamers 1 CPU videos, which wasn't an LTT idea either. People like Spaceinvader were already showing this off before LTT picked up the idea. Also, Wendell's Level1Techs was the first comprehensive source for doing ZFS on UnRAID, which was a featured setup on GamersNexus and is the source for LTT's video.

 

There is a massive community of users out there, outside of LTT. 

 

Yes, UnRAID's native storage isn't as resilient or as fast as ZFS with all its features, but it's perfect for this type of use case, where you have large contiguous files that you just want to store, such as a Plex media server.


 



So you are paying for the UI?

 

I suppose that is fine, but keep in mind that as your skillset improves you won't need the training wheels that a UI provides. And... to someone like me who can do it all from the CLI, web GUIs get in the way when trying to do advanced configurations. They can also hurt your ability to learn.

 

From Linux's perspective, ZFS is just a filesystem like any other. There is no real reason one couldn't put Unraid on a ZFS root other than that they either didn't want to do the work to make it possible or simply don't want to. The fact that it's a plugin speaks to the latter. Focusing on MD RAID instead suggests that they don't really care about their users' data integrity... they care about sales.

 

Ordinarily, if I want to do this kind of setup, I'll do it on either vanilla FreeBSD or Ubuntu, because all the options for what I want to do with it are left open and I don't need to hope someone has a plugin. However, there are other products out there that are completely free and at the same time have web GUIs capable of advanced configs. Take a look at XigmaNAS (the original FreeNAS before iXsystems forked it).

 

By the way, you can also put a web GUI on pretty much any system. The ancient Webmin is still out there and sometimes useful for certain situations. It supports nearly everything: Linux, FreeBSD, Solaris, you name it.

"Only proprietary software vendors want proprietary software." - Dexter's Law


6 hours ago, jde3 said:

I suppose that is fine, but keep in mind that as your skillset improves you won't need the training wheels that a UI provides. And... to someone like me who can do it all from the CLI, web GUIs get in the way when trying to do advanced configurations. They can also hurt your ability to learn.

My skill set? I've been using Linux for 22 years; I manage and build out all my web server and game server infrastructure (from when I ran a game hosting service) using the CLI, and personally I don't have a single Linux server with an X desktop, they're all headless. The vast majority of what I do in Linux is through the CLI. Skillset isn't the reason 😂

 

6 hours ago, jde3 said:

From Linux's perspective, ZFS is just a filesystem like any other.

Yup, well aware of this; I used to run ZFS and still use it for various tasks it's more suited to.

 

6 hours ago, jde3 said:

There is no real reason one couldn't put Unraid on a ZFS root other than that they either didn't want to do the work to make it possible or simply don't want to. The fact that it's a plugin speaks to the latter. Focusing on MD RAID instead suggests that they don't really care about their users' data integrity... they care about sales.

The plugin is community-developed and was just a way to get native ZFS on Unraid easily, since it's not part of the Slackware packages that Unraid is based on. You can see all the script does is install/upgrade ZFS, with very basic pool monitoring to display on the GUI: https://raw.githubusercontent.com/Steini1984/unRAID6-ZFS/master/unRAID6-ZFS.plg  ZFS support is on the books for UnRAID 7; this is all discussed in the prerelease discussions in the Unraid community.

 

As I said before though, UnRAID is not MD, it's their own proprietary pooling implementation.

One of the big advantages of the way UnRAID works is the ability to park drives, which helps with drive longevity and with power/heat, only spinning up drives that are required. It can even cache upper-level folders so it doesn't need to spin up drives to get a simple folder listing. Also, because files are contiguous (and data isn't striped) and each disk has its own file system, in a catastrophic failure you only lose the data on the disks that are lost; every other disk's data is intact.

Even right now, with all my "media" apps running, watching Plex, and having a look at the latest torrent downloads, only a handful of drives are active.

 

[Screenshot: UnRAID drive list showing only a handful of drives spun up]

 

6 hours ago, jde3 said:

Ordinarily, if I want to do this kind of setup, I'll do it on either vanilla FreeBSD or Ubuntu, because all the options for what I want to do with it are left open and I don't need to hope someone has a plugin. However, there are other products out there that are completely free and at the same time have web GUIs capable of advanced configs. Take a look at XigmaNAS (the original FreeNAS before iXsystems forked it).

 

By the way, you can also put a web GUI on pretty much any system. The ancient Webmin is still out there and sometimes useful for certain situations. It supports nearly everything: Linux, FreeBSD, Solaris, you name it.

 

Webmin is terrible for the use of my NAS that's running UnRAID. I also have a Debian server running Proxmox with ZFS and Cockpit as well, plus a TrueNAS server running ZFS which is purely an iSCSI server for my VMware lab. There's nothing elitist about using the terminal when it takes you 3x longer to do a job (e.g. using virsh to query and start a VM rather than opening a page and clicking a button). It's about using the right tools for the job; there's nothing wrong with using UIs for convenience in the right situation.

 


 



 

Well, it's good to know I'm talking to an expert; you never know around here. 🙂 I'm not trying to be elitist or anything like that, I'm trying to teach novices (of which you are not). You understand why I tell people to try and understand the mechanics below their beloved web GUI, right? Granted, I do agree with you that UIs have a time and a place.

 

As for Unraid: to me (someone who manages storage) it sounds like they have a lot of pseudo-features. What they call "standby mode" is easily accomplished in ZFS as well (FreeBSD does it automatically, tunable via the camcontrol timeout), and I highly doubt their cache is anywhere near as effective as ZFS's ARC in the real world. It feels like they want to be able to say on paper that they have ZFS's features to fill a sales sheet, but they just don't.

 

An implementation like that, based solely on LVM and MD parity, stands a high risk of corruption, and in my case it's highly impractical because I'd want the I/O bandwidth and IOPS of all the drives, not just a few. And saying you can get "part" of your data back after a failure isn't a selling point I'd be proud of. I really feel they are tailored for the home user market. Even TrueNAS can't really cut it in the enterprise. I evaluated it last year and found its enterprise features poorly implemented and lacking (its largest problems were that LDAP and SNMP were broken, and it forces you to use a single admin account for the UI; if those parts don't work, why bother?).

"Only proprietary software vendors want proprietary software." - Dexter's Law


5 hours ago, jde3 said:

As for Unraid: to me (someone who manages storage) it sounds like they have a lot of pseudo-features. What they call "standby mode" is easily accomplished in ZFS as well (FreeBSD does it automatically, tunable via the camcontrol timeout)

Not to the same extent, since your data is striped across the vdev(s); the big difference is that UnRAID stores files contiguously per drive. By default it uses what it calls "high water", where it will fill a percentage of a drive before moving onto the next, and it will continue doing this, getting progressively tighter each time. The next most popular is "fill", which completely fills a disk before moving to the next in the pool. The result is that a lot of chunks of data will be on a single drive. So say I have 3 x 8TB drives: it will fill the first one to 50% (4TB) before moving onto the next. The result being that recent data is likely to all be on the same drive, so if you have a Plex server and mostly watch recent releases, it will really only ever spin up that single drive for reading.
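A simplified sketch of those two allocation strategies (an illustration of the idea only, not UnRAID's actual implementation):

```python
# Simplified sketch of the two UnRAID allocation strategies described above
# (an illustration of the idea, not the real implementation). Sizes are in TB.

def pick_disk_fill_up(disks):
    """'Fill-up': keep writing to the first disk that still has free space."""
    return next(d for d in disks if d["free"] > 0)

def pick_disk_high_water(disks):
    """'High-water': fill disks down to half the largest disk, then half of that, and so on."""
    water = max(d["size"] for d in disks) / 2
    while water >= 0.5:
        for d in disks:
            if d["free"] > water:
                return d
        water /= 2
    return pick_disk_fill_up(disks)

disks = [
    {"name": "disk1", "size": 8, "free": 3},   # already below the first 4TB water mark
    {"name": "disk2", "size": 8, "free": 8},
    {"name": "disk3", "size": 8, "free": 8},
]
print(pick_disk_high_water(disks)["name"])     # disk2 gets the new file
```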

 

5 hours ago, jde3 said:

and I highly doubt their cache is anywhere near as effective as ZFS's ARC in the real world. It feels like they want to be able to say on paper that they have ZFS's features to fill a sales sheet, but they just don't.

It's not the same, since the ARC is memory, but if you mean L2ARC then yeah, it's still not the same thing. The "cache" is utilized via a FUSE mount, which is /mnt/user in the screenshot below, which shows the array data (disks /dev/mdX) including /dev/cache.

Because it is a faux cache, data has to be moved from /dev/cache to the /dev/mdX devices according to the balancing configuration, to be parity protected (hence why UnRAID does a daily move), and it's only a write cache, not a read cache. It is a "dumb" cache, and this limitation is why many do not use caching in UnRAID.
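Conceptually, that daily move is just relocating whatever landed on the cache device onto the parity-protected array; a toy sketch of the idea (paths are placeholders, and the real mover handles far more edge cases):

```python
# Toy "mover": relocate whatever landed on the cache device onto an array disk.
# Paths are placeholders; UnRAID's real mover also handles split levels, hard links,
# files that are still open, allocation strategy, and so on.
import shutil
from pathlib import Path

CACHE = Path("/mnt/cache/share")
ARRAY = Path("/mnt/disk1/share")

def move_cache_to_array() -> None:
    for src in CACHE.rglob("*"):
        if src.is_file():
            dst = ARRAY / src.relative_to(CACHE)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dst))   # once here, the file is parity protected

if __name__ == "__main__":
    move_cache_to_array()
```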

 

[Screenshot: df output showing the /mnt/user FUSE mount, the /dev/mdX array disks and /dev/cache]

 

5 hours ago, jde3 said:

An implementation like that, based solely on LVM and MD parity, stands a high risk of corruption, and in my case it's highly impractical because I'd want the I/O bandwidth and IOPS of all the drives, not just a few.

Again, that comes back to the point of using a suitable solution for the job. Most people who run UnRAID are using it at home for mass storage, or in small office or site server environments that don't need the IO. It's not an enterprise storage solution.

 

5 hours ago, jde3 said:

And saying you can get "part" of your data back after a failure isn't a selling point I'd be proud of. I really feel they are tailored for the home user market.

That's not how they market it; it was just a point I was making. The array has full parity protection, with the equivalent of RAID5/6, by having the equivalent of 1-2 disks' worth of parity bits, the same as any traditional RAID.

 

Hence why I was particular about saying catastrophic failure.

That is, say you have a 5-disk RAID5 and you lose 2 disks; in a traditional RAID your entire array is now inaccessible. In UnRAID you only lose those 2 inaccessible drives; the other 3 can be mounted and the data can be accessed as normal. Let's say this UnRAID is for a Plex server, per the OP's use case. If you can't afford to, or do not want to, back up 40TB of media that can be redownloaded, then UnRAID is a good compromise of speed for some form of recovery in the case of a major issue.
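As rough arithmetic on that example, under the stated assumptions (one parity drive, data spread evenly, worst case where every failure hits a data drive):

```python
# Rough arithmetic for that example: fraction of data still readable after k failed drives,
# assuming one parity drive, data spread evenly, and the worst case where every failure
# hits a data drive (no rebuild attempted).

def surviving_fraction_unraid(n_drives: int, n_failed: int) -> float:
    data_drives = n_drives - 1                   # one drive holds parity
    if n_failed <= 1:
        return 1.0                               # parity covers a single loss
    lost = min(n_failed, data_drives)
    return (data_drives - lost) / data_drives

def surviving_fraction_striped_raid5(n_drives: int, n_failed: int) -> float:
    return 1.0 if n_failed <= 1 else 0.0         # striping: two losses take out everything

print(surviving_fraction_unraid(5, 2))           # 0.5 -> half the files are still mountable
print(surviving_fraction_striped_raid5(5, 2))    # 0.0 -> the whole array is gone
```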

 

Obviously the trade-off is that when reading/writing you only have a single spindle's worth of speed, but in a lot of use cases for home and small office that's acceptable.


 


Thank you both for your extensive knowledge, but I think we are getting a bit off-topic (and honestly too in-depth for my limited knowledge) in this discussion of ZFS vs Unraid vs MD.

I can find my way around the Linux CLI but I'm by no means an expert; at work I deal exclusively with Windows Server and don't interact much with Linux-based systems. I think I'll just go with Unraid, it just seems easier; in the end this will just be a NAS with movies/TV and family data.

If I feel the need to, I'll back up the most important data somewhere else.

 

In the end I decided to go with the SilverStone DS380 with this configuration: https://it.pcpartpicker.com/user/giosann/saved/#view=pYky4D

 

Since the case supports up to 12 drives, for now I'll use one M.2 slot for an M.2 to SATA adapter and leave the other M.2 slot open, but in the future I'll probably just add another adapter, putting the HDDs on the M.2 to SATA adapters and the SSDs directly on the SATA connectors.

The PSU only comes with 3 SATA power connectors, so I guess I'll buy some Molex to SATA adapters.


10 hours ago, Jarsky said:

Not to the same extent, since your data is striped across the vdev(s); the big difference is that UnRAID stores files contiguously per drive. By default it uses what it calls "high water", where it will fill a percentage of a drive before moving onto the next, and it will continue doing this, getting progressively tighter each time. The next most popular is "fill", which completely fills a disk before moving to the next in the pool. The result is that a lot of chunks of data will be on a single drive. So say I have 3 x 8TB drives: it will fill the first one to 50% (4TB) before moving onto the next. The result being that recent data is likely to all be on the same drive, so if you have a Plex server and mostly watch recent releases, it will really only ever spin up that single drive for reading.

It's not the same, since the ARC is memory, but if you mean L2ARC then yeah, it's still not the same thing. The "cache" is utilized via a FUSE mount, which is /mnt/user in the screenshot below, which shows the array data (disks /dev/mdX) including /dev/cache.

Because it is a faux cache, data has to be moved from /dev/cache to the /dev/mdX devices according to the balancing configuration, to be parity protected (hence why UnRAID does a daily move), and it's only a write cache, not a read cache. It is a "dumb" cache, and this limitation is why many do not use caching in UnRAID.

[Screenshot: df output showing the /mnt/user FUSE mount, the /dev/mdX array disks and /dev/cache]

Again, that comes back to the point of using a suitable solution for the job. Most people who run UnRAID are using it at home for mass storage, or in small office or site server environments that don't need the IO. It's not an enterprise storage solution.

That's not how they market it; it was just a point I was making. The array has full parity protection, with the equivalent of RAID5/6, by having the equivalent of 1-2 disks' worth of parity bits, the same as any traditional RAID.

Hence why I was particular about saying catastrophic failure.

That is, say you have a 5-disk RAID5 and you lose 2 disks; in a traditional RAID your entire array is now inaccessible. In UnRAID you only lose those 2 inaccessible drives; the other 3 can be mounted and the data can be accessed as normal. Let's say this UnRAID is for a Plex server, per the OP's use case. If you can't afford to, or do not want to, back up 40TB of media that can be redownloaded, then UnRAID is a good compromise of speed for some form of recovery in the case of a major issue.

Obviously the trade-off is that when reading/writing you only have a single spindle's worth of speed, but in a lot of use cases for home and small office that's acceptable.

I still feel they are pseudo-features. The same thing can be done with ZFS: you have a stripe of mirrors and hot-swap standby disks (called spares in ZFS) that are powered down via the camcontrol timeouts. You can add more mirrors dynamically or convert the hot spares to mirrors to extend. You can automate any of these features using zed, the ZFS event daemon.

 

Also, is this *really* less work for the drives? Is it better to hammer one drive 100% of the time, or hammer a group of mirrors 20% of the time? Only the mirror that has the data needs to actually do work. It's less power, sure, but I wouldn't be so sure it extends the longevity of drives. In a parity RAID like raidz, only one drive is actually doing work at any one time, so it's balanced across all the hardware, and ZFS also auto-balances the data allocations (it's not necessarily a direct stripe across, and all of the disks are used for parity, not just one). I don't know the answer to that question, but I'm skeptical of the claims that it improves drive life. Heat (and vibration) is what actually kills drives, not use, and using a single drive all the time sounds like a lot of heat in one spot. Fun fact: did you know ZFS automatically balances an array to avoid putting parity on disks close to each other? This is done to prevent vibration hot spots in the array in a small enclosure. I think it's pretty neat they thought about that in the design of it.

 

The L2ARC works with the ARC (L1ARC); it's the same thing, it knows about a cache hit or miss in either the L1 or the L2 and can adjust itself accordingly, and it does not use the god-awful FUSE implementation (you know what the U stands for... userland is SLOW). It feels like Unraid is using duct tape to try to match ZFS's native kernel-based features.

 

I've never used Unraid, so maybe there are clever things in there, but after speaking with you about it my impression is that it's worse than I thought...

"Only proprietary software vendors want proprietary software." - Dexter's Law


7 hours ago, giosann said:

In the end I decided to go with the SilverStone DS380 with this configuration: https://it.pcpartpicker.com/user/giosann/saved/#view=pYky4D

Since the case supports up to 12 drives, for now I'll use one M.2 slot for an M.2 to SATA adapter and leave the other M.2 slot open, but in the future I'll probably just add another adapter, putting the HDDs on the M.2 to SATA adapters and the SSDs directly on the SATA connectors.

The PSU only comes with 3 SATA power connectors, so I guess I'll buy some Molex to SATA adapters.

 

That case has been reviewed by a number of people before; it is quite cramped and has terrible airflow. It also came out back around 2013, so add some hot NVMe drives to the mix and it will exacerbate the cooling issues with the case. The main issues are the solid front panel and the almost completely solid drive sleds. There's a mega-thread here that discusses the case and also has cooling mods that can be done to help with the issue.

 

A popular mITX form factor build that supports 6 x 3.5" drives is the Fractal Design Node 304 (if you aren't going to have a PCIe card).

Or a Fractal Design Node 804, which will hold up to 8 x 3.5" drives and an mATX-size build, in a split design that keeps the drives and system separate and cool.

 


 


I might be a bit off for your specific use case, so fact-check what I say. This is also based on my experience with ZFS, not UnRAID; ZFS land handles things differently.

1. CPU/GPU = overkill for most NAS tasks
2. 1TB of capacity for a cache drive is probably overkill unless you have a very large amount of medium-hot data and basically no hot data. As an FYI, you'll probably want to overprovision your cache drive (think restricting a 1TB drive to 500GB rather than using all of it) because performance plummets as the drive gets fuller and write amplification becomes a problem for drive lifespan. I'm an Optane fanboy; an Intel 800p on eBay for $120 is a great cache drive for ZFS-based tasks, and even the 58GB version is SOLID. Also, at least for ZFS, more RAM before more cache drive. For what it's worth, on my NAS my ARC (about 25GB of RAM) gets about 50x more hits than the 118GB Optane drive I'm using. I'm mostly running games and storing photos and a handful of videos. If I plopped in a 1.6TB Optane drive it'd basically get the same number of hits as my 118GB drive despite being 10x the size... nearly everything that gets hit is in RAM.
3. SSDs - any reason for not going NVMe? There's not much of a cost premium.
4. HDDs - verify that these are CMR, not SMR. SMR drives are AWFUL for NASes.

 

 

Quote

Sorry, but I don't quite understand: isn't the cache SSD used as a write cache for the RAID5 array? You also have one, no?

At least in ZFS land there isn't a concept of a write cache; it's more like a deferred write log (SLOG). Data goes through the SLOG (usually 5 seconds' worth) and then gets written to the HDD array. This is often more performant than a direct write to HDD, since the data gets "streamed" sequentially instead of placed in a more haphazard manner (think drive thrashing on HDDs). If the HDDs can't keep up even with a SLOG, then the alternative would be to have an SSD pool and to migrate that over to the HDDs offline, later (think at night), using a script of some sort. Usually 16-64GB of capacity is enough for a SLOG (it only needs to hold a few seconds of data, and usually just sync writes; async writes go into RAM and follow a similar process). A single 32GB Optane stick works AWESOME for this if you're using a 1GbE or possibly 2x1GbE network link. If you go up to a bigger Optane stick (think 118GB 800p or 100GB P4801X) you could probably partition off 20GB or so from that and have the device perform double duty (don't do this with regular SSDs, mixed duty KILLS NAND SSD performance - think a 90% performance hit and lots of wear and tear from write amplification).
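A very rough sketch of that "land on fast persistent storage now, flush to the slow pool in batches" idea; this is just the shape of it, nothing like real ZFS internals:

```python
# Very rough sketch of the "log now, flush to the slow pool in batches" idea.
# The dict stands in for the HDD pool and the list stands in for the fast log device.
import time

class DeferredWriteLog:
    def __init__(self, slow_pool: dict, flush_interval: float = 5.0):
        self.slow_pool = slow_pool
        self.flush_interval = flush_interval
        self.log = []
        self.last_flush = time.monotonic()

    def sync_write(self, offset: int, data: bytes) -> None:
        self.log.append((offset, data))          # durable on the log -> safe to acknowledge
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self) -> None:
        for offset, data in sorted(self.log):    # reorder so the HDDs see mostly sequential I/O
            self.slow_pool[offset] = data
        self.log.clear()
        self.last_flush = time.monotonic()

pool: dict = {}
wlog = DeferredWriteLog(pool, flush_interval=0.0)  # flush immediately, just for the demo
wlog.sync_write(4096, b"hello")
wlog.sync_write(0, b"world")
print(sorted(pool))                                # [0, 4096]
```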


Note that I'm not saying to USE ZFS. TrueNAS (what I'd suggest using) isn't trivial to get going, and TrueNAS SCALE (which uses Linux) is still a bit quirky and requires tuning (I had to jump through some hoops to use more than 16GB of RAM as ARC), such as fiddling with SMB permissions (this was way easier with QNAP and Synology).

 

On 4/6/2022 at 12:07 PM, jde3 said:

I'm sure it was. I've been a storage admin for years and I go to conferences; I've never heard of anyone using it except LTT people. TrueNAS markets at conferences, UnRaid markets on social media.

Another way of stating that is that ZFS is targeted towards well-trained, highly experienced and technical people willing to spend HOURS AND HOURS AND HOURS learning about its setup and configuration, and Unraid is targeted at people who just want something that runs and works well enough.

I'm bright. I've worked at top places like Google. I also struggled with setting up ZFS (first on a blank Ubuntu install and later on TrueNAS SCALE) at moments. It was fun when a TrueNAS update changed security permissions and suddenly I couldn't connect to my NAS and had to fiddle around for an hour...
I'm also NOT a sysadmin or storage admin, just a hobbyist with some math knowledge. I'm definitely better at using Linux now than a year ago, but that required a bit of pain.

 

As an aside, this is the performance I'm getting on my NAS (4-core Excavator, 32GB RAM, 118GB Optane L2ARC, 4 x 4TB HDDs in RAIDZ1, 10GbE NIC on a PCIe 3.0 x1 slot [max speed of ~800MB/s before overhead/inefficiencies]) - no drive set up as SLOG; I'll probably flush my L2ARC at some point, partition it and set part of it up as SLOG.

[Benchmark screenshot]



Yeah, I've worked with ZFS since it came out in Solaris 10. Sometimes my view of it does not reflect typical home use. It is good technology though, and can be used in a lot of novel ways. I don't really believe you need a high level of knowledge to use it, but you do if you want to do advanced tricks with it.

 

So, the SLOG. Due to its transactional design, ZFS can't "buffer up" writes; it wouldn't be safe to do so (how could it ensure parity of a cached write?). They either must be on the disk or not, but what it can do is write them to persistent storage, then reorder them into logical transaction groups and flush them to disk in batches. That is what the SLOG really does. It helps ZFS do fewer random writes, and there is a performance boost there. Also, the SLOG does not need to be very large; save most of the fast storage for L2ARC.

 

+1 cmndr.

 

One thing to add: keep in mind a raidz configuration only has access to a single device's bandwidth at a time, whereas mirrored pairs (like RAID 10) can access many disks at once. So the choice of mirrors or raidz will depend on your capacity or performance requirements. A RAID 10-like layout is a pretty good middle ground. ZFS can be very performant, but it takes some pre-planning.

"Only proprietary software vendors want proprietary software." - Dexter's Law


One bit to add to that is if you only care about performance to ONE client and don't need to share files from that client... iSCSI is a thing and it allows client-side block caching.

 

This is my NAS with a 58GB Optane stick caching it client-side (best overall test run).
Do note that I'm using lvm-cache on Linux (the closest thing on Windows is PrimoCache, which is AWESOME). This is probably NOT the most performant caching setup out there, and it's probably NOT tuned to use my Optane stick well. Also, generally speaking, caching systems tend to benchmark better for a given set of numbers than a straight-up SSD would, since the most-used files land in cache, provided you're using the system a bunch and aren't looking at new data (you get worse performance on new data).

[Benchmark screenshot]



Yes, iSCSI has some overhead (due to TCP); if you really need performance you might be able to do FC in the home, depending on your budget. FC targets can be backed by zvols.

 

Don't get me wrong, iSCSI is a good thing and solves a lot of problems on its own.

"Only proprietary software vendors want proprietary software." - Dexter's Law


9 hours ago, cmndr said:

I might be a bit off for your specific use case, so fact-check what I say. This is also based on my experience with ZFS, not UnRAID; ZFS land handles things differently.

1. CPU/GPU = overkill for most NAS tasks
2. 1TB of capacity for a cache drive is probably overkill unless you have a very large amount of medium-hot data and basically no hot data. As an FYI, you'll probably want to overprovision your cache drive (think restricting a 1TB drive to 500GB rather than using all of it) because performance plummets as the drive gets fuller and write amplification becomes a problem for drive lifespan. I'm an Optane fanboy; an Intel 800p on eBay for $120 is a great cache drive for ZFS-based tasks, and even the 58GB version is SOLID. Also, at least for ZFS, more RAM before more cache drive. For what it's worth, on my NAS my ARC (about 25GB of RAM) gets about 50x more hits than the 118GB Optane drive I'm using. I'm mostly running games and storing photos and a handful of videos. If I plopped in a 1.6TB Optane drive it'd basically get the same number of hits as my 118GB drive despite being 10x the size... nearly everything that gets hit is in RAM.
3. SSDs - any reason for not going NVMe? There's not much of a cost premium.
4. HDDs - verify that these are CMR, not SMR. SMR drives are AWFUL for NASes.

1. Since I'll have some Plex transcodes I'd like to have some GPU power; in the end I decided on the i5-12500, which has a nice UHD 770.

2. For now I've dropped the cache drive.

3. mITX boards have at most 2 M.2 slots, and it comes down to deciding between a PCIe to SATA expansion card or an M.2 to SATA expansion card.

With the first option I could maybe put 2 NVMe SSDs in the M.2 slots but I would not have the PCIe slot free; the second option would leave me maybe 1 M.2 slot free, which would not allow me to have an NVMe drive for parity, so I would have to use a SATA SSD as parity.

4. Thanks for the reminder, these drives are indeed CMR. They are also enterprise-class drives (weirdly cheaper than NAS-grade drives).

10 hours ago, Jarsky said:

 

That case has been reviewed by a number of people before; it is quite cramped and has terrible airflow. It also came out back around 2013, so add some hot NVMe drives to the mix and it will exacerbate the cooling issues with the case. The main issues are the solid front panel and the almost completely solid drive sleds. There's a mega-thread here that discusses the case and also has cooling mods that can be done to help with the issue.

 

A popular mITX form factor build that supports 6 x 3.5" drives is the Fractal Design Node 304 (if you aren't going to have a PCIe card).

Or a Fractal Design Node 804, which will hold up to 8 x 3.5" drives and an mATX-size build, in a split design that keeps the drives and system separate and cool.

 

Hmmm, the Node 304 could be a good option, and if I need 6 HDDs and some SSDs I could get creative with some double-sided tape; the SSDs should not care too much...

I excluded the Node 804 because it is too big. The DS380 is really small for having 8 x 3.5" and 4 x 2.5" drives; I could just take that and mod it with a piece of plastic and maybe some Noctua fans, but it is also 180-210€ (+ mods) compared to 115€ for the Node 304.

