I recently put my Stargate Blu-ray into my PS4 to watch it again, only to find the disc has started to rot and wouldn't play. That hit hard: my discs aren't going to work forever and I should back them up, but with nearly 1000 Blu-ray discs to back up I'm going to need some serious storage, and my current server isn't going to cut it. I have an old X99 motherboard with a 5820K and 16GB of RAM that I hope to repurpose for this (my old server uses a 2600K that is still going strong but is probably a bit dated, and the supplementary SATA controller on its motherboard has already failed, so it only has 4 good SATA ports). My previous storage needs were relatively meagre, mostly non-critical data I wasn't too fussed about losing (and the data I was fussed about was duplicated on my gaming PC anyway), but now, with this much data that I want redundancy for, it seems I'm going to have to use RAID, which I'm not familiar with, and I'd like some help figuring out how to set it up.

I hope to get some Seagate Exos X18 Enterprise 18TB drives (5-6, for ~72TB of storage plus redundancy) and a Silverstone CS383 (when available) to house them; having a backplane for the drives seems very handy given how difficult it is to get SATA power cables to fit without putting stress on the drive connectors.

 

I have some key requirements for the server:

  • Needs to be able to run some Windows-only services and applications.
  • Needs at least 1 drive of redundancy for bulk storage.
  • Needs storage to be shared to Windows PCs over the network.
  • Needs to be able to recover from hardware failure with only the bulk storage drives. In other words, it should be possible to transfer just the drives to a new PC and have access to all the data on them, without needing any data or hardware from the old PC.
  • When a drive fails, replacing it and rebuilding the array needs to be easy and hard to screw up.
  • This is not a high-availability server and doesn't need to be super fast; it'll probably be limited to a 1Gbit network anyway, which is plenty for how it will be used, and if it's unavailable for a day or two while a drive is being replaced, that's fine.
  • The RAID array being expandable would be a nice bonus.
  • Needs to work with PIA VPN at the OS level with the kill switch enabled for internet traffic, while still allowing direct LAN access. That seems to work in Windows 10, but I'm not sure about other OS options.

 

My questions:

  1. What OS? Windows 10 would be ideal if it weren't approaching end of life, and since the server will be online 24/7 that seems like a problem. Windows 11 is something I hate but could live with on a server, but TPM 2.0 might be an issue: I have an MSI X99A Gaming Pro Carbon, which seems to have a TPM header on it. If I bought a TPM module, would that be enough for Windows 11? Would I need a specific TPM module? Linux is another option if it's possible to run Windows 10/11 in a VM on it, but I'm not sure the hardware would support that. I was looking at TrueNAS SCALE, but I'm not sure how portable a RAID array would be if I used TrueNAS, and I'm also concerned about using something that complex as a relatively novice Linux user (basically just SteamOS on a Steam Deck, and a tiny bit of Ubuntu). I would have liked HexOS except that I need to run some Windows-based software, but if I have overlooked support for that, please let me know.
  2. RAID scares me. I've always been able to recover my data from drives on their way out because it was just a single drive: I could dump it into an external enclosure and run recovery software to restore 99%+ of my data. RAID seemingly adds so many more points of failure: hardware RAID seems like a total loss if the controller dies, I've seen indications that you need data about the RAID configuration, so if the drive holding that dies you lose it all too, and I've seen conflicting reports about needing to know drive order or which ports the drives were plugged into. That sounds like a bunch of extra points of failure, none of which have redundancy, which, given that the entire array is at risk, seems contrary to the purpose of RAID. Could someone clear up what the actual risks and points of failure are?
  3. What RAID configuration should I use? I was thinking RAID 5 or 6; how likely am I to need the 2 drives of redundancy in RAID 6 vs the 1 in RAID 5? I will probably be buying the drives all at the same time (and likely from the same batch), which probably adds risk, and it sounds like rebuilding the array after replacing a dead drive can take a while, at exactly the point when the other drives are also likely to be at risk of dying.
  4. How easy is it to add a new drive to the array to expand capacity?
  5. Can a RAID 5 array be upgraded to a RAID 6 array by adding another drive later?
  6. Can I easily pre-emptively swap a drive in a RAID array?
  7. I have my old GTX 1080 for the GPU, but I basically just need something for the Windows GUI and web browsing, so that seems like overkill. Are there cheap GPUs better suited to this, or should I just accept the increased power draw and heat? Also, does using an Nvidia card rule out a Linux-based OS?

 

I've tried looking into things myself, but with RAID and Linux having been around for so long, I've stumbled onto so much conflicting (and often outdated) information that it has been incredibly difficult to figure out what is actually true for which situation 😞

 

Thanks in advance for any help you can provide, and if you have any suggestions for things to look into that would also be great 🙂


I'd suggest investigating Proxmox as your OS. It's Linux based, so operating it from the CLI is always a fail-safe fallback option. TPM is not an issue for anything other than Win 11. TPM isn't bad per se; the mandatory requirement of it for Win 11 absolutely is 🤬

 

RAID isn't scary, as long as you understand how it works (plenty of tutorials on the web). However, you shouldn't buy all your drives from a single source: use at most 2 drives from a particular manufacturer if using RAID6, one if using RAID5. That mitigates the risk you've correctly identified of losing another drive while resilvering the array. As for the drives themselves, you can get 16TB refurbished enterprise drives from eBay for around 200-250 USD each. Don't skimp on the drives; they're the biggest asset in that server: they hold your data 😛 Use an HBA in IT mode to connect the drives to the system. It simplifies replacing the motherboard on future upgrades, planned or otherwise. Don't use hardware RAID from said motherboard, as that means finding that exact motherboard when the old one dies. Maybe not a big issue today, but in a few years: good luck!
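
To make the portability point concrete: with Linux software RAID (mdadm), the array metadata lives in a superblock on every member drive, so the disks themselves carry everything needed to reassemble the array. A minimal sketch (device names like /dev/sda are placeholders for whatever your disks enumerate as):

    # Inspect the RAID superblock mdadm writes to every member drive;
    # any single disk identifies the array it belongs to.
    mdadm --examine /dev/sda

    # On a new machine (or after a motherboard/HBA swap), scan all drives
    # and reassemble the array from that on-disk metadata alone;
    # no controller config, drive order, or port mapping required.
    mdadm --assemble --scan

    # Optionally record the assembled array in the local config so it
    # comes up at boot (path is /etc/mdadm/mdadm.conf on Debian/Ubuntu).
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf

That's also the direct answer to the drive-order worry in question 2: with software RAID you can shuffle the disks between ports and the array still assembles.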

 

I'm partial to RAID6 as it gives a little more leeway when a drive fails. Having said that, there is a write penalty for it, but on modern hardware that's less of an issue, since the CPU is fast enough to perform the required parity calculations pretty much on the fly. My current array is 6x 16TB in RAID6, meaning 64TB of effective storage with 2-drive redundancy. I started out with RAID5 (on older, much lower capacity drives), but upgrading to RAID6 is pretty easy: add the drive, resilver the updated array, and off you go. I'm using Webmin as the GUI for my NAS (Linux based), which is fine for beginners too, but it lacks the slick one-click GUI experience of TrueNAS, Proxmox and Unraid. Meaning it's a bit more involved, and for some more complex tasks you need to know what you're doing; it might take more steps than the one or two click experience of the aforementioned NAS OSes to set things up properly.
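
For reference, that RAID5-to-RAID6 upgrade with mdadm boils down to something like this (a sketch, not gospel: /dev/md0 and /dev/sde are placeholders, and the example assumes a 4-disk RAID5 growing into a 5-disk RAID6):

    # Add the new disk to the existing RAID5 array as a spare
    mdadm --add /dev/md0 /dev/sde

    # Reshape RAID5 -> RAID6, using the new disk for the second parity.
    # The backup file protects the critical section of the reshape from
    # power loss; keep it on a drive that is NOT part of the array.
    mdadm --grow /dev/md0 --level=6 --raid-devices=5 \
        --backup-file=/root/md0-reshape.bak

    # Watch reshape progress; expect it to take a long time on big drives
    cat /proc/mdstat

Pre-emptively swapping a healthy drive (question 6) is similar: add the new disk, then use mdadm --replace on the old one, which keeps full redundancy during the copy.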

 

Your 1080 is fine; Linux has decent official GPU support for Nvidia cards. But if you want to spend money, an Intel Arc B580 is generally a good choice.

 

HTH!

"You don't need eyes to see, you need vision"

 

(Faithless, 'Reverence' from the 1996 Reverence album)


2 weeks later...

Thanks for the suggestions. They led me down a heap of research rabbit holes, and eventually I settled on TrueNAS with a Mint VM, using a RAIDZ2 array, since new versions of ZFS support expanding the array with additional drives down the line. Unfortunately I live in Australia, where it's harder to find a reliable source of refurbished drives, but I got a reasonable deal on the 18TB Exos drives brand new. 2-drive redundancy certainly seems like the right call, especially since I'm getting them all at the same time, which makes a 2nd drive failure during resilvering all the more likely; I plan to replace a couple of them proactively over the next few years and use the replaced drives for non-critical storage, so that I end up with a better mix.
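
For anyone who finds this thread later, the ZFS side of what I landed on boils down to a few commands (a sketch with placeholder names; RAIDZ expansion needs OpenZFS 2.3+, which current TrueNAS SCALE releases include, and TrueNAS normally drives all of this through its web UI):

    # Create a 6-disk RAIDZ2 pool named "tank" (2-drive redundancy).
    # DISK1..DISK6 stand for /dev/disk/by-id paths, so the pool never
    # depends on which SATA port a drive is plugged into.
    zpool create tank raidz2 DISK1 DISK2 DISK3 DISK4 DISK5 DISK6

    # OpenZFS 2.3+ RAIDZ expansion: widen the existing raidz2 vdev by
    # one disk. "raidz2-0" is the vdev name reported by `zpool status`.
    zpool attach tank raidz2-0 DISK7

    # The pool travels with the disks: export on the old box, import on
    # the new one, and all the data comes along.
    zpool export tank
    zpool import tank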

