
How do you guys have your NAS OS drives set up?

Ziggidy

Hey all, 

 

I have a 2U Supermicro which I use as my NAS, and it also runs a virtual machine for Plex. It has been nearly rock solid for five years now. I did make a mistake when I initially set it up: I used a pair of old OCZ Vertex 60 GB SSDs, planning to keep a backup of the OS on the second drive, but one failed before that was ever done. That was a few years back. More recently I acquired a pair of 1U Supermicro servers; one runs Minecraft servers and the other an Ark Survival server. I just finished installing Ubuntu 20.04 LTS on a four-disk RAID 10 array of 120 GB SSDs, using the four 3.5" hot-swap bays on each chassis. At this point I want (maybe NEED is a better way to put it) a similar solution for my NAS to avoid, or at least minimize, downtime on the Plex server. I am thinking I may do another RAID 10 array with 4x 500 GB SSDs, but I'm not sure what I will go with.
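For reference, if I do the RAID 10 array with mdadm after install instead of through the Ubuntu installer, I assume it would look roughly like this (just a sketch; the device names sdb through sde are placeholders, check lsblk for the real ones first):

```shell
# Sketch: build a 4-disk RAID 10 array with mdadm on Ubuntu.
# Device names below are placeholders -- verify with lsblk first.
sudo mdadm --create /dev/md0 \
    --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Record the array so it assembles at boot, then refresh initramfs.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

# Watch the initial sync progress.
cat /proc/mdstat
```

After that it's the usual mkfs/mount/fstab steps on /dev/md0.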

 

How do you guys handle your "roll your own" NAS OS setups for the eventuality of a drive failure?

Gaming Rig - ASUS ROG Crosshair VIII Dark Hero, AMD Ryzen 7 5800X (stock), NH-D15 Chromax Black, MSI RTX 3070 Gaming X Trio, Corsair Vengeance LPX 32Gig (2 x 16G) DDR4 3600 (stock), Phanteks Eclipse P500A, 5x Noctua NF-P14 redux-1500, Seasonic FOCUS PX-850, Samsung 870 QVO 2TB (boot), 2x XPG SX8200 Pro 2 TB NVMe (game libraries), 2x Seagate BarraCuda ST8000DM004 8TB (storage), 2x Dell (27") S2721DGF, 2x Asus (24") VP249QGR, Windows 10 Pro, SteelSeries Arctis 1 Wireless, Vive Pro 2, Valve Index

NAS /Plex Server - Supermicro SC826TQ-R800LPB (2U), X8DTN+, 2x E5620 (Stock), 72GB DDR3 ECC, 2x Samsung 860 EVO (500GB) (OS & Backup), 6x WD40EFRX (4TB) in RaidZ2, 2x WD 10TB white label (Easy Store shucks), 2x Q Series HDTS225XZSTA (256GB) ZIL & L2ARC mirrored, Ubuntu 20.04 LTS

Other Servers -2x Supermicro CSE-813M ABC-03 (1U), X9SCL, i3-2120 (stock), 8 Gigs DDR3, 4x Patriot Burst 120GB SSD (Raid10 OS array), Mushkin MKNSSDHL250GB-D8 NVMe (game drive), Ubuntu 20.04 LTS - RAID 10 failed after a power outage... dang.


I have 2 OpenBSD backup servers that are accessed through SSHFS and rsync'd to each other, and they're set up slightly differently:

 

Primary: Sun T5220 with 2 drives in RAID 1 for the operating system, and 4 drives in RAID 5 for the data (both using OpenBSD's softraid). The T5220 has 2 additional bays which I use for spare drives in case of failure.  Separating the RAID arrays means a failure of the RAID 5 array will not affect the OS, so the OS can still boot to repair it.
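For anyone curious, creating those softraid volumes is roughly the following (a sketch with placeholder device names; each member disk first needs a disklabel partition of type RAID):

```shell
# Sketch: OpenBSD softraid setup via bioctl.
# sd0..sd5 and the "a" partitions are placeholders for this example.

# Two-drive mirror (RAID 1) for the operating system:
bioctl -c 1 -l sd0a,sd1a softraid0

# Four drives in RAID 5 for the data:
bioctl -c 5 -l sd2a,sd3a,sd4a,sd5a softraid0

# Each command attaches a new virtual sd(4) device, which you
# then newfs and mount like any ordinary disk.
```

Rebuilding onto one of the spare drives after a failure is also done through bioctl (the -R option).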

 

Secondary: IBM System x3200 M3 with 4 drives on the built-in LSI RAID controller. This is just a simple mirror done through the controller, with 2 drives as spares. The primary server backs up to this one nightly, and I bought a spare backplane of the same make/model to keep handy in case of backplane failure. I'm not a fan of hardware RAID controllers, but this machine doesn't like to boot from OpenBSD's softraid mirrors for some reason.

 

The backup syncs with a cloud server nightly, and both the primary and backup can restore from this cloud server if necessary. So, if a drive fails on either server, the data is available on the other one. If something knocks out both of them, the data is available on the cloud server as well.
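The nightly sync itself is nothing fancy, basically a cron job along these lines (hostnames and paths here are made up for illustration):

```shell
# Sketch: nightly sync script, run from the secondary via cron,
# e.g. a root crontab entry like:
#   0 2 * * * /root/nightly-sync.sh
# Hostnames and paths are placeholders.

# Pull the primary's data tree over SSH:
rsync -a --delete backup@primary.example.lan:/data/ /data/

# Then push the same tree up to the cloud server:
rsync -a --delete /data/ backup@cloud.example.net:/data/
```

With SSH keys set up for the backup user, the whole thing runs unattended.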


For my own virtualization servers I opt for RAID10 for VM storage and RAID1 for the boot disk. My own file server uses just a single boot disk and the main storage pool is a RAID60. Backup servers use RAID5.

 

If you have the money and the slots/ports, RAID 10 will give you the best read/write and IOPS performance, but if you need more storage or want to save money, parity RAID can work well depending on your throughput needs.
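The usable-capacity math for a hypothetical 4x 500 GB set makes the trade-off concrete:

```shell
# Usable capacity for common layouts with n drives of a given size.
n=4; size=500  # GB

echo "RAID 10: $(( n * size / 2 )) GB"    # mirrored pairs: half the raw space
echo "RAID 5:  $(( (n - 1) * size )) GB"  # one drive's worth of parity
echo "RAID 6:  $(( (n - 2) * size )) GB"  # two drives' worth of parity
```

So at four drives RAID 6 costs you the same space as RAID 10 while giving worse write performance, which is why parity mostly pays off at higher drive counts.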


Thanks all. I was just curious about that. I don't have any requirements for HA; my concern stems from the fact that I SUCK at the Linux command line, so RAID is my preferred way to ensure that even with a drive failure I can still boot my machines, lol.


