
Advice on home file server

Hello,

 

I am Ben, new to these forums, and I am looking to upgrade my home server and would welcome some advice. My server is used as a Samba file share for my iMac and MacBook Air, as well as a Plex Media Server for the family.

 

My current setup is a 2014 Mac Mini running macOS High Sierra. It has 4GB of RAM and a 1.4GHz Core i5 processor.

Connected via USB are the following storage drives:

  • Alpha - RAID 5 enclosure with 4 x 6TB hard drives (18TB usable space) - day-to-day files and photo library
  • Beta - RAID 5 enclosure with 4 x 6TB hard drives (18TB usable space) - archived files, used less often but still needed
  • Gamma - RAID 5 enclosure with 4 x 4TB hard drives (12TB usable space) - Plex media
  • Zaxira - WD MyBook with 2 x 3TB hard drives in RAID (6TB usable space) - Time Machine backups for the Mac Mini and other Macs on the network
  • This gives a total usable space of 54TB, of which I am utilising 31.5TB across all the drives.

There are a mix of file sizes and types, but the majority of the space is used by large video files. My Photos library is 800GB and there are also several Final Cut Pro projects.

These drives are shared over gigabit Ethernet on my local network. For my iMac, which is my main computer and the one I work on from home, I have a Thunderbolt cable connected between the iMac and the Mac Mini. This enables me to connect to the Mac Mini directly and achieve write speeds of up to 250MB/s and read speeds of up to 350MB/s.

The Mac Mini is backed up to the cloud with a Backblaze Personal account. I love this service, as I get truly unlimited file backup for a fixed fee.

The system works great, except the Mac Mini often has file permission issues and I have to go onto it to correct them before I can create folders, rename files or move files from my iMac or MacBook Air. I am also running out of physical space, USB ports and wall power sockets to add additional RAID enclosures when needed.

 

What am I hoping to achieve?

  • I would like to consolidate the hardware - ideally into a server rack.
  • I am unsure on the operating system - I am thinking unRAID, but that could be because I have seen it work successfully for LTT. It needs to resolve the file permission issue and have a primarily GUI interface - I can cope with the occasional bit of command-line work. I have also had bad experiences with Windows storage pools, so I would like to avoid those.
  • I want to be able to mix and match drive sizes and manufacturers. My plan is to reuse the 3.5" drives from this build and add some additional larger drives. The system needs the ability to add drives as time goes on without having to rebuild the array.
  • The OS needs to be able to run the Plex Media Server, either directly on the OS or in a VM.
  • There needs to be an offsite backup solution. I realise Backblaze Personal only works on macOS and Windows, so I am looking at either an affordable cloud storage provider or building a second backup server to put in the server room at work. Not sure if the Duplicati Docker container would be good in this scenario? (The initial backup would be done over the local network.)
  • Gigabit or ideally 10GbE networking. Keeping my Thunderbolt direct link would be great if I could get it to work.

How can you help?

I ask the LTT community if you are able to advise on:

  • The most suitable OS to use based on my needs above.
  • Recommendations for hardware enclosures
  • Recommendations for hardware components, including processor and amount of memory
  • I am happy to use ex-enterprise gear or self-build, depending on what you all believe is best.

I thank you in advance for your contributions. If you need any further information on my plans, please ask.


First off, you are a madman running USB-connected RAID 0s on media archives you plan to keep.

 

I would suggest you get a Synology NAS for about 56 different reasons. Maybe two devices if you are serious, one for backup. You can run it in SHR, install Plex, and easily meet your requirements without having to do anything. 

 

Ex-enterprise gear is loud and not good for home. A self-build is great, but there's a huge delta between running a single-client Mac server and troubleshooting an unRAID box when it's hosed. Consider sticking with a Mac Mini and using something like a Drobo and its BeyondRAID format to mix and match drives. A permissions issue isn't a reason to change hardware; the age of the system and drives, the power usage and a USB RAID-0 octopus are the reasons to change.


8 hours ago, InFlightConsultant said:

First off, you are a madman running USB-connected RAID 0s on media archives you plan to keep.

This... what in god's name are you thinking, running all those RAID 0s, especially on your photos and archives??

 

I second InFlightConsultant on a Synology, and I would urge that this is a high-priority thing to set up and that you use RAID 5 at a bare minimum, preferably RAID 6.

They're perfectly capable of working in an Apple environment.



Thank you for the suggestions. And my mistake - Alpha, Beta and Gamma should have read RAID 5. I am not that mad.

 

I’ll take a look at a Synology NAS, as you are both suggesting this is the best route to go.

 

I still welcome additional comments.

 

Thank you.

 

Ben


So let me start by asking you this - how much time do you want to spend building/setting up/configuring the system, and how "pretty" do you need the interface to be (i.e. how familiar are you with a text terminal/console)?

 

There are a lot of potential solutions out there, and since you mentioned that you want to be able to mix and match hard drives (I will make the explicit assumption that you want to mix and match drive capacities as well) -- do you want it to be just one giant RAID 0 array, or do you want a series of RAID 5 arrays/volumes?


1 minute ago, alpha754293 said:


I am happy to take the time to build and configure the system. I’d prefer to do this with a nice GUI, as while I can use a terminal, anything beyond basic install and configuration will require some self-help guides.

 

Mixing and matching hard drive capacities is key here. The drives I have work, and while going forwards I will need more capacity, I want to continue to use the drives I’ve already invested in. I envision a mix of 10TB, 6TB and 4TB drives, phasing out the smaller sizes as drive capacities increase and costs per terabyte come down.

 

The number of parity drives the system has is less important to me, as long as parity is in place, whether that be RAID 5, 6 or greater. Going forwards I am looking at having 5 storage pools, but to maximise value I think it would make sense to have one massive array which is split up by the OS.


22 hours ago, InFlightConsultant said:

Ex-enterprise gear is loud and not good for home.

If you're running pretty much an exclusively Apple/macOS/iOS shop, then you'll want something that will support AFP.

 

Enterprise gear doesn't have to be loud.

 

It really depends on what you get.

 

My main server, when it is at idle, is maybe like 40 dB. Some people think that's loud, but for a server, it's really not. My old Sun Microsystems SunFire X4200 - now that thing is loud (106 dB), which, for obvious reasons, I don't use anymore.

 

The other thing about building a massive file server like this is that even if you have enterprise-grade cloud backups and/or offsite backups, the trick is going to be actually doing the remote backups.

 

Your internet connection will be at the mercy of your upload speed, and at 54 TB+ that starts becoming a problem: unless you have something like Google Fiber or FTTN/FTTH where you can actually upload at the full 1000 Mbps, you'll likely be in a perpetual state of trying to execute the remote backup. (The initial backup takes the longest, and there is a possibility that it may never complete before the next incremental is scheduled.)
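To put rough numbers on that, here is a quick back-of-the-envelope sketch in Python. The 31.5 TB is the OP's stated usage; the upload speeds and the 0.8 efficiency derating are purely illustrative assumptions:

```python
# Back-of-the-envelope initial-backup time (all numbers illustrative).
def backup_days(data_tb, upload_mbps, efficiency=0.8):
    """Days to upload data_tb terabytes at upload_mbps megabits/s,
    derated by `efficiency` for protocol overhead and other traffic."""
    bits = data_tb * 1e12 * 8                          # decimal TB -> bits
    return bits / (upload_mbps * 1e6 * efficiency) / 86400

for mbps in (20, 50, 100, 1000):
    print(f"31.5 TB at {mbps:>4} Mbps up: ~{backup_days(31.5, mbps):.0f} days")
# ~182 days at 20 Mbps, ~73 at 50, ~36 at 100, ~4 at full gigabit
```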

 

This is the problem that I have with my home data servers right now, at 90 TB, 60 TB, 24 TB, and 12 TB each. (Raw sizes.)

 

It is for that reason that I am looking at tape backups for my next build. The initial cost is high, but I'm at the point now where it is absolutely critical that I do that.

 

Having said that, you can get a 12-bay SATA enclosure off eBay for probably less than $700. You can also pick up 10 GbE NICs and cables relatively inexpensively. Getting a 10 GbE switch might be more challenging, but there is a 12-port switch that I think I found for < $600 (https://www.compsource.com/ttechnote.asp?part_no=QSW12088CUS&vid=2640&src=F).

 

The advantage of a home-built system using COTS hardware is that most of the time, if or when something goes wrong, it's a little easier to debug and troubleshoot.

 

I've actually stepped away from that and started using NAS appliances, but my next build (after my tape backups) will go back to actual servers for this reason.

 

So there are lots of options.


2 minutes ago, Phillips2010 said:

The number of parity drives the system has is less important to me, as long as parity is in place, whether that be RAID 5, 6 or greater. Going forwards I am looking at having 5 storage pools, but to maximise value I think it would make sense to have one massive array which is split up by the OS.

So that's the thing with mixing and matching capacities -- the usable size of the array is set by the SMALLEST drive in the array, and in RAID 5 it will be (n-1) * smallest capacity = array capacity.

 

SOME systems will allow you to do in-place drop-in replacements, one at a time; but it has also been my experience that by the time you are ready to expand your storage, it might be time to either build a new system so that you can migrate your entire existing pool over, or do it piecemeal. A lot of hardware RAID host bus adapter cards might support it, but it's not a very fast process. It's not like you can swap out a drive every couple of minutes; it's closer (especially with old enterprise hardware) to swapping out a drive every couple of days, since what it's fundamentally doing is rebuilding the array with each swap, which is why it's super slow.

 

So if you have a bunch of 10 TB drives mixed with 6 TB drives and 4 TB drives, you'll likely end up with three arrays: all of the 10 TB drives in one, all of the 6 TB drives in another, and all of the 4 TB drives in a third. This minimizes the amount of raw storage space lost to how the RAID works. (Only RAID 0 doesn't care about the individual drive capacities; pretty much all other RAID modes/levels care a LOT about the capacity of each drive.)
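A minimal sketch of that math in Python - the drive mix here is hypothetical, loosely based on the 10TB/6TB/4TB plan mentioned above:

```python
# Usable RAID 5 capacity: (n - 1) * smallest drive in the array.
def raid5_usable(drives_tb):
    assert len(drives_tb) >= 3, "RAID 5 needs at least three drives"
    return (len(drives_tb) - 1) * min(drives_tb)

mix = [10, 10, 10, 6, 6, 6, 4, 4, 4]      # TB each, hypothetical

one_big_array = raid5_usable(mix)         # (9 - 1) * 4  = 32 TB usable
grouped = sum(raid5_usable([d for d in mix if d == cap])
              for cap in set(mix))        # 20 + 12 + 8  = 40 TB usable

print(f"single mixed array : {one_big_array} TB usable of {sum(mix)} TB raw")
print(f"grouped by size    : {grouped} TB usable of {sum(mix)} TB raw")
```

With those nine drives, grouping by capacity recovers an extra 8 TB that a single mixed array would lose to padding every larger drive down to 4 TB.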

 

The last migration that I went through was quite a bit of a pain, because it meant playing musical chairs with the data so that each system could be offloaded, rebuilt and reloaded with its data before moving on to the next. And I did lose a little bit of data in that process, because it was just difficult to keep everything straight while playing musical chairs with it.

 

It was cheaper, but it did come at the cost of a little bit of data loss in the process.

 

This isn't necessarily to discourage you from doing something like that, but just for you to be aware.

 

Having said that, I know that LTT uses FreeNAS, which looks quite nice. I've never personally used it (my servers were running Windows because I wanted to be able to recover a failed NTFS volume if I needed to). I've also used SunOS (Solaris) and ZFS before that, and there are ZERO data recovery tools for ZFS (even in talking with the devs when it was just coming up), so I've completely abandoned anything that doesn't have a data recovery method should the data bite the dust (since I don't have tape backups yet).


3 minutes ago, alpha754293 said:


Thanks for this.

 

What OS do you use for your servers, and what was the basis for this choice?

 

I agree noise is not a big concern. The fans from my current 4 RAID enclosures are audible, but I have got used to them. And as you highlight with your 40dB server, enterprise doesn't automatically mean loud.

 

I am in a fully Apple environment and happy to use AFP, but surprisingly I have found SMB to be more reliable with my Mac Mini.

 

I currently use Backblaze as my cloud backup, with 26TB backed up. It took a couple of months on my fibre connection to do the initial backup, but incremental backups appear to happen with no problem. If I were to build a second server for offsite backup, I would do the initial backup onsite over the local network, for exactly the issue you highlight.

 

As you have pointed out, building my own server to run unRAID or another OS can be done, and I will enjoy setting everything up. What would start to frustrate me is trying to debug for several hours if something did go wrong. But given the amount of data I want to store, back up and use, is a NAS viable?


13 minutes ago, Phillips2010 said:


So, I used to use SunOS/Solaris (with ZFS) for my servers, but after experiencing a second, unrecoverable loss of the entire zpool, I abandoned it entirely. (I actually tried to work directly with the Sun Microsystems engineers who designed and developed ZFS, and their ultimate answer boiled down to: "restore from backup" - except that my ZFS server WAS the backup server, so in other words, I was screwed.)

 

After that, I switched over to Windows Server 2012, and that worked fine except that, as a server, it consumed more power than a NAS appliance running some version of Linux, so now I am using NAS appliances.

 

ZFS was chosen because it was new and shiny at the time and was able to make older hardware (ca. 2007) work better than it otherwise would, thanks to ZFS magic. But ZFS ultimately failed me because it was really designed for $30,000 Sun hardware, not sub-$1000 home servers (mine was running an AMD Duron at the time).

 

Windows Server 2012 was chosen because of NTFS: there are data recovery tools available for NTFS that aren't available at ALL for ZFS. By then I had bought old dual-Xeon servers off eBay, so the system itself performed better, because it wasn't an AMD Duron with MAYBE 1 GB of RAM. RAM was a lot more expensive back then compared to today. ZFS was just too advanced for its time, but it still doesn't have any data recovery tools if the zpool bites the dust, so there's still that. (It is for the same reason that I also don't use Btrfs in any Linux distro/implementation, BTW.)

 

But like I said, my next build, after I get a tape backup solution up and running, will likely go back to either Windows Server 2016 or some kind of Linux. (There are other factors in play surrounding my high-speed (100 Gbps) networking/interconnect infrastructure that I still haven't quite sorted out, so the next move is a tad undecided at the moment. My next actual server will likely return to the pizza-box form factor, and I'll just eat the increased power consumption for better network expandability at a fraction of the cost of a NAS appliance with the same potential expandability.)

 

My systems are only loud (~70-ish dB) when they're performing CPU/computationally intensive tasks, but that's a different question altogether as my former storage server and my compute servers are entirely different systems.

 

Yeah, I bought a NAS appliance for a few hundred bucks and loaded up the drives myself to save a little bit on the cost of the drives.

 

But it was pointed out by Wendell on Level1Techs that if something goes wrong with a NAS appliance, debugging it - and, if it is a hardware failure, possibly trying to find replacement parts years from now - can be incredibly difficult, which is part of the motivating factor for me to move back to an actual "pizza box" file server rather than a NAS appliance.

 

But also, admittedly, I like my current NAS appliances (QNAP TS-832X, which has dual GbE and dual 10 GbE via SFP+), hence why QNAP also sells their own 10 GbE switch; surprisingly, none of the other major networking switch vendors sell anything like it.

 

So that's kind of nice.

 

(And the ONLY reason I haven't got one of those switches myself is because I am trying to figure out how I am going to step up/step down between 100 Gbps (4X EDR IB), 10 GbE, and 1 GbE. Once I figure that out, I'll be in a better position - but that's if I even go through the whole step up/step down process. If I build my new server after getting my tape backup system up and running, my new file server might actually have a direct 4X EDR IB (100 Gbps) connection, at which point I won't need the 10 GbE SFP+ connection anymore, at all.)

