First Home NAS/Plex build

BDsankey

Hello all, I've been building PCs for a while, but this will be my first NAS. I'm aiming to repurpose quite a bit of hardware from my old PC; it's certainly dated, but it has been working flawlessly for quite some time. I have a few questions, which I'll add after my hardware list.

My files would mainly be family pictures, movies/TV shows (I'd add Plex, which is why I'm leaning toward having a GPU), family recipes, and important documents (tax info, titles, birth certificates, etc.). The documents would get backed up to Google Drive (or similar), but we have too many pictures for me to justify paying a monthly Google Drive/iCloud subscription any longer. I would rather drop capacity and gain redundancy vs. a "balls out" performance setup. I may also offload Steam games I don't play all that often from my C drive to it, but I'm undecided on that.

I'm aiming for around 10TiB of usable space before hitting the 20% free-space buffer. Seeing as it has taken me ages to actually fill up the 2TB RAID setup in my PC, I foresee this lasting for quite some time. I'm certainly aware that going server hardware/ECC is a better route long term (and I will get there eventually), but all of this hardware (except the HBA and drives) is already sitting here from my past PC. Until I get an ECC setup, I will keep everything backed up to either the cloud or a cold-storage drive, like a 10 or 12TB drive that gets spun up once every month or so to back up the NAS and is otherwise kept in my safe in an anti-static bag. Once I build or buy a new home, this NAS will become my "offsite" backup and I will build a new, rack-mounted version, but that is a few years off.


Mobo: Asus P8Z77-V Deluxe
CPU: Intel i7-3770K
RAM: 32GB of Corsair Dominator Platinum (I forget the speed/timings)
PSU: EVGA SuperNOVA 650W w/ hybrid fan
GPU: EVGA GTX 1050 or 1080, I have both lying around
HDDs: Undecided, likely 4-8TB WD Red/Seagate IronWolf (CMR regardless)
Caching: I was going to use 240GB SLC drives from WD (WD Green) as they're ~$52 each

ZIL/SLOG: The same 240GB WD Green drives at ~$52 each; these would be mirrored for redundancy
HBA: LSI 9211-8i, LSI 9207-8i, or LSI 9300-8i

UPS: TBD, but once the system is up and operational it will live in my basement on a UPS


I really have two main questions:
1) Would it be better to hook the caching and mirrored ZIL/SLOG devices to the HBA or to one of the mobo's ports? I ask because that hinges largely on which drives I use. If I run 6x 6TB drives in Z3, I get 11.85TiB usable with the ability to lose 3 drives, at approximately $60.74 per usable TiB (a 6TB CMR WD Red is ~$119.99 on Newegg currently). If I run 4x 8TB drives in Z2, I get 10.89TiB at approximately $55.09 per usable TiB (an 8TB CMR WD Red is $149.99 on Newegg currently). This is using the Wintelguy ZFS calculator, including slop space and the 20% free-space buffer.
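If anyone wants to sanity-check the math, a rough first-order estimate is easy to script (this ignores ZFS slop space and allocation padding, so it lands a bit above the Wintelguy figures):

```python
# First-order RAIDZ capacity estimate. Ignores ZFS slop space, metadata,
# and allocation padding, so it reads a bit higher than the Wintelguy calc.
def usable_tib(drives, size_tb, parity, free_frac=0.20):
    data_tb = (drives - parity) * size_tb        # raw data capacity in TB
    tib = data_tb * 1e12 / 2**40                 # decimal TB -> binary TiB
    return tib * (1 - free_frac)                 # honor the 20% free buffer

for label, n, size_tb, parity, price in [
    ("6x 6TB RAIDZ3", 6, 6, 3, 119.99),
    ("4x 8TB RAIDZ2", 4, 8, 2, 149.99),
]:
    t = usable_tib(n, size_tb, parity)
    print(f"{label}: ~{t:.2f} TiB usable, ~${n * price / t:.2f} per usable TiB")
```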

2) Does the drive capacity for a ZIL/SLOG actually matter? I.e., is there a downside to using 240GB SLC SSDs for this task?


Your numbers for available space seem kinda off imo. At 4x8TB Z2 you should be at around 14TB, same with 6x6 Z3.

I'd say with the cold storage in mind, going Z2 is your best bet. That way you have a second layer of redundancy even if a resilver goes wrong.

I personally run 6x4TB Z2 with around 15TB of available space.

 

Honestly, I'd drop caching completely. Unless you do crazy things with the NAS I wouldn't bother with it, and if you feel like you need it you can always add it later. With ZFS, RAM caching is more important, and with 32GB you should be all set.

 

You also don't need an HBA if your board has enough ports natively. 


5 minutes ago, FloRolf said:

Your numbers for available space seem kinda off imo. At 4x8TB Z2 you should be at around 14TB, same with 6x6 Z3.

I'd say with the cold storage in mind, going Z2 is your best bet. That way you have a second layer of redundancy even if a resilver goes wrong.

I personally run 6x4TB Z2 with around 15TB of available space.

 

Honestly, I'd drop caching completely. Unless you do crazy things with the NAS I wouldn't bother with it, and if you feel like you need it you can always add it later. With ZFS, RAM caching is more important, and with 32GB you should be all set.

 

You also don't need an HBA if your board has enough ports natively. 

If I look at ZFS usable storage then yes, I get ~14.8TiB with 6x6TB Z3 or ~13.62TiB with 4x8TB Z2. 4x6TB Z2 gives me ~10.2TiB.

I'm genuinely curious, what makes you want to steer away from caching? I.e., is there just not much of a performance uplift there?

Also, from what I can tell, FreeNAS/TrueNAS doesn't require a conventional boot drive since it runs from memory. Is that accurate?

My board technically has 8 ports: 4 are 3Gb/s and 4 are 6Gb/s. Six are run by an Intel controller (4x 3Gb/s and 2x 6Gb/s) and two by a Marvell controller. The goal of using a (cheap) HBA is that it gives me one piece of tech, no messing around with BIOS settings to ensure the ports play nice, and the same speed on every port. It's also very simple to set up again should I have a mobo/CPU failure.


6 minutes ago, BDsankey said:

If I look at ZFS usable storage then yes, I get ~14.8TiB with 6x6TB Z3 or ~13.62TiB with 4x8TB Z2. 4x6TB Z2 gives me ~10.2TiB.

I'm genuinely curious, what makes you want to steer away from caching? I.e., is there just not much of a performance uplift there?

Also, from what I can tell, FreeNAS/TrueNAS doesn't require a conventional boot drive since it runs from memory. Is that accurate?

My board technically has 8 ports: 4 are 3Gb/s and 4 are 6Gb/s. Six are run by an Intel controller (4x 3Gb/s and 2x 6Gb/s) and two by a Marvell controller. The goal of using a (cheap) HBA is that it gives me one piece of tech, no messing around with BIOS settings to ensure the ports play nice, and the same speed on every port. It's also very simple to set up again should I have a mobo/CPU failure.

Correct, as far as I've heard the performance increase from caching is mostly negligible, especially considering a RAIDZ2 with 6 drives (for example) is quite fast and you always have RAM caching.

 

You obviously do need a boot drive, but it can be a USB thumb drive, though I highly advise against that. I'd say get a small 60GB SSD for the OS instead. You CANNOT use the boot drive for anything else, so a higher capacity is useless.

 

Fair point on the HBA; I forgot Ivy Bridge wasn't full SATA 3 yet.


I like that my drives can spin down after a while, since in Unraid most of my shares other than the media library sit on the cache except for new writes. Not sure if that functionality is present in your software of choice, but that's one reason to have a cache: power/temp concerns. Also, all my Dockers and VMs run on the cache, so they don't run like shit.

 

Mind you, Unraid isn't like ZFS at all; stuff is stored on a single drive + parity with Unraid. Your transfer times are likely a LOT better with ZFS.


You likely don't need a GPU unless you plan on doing a lot of transcoding, so don't even worry about that for now. If you end up needing it, add it later, but it's not worth the electricity impact since you almost certainly don't need it, especially just for playing files over the LAN, assuming the clients support the codecs the files are in (thus no transcoding).
 

Don't run a SLOG or ZIL. It will just make things more complicated for no reason. A home NAS storing videos and "normal user data" will not benefit at all, seeing as the array itself will easily be able to saturate gigabit networking.
 

SLOG and ZIL are not really "caching". ARC and L2ARC are your cache (also, don't worry about L2ARC), and ARC lives in RAM. ZFS keeps a copy of the most-accessed data in RAM (in the ARC) to help speed things up. If you have huge volumes of data with lots of users accessing it, the basic strategy is "add as much RAM as you possibly can, then add L2ARC via NVMe SSDs, or preferably Optane", but again, for home use this is not needed at all.
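If you ever want to see what the ARC is actually doing on a running box, the kernel exposes the counters directly. A minimal sketch for Linux/OpenZFS (TrueNAS SCALE, Ubuntu, etc.); on FreeBSD/TrueNAS CORE the same counters live under `sysctl kstat.zfs.misc.arcstats`:

```python
# Compute the ARC hit ratio from OpenZFS's kernel counters.
def arcstats(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:       # first two lines are headers
            name, _dtype, value = line.split()
            stats[name] = int(value)
    return stats

s = arcstats()
hit_ratio = s["hits"] / (s["hits"] + s["misses"])
print(f"ARC size: {s['size'] / 2**30:.1f} GiB, hit ratio: {hit_ratio:.1%}")
```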
 

I ran my TrueNAS VM on 16GB of RAM for years on a 10x4TB Z2 array with 2 threads of an i3-6100, and it never had any issues of any sort; performance was almost always limited by my gigabit network.
 

Use an SSD for boot. You can use a flash drive, but it's not recommended anymore; SSDs are cheap enough that it makes more sense to just use one.


15 hours ago, FloRolf said:

Correct, as far as I've heard the performance increase from caching is mostly negligible, especially considering a RAIDZ2 with 6 drives (for example) is quite fast and you always have RAM caching.

 

You obviously do need a boot drive, but it can be a USB thumb drive, though I highly advise against that. I'd say get a small 60GB SSD for the OS instead. You CANNOT use the boot drive for anything else, so a higher capacity is useless.

 

Fair point on the HBA; I forgot Ivy Bridge wasn't full SATA 3 yet.

10-4. Is backing up to, say, a single 10TB drive every month or two possible by plugging the drive in and specifying it as the destination, or should I place it inside my desktop PC and set my desktop as the backup target?
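For context, the usual ZFS way to do this seems to be snapshot-and-replicate to a pool living on the backup drive. A rough sketch of what I mean, with made-up pool names (tank, coldpool):

```python
# Sketch of a monthly "plug in the cold drive" backup via ZFS replication.
# Pool names are placeholders; a real setup would switch to incremental
# sends (zfs send -i) after the first full copy.
import datetime
import subprocess

snap = f"tank@monthly-{datetime.date.today():%Y%m}"
subprocess.run(["zfs", "snapshot", "-r", snap], check=True)

# Pipe a full recursive replication stream into the backup drive's pool;
# `zfs receive -F` rolls the target back so it matches the source.
send = subprocess.Popen(["zfs", "send", "-R", snap], stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", "-F", "coldpool/tank"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```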

 

14 hours ago, derr12 said:

I like that my drives can spin down after a while, since in Unraid most of my shares other than the media library sit on the cache except for new writes. Not sure if that functionality is present in your software of choice, but that's one reason to have a cache: power/temp concerns. Also, all my Dockers and VMs run on the cache, so they don't run like shit.

 

Mind you, Unraid isn't like ZFS at all; stuff is stored on a single drive + parity with Unraid. Your transfer times are likely a LOT better with ZFS.

I honestly don't know too much about that side of FreeNAS/TrueNAS. 

 

 

13 hours ago, LIGISTX said:

You likely don't need a GPU unless you plan on doing a lot of transcoding, so don't even worry about that for now. If you end up needing it, add it later, but it's not worth the electricity impact since you almost certainly don't need it, especially just for playing files over the LAN, assuming the clients support the codecs the files are in (thus no transcoding).
 

Don't run a SLOG or ZIL. It will just make things more complicated for no reason. A home NAS storing videos and "normal user data" will not benefit at all, seeing as the array itself will easily be able to saturate gigabit networking.
 

SLOG and ZIL are not really "caching". ARC and L2ARC are your cache (also, don't worry about L2ARC), and ARC lives in RAM. ZFS keeps a copy of the most-accessed data in RAM (in the ARC) to help speed things up. If you have huge volumes of data with lots of users accessing it, the basic strategy is "add as much RAM as you possibly can, then add L2ARC via NVMe SSDs, or preferably Optane", but again, for home use this is not needed at all.
 

I ran my TrueNAS VM on 16GB of RAM for years on a 10x4TB Z2 array with 2 threads of an i3-6100, and it never had any issues of any sort; performance was almost always limited by my gigabit network.
 

Use an SSD for boot. You can use a flash drive, but it's not recommended anymore; SSDs are cheap enough that it makes more sense to just use one.

Noted on the GPU. I currently don't have much media. My home devices are 2x Apple TVs, PCs, an iPad Pro 12.9, and 2x iPhone 12 Pro Maxes.

As for the ZIL/SLOG, the only reason I wanted to run them was to guard against a write failure due to power loss. One thing I don't know how to configure is getting the OS to shut down during a power outage. My next iteration will be much more "professional" than this; I'm simply tired of using my desktop (where a RAID-0 SSD array serves as a game drive) to house all of our media and documents, as it offers zero redundancy.
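From what I've read, TrueNAS's built-in UPS service (which uses NUT, Network UPS Tools) handles exactly this. The logic boils down to roughly the following; the `ups@localhost` name is a placeholder for whatever NUT is configured with:

```python
# Poll the UPS and shut down once it's on battery AND reports low battery.
# Illustration only; in practice you'd just enable the built-in UPS service.
import subprocess
import time

def ups_flags(ups="ups@localhost"):
    out = subprocess.run(["upsc", ups, "ups.status"],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()            # e.g. ["OB", "LB"] on a dying battery

while True:
    flags = ups_flags()
    if "OB" in flags and "LB" in flags:  # on battery + low battery
        subprocess.run(["shutdown", "-h", "now"])
        break
    time.sleep(30)
```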

I will certainly pick up a small SSD for boot. The 240GB SLC drives at ~$51 make a compelling argument: they're from a well-known brand (WD), and being SLC they should theoretically outlive most other drive types.


2 hours ago, BDsankey said:

I honestly don't know too much about that side of FreeNAS/TrueNAS

With ZFS, you won't be having drives spin down. That works for Unraid since it doesn't need the entire array active to function, but ZFS does need all drives spinning to perform a read or a write.
 

2 hours ago, BDsankey said:

As for the ZIL/SLOG, the only reason I wanted to run them was to guard against a write failure due to power loss.

That won't really do anything for you. If data is in flight when you lose power, you're likely going to fail to write all the data anyway, since the machine sending the data will go down as well, as will your networking. If all of the data actually makes it to the TrueNAS box and then power is immediately lost, yes, you may end up with the data you just wrote being corrupted, but what are the odds of that exact sequence happening? Very, very low, bordering on impossible. For a home use case this isn't a concern; it's a larger concern when you're using the array as a SAN in a data center, which is what ZFS is actually built for. Us home users just get to benefit from its amazing resiliency, seeing as they provide TrueNAS for free (you can also just use Ubuntu or another Linux OS; TrueNAS is just a really nicely packaged storage appliance).
 

What isn't "professional" about this planned setup? It'll be plenty fine for your needs. ECC is nice to have, but it isn't required by any means. ECC doesn't magically make things more resilient, and honestly, for home use it's not that big a deal. ECC ensures all data internal to the system, in flight from the CPU through the RAM, is error-checked and corrected. But honestly, how often is there a read or write error in RAM or between RAM and CPU? Very rarely, is the answer. And if a bit is flipped in a video file, no one will likely ever know. If a bit is flipped in financial data, or military data, or scientific datasets... you may have a problem. But ECC doesn't "make the data more redundant" or "make the hard drives safer".
 

Also, you could likely use a drive to do a backup, but the better idea would be to use TrueNAS to back up to Backblaze B2 or Amazon's cloud. Back up your important personal data to the cloud. I have a large array, but the personal data I can't easily get back is backed up to Backblaze B2. All of my "easily replaceable" data that I stream to my TV... if that is lost, oh well. I run Z2 so I can have better uptime and redundancy within my array, but the first three rules of RAID are RAID IS NOT A BACKUP.
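TrueNAS has Cloud Sync tasks for this in the GUI (they wrap rclone under the hood). Outside the GUI, the same idea looks roughly like this; the `b2` remote and bucket name are placeholders you'd set up with `rclone config` first:

```python
# Sync only the irreplaceable datasets to Backblaze B2 via rclone.
# Paths, remote name, and bucket are all assumptions for illustration.
import subprocess

for dataset in ["/mnt/tank/photos", "/mnt/tank/documents"]:
    dest = "b2:my-nas-backup" + dataset.removeprefix("/mnt/tank")
    subprocess.run(["rclone", "sync", dataset, dest], check=True)
```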

 

Hope this helped 🙂


1 hour ago, LIGISTX said:

With ZFS, you won't be having drives spin down. That works for Unraid since it doesn't need the entire array active to function, but ZFS does need all drives spinning to perform a read or a write.
 

That won't really do anything for you. If data is in flight when you lose power, you're likely going to fail to write all the data anyway, since the machine sending the data will go down as well, as will your networking. If all of the data actually makes it to the TrueNAS box and then power is immediately lost, yes, you may end up with the data you just wrote being corrupted, but what are the odds of that exact sequence happening? Very, very low, bordering on impossible. For a home use case this isn't a concern; it's a larger concern when you're using the array as a SAN in a data center, which is what ZFS is actually built for. Us home users just get to benefit from its amazing resiliency, seeing as they provide TrueNAS for free (you can also just use Ubuntu or another Linux OS; TrueNAS is just a really nicely packaged storage appliance).
 

What isn't "professional" about this planned setup? It'll be plenty fine for your needs. ECC is nice to have, but it isn't required by any means. ECC doesn't magically make things more resilient, and honestly, for home use it's not that big a deal. ECC ensures all data internal to the system, in flight from the CPU through the RAM, is error-checked and corrected. But honestly, how often is there a read or write error in RAM or between RAM and CPU? Very rarely, is the answer. And if a bit is flipped in a video file, no one will likely ever know. If a bit is flipped in financial data, or military data, or scientific datasets... you may have a problem. But ECC doesn't "make the data more redundant" or "make the hard drives safer".
 

Also, you could likely use a drive to do a backup, but the better idea would be to use TrueNAS to back up to Backblaze B2 or Amazon's cloud. Back up your important personal data to the cloud. I have a large array, but the personal data I can't easily get back is backed up to Backblaze B2. All of my "easily replaceable" data that I stream to my TV... if that is lost, oh well. I run Z2 so I can have better uptime and redundancy within my array, but the first three rules of RAID are RAID IS NOT A BACKUP.

 

Hope this helped 🙂

"Professional" in terms of using more server-oriented or 24/7-oriented equipment (mobo/CPU), and it will be rack-mounted in our new home (at that time), as we're going to have a nice network enclosure in our mechanical room.

Thanks for the info on Backblaze; it seems EXTREMELY convenient, as I would only want to back up my documents and photos. Any movies/TV would be replaceable or "not a big deal" if lost.


2 minutes ago, BDsankey said:

"Professional" in terms of using more server-oriented or 24/7-oriented equipment (mobo/CPU), and it will be rack-mounted in our new home (at that time), as we're going to have a nice network enclosure in our mechanical room.

Thanks for the info on Backblaze; it seems EXTREMELY convenient, as I would only want to back up my documents and photos. Any movies/TV would be replaceable or "not a big deal" if lost.

Server gear is just desktop gear with more confusing names... I'm being a little over the top, but at the end of the day that's pretty accurate.
 

A Xeon is just an i5 or i7 with ECC enabled in firmware, and the chip itself is just a slightly better performer at lower voltages. They all come from the same wafers (for the most part). It's not like Intel sprinkles some special 24/7 magic dust on the silicon destined for Xeons vs. the desktop chips.
 

Mobos are similar. Yes, Supermicro uses components that are designed to last for many, many years, and they're designed to work in more specific situations (fewer BIOS options overall, but more focused on the things a "server" would need), but any good consumer mobo will also use good components and will realistically last just as long running 24/7. If anything, high-end desktop boards have extremely beefy VRMs, for instance.
 

I'm not trying to talk you out of anything, just sharing potentially useful info. I run a Supermicro mobo, a 28-thread Xeon, and 64GB of ECC RAM in my homelab, but I also have a fair bit more than just a file server going on, and I was able to piece that together with used parts from eBay for ~500 bucks, including a new Noctua heatsink and a new NVMe SSD (not including drives, case, PSU, and fans).


22 minutes ago, LIGISTX said:

Server gear is just desktop gear with more confusing names... I'm being a little over the top, but at the end of the day that's pretty accurate.
 

A Xeon is just an i5 or i7 with ECC enabled in firmware, and the chip itself is just a slightly better performer at lower voltages. They all come from the same wafers (for the most part). It's not like Intel sprinkles some special 24/7 magic dust on the silicon destined for Xeons vs. the desktop chips.
 

Mobos are similar. Yes, Supermicro uses components that are designed to last for many, many years, and they're designed to work in more specific situations (fewer BIOS options overall, but more focused on the things a "server" would need), but any good consumer mobo will also use good components and will realistically last just as long running 24/7. If anything, high-end desktop boards have extremely beefy VRMs, for instance.
 

I'm not trying to talk you out of anything, just sharing potentially useful info. I run a Supermicro mobo, a 28-thread Xeon, and 64GB of ECC RAM in my homelab, but I also have a fair bit more than just a file server going on, and I was able to piece that together with used parts from eBay for ~500 bucks, including a new Noctua heatsink and a new NVMe SSD (not including drives, case, PSU, and fans).

I completely understand where you're going. I agree they're very, very similar and that consumer PC hardware is extremely good. I don't foresee any issues with this setup short of a mobo/CPU failure on a board/CPU from the 2012 timeframe, which appears to be extremely easy to deal with by simply doing an "import" of the pool should a failure occur (also another reason I want to run everything I can on the HBA instead of the board; it kinda takes the board out of the equation).

I would love to use NVMe, but this board doesn't natively support it. If this deployment goes well, I'll redo it when I upgrade my current PC, using my i5-10600K/mobo as the next backbone of the system.


It also seems there isn't much of a gain going with, say, a 9300-8i over a 9207-8i, since I won't be using any SSDs for storage.


3 hours ago, BDsankey said:

I would love to use NVMe, but this board doesn't natively support it. If this deployment goes well, I'll redo it when I upgrade my current PC, using my i5-10600K/mobo as the next backbone of the system.

Are you running 10Gbit or faster Ethernet? Because if you're not, that's a waste of money.

 

3 hours ago, BDsankey said:

(also another reason I want to run everything I can on the HBA instead of the board; it kinda takes the board out of the equation).

Also, this isn't correct: ZFS doesn't care where you have your drives plugged in. Using an HBA won't help you there, but like I said previously, I get it for the speed and ease-of-use factors.

 

As LIGISTX said, there's no need for server components in this scenario. If you decide to update the hardware, I'd do it purely for a more efficient system with lower power consumption, and maybe new features like faster encoding or whatnot. I really can't stress enough how important power consumption is for a system that runs 24/7. That's why I have a Synology that runs 24/7 while my main TrueNAS box only powers on on demand. Kinda stupid, but it saves me a lot of money.
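To put rough numbers on it, the yearly cost is just average draw times hours times your electricity rate. Both figures here are assumptions; measure your real draw and plug in your local rate:

```python
# Back-of-envelope cost of a box running 24/7.
watts, rate_per_kwh = 80, 0.15                        # assumed draw and $/kWh
yearly_cost = watts / 1000 * 24 * 365 * rate_per_kwh
print(f"~${yearly_cost:.0f} per year at {watts} W")   # ~$105/year
```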


3 minutes ago, FloRolf said:

Are you running 10Gbit or faster Ethernet? Because if you're not, that's a waste of money.

 

Also, this isn't correct: ZFS doesn't care where you have your drives plugged in. Using an HBA won't help you there, but like I said previously, I get it for the speed and ease-of-use factors.

 

As LIGISTX said, there's no need for server components in this scenario. If you decide to update the hardware, I'd do it purely for a more efficient system with lower power consumption, and maybe new features like faster encoding or whatnot. I really can't stress enough how important power consumption is for a system that runs 24/7. That's why I have a Synology that runs 24/7 while my main TrueNAS box only powers on on demand. Kinda stupid, but it saves me a lot of money.

Still 1GbE. What I'm saying is that when I upgrade my desktop PC, I will either hand that hardware down or (most likely) build a second NAS and put this initial NAS offsite for backups.

Is undervolting worthwhile? Obviously there's more overhead than with a Synology-type box, but this is also a much cheaper option at the moment since I already have the hardware here. Once it goes offsite, our business (3 hrs away) won't feel or even notice the power consumption.

While ZFS doesn't care what port is used, the HBA does two things for me. It gives me one centralized access point/connection for the drives (data-wise) and lets me run all drives at the same speed, since my board's 8 ports are mismatched. It also means that if I ever have to replace the mobo/CPU, I don't have to worry about SATA ports; I simply plop the HBA into a PCIe slot and I'm done with hardware, instead of "a 90-degree cable would be nice here" or "why won't this cable reach anymore" due to port spacing.


Also, would the WD Red Plus (6TB CMR, 128MB cache, 5640rpm) be a better choice than the Seagate IronWolf (6TB, 256MB cache, 7200rpm)? The price difference is $20 across 4 drives.

4x 6TB in Z2 gives me ~10.5TiB of usable space before the slop allocation/20% headroom.


1 hour ago, BDsankey said:

Also, would the WD Red Plus (6TB CMR, 128MB cache, 5640rpm) be a better choice than the Seagate IronWolf (6TB, 256MB cache, 7200rpm)? The price difference is $20 across 4 drives.

4x 6TB in Z2 gives me ~10.5TiB of usable space before the slop allocation/20% headroom.

It won’t make any difference either way. 


No need for cache.

 

Buy fewer, larger drives; they will pay for themselves in energy and space savings.

 

Use an SSD for boot and one for apps/transcode storage.

 

An HBA won't make spinning rust any faster, but it will make connections easier.

 

The gigabit NIC will be the bottleneck.

 

 

 


On 9/23/2022 at 11:40 AM, LIGISTX said:

It won’t make any difference either way. 

I didn't know if one was deemed more "reliable" than the other. 

 

On 9/24/2022 at 2:11 AM, Bdavis said:

No need for cache.

 

Buy fewer, larger drives; they will pay for themselves in energy and space savings.

 

Use an SSD for boot and one for apps/transcode storage.

 

An HBA won't make spinning rust any faster, but it will make connections easier.

 

The gigabit NIC will be the bottleneck.

 

 

 

I understand the HBA won't make the drives faster; it's a "quality of life" improvement IMO in terms of connections and ensuring all drives get the same speed, as I only have four 3Gb/s ports and four 6Gb/s ports. The HBA also eliminates having to deal with two independent controllers on the board.

My entire network is 1GbE, so it won't be limiting anything else. 1GbE is perfectly fine for my needs atm, as it's for storage and not used as an editing machine.


2 hours ago, BDsankey said:

I didn't know if one was deemed more "reliable" than the other. 

It really comes down to luck more than anything: whether you get a bad batch of drives, a bad firmware, or, more than anything, a UPS guy who dropped your box one too many times.

 

All drives have very similar failure rates. If you want to get way into the weeds, Backblaze produces quarterly hard-drive failure reports and publishes that information publicly... or at least they used to; I assume they still do.


11 hours ago, LIGISTX said:

It really comes down to luck more than anything: whether you get a bad batch of drives, a bad firmware, or, more than anything, a UPS guy who dropped your box one too many times.

 

All drives have very similar failure rates. If you want to get way into the weeds, Backblaze produces quarterly hard-drive failure reports and publishes that information publicly... or at least they used to; I assume they still do.

I actually stumbled across that chart yesterday; the WD 6TB drives had a higher failure rate than the IronWolf at the same capacity, so I decided to go with the IronWolf. I know they'll be a little more power hungry than the WD Reds, but to me that's a minor price to pay for the bump in reliability.


3 hours ago, BDsankey said:

I actually stumbled across that chart yesterday; the WD 6TB drives had a higher failure rate than the IronWolf at the same capacity, so I decided to go with the IronWolf. I know they'll be a little more power hungry than the WD Reds, but to me that's a minor price to pay for the bump in reliability.

It honestly will be luck. I have ten 4TB Reds, and a buddy has had over 20, all in operation for the same length of time. I've had to replace 3 drives; he has had to replace none. We have such a small sample size, and the failure rate across all drives manufactured is so low, that on an individual home-user basis it's more about luck than anything else. Well, that and how gentle your UPS driver was.
 

If you want to significantly increase your "luck", make sure to burn the drives in with badblocks beforehand. This helps catch infant mortality. Just Google "badblocks drive burn-in" and run it on all of them before you deploy them, and if you ever get new or replacement drives, run it on those as well.
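For reference, the usual burn-in is a destructive badblocks write pass over each new drive. A minimal sketch; the device names are examples, and this WIPES the drives, so only run it on empty, not-yet-pooled disks:

```python
# Destructive badblocks write test: four full write+verify patterns.
# THIS ERASES EVERYTHING on the listed devices.
import subprocess

for dev in ["/dev/sda", "/dev/sdb"]:
    subprocess.run(
        ["badblocks",
         "-b", "4096",   # test in 4 KiB blocks
         "-wsv", dev],   # -w write-mode test, -s show progress, -v verbose
        check=True,
    )
```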


9 minutes ago, LIGISTX said:

It honestly will be luck. I have ten 4TB Reds, and a buddy has had over 20, all in operation for the same length of time. I've had to replace 3 drives; he has had to replace none. We have such a small sample size, and the failure rate across all drives manufactured is so low, that on an individual home-user basis it's more about luck than anything else. Well, that and how gentle your UPS driver was.
 

If you want to significantly increase your "luck", make sure to burn the drives in with badblocks beforehand. This helps catch infant mortality. Just Google "badblocks drive burn-in" and run it on all of them before you deploy them, and if you ever get new or replacement drives, run it on those as well.

Sounds good, thanks for the advice! 
