
Looking for advice on expanding my Plex server. Lots of questions inside. Enter if you dare!

So right now I'm using Windows 7 with a PERC H700 and eight 4TB drives in a RAID 6 setup (i5 6600K / 8GB RAM). I'm looking to expand my array by adding four more 4TB drives. I've done tons of reading and thought about going with Linux (Ubuntu LTS Server or Antergos) and using ZFS, but since I'm not going to use ECC memory, I'm getting gun-shy about just using an HBA plus my motherboard's SATA connectors to hook up all the drives, because of the nightmare scenarios people describe about running ZFS without ECC memory.

 

The ONLY reason I want to use ZFS is faster rebuild times in case of a drive failure. (I wonder what the rebuild time would be on a 4TB drive with ZFS, assuming the drive only has 1.5TB of data on it, versus the rebuild time with an LSI RAID 6 card and the same amount of data?)

 

Even if my fears about using ZFS without ECC memory are calmed, I wonder what kind of CPU load I'll see during scrubs. I realize my CPU will be busy as I copy 14TB of data back to this newly created setup, but after that it will just be running Plex and transcoding, with the occasional write as I copy a new TV show, pictures, or movies over to the array. (I'll be bumping my CPU up to an i7 7700K as part of this upgrade.) Also, with ZFS, do things start to slow down over time as your drives fill up?
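For what it's worth, once a test pool exists it's easy to watch scrub load yourself; a minimal sketch, assuming a hypothetical pool named `tank`:

```shell
# Kick off a scrub; it reads and verifies every *allocated* block,
# which is also why rebuild/resilver time scales with used data,
# not raw disk size.
sudo zpool scrub tank

# Check progress, throughput, and any checksum errors found so far
sudo zpool status -v tank

# Meanwhile, watch CPU load from another terminal
top -b -n 1 | head -15
```

`zpool status` reports an estimated completion time once the scrub is underway, so a scratch pool on spare disks would answer the CPU-load question directly.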

 

Next up, the operating system. I refuse to use Windows 10, so this leaves me with Windows 7 (whose support ends in a hair over 2 years) or Linux. I'd rather not switch over to Linux 2 years from now, so when I do my upgrade it will be running Linux. (Will PROBABLY be Ubuntu Server LTS or Debian 9 stable. Which is better when it comes time for a full upgrade to the next version?) I'm a newbie when it comes to Linux, but I've been dual booting between Mint, Antergos and Debian for roughly a year, and since the only things my server will be running are Plex, Sonarr, NZBGet and Radarr, I'm confident there's enough information out there that if I run into a problem, some Googling around will get things fixed and me back up and running.

 

So if ZFS is a no-go, thoughts/opinions on using Ubuntu Server LTS/Debian 9 with a nice LSI card? (LSI hardware should work just as well with Ubuntu or Debian, right?)

 

Speaking of LSI cards, would anyone care to recommend a RAID 6 card (under $400, the cheaper the better) that offers great speed in a RAID 6 setup? I'll probably have to pair it with something like an Intel SAS expander (RES2SV240) so I can hook up all 12 drives. The drives I'll be using are plain-Jane HGST 4TB NAS drives.

 

Thanks for reading and looking forward to your opinions!


49 minutes ago, jblflip5 said:

i have no idea what you are talking about it is interesting

Was that the song I was thinking about? BTW, thanks for helping my friend make it out of the train station 2 years ago! Unfortunately, he has since sworn allegiance to the Galactic Federation and we haven't heard from him in over 2 minutes. His half eaten chocolate bar has since gained self-awareness and is determined to go deer hunting with or without the extra shoelaces next week. Only time will tell if KBL can find the lunar volcano but I doubt the barn has crossed off that last 'to do' item before the dog learned to speak. I know, right?


21 minutes ago, road hazard said:

Was that the song I was thinking about? BTW, thanks for helping my friend make it out of the train station 2 years ago! Unfortunately, he has since sworn allegiance to the Galactic Federation and we haven't heard from him in over 2 minutes. His half eaten chocolate bar has since gained self-awareness and is determined to go deer hunting with or without the extra shoelaces next week. Only time will tell if KBL can find the lunar volcano but I doubt the barn has crossed off that last 'to do' item before the dog learned to speak. I know, right?

I was going to answer some of your questions from the original post, but what the hell is this?


1 hour ago, brwainer said:

I was going to answer some of your questions from the original post, but what the hell is this?

I figured I'd reply to jblflip5 with as much sense as he used when replying to my original post: none.

 

I'm kind of neurotic with this. The more I read, the more my head hurts with information.

 

After more research, my new build will be Linux, no questions asked. (Ubuntu Server or Debian, still deciding, but I'll probably go with whichever has fewer horror stories about failed upgrades to new major versions.)

 

Just need to find a good RAID 6 card that's supported under Linux and works with those cheap Intel SAS expanders (RES2SV240). I'm liking the LSI 9261-8i or 9361-8i so far.

 

And every NOW AND THEN, mdadm (with an HBA (M1015) plus a few SATA ports on my motherboard to get all 12 drives hooked up) pops into my head, but since Neil Brown stepped away from it, I fear it will 'die on the vine'. I know that somebody else has picked up the development torch on mdadm, but I fear it will eventually give way to ZFS/BTRFS and be forgotten.


Both mdadm and hardware RAID aren't recommended for large arrays (more than a few TB) because of the fairly high risk of bit rot. They're only acceptable if you set the RAID card or mdadm to do patrol reads, which means once or more a month it reads every bit on every disk to make sure the data is correct and valid. Otherwise, if a drive dies and you have to resilver a replacement, there isn't a way to resolve corrupt data when it's read, or the corruption is seen as the correct data and gets written to the new disk. Even RAID 6 isn't seen as a solution to this issue. That's why everything is moving to some form of advanced software RAID, be it ZFS, BTRFS, or Storage Spaces with ReFS on the Microsoft side.
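For concreteness, scheduling those verification passes is a one-liner on either stack; a sketch assuming a ZFS pool named `tank` and an mdadm array at `/dev/md0` (both placeholder names):

```shell
# ZFS: cron entry for a monthly scrub (e.g. in /etc/cron.d/zfs-scrub)
#   0 2 1 * * root /sbin/zpool scrub tank

# mdadm: a "check" pass reads every disk and verifies parity blocks
echo check | sudo tee /sys/block/md0/md/sync_action

# Debian/Ubuntu already ship a monthly cron job that does this for
# all arrays:  /usr/share/mdadm/checkarray --all
cat /proc/mdstat   # shows check/resync progress
```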

 

Linux is good, and you can choose either Ubuntu Server or Debian; both are fine, and upgrade issues can be prevented by waiting a few weeks before upgrading and by trying not to install anything from outside a repository (after you install Plex Media Server the first time, you can enable a repository for easy updating: https://support.plex.tv/hc/en-us/articles/235974187-Enable-repository-updating-for-supported-Linux-server-distributions?mobile_site=true ). I'm personally more a fan of CentOS, but I've worked with my friend's Ubuntu Server systems and have no complaints. Whatever you choose, you want a kernel that has the latest version of ZFS on Linux. Right now that's 0.7.3.
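For reference, the repository setup in that article boils down to a few commands (as documented around late 2017; treat the exact key and repo paths as subject to change):

```shell
# Import Plex's package signing key
curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo apt-key add -

# Add the Plex repository
echo deb https://downloads.plex.tv/repo/deb public main | \
  sudo tee /etc/apt/sources.list.d/plexmediaserver.list

# From now on, Plex updates arrive through normal apt upgrades
sudo apt-get update && sudo apt-get install plexmediaserver
```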

 

The hardware RAID cards you mentioned are fine; they're LSI, so support by every OS is basically guaranteed. You can even use them as an HBA. Flashing to IT firmware is recommended but not required, as any disk that isn't configured in a RAID on the controller will be presented to the host OS unmolested. LSI cards are good about this; that's why they're recommended everywhere. Even the M1015 you mention is really a rebranded LSI card.

 

Personally I’d go with an LSI SAS HBA or RAID card, your Linux distro of choice, and ZFS. Mixing between motherboard ports and ones from an add-on card is normally safe, at least with modern motherboards (the last 10 years or so, just an estimate on my part).

 

The reason ECC memory is recommended with ZFS is that asynchronous writes (anything the application doesn’t demand be committed to disk before it continues on to other things; writes that do demand that are synchronous writes) are stored in memory until enough of them accumulate to be written to the disks in one long stream, which improves write performance drastically, or until a timeout expires. Either way, the period of time where data sits in RAM and is vulnerable to a memory error is very short, and there are protections in place, both at the OS level and in ZFS, to make non-ECC operation safer. The one area where ECC becomes really important is deduplication, because a table of hashes of the existing data is stored in RAM. Incoming writes are compared against the table, and if a hash matches, the block doesn’t get written out again. Without ECC RAM, a value in the hash table may change, and the system will then think two different blocks are the same. On the one hand, memory corruption is very rare; on the other hand, there are real documented cases of this messing up people's data. Overall I wouldn’t be worried, but at the same time I wouldn’t enable deduplication.

 

If there was something you mentioned that I didn’t address let me know, I tried to read over all your questions from both posts.


@brwainer 

 

Thanks for the detailed reply!

 

I realize that cosmic rays can flip bits in memory and I believe bit rot is real, but these are two things I don't worry toooooo much about. My current RAID card (PERC H700) does indeed do patrol reads every week. What leads me away from ECC RAM is that getting a server that supports ECC -AND- has a CPU with a Passmark score of 12,000+ requires spending well over $1,000. Sure, I could just buy some server with a low-end Xeon to get ECC memory, stuff it with drives, use my existing server as the Plex server, and just pull my data from this cheap server acting as a NAS, but that means two systems running 24x7. I'd rather have a single server with all the drives stuffed in it. For ZFS I'd be using an SSD for L2ARC (to keep memory requirements down) and would not use dedupe or compression. And besides all that, I'm back to ZFS slowing down as the pool fills up. (But with no dedupe or compression, maybe that slowdown scenario won't apply to me?)

 

Not saying it isn't, but if ECC is so critical, why isn't it standard on ALL the NAS boxes from QNAP, Synology and Drobo? (It still baffles me why Synology offers BTRFS when it's a documented fact that there are parity issues with its RAID 5/6 (or, as Synology calls it, SHR/SHR-2).)

 

Is ZFS on Linux 0.7.x in Ubuntu Server LTS? I thought they were lagging a little behind and were still at 0.6. I'm going to install it (and Debian 9) on a scratch SSD here in a bit and will answer that question myself. :)
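A quick way to answer that on each scratch install, without digging through changelogs; a sketch:

```shell
# What ZFS version does the distro's repo carry? (no install required)
apt-cache policy zfsutils-linux

# Once ZFS is installed and its module loaded, it reports its own version:
modinfo zfs | grep -iw version
# or equivalently:
cat /sys/module/zfs/version
```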

 

I'd LOVE to use ZFS (if only for the faster rebuild times), but with no ECC memory and possible pool degradation as it fills up, I think a hardware RAID 6 card (plus an Intel SAS expander) is going to win out at the end of the day.

 

I guess the focus of this topic can shift to finding the ideal RAID card.

 


2 hours ago, road hazard said:

What leads me away from ECC RAM is that getting a server that supports ECC -AND- has a CPU with a Passmark score of 12,000+ requires spending well over $1,000.

Just want to point out that Ryzen CPUs have ECC support enabled, and the Ryzen 5 1600 has a Passmark score just over 12,000. At that point you just need a motherboard that works with ECC; a good place to find that out is reviews by Level1Techs, which will also tell you how well the motherboard works with Linux in general. (EDIT: found this list just now: http://www.overclock.net/t/1629642/ryzen-ecc-motherboards )
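If you do go that route, it's worth confirming ECC is actually active once the board is up, since some boards silently run ECC DIMMs in non-ECC mode; a rough check under Linux:

```shell
# DMI tables: look for "Error Correction Type: Single-bit ECC" (or similar)
sudo dmidecode -t memory | grep -i 'error correction'

# If the kernel's EDAC driver recognised the memory controller,
# corrected-error counters show up here:
ls /sys/devices/system/edac/mc/
# The "edac-utils" package can summarise them:  edac-util -v
```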

 

2 hours ago, road hazard said:

Not saying it isn't, but if ECC is so critical, why isn't it standard on ALL the NAS boxes from QNAP, Synology and Drobo?

Do QNAP and Synology use ZFS? I thought they used other things like mdadm or something proprietary. I know Drobo is definitely proprietary.

 

2 hours ago, road hazard said:

Is ZFS on Linux 0.7.x in Ubuntu Server LTS? I thought they were lagging a little behind and were still at 0.6. I'm going to install it (and Debian 9) on a scratch SSD here in a bit and will answer that question myself. :)

I don’t know what ZFS level Ubuntu Server's and Debian's kernels are at, but I do know that Proxmox 5.1 has kernel 4.13 with ZFS 0.7.3. Proxmox is just Debian (Stretch for Proxmox 5.x) with a custom kernel and an extra repository with some packages; it uses the standard Debian repositories for everything else. So you could actually use Proxmox as “Debian with better ZFS”, ignore the fact that there is a hypervisor running, and install Plex directly just as if it were a normal Debian. Or, since Proxmox has instructions for adding their stuff on top of an existing Debian install, you could install regular Debian Stretch and then get the kernel and ZFS from the Proxmox repository. That would probably be the easiest way to go about it, if Debian Stretch doesn’t offer a kernel with up-to-date ZFS.

 

2 hours ago, road hazard said:

I'd LOVE to use ZFS (if only for the faster rebuild times), but with no ECC memory and possible pool degradation as it fills up, I think a hardware RAID 6 card (plus an Intel SAS expander) is going to win out at the end of the day.

Fair enough, it's not a bad decision at all. In that case, why are you trying to replace your PERC? Is it not compatible with SAS expanders?


1 hour ago, brwainer said:

Fair enough, its not a bad decision at all. In that case, why are you trying to replace your PERC? Is it not compatible with SAS Expanders?

I've had ZERO problems with my PERC H700 but was looking at replacing it with a slightly newer, maybe faster, LSI card. If I keep my H700 then yeah, I'll pair it with that Intel SAS expander and RAID 6 all 12 drives.

 

Do you know if the H700 can pass SMART status through to Linux?


You'll need to decide on your filesystem first - as that will determine your hardware, and partially your OS. 

 

Personally I'm quite happy running hardware RAID 6, using an LSI 9271-8i with a RES2SV240 expander and 12 drives. If I were looking at larger drives, though, I'd probably go with ZFS on Linux, since my rebuild time is already approximately 5 days.


I think @brwainer is right: Ryzen is a really good choice to get ECC memory working without buying a Xeon. You can use most Linux distributions with ZFS, but make sure you get the latest kernel to get the latest ZFS version!


3 hours ago, gatekeeper_stereotype said:

I think @brwainer is right: Ryzen is a really good choice to get ECC memory working without buying a Xeon. You can use most Linux distributions with ZFS, but make sure you get the latest kernel to get the latest ZFS version!

 

 

Until you realise 6 months down the line, there are still no AM4 boards with official ECC support....


5 hours ago, Jarsky said:

You'll need to decide on your filesystem first - as that will determine your hardware, and partially your OS. 

 

Personally I'm quite happy running hardware RAID 6, using an LSI 9271-8i with a RES2SV240 expander and 12 drives. If I were looking at larger drives, though, I'd probably go with ZFS on Linux, since my rebuild time is already approximately 5 days.

My OS will be Linux, no doubt in my mind! (Ubuntu is leading the race, followed by a new contender, Mint... but Ubuntu is in the lead. I decided to drop Debian from the running.) I'd like to keep my existing non-ECC hardware to keep costs low. Just need to pop in an i7 7700K CPU for more powa'!

 

I was talking to a friend earlier today about all this, and he's been using mdadm for years and loves it. I asked him if he was concerned that the new maintainer may one day decide to give up on mdadm and let it die. He just shrugged it off and said that if that happens, and he runs into a bug that won't be fixed or a limitation, he'll move on to something else when he needs to.

 

The more I think about this, the more I'm convinced there is no "right" way forward and I'm over-analyzing everything. Each option has its own pros and cons.


36 minutes ago, Jarsky said:

 

 

Until you realise 6 months down the line, there are still no AM4 boards with official ECC support....

Hahahah, so very true. It really does seem that AMD is following firmly in Intel's footsteps regarding ECC support on their lower-tier boards.


11 minutes ago, road hazard said:

The more I think about this, the more I'm convinced there is no "right" way forward and I'm over-analyzing everything. Each option has its own pros and cons.

 

 

This is basically it ^

 

And mdadm is extremely robust; there's a reason most QNAPs and Synologys are built on it. It just doesn't have the more advanced filesystem features of ReFS, BTRFS or ZFS.

 

Since you don't want Windows for some reason, ReFS with Storage Spaces is out.

There are many Linux options out there that can run Plex; you don't necessarily have to go with a full Linux distro when there are the likes of FreeNAS, unRAID, Rockstor, etc., with their support for VMs, Docker and containers.

Personally I'm not a fan of Mint; my top picks currently for a server OS are Debian 9, CentOS 7+ or Ubuntu 14.04+


6 hours ago, road hazard said:

I've had ZERO problems with my PERC H700 but was looking at replacing it with a slightly newer, maybe faster, LSI card. If I keep my H700 then yeah, I'll pair it with that Intel SAS expander and RAID 6 all 12 drives.

 

Do you know if the H700 can pass SMART status through to Linux?

If the OS is only being presented a single drive, it has no way of knowing that it should be checking the status of the physical drives behind it. I think there is something available with LSI MegaRAID cards if you install the MegaRAID Storage Manager. I don’t know if there is any specific support for your card, as I have never had to research it. In general though, the answer is no.
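One thing worth trying before resorting to MegaRAID Storage Manager: smartmontools can tunnel SMART queries through many LSI-based RAID controllers via its `megaraid` device type. Whether the H700 specifically supports this I can't say, but it costs nothing to test:

```shell
# Ask smartctl what devices and device types it detects
sudo smartctl --scan

# Query physical disk N behind the controller (try N = 0, 1, 2, ...)
# /dev/sda here is the logical volume the OS sees, not the physical disk
sudo smartctl -a -d megaraid,0 /dev/sda
```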

 

1 hour ago, Jarsky said:

 

 

Until you realise 6 months down the line, there are still no AM4 boards with official ECC support....

There are several boards that have ECC memory on their QVL, meaning if you used them, you'd get official support: http://www.overclock.net/t/1629642/ryzen-ecc-motherboards

 

 


There's a lot going on in here for a single Plex server.

 

If you're concerned with array types and rebuild times then I'd ask- what's your backup strategy?


On 11/19/2017 at 9:28 PM, Jarsky said:

There are many Linux options out there which can run Plex - you don't necessarily have to go with a full Linux distro with the likes of FreeNAS, unRAID, Rockstor, etc...and their support for VM's, Dockers & Containers.

Small nitpick: FreeNAS is not a Linux distro, or based on Linux in any capacity (aside from the good ideas each OS borrows from the other). It's based on FreeBSD, a Unix-like OS (they can't legally use the trademark "Unix", but it's basically Unix).

 

Linux is also Unix-like, and while Linux and FreeNAS (BSD) are closer to each other than, say, Windows, they're still different.

 

/end rant

 

On topic:

The Dell H700 is a fine RAID card. There's little reason to replace it, unless you're going to SSDs or something. I actually have an H800, which is basically the external variant of the H700. I'm not currently using it though, because I'm running FreeNAS, so I swapped the H800 for an LSI HBA card to get the direct SMART data ZFS needs to work properly.

 

I would keep your existing hardware and just add drives. If you want something to "manage" and tweak yourself, go Ubuntu or Mint (I'd go Ubuntu simply due to its bigger market share and therefore better resources for troubleshooting, etc.). I'd avoid ZFS because you would need to replace your RAID card with an HBA.

 

Frankly, just use the H700 for hardware RAID6. Make sure you've got the RAM and battery installed in the H700.

 

If you want to use mdadm software RAID, you can, but it's less than ideal with a hardware RAID card.


3 hours ago, dalekphalm said:

Small nitpick, FreeNAS is not a Linux Distro, or based on Linux in any capacity (aside from good ideas that one OS steals from the other, and vice versa). It's a fork of BSD, which is a Unix-based OS (they can't legally use the word "Unix", because of copyright, but it's basically Unix).

 

Linux is also based off of Unix, and while Linux and FreeNAS (BSD) are closer to each other than, say, Windows, they're still different.

 

/end rant

 

 

Well, I know that FreeNAS is BSD (I deal with Linux, BSD, Solaris, etc. at work), but his options were all full Linux distros; my point was that he doesn't need to go for a full Linux distro when there are stripped-down dedicated NAS builds available with virtualization features.

 

Don't sweat the small stuff, like the guys that constantly keep having to point out that redundancy isn't backup ;)


5 hours ago, dalekphalm said:

If you want to use mdadm software RAID, you can, but it's less than ideal with a hardware RAID card.

Well actually, after another solid day of reading, testing, reading........ I've decided to go with:

 

Mint 18.2 Cinnamon for the OS. (I hear what you're saying about Ubuntu and market share and agree, but the only DE I enjoy using is Cinnamon. There is just something about Cinnamon that I love (Mint's version of Cinnamon!). I hate all the DEs that Ubuntu comes with. (Tried them all.) Now, I know I can add an older version of Cinnamon to Ubuntu, but I'm worried about what will happen when it comes time to upgrade to a new version of LTS. Worried I'll break something or the upgrade will fail, since I had to use a PPA to install an older version of Cinnamon.

 

Next up..... and this is where I need a little more help.... I'm going to add my 4 extra drives and just go with an HBA and mdadm RAID 6, and format the partition as XFS (because I read that ext4 has problems going beyond a 16TB partition size?). But which HBA? The 9207-8i (I've heard horror stories concerning the P20 firmware), the 9211-8i, or an M1015 flashed to IT mode? Which one should I pick, or should I look at something else?
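For what it's worth, the HBA + mdadm + XFS route is only a handful of commands; a sketch with hypothetical device names (verify yours with `lsblk` before touching anything):

```shell
# Build a 12-drive RAID 6 array (assumes the data disks are sdb..sdm)
sudo mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sd[b-m]

# Persist the array definition so it assembles at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

# XFS has no 16TB headache; mkfs reads the md geometry for stripe hints
sudo mkfs.xfs /dev/md0
```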

 

Lastly, you say that mdadm is less than ideal with a RAID card. Can you expand on that? At the end of the day (and as somebody else just mentioned), this is just a Plex/file server. My backup strategy is that every night, the Plex server (every single file on it) gets backed up to another server.
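As an aside, that nightly job is the sort of thing one cron line covers; a sketch with made-up paths and hostname:

```shell
# /etc/cron.d/media-backup (hypothetical): mirror the array at 03:00 nightly
# -a preserves permissions/timestamps, -H keeps hard links,
# --delete makes the destination an exact mirror (removals propagate!)
0 3 * * * root rsync -aH --delete /mnt/array/ backupbox:/mnt/array-backup/
```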

 

I guess I don't need super-ultra-fast resilvering (sorry, ZFS :( ). I just want something that is dead simple, reliable, and will give me next to no problems. Just want to 'set it and forget it'.


2 hours ago, road hazard said:

Well actually, after another solid day of reading, testing, reading........ I've decided to go with:

 

Mint 18.2 Cinnamon for the OS. (I hear what you're saying about Ubuntu and market share and agree, but the only DE I enjoy using is Cinnamon. There is just something about Cinnamon that I love (Mint's version of Cinnamon!). I hate all the DEs that Ubuntu comes with. (Tried them all.) Now, I know I can add an older version of Cinnamon to Ubuntu, but I'm worried about what will happen when it comes time to upgrade to a new version of LTS. Worried I'll break something or the upgrade will fail, since I had to use a PPA to install an older version of Cinnamon.

 

Next up..... and this is where I need a little more help.... I'm going to add my 4 extra drives and just go with an HBA and mdadm RAID 6, and format the partition as XFS (because I read that ext4 has problems going beyond a 16TB partition size?). But which HBA? The 9207-8i (I've heard horror stories concerning the P20 firmware), the 9211-8i, or an M1015 flashed to IT mode? Which one should I pick, or should I look at something else?

 

Lastly, you say that mdadm is less than ideal with a RAID card. Can you expand on that? At the end of the day (and as somebody else just mentioned), this is just a Plex/file server. My backup strategy is that every night, the Plex server (every single file on it) gets backed up to another server.

 

I guess I don't need super-ultra-fast resilvering (sorry, ZFS :( ). I just want something that is dead simple, reliable, and will give me next to no problems. Just want to 'set it and forget it'.

Sure, if you have a preferred distro, that's no problem. Most people on here asking have no idea though, so starting the uninitiated with Ubuntu is usually easier.

 

In terms of HBA, as long as it uses the SAS2008 chipset, Linux shouldn't care too much about the specific model.

 

However I see now that the LSI 9207 uses a newer chipset, so maybe there are issues with that.

 

I actually have a 9207-8e - I'm using it with FreeNAS. I have no issues with mine, but I've never used Linux with it. I'm using it with 6x 3TB HDD's in a single RAIDZ1 ZFS volume.

 

mdadm is less than ideal when using a hardware RAID card (in IR mode) because hardware RAID does not generally pass through all SMART data, even when you configure each HDD as its own volume (no RAID).

 

You CAN get it to work nicely by flashing the RAID card into IT mode (IR: Integrated RAID; IT: Initiator Target).

 

Flashing a RAID Card into IT mode basically turns it into a fully functional HBA with direct disk passthrough to the OS. SMART Data is generally fully accessible by the OS.
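For anyone reading along, the crossflash itself (on SAS2008-based cards like the M1015) is usually a short sas2flash session from a DOS or UEFI boot disk. This is a rough outline only, with placeholder values; a wrong firmware image can brick the card, so follow a guide for your exact model:

```shell
# Record the card's SAS address before erasing anything!
sas2flash -listall

# Erase the existing (IR) flash region
sas2flash -o -e 6

# Write the IT-mode firmware (omitting the boot ROM speeds up POST)
sas2flash -o -f 2118it.bin

# Restore the SAS address noted earlier (placeholder shown)
sas2flash -o -sasadd 500605bXXXXXXXXX
```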


9 minutes ago, dalekphalm said:

In terms of HBA, as long as it uses the SAS2008 chipset, Linux shouldn't care too much about the specific model.

Glad you brought that up.... I remember looking on here:

https://www.servethehome.com/buyers-guides/top-hardware-components-freenas-nas-servers/top-picks-freenas-hbas/

...and in their top list of HBAs, they mention 'SAS 3200'. But I don't understand that. Where can I go to find an EASY TO UNDERSTAND chart that lists all the cards in the 3200 series?

 

13 minutes ago, dalekphalm said:

mdadm is less than ideal when using a hardware RAID card (in IR mode) because hardware RAID does not generally pass through all SMART data, even when you configure each HDD as its own volume (no RAID).

 

You CAN get it to work nicely by flashing the RAID card into IT mode (IR: Integrated RAID; IT: Initiator Target).

 

Flashing a RAID Card into IT mode basically turns it into a fully functional HBA with direct disk passthrough to the OS. SMART Data is generally fully accessible by the OS.

 

Yup, totally understand all that. My H700 can't be flashed to IT mode, and since I'm going with mdadm RAID 6, that's why I need to find an excellent HBA. Once this thing is set up, I should be good to go for at least 1-2 years (based on my current rate of hoarding). Once upgraded, I don't want to touch any of this until it's time for a refresh. That's why I'm doing a ton of homework and want to pick the best card I can for my situation. Doing all this reading and testing makes my head hurt. :(

 


To bring this thread to a close, I ordered an LSI 9207-8i HBA (I'll make sure it's on the P19 firmware, since everyone complains about the P20 version) and will re-use my forward breakout cables from the PERC H700 to hook up 8 of my 4TB drives, hook the other four 4TB drives straight to my motherboard, and combine them all into an mdadm RAID 6 array.

 

The CPU will be an i7 7700K, and I'm going to add another 8GB of RAM, taking me to a total of 16GB. The OS will be Linux Mint 18.2 Cinnamon.

 

Thanks for all the replies everyone!

 
