Starter Data Servers

Guest

Hi, I've been planning out some data servers for a while now, and this is my approach: two main types of servers, one for archiving/safe backups and the other for main operating storage. I was told RAID is fine, even recommended, in the right circumstances, and if I recall correctly, this situation fits. The main server will use RAID for redundancy because it will be worked off of directly, while the archiving/backup server will not use RAID. My idea is to keep the more important data on the archive/backup server, while work in progress and general data files live on the main uptime host server.

 

I'm personally not as big a fan of RAID as I used to be, since I found out it has some major flaws, like the RAID card itself not being redundant. But does this setup work? And for the archive/backup server, what configuration/hardware would you recommend?


I'd suggest using RAID on both. I'd also suggest backing one server up to an HDD or tape every month, so that a virus or the like can't destroy the drives. RAID is safer and faster, and if you use ZFS or BTRFS you aren't tied to a particular RAID card.
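
Something like this monthly cold-copy job is what I mean. A minimal sketch assuming a Linux host with rsync installed; the paths and mount point are hypothetical examples:

```python
#!/usr/bin/env python3
# Minimal sketch of a monthly cold-backup pass: the backup disk is
# mounted only for the duration of the job, so malware on the server
# can't reach it the rest of the time. Paths and the mount point are
# hypothetical examples; assumes a Linux host with rsync installed.
import subprocess

SOURCE = "/srv/data/"            # live data to protect
MOUNTPOINT = "/mnt/cold-backup"  # offline HDD, listed in /etc/fstab
DEST = MOUNTPOINT + "/data/"

subprocess.run(["mount", MOUNTPOINT], check=True)
try:
    # --archive preserves permissions and timestamps; --delete mirrors removals
    subprocess.run(["rsync", "--archive", "--delete", SOURCE, DEST], check=True)
finally:
    # Unmount so the backup disk goes back offline until next month
    subprocess.run(["umount", MOUNTPOINT], check=True)
```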


3 hours ago, Electronics Wizardy said:

I'd suggest using RAID on both. I'd also suggest backing one server up to an HDD or tape every month, so that a virus or the like can't destroy the drives. RAID is safer and faster, and if you use ZFS or BTRFS you aren't tied to a particular RAID card.

I'm avoiding RAID on the archive/backup server for safety, since RAID itself can fail.


2 hours ago, Yames said:

Hi, I've been planning out some data servers for a while now, and this is my approach: two main types of servers, one for archiving/safe backups and the other for main operating storage. I was told RAID is fine, even recommended, in the right circumstances, and if I recall correctly, this situation fits. The main server will use RAID for redundancy because it will be worked off of directly, while the archiving/backup server will not use RAID. My idea is to keep the more important data on the archive/backup server, while work in progress and general data files live on the main uptime host server.

 

I'm personally not as big a fan of RAID as I used to be, since I found out it has some major flaws, like the RAID card itself not being redundant. But does this setup work? And for the archive/backup server, what configuration/hardware would you recommend?

I would suggest considering something like ZFS or BTRFS for the archive server. If you choose to keep the archive server JBOD instead, you'll have to keep note of which drive holds what (which might get to be a pain to track).
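
If it helps, the ZFS route looks roughly like this. A hedged sketch assuming ZFS on Linux is installed; the pool name and disk IDs are hypothetical examples:

```python
#!/usr/bin/env python3
# Sketch: create a mirrored ZFS pool so the redundancy lives in software
# rather than on a RAID card. Assumes the ZFS utilities are installed;
# the pool name and disk IDs are hypothetical examples.
import subprocess

# Two-disk mirror; "raidz" with three or more disks is the parity option.
subprocess.run(
    ["zpool", "create", "archive", "mirror",
     "/dev/disk/by-id/ata-EXAMPLE_DISK_A",
     "/dev/disk/by-id/ata-EXAMPLE_DISK_B"],
    check=True,
)

# Periodic scrubs read everything back, verify checksums, and repair
# bad blocks from the mirror copy.
subprocess.run(["zpool", "scrub", "archive"], check=True)
```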

 

You can make a RAID card redundant if you have the appropriate backplane with a proper SAS expander on it. I know SuperMicro server chassis support this, but I don't really have a need for two RAID cards in a home NAS... just one RAID card is plenty overkill already. The RAID controller has to support RAID spanning, though. That said, it's not too hard to find a RAID card of the same brand and just toss it into the same array if a card does die. As usual, get a RAID card with an onboard battery; something like the LSI 9361-8i with the battery would be good.

Also, make sure you check the settings for the RAID card, as there's a setting for the build/rebuild rate. I didn't know this, but by default it's 10% speed, so I waited seven days for it to build the array (four 4TB drives in RAID 10). When I got to building my new array, I learned about this setting and cranked it up to 100%, and wow, it took only 10 hours to build a larger array (six 4TB drives in RAID 10). Make sure to turn it back to something reasonable once the array is built (though I just left it at 100% since it's a home NAS and I couldn't notice the difference anyway). I would also run patrol reads every week.
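
For reference, that rate can be checked and changed from the command line with StorCLI on LSI cards. A rough sketch; "/c0" (controller 0) is an assumption, and the exact syntax can vary between StorCLI versions:

```python
#!/usr/bin/env python3
# Sketch: query and raise the LSI rebuild/build rate with StorCLI, then
# put it back once the array has finished building. "/c0" (controller 0)
# is an assumption, and syntax can differ between StorCLI versions.
import subprocess

def storcli(*args):
    subprocess.run(["storcli", *args], check=True)

storcli("/c0", "show", "rebuildrate")     # check the current (default) rate
storcli("/c0", "set", "rebuildrate=100")  # full speed while building

# ... build the array, then restore a gentler rate for normal operation:
storcli("/c0", "set", "rebuildrate=30")
```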

 

Also, be sure to get a proper UPS for the whole server system to protect against power loss, and make sure to use proper server equipment (Xeon CPUs, ECC memory, Cxxx-series chipset motherboards), not consumer-grade stuff.


10 hours ago, scottyseng said:

-snip

Any good videos explaining ZFS or BTRFS? And for making a RAID card redundant, can you show examples?


2 hours ago, Yames said:

Any good videos explaining ZFS or BTRFS? And for making a RAID card redundant, can you show examples?

Ah, sorry, I don't know of any videos for either; I learned about them by reading the documentation for both ZFS and BTRFS. If you have a limit on RAM, BTRFS (Rockstor / unRAID) is the better option.
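
On the RAM point, ZFS's ARC cache grabs most free memory by default, but ZFS on Linux lets you cap it with a module parameter. A minimal sketch; the 4 GiB figure is just an example:

```python
#!/usr/bin/env python3
# Sketch: cap the ZFS ARC on ZFS on Linux via a modprobe option.
# The 4 GiB limit is an arbitrary example; it takes effect once the zfs
# module is reloaded or the host reboots. Run as root.
GIB = 1024 ** 3
arc_max = 4 * GIB

with open("/etc/modprobe.d/zfs.conf", "w") as f:
    f.write(f"options zfs zfs_arc_max={arc_max}\n")
```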

 

Ah, I don't remember the exact details of how to set up a redundant RAID card array. I did e-mail LSI support a while back and it is possible, but the requirements are really strict (all SAS drives, an LSI expander-chip-based backplane, same-model controllers). I wouldn't recommend it unless you have the funds (for example, a WD Red 4TB is $140, whereas my WD Re SAS 4TB was $250; the WD Re SATA drive is cheaper as well). I think you'll be fine with one RAID controller as long as you use proper server hardware (Xeon, Cxxx-series motherboard, no overclocking), as they're fairly hard to kill. In Linus' case with the dead LSI MegaRAID card, the motherboard pretty much killed the single controller (it was a consumer motherboard). Also, if you have a consistency check and patrol reads scheduled weekly, the whole issue of bit rot is reduced. SAS drives also let the RAID controller know when there's a drive read error (whereas a SATA drive tries to fix it itself and reports the data as good).
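
Scheduling those on an LSI controller looks roughly like this with StorCLI. Again just a sketch: "/c0" is an assumption and the option names vary a bit between StorCLI versions, so check your controller's reference:

```python
#!/usr/bin/env python3
# Sketch: enable automatic patrol reads and a recurring consistency
# check with StorCLI. "/c0" is an assumption, and option names vary a
# little between StorCLI versions, so treat this as a starting point.
import subprocess

def storcli(*args):
    subprocess.run(["storcli", *args], check=True)

storcli("/c0", "set", "patrolread=on", "mode=auto")
# Concurrent consistency check, repeating every 168 hours (weekly):
storcli("/c0", "set", "cc=conc", "delay=168")
```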


Also note that ZFS/BTRFS are not RAID; they are redundant and resilient software storage systems designed to mitigate the shortcomings of RAID.

 

Currently I don't consider either traditional RAID or ZFS/BTRFS to be outright better than the other, but each has use cases where it is clearly the better fit. Your archive server should most definitely use ZFS/BTRFS.


On 2/16/2016 at 8:09 AM, Yames said:

And for making a RAID card redundant, can you show examples? 

This would be very expensive: easily a few thousand for the cards, plus it requires dual-port SAS drives (think $400/drive at the cheap end). Unless you need 24/7/365 uptime, you won't need redundant hot RAID controllers. All good RAID controllers store the array config not on the card but on the array itself; this allows you to keep a cold spare in the event of a failure, if you have the budget for it and cannot withstand the downtime of overnighting a replacement.

(Personally, I've had none of my high-end HP RAID cards actually fail since I switched away from LSI six years ago.)

 

You didn't say what the operational storage is for, but if it's something like a highly active SQL server, that can change a lot of things. There are a lot of things you can do to make solid, reliable storage, but money is going to be a very major factor.


1 hour ago, Kadah said:

This would be very expensive: easily a few thousand for the cards, plus it requires dual-port SAS drives (think $400/drive at the cheap end). Unless you need 24/7/365 uptime, you won't need redundant hot RAID controllers. All good RAID controllers store the array config not on the card but on the array itself; this allows you to keep a cold spare in the event of a failure, if you have the budget for it and cannot withstand the downtime of overnighting a replacement.

(Personally, I've had none of my high-end HP RAID cards actually fail since I switched away from LSI six years ago.)

Yep, even NL-SAS is very expensive for personal use, and it really shouldn't be considered, since SSDs are just so much more appropriate in these cases for the same price and size.

 

Also, HP RAID cards were LSI OEM for a long time; they recently switched to PMC Adaptec for either the Px10 or Px20 generation of cards, but I can't remember which. Every other server manufacturer still uses LSI OEM, though. The only thing that could have made a difference would be better firmware, product compatibility testing, qualified parts lists, etc.


The advantage of proprietary RAID cards (like the HP SmartArray P420, for example) is that when your RAID card fails, you can easily replace it with one from the same line and it will work (because the RAID configuration is also stored on the disks themselves). The disadvantage is that you're stuck with that RAID card: replacing it with a newer or better card is a real pain or outright impossible (so once you've decided on the HP SmartArray P420 as your RAID controller, you have to keep using it and can't just upgrade to another card or another vendor).

The other advantage of those RAID cards is that they are independent of the OS (so Windows will see the LUN the same way Linux does).


The solution I went with was picking up an HP ProLiant ML350 G6 server. It supports up to 200GB of RAM and two Xeon X5570 processors, and it takes six hard drives off the bat. The biggest bonus is that it also comes with the HP RAID card mentioned above, which would be my solution as well. I would also point out that the server has ultra-low-power options, the fans were quieter than my HTPC, and the server itself cost less than £100 / $150 USD. The other advantage is that these units are built for enterprise use and are therefore reliable: I migrated my 5TB disk from one machine, literally plugged it into another machine with an HP card, and it detected and provisioned it. Totally painless, zero config required, not even a reboot!

 

Just another option


49 minutes ago, pat-e said:

The disadvantage is that you're stuck with that RAID card: replacing it with a newer or better card is a real pain or outright impossible (so once you've decided on the HP SmartArray P420 as your RAID controller, you have to keep using it and can't just upgrade to another card or another vendor).

I've had arrays import from P420s onto P812s. It wasn't intentional, as the drives were going to be wiped. Normally you would clone data from the old array to a new one in place, or restore from a backup.

 

Same for RAID migrations: unless the downtime cannot be tolerated, it's much faster to import from backup onto a new array than to expand. On my first-level backup server, the last expansion of adding three disks took over 12 days, while an import from backup would have taken less than two. Two days without the backup system was not a good option, though, so the slow expansion was preferred in that case.

 

ZFS does not have equivalent expansion capabilities, only the addition of extra storage to the end of the data set (think LVM).

This is why I use it on VMs with over-provisioning: define the VMDK with the maximum I think it will need before the VM will have to be replaced or redesigned to support more, and add additional storage to the host (usually a SAN).
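
To illustrate the expansion point: growing a ZFS pool means appending another whole vdev, not reshaping the existing one. A rough sketch with a hypothetical pool name and disk IDs:

```python
#!/usr/bin/env python3
# Sketch: ZFS "expansion" appends a whole new vdev to the pool (much
# like adding a physical volume to an LVM volume group) rather than
# restriping the existing vdev. Pool name and disk IDs are hypothetical.
import subprocess

# The existing raidz vdev stays as-is; this bolts a second one onto the end.
subprocess.run(
    ["zpool", "add", "tank", "raidz",
     "/dev/disk/by-id/ata-EXAMPLE_DISK_C",
     "/dev/disk/by-id/ata-EXAMPLE_DISK_D",
     "/dev/disk/by-id/ata-EXAMPLE_DISK_E"],
    check=True,
)
```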


4 hours ago, Kadah said:

I've had arrays import from P420s onto P812s. It wasn't intentional, as the drives were going to be wiped. Normally you would clone data from the old array to a new one in place, or restore from a backup.

Yep, I've done the same thing many times, even across different vendors (HP to IBM, etc.). Except for the new HP cards (PMC Adaptec), they are all LSI OEM, so there should be no reason you couldn't do this type of thing.


So for my originally stated configuration, what drives should I use? I'm planning on WD drives because the color coding makes them easy to reference; I can take a look at Seagate later. So... Red, Red Pro, Re, Se, or Ae?

 

Sounds like Red drives, but I could be wrong.


2 hours ago, Yames said:

So... Red, Red Pro, Re, Se, or Ae? 

 

If you are budget-limited, Reds are not bad for general data stores; just don't expect performance.

 

For "operational storage" it depends on the application they'll be used for. Generally the Re would be the good option though, just the most expensive. If its a storage service centric, the Se, Red Pro, Red are good (in that order in terms of budget from high to low).

 

For archival/backup, Red or Red Pro. The Ae are expensive and intended for write-few, read-infrequently workloads; they would be good for long-term backups in place of tapes, but given the cost, I'm not sure they're worth it for small-scale deployments.

 

As for Seagate, I use their Enterprise NAS HDDs for VM datastores. They seem quite good at a wide range of tasks in my experience.

 

My current backup system is using HGST NAS drives because last year they were the only suitable 6TB drives for that application. The WD Red Pros were not in stock in 6TB at the time, but they would have been suitable, just more expensive.


15 hours ago, Kadah said:

 

-snip

For archiving I was thinking something like Greens, because they spin up when being accessed and otherwise stay off. I'm not very good with hard drives, so I don't know exactly, but that does sound like an okay option.

 

But I don't think I really need enterprise-grade drives. I don't understand why I would need that class/tier: what features do they offer that stand out above the other drives?


10 minutes ago, Yames said:

For archiving I was thinking something like Greens, because they spin up when being accessed and otherwise stay off. I'm not very good with hard drives, so I don't know exactly, but that does sound like an okay option.

 

But I don't think I really need enterprise-grade drives. I don't understand why I would need that class/tier: what features do they offer that stand out above the other drives?

Any disk can be configured to power down when idle.
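
For example, a sketch using hdparm on Linux (the device path is a hypothetical example):

```python
#!/usr/bin/env python3
# Sketch: tell a drive to spin down after idling, using hdparm on Linux.
# The device path is a hypothetical example. hdparm's -S value uses an
# odd encoding: 1-240 are multiples of 5 seconds, 241-251 are multiples
# of 30 minutes, so 241 means a 30-minute idle timeout.
import subprocess

subprocess.run(["hdparm", "-S", "241", "/dev/sda"], check=True)
```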

 

NAS-certified disks are required when using them with RAID, ZFS, BTRFS, etc. The critical reason for this is TLER (or its equivalent under other names). When there is a read error, the drive gives up within 7 seconds so that it is not incorrectly marked as failed in the array/disk pool. It can afford to do this because there will be another copy of the data on another disk, which is not the case in a single-drive configuration. This is also why you should not use a NAS disk by itself: giving up on a read in that situation is data loss, and you want the drive to keep trying.

 

A non-NAS-certified disk will keep trying to read the problematic data basically forever, halting all I/O in the array, unless the system is configured to drop the disk when it does not respond (usually after 9 seconds).
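
On drives that support SCT ERC, that timer can be read or set with smartctl, and the kernel-side command timeout lives in sysfs. A hedged sketch; the device path is again a hypothetical example:

```python
#!/usr/bin/env python3
# Sketch: set a drive's error-recovery timer (TLER / SCT ERC) to 7 s
# with smartctl, and read the kernel's own per-device command timeout
# from sysfs. The device path is a hypothetical example; not all drives
# support SCT ERC, and the setting often resets on power cycle.
import subprocess
from pathlib import Path

DEV = "/dev/sda"

# smartctl takes the read/write ERC timers in tenths of a second: 70 -> 7.0 s
subprocess.run(["smartctl", "-l", "scterc,70,70", DEV], check=True)

# Seconds the Linux SCSI layer waits on a command before resetting the
# device (and possibly dropping it from the array):
timeout = Path("/sys/block/sda/device/timeout").read_text().strip()
print(f"kernel command timeout for {DEV}: {timeout}s")
```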


On 2/21/2016 at 12:18 PM, Yames said:

For archiving I was thinking something like Greens, because they spin up when being accessed and otherwise stay off. I'm not very good with hard drives, so I don't know exactly, but that does sound like an okay option.

 

But I don't think I really need enterprise-grade drives. I don't understand why I would need that class/tier: what features do they offer that stand out above the other drives?

The Reds (non-Pro) are pretty much RAID-able Greens. If you're doing JBOD or a swappable disk rotation, Greens would be okay, but I would never put them in an array or rely on a single one for archive (i.e. one hot, one cold, swapping them each week).


Was browsing eBay and found this server....

http://www.ebay.com/itm/HP-Proliant-DL180-G6-Server-2U-2x-2-66GHz-Quad-Core-Xeon-12gb-DDR3-500gb-/381541116254?hash=item58d59f695e:g:eqgAAOSwzgRWuKby

 

I read a couple of reviews, and it looks like the power draw would be in the 50-150W range, versus no telling how much with the older Dell. I also found some others with slightly lower-end Xeon processors that have a lower TDP on Intel's site. Would this be a little better suited to my needs?

 

I'm just wondering how some of these Xeons would do with transcoding in Plex?



9 hours ago, impalaguy1230 said:

Was browsing eBay and found this server....

http://www.ebay.com/itm/HP-Proliant-DL180-G6-Server-2U-2x-2-66GHz-Quad-Core-Xeon-12gb-DDR3-500gb-/381541116254?hash=item58d59f695e:g:eqgAAOSwzgRWuKby

 

I read a couple of reviews, and it looks like the power draw would be in the 50-150W range, versus no telling how much with the older Dell. I also found some others with slightly lower-end Xeon processors that have a lower TDP on Intel's site. Would this be a little better suited to my needs?

 

I'm just wondering how some of these Xeons would do with transcoding in Plex?

Those are nice; I've got a couple of them.

Look for X5650 or X5660 Xeons; they are plentiful and not expensive. I run a lot of these, and they're better than the X5500 series.


On 16/2/2016 at 5:09 PM, Yames said:

Any good videos explaining ZFS or BTRFS? And for making a RAID card redundant, can you show examples?

Wendell from Tek Syndicate has a video about RAID and ZFS/BTRFS. You may want to give it a look.

