
Building an 80TB storage server

Hi All,

 

I need to build an 80TB storage server to back up my videos.

It is very important to keep the risk of losing data as small as possible.

With RAID 10 I lose too much space, so RAID 6 seems like the best option from what I can find so far. I also looked at Rockstor and FreeNAS, but Rockstor doesn't recommend its RAID 6 for production, and FreeNAS is based on FreeBSD, which I don't have much experience with.
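
To put rough numbers on the space trade-off I mean (a minimal sketch, assuming a hypothetical full build of 20x 4TB disks):

```python
# Rough usable-capacity comparison: RAID 10 vs RAID 6.
# Assumption for illustration: 20 disks of 4 TB each.
disks = 20
size_tb = 4.0

raid10_usable = disks * size_tb / 2    # mirrors cost half the raw space
raid6_usable = (disks - 2) * size_tb   # two disks' worth of parity

print(f"RAID 10: {raid10_usable:.0f} TB usable")  # 40 TB
print(f"RAID 6:  {raid6_usable:.0f} TB usable")   # 72 TB
```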

 

I am really struggling to choose the best and safest option, so any advice is very welcome.

 

The following system setup is what I have in mind:

16x hot-swap Supermicro case
Supermicro X10SLL-F
Intel Xeon E3-1220 V3
16GB ECC memory
LSI MegaRAID SAS 9260-16i
Intel 10G Ethernet Server Adapter 10Gbps Dual Port PCI-E X520-DA2
4x HGST Deskstar NAS 4TB HDN724040ALE640

 

I'm starting with 4x 4TB to keep the cost as low as possible; later, when I need more storage, I'll add more disks to the system until I reach 80TB.

 

Thanks a lot in advance for your replies.

 

Greetings


13 minutes ago, oxzhor said:

-snip-

FreeNAS is administered from a web interface, so the fact that it runs FreeBSD shouldn't be an issue. If you want to go with a hardware RAID solution, make sure you also get the BBU + cache upgrade for the card, or RAID 6 writes will still be slow. Also set up weekly patrol reads to scan for errors and fix them.
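
For context on why the write-back cache matters: every small random write on RAID 6 turns into several disk operations (reading and rewriting the data block plus both parity blocks). A rough sketch of that penalty, assuming ~150 random IOPS per 7200rpm disk and the usual 6-I/O figure:

```python
# Rough RAID 6 small-write penalty: each random write costs about
# 6 disk I/Os (read + write of the data block, P parity, Q parity).
DISK_IOPS = 150        # assumed per 7200 rpm disk
WRITE_PENALTY = 6
disks = 16

raw_iops = disks * DISK_IOPS
print(f"~{raw_iops // WRITE_PENALTY} random write IOPS without cache")  # ~400
```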

 

Edit: Also, expanding an existing storage pool in FreeNAS/ZFS isn't a simple thing to do; it's basically a side-by-side migration or rip-and-replace. People with more experience can chime in, since I don't use FreeNAS or ZFS. Hardware RAID 6 is online-expandable though, which is its one pro over ZFS.


4 minutes ago, DarkRuskov said:

Is the Xeon really necessary?

And if your videos are that important to you, you really should have a backup (preferably off-site) of your files in addition to your file server.

I already have this CPU, so that's why I want to use it in this build :).


3 minutes ago, oxzhor said:

I already have this CPU, so that's why I want to use it in this build :).

I personally don't see any issues using that CPU for a NAS build. FreeNAS does need some grunt for RAIDZ2 (the ZFS equivalent of RAID 6), and that CPU gives you future expansion capability. Overall the system sounds fine to me; my two servers are basically the same except for the HBA and chassis. My two concerns are: 1. make sure FreeNAS has support for that NIC, and 2. flash that RAID card to IT mode if you're going to use FreeNAS.



Xeons are always necessary for mission-critical work.

 

I suggest you don't use RAID at all. If it's really absolutely critical that you don't lose anything, then use each disk as a separate volume and create a mirror of that disk. That eliminates the RAID and parity issues.

 

 

Also, you should get some WD Ae drives and back up the entire thing once a month. You can do this as a compressed .img or, again, as mirrored backups of your volumes. Keep this device in a separate location, on a different power circuit.
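
A minimal sketch of the kind of per-volume mirror I mean (the /mnt paths are hypothetical placeholders): copy new or changed files to the mirror disk and verify each copy by hash before trusting it:

```python
# Per-volume mirror sketch: copy changed files, then verify by hash.
# The mount points below are hypothetical placeholders.
import hashlib
import shutil
from pathlib import Path

SRC, DST = Path("/mnt/disk1"), Path("/mnt/disk1-mirror")

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

for src in SRC.rglob("*"):
    if not src.is_file():
        continue
    dst = DST / src.relative_to(SRC)
    if dst.exists() and dst.stat().st_mtime >= src.stat().st_mtime:
        continue  # already mirrored and unchanged since last run
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    if sha256(src) != sha256(dst):
        raise IOError(f"mirror verify failed for {src}")
```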


3 minutes ago, leadeater said:

-snip-

Thanks for your reply.

It's a Dell PERC H700 with 1GB cache, incl. BBU; I have one left over from a server. And yes, I was thinking about an off-site backup too.

The reason I also added the Intel Ethernet Server Adapter 10Gbps Dual Port PCI-E X520-DA2 is for fast file transfers on the internal network after video editing at the office.

 


3 minutes ago, brwainer said:

-snip-

The NICs are supported by FreeNAS. That's the nice thing about Intel NICs: they are extremely well supported; it would be hard to find an OS that doesn't support them.


I didn't like FreeNAS at first, but it's fine once you get used to it; I have not had a problem since I properly set it up. I'm now running 2x 3TB WD Reds, no RAID. I just mirror the data a few times a week from my PC to a WD My Cloud using FreeFileSync, just in case of failure etc. So for my important data I have three copies including the original, on three separate systems, plus I also back up to a USB 3.0 device about twice a month, and that one is kept offsite.

In the beginning I had a few problems with FreeNAS permissions, mainly them not changing when I changed them, but I think it was user error in the setup. I started over with the new knowledge and everything has been fine since. I find that it helps to know what you want to share, and with whom, before starting at all.



9 minutes ago, brwainer said:

-snip-

Thanks for your reply.

My only concern with FreeNAS is that you need 1GB of memory per 1TB of storage, and my board maxes out at 32GB, which would mean I can run at most 32TB of storage. Or am I reading that wrong?

The RAIDZ2 option sounds really good to me, because I want to keep my data as safe as possible.

 

You've made me rethink everything, brwainer :)


3 minutes ago, oxzhor said:

-snip-

Very nice RAID card then. What OS do the systems that will access the server run?

 

The only questions left are:

  • What OS to run on this server (the best recommendation depends on the client OS).
  • Hardware RAID or a software solution (both Windows and Linux/BSD have software options).

Hardware RAID pros/cons:

 

Pros:

  • Easily expandable RAID 6
  • High performance, low CPU demand
  • Easily migrated between systems, even between different operating systems
  • Very safe with weekly patrol reads

Cons:

  • Reliant on the RAID card; array imports across different models are possible but not perfect.
  • Long-term data integrity is lower than with ZFS or Windows Storage Spaces + ReFS

Software solution pros/cons:

 

Pros:

  • Not reliant on storage controller (HBA/RAID card)
  • Typically the best possible long-term data integrity
  • Can be migrated between systems (slightly more difficult than hardware RAID)
  • Very safe with scrubs (the patrol-read equivalent)

Cons:

  • Higher CPU demand
  • Best used with ECC system memory (not really a con, since you should use ECC anyway, but some people see it as one)
  • Can be migrated between systems, but slightly less easily than hardware RAID (listed as both a pro and a con for this reason)

3 minutes ago, oxzhor said:

-snip-

The 1GB per 1TB rule isn't actually the case; it really only applies when you have deduplication turned on.

 

Quote

Effective use of deduplication may require large RAM capacity; recommendations range between 1 and 5 GB of RAM for every TB of storage. Insufficient physical memory or lack of ZFS cache can result in virtual memory thrashing when using deduplication, which can either lower performance or result in complete memory starvation. Solid-state drives (SSDs) can be used to cache deduplication tables, thereby speeding up deduplication performance.

https://en.wikipedia.org/wiki/ZFS#Deduplication
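
To put numbers on it for this build (a back-of-envelope sketch, treating the 1-5 GB per TB range quoted above as the dedup guideline; without dedup, FreeNAS's documented 8GB minimum plus some headroom for the ARC is the usual guidance):

```python
# Back-of-envelope RAM sizing for an 80TB pool, using the
# 1-5 GB per TB of storage range quoted above for dedup.
pool_tb = 80

dedup_low_gb = pool_tb * 1
dedup_high_gb = pool_tb * 5

print(f"with dedup: {dedup_low_gb}-{dedup_high_gb} GB recommended")     # 80-400 GB
print("without dedup: FreeNAS's usual 8GB minimum plus ARC headroom")   # 16-32 GB is comfortable
```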


1 hour ago, leadeater said:

-snip-

Very, very interesting information from you all, guys! I've started reading the FreeNAS wiki and forums again to build up my knowledge of the system. From what I've read so far, FreeNAS matches my wishes best.

 

If I choose RAID-Z2 with 24x HGST Deskstar NAS 4TB HDN724040ALE640:
Raw Storage: 96.0 TB / 96000.0 GB
Usable Storage: 80.0 TB / 81956.4 GB

 

That would be perfect for me; I realize I'd be better off upgrading the case to a 24-bay one.

And I'd use 3x LSI 9240-8i (IBM M1015) plus the Intel 10G Ethernet Server Adapter 10Gbps Dual Port PCI-E X520-DA2 for off-site backup.

I just need to find a Supermicro board with enough PCI-E slots for all the cards.

 

I found the X9SCM-F, but that's already an old socket (LGA 1155). Does anyone know a newer Supermicro board with the required slots?


FreeNAS advises against putting so many disks in a single vdev, usually recommending around 10 per vdev. One of the biggest issues is that when a drive fails you have to resilver it (basically recalculate the parity), and the larger the vdev, the longer that takes. The other issue is that you're limiting your write IOPS by using a single vdev (according to their wiki). There are other caveats I can't remember, but do a little research/planning on how to create your large pool.

 

You can expand your pool by adding vdevs at any point. Just never add a single-disk vdev: if any vdev fails, the whole pool dies.
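
To illustrate the IOPS point with rough numbers (a sketch assuming ~150 random IOPS per 7200rpm disk, and the common rule of thumb that a RAIDZ vdev delivers roughly one disk's worth of random IOPS regardless of its width):

```python
# Rough random-IOPS scaling with vdev count. Rule of thumb:
# each RAIDZ vdev performs like a single disk for random I/O.
DISK_IOPS = 150  # assumed for a 7200 rpm drive

for vdevs, width in [(1, 24), (2, 12), (3, 8)]:
    print(f"{vdevs} x {width}-wide RAIDZ2: ~{vdevs * DISK_IOPS} random IOPS")
```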


54 minutes ago, Mikensan said:

-snip-

Thank you for your reply.

I read that too:

Quote

It is not recommended that vdevs contain more than 11 disks under any circumstances.

 

So what I can do is split it into 3x 8-disk RAID-Z2 vdevs, each:
Raw Storage: 32.0 TB / 32000.0 GB
Usable Storage: 21.8 TB / 22351.7 GB

 

Total 65.4TB

 

That's 14.6TB short of what I need :).


Using this server for long-term video storage is a bad idea. You should seriously look at LTO tape drives and the like instead.


4 minutes ago, oxzhor said:

-snip-

You could do 12 disks per vdev; there isn't any hard limitation, just recommendations.

 

12x 4TB in RAIDZ2 = 36.38TiB per vdev, so with two vdevs you'll have ~72TiB usable. An 8TB drop isn't terrible in exchange for two-disk redundancy per vdev.
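
These figures are easy to check: RAIDZ2 loses two disks per vdev to parity, and the calculator numbers quoted in this thread are just the raw data space converted to TiB (before ZFS metadata/slop overhead). A quick sketch reproducing them:

```python
# Reproduce the thread's RAIDZ2 capacity figures: per vdev, two disks
# go to parity; "usable" here is data space converted from TB to TiB.
TB = 10**12
TIB = 2**40
disk_tb = 4

for vdevs, width in [(1, 24), (3, 8), (2, 12)]:
    data_disks = vdevs * (width - 2)
    usable_tib = data_disks * disk_tb * TB / TIB
    print(f"{vdevs} x {width}-wide RAIDZ2: {usable_tib:.2f} TiB")

# 1 x 24-wide: 80.03 TiB  (the "80.0 TB usable" above)
# 3 x  8-wide: 65.48 TiB  (3 x 21.83)
# 2 x 12-wide: 72.76 TiB  (2 x 36.38, i.e. ~72TiB)
```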


4 hours ago, oxzhor said:

-snip-

 

3 hours ago, Mikensan said:

-snip-

You could also enable deduplication, but that comes with the downside of requiring much more RAM. The 72TiB setup above is probably the best solution.


4 hours ago, AshleyAshes said:

-snip-

Terrible idea for a home user. It is unlikely that he is going to store them in a climate-controlled area, and let's face it, the data on the tape will degrade pretty quickly if not stored properly.


12 minutes ago, Blake said:

-snip-

When you're thinking about building an 80TB box with Supermicro hardware for long-term cold storage of film/video projects plus hot storage for active projects, you have exceeded the specifications of 'home user'. Moreover, if this data is important, like 'content creator important', having it all in one box, a single point of failure, is foolhardy. RAID (and its alternatives) is not a backup. This is a lot of money to invest, and pursuing a machine that is 'bulletproof' will not result in a box that is actually bulletproof, by a long shot. Multiple backups are the closest you can get. A long-term storage solution like LTO, Blu-ray XL or anything like that is a more grounded option for long-term storage.


Just now, AshleyAshes said:

-snip-

You're right and wrong. There is nothing that says whether he is or is not a home user. But the fact that the OP needed to ask questions at this level shows he does not know how to store LTO tapes for the long term (i.e. in a tape vault), so it is not a leap to assume what I assumed. With regard to SPOF and RAID, you are correct. LTO and Blu-ray are not suitable for long-term storage, as both require climate-controlled environments to keep the data safe (disc rot is common for optical media at around the 10-year mark).

 

Unless you have a plan to replace the storage media on a regular schedule, you won't have a proper long-term solution (I'd say just throw it in AWS/Azure and configure it as geo-redundant).


37 minutes ago, Blake said:

-snip-

That is one of the core reasons why we are moving away from tape backups. In fact, many businesses no longer use tape backups at all.

 

We use two 500TB SANs in different cities as back-end disk storage for Commvault, which currently keeps backups on disk for one year. That only uses about 56% of the capacity, but the DDB disks are too full to increase the retention any further. We do still use tapes, but while the tapes themselves are very cheap, autoloading tape libraries and the tape drives that go in them are VERY expensive.

 

Tapes still have their place, but they are not bulletproof, and neither are the tape drives that read/write them.

 

There is no perfect answer for how best to do backups. Everyone's situation, requirements and finances are different, so it is about finding the right mix of protection vs cost. In this specific case I would agree that two systems in two locations, set up with replication, would be ideal. You can start small and add vdevs as capacity is required; that way you keep the initial cost low and benefit from the ever-decreasing $/GB of disks. Basically, the same money for 80TB now, spread over two years instead, would likely get you 120-160TB+.

 

The problem here is that a lot of the key components already existed, so a second system would add significantly to the cost, even without the 10Gb NIC and using only cheap HBAs (IBM M1015 in IT mode).


11 hours ago, JCBiggs said:

-snip-

I suggest you don't use RAID at all. If it's really absolutely critical that you don't lose anything, then use each disk as a separate volume and create a mirror of that disk. That eliminates the RAID and parity issues.

 

Mirroring the disk? He might as well just do RAID 1 then; there's no point manually mirroring it unless it's for backup.

 

I suggest OP go with FreeNAS.



6 hours ago, JCBiggs said:

If you use RAID and get an error on the disk, then RAID automatically copies the error. Delayed sync is the better way for mission-critical data.

Not necessarily. RAID can still pick up errors on the disk; that is what patrol reads are for. The risk of corruption arises if you leave the period between patrol reads too long, or if a disk fails in the array before a patrol read has happened: if there is an error on a non-failed disk, the corruption will be copied during the rebuild, or it will cause the rebuild to fail.

 

People seem to think that there is no protection from corruption in RAID. There is, but not all RAID cards do patrol reads automatically; sometimes you actually have to enable them. What makes ZFS safer is that it has end-to-end integrity and the data is checked on every read and write. You should also schedule scrubs, just like patrol reads on RAID.

 

There has been way too much demonizing of RAID on the internet, especially when it comes to explaining the virtues of ZFS. ZFS can stand on its own without everyone needing to go around saying RAID is dead. Anyone who says that is, to me, showing a clear lack of real-world experience with it.

 

You don't need a nail gun to drive in every nail, and sometimes you can't use a nail gun, but clearly a nail gun is the best tool for driving in nails.

 

Edit: Also, when an error happens on a disk it does not trigger a copy of the error to the other disks; that's not how RAID works. RAID is write-once with implicit trust that the data is always safe and true, whereas ZFS does not make this assumption, which is why integrity is checked on every read.
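
To make the end-to-end integrity idea concrete, here's a conceptual sketch (not ZFS code, just the property being described): every block is stored with a checksum at write time, and a read only succeeds if the data still matches it:

```python
# Conceptual checksum-on-write / verify-on-read sketch (not ZFS code).
import hashlib

store = {}  # block id -> (data, checksum)

def write_block(bid, data):
    # Checksum is computed once, at write time, and stored with the block.
    store[bid] = (data, hashlib.sha256(data).digest())

def read_block(bid):
    data, checksum = store[bid]
    # Every read re-verifies; a mismatch means silent corruption was caught
    # (ZFS would then repair the block from parity or a mirror copy).
    if hashlib.sha256(data).digest() != checksum:
        raise IOError(f"block {bid} failed checksum")
    return data

write_block(0, b"video frame data")
assert read_block(0) == b"video frame data"
```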

