Redundant storage server for movie editing

Not too long ago, one of my external hard drives, containing all my movie projects and media, died. Of course I had made some backups on other drives, so not everything was lost. It still set me back a few days, though.

Anyhow, since I was also struggling with the sheer amount of data that comes with 4K footage on external hard drives, and with the transfer speeds USB 3.0 offers, I decided to try to build a storage server that can stream 4K footage over the network to my editing software (Avid Media Composer).

 

The problem: I have no actual clue how to do this.

I searched a bit online and found tons of guides on how to build a NAS or storage server, but none of them is aimed specifically at movie editing. I was hoping someone here has experience with this.

 

Now, I have some old but working hardware lying around.

 

What I have:

- Asus P8Z77-V PREMIUM motherboard

- Intel Core i7-3770K CPU

 

I was planning to buy a 2U hot-swap storage chassis. This one, for instance: https://www.inter-tech.de/en/products/ipc/storage-cases/2u-2408

I was also planning to use Unraid as the server OS.

I have 10Gb Ethernet wiring in my home. There is no other 10Gb networking gear, though, like a switch.

 

The questions I have:

- Will ECC memory work as ECC memory in this motherboard?

- How do I get a fast enough transfer link from the server to my PC so that it can stream 4K media over the network to my timeline?

- What RAID card do I need? Do I need a RAID card?

- Do I need a network card to achieve 10Gb transfer speeds? Do I even need 10Gb transfer speeds?

- Should I buy new/used server hardware (motherboard and CPU) rather than reuse the hardware I have lying around?

- Will 8 hot-swappable bays be enough to achieve 40TB of redundant RAID storage?

- Do I need any other networking gear to achieve a 10Gb link?

 

I hope someone here has experience with network storage for movie editing, maybe even specifically with Avid products.

Any help is greatly appreciated!

 

Thanks and Grtz,
Alean

 

=]


Any reason why you want 2U? 4U is great here: more room for drives and easier to build in.

 

Don't use Unraid here; it's slow and a pain to use with iSCSI (which is what you need for Avid, at least when I used it a year ago).

 

What do you mean by 10 gig wiring? Fiber? Copper?

 

What type of footage? RAW? Uncompressed? H.264?

 

Budget? You're probably looking at $2k+ with drives.

 

I'd personally get a C2100 or R510, fill it with 6TB drives, put a basic Linux distro on it, then use an iSCSI share. You're looking at about $200 for the server and about $1,500 for the drives, plus a 10GbE NIC and a boot SSD. I'd also personally run ZFS and put an HBA in it.

 

 

Your questions

 

No ECC on that board; Z77 and the 3770K don't support it.

 

Yep, you can easily stream 4K; 4K is a resolution, not a bitrate, so what matters is the codec's data rate.

 

Depends on the storage setup; I'd probably go HBA + ZFS here.

 

10GbE will be nice here; I'd get used cards on eBay.

 

I'd personally get a used server here, but you can reuse those parts.

 

Yep, but for 40TB of storage on 8 bays you're looking at 6TB drives at minimum; see the sketch at the end of this post.

 

You need a point-to-point connection and some basic networking knowledge.
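
To sanity-check the drive-size math, here is a rough sketch in Python; it ignores filesystem overhead and the decimal-TB vs binary-TiB difference, so treat the numbers as ballpark:

```python
# Rough usable-capacity check for the 8-bay / 40TB question.
def usable_tb(bays: int, drive_tb: int, parity_drives: int) -> int:
    # Classic RAID arithmetic: only the non-parity drives hold data.
    return (bays - parity_drives) * drive_tb

print(usable_tb(8, 6, 1))  # 42 TB: 6TB drives clear 40TB with single parity (RAID5/Z1)
print(usable_tb(8, 6, 2))  # 36 TB: but not with double parity (RAID6/Z2)
print(usable_tb(8, 8, 2))  # 48 TB: 8TB drives clear 40TB even with double parity
```

So with the double-parity setups usually recommended at this array size, 8TB drives are the safer pick.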

 

 


I agree with @Electronics Wizardy

I would get a 2U case with 12 drive bays. That way you can get your 40 (or 48) TB now, assuming 6 drives of 12TB each in RAID 6, and then add another set later.



Thanks so much for the reply, @Electronics Wizardy!

 

I should probably explain my current workflow a bit.

Currently, what I do is:

- I copy the RAW video files to a hard drive. These RAW files usually have a bitrate of around 2000 Mbit/s (of course depending on the codec, etc.).

- I then transcode the footage to a proxy format: 1080p24 DNxHD at 115 Mbit/s.

- I import the transcoded footage into Media Composer for offline editing.

 

What I would like to be able to do in the future is stream the RAW files directly to my timeline for viewing, and stream a 4K proxy format (DNxHR 444, 1492 Mbit/s) to my timeline for editing.

My basic arithmetic tells me that 1492 Mbit/s is more than 1 Gbit/s, so I will need a 10 Gbit link.
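
Spelling that arithmetic out (these are raw line rates, so the stream counts are optimistic; real-world throughput sits a bit below them):

```python
# Link math from the paragraph above, using raw line rates.
proxy_mbit = 1492            # DNxHR 444 UHD proxy bitrate, Mbit/s
print(1_000 // proxy_mbit)   # 0: a 1 GbE link cannot carry even one stream
print(10_000 // proxy_mbit)  # 6: a 10 GbE link fits several simultaneous streams
```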

 

As for the link itself, I have both a Thunderbolt 3 port and a 10Gbit Ethernet connection on my workstation.

My office also has Cat6a copper Ethernet wiring.

So the wires are there, just not the networking equipment.

 

Could you elaborate on the HBA + ZFS setup? How is this done? What do I need?

 

As for budget: I was indeed thinking of somewhere between $2,000 and $2,500.

 

 

Thanks =]


This is what I'd personally get for your setup: reasonable cost, should hit 500+ MB/s, and lots of storage.

 

For the base chassis, I'd get this: https://www.ebay.com/itm/DELL-FS12-TY-C2100-2x-QUAD-CORE-E5520-2-26GHz-8GB-RAM-2x-TRAYS-H700/281725798891?hash=item41982a89eb:g:q08AAOSwnF9Y7kwN

300 USD

 

They have gotten more expensive since I last looked, but it's still a good deal. The dual quad-cores will be fine. 8GB of RAM will work, but any extra gets used as cache and makes it faster. This server has lots of RAM slots, and registered DDR3 ECC isn't too expensive, so I'd fill it up to 72GB.

 

https://www.ebay.com/itm/72GB-18X-4GB-PC3-10600R-FOR-DELL-POWEREDGE-R710-R910-REG-DDR3-MEMORY/351400240312?hash=item51d11624b8:g:m3cAAOSwfLVZ5TS2

200 USD

 

10GbE NIC: I picked this one, but any should work.

https://www.ebay.com/itm/RT8N1-0RT8N1-DELL-MELLANOX-CONNECTX-2-PCIe-10GBe-ETHERNET-NIC-SERVER-ADAPTER/351416547732?hash=item51d20ef994:g:PogAAOSwoRBaaRIb

 

Here is where there are more options. You can either use the included hardware RAID card or go with ZFS. Both have pros and cons, but I'd personally go ZFS here. For that you need an HBA, so the system sees the drives directly with no RAID layer in between. Get an H200.

 

This will work fine: https://www.ebay.com/itm/Dell-PERC-H200-8-Port-6Gb-s-SAS-SATA-RAID-PCI-E-Controller-047MCV/282841918082?hash=item41dab12a82:g:7TMAAOSwWkFafd-4

40 USD

 

Then for the OS I'd suggest FreeNAS. It uses ZFS on FreeBSD and is basically a nice web interface for all of this. If you like Linux and the command line, go dive in, but FreeNAS is the simpler solution for most users.
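
For a feel of what that HBA + ZFS setup amounts to under the hood (FreeNAS drives the same machinery from its web UI), here is a minimal sketch; the pool name, dataset name, and /dev/da* device names are assumptions for illustration:

```python
# Minimal sketch of pool creation on a FreeBSD/FreeNAS box with an HBA.
# Pool name "tank", dataset "tank/media", and device names are assumptions.
import subprocess

disks = [f"/dev/da{i}" for i in range(8)]  # 8 drives exposed directly by the HBA
subprocess.run(["zpool", "create", "tank", "raidz2", *disks], check=True)
# Large records suit big sequential video files.
subprocess.run(["zfs", "create", "-o", "recordsize=1M", "tank/media"], check=True)
```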

 

You will need a USB stick to install the OS on; get an 8 or 16GB one for about 10 bucks.

 

10 USD

 

Drives: you have a few options here. I'd go bigger so you can expand more easily in the future.

 

These are a great deal and have WD Reds inside; I'd go for them if you can: https://www.bestbuy.com/site/wd-easystore-8tb-external-usb-3-0-hard-drive-black/5792401.p?skuId=5792401

 

Otherwise get IronWolfs or WD Reds. Other drives will work fine, but NAS drives are worth the extra here: in a chassis this full you get vibration that they are built to handle.

 

12x 8TB drives = 1,960 USD, and gives you about 70TiB usable with RAID-Z2.
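
As a quick check on where that "about 70TiB" comes from (the few-percent ZFS overhead figure is an estimate):

```python
# 12 drives in RAID-Z2: two drives' worth of parity, ten of data. Drives are
# sold in decimal TB, but usable space reports in binary TiB.
raw_bytes = (12 - 2) * 8e12  # 10 data drives x 8 TB each
print(raw_bytes / 2**40)     # ~72.8 TiB before ZFS metadata overhead
# Knock off a few percent for ZFS overhead and you land right around 70 TiB.
```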

 


Thanks again!

This is pretty much all I needed to know.
I'll dive in and do some more research and price scouting to see what kinds of deals there are around my place.


Regarding the WD Easystore: what kind of deal is that?! They sell a $250 drive for $160? Seems like a great deal indeed!


Thanks =]

Alean


Yeah, lots of people are buying the Easystores and ripping the drives out. I've done some myself, and they're good drives.

 

 


Hahaha, great tip!
I'll definitely remember that one =]

 


Just a quick question about your current 10Gbit card: is it SFP+ or copper? When you buy the 10Gb card for the NAS, just make sure you get the right SFP+ module.

 

I think I got a Mellanox 10Gb card + DAC (direct attach cable) for $24; an SFP+ module bought separately is going to cost about $20 anyway. Ultimately it might be cheaper to just buy another Mellanox card for your workstation and use a DAC.
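
Addressing a direct link like that is simple; a hypothetical sketch for a Linux-based server side (the interface name and subnet are made up, and FreeNAS would do the equivalent in its web UI):

```python
# Hypothetical addressing for a direct NAS-to-workstation DAC link; no switch
# or gateway is needed between two hosts. Interface and subnet are assumptions.
import subprocess

subprocess.run(["ip", "addr", "add", "10.10.10.1/24", "dev", "enp3s0"], check=True)
subprocess.run(["ip", "link", "set", "enp3s0", "up"], check=True)
# Give the workstation's 10GbE port 10.10.10.2/24 and the two can talk directly.
```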

 

If you plan on expanding your 10Gb network, Ubiquiti makes an SFP+ switch which is nice: https://www.ubnt.com/edgemax/edgeswitch-16-xg/

 

There are a lot of variables that go into disk speed and what you'll actually see. Personally, with 5 disks in RAID-Z2 I am able to get 400 MB/s (ish) (32GB of RAM, SLC SSD SLOG). If you want speeds over 1Gbit, I'd definitely suggest getting as much RAM as your budget and board allow, and possibly setting aside a little for an SSD to use as a SLOG in case you aren't getting the speeds you want.
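
For what it's worth, adding a SLOG to an existing pool later is a one-liner; a hypothetical sketch (pool and device names are made up, and a SLOG only helps synchronous writes such as NFS or iSCSI sync traffic, not ordinary async streaming):

```python
# Attach a separate ZFS intent log (SLOG) device to an existing pool.
# Pool name and device path are assumptions; only sync writes benefit.
import subprocess

subprocess.run(["zpool", "add", "tank", "log", "/dev/ada8"], check=True)
```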

 

One other thing that's common (Linus does it as well) is using an SSD array for active projects and spinners for bulk storage.


Just a heads up, since no one else mentioned this:

@Alean With ZFS (whether it's FreeNAS, FreeBSD, Linux, etc.), the "upgrade path" for growing your storage is, simply put, not user-friendly.

 

You cannot just drop more drives into the system to add to the existing array.

 

To make an array bigger, you have to either:

1. Take one drive out, replace it with a bigger drive, and rebuild; repeat for every single drive. This will take days (maybe longer) with 12 HDDs; see the sketch at the end of this post.

2. Alternatively, if you have enough empty drive bays, you can create an identical second array and "span" the two together.

 

If Array #1 is a RAID-Z2 (RAID6 equivalent) and you expand with a second RAID-Z2 array, you'd essentially have the ZFS equivalent of RAID60.

 

Some (but not all) hardware RAID cards make expanding easier. But you really need to plan an upgrade thoroughly.

 

There is a third option:

3. Dump all your files somewhere else, remove the array (pull all the HDDs), and replace it with a brand-new array of bigger HDDs. But you need enough spare HDD space to hold all the data that lives on the original array.
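
For concreteness, a hypothetical sketch of option 1 (pool and device names are made up; with autoexpand on, the extra space only shows up once the last drive in the vdev has been replaced):

```python
# Option 1, swap-and-resilver: replace each drive with a bigger one, waiting
# for each resilver to finish ("zpool status" shows progress) before the next.
# Pool and device names are assumptions.
import subprocess

subprocess.run(["zpool", "set", "autoexpand=on", "tank"], check=True)
subprocess.run(["zpool", "replace", "tank", "da0", "/dev/da12"], check=True)
# ...repeat for da1, da2, and so on; with 12 drives this is days of resilvering.
```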


4 hours ago, Electronics Wizardy said:

Can I ask what SSD you're using? I've been looking for a good SLOG for my ZFS system.

Honestly, I got lucky; it was a 120GB (128GB?) Intel SSD I had lying around. I don't know the model off the top of my head; I'll have a look when I get home.

 

I have 3 Samsung 850 Evos (500GB) that I've used in various configurations (2 disks striped, 1 as SLOG, all 3 striped, all 3 in RAID-Z1, etc.), and using NFS (100% sync writes) they underperformed my spinner array with the Intel SSD. Just outright terrible performance (50-60 MB/s, IIRC). So now I use iSCSI with the 3 striped, so it at least uses mostly async writes, and with other VMs running I can get 800 MB/s in CrystalMark inside a VM. I still prefer NFS for my datastores, though: I can manipulate the files directly from the NAS and get a clearer picture of free space from the NAS.


15 hours ago, dalekphalm said:

You cannot just drop more drives into the system to add to the existing array.
This is why a lot of people prefer to use mirror vdevs and just keep adding more of them to the pool to get the capacity required; the upshot is that you also get excellent performance. It's the only configuration where you can easily add disks, with the caveat that you add them two at a time.
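
A hypothetical sketch of that growth pattern (pool and device names are made up):

```python
# Mirror-vdev pool that grows two drives at a time. Names are assumptions.
import subprocess

subprocess.run(["zpool", "create", "tank", "mirror", "/dev/da0", "/dev/da1"], check=True)
# Later, when more space is needed, add another mirrored pair; ZFS stripes
# across all the mirrors (RAID10-style), so performance grows with capacity.
subprocess.run(["zpool", "add", "tank", "mirror", "/dev/da2", "/dev/da3"], check=True)
```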


11 hours ago, Mikensan said:

I have 3 Samsung 850 Evos (500GB) that I've used in various configurations (2 disks striped, 1 as SLOG, all 3 striped, all 3 in RAID-Z1, etc.), and using NFS (100% sync writes) they underperformed my spinner array with the Intel SSD. Just outright terrible performance (50-60 MB/s, IIRC).

I have/had the exact same issues with my 840 Pros and 850 Pros in hardware RAID (LSI 9361-8i, FastPath), especially if the array is used as an ESXi datastore; literally the exact same throughput you mention.

 

After much fucking around and great frustration I did find a fix, though I still ended up completely changing how I was using them. Anyway, the fix is to over-provision (OP) them by 20%; then the performance stays fine. If you don't, the array and datastore look fine when you first test them (1GB/s+), but test again about 30 minutes later and you're stuck at 50-60 MB/s. Argh.

 

I did a bunch of looking around, and many people had the same issues with these SSDs and my RAID card; at one point LSI silently removed them from the HCL. They did get added back later, after firmware updates from both LSI (for the RAID card) and Samsung (for the SSDs), but boy, I'm still salty about that. So much so that I'm reluctant to even buy Samsung enterprise SSDs to replace them, even though I know they're the best on the market; I just don't want that risk.

 

I still use the SSDs, but in Storage Spaces (on an HBA), which works fine since TRIM actually works and Windows doesn't try to murder SSDs the way hardware RAID and ESXi do.

 

Just to mention it again on the off chance someone from Samsung sees this: yes, I'm still pissed off. I bought six 512GB Pro versions back when they were basically the best and biggest sane size around, specifically for my home server setup. I know these are not server-rated, but bloody hell, they are Pro SSDs for the home and professional market; they should NOT have issues with RAID or ESXi. Not even the two crappy ADATA SP550 120GB SSDs I ended up buying to figure out WTF was going on had these issues.

 


 


5 hours ago, leadeater said:

This is why a lot of people prefer to use mirror vdevs and just keep adding more of them to the pool to get the capacity required; the upshot is that you also get excellent performance. It's the only configuration where you can easily add disks, with the caveat that you add them two at a time.

I may end up replacing my RAID-Z1 array with mirrored vdevs eventually, but only as my needs outgrow my 15TB of usable space, and if I can scrounge up enough spare HDDs. The good news is that I have 6 free HDD bays, so my current plan is to just add a second 6-drive RAID-Z1 (whether I stripe it with the other RAID-Z1 array is another question).


@leadeater I read somewhere it's mainly because of the Samsung SSDs' shitty 4K speeds, though I'm not sure how that plays into all of this. My datastore within ESXi is set to a 1MB block size, and my ZFS is set to the default (128K or 512K, I believe).

 

Like @dalekphalm, I'm ultimately switching to mirrored vdevs at some point. When I went RAID-Z1 I just didn't know the limitation of parity-based arrays: IOPS are limited to roughly a single disk's ability.


Go with a higher-end Dell R710 LFF with 4, 8, or 10TB drives in it, then 2 SSDs in RAID 1 as a working drive. You can do direct-attach 10Gb networking, i.e. a line straight from the server to the workstation. That would make it a lot cheaper than buying a $1,500 switch plus the 10Gb RJ-45 NICs like you are probably thinking. SFP+ would work great, maybe even InfiniBand (IB, I think it's called).

