
NAS + Render Farm

Hey guys,

My buddies and I all record video in some shape or form, and since we're always a 5-minute drive from one another, we want to build a NAS + Render Farm that we'd all connect to and share between. My ideal configuration goes something like this:

  1. Minimum 16GB of ECC RAM to prevent any issues with reading/writing from memory.
  2. Best GPU setup for video rendering in Sony Vegas / Premiere Pro.
  3. RAID 6 array with 6 WD Black 1TB drives (totaling 4TB of usable storage, with up to 2 drive failures tolerated before data loss).
  4. Hot-swap case for the drives.
  5. Ethernet + WiFi connectivity.

My main concerns are these:

  1. I'd like to set it up to render more than one video at a time, and possibly be able to stream media or stream games to other computers.
  2. Which GPUs to get for good concurrent rendering + streaming. I currently plan on setting up two GTX 980 Tis OR a single GTX Titan X.
  3. Do I go with server-grade CPUs and OS, or desktop-grade CPUs and OS? For the desktop route I was planning on an overclocked i7-4790K with Windows 7 Ultimate; I don't know what I'd do for a server config.
  4. I still need a RAID controller and motherboard that support RAID 6.
  5. Is 16GB of ECC RAM good enough for this machine? Should I go for 32GB?
  6. Are there rack-mounted full cases I should be looking at?


Thank you in advance for any help!


In order to utilise ECC memory, you will need Xeons. The best GPUs you can get for Adobe and the like are the high-end Quadros. If you make it fast enough, you can just queue up videos to render automatically, one right after the other. A 4790K may be too slow. You could try going for a dual-Xeon setup depending on your budget.

Recommendations: Don't stream from this as well. If you go the Xeon route with ECC memory etc., you can go for a server motherboard that has a built-in RAID controller. Also, Windows 7 may create issues; look into using Windows Server 2012 R2 (I'll explain why below). There is no sense in buying a rackmount case; just get a large case like the Enthoo Pro or something. Rackmount only saves space if you have a metric shitload of stuff that needs to be in the same area.

Server 2012 R2: If you go this route, you can set up multiple VMs of Windows 7 or whatever you like and have your video editing program run on those. That way you could actually render multiple things simultaneously and even have each person remotely connect and work on their own project. For that to work, though, you would want at least 32 or 64 GB of RAM.


1 hour ago, HunterAP said:

-snip-

As stated above, you need a Xeon and an Intel C-series chipset motherboard to properly use ECC RAM.

 

You should use WD Reds for the RAID array. Blacks, Blues, and Greens aren't geared for RAID use. The speed difference between the Reds and Blacks isn't that much.

 

Do you really need a rackmount chassis? They are pretty heavy, large, and very loud / expensive. Also, the room the server is going to be in will be really hot, just so you know.

 

You should get 32GB of RAM or more to aid with the video editing.

 

As for server-grade CPUs, if you plan on having this render 24/7, I would go for Xeons (I myself bought an engineering sample from eBay...but it is kind of sketchy, no lie). The 5820K is a pretty potent CPU when overclocked as well, though.

 

For GPUs, Quadros would be ideal...but they're freaking expensive...I would opt for 980 Tis.

 

You should try for Windows Server 2012 R2...you can get it on the Reddit thread for $15. Or if one of you goes to college, you can use Dreamspark to get a free key.

 

I hope you have a fast internet connection between you all; internet speed is going to be the bottleneck for everyone outside the local network (even on my fairly high-end 75Mb/s FIOS plan, that's really only 9-ish MB/s at full speed). On the local network, 1Gb/s Ethernet will be your bottleneck...which taps out at 125MB/s (usually closer to 100MB/s in practice), and that's slower than a normal hard drive (which is why I'm saving up for 10Gb/s Ethernet for my NAS).
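To put rough numbers on that, here's a quick sketch of the unit conversion (the HDD figure is just a ballpark assumption):

```python
# Rough sketch of the bandwidth comparison above; the HDD number is a ballpark assumption.
def mbps_to_MBps(megabits_per_second: float) -> float:
    """Convert a line rate in Mb/s to throughput in MB/s (8 bits per byte)."""
    return megabits_per_second / 8

wan_plan = 75           # Mb/s, the FIOS plan mentioned above
gigabit_lan = 1000      # Mb/s
hdd_sequential = 150    # MB/s, ballpark for a modern 7200 RPM drive (assumption)

print(f"WAN: {mbps_to_MBps(wan_plan):.1f} MB/s")      # ~9.4 MB/s
print(f"LAN: {mbps_to_MBps(gigabit_lan):.0f} MB/s")   # 125 MB/s theoretical, ~100 MB/s real-world
print(f"HDD: {hdd_sequential} MB/s sequential")
```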


Thanks for the info, guys. Let me make sure I got everything right:

  1. Windows Server 2012 R2: I am enrolled in college, so I'll be able to get this for free from Dreamspark.
  2. Motherboard: Any recommendations for running this server config? I'm more savvy in terms of desktop parts, so my knowledge is minimal for server parts.
  3. Xeon Processor(s): Is there one you guys would recommend for the workload? Preferably with as little lag as possible.
  4. I've looked up comparisons for WD drives; I trust WD to be reliable, and the 7,200 RPM WD Blacks perform better than the 5,400 RPM WD Reds of the same size. I'm trying to find sources on whether WD Blacks can support RAID 6, but I don't see why they wouldn't be supported.

Also, I understand that I shouldn't stream from this, but if I'm running enough CPU + GPU power, couldn't I force this config to use less CPU/GPU power while streaming is active and still get the benefits of the render farm? Theoretically, a dual-Xeon and dual GTX 980 Ti setup should be more than powerful enough to render and stream. Couldn't I even set the rendering programs to use one CPU and one GPU while streaming is active?
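If the box does end up sharing rendering and streaming, one way to keep a render from eating every core is to pin its process to a subset of CPUs. This is only a sketch using the third-party psutil package; the process name and core list are made-up examples, and the GPU side would have to be handled separately inside each application.

```python
# Sketch: pin a running render process to the first 6 logical CPUs so the
# remaining cores stay free for streaming. Requires the third-party psutil package.
# "vegas130.exe" and the core list are assumptions for illustration only.
import psutil

def pin_process(name: str, cores: list[int]) -> None:
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == name.lower():
            proc.cpu_affinity(cores)          # restrict scheduling to these cores
            print(f"Pinned PID {proc.pid} to cores {cores}")

pin_process("vegas130.exe", cores=list(range(6)))
```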

Again, thanks for the replies, it's helping me a lot to figure all this out.


1 hour ago, HunterAP said:

-snip-

1. Yes, sign up using your college email address.

2. Too lazy atm someone else can give advice :P

3. Ditto ^

4. WD Blacks are not certified for RAID use and the firmware is not optimized for such configurations. That is not to say it won't work, but use at your own risk. WD Red/Red Pro/Se/Re etc. all have TLER enabled in the drive's firmware; this is the critically important thing that makes a disk RAID-certified or not. Time-Limited Error Recovery (TLER) ensures that a sector read error is aborted and reported back to the RAID card within 7 seconds, to prevent the disk being marked as failed for not responding. A WD Black will keep retrying a sector read error indefinitely, which is good for a single-disk configuration but bad for RAID.

 

Quote

To this end, Western Digital has introduced Time-Limited Error Recovery (TLER) in its enterprise hard drives to improve error handling compatibility between WD hard drives and RAID controllers. All drives include error correction such as the ability to handle read errors.

Source: http://www.wdc.com/wdproducts/library/other/2579-001098.pdf
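A toy illustration of why that 7-second limit matters; the controller timeout and desktop-drive retry time below are assumptions, not vendor specs:

```python
# Toy model: a RAID controller drops a drive that doesn't respond within its timeout.
CONTROLLER_TIMEOUT_S = 8    # how long the controller waits before failing the drive (assumption)
TLER_LIMIT_S = 7            # WD RAID-class drives abort error recovery after ~7 seconds
DESKTOP_RETRY_S = 120       # a desktop drive may keep retrying far longer (assumption)

def controller_reaction(recovery_time_s: float) -> str:
    if recovery_time_s <= CONTROLLER_TIMEOUT_S:
        return "sector reported bad in time -> rebuilt from parity, drive stays online"
    return "no response within timeout -> drive marked failed, array degraded"

print("TLER drive:   ", controller_reaction(TLER_LIMIT_S))
print("Desktop drive:", controller_reaction(DESKTOP_RETRY_S))
```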


So I've changed some parts around; this is the semi-final list. Everything is in USD, and the prices are from PCPartPicker except for the CPUs, which are from PassMark's listing.

CPU: Two Intel Xeon X5650 @ 2.67 GHz, $105 per CPU ($210 total)

CPU Cooler: Two Corsair H100i GTX 70.7 CFM Liquid CPU Coolers, $109.99 per Cooler ($219.98 total)

Storage: 6 x 1TB WD Reds in RAID 6 config, $62.89 per drive ($377.34 total).

GPU: 2 x EVGA GTX 980 Ti Hybrid with water cooling, $719.99 per card ($1,439.98 total).

Case: Phanteks Enthoo Pro ATX full tower, $89.88

 

I don't know what motherboard would work best with this, and I can't choose a specific 64GB ECC RAM kit until I know what DIMM type the board takes. I'll find a PSU once I figure out the required wattage, but so far it's already hit roughly 810W because of the dual-GPU and dual-CPU setup, so I'm shooting for a 1000W to 1100W PSU.
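For what it's worth, a rough power-budget sketch with assumed per-part figures (these TDPs are estimates, not measurements) lands in the same range:

```python
# Rough PSU sizing sketch; per-part consumption figures are assumptions, not measurements.
parts_watts = {
    "2x Xeon X5650 (95 W each)": 2 * 95,
    "2x GTX 980 Ti (250 W each)": 2 * 250,
    "6x WD Red 1TB (~6 W each)": 6 * 6,
    "Motherboard, RAM, fans, pumps (estimate)": 80,
}

load = sum(parts_watts.values())
headroom = 1.3                      # ~30% margin so the PSU runs in its efficient band
print(f"Estimated load: {load} W")                         # ~806 W
print(f"Suggested PSU:  {load * headroom:.0f} W or more")  # ~1050 W
```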

 

So does anyone mind recommending a mobo that supports 64GB of ECC RAM, two GPU slots, two CPU sockets, and RAID 6? I personally can't seem to find one through my searches.

Also, if a mobo says it supports RAID 5, does that mean it supports RAID 6 as well? RAID 6 is basically RAID 5 with a second set of parity, so I'm assuming RAID 5 support == RAID 6 support.


You're going to be hard-pressed to find socket 1366 motherboards. I believe a Haswell i7 would smoke those X5650s, but you aren't going to get an i7 for $105 apiece, at all. ECC RAM is especially recommended in systems that use large amounts of RAM... so that would rule out my own suggestion.

 

The E3-1241v3 is a very capable processor... http://cpuboss.com/cpus/Intel-Xeon-E3-1241-v3-vs-Intel-Core-i7-4790K

(Granted this isn't compared to the 4790k overclocked)

 

I also like Supermicro motherboards; take a look at those.


So I've updated my parts. I'm going to end up using one GTX 980 Ti until I feel it's necessary to add a second, but here's the part list:

 

http://pcpartpicker.com/p/zNLwGX
Price breakdown by merchant: http://pcpartpicker.com/p/zNLwGX/by_merchant/

CPU: Intel Xeon E5-2620 V3 2.4GHz 6-Core OEM/Tray Processor  ($389.99 @ SuperBiiz)
CPU: Intel Xeon E5-2620 V3 2.4GHz 6-Core OEM/Tray Processor  ($389.99 @ SuperBiiz)
CPU Cooler: Noctua NH-L9x65 33.8 CFM CPU Cooler  ($49.96 @ Amazon)
CPU Cooler: Noctua NH-L9x65 33.8 CFM CPU Cooler  ($49.96 @ Amazon)
Motherboard: Supermicro MBD-X10DAL-I-O ATX Dual-CPU LGA2011-3 Motherboard  ($301.98 @ Newegg)
Memory: Kingston 64GB (4 x 16GB) Registered DDR4-2133 Memory  ($359.99 @ SuperBiiz)
Storage: Western Digital Red 1TB 3.5" 5400RPM Internal Hard Drive  ($62.89 @ OutletPC)
Storage: Western Digital Red 1TB 3.5" 5400RPM Internal Hard Drive  ($62.89 @ OutletPC)
Storage: Western Digital Red 1TB 3.5" 5400RPM Internal Hard Drive  ($62.89 @ OutletPC)
Storage: Western Digital Red 1TB 3.5" 5400RPM Internal Hard Drive  ($62.89 @ OutletPC)
Storage: Western Digital Red 1TB 3.5" 5400RPM Internal Hard Drive  ($62.89 @ OutletPC)
Storage: Western Digital Red 1TB 3.5" 5400RPM Internal Hard Drive  ($62.89 @ OutletPC)
Video Card: MSI GeForce GTX 980 Ti 6GB Video Card  ($619.99 @ Micro Center)
Power Supply: Antec High Current Pro Platinum 1000W 80+ Platinum Certified Fully-Modular ATX Power Supply  ($211.99 @ SuperBiiz)
Total: $2751.19
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-03-18 00:58 EDT-0400

 

I have yet to pick a case, but I'd prefer something small, light, and inexpensive. I don't need anything flashy or necessarily good-looking, so long as it's functional. I was recommended this Rosewill server case, but I'm not sure if it's compatible with my setup (including possibly adding a second GPU in the future).

 

I'm also planning on using this Intel RAID card which supports RAID 6.

Any last things I should take care of?


Whew, blew my panties off. Very nice build. If you're willing to sacrifice 1TB for speed, you could just do a RAID 10, which offers equal/better fault tolerance but faster speeds and more IOPS. That way you're using the onboard controller. I'm not well versed in RAID cards, but I am an Intel fanboy, so what you're looking at seems OK to me. LSI is also a very big RAID card manufacturer.

 

Alternatively, keep the RAID 6 and throw in an SSD for "active" jobs/renders, with the RAID 6 array for archive.


My gripe with RAID 6 vs RAID 10 is that RAID 6 has more usable storage space, while RAID 10 can theoretically survive n/2 drive failures so long as no two failed drives are in the same mirror pair, but only gives you half the combined capacity of the drives.

 

Which is better for being both a NAS and a Render Farm? I figure RAID 6 is good as a NAS, but RAID 10 is good for rendering.

If I'm going for 8 drives of 1TB each, would I be better off using RAID 60?


1 hour ago, HunterAP said:

-snip-

With your configuration you would only lose 1TB by switching to 10 (6 x 1TB drives: RAID 6 = 4TB, RAID 10 = 3TB). Honestly, a pair of mirrored SSDs for active projects makes life nice.

With 8 drives... 6TB in RAID 6 versus 4TB in RAID 10, so there's 2TB lost; not terrible. With RAID 60 you'll still be limited by the slowest drive's IOPS.
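The usable-capacity arithmetic being compared here, as a quick idealized sketch (real arrays lose a bit more to formatting and filesystem overhead):

```python
# Idealized usable capacity for equal-sized drives (ignores filesystem overhead).
def raid6(n_drives: int, tb_each: float) -> float:
    return (n_drives - 2) * tb_each          # two drives' worth of parity

def raid10(n_drives: int, tb_each: float) -> float:
    return (n_drives // 2) * tb_each         # half the drives are mirror copies

for n in (6, 8):
    print(f"{n} x 1TB -> RAID 6: {raid6(n, 1):.0f} TB, RAID 10: {raid10(n, 1):.0f} TB")
# 6 x 1TB -> RAID 6: 4 TB, RAID 10: 3 TB
# 8 x 1TB -> RAID 6: 6 TB, RAID 10: 4 TB
```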


10 hours ago, HunterAP said:

I'm also planning on using this Intel RAID card which supports RAID 6.

Any last things I should take care of?

You need to buy the BBU with that RAID card, or else you won't safely be able to use write-back cache, which is off by default without the battery, so write performance in RAID 6 will be very bad.

 

If that card doesn't support a BBU then you should upgrade to the model above it, the OEM LSI 9361-8i. FYI, almost every RAID card sold by server hardware vendors is an LSI, with the exception of HP, who now use Adaptec.

 

Edit: This includes that Intel RAID card, it is actually an LSI.


15 minutes ago, Mikensan said:

-snip-

@HunterAP

 

Also, don't forget you cannot expand a RAID 10 array with more disks once it is created; you can with RAID 6. The only two ways to increase space when using RAID 10 are to create a new array, or to replace EVERY disk in the array with a larger one, after which you can expand.


38 minutes ago, leadeater said:

-snip-

Only some RAID cards support RAID 6 expansion, and it's a slow process, since all the parity data needs to be re-created. It's the equivalent of rebuilding an array.

 

But you're right, it can be done.


1 hour ago, leadeater said:

-snip-

Odd, with FreeNAS it's the opposite: you can expand a RAID 10 but not a RAIDZ2 (unless you want to just add more vdevs to the pool). I just assumed it could be done on a RAID controller as well (I've never used RAID 10 on a controller).


1 minute ago, Mikensan said:

-snip-

I didn't think you could expand a RAID 10 on ZFS either? I could be mistaken though.

 

In ZFS, generally, if you want to expand your pool, you're advised to create a whole new pool, or to create a new array (you have a RAIDZ2, create another RAIDZ2) and add both vdevs to the pool (to create the ZFS equivalent of RAID 60).

 

I don't have a ton of experience with any of this in practice though, so it might all be hypothetical.


12 hours ago, HunterAP said:

-snip-

I would heavily recommend an SSD for the OS drive; it'll speed things along a lot. I personally run a SanDisk Extreme Pro in my NAS, but the Samsung 850 Evo is plenty good. I would recommend 240GB or more.

 

RAID 10 vs RAID 6 is something you're probably going to have to debate...RAID 10 is faster, but RAID 6 is safer. You can lose two drives in either, but with RAID 10 it has to be the right two drives that die, otherwise the array goes down.

 

You might look at the Norco 20-bay 4U chassis; it's pretty nice for $360-ish if I remember right. You'll need an expander down the road to connect all of the drive bays though.

 

Yeah, you can't expand RAID 10 after it's built. You have to have all of the drives upfront. Also, be sure to get the battery backup for the RAID card as mentioned above.


6 minutes ago, dalekphalm said:

-snip-

When extending a volume, ZFS supports the addition of virtual devices (vdevs) to an existing volume (ZFS pool). A vdev can be a single disk, a stripe, a mirror, a RAIDZ1, RAIDZ2, or a RAIDZ3. Once a vdev is created, you can not add more drives to that vdev; however, you can stripe a new vdev (and its disks) with the same type of existing vdev in order to increase the overall size of the ZFS pool. In other words, when you extend a ZFS volume, you are really striping similar vdevs. Here are some examples:
to extend a ZFS stripe, add one or more disks. Since there is no redundancy, you do not have to add the same amount of disks as the existing stripe.
to extend a ZFS mirror, add the same number of drives. The resulting striped mirror is a RAID 10.
to extend a three drive RAIDZ1, add three additional drives. The result is a RAIDZ+0, similar to RAID 50 on a hardware controller.
to extend a RAIDZ2 requires a minimum of four additional drives. The result is a RAIDZ2+0, similar to RAID 60 on a hardware controller.
In the “Volume to extend” section, select the existing volume that you wish to stripe with. This will grey out the Volume name field and select ZFS as the filesystem type. Highlight the required number of additional disk(s), select the same type of RAID used on the existing volume, and click Add Volume.

You just keep throwing mirrored vdevs at a pool and ZFS will stripe them, increasing performance and IOPS.
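As a quick idealized sketch of what that looks like (ZFS overhead ignored): each two-way mirror vdev adds one disk's worth of capacity, but the guaranteed fault tolerance of the whole pool stays at one disk.

```python
# Idealized model of a ZFS pool built from striped 2-way mirror vdevs.
def mirror_pool(n_vdevs: int, tb_per_disk: float) -> dict:
    return {
        "disks": n_vdevs * 2,
        "usable_tb": n_vdevs * tb_per_disk,  # each mirror contributes one disk of capacity
        "guaranteed_tolerance": 1,           # worst case: losing both disks of one mirror kills the pool
        "best_case_tolerance": n_vdevs,      # one disk from every mirror can fail
    }

for vdevs in (2, 3, 4):
    print(f"{vdevs} mirror vdevs -> {mirror_pool(vdevs, 1)}")
```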

 

Guess if I had thought about it I would've quickly realized this would not apply to a RAID card.


1 hour ago, Mikensan said:

-snip-

Yeah that's how I thought the system works.

 

But you don't want to just keep throwing mirrored vdevs together. A few won't really present a problem, but sooner or later your risk factor gets huge: if you are striping, say, 4x RAIDZ2 vdevs, then yes, you could in theory lose up to 8 drives total, but which specific drives fail becomes increasingly important.

I'd say striping 2 or 3 times would be the max I would ever do. After that, I'd plan on recreating the entire array from scratch with new HDDs.


With RAID 10, I wouldn't need to purchase a RAID card (saving me at least $400), and I get better performance and less URE risk. With RAID 6 I get a better chance of saving my data, since any two drives can fail before everything goes down, but the URE risk is higher because a failed drive means reading the whole array to rebuild it.
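As a back-of-the-envelope sketch of that URE concern, using the commonly quoted consumer/NAS spec of one unrecoverable error per 10^14 bits (real-world rates vary a lot, so treat this as illustrative only):

```python
# Back-of-the-envelope expected URE count when reading an entire array during a rebuild.
URE_RATE_PER_BIT = 1e-14      # commonly quoted consumer/NAS drive spec (assumption)

def expected_ures(tb_read: float) -> float:
    bits_read = tb_read * 1e12 * 8
    return bits_read * URE_RATE_PER_BIT

for tb in (3, 4, 6):
    print(f"Reading {tb} TB -> ~{expected_ures(tb):.2f} expected UREs")
# e.g. reading 4 TB -> ~0.32 expected UREs
```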

I'm also going to run Windows Server 2012 R2, and if I remember right FreeNAS and unRAID are OSes themselves, so I won't need them.

Also, if I plan on expanding the storage, I'd probably back up / clone the data to another RAID array, then replace the drives and wipe the old ones, so expansion won't be an issue.

I'm thinking of going with RAID 6 even though it's more expensive, since it'll be more reliable and I'm not looking for super high performance given the build will only stream or render videos overnight. I'll stick with the 4U 8-bay Rosewill case I posted earlier.

The only questions I have are how exactly I can stream from the server to another PC, and then record that on the PC. My current plan is to stream games through Steam's In-Home Streaming and record the video with Nvidia ShadowPlay or OBS. This should work properly, right? I can't think of any issues aside from network bottlenecking, which shouldn't be too bad if the recording is being saved to the PC's local drive, then transferred to the server for rendering.

Speaking of rendering, what is more optimal: opening the file in Premiere Pro / Sony Vegas (which would be installed on the server) and editing it from the local drive, or editing it directly from the server?

Lastly, are there efficient ways to make Premiere Pro or Sony Vegas 13 automatically render videos placed in a specific folder on the server and output them to another folder on the server?


17 hours ago, HunterAP said:

-snip-

You could still get a RAID card for RAID 10 if you wanted. However, depending on the chassis, if it uses a hot-swap SAS backplane for the drives, you'll need a RAID card or SAS HBA to interface with it (unless your motherboard has onboard SAS, which is kind of hard to find).

 

Yeah, if you go RAID 6, then you will need to get a RAID card.

 

As for the streaming stuff, sorry, I don't know much about it...I don't have much video editing or streaming experience. I think you can set Adobe Media Encoder to watch folders...or use that program Linus uses that encodes from a watch folder...I forgot what it was called though.
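If a built-in watch folder doesn't cut it, a DIY version is basically just a polling loop; here is a minimal sketch. The folder paths and the render command are hypothetical placeholders, not a real encoder CLI.

```python
# Minimal watch-folder sketch: poll an input folder and hand new files to a
# render command. Paths and the command line are hypothetical placeholders.
import subprocess
import time
from pathlib import Path

WATCH_DIR = Path(r"\\server\renders\in")    # placeholder UNC path
OUT_DIR = Path(r"\\server\renders\out")     # placeholder UNC path
RENDER_CMD = ["render.exe", "--in", "{src}", "--out", "{dst}"]  # hypothetical CLI

seen: set[Path] = set()
while True:
    for src in WATCH_DIR.glob("*.mp4"):
        if src not in seen:
            seen.add(src)
            dst = OUT_DIR / src.name
            cmd = [arg.format(src=src, dst=dst) for arg in RENDER_CMD]
            subprocess.run(cmd, check=True)   # blocks until this render finishes
    time.sleep(30)                            # poll every 30 seconds
```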


Since you will have Server 2012 R2 you could also use Storage Spaces instead of hardware RAID. A mirror pool is really like a RAID 10: if you create the virtual disk with 6 physical disks available, it will by default create a 2-copy 3-column virtual disk, meaning you are striping between 3 mirror sets. The downside is that you always have to have 6 disks. If on the other hand you manually make a 2-copy 2-column virtual disk (via PowerShell) then you have some leeway. You could, for example, replace a failed 1TB drive with a 2TB drive, and then up to two other drives could fail and you would be able to repair the virtual disk with only 5 drives. It's a lot of the same pros as ZFS, especially if you use ReFS, as it directly communicates with Storage Spaces to do the same type of on-the-fly analysis and repair that ZFS does. ZFS is way more capable but obviously doesn't run natively on Windows.

 

BTW, don't use parity virtual disks in Storage Spaces without two SSDs devoted to journal usage, which means they are used as a write-back cache. You have to have two because the journal must be a mirror.


1 hour ago, brwainer said:

-snip-

From what I have seen and used of Server 2016 Storage Spaces and the enhancements in ReFS, there are some things it does better than ZFS. I also think expanding and maintaining physical disks in Storage Spaces is easier now that the re-balancing feature has been added. I am a biased Windows person, so this is just personal opinion.

 

I also like the fact that in Storage Spaces, when using auto-tiering, the SSD space actually contributes to usable storage space; it's not just a cache.


So if I'm able to get Windows Server 2012 R2 through Dreamspark, can I VM it through unRAID? What advantage would I have if I did that rather than just run Windows Server normally?


15 minutes ago, HunterAP said:

So if I'm able to get Windows Server 2012 R2 through Dreamspark, can I VM it through unRAID? What advantage would I have if I did that rather than just run Windows Server normally?

DreamSpark normally lets you get both a Standard license key and a Datacenter license key. The Standard license is good for either two VMs on a single VM host (must be the same host) or a physical host plus one VM on that host. To use the two copies as VMs you have to use Hyper-V Server (NOT Windows Server 2012 R2 with the Hyper-V role, because that eats one of your licensed installs) or a third-party hypervisor. With the Datacenter license you get unlimited VMs on a single physical host, plus the ability to use Automatic Virtual Machine Activation.

 

If you are using the Standard License key, it makes no difference what hypervisor you are using. With the Datacenter license key, you really should be using Server 2012 R2 Datacenter Hyper-V as your hypervisor so that you can use AVMA.

