10GbE Video Editing NAS to 1GbE Workstations

I work for a small video production house and we're trying to build a NAS for our workflow. We currently have 4 Windows 10 workstations and an improvised "NAS" running Windows 7 file shares over a 1 gigabit network. Some lighter projects can be edited off the Windows 7 file server, but our main projects require playback with no dropped frames and can only be edited off our local hard drives. We plan to build a FreeNAS server that all 4 workstations can edit from, with little to no dropped frames in Premiere, for the least amount of money possible.

 

I saw Linus feature the Asus XG-U2008, a $249 switch with eight gigabit ports and two 10GbE ports. If we were to build a 10GbE-capable NAS, connect it to one of the XG-U2008's 10GbE ports, and then connect our workstations to the gigabit ports, could we edit 4K footage off the FreeNAS server? Or do we need to buy a 10GbE switch and 10GbE NICs for each of the workstations?


2 minutes ago, aeronkunteL said:

If we were to build a 10GbE-capable NAS, connect it to one of the XG-U2008's 10GbE ports, and then connect our workstations to the gigabit ports, could we edit 4K footage off the FreeNAS server? Or do we need to buy a 10GbE switch and 10GbE NICs for each of the workstations?

You all need 10Gb NICs, and especially a 10Gb switch, to get the full potential of 10Gb; otherwise you will essentially still have a 1Gb network. It's like running a big pipe and then connecting it to each user with smaller pipes when you could use the same pipe the whole way (just like Australia's NBN FTTN).

 

I would suggest the cheapest option is to stick with 1GbE for most of the users and maybe have a dedicated SFP+ connection to the FreeNAS server. SFP+ gear is relatively cheap second-hand on eBay compared to going full 10 gigabit across the network. I wouldn't suggest going with the Asus; you are much better off getting a legit enterprise 10 gigabit switch. This option also makes it easier to integrate new systems, as you might run out of expansion slots if you go with the first method.


You need to calculate your bitrate based on the video formats you use, including RAW. From a quick Google search, 4K RAW is around 1697Mbps, while a gigabit link delivers roughly 940Mbps of real throughput, so if RAW is a requirement you will need 10Gb on all workstations.
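As a rough sanity check (the numbers are illustrative; substitute the real bitrates of your codecs), compare aggregate demand against usable link capacity:

# 4 editors each pulling one 1697Mbps 4K RAW stream:
echo $(( 4 * 1697 ))   # 6788 Mbps arriving at the server's uplink
# A 1GbE link carries roughly 940Mbps after TCP/SMB overhead, and 10GbE
# roughly 9400Mbps, so even a single RAW stream overruns gigabit.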

 

Also make sure the NAS you are planning to build can actually handle the load of 4 workstations; the demand you'll be putting on it might be higher than you expect, and FreeNAS/ZFS RAID levels plus vdev configuration matter.

 

Four workstations, even doing fairly sequential reads or writes, end up looking much more like random I/O as far as the NAS is concerned.

 

It's safer to edit locally and then copy to the NAS, even if all workstations have 10Gb.

 

First you should find out what the limiting factor is in your current setup. Isolate the testing to a single workstation and do some editing to check whether there is enough performance for the required tasks; if there isn't, use Resource Monitor/perfmon to figure out where the issue is. If it's a disk performance limitation, put an SSD in the system if you can and run the testing again. If it still doesn't work, it's not a CPU limit on the NAS, and the NICs are hitting 100% utilization, then there is your answer: you will need 10Gb NICs.
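If you'd rather capture those counters from the command line than click through perfmon, typeperf can log them to a CSV. A minimal sketch (the counter paths are standard; the output folder is an assumption and must already exist):

rem sample network, disk and CPU once a second for 10 minutes
typeperf "\Network Interface(*)\Bytes Total/sec" ^
         "\PhysicalDisk(_Total)\Disk Bytes/sec" ^
         "\Processor(_Total)\% Processor Time" ^
         -si 1 -sc 600 -f CSV -o C:\perflogs\ws1.csv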

 

Before going all in on building that NAS, if the above is the case, get two FreeNAS-compatible 10Gb NICs cheap off somewhere like eBay, put them in the current NAS and one workstation, directly connect the two systems, set static IPs, and run all the tests again.
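For the direct-connect test, it would look something like this (the adapter name and addresses are placeholders, and this assumes the classic iperf2 build):

rem on the workstation, give the 10Gb adapter a static address
netsh interface ip set address "10GbE" static 10.0.0.2 255.255.255.0
rem with iperf running in server mode on the NAS (iperf -s),
rem measure raw throughput using 4 parallel streams for 30 seconds
iperf -c 10.0.0.1 -P 4 -t 30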


11 minutes ago, leadeater said:

You need to calculate your bitrate based on the video formats you use, including RAW. From a quick Google search, 4K RAW is around 1697Mbps, while a gigabit link delivers roughly 940Mbps of real throughput, so if RAW is a requirement you will need 10Gb on all workstations.

Thanks for the reply! 

 

We mainly produce videos for YouTube and Facebook, so I think our bitrate requirement is not that high. Most of our footage is MP4 from Sony A7S IIs, and I don't think we will have to edit ProRes regularly in the near future.

 

I've already parted out a build similar to the 45Drives Storinator AV15. Getting some of the parts is a challenge because we are based in the Philippines and server-grade hardware is hard to find here. We have friends/relatives in the US who can ship the parts to us, but we need to finalize what to buy first.

 

I have already done some tests with a FreeNAS server on our current gigabit network. I had 3 workstations rendering videos while we tried editing a video on the 4th. The 4th workstation could edit fine, but only some stations rendered as fast as local. Maybe it was the settings in FreeNAS or the workstation NICs, I'm not sure. I am sure that the gigabit connection from the server to the switch is a bottleneck, which is why I need to upgrade it to 10Gb.

 

Do you think we should try an Intel X540 in my test server with an Asus XG-U2008 switch, so that the only bottlenecks are the gigabit connections to the workstations, or should we go all out and buy a 10Gb switch?

 

 


14 minutes ago, aeronkunteL said:

I have already done some tests with a FreeNAS server on our current gigabit network. I had 3 workstations rendering videos while we tried editing a video on the 4th. The 4th workstation could edit fine, but only some stations rendered as fast as local. Maybe it was the settings in FreeNAS or the workstation NICs, I'm not sure. I am sure that the gigabit connection from the server to the switch is a bottleneck, which is why I need to upgrade it to 10Gb.

 

Do you think we should try an Intel X540 in my test server with an Asus XG-U2008 switch, so that the only bottlenecks are the gigabit connections to the workstations, or should we go all out and buy a 10Gb switch?

Is it possible to run that testing again exactly as before? Set up monitoring on all the workstations using perfmon and configure it to log to a file so you can review it more easily and over a longer time span. You should also be able to do the same on the FreeNAS server, but I'm not exactly sure since I don't use FreeNAS much; worst case you'll have to monitor that one in real time.

 

If your testing shows that the server doesn't hit the limit of the 1Gbps connection, you might just need a bit more RAM in the server, an SSD for ZIL/L2ARC, or more HDDs to get higher performance that way.
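For reference, attaching an SSD as SLOG (ZIL) or L2ARC is one command per device; the pool and device names below are made up, and keep in mind an L2ARC mainly helps reads that repeat:

# hypothetical pool 'vol1' with two spare SSDs ada4 and ada5
zpool add vol1 log ada4     # SLOG: accelerates synchronous writes
zpool add vol1 cache ada5   # L2ARC: extends the read cache onto the SSD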

 

Any chance this switch is in your price range?

https://www.ubnt.com/edgemax/edgeswitch-16-xg/

 

You'll be able to use a slightly cheaper 10Gb NIC in the FreeNAS server than that X540 (which is a great NIC by the way; I have two of them, they're just a little pricey). The workstations can use the copper ports at 1Gb, and later, if you wish, you can upgrade them to 10Gb without changing the switch.

 

Edit:

You can also sum the network utilization from the 4 perfmon captures to estimate what should be hitting the FreeNAS server.
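If those captures are typeperf CSVs, you can average each one from any box with awk handy (Cygwin, WSL, or the FreeNAS shell) and then add the four results together. The column index is an assumption, so check your file's header first:

# column 2 is assumed to be Bytes Total/sec for the active NIC
for f in ws1.csv ws2.csv ws3.csv ws4.csv; do
  awk -F',' 'NR > 1 { gsub(/"/, "", $2); sum += $2; n++ }
             END { if (n) print FILENAME ": " sum / n " bytes/sec avg" }' "$f"
done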


8 minutes ago, leadeater said:

Is it possible to run that testing again exactly as before? Set up monitoring on all the workstations using perfmon and configure it to log to a file so you can review it more easily and over a longer time span. You should also be able to do the same on the FreeNAS server, but I'm not exactly sure since I don't use FreeNAS much; worst case you'll have to monitor that one in real time.

 

If your testing shows that the server doesn't hit the limit of the 1Gbps connection, you might just need a bit more RAM in the server, an SSD for ZIL/L2ARC, or more HDDs to get higher performance that way.

 

Any chance this switch is in your price range?

https://www.ubnt.com/edgemax/edgeswitch-16-xg/

 

Thanks again for the reply.

 

These are the specs of the FreeNAS server I've been using for testing (some of the parts are from my gaming rig, haha):

  • i5-4670K
  • Gigabyte Z87MX-D3H
  • 12GB RAM
  • 3 x 1TB Seagate 7200rpm in RAIDZ1

I'll try to do the tests again while monitoring the FreeNAS server tomorrow.

 

Here's what I was hoping to build:

  • Pentium G4560
  • Supermicro X11SSH-CTF-O
  • 32GB ECC RAM
  • 8 x 3TB Seagate IronWolf in RAIDZ1
  • Seasonic 620W

We can buy everything but the motherboard and RAM locally, and the whole build would cost about $1800. There's no budget to speak of, but the goal is to improve our workflow for the least amount of money possible.

 

That Ubiquiti switch would have been perfect, but I would prefer all-copper links, as SFP+ cables are hard to find here.


The RAID/disks have their own limitations, which will quickly be hit with that many workstations. To run the test leadeater is suggesting, reply back with the following from your test box:

 

Go ahead and start rendering on all four workstations. To get this information, go to Task Manager > Performance (Windows 10 organizes this data very nicely).

Workstation 1-4:

Network throughput:

Disk throughput:

 

See if you can find some Dell R610s or R710s; they usually come with ECC memory and start around US$200. They're usually sold with 32GB, so it could be cheaper to get memory that way.


So I did some tests today. I downloaded jPerf onto a 5th computer on the network to monitor the FreeNAS server's bandwidth. As we had some deadlines today, I could only use 2 of the 4 workstations.

 

First, here is when everything is idle:

[jPerf screenshot: idle]

 

Then I tried to copy a 4GB file from the server to my workstation:

[jPerf screenshot: server-to-client copy]

 

Here is when I copy the same file from the workstation to the server:

[jPerf screenshot: client-to-server copy]

 

Here is when I export a 1-minute sequence with 20 4K streams from the server to an SSD on my workstation:

[jPerf screenshot: export]

 

And here are the jPerf graphs when 1 machine is exporting vs 2 machines exporting simultaneously:

[jPerf screenshot: 1 export vs 2 simultaneous exports]

 

 

 

From what I can tell, exporting can't saturate the network, because it never reaches the receive speeds of a plain old file transfer.

What do you think causes this? My earlier tests show that exporting locally is only a few seconds ahead of exporting from the server, yet exports take minutes longer when other workstations are exporting too. I assumed the bottleneck was the server's gigabit link to the switch, but this test shows Adobe Media Encoder doesn't even use 100% of the network.

 

When we get a chance, we'll try a real-world test: all workstations editing off the server at the same time.

 


38 minutes ago, aeronkunteL said:

From what I can tell, exporting can't saturate the network, because it never reaches the receive speeds of a plain old file transfer.

What do you think causes this? My earlier tests show that exporting locally is only a few seconds ahead of exporting from the server, yet exports take minutes longer when other workstations are exporting too. I assumed the bottleneck was the server's gigabit link to the switch, but this test shows Adobe Media Encoder doesn't even use 100% of the network.

I suspected that network bandwidth might not be the limiting factor. It's probably the disk performance of the server that can't handle the load of the workstations.

 

There is one thing I would do to optimize the networking side, and that is to enable jumbo frames on the server, switch, and workstations. That will reduce the TCP overhead on I/O traffic, as fewer packets are required to move the same data.
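Roughly what that looks like in practice; the interface name is an example, the switch side varies by vendor, and the Windows registry keyword can differ per NIC driver.

On the FreeNAS box (temporary; to make it persistent, put mtu 9000 in the interface options in the GUI):

ifconfig igb0 mtu 9000

On each Windows workstation, from an elevated PowerShell prompt:

Set-NetAdapterAdvancedProperty -Name "Ethernet" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

Then verify end to end with a don't-fragment ping (8972 bytes of payload + 28 bytes of headers = 9000):

ping -f -l 8972 <your server's IP>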

 

On the server end, the only way to really prove that disk performance is the issue is to temporarily put an SSD in it and run all your tests against that. If you can't do that, you'll need to monitor the disk performance and load during your testing.

 

Edit:

The network load is still rather high, though: 172Mbps × 4 = 688Mbps, so investing in 10Gb for the new server is the right thing to do.


1 hour ago, leadeater said:

I suspected that network bandwidth might not be the limiting factor. It's probably the disk performance of the server that can't handle the load of the workstations.

 

There is one thing I would do to optimize the networking side, and that is to enable jumbo frames on the server, switch, and workstations. That will reduce the TCP overhead on I/O traffic, as fewer packets are required to move the same data.

 

On the server end, the only way to really prove that disk performance is the issue is to temporarily put an SSD in it and run all your tests against that. If you can't do that, you'll need to monitor the disk performance and load during your testing.

I've already enabled jumbo frames on the workstations. I'll look into how to do that on the server.

 

I don't think we have any SSDs we can use to test. Do you think adding more hard drives would increase performance? I only have 3 in RAIDZ1 at the moment.

 

I'm also continuing my own work off the NAS as a real-world test, and I haven't had any problems. I'll have the other editors do the same as soon as possible, just to see more realistic numbers from the server.

 

Thanks for all your replies. You've been very helpful.


17 minutes ago, aeronkunteL said:

I don't think we have any SSDs we can use to test. Do you think adding more hard drives would increase performance? I only have 3 in RAIDZ1 at the moment.

Adding more HDDs can increase performance, but there are other things you can do first. Have a look at the following link on ZFS performance testing; a notable thing to try is enabling LZ4 compression (commands below the quote). It's discussed quite far down the page, and note that it will increase CPU load.

 

https://calomel.org/zfs_raid_speed_capacity.html

 

Quote

off   1x 2TB    a single drive  1.8 terabytes ( w=131MB/s , rw= 66MB/s , r= 150MB/s )
lzjb  1x 2TB    a single drive  1.8 terabytes ( w=445MB/s , rw=344MB/s , r=1174MB/s ) 
lz4   1x 2TB    a single drive  1.8 terabytes ( w=471MB/s , rw=351MB/s , r=1542MB/s ) 

off   1x 256GB  a single drive  232 gigabytes ( w=441MB/s , rw=224MB/s , r= 506MB/s ) SSD
lzjb  1x 256GB  a single drive  232 gigabytes ( w=510MB/s , rw=425MB/s , r=1290MB/s ) SSD

off   2x 2TB    raid1  mirror   1.8 terabytes ( w=126MB/s , rw= 79MB/s , r= 216MB/s )
lzjb  2x 2TB    raid1  mirror   1.8 terabytes ( w=461MB/s , rw=386MB/s , r=1243MB/s )
lz4   2x 2TB    raid1  mirror   1.8 terabytes ( w=398MB/s , rw=354MB/s , r=1537MB/s )

off   3x 2TB    raid5, raidz1   3.6 terabytes ( w=279MB/s , rw=131MB/s , r= 281MB/s )
lzjb  3x 2TB    raid5, raidz1   3.6 terabytes ( w=479MB/s , rw=366MB/s , r=1243MB/s )
lz4   3x 2TB    raid5, raidz1   3.6 terabytes ( w=517MB/s , rw=453MB/s , r=1587MB/s )
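
If you want to try it, it's a single property on the dataset; the pool/dataset name below is a placeholder:

zfs set compression=lz4 tank/video
# after some new data has been written, see what it's achieving:
zfs get compression,compressratio tank/video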

 

 


3 weeks later...
On 7/19/2017 at 10:27 PM, Mikensan said:

Quick guide on how to use dd to test your array; you'll want to disable compression as noted in the guide (just for testing). It won't give you IOPS, naturally, but it will at least tell you throughput.

 

https://forums.freenas.org/index.php?threads/notes-on-performance-benchmarks-and-cache.981/

I ran the commands with compression enabled and here's what I got:

 

[root@freenas ~]# dd if=/dev/zero of=tmp.dat bs=2048k count=50k                 
51200+0 records in                                                              
51200+0 records out                                                             
107374182400 bytes transferred in 792.487319 secs (135490095 bytes/sec)        

 
[root@freenas ~]# dd if=tmp.dat of=/dev/null bs=2048k count=50k                 
51200+0 records in                                                              
51200+0 records out                                                             
107374182400 bytes transferred in 26.907646 secs (3990471067 bytes/sec)         
[root@freenas ~]#       

 

Writes: 135.5 MB/s

Reads: 3990.5 MB/s

 

Edit:

Ran it again with compression off:

 

[root@freenas ~]# dd if=/dev/zero of=tmp.dat bs=2048k count=50k                 
51200+0 records in                                                              
51200+0 records out                                                             
107374182400 bytes transferred in 1012.589721 secs (106039179 bytes/sec)   

     
[root@freenas ~]# dd if=tmp.dat of=/dev/null bs=2048k count=50k                 
51200+0 records in                                                              
51200+0 records out                                                             
107374182400 bytes transferred in 26.896464 secs (3992130055 bytes/sec)         
[root@freenas ~]#     

 

Writes: 106 MB/s

Reads: 3992 MB/s


It looks like you're sitting in the home directory. Did you change directory to /mnt/volume/dataset before testing?


Oops, I missed that. I tried it again, pointed at the dataset:

 

lz4 OFF:

[root@freenas ~]# dd if=/dev/zero of=/mnt/vol1/Win/tmp.dat bs=2048k count=50k   
51200+0 records in                                                              
51200+0 records out                                                             
107374182400 bytes transferred in 330.278688 secs (325101759 bytes/sec)    
     
[root@freenas ~]# dd if=/mnt/vol1/Win/tmp.dat of=/dev/null bs=2048k count=50k   
51200+0 records in                                                              
51200+0 records out                                                             
107374182400 bytes transferred in 289.577772 secs (370795665 bytes/sec) 

Writes: 325.1 MB/s

Reads: 370.8 MB/s

 

lz4 ON:

[root@freenas ~]# dd if=/dev/zero of=/mnt/vol1/Win/tmp.dat bs=2048k count=50k   
51200+0 records in                                                              
51200+0 records out                                                             
107374182400 bytes transferred in 41.667787 secs (2576911094 bytes/sec)     
    
[root@freenas ~]# dd if=/mnt/vol1/Win/tmp.dat of=/dev/null bs=2048k count=50k   
51200+0 records in                                                              
51200+0 records out                                                             
107374182400 bytes transferred in 32.250603 secs (3329369739 bytes/sec)         
[root@freenas ~]#   

 

Writes: 2576.9 MB/s

Reads: 3329.3 MB/s


3 hours ago, aeronkunteL said:

Oops, I missed that. I tried it again, pointed at the dataset:

lz4 off: writes 325.1 MB/s, reads 370.8 MB/s

lz4 on: writes 2576.9 MB/s, reads 3329.3 MB/s

Now those are numbers that make sense :-) I just retested mine to see where I'm at: currently 384MB/s write and 438MB/s read with 5x 4TB disks in RAIDZ1. So I'd say your speeds for 3 disks are right where they should be.

 

Are you still getting inconsistent speeds over SMB?

