Help building a NAS / render server for work!

Hey dear LTT community,

 

I'm currently in the midst of planning, and eventually building, a new server for my new job.

First of all, some background on the company I work for: it focuses on photos and videos for commercial use and marketing, as well as wedding photography etc. (occasional 4K video included).

The plan is for the server to, on the one hand, archive and store the data that comes in from our shoots, and on the other hand act as a sort of render server, using its workstation-grade hardware to help with rendering and editing and to free up capacity on our photographers' and videographers' machines.

(The overwhelming majority of them work on Macs, i.e. OS X, by the way.)

 

Here is the parts list I have in mind:

 

Case: Lian Li PC-343

PSU: Corsair AX 1500i 

Motherboard: ASUS Z10PE-D16 WS

CPU: Intel Xeon E5-2640 v4

GPU: EVGA GeForce GTX 1080 Ti FTW3

RAM: Kingston KVR21R15D4K4/64

SSD: Samsung 960 Evo 1TB

HDD: 5x WD RED PRO 8TB

CPU-Cooling: Noctua NH-U9DX i4 

Case Cooling: 2x Be Quiet Silent Wings 3 120mm

Misc: 2x Icy Dock FlexCage MB973SP-1B 3.5" hot-swap cage

 

The SSD is supposed to be used as a cache and for the operating system.

10 Gbit Ethernet or Thunderbolt via a PCI Express expansion card, or something similar, would be great, but I am not quite sure how best to solve this.

 

How does this sound to you, and what should I run on the server? FreeNAS, Unraid, ...?

 

All help is really appreciated, guys and girls :-)

 

Greetings from Germany


How much RAM are you getting? For the archival purpose I would definitely use FreeNAS. I don't know much about render servers. Maybe do something like Proxmox so you can virtualize your FreeNAS and render server in the same box?


@newgeneral10 64 gigs to start out with. The mobo can handle up to 1024 GB, but realistically 256 GB would be achievable in the long run, especially if we upgrade to a second Xeon E5-2640 v4. But for the start it would be 64 gigs.

How does Proxmox work, and do you have any experience with it?


1 minute ago, Xadras said:

64 gigs to start out with. The mobo can handle up to 1024 GB, but realistically 256 GB would be achievable in the long run, especially if we upgrade to a second Xeon E5-2640 v4. But for the start it would be 64 gigs.

How does Proxmox work, and do you have any experience with it?

Quote or tag him (@newgeneral10) so he can see you replied.


5 minutes ago, Xadras said:

@newgeneral10 64 gigs to start out with. The mobo can handle up to 1024 GB, but realistically 256 GB would be achievable in the long run, especially if we upgrade to a second Xeon E5-2640 v4. But for the start it would be 64 gigs.

How does Proxmox work, and do you have any experience with it?

I haven't used Proxmox yet, but I'm going to in my next build. It's a Type 1 hypervisor, which basically means it's the operating-system form of VMware or VirtualBox. Basically, you install Proxmox on your hardware, then install FreeNAS and other OSes on top of it. This way you can have, say, 10 virtual machines on one physical machine. In your use case, you could install FreeNAS and whatever render server software you want on the same machine: one OS dedicated to storage and one dedicated to rendering. This has security benefits too. And Proxmox claims it only adds 1-3% overhead, so that's nice.
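For reference, spinning a VM up from the Proxmox host shell looks roughly like this. This is a minimal sketch: the VM ID, name, resource figures and ISO path are placeholders, and in practice you would do most of this through the web GUI.

    # Create a VM with 32 GB RAM, 8 cores and a virtio NIC on the default bridge
    qm create 100 --name storage-vm --memory 32768 --cores 8 --net0 virtio,bridge=vmbr0
    # Attach an installer ISO (placeholder path), then boot the VM
    qm set 100 --ide2 local:iso/installer.iso,media=cdrom
    qm start 100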


Just now, newgeneral10 said:

I haven't used Proxmox yet, but I'm going to in my next build. It's a Type 1 hypervisor, which basically means it's the operating-system form of VMware or VirtualBox. Basically, you install Proxmox on your hardware, then install FreeNAS and other OSes on top of it. This way you can have, say, 10 virtual machines on one physical machine. In your use case, you could install FreeNAS and whatever render server software you want on the same machine: one OS dedicated to storage and one dedicated to rendering. This has security benefits too. And Proxmox claims it only adds 1-3% overhead, so that's nice.

 

Thanks, that sounds great for a start; I'll surely read up on this. Any comments on the parts list?


1 minute ago, Xadras said:

 

Thanks, that sounds great for a start; I'll surely read up on this. Any comments on the parts list?

That should be more than enough for the Proxmox/FreeNAS side of things, but again, I don't know much about render servers. Depending on how many users you have, the two onboard gigabit ports should be fine (if you do some sort of teaming or LACP), but I could see how a 10 Gig card would help.
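For the teaming idea, a rough sketch of an LACP bond on a Linux/Proxmox host is below; the NIC names are placeholders, and the switch ports have to be configured for LACP (802.3ad) as well.

    # /etc/network/interfaces (Debian/Proxmox ifupdown syntax)
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-miimon 100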


@newgeneral10

I don't have a lot of time and also not a lot of experience with this stuff, but just to make you aware:

 

I read somewhere that you should not virtualize FreeNAS because, as you probably know, it runs ZFS as its filesystem.

And if I remember correctly, ZFS needs total control over the hard drives it uses, so virtual drives might not be the best idea.

 

I would highly recommend you try to read up on ZFS (or virtualizing a system that uses ZFS)!

 

See this comment for example: https://www.reddit.com/r/homelab/comments/3un70f/home_server_virtualization_esxi_or_unraid/d3557dk/


1 hour ago, Lubi97 said:

@newgeneral10

I don't have a lot of time and also not a lot of experience with this stuff, but just to make you aware:

 

I read somewhere that you should not virtualize FreeNAS because, as you probably know, it runs ZFS as its filesystem.

And if I remember correctly, ZFS needs total control over the hard drives it uses, so virtual drives might not be the best idea.

 

I would highly recommend you try to read up on ZFS (or virtualizing a system that uses ZFS)!

 

See this comment for example: https://www.reddit.com/r/homelab/comments/3un70f/home_server_virtualization_esxi_or_unraid/d3557dk/

What are you rendering? What programs?

 

 

Proxmox has ZFS built in, so no need for FreeNAS as a VM. Just use a container in Proxmox to run a Samba file share and you're good.
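If it helps, a minimal sketch of that setup on the Proxmox host could look like the following. The disk names, pool name and share path are placeholders, and RAIDZ1 across the five REDs is just one possible layout, not a recommendation.

    # Pool the five 8 TB drives into a single RAIDZ1 vdev (one-drive redundancy)
    zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
    # Dataset for project files, with cheap inline compression
    zfs create -o compression=lz4 tank/projects

    # Minimal Samba share, e.g. /etc/samba/smb.conf inside an LXC container
    # that has /tank/projects bind-mounted into it:
    # [projects]
    #     path = /tank/projects
    #     read only = no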


@Xadras

If you want to use the server/workstation as storage/NAS and to render, then FreeNAS isn't going to be an option. FreeNAS isn't a multi-role OS like that, and it has limited support for running even things like Plex.

 

Proxmox could be a good option and supports ZFS; however, if you're going to be using GPUs, GPU-accelerated rendering, and other performance-sensitive applications, I would advise using either Windows Server or Windows 10 Pro.

 

If you are going to go with a Windows-based system, you won't be able to use the OS SSD to cache the HDDs; you'll need a total of three SSDs to do this. Use a smaller 120/250 GB 850 EVO for the OS and two 850 Pros of whichever size you think is right. If using Windows 10 Pro, set the SSDs in the Storage Spaces pool to Journal and they will cache writes; if using Windows Server 2016, you can either set them as Journal or leave them as standard and use Storage Spaces Tiering. I would advise the second option for Windows Server, and remember you must format the virtual disks as ReFS, not NTFS, in all cases.
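For what it's worth, the Windows Journal variant described above would look roughly like this in PowerShell. This is a sketch only: the pool, disk and volume names and the size are placeholders, so verify it against the Storage Spaces documentation before relying on it.

    # Pool every poolable disk (the five HDDs and the two cache SSDs, not the OS SSD)
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "Pool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
    # Mark the two SSDs as Journal disks so they absorb incoming writes
    Get-PhysicalDisk -FriendlyName "SSD1","SSD2" | Set-PhysicalDisk -Usage Journal
    # Create a parity volume and format it ReFS, per the advice above
    New-Volume -StoragePoolFriendlyName "Pool" -FriendlyName "Data" -ResiliencySettingName Parity -FileSystem ReFS -Size 20TB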

 

Edit:

Also, for 10GbE NICs, buy them used off eBay; way cheaper.


2 minutes ago, leadeater said:

however, if you're going to be using GPUs, GPU-accelerated rendering, and other performance-sensitive applications, I would advise using either Windows Server or Windows 10 Pro.

But you can also just run Windows in a VM in Proxmox (or any other KVM hypervisor, or Xen, or ESXi) and give Windows full control over that GPU.
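If you went that route, handing the card to the VM in Proxmox is roughly the following (a sketch: it assumes VT-d/IOMMU is enabled in the BIOS and on the kernel command line, and the PCI address and VM ID are placeholders).

    # Host kernel needs intel_iommu=on (then reboot) before devices can be passed through
    # Give VM 100 the whole GPU at PCI address 01:00 as its primary display adapter
    qm set 100 --machine q35 --hostpci0 01:00,pcie=1,x-vga=1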


5 minutes ago, Electronics Wizardy said:

But you can also just run Windows in a VM in Proxmox (or any other KVM hypervisor, or Xen, or ESXi) and give Windows full control over that GPU.

True, but GPU passthrough isn't always perfect, and user experience and workstation usage are a factor, i.e. fully interactive display output for editing applications where more than straight GPU acceleration is required. There is a reason high-end workstations are not virtualized, and when they are, it's done with professional GPUs, optimized hypervisor drivers and additional software, often using PCoIP or HDX.


31 minutes ago, leadeater said:

@Xadras

If you want to use the server/workstation as storage/NAS and to render, then FreeNAS isn't going to be an option. FreeNAS isn't a multi-role OS like that, and it has limited support for running even things like Plex.

Proxmox could be a good option and supports ZFS; however, if you're going to be using GPUs, GPU-accelerated rendering, and other performance-sensitive applications, I would advise using either Windows Server or Windows 10 Pro.

If you are going to go with a Windows-based system, you won't be able to use the OS SSD to cache the HDDs; you'll need a total of three SSDs to do this. Use a smaller 120/250 GB 850 EVO for the OS and two 850 Pros of whichever size you think is right. If using Windows 10 Pro, set the SSDs in the Storage Spaces pool to Journal and they will cache writes; if using Windows Server 2016, you can either set them as Journal or leave them as standard and use Storage Spaces Tiering. I would advise the second option for Windows Server, and remember you must format the virtual disks as ReFS, not NTFS, in all cases.

Edit:

Also, for 10GbE NICs, buy them used off eBay; way cheaper.

Does it cause any problems that the machines accessing the server will mostly be Macs? Why can't the Windows-based system use the SSD in question?

Thanks for the suggestions so far.


14 hours ago, Xadras said:

Does it cause any problems that the machines accessing the server will mostly be Macs? Why can't the Windows-based system use the SSD in question?

Thanks for the suggestions so far.

Macs can access standard SMB shares, so it won't be a problem.

 

You can't use the OS SSD because of a configuration restriction of Storage Spaces: the disks must be dedicated to the storage pool. There is other caching software, like Primocache, that might be able to do it off a single SSD.

https://www.romexsoftware.com/en-us/primo-cache/

 

@MageTank

You use Primocache, correct?


14 hours ago, Electronics Wizardy said:

What are you rendering? What programs?

 

 

Proxmox has ZFS built in, so no need for FreeNAS as a VM. Just use a container in Proxmox to run a Samba file share and you're good.

Hey, we render Premiere Pro videos (4K as well) and Lightroom files after editing. So the whole Adobe shebang basically, encoder and all.

 


14 hours ago, leadeater said:

Macs can access standard SMB shares, so it won't be a problem.

 

You can't use the OS SSD because of a configuration restriction of Storage Spaces: the disks must be dedicated to the storage pool. There is other caching software, like Primocache, that might be able to do it off a single SSD.

https://www.romexsoftware.com/en-us/primo-cache/

 

@MageTank

You use Primocache, correct?

By the way, do I need some sort of RAID controller? And do you think a redundant power supply is a must-have? Any other safety features I should be thinking about?


3 hours ago, Xadras said:

Hey, we render Premiere Pro videos (4K as well) and Lightroom files after editing. So the whole Adobe shebang basically, encoder and all.

 

What video res, codecs and effects? Normally a render server for Premiere doesn't make sense; just render on your break and you're good.

 

You're gonna want a much faster CPU, and you don't need that much GPU power.

 

I'd get something like dual E5-2643 v4s, as Premiere and Lightroom want high-clocked chips, not tons of cores.

 

2 hours ago, Xadras said:

By the way, do I need some sort of RAID controller? And do you think a redundant power supply is a must-have? Any other safety features I should be thinking about?

For a RAID card, it depends on the OS and setup. If you're using Storage Spaces in Windows, or ZFS, you don't need a RAID card.

 

How much do you care about uptime?

 

PSUs don't fail that often, so I'd personally not bother with redundant ones. And if you're getting redundant ones, you might as well just get a Dell server. Their other big advantage is that you can hook the system up to two different power sources and have redundancy at the power-company level, or run multiple UPSes, but you're probably not running off anything more than a standard 120 V wall plug.

 

You want to have backups of this, and if you really care about uptime, you can have another server that it will fail over to if this one goes down.

 

 

 

EDIT: changed E5-2643 to E5-2643 v4; don't get Sandy Bridge.


9 minutes ago, Electronics Wizardy said:

I'd get something like dual E5-2643s, as Premiere and Lightroom want high-clocked chips, not tons of cores.

Yes, this is a much better CPU to go for.


22 hours ago, Electronics Wizardy said:

What video res, codecs and effects? Normally a render server for Premiere doesn't make sense; just render on your break and you're good.

You're gonna want a much faster CPU, and you don't need that much GPU power.

I'd get something like dual E5-2643 v4s, as Premiere and Lightroom want high-clocked chips, not tons of cores.

For a RAID card, it depends on the OS and setup. If you're using Storage Spaces in Windows, or ZFS, you don't need a RAID card.

How much do you care about uptime?

PSUs don't fail that often, so I'd personally not bother with redundant ones. And if you're getting redundant ones, you might as well just get a Dell server. Their other big advantage is that you can hook the system up to two different power sources and have redundancy at the power-company level, or run multiple UPSes, but you're probably not running off anything more than a standard 120 V wall plug.

You want to have backups of this, and if you really care about uptime, you can have another server that it will fail over to if this one goes down.

EDIT: changed E5-2643 to E5-2643 v4; don't get Sandy Bridge.

We handle up to 4K res, H.264, and we use After Effects as well as 3ds Max for some of our video projects.

Up to 8 people could be working simultaneously.

 

Doesn't the 2640 v4 have a 3.4 GHz boost frequency while the 2643 v4 has a 3.7 GHz boost? Is the difference really that great, or is the boost not available on all cores? Dual chips is the idea for the long run, though budget is an issue for now, and the 2643 v4 is nearly double the price of the 2640 v4.

 

The server should run nearly 24/7. Downtime at night or on the weekends would be a possibility, though. Remote accessibility is something we are considering as well, to let our employees work from home on weekends.

 

As for backups: projects that are not yet finished and delivered to the customer will be stored on the server as well as on the iMac they are edited on. Another copy is supposed to be stored on HDDs that go to a remote site, updated every few weeks. As soon as a project is finished and delivered, the copy on the iMacs is removed. The copy on the server stays as is, and every half year or year we want to catalogue that period's projects onto remote HDDs labelled by year and archive them that way.

 


1 hour ago, Xadras said:

Doesn't the 2640 v4 have a 3.4 GHz boost frequency while the 2643 v4 has a 3.7 GHz boost?

That's with one core; with all cores loaded, the 2640 v4 runs at about 2.6 GHz and the 2643 v4 at about 3.6 GHz.

 

1 hour ago, Xadras said:

We handle up to 4K res, H.264, and we use After Effects as well as 3ds Max for some of our video projects.

Up to 8 people could be working simultaneously.

I'm assuming you have a plan for how you'll use this software with a render server; some software can be a pain to run with one.

 

1 hour ago, Xadras said:

As for backups:

Backups aren't the same as redundancy: if the server goes down, you still have to spend many hours restoring backups, whereas with redundancy via a second server you have no downtime.

