
High-Performance Server and Storage Build: Help Needed

Hi all,

 

We are planning to build a small server + SAN setup for our small company. We produce video and 3D content. We are looking to:

 

1- Have a centralized SAN for storage (around 60 TB of space), with the ability to grow to 200 TB eventually. We are thinking of using Synology's rack-mounted SAN storage with IronWolf disks.

2- Have a small server that works as a file server feeding 10 workstations connected to a switch. We are looking at a 32-core, 64 GB RAM server with an internal 500 GB SSD.
3- The same server handles application serving (i.e. Maya, Resolve).

4- The same server also works as a license server for the aforementioned applications.

5- Eventually we want to add a few more compute-centric nodes (probably EPYC-based) for rendering.

 

Could you help us with a mockup of the main items needed for a build like this? What components would such a workflow need, and what would you generally pick? It would be super helpful.

 

We want to ensure fast transfers back and forth between the workstations and the server/SAN (around 300 MB/s would be ideal).
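To put that in perspective, here's a quick back-of-the-envelope sketch in Python (assuming 300 MB/s per workstation with all 10 active at once, which is a worst case we haven't measured):

```python
# Quick sanity check on the ~300 MB/s target, ignoring protocol overhead
# (real-world SMB throughput usually lands 10-20% below line rate).

MBPS_TARGET = 300        # assumed MB/s per workstation
WORKSTATIONS = 10        # from the plan above

per_ws_gbit = MBPS_TARGET * 8 / 1000          # 2.4 Gb/s: more than 1GbE
aggregate_gbit = per_ws_gbit * WORKSTATIONS   # 24 Gb/s if all pull at once

print(f"per workstation: {per_ws_gbit:.1f} Gb/s -> 10GbE to each desk")
print(f"worst case at the server: {aggregate_gbit:.0f} Gb/s -> "
      "a 40GbE uplink or several bonded 10GbE links")
```

So 1GbE to the desks won't cut it; the switch and the server uplinks look like the places the budget needs to go.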

 

Thanks in advance!

1. By SAN, what protocols are you using? You probably want NAS protocols here if it's being used by workstations, like SMB/CIFS.

 

2. Why have the server and SAN separate? Are you getting a cluster? I'd just have one box that works as a NAS. What are all those cores for? File servers normally don't need much CPU power.

 

3. So people are running the app on the remote system? So you want VDI? Why not run the app on the workstations?

 

 

What network do the workstations have? What OS?

 

If I was building this, I'd get a few systems like Dell R7515s, put a hypervisor on them like Hyper-V, and give each of them something like 50 TB of HDDs and 8 TB of SSDs. Set up something like Storage Spaces Direct if you want to grow; then you have clustered storage that you can easily expand and that stays redundant if a node fails. Then get a switch with a lot of 10GbE ports, and maybe some faster ports for the server, maybe this guy: https://mikrotik.com/product/crs326_24s_2q_rm
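As a rough sketch of the capacity that gets you, assuming 3 nodes and S2D's usual three-way mirror (the node count and drive sizes are just the ballpark figures above, not a quote):

```python
# Back-of-the-envelope usable capacity for a small Storage Spaces Direct
# cluster with the ballpark specs above: 3 nodes, 50 TB HDD + 8 TB SSD
# each. With mixed media, S2D typically uses the SSDs as cache, so usable
# space comes from the HDD tier. Three-way mirror stores each block on
# three nodes, so usable is roughly raw / 3.

NODES = 3                 # assumption: "a few systems"
HDD_TB_PER_NODE = 50      # ballpark from above
MIRROR_COPIES = 3         # three-way mirror, the usual pick for 3+ nodes

raw_tb = NODES * HDD_TB_PER_NODE
usable_tb = raw_tb / MIRROR_COPIES
print(f"raw HDD: {raw_tb} TB -> usable after 3-way mirror: ~{usable_tb:.0f} TB")
# ~50 TB usable is just under the 60 TB target, so plan on a 4th node
# or larger drives from day one.
```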

 

I'd really hire someone that knows what they're doing here. Tuning this and keeping it working right can be a challenge, and it's well worth the cost to get someone that knows how to do it well.


A pair of 45Drives Q30s, once they offer EPYC, may make sense.

That gives you 2 machines with most of the 64 cores free and a few PCIe slots for a pair of GPUs, along with 40 Gb networking and RAID cards.

 

You could go with a single larger S45 and put in 36 drives set up as 3 striped 12-drive RAID 6 groups (RAID 60). I'd have the other half of the rows set up for SSDs, giving you a live-project server with something like 14 in a RAID 50. Both can have one hot spare, and I'd keep a few extra HDDs on hand. You may have space for 2 GPUs in that config.
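Rough usable-capacity math for that layout below; the drive sizes and the RAID 50 group split are assumed for illustration, since I didn't spec them:

```python
# Usable capacity for the S45 layout above. RAID 6 loses 2 drives per
# group to parity; RAID 5 loses 1; RAID 60/50 just stripe those groups.

HDD_TB = 16                          # assumed per-drive size
R60_GROUPS, R60_GROUP_SIZE = 3, 12   # 3 striped 12-drive RAID 6 groups

hdd_data = R60_GROUPS * (R60_GROUP_SIZE - 2)          # 30 data drives
print(f"RAID 60: {hdd_data * HDD_TB} TB usable of "
      f"{R60_GROUPS * R60_GROUP_SIZE * HDD_TB} TB raw")

SSD_TB = 4                           # assumed per-drive size
R50_GROUPS, R50_GROUP_SIZE = 2, 7    # assumed 2x7 split of the 14 SSDs

ssd_data = R50_GROUPS * (R50_GROUP_SIZE - 1)          # 12 data drives
print(f"RAID 50: {ssd_data * SSD_TB} TB usable of "
      f"{R50_GROUPS * R50_GROUP_SIZE * SSD_TB} TB raw")
```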

 

The best all-around single-box solution (I don't like the risk, but the cost cut may be worth it) is an R282-Z93 with 12 HDDs in a RAID 60, then, when you need more space, a disk shelf with an add-in HBA/RAID card. That allows 3 GPUs, 128 cores, a single M.2 for OS/caching, and 40 Gb networking.



Hey, 

 

Thanks for the quick rundown! Super helpful. Let me start with the last point:

-- "Id really hire someone that knows what their doing here"

 

Totally :) Just want to get a grasp of things and get acquainted with the basics.

2- The idea behind using a SAN is that all these workstations will be pulling/pushing lots of data to and from the storage, and I'm worried that bandwidth and slowness would be a problem. I understand that a SAN operates sort of like its own network, and reducing the latency between each computer would be nice. Ultimately all we need is a setup that allows ~300 MB/s interaction with the server for about 5 workstations. Ideally we'd want to be able to scale that (eventually) to more workstations and a few extra compute servers.


3- No no, I mean running on the workstations but *installed* on the server. Apologies, my bad :)

 

Any other thoughts would be super helpful, and if you know of any trusted online resources for hire (or a platform to find them), that would be great :)

 

Thanks for the help thus far!


6 hours ago, sixjames1000 said:

- The idea behind using a SAN is that all these workstations will be pulling/pushing lots of data to and from the storage, and I'm worried that bandwidth and slowness would be a problem. I understand that a SAN operates sort of like its own network, and reducing the latency between each computer would be nice. Ultimately all we need is a setup that allows ~300 MB/s interaction with the server for about 5 workstations. Ideally we'd want to be able to scale that (eventually) to more workstations and a few extra compute servers.

Normally SANs are block storage and don't really work well connected to desktops. I think a NAS would work much better here, and would easily hit the needed speed.
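If you want to confirm whatever you buy hits that number, a quick sequential write test from a workstation is enough. A minimal sketch, where the mount path is a placeholder for wherever the share ends up mounted:

```python
# Quick-and-dirty sequential write benchmark against a mounted share.
# Run from a workstation; adjust TEST_FILE to your actual mount point.

import os
import time

TEST_FILE = "/mnt/nas/speedtest.bin"   # placeholder path
CHUNK = 8 * 1024 * 1024                # 8 MiB blocks
TOTAL_MIB = 4096                       # 4 GiB total, so caches don't dominate

block = os.urandom(CHUNK)
start = time.perf_counter()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MIB * 1024 * 1024 // CHUNK):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())               # force the data out to the server
elapsed = time.perf_counter() - start
print(f"sequential write: {TOTAL_MIB / elapsed:.0f} MiB/s")
os.remove(TEST_FILE)                   # clean up
```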

6 hours ago, sixjames1000 said:

- No no, I mean running on the workstations but *installed* on the server. Apologies, my bad :)

 

Why do that? I'd just install the programs on the workstations themselves. The programs aren't that big, and all the workstations have boot SSDs, right?

6 hours ago, sixjames1000 said:

Any other thoughts would be super helpful, and if you know of any trusted online resources for hire (or a platform to find them), that would be great :)

 

Hunt around for small-business IT places; it's probably easier and better to hire someone. There is a lot of learning that would go into getting this setup right.


Why don't you try an HCI solution? Something with either vSAN or S2D? You can build those clusters with standard servers. Other, more "commercial" solutions would involve Nutanix or VxRail. I have found that for single offices, HCI is cheaper in the long run. 60 TB is not much and could easily be achieved with 3 or 4 nodes.
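Back-of-the-envelope on node counts, assuming a three-way mirror for resiliency (vSAN with FTT=1 mirroring is closer to 2x raw-to-usable, so this errs on the conservative side) and an assumed 60 TB of raw disk per node:

```python
import math

# Nodes needed to reach a usable-capacity target when every block is
# mirrored three ways (usable = raw / 3). A three-way mirror also needs
# at least 3 nodes regardless of capacity.
MIRROR_FACTOR = 3
RAW_TB_PER_NODE = 60     # assumed for illustration

def nodes_needed(usable_tb: float) -> int:
    """Smallest node count whose combined raw space covers the target."""
    return max(3, math.ceil(usable_tb * MIRROR_FACTOR / RAW_TB_PER_NODE))

for target_tb in (60, 200):
    print(f"{target_tb} TB usable: {nodes_needed(target_tb)} nodes")
```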



With the specs you are coming up with, people could think you were a Hollywood editor or CGI person.

 

A SAN is useless for desktops; you can't simply "plug and play" it into "compute" devices and then remote into it from desktops. That would be a horrible solution.

Your best bet is:

1. A "mega" (depends how you define it) NAS, that all desktops can access at once. This cal also be configured with JBOD setup for maximal transfer speeds, however you will need to keep an extra copy somewhere else then.

2. A sort of cluster with several nodes.

 

A SAN will work best if you plan to have 1 server serving several virtual desktops. Otherwise, just stay away. They are really not worth the hassle.

