
New server - Need help.

Leduc

Hi,

I work in geomatics for a company that mostly does mobile mapping (3D scanning), and we need to update our server. Our storage server is almost 20 years old. It was built by someone who somewhat knew what he was doing, but it got updated piecemeal over time and is now just a mess of old switches and wires. The storage solution isn't used anymore, and transfer speeds between computers on the network are so slow that we use external drives or WeTransfer to move files.

 

We are looking to upgrade, and while I'm no expert in networking, it's clear I'll have to take charge of the upgrade. So I'll need help figuring out whether the solution I'm looking at is actually enough for our work.

 

Here's what I was looking at for our storage solution:

    45Drives: Storinator AV15

    Turbo model
    CPU: Intel Xeon Gold 6230R - 26 cores, 52 threads - 2.10 GHz
    Motherboard: Supermicro X11SPL

    RAM: 128 GB

    Usable storage: 144 TB

    HDD: 8 x 20 TB Exos enterprise drives (possibility of adding 8 more)
    (Add-on) Dual-port 40 Gb/s NIC -- I wasn't sure what kind of port to add; the base one is only 1 Gb/s. Figured I should take the best one for future-proofing.
    OS: FreeNAS -- No idea if this is the right choice. Saw Linus setting up some NAS with it.

 

It's mostly going to be used for backing up the current year's data, but I was hoping that at some point we could work on the files directly, to avoid having three or more copies on each person's workstation.

File sizes vary from 2 to 20 GB, and projects can run up to a couple of TB.

 

As for the switch we need, I'm totally lost. We have over 20 workstations in the office and 3 Synology NAS units. I want decent transfer speeds, but I'm not sure it's worth buying an expensive switch, because I'm not sure what the bottleneck will be. My guess right now is that we need to upgrade a bit and replace most of the cables with better-quality Cat6 (some of them aren't even Cat6). But I'm afraid we'll be limited by HDD read/write speeds, since most of our drives are HDDs because of the project sizes.

 

I hope someone will be able to point me in the right direction.

 

Vincent

PS: If Linus feels like coming to Canada to do the unboxing, that would be awesome.


So an FYI:
8 HDDs won't come anywhere NEAR a 40Gbit network pipe. You'd be lucky to get them to actually saturate a 10Gb pipe.

40Gbit = ~4GB/s of usable throughput.
HDDs don't push 500MB/s each.

If you want a "live work" network drive, 40Gbit is a good pipe, but you'd need an array of SSDs to actually make use of that speed. 8-10x 4TB SSDs in RAID 5 (or 6) wouldn't be terrible for performance there.
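To put rough numbers on that, here's a quick back-of-envelope sketch. The per-drive speeds are assumptions for best-case sequential reads; parity RAID and mixed workloads will land well below them:

```python
# Back-of-envelope: can the disk array fill the network pipe?
# Per-drive speeds are assumed best-case sequential figures.

def pipe_gb_s(gbit: float) -> float:
    """Raw bytes/s of a network link, in GB/s (ignores protocol overhead)."""
    return gbit / 8

def array_gb_s(drives: int, per_drive_mb_s: float) -> float:
    """Optimistic aggregate sequential throughput of a striped array."""
    return drives * per_drive_mb_s / 1000

for name, drives, speed in [("8x HDD", 8, 250.0), ("8x SATA SSD", 8, 550.0)]:
    print(f"{name}: ~{array_gb_s(drives, speed):.1f} GB/s vs "
          f"10Gb = {pipe_gb_s(10):.2f} GB/s, 40Gb = {pipe_gb_s(40):.2f} GB/s")
```

Even the optimistic HDD number (~2 GB/s) is less than half a 40Gb pipe, while 8 SATA SSDs get close to it.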

 


It's one thing if you're looking to use this as network storage, but if you're looking to run simulations/renders on the server itself, you'll need to check whether the software you run is even available for the OS this server will use. FreeNAS (now known as TrueNAS) uses bhyve as a virtualizer, and I personally don't recommend it due to its lack of features and its overhead. You might want to look into a hypervisor such as Proxmox. It still uses ZFS but is focused on hosting virtual machines, and it uses QEMU/KVM, which could be used to divvy out system resources for storage hosting and/or rendering on whatever OS supports your software best.

For network switches I don't have an array of recommendations, but the Dell S4810P would give you 4x 40Gig + 48x 10Gig ports (QSFP+ and SFP+ respectively). There are pros and cons to going either Cat6/Cat6a RJ45 or full DAC/fiber-optic, and it really depends on your setup/layout. Maybe wait for others to give other switch suggestions, but SFP+ NICs and fiber can be picked up cheap. Transceivers get a little costly, though.

Network transfer speeds can be affected by file type, by whatever is controlling the RAID array, and by the single-threaded performance of the CPU. If these project files are many, many small files, your speeds are going to suffer no matter what. If it's one really big file and the RAID array doesn't put much CPU overhead on the server, transfer speeds should be pretty good. Note that parity RAID has more CPU overhead than RAID 0/1/10 if a hardware RAID controller isn't being used. However, if you want to use hardware RAID, do not use ZFS.
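As a toy illustration of that small-file penalty (the per-file overhead here is an invented figure for illustration, not a measured SMB/NFS number):

```python
# Toy model: every file adds a fixed overhead (metadata round-trips,
# seeks), so the effective rate collapses as file count grows.
# 0.05 s per file is an illustrative assumption.

def transfer_time(total_gb: float, n_files: int,
                  link_mb_s: float = 1100.0,       # rough 10GbE payload rate
                  per_file_overhead_s: float = 0.05) -> float:
    return (total_gb * 1000) / link_mb_s + n_files * per_file_overhead_s

for n in (1, 1_000, 100_000):
    t = transfer_time(total_gb=100, n_files=n)
    print(f"100 GB in {n:>7} files: {t/60:5.1f} min "
          f"(effective {100 * 1000 / t:.0f} MB/s)")
```

Same 100 GB, wildly different wall-clock time once the file count explodes.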

 

5 minutes ago, tkitch said:

So an FYI:
8 HDDs won't come anywhere NEAR a 40Gbit network pipe. You'd be lucky to get them to actually saturate a 10Gb pipe.

40Gbit = ~4GB/s of usable throughput.
HDDs don't push 500MB/s each.

If he goes the ZFS route, 128GB of RAM will make for quite a sizable ARC (read cache). In a large multi-client environment it could still be justifiable to use 40Gig if frequently accessed files are being read off the server, since they would be served from RAM, not from disk.
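A rough sketch of what that could mean in practice (the hit rates and both throughput figures are assumptions, not benchmarks):

```python
# Reads served from the ARC come back at roughly line rate instead of
# disk-array rate. ~4400 MB/s is about what a 40Gb link can carry;
# 1500 MB/s is an assumed aggregate for the HDD array.

ARC_GB, FILE_GB = 128, 20   # cache size vs. one of the larger E57 files
print(f"ARC holds roughly {ARC_GB // FILE_GB} of the 20 GB files at once")

def blended_read_mb_s(hit_rate: float, cached: float = 4400.0,
                      disk: float = 1500.0) -> float:
    """Average read rate given the fraction of reads served from ARC."""
    return hit_rate * cached + (1 - hit_rate) * disk

for hr in (0.0, 0.5, 0.9):
    print(f"hit rate {hr:.0%}: ~{blended_read_mb_s(hr):.0f} MB/s average")
```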


30 minutes ago, tkitch said:

So an FYI:
8 HDDs won't come anywhere NEAR a 40Gbit network pipe. You'd be lucky to get them to actually saturate a 10Gb pipe.

40Gbit = ~4GB/s of usable throughput.
HDDs don't push 500MB/s each.

If you want a "live work" network drive, 40Gbit is a good pipe, but you'd need an array of SSDs to actually make use of that speed. 8-10x 4TB SSDs in RAID 5 (or 6) wouldn't be terrible for performance there.

 

Thanks for the info.

Sadly I can't go for SSDs because of the price/capacity ratio. I guess I'll have to wait for prices to come down.

Our budget was 15 to 20k Canadian, and I'm already busting it with the Storinator. That got approved, but I doubt they would accept a 50k SSD solution.

I guess I can remove the 40Gb/s add-on.


5 minutes ago, Windows7ge said:

It's one thing if you're looking to use this as network storage, but if you're looking to run simulations/renders on the server itself, you'll need to check whether the software you run is even available for the OS this server will use. FreeNAS (now known as TrueNAS) uses bhyve as a virtualizer, and I personally don't recommend it due to its lack of features and its overhead. You might want to look into a hypervisor such as Proxmox. It still uses ZFS but is focused on hosting virtual machines, and it uses QEMU/KVM, which could be used to divvy out system resources for storage hosting and/or rendering on whatever OS supports your software best.

For network switches I don't have an array of recommendations, but the Dell S4810P would give you 4x 40Gig + 48x 10Gig ports (QSFP+ and SFP+ respectively). There are pros and cons to going either Cat6/Cat6a RJ45 or full DAC/fiber-optic, and it really depends on your setup/layout. Maybe wait for others to give other switch suggestions, but SFP+ NICs and fiber can be picked up cheap. Transceivers get a little costly, though.

Network transfer speeds can be affected by file type, by whatever is controlling the RAID array, and by the single-threaded performance of the CPU. If these project files are many, many small files, your speeds are going to suffer no matter what. If it's one really big file and the RAID array doesn't put much CPU overhead on the server, transfer speeds should be pretty good. Note that parity RAID has more CPU overhead than RAID 0/1/10 if a hardware RAID controller isn't being used. However, if you want to use hardware RAID, do not use ZFS.

If he goes the ZFS route, 128GB of RAM will make for quite a sizable ARC (read cache). In a large multi-client environment it could still be justifiable to use 40Gig if frequently accessed files are being read off the server, since they would be served from RAM, not from disk.

Thanks for the information.

Projects are mostly composed of multiple 2GB to 20GB E57 files.
I wasn't aiming to use a RAID controller, since my knowledge in this area is not very deep.


10 minutes ago, Leduc said:

Thanks for the info.

Sadly I can't go for SSDs because of the price/capacity ratio. I guess I'll have to wait for prices to come down.

Our budget was 15 to 20k Canadian, and I'm already busting it with the Storinator. That got approved, but I doubt they would accept a 50k SSD solution.

I guess I can remove the 40Gb/s add-on.

The switch and networking will chew into that budget very quickly too. I'd probably stick with 1GbE to the workstations, with a 10GbE uplink to the server. Those switches should be about $500 depending on features.

 

1 minute ago, Leduc said:

Thanks for the information.

Projects are mostly composed of multiple 2GB to 20GB E57 files.
I wasn't aiming to use a RAID controller, since my knowledge in this area is not very deep.

How long is an acceptable time to copy those files?

 

With 1GbE that would be about 20 seconds to 3 minutes, and 10GbE could do it in 2 to 20 seconds (but you'd likely be disk-limited before hitting the network limit).
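Those estimates check out with simple arithmetic, using rough usable rates of ~117 MB/s for 1GbE and ~1170 MB/s for 10GbE:

```python
# Sanity check: copy time = file size / usable link rate.
# Real 10GbE copies will usually hit the disks' limit first.

def copy_seconds(size_gb: float, link_mb_s: float) -> float:
    return size_gb * 1000 / link_mb_s

for size_gb in (2, 20):
    print(f"{size_gb:>2} GB file: 1GbE ~{copy_seconds(size_gb, 117):.0f} s, "
          f"10GbE ~{copy_seconds(size_gb, 1170):.0f} s")
```

That gives ~17 s to ~171 s (about 3 minutes) on 1GbE, and ~2 s to ~17 s on 10GbE.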

 

RAID controllers are pretty easy to use, and would be faster than something like ZFS in TrueNAS.

 

 

 

What's your backup plan? What will you do if the server fails and you can't get any data off it? You probably want to back up to something like an HDD archive, the cloud, or another server.

 

Are the desktops managed? Are they on a domain?

 

I'd probably just get something like this guy here: https://www.synology.com/en-us/products/DS3622xs+. Easier to use, very good software, cheaper, smaller, and quieter.

 


Depending on how well your software deals with network shares, just a server with 10Gig ports, a switch with a few 10Gig ports, and gigabit everything else could be plenty.

 

As for the rest of the server: how much space would you practically need? Are you dealing with a Windows domain? Does the server need to pull other duties (said Windows domain, perhaps)?

45Drives is all fine and good, but if you don't have a 'tech bro' on site, it may make more business sense to just buy an HP ProLiant something and call it a day.


You can do 10Gbit for the server and get a switch with 10Gbit SFP+ uplinks. D-Link and other companies make proper managed business switches with 10Gb SFP+ uplinks and 1Gbit ports for very reasonable prices.

8 HDDs will absolutely beat out 1Gbit networking, but 10Gbit would be near their limit.


3 minutes ago, Leduc said:

Thanks for the information.

Projects are mostly composed of multiple 2GB to 20GB E57 files.
I wasn't aiming to use a RAID controller, since my knowledge in this area is not very deep.

I take it most clients currently run on a 1Gig network?

 

Do clients download the files to their stations, or do they read and modify the files directly off the server? I ask because if all the clients only have HDDs, no client will see much above 175MB/s read or write even on a 10Gig network. Editing directly off the server would alleviate that, but it increases the network load significantly and could stall everyone else if the HDD array in the server becomes the next bottleneck. This is where SSDs would be wanted, but they come at a higher price tag; multiply that across all the stations and the server, and you're looking at something very, very expensive.
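A quick sketch of how that sharing plays out (the 1500 MB/s aggregate array figure is an optimistic assumption for sequential reads):

```python
# Clients editing directly off the server split the array's throughput.
# 1500 MB/s aggregate for the HDD array is an assumed, optimistic figure.

ARRAY_MB_S = 1500.0

for clients in (1, 5, 20):
    print(f"{clients:>2} concurrent readers: ~{ARRAY_MB_S / clients:.0f} MB/s each")
```

With 20 stations pulling at once, each one would see less than a single local HDD's worth of bandwidth.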


Just now, manikyath said:

Depending on how well your software deals with network shares, just a server with 10Gig ports, a switch with a few 10Gig ports, and gigabit everything else could be plenty.

As for the rest of the server: how much space would you practically need? Are you dealing with a Windows domain? Does the server need to pull other duties (said Windows domain, perhaps)?

45Drives is all fine and good, but if you don't have a 'tech bro' on site, it may make more business sense to just buy an HP ProLiant something and call it a day.

We need around 150 TB of usable storage. For now it would only be used as a storage solution, but I was hoping to eventually host data for clients; my guess is we would need way more storage to keep older projects.

There is the possibility of installing a Windows OS on the 45Drives unit, but I figured if I wanted speed I'd need to go with a Linux distro.


1 minute ago, Leduc said:

We need around 150 TB of usable storage. For now it would only be used as a storage solution, but I was hoping to eventually host data for clients; my guess is we would need way more storage to keep older projects.

There is the possibility of installing a Windows OS on the 45Drives unit, but I figured if I wanted speed I'd need to go with a Linux distro.

It might be best to split the 'work server' from the client-facing data host. Perhaps rent some datacenter space for the client-accessible side (e.g. they request the data, an employee moves it to the shared storage and provides them with a password, and a policy automatically removes it after 30 days).

 

As for the 150TB size... that's certainly not gonna fit in a ProLiant 😛

 

The thing here is: if you go for a Linux-based OS, it may be worth looking into some of the enterprise Synology products instead. On that note, does the company have an IT partner of sorts? They may be able to get you some nice quotes on enterprise gear (it's not uncommon for quotes on the big-boy equipment to come in lower than Amazon's sticker price).

 

I'll have a poke around later tonight to see what the current-day options are.


11 minutes ago, manikyath said:

It might be best to split the 'work server' from the client-facing data host. Perhaps rent some datacenter space for the client-accessible side (e.g. they request the data, an employee moves it to the shared storage and provides them with a password, and a policy automatically removes it after 30 days).

As for the 150TB size... that's certainly not gonna fit in a ProLiant 😛

The thing here is: if you go for a Linux-based OS, it may be worth looking into some of the enterprise Synology products instead. On that note, does the company have an IT partner of sorts? They may be able to get you some nice quotes on enterprise gear (it's not uncommon for quotes on the big-boy equipment to come in lower than Amazon's sticker price).

I'll have a poke around later tonight to see what the current-day options are.

I'm going to see an IT partner to find out what they can offer; sadly, the local IT companies are not the best. We are based in Quebec City, and we used to have an IT partner in Montreal that had better equipment for sale, but it's a 3-hour drive every time a workstation has an issue (they are brand new and still under warranty).


6 hours ago, Leduc said:

the local IT company

If there were only one IT company for the entirety of Quebec, I'd be mind-blown. They're usually not very 'visible' companies, but I'd bet there are dozens.

