
New storage/media server

Marchy

I work for a video production company and we're looking at upgrading one of our servers to give our team both more storage space (70TB currently) and the ability to scrub media in Premiere/After Effects with ease.

 

Our network currently runs on a Cat6A 1Gb backbone, with direct 10Gb links for those team members straight into the back of the current server. I believe we'll have the budget to make any adjustments required, so please feel free to provide any and all recommendations.

 

Also be gentle, but my boss is a Windows guy. Things like TrueNAS aren't an option as we're still very much reliant on on-prem Active Directory and Group Policy, and he trusts the OS. Basically, I need the best solution to keep all parties happy and within budget.

 

Thanks in advance guys 🙂


Honestly, I think Linus' older videos on the upgrades of Whonnock would be a great resource for research. That server has been their workhorse for years and has been upgraded to meet demand. Of course, they've added a few more servers to the rack nowadays, but Whonnock is probably the closest to what you're doing.

Here's a couple of the videos:

 

"Although there's a problem on the horizon; there's no horizon." - K-2SO


You could get a moderate (low-to-mid spec) server from HP, Lenovo, or Dell and build a Windows Server 2019/2022 storage server really easily.

 

How powerful the CPUs need to be depends on what you're doing on the box beyond just storage.

 

A single 2U server can hold about two dozen 2.5" spinning disks. A 4U server can easily hold a dozen or more 3.5" server HDDs.


You may want to look at professional solutions if this is for a business...

 

There should be some feasible options from NetApp, Pure Storage (for all-flash) or HPE... Their people can also design a solution directly for your use case.

Maybe you can take that and try to build something similar yourself.


6 minutes ago, dj_ripcord said:

Honestly, I think Linus' older videos on the upgrades of Whonnock would be a great resource for research. That server has been their workhorse for years and has been upgraded to meet demand. Of course, they've added a few more servers to the rack nowadays, but Whonnock is probably the closest to what you're doing.

Here's a couple of the videos:

 

The last video was the exact reason why I thought of TrueNAS in the first place, but the boss hates anything that even smells like Linux 😛 Appreciate the resources either way as, tbf, LTT's setup was great and I doubt we'd be as resource-intensive as them. Thanks a bunch 🙂


4 minutes ago, tkitch said:

You could get a moderate (low-to-mid spec) server from HP, Lenovo, or Dell and build a Windows Server 2019/2022 storage server really easily.

 

How powerful the CPUs need to be depends on what you're doing on the box beyond just storage.

 

A single 2U server can hold about two dozen 2.5" spinning disks. A 4U server can easily hold a dozen or more 3.5" server HDDs.

For sure. Storage is actually the easiest part of the build as tech's come a long way since the current server was installed in 2015. I'm just thinking primarily of what CPU/RAM would be best to make sure the team can scrub through their media as seamlessly as possible. Appreciate the input 🙂


5 minutes ago, ColdFusion04 said:

You may want to look at professional solutions if this is for a business...

 

There should be some feasible options from NetApp, Pure Storage (for all-flash) or HPE... Their people can also design a solution directly for your use case.

Maybe you can take that and try to build something similar yourself.

Good shout to be fair. Thanks 🙂


Just now, Marchy said:

For sure. Storage is actually the easiest part of the build as tech's come a long way since the current server was installed in 2015. I'm just thinking primarily of what CPU/RAM would be best to make sure the team can scrub through their media as seamlessly as possible. Appreciate the input 🙂

The data processing is being done mostly by the client workstations, not the server. (But yeah, the server will need a decent CPU if it's feeding multiple high-res video streams at once.)

 

The server needs to have fast enough data access to be able to feed that many clients at once. This is why Linus went to an SSD SAN for the media editors, so they could reliably get the bandwidth to do 4K streams over 10Gb networking.


1 minute ago, tkitch said:

The data processing is being done mostly by the client workstations, not the server. (But yeah, the server will need a decent CPU if it's feeding multiple high-res video streams at once.)

 

The server needs to have fast enough data access to be able to feed that many clients at once. This is why Linus went to an SSD SAN for the media editors, so they could reliably get the bandwidth to do 4K streams over 10Gb networking.

I think 10Gb should be easy to achieve for us as we should have the budget for a new switch, and the rest of the infrastructure's already in place. When you say he went to an SSD SAN, is that in the most recent revision of Whonnock, or something from the start? I'll be honest, I haven't been keeping up with their setup as much as I could have done, and I'm not the most knowledgeable about server infrastructure as of yet.


2 minutes ago, Marchy said:

I think 10Gb should be easy to achieve for us as we should have the budget for a new switch, and the rest of the infrastructure's already in place. When you say he went to an SSD SAN, is that in the most recent revision of Whonnock, or something from the start? I'll be honest, I haven't been keeping up with their setup as much as I could have done, and I'm not the most knowledgeable about server infrastructure as of yet.

Well, HDDs are bad at feeding 1GB/s; you'd need at least a 15-disk array to maintain that speed.

 

And then if you have multiple people accessing, seek time would be horrid and kill transfers.

 

So if you're trying to keep a high-speed SAN running for multiple video feeds at once?  You're gonna need an SSD Array to do that.
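
As a rough back-of-envelope sketch of why spinning disks struggle here, the Python below estimates how many drives it would take to feed several editors at once. The editor count, per-stream bandwidth, per-disk throughput and seek penalty are illustrative assumptions of mine, not figures quoted anywhere in this thread.

import math

# Illustrative assumptions only - adjust to your codecs, team size, and disks.
EDITORS = 5                # simultaneous 10Gb-connected editors
STREAM_MB_S = 400          # per-editor demand, e.g. 4K ProRes-class footage (MB/s)
HDD_SEQ_MB_S = 180         # optimistic sequential throughput of one 7.2K NL-SAS HDD
SEEK_DERATE = 0.4          # rough penalty once several clients seek on the same array
SSD_MB_S = 500             # one SATA SSD; NVMe is several times higher

aggregate = EDITORS * STREAM_MB_S              # MB/s the server must sustain
per_hdd = HDD_SEQ_MB_S * SEEK_DERATE           # effective MB/s per HDD under mixed access
hdds_needed = math.ceil(aggregate / per_hdd)   # data drives only, before parity/spares
ssds_needed = math.ceil(aggregate / SSD_MB_S)

print(f"Aggregate demand : {aggregate} MB/s")
print(f"HDDs needed      : {hdds_needed}")
print(f"SATA SSDs needed : {ssds_needed}")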


5 hours ago, tkitch said:

Well, HDDs are bad at feeding 1GB/s; you'd need at least a 15-disk array to maintain that speed.

 

And then if you have multiple people accessing, seek time would be horrid and kill transfers.

 

So if you're trying to keep a high-speed SAN running for multiple video feeds at once?  You're gonna need an SSD Array to do that.

Hey, what about tiered storage? Tier 1 = SSD storage for files that are accessed often, and Tier 2 = "cold storage" for data that has to be available but isn't accessed continuously. That would be much more cost-effective than an all-flash setup, especially if you consider that the already existing data pool is about 70TB. If you add a reasonable data growth rate, that would be approx. 100-120TB in 3 years (just my wild guess, I don't know how much data is produced annually).

If you assume only about 20% of the data is actively being worked on, you could save a pretty penny to invest in maybe a bigger backup or other projects (maybe related to this storage project).
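
To put rough numbers on that split, here's a small sizing sketch in Python. The growth rate and hot-data fraction are guesses (the 15TB/year figure is a placeholder, not from this thread), so treat the output as illustrative only.

# Tier-sizing sketch: 70TB today, ~20% "hot" data, assumed 15TB/year of new footage.
current_tb = 70
annual_growth_tb = 15      # placeholder - replace with real ingest figures
years = 3
hot_fraction = 0.20

projected_tb = current_tb + annual_growth_tb * years
ssd_tier_tb = projected_tb * hot_fraction
hdd_tier_tb = projected_tb - ssd_tier_tb

print(f"Projected pool after {years} years : ~{projected_tb:.0f} TB")
print(f"SSD ('live') tier                  : ~{ssd_tier_tb:.0f} TB")
print(f"HDD ('cold') tier                  : ~{hdd_tier_tb:.0f} TB")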


2 minutes ago, ColdFusion04 said:

Hey, what about tiered storage? Tier 1 = SSD storage for files that are accessed often, and Tier 2 = "cold storage" for data that has to be available but isn't accessed continuously. That would be much more cost-effective than an all-flash setup, especially if you consider that the already existing data pool is about 70TB. If you add a reasonable data growth rate, that would be approx. 100-120TB in 3 years (just my wild guess, I don't know how much data is produced annually).

If you assume only about 20% of the data is actively being worked on, you could save a pretty penny to invest in maybe a bigger backup or other projects (maybe related to this storage project).

Yeah, having a "live use" array and a second "bulk storage" is totally doable, and would be fine, but it'll depend on their workflow and such.

 

I can't speak to what they're doing.


11 hours ago, ColdFusion04 said:

Hey, what about tiered storage? Tier 1 = SSD storage for files that are accessed often, and Tier 2 = "cold storage" for data that has to be available but isn't accessed continuously. That would be much more cost-effective than an all-flash setup, especially if you consider that the already existing data pool is about 70TB. If you add a reasonable data growth rate, that would be approx. 100-120TB in 3 years (just my wild guess, I don't know how much data is produced annually).

If you assume only about 20% of the data is actively being worked on, you could save a pretty penny to invest in maybe a bigger backup or other projects (maybe related to this storage project).

Sounds good, it's ideally what I was after. I'm not sure whether that means getting two separate machines to do this, or if it can be done on a single machine? Another issue that cropped up yesterday: I think I'm only permitted to go down the HPE route, which feels very restrictive, but I guess they just want to match all the existing infrastructure.

11 hours ago, tkitch said:

Yeah, having a "live use" array and a second "bulk storage" is totally doable, and would be fine, but it'll depend on their workflow and such.

 

I can't speak to what they're doing.

There isn't much of a workflow at the moment. The 70TB server is filling fast because they just leave their old projects on there and continue working on new ones in the same pool. There was talk of hiring a data manager just to look after that equipment alone. I think we're definitely after some long-term storage as well as fast 'live' storage for the team to work on, as ColdFusion described.


Matching the existing infrastructure actually has benefits (from a financial and administration standpoint), because you can streamline your warranty and maintenance contracts. Also, if you have only one system integrator to work with, you have only one partner to blame if something happens.

 

And if you manage to get a good system integrator, they'll also analyze your use cases and should help you define and implement such workflows as well as future-proof concepts.


6 hours ago, Marchy said:

There isn't much of a workflow at the moment. The 70TB server is filling fast because they just leave their old projects on there and continue working on new ones in the same pool. There was talk of hiring a data manager just to look after that equipment alone. I think we're definitely after some long-term storage as well as fast 'live' storage for the team to work on, as ColdFusion described.

If you've got a "digital hoarder" situation, building a 45Drives-style solution, similar to LTT's, for huge cold storage would not be the worst call. Expensive, sure, but long-term functionality is not cheap.

 

So a 45-drive unit would have 3x 15-drive arrays - 1 hot spare, 2 parity, and 12 data drives per array. Using 20TB drives, you'd get over 200TB of usable space out of each array, making it well over 600TB in total, roughly a 10x increase over your current setup.
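
Checking that arithmetic with a quick Python snippet (layout and drive size as described above; the 70TB current pool comes from the original post):

DRIVE_TB = 20
ARRAYS = 3
DATA_DRIVES = 12            # 15 bays minus 2 parity minus 1 hot spare per array
CURRENT_POOL_TB = 70

usable_per_array = DATA_DRIVES * DRIVE_TB    # 240 TB
usable_total = usable_per_array * ARRAYS     # 720 TB

print(f"Usable per array : {usable_per_array} TB")
print(f"Usable total     : {usable_total} TB "
      f"({usable_total / CURRENT_POOL_TB:.1f}x the current pool)")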

 

That server would only need "moderate" CPU Power, as it wouldn't be the live editing box, only the archives.

 

You'd then want a second server with more CPU power and SSDs for your "live editing" rig, so that everyone can get stupidly high-speed access over 10Gb networking.


You might want @leadeater to comment on this one, considering your boss appears to be opposed to Linux from your comments early on.

My understanding is Storage Spaces does constant analysis of file access and can automatically move frequently used files to the SSD tier, while infrequently used files are held on the HDD tier. This gives you a balance of SSD performance for "live projects" and HDD capacity for accessibility and affordability.


Desktop: Ryzen9 5950X | ASUS ROG Crosshair VIII Hero (Wifi) | EVGA RTX 3080Ti FTW3 | 32GB (2x16GB) Corsair Dominator Platinum RGB Pro 3600Mhz | EKWB EK-AIO 360D-RGB | EKWB EK-Vardar RGB Fans | 1TB Samsung 980 Pro, 4TB Samsung 980 Pro | Corsair 5000D Airflow | Corsair HX850 Platinum PSU | Asus ROG 42" OLED PG42UQ + LG 32" 32GK850G Monitor | Roccat Vulcan TKL Pro Keyboard | Logitech G Pro X Superlight  | MicroLab Solo 7C Speakers | Audio-Technica ATH-M50xBT2 LE Headphones | TC-Helicon GoXLR | Audio-Technica AT2035 | LTT Desk Mat | XBOX-X Controller | Windows 11 Pro

 


Server: Fractal Design Define R6 | Ryzen 3950x | ASRock X570 Taichi | EVGA GTX1070 FTW | 64GB (4x16GB) Corsair Vengeance LPX 3000Mhz | Corsair RM850v2 PSU | Fractal S36 Triple AIO | 12 x 8TB HGST Ultrastar He10 (WD Whitelabel) | 500GB Aorus Gen4 NVMe | 2 x 2TB Samsung 970 Evo Plus NVMe | LSI 9211-8i HBA

 


8 hours ago, Jarsky said:

You might want @leadeater to comment on this one, considering your boss appears to be opposed to Linux from your comments early on.

My understanding is Storage Spaces does constant analysis of file access and can automatically move frequently used files to the SSD tier, while infrequently used files are held on the HDD tier. This gives you a balance of SSD performance for "live projects" and HDD capacity for accessibility and affordability.

@Marchy So I'm probably featured in one of those linked videos (or older ones on the topic), well, me trying to help Linus anyway lol. Let's just say he took some of my advice, didn't take some of my advice, got stung, and made changes that aligned more with my past advice. He doesn't ask me anymore though; he reaches out to Wendell and ServeTheHome (I pointed him towards them).

 

The very long and short of it is that Storage Spaces sucks majorly if you need to use any kind of parity configuration. That includes Journal SSDs (write-back cache) or tiering. Also, technically, tiering is only supported on Storage Spaces Direct (S2D), unless that changed in Server 2019 or 2022. You can get it working on a single server, which I do at home on my storage server running Storage Spaces, but this is not something I would "recommend" doing in a business environment.

 

Realistically, the only acceptable-performance configurations you can get out of Storage Spaces are Mirror: either HDD only, HDD + SSD Journal, or SSD tier + HDD tier.

 

I personally like a lot of things about Storage Spaces but there are still a lot of not good things about it and the not good things tend to matter more.

 

If you want tiering then do it manually: have an SSD array and share and an HDD array and share, and move projects between them as required. This will actually give a performance guarantee. Other, more expensive OEM vendor systems can do this tiering much better, but you get what you pay for.
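
As an illustration of that manual approach, here's a minimal Python sketch of a scheduled job that demotes idle projects from the fast share to the bulk share. The share paths and the 30-day cutoff are made-up examples rather than anything from this thread; test it on dummy data before pointing it at real footage.

import shutil
import time
from pathlib import Path

SSD_SHARE = Path(r"\\mediaserver\live")      # hypothetical fast "live" share
HDD_SHARE = Path(r"\\mediaserver\archive")   # hypothetical bulk/archive share
CUTOFF_DAYS = 30                             # example threshold for "idle"

def last_touched(project: Path) -> float:
    """Most recent modification time of any file inside the project folder."""
    times = [f.stat().st_mtime for f in project.rglob("*") if f.is_file()]
    return max(times, default=project.stat().st_mtime)

def archive_stale_projects() -> None:
    cutoff = time.time() - CUTOFF_DAYS * 86400
    for project in SSD_SHARE.iterdir():
        if project.is_dir() and last_touched(project) < cutoff:
            print(f"Archiving {project.name} -> {HDD_SHARE}")
            shutil.move(str(project), str(HDD_SHARE / project.name))

if __name__ == "__main__":
    archive_stale_projects()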

 

On 6/30/2022 at 1:37 AM, Marchy said:

 

Also be gentle, but my boss is a Windows guy. Things like TrueNAS aren't an option as we're still very much reliant on on-prem Active Directory and Group Policy, and he trusts the OS. Basically, I need the best solution to keep all parties happy and within budget.

Does it have to be a build-your-own solution? ProMax (we have some of their systems) is a vendor solution in the media design market that is based on Windows Storage Server and uses hardware RAID, not Storage Spaces.

 

Good hardware RAID controllers paired with good NL-SAS disks, patrol reads, and data backups are still an applicable and adequate solution today, despite what you might read and hear online. As with most things in IT, there is more than one way to do or achieve something, and a universal best way tends not to exist.

 

I would however recommend NetApp over ProMax but, quite a big but, NetApp costs significantly more for their FAS and AFF systems than a ProMax or equivalent. NetApp E-Series is more affordable, however it only offers block storage, so you would need to put a server in front of it to create any SMB/NFS shares; this would be the next level up from DAS RAID in a Windows server.

 

On 6/30/2022 at 1:53 AM, Marchy said:

I'm just thinking primarily of what CPU/RAM would be best to make sure the team can scrub through their media as seamlessly as possible. Appreciate the input 🙂

File serving actually has very low CPU requirements, so this isn't of much concern unless you're running something like ZFS, which by the sounds of it you will not be doing. Lower core count and higher frequency is slightly better here, but only slightly.

 

Windows is actually really efficient CPU-wise at serving SMB shares compared to Samba; it is a Microsoft thing after all.

 

22 hours ago, Marchy said:

Another issue that cropped up yesterday: I think I'm only permitted to go down the HPE route, which feels very restrictive, but I guess they just want to match all the existing infrastructure.

Well, that's a bit dumb, although I prefer their servers over all others. But that limits you to either buying a standard server system and rolling your own file server solution, or using an HPE NAS/SAN system, and there are better vendors out there, or at least others just as good.


On 6/30/2022 at 8:57 AM, ColdFusion04 said:

Matching the existing infrastructure actually has benefits (from a financial and administration standpoint), because you can streamline your warranty and maintenance contracts. Also, if you have only one system integrator to work with, you have only one partner to blame if something happens.

 

And if you manage to get a good system integrator, they'll also analyze your use cases and should help you define and implement such workflows as well as future-proof concepts.

Appreciate the input. That's the only reason I can really see for keeping all the server equipment the same, as we have a mix of HP and Dell (primarily Dell) for users' workstations etc. Plus, my boss is probably a bit OCD 😄


18 hours ago, tkitch said:

If you've got a "digital hoarder" situation, building a 45Drives-style solution, similar to LTT's, for huge cold storage would not be the worst call. Expensive, sure, but long-term functionality is not cheap.

 

So a 45-drive unit would have 3x 15-drive arrays - 1 hot spare, 2 parity, and 12 data drives per array. Using 20TB drives, you'd get over 200TB of usable space out of each array, making it well over 600TB in total, roughly a 10x increase over your current setup.

 

That server would only need "moderate" CPU Power, as it wouldn't be the live editing box, only the archives.

 

You'd then want a second server with more CPU power and SSDs for your "live editing" rig, so that everyone can get stupidly high-speed access over 10Gb networking.

That's definitely one option which I'll be putting forward. Appreciate you explaining the power requirements for it.


10 hours ago, leadeater said:

@Marchy So I'm probably featured in one of those linked videos (or older ones on the topic), well, me trying to help Linus anyway lol. Let's just say he took some of my advice, didn't take some of my advice, got stung, and made changes that aligned more with my past advice. He doesn't ask me anymore though; he reaches out to Wendell and ServeTheHome (I pointed him towards them).

The very long and short of it is that Storage Spaces sucks majorly if you need to use any kind of parity configuration. That includes Journal SSDs (write-back cache) or tiering. Also, technically, tiering is only supported on Storage Spaces Direct (S2D), unless that changed in Server 2019 or 2022. You can get it working on a single server, which I do at home on my storage server running Storage Spaces, but this is not something I would "recommend" doing in a business environment.

Realistically, the only acceptable-performance configurations you can get out of Storage Spaces are Mirror: either HDD only, HDD + SSD Journal, or SSD tier + HDD tier.

I personally like a lot of things about Storage Spaces but there are still a lot of not good things about it and the not good things tend to matter more.

If you want tiering then do it manually: have an SSD array and share and an HDD array and share, and move projects between them as required. This will actually give a performance guarantee. Other, more expensive OEM vendor systems can do this tiering much better, but you get what you pay for.

Does it have to be a build-your-own solution? ProMax (we have some of their systems) is a vendor solution in the media design market that is based on Windows Storage Server and uses hardware RAID, not Storage Spaces.

Good hardware RAID controllers paired with good NL-SAS disks, patrol reads, and data backups are still an applicable and adequate solution today, despite what you might read and hear online. As with most things in IT, there is more than one way to do or achieve something, and a universal best way tends not to exist.

I would however recommend NetApp over ProMax but, quite a big but, NetApp costs significantly more for their FAS and AFF systems than a ProMax or equivalent. NetApp E-Series is more affordable, however it only offers block storage, so you would need to put a server in front of it to create any SMB/NFS shares; this would be the next level up from DAS RAID in a Windows server.

File serving actually has very low CPU requirements, so this isn't of much concern unless you're running something like ZFS, which by the sounds of it you will not be doing. Lower core count and higher frequency is slightly better here, but only slightly.

Windows is actually really efficient CPU-wise at serving SMB shares compared to Samba; it is a Microsoft thing after all.

Well, that's a bit dumb, although I prefer their servers over all others. But that limits you to either buying a standard server system and rolling your own file server solution, or using an HPE NAS/SAN system, and there are better vendors out there, or at least others just as good.

Thanks for getting in touch @leadeater

 

After having another discussion with my manager yesterday, here's what the score is:

  1. We're going down the HPE route. He wants to keep warranties in line and the infrastructure the same. Our one and only supplier (much to my regret) also claims to be an HPE specialist, hence probably the extra push and maybe discounts.
  2. We currently have 4 x Mac Pros (the 'bin' design) which at the time of purchase cost around £15k each. We currently have a storage solution for that team called an EVO, which has apparently been used by the likes of Disney etc. and cost around £80k at the time.
  3. Ryzen is completely out of the picture as we've had some issues on our PDC with virtualisation for our main CRM software. It's apparently due to Ryzen so that's off the cards completely.
  4. The goal is to have more storage than we currently have, and to give the team access to the server so they can scrub through projects over a 10Gb backbone. Oh, and the team are looking to receive "workstation grade" laptops.

You'll have to forgive the situation. I've only been at this place for 2 months and am very much on probation, and I'm not looking to start arguments with my manager over this 😄

 

We're looking to get away from other vendors and their storage solutions as my manager wants everything to be integrated with AD. Can't blame him for that (Although, really looking forward to moving to Azure soon :)))))))

 

It seems the company has a budget now due to us moving more and more away from Macs. I think the ultimate goal would be to do tiering as you've suggested. One main server with more storage, and an SSD share to allow the team to work on current projects. Can I be annoying and ask what kind of equipment you may recommend when it comes to storage?

 

Looking at HPE's site, this is probably what we would want to look into;

 

HPE StoreEasy 1860 Performance Storage with Microsoft Windows Server IoT 2019 (We would upgrade to 2022 to match the rest of the infrastructure)

2 x Intel Xeon-Silver 4208 (2.1GHz/8-core/85W) Processor Kit for HPE ProLiant DL380 Gen10
6 x HPE 16GB (1x16GB) Dual Rank x8 DDR4-2933 CAS-21-21-21 Registered Smart Memory Kit

 

Do you think this would be sufficient? It's probably extremely overkill but we would be looking to keep this server going for 5-7 years so maybe worth the extra cost?

 

Thanks again for the information. I think I now need to find some resources on how to make this all come together with storage spaces as I don't have a lot of experience as of yet with them.


I will be the first to say "Don't knock HPE Servers, they're pretty damned solid boxes."

We just got a new Hyper-V server, and it's an absolute beast of a dual-Xeon box from HP. You shouldn't be upset by the performance HP can give you.


On 7/1/2022 at 8:51 PM, Marchy said:

We're going down the HPE route. He wants to keep warranties in line and the infrastructure the same. Our one and only supplier (much to my regret) also claims to be an HPE specialist, hence probably the extra push and maybe discounts.

Not really a problem, HPE is really good for most of their product offerings. Could have been forced down a worse path.

 

On 7/1/2022 at 8:51 PM, Marchy said:

We currently have a storage solution for that team called an EVO, which has apparently been used by the likes of Disney etc. and cost around £80k at the time.

Sounds like a similar company to ProMax. One thing to be aware of with companies' marketing is that they will advertise working with companies like Disney even when it's quite a minor role or component. But you've gotta play the game right, they need those sales as a business, so fair game here.

 

On 7/1/2022 at 8:51 PM, Marchy said:

Ryzen is completely out of the picture as we've had some issues on our PDC with virtualisation for our main CRM software. It's apparently due to Ryzen so that's off the cards

You mean Ryzen CPUs? Card sounds like GPUs? If GPUs then yea Nvidia or bust on this front for almost every professional use case. If CPU then EPYC has the proper firmware/microcode and platform optimizations for virtualization, Ryzen is good enough in most cases though. Would never use Ryzen in a server though.

 

Edit: Oh... off the cards, haha I read that stupidly for whatever reason. Oh well I'll leave it for you to read and enjoy, have a good laugh at my dumb dumb 

 

On 7/1/2022 at 8:51 PM, Marchy said:

We're looking to get away from other vendors and their storage solutions as my manager wants everything to be integrated with AD.

They all integrate just fine with AD; there is no issue there at all. We have over 35,000 SMB home drive shares and 1,000+ standard shares on NetApp working just fine this way. It really doesn't matter though, as I doubt anything other than HPE will be considered; just know this is a non-issue.

 

On 7/1/2022 at 8:51 PM, Marchy said:

HPE StoreEasy 1860 Performance Storage

Just an FYI the StoreEasy products are literally just HPE ProLiant servers, nothing special about them. However you get better configuration options if you just buy a standard ProLiant server configured how you want it.

 

There are other options too, like HPE Apollo. There are two primary storage offerings in that product lineup, one with 24 bays and one with 60+ (64 or 68, I forget).

 

You can of course add on disk shelves to any HPE server so bay count isn't a big factor really.

 

On 7/1/2022 at 8:51 PM, Marchy said:

Thanks again for the information. I think I now need to find some resources on how to make this all come together with storage spaces as I don't have a lot of experience as of yet with them.

I can honestly say don't go with Storage Spaces, I promise you will regret it in time. For Windows hardware RAID is the better option and the Broadcom Tri-Mode RAID cards HPE OEM/ODM are extremely good. You'll get far more performance from this than Storage Spaces, and it will 100% always work for any use case and any configuration (within reason) which you cannot say about Storage Spaces.

 

Here are a couple of configuration options for servers we have purchased in the last year that will be applicable to you. I'll give rough pricing as a guide, but it will be higher than we actually paid, as your deal rate will be lower than what we get. Managing expectations.

 

HPE DL385 Gen10 Plus v2: 1x AMD 7313 64GB RAM 2x 3.84TB MU SSD 12x 12TB HDD Est: $21,000 USD

Quote

P38410-B21 HPE DL385 G10+ v2 12LFF CTO Svr 1
5217858 AMD EPYC 7313 CPU for HPE 1
4882847 HPE 8GB 1Rx8 PC4-3200AA-R Smart Kit 8
5191445 HPE DL38X Gen10+ 2SFF x4Tri-Mode U.3 Kit 1
5217947 HPE 3.84TB SATA MU SFF BC MV SSD 2
4294659 HPE 12TB SATA 7.2K LFF LP He 512e MV HDD 12
5059548 BCM 57414 10_25GbE 2p SFP28 Adptr 1
4043152 HPE 96W Smart Storage Battery 145mm Cbl 1
3706041 HPE Smart Array P408i-a SR Gen10 Ctrlr 1
4882869 MRV QL41232 10_25GbE 2p SFP28 OCP3 Adptr 1
5217865 HPE 800W FS Plat Ht Plg LH PS Kit 2
4882906 HPE Gen10 Plus TPM BR Module Kit 1
4798245 HPE DL38X Gen10+ 2U LFF EI Rail Kit 1
4798246 HPE DL38X Gen10+ 2U CMA for Rail Kit 1
5217906 HPE DL3X5 Gen10+ Stnd Heat Sink Kit 1
2481831 HPE iLO Adv Elec Lic 3yr Support 1
HU4A6A5 HPE 5Y TC Essential SVC 1
HU4A6A5_R2M HPE iLO Advanced Non Blade Support 1
HU4A6A5_ZSF HPE Proliant DL385 Gen10 Plus V2 Support 1

The purpose of the above server is security camera recording, for a lot of cameras. You can also go with the DL380 Intel equivalent; there's no real difference, just minor cost variations due to CPU and system board.

 

 

HPE Apollo 4200 Gen10 Plus: 2x Intel 5218 192GB RAM 24x 12TB NL-SAS HDD Est: $27,000 USD

Quote

P07244-B21 HPE Apollo 4200 Gen10 24LFF CTO Svr 3
P07244-B21#UUF Asia Pacific English 3
P07910-L21 HPE Apollo 4200 Gen10 5218 FIO Kit 3
P07910-B21 HPE Apollo 4200 Gen10 5218 Kit 3
P00922-K21 HPE 16GB 2Rx8 PC4-2933Y-R Smart Kit 36
P07248-B21 HPE Apollo 4200 Gen10 Rear 2SFF/PCIe Kit 3
881781-K21 HPE 12TB SAS 7.2K LFF LP He 512e HDD 72
P18422-K21 HPE 480GB SATA RI SFF SC MV SSD 6
804394-B21 HPE Smart Array E208i-p SR Gen10 Ctrlr 3
P01367-B21 HPE 96W Smart Stg Li-ion Batt 260mm Kit 3
869083-B21 HPE Smart Array P816i-a SR G10 LH Ctrlr 3
817718-B21 HPE Eth 10/25Gb 2p 631SFP28 Adptr 3
830272-B21 HPE 1600W FS Plat Ht Plg LH Pwr Sply Kit 6
BD505A HPE iLO Adv 1-svr Lic 3yr Support 3
822731-B21 HPE 2U Shelf-Mount Adjustable Rail Kit 3
822731-B21#0D1 HPE 2U Shelf-Mount Adjustable Rail Kit 3
P09656-B21 HPE SA E/P FIO Ctlr for Rear Strg 3
HU4B2A5 HPE 5Y Tech Care Basic SVC 1
HU4B2A5#XMZ HPE Apollo 4200 Gen10 Support 3
HU4B2A5#R2M HPE iLO Advanced Non Blade Support 3

The above is 3 servers; their purpose is Ceph storage.

 

A note about the HPE Apollo 4200 Gen10 Plus: it also has extra 2.5" bays for SSDs, so this one server could handle both the SSD and HDD arrays you need within 2U, and it can scale to a few hundred TB of storage without an expansion shelf.

 


 

For you I would recommend the Apollo 4200 Gen10 Plus with SSDs for the live editing and however many HDDs of an appropriate size for the bulk storage. Leave some bays free for expansion, but also prefer more disks rather than fewer for performance. You're welcome to lower the RAM btw; you don't need that much with hardware RAID, we only needed that much for Ceph.

 

You should be able to get ~120TB of HDD and 20TB-30TB of SSD for less than $50,000 USD.
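
One hypothetical drive mix that lands near those figures (an assumption for illustration, not a configuration quoted above), with RAID overhead included:

# Hypothetical mix: 12x 12TB NL-SAS in RAID 6 plus 8x 3.84TB SSD in RAID 5.
hdd_count, hdd_tb, hdd_parity = 12, 12, 2
ssd_count, ssd_tb, ssd_parity = 8, 3.84, 1

hdd_usable = (hdd_count - hdd_parity) * hdd_tb
ssd_usable = (ssd_count - ssd_parity) * ssd_tb

print(f"HDD tier usable : {hdd_usable} TB")        # 120 TB
print(f"SSD tier usable : {ssd_usable:.1f} TB")    # ~26.9 TB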

 

