1 petabyte Windows ReFS server?

Background:

Is anyone running Windows Server with 1+ petabytes on ReFS? I was looking at Storage Spaces Direct, but that only supports up to 1 petabyte, and I really need at least 5 petabytes. We don't want to buy super-expensive storage systems from the old storage vendors.

 

I have been looking at GlusterFS to make it a scale-out solution, but, unlike Windows where everyone has the training, it is hard for me to hand over to our Operations department. Other scale-out software solutions like Scality on HP servers look great (easy upgrades, replacement of old nodes, and the ability to mix server types), but the license price is 3x that of an HPE Apollo 4500 server with 64 12 TB drives.

 

Unraid seemed like a cheap solution, but I need high sustained writes and I think there might be a bottleneck there. Data redundancy is not really needed beyond protection against drive failures.

 

 

Question:

- So I was thinking: has anyone been running single Windows servers with e.g. 64 × 12 TB drives on ReFS and been happy with it? I am googling it but can't find anything, so I guess the answer is no?

- Can we get an update on the LTT GlusterFS setup some time? Has it been smooth sailing? :D

 

 

 

IDK what I am doing.

https://www.youtube.com/ruddk

 

 


If it's a backup solution more than raw data, you might want to keep in mind that ReFS still doesn't support deduplication or file system compression at this point, so you'll need more capacity that way than you would with NTFS. It apparently also has a weird bug that causes data corruption. In other words, not ready for prime time yet.



"expensive" is going to be relative if you're looking at 5pb... Considering for 5pb of storage you're going to want really... really... really good redundancy. Recovering 1pb+ of data is just going to cost insane amounts of money in operations for a company. I can't imagine how long it would take any storage solution to write-back 1pb worth of data from backup, let alone 5.

 

Just to even reach 5 PB at $500 USD (for janky consumer disks) per 12 TB drive is almost $21,000, and I would at the very least triple that to have decent redundancy. Not to mention that if you're dealing in big data, you're going to need the network backbone to support it; single 10 Gb cards are not going to cut it.

 

I personally wouldn't entrust 5 PB to what is essentially a homebrew solution; things like Unraid are far out of the question imo.

If you're just trying to save 10-15k on what is easily a 100k project, I don't think it's worth it. But to each their own.

 

For DIY hardware, I'd maybe look at these guys:

https://www.45drives.com/products/storage/

 

Or consult the boys over at ixsystems to come up with a solution

https://www.ixsystems.com/freenas-certified-servers/

 

Phone calls to vendors are typically free for consults, can't say I've ever had difficulty getting a sales engineer on the phone or to even come out on site. So call a few vendors - the volume you want may get you a decent discount.

 

 

tl;dr: either the company can or can't afford a proper 5 PB solution. A half-assed solution has great potential to cost more within just a few years.


35 minutes ago, Mikensan said:

Just to even reach 5pb @ $500USD (for janky consumer disks) per 12tb is almost $21,000,

 

1 PB of raw space (not considering GB vs GiB) is $500*86 drives = $43,000

 

5 PB would be $215,000 (also not considering drive redundancy, nor GB vs GiB)
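The arithmetic above can be sketched in a few lines; this is a rough sketch with illustrative inputs only ($500 per 12 TB drive, decimal petabytes, no redundancy or filesystem overhead):

```python
# Back-of-envelope raw-capacity cost (illustrative: $500 per 12 TB
# drive, decimal petabytes, no redundancy or filesystem overhead).
import math

DRIVE_TB = 12
DRIVE_PRICE_USD = 500

def raw_cost(petabytes):
    """Return (drive_count, total_usd) for raw capacity, no redundancy."""
    drives = math.ceil(petabytes * 1000 / DRIVE_TB)
    return drives, drives * DRIVE_PRICE_USD

print(raw_cost(1))  # (84, 42000)
print(raw_cost(5))  # (417, 208500)
```

With binary petabytes instead (1 PB taken as 1,024 TB), the 1 PB count comes out at 86 drives, which matches the figure above.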



I get that no one has training for gluster but once it is rolled out there shouldn't be much if anything to change and you may be able to get it to send emails when something goes wrong.



4 hours ago, unijab said:

 

1 PB of raw space (not considering GB vs GiB) is $500*86 drives = $43,000

 

5 PB would be $215,000 (also not considering drive redundancy, nor GB vs GiB)

lol whoops I dropped a decimal place when staring at my calculator... 208,333 is what I got, thought it was 20,833 ~_~ good catch! I don't think even I could swallow it was over 200k! 

 

But yea I didn't account for block size, file system, and redundancy because it can vary so much. Was giving a rough figure to showcase how expensive this project is going to be.

 

Though when regarding hard drives / disk space, I feel most assume decimal peta/giga/mega/kilobytes, such as in advertisements, RAID guides, or just general discussion, unlike bandwidth/network speeds where it varies, especially when you want to boast larger numbers. I haven't seen anyone use gibibytes when discussing disk space, since it's generally accepted to treat the ranks as multiples of 1,000 instead of powers of 1,024. Safe to say that if the end user gets 5,000 terabytes rather than the 5,629 TB a true 5 PiB would be, he won't be surprised.
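The decimal-vs-binary gap is easy to put numbers on; a minimal sketch (the helper name is hypothetical):

```python
# Same bytes, two units: vendors label drives in decimal TB,
# operating systems typically report binary TiB.
def tb_to_tib(tb):
    return tb * 1000**4 / 1024**4

# A "5 PB" (5,000 TB) purchase shows up as roughly 4,547.5 TiB.
print(round(tb_to_tib(5000), 1))
```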

 


When you're buying 5 PB of storage, you'll probably find the enterprise storage options aren't actually that expensive. I'd still get pricing from NetApp, HPE 3PAR, Lenovo and Dell, then price out just raw server hardware from HPE, Dell and Lenovo and see how that stacks up. If the cost saving isn't there, then it's not worth the extra training, administration and risk of putting in a solution that isn't well supported in your organization.

 

Secondly, if you go with something like NetApp, deduplication and compression can give you between 3:1 and 10:1 effective storage, depending on data type and what the storage is used for.

 

Also what's the storage for? Archive/cold data, active data, backup store?

 

If it's a backup target, I'd spend more money on a good product that will dedup the data down a lot; we have 500 TB NetApps as backup targets with about 300 TB used, but before dedup that data would be ~5.5 PB. Backups are super easy to dedup though.

 

10 hours ago, Martin J said:

Is anyone running Windows Server with 1+ petabytes on ReFS? I was looking at Storage Spaces Direct, but that only supports up to 1 petabyte, and I really need at least 5 petabytes. We don't want to buy super-expensive storage systems from the old storage vendors.

I've used Storage Spaces a lot, but mostly on a single server. I have done a POC of Storage Spaces Direct on 3 servers and it's nice; however, it's really targeted at VM/Hyper-V storage use cases more than anything else. You can do scale-out SMB with it, but even that was mainly for Hyper-V VHDX hosting as well.

 

As much as I like Storage Spaces I'd be reluctant to put anywhere near that amount of data behind it.

 

Ceph is another good option to look at btw; it'll be a bit harder to admin than GlusterFS, but for this amount of storage and number of drives it'll be more resilient and more flexible configuration-wise.


8 hours ago, Mikensan said:

I personally wouldn't entrust 5 PB to what is essentially a homebrew solution; things like Unraid are far out of the question imo.

If you're just trying to save 10-15k on what is easily a 100k project, I don't think it's worth it. But to each their own.

 

For DIY hardware, I'd maybe look at these guys:

https://www.45drives.com/products/storage/

 

Or consult the boys over at ixsystems to come up with a solution

https://www.ixsystems.com/freenas-certified-servers/

^ This.

 

I thought that Microsoft was backing away from ReFS..? They basically don't have anything for this, at least nothing serious; enterprise big storage is usually NetApp or Oracle (or object storage).

 

About NetApp, they just crashed a stock exchange btw? I also didn't know this, but their OS is a modified custom version of FreeBSD on UFS. (They probably forked it before ZFS? See... don't steal and closed-fork open source, or you'll own it when it breaks. Hello to you, Juniper.)

 

If you're looking for an open solution, I would only use FreeBSD and ZFS. Yes, Lawrence Livermore National Laboratory has a 55 petabyte ZFS array on Linux, but I'm sure they had to jump some air gaps to get that working properly. I believe the LHC also uses ZFS for data integrity reasons. If you're looking for a commercial vendor, Nexenta has an Illumos-backed solution and they have been around for quite a while. https://nexenta.com/products/nexentastor
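For a feel of what ZFS redundancy overhead looks like at this scale, here's a rough sketch; the 12-wide raidz2 vdevs of 12 TB drives are purely an assumed layout, and it ignores ZFS metadata and slop-space overhead:

```python
# Usable capacity of a pool built from raidz2 vdevs (assumed layout:
# 12-wide vdevs of 12 TB drives; 2 of every 12 drives go to parity).
def raidz2_usable_tb(vdevs, drives_per_vdev=12, drive_tb=12):
    data_drives = drives_per_vdev - 2  # raidz2 loses 2 drives per vdev
    return vdevs * data_drives * drive_tb

# Smallest number of vdevs reaching ~5 PB usable.
vdevs = 1
while raidz2_usable_tb(vdevs) < 5000:
    vdevs += 1
print(vdevs, vdevs * 12, raidz2_usable_tb(vdevs))  # 42 504 5040
```

So roughly 500+ drives for 5 PB usable with double-parity protection, before hot spares.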

 

When people recommend Unraid for its commercial storage support, they probably want something like Nexenta instead. They are a much bigger company and have better technology. I've used it personally and it's good, but that was over 10 years ago on Sun hardware (before FreeBSD had ZFS support), so I can't comment on what it is like today.

 

As Mikensan said, call iX, tell them what you want to do and they can hook you up.

"Only proprietary software vendors want proprietary software." - Dexter's Law


7 hours ago, leadeater said:

Because they don't know any better :P.

Then you kindly tell them they are an idiot, and they get all offended. "Fine, i'll leave you to your ignorance"


19 hours ago, NelizMastr said:

If it's a backup solution more than raw data, you might want to keep in mind that ReFS still doesn't support deduplication or file system compression at this point, so you'll need more capacity that way than you would with NTFS. It apparently also has a weird bug that causes data corruption. In other words, not ready for prime time yet.

Yeah, there seems to be a lack of people who actually work with Windows who would recommend it.

In our case, it is "just" video recordings from our surveillance cameras, so we can afford to lose it, but still, I'd like it to be stable.


18 hours ago, Mikensan said:

For DIY hardware, I'd maybe look at these guys:

https://www.45drives.com/products/storage/

 

Or consult the boys over at ixsystems to come up with a solution

https://www.ixsystems.com/freenas-certified-servers/

 

Phone calls to vendors are typically free for consults, can't say I've ever had difficulty getting a sales engineer on the phone or to even come out on site. So call a few vendors - the volume you want may get you a decent discount.

 

 

tl;dr either the company can or can't afford a proper 5pb solution. Half-ass solution has a great potential to cost more within just a few years.


Hi, I should have clarified that it is for video surveillance footage, so it isn't mission-critical if we lose a bit of recording. Of course, I would run some sort of RAID so a single hard drive failure wouldn't cause problems, since with that amount of data you will see failing drives. I'll take a look at 45 Drives, looks very interesting, and maybe FreeNAS should be on the table as well. Thanks for the input. :)
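On the "you will see failing drives" point, a back-of-envelope expectation; the ~2% annualized failure rate is an assumption in the ballpark of published fleet statistics, not a measured figure:

```python
# Expected drive failures per year for a large pool of drives,
# assuming a ~2% annualized failure rate (AFR) -- an assumed figure.
def expected_failures_per_year(drives, afr=0.02):
    return drives * afr

# ~420 x 12 TB drives (about 5 PB raw) -> roughly 8-9 failures/year.
print(expected_failures_per_year(420))
```

In other words, drive replacement becomes routine maintenance at this scale, which is a good argument for parity levels that survive a failure during rebuild.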


17 hours ago, GDRRiley said:

I get that no one has training for gluster but once it is rolled out there shouldn't be much if anything to change and you may be able to get it to send emails when something goes wrong.

Yeah, I think I should be looking at people's experiences with that in terms of reliability and how much maintenance it requires.


12 hours ago, leadeater said:

When you're buying 5 PB of storage, you'll probably find the enterprise storage options aren't actually that expensive. I'd still get pricing from NetApp, HPE 3PAR, Lenovo and Dell, then price out just raw server hardware from HPE, Dell and Lenovo and see how that stacks up. If the cost saving isn't there, then it's not worth the extra training, administration and risk of putting in a solution that isn't well supported in your organization.

 

Secondly, if you go with something like NetApp, deduplication and compression can give you between 3:1 and 10:1 effective storage, depending on data type and what the storage is used for.

 

Also what's the storage for? Archive/cold data, active data, backup store?

 

If it's a backup target, I'd spend more money on a good product that will dedup the data down a lot; we have 500 TB NetApps as backup targets with about 300 TB used, but before dedup that data would be ~5.5 PB. Backups are super easy to dedup though.

 

I've used Storage Spaces a lot, but mostly on a single server. I have done a POC of Storage Spaces Direct on 3 servers and it's nice; however, it's really targeted at VM/Hyper-V storage use cases more than anything else. You can do scale-out SMB with it, but even that was mainly for Hyper-V VHDX hosting as well.

 

As much as I like Storage Spaces I'd be reluctant to put anywhere near that amount of data behind it.

 

Ceph is another good option to look at btw; it'll be a bit harder to admin than GlusterFS, but for this amount of storage and number of drives it'll be more resilient and more flexible configuration-wise.

Hi, thanks, I agree.

Yes, I have been getting quotes from the traditional vendors as well. In this case, it is two-tier storage for video surveillance footage, so I don't think that deduplication and compression are going to do much.

It will be mostly 80% writes of data that will never be looked at and will be deleted after 2-3 weeks, so it is a bit special that way. And we can afford some downtime since it isn't mission-critical.

I will be reviewing the different solutions. Of course, the cheapest to buy is rolling your own, but that might cost a lot of money in maintenance and training. At the other end of the scale would be a finished system with good support from the vendor: expensive to buy, but costing less in man-hours and internal training afterward. I'll probably pick 10-15 "data points", like initial cost, training, etc., to find the TCO for each system and evaluate them.

 

Edit: So far, when looking at the alternatives to e.g. NetApp, the solutions that use commodity hardware come surprisingly close to the classic storage vendors once they add their software licenses. :/


11 hours ago, jde3 said:

^ This.

 

I thought that Microsoft was backing away from ReFS..? They basically don't have anything for this, at least nothing serious; enterprise big storage is usually NetApp or Oracle (or object storage).

 

About NetApp, they just crashed a stock exchange btw? I also didn't know this, but their OS is a modified custom version of FreeBSD on UFS. (They probably forked it before ZFS? See... don't steal and closed-fork open source, or you'll own it when it breaks. Hello to you, Juniper.)

 

If you're looking for an open solution, I would only use FreeBSD and ZFS. Yes, Lawrence Livermore National Laboratory has a 55 petabyte ZFS array on Linux, but I'm sure they had to jump some air gaps to get that working properly. I believe the LHC also uses ZFS for data integrity reasons. If you're looking for a commercial vendor, Nexenta has an Illumos-backed solution and they have been around for quite a while. https://nexenta.com/products/nexentastor

 

When people recommend Unraid for its commercial storage support, they probably want something like Nexenta instead. They are a much bigger company and have better technology. I've used it personally and it's good, but that was over 10 years ago on Sun hardware (before FreeBSD had ZFS support), so I can't comment on what it is like today.

 

As Mikensan said, call iX, tell them what you want to do and they can hook you up.

Hi,

Thanks, I have added Nexenta as one of the possible solutions now. I see they have a trial version you can run in VMware to check the functionality out so I am doing that.

I have been administering NetApp for a few years now; IIRC we had about 4,000-5,000 drives at one point, mostly for the virtual servers. But I haven't kept up much with NetApp news since we built our 1.4 petabyte SSD vSAN on VMware. I think the old storage vendors will be seeing a lot more competition from software-defined storage, besides all the small storage start-ups, some of which are bound to fold again.

NetApp, at least the old 7-Mode, definitely was from the *BSD family.

 

 


50 minutes ago, Martin J said:

Yes, I have been getting quotes from the traditional vendors as well. In this case, it is two-tier storage for video surveillance footage, so I don't think that deduplication and compression are going to do much.

It will be mostly 80% writes of data that will never be looked at and will be deleted after 2-3 weeks, so it is a bit special that way. And we can afford some downtime since it isn't mission-critical.

Yea, video footage is a bit of a pain like that; there's not much you can do, though it's already compressed, so the win is already there anyway. The way we handle that sort of thing is to use multiple recording servers with Milestone, so we do the scale-out at the software layer rather than the storage layer, all using large HPE servers with big arrays attached to them, but nowhere near the 5 PB mark in total you're looking at.

 

In fact, Milestone supports what they call Edge Storage/Scalable Video Quality Recording™ (SVQR), where you can have storage in the cameras and it'll pull the footage down when you need to archive or export the video. The caveat is the increased cost of the cameras.

 

Quote

• Edge Storage: Uses camera-based storage as a complement to the central storage in the recording servers, with flexible video and audio retrieval based on time schedules, events or manual requests, including the ability to combine centrally and remotely stored video using Scalable Video Quality Recording™ (SVQR).

 

Not sure what you're using but I'm willing to bet other software can do this.

 

38 minutes ago, Martin J said:

I have been administering NetApp for a few years now; IIRC we had about 4,000-5,000 drives at one point, mostly for the virtual servers. But I haven't kept up much with NetApp news since we built our 1.4 petabyte SSD vSAN on VMware. I think the old storage vendors will be seeing a lot more competition from software-defined storage, besides all the small storage start-ups, some of which are bound to fold again.

NetApp, at least the old 7-Mode, definitely was from the *BSD family.

We've been using NetApp since 7-Mode as well, and have gone through to C-Mode and basically every version iteration of that, which has been rather good as far as we're concerned. I wouldn't say there are massive differences, though it's hard to remember what it was like back then; it's been a long time. I'd say one big plus is aggregate-level dedup now instead of volume-level.

 

As nice as stuff like NetApp is, it's honestly a waste for something like surveillance footage, unless you're using all the $$$ features, which tbh you actually can't. Imagine the Snapshot bloat on this with the constant day-in, day-out writing; it would win the how-much-storage-can-you-waste award lol.

 

Semi-update on NetApp as a whole: they are kind of branching out/away from just being a storage hardware provider running their own software. ONTAP Select can be installed on any server to turn it into a NetApp controller and disks, and they have ONTAP Cloud and StorageGRID (object storage).

 

Still can't do auto storage tiering from SSD to SATA like you can on an EMC; only FlashPool, which isn't the same thing at all.

 

Also interesting you went with VSAN, we're a Nutanix shop. What's it been like?


Oh I see, that definitely makes a difference. That is a lot of storage for 2-3 weeks, so I'm assuming it's either recording the full stream from all cameras or there's just a boatload of cameras? Is it possible to re-encode older videos to something like H.265 to save space and gain room for more storage? I don't have much experience in the ways of surveillance, just food for thought.

 

The one nice thing about solutions like NetApp is how easy it is to add a shelf and keep on trucking. Not to say adding storage to GlusterFS or FreeNAS is painful in any way, but if you're already familiar with NetApp, and I don't foresee them going anywhere... there's value in that.

 

I'm so very used to the people who post on this forum with big aspirations with a $1 budget but it sounds like you've got your head on straight lol.

 

45 Drives isn't terribly cheap, but they do save you on licensing/feature costs. I think, given the use case, that any solution you choose would be fine with good documentation + purchased support. For FreeNAS, iXsystems would provide support for both ZFS and their OS, and I ~believe~ 45 Drives supports GlusterFS troubleshooting as well, though it can't hurt to call and verify. Just switching vendors for any solution can mean re-learning everything, so nothing new.


6 minutes ago, Mikensan said:

Is it possible to re-encode older videos to something like H.265 to save space and gain room for more storage? I don't have much experience in the ways of surveillance, just food for thought.

Milestone actually does H.265 :). We have a few hundred cameras: some older, some 3MP and some 4K ones (those eat space fast, heh). Haven't used much other software though; we kind of found a genuinely good one early and wouldn't bother with anything else unless that something else was the best thing ever.

 

Edit:

6 minutes ago, Mikensan said:

I'm so very used to the people who post on this forum with big aspirations with a $1 budget but it sounds like you've got your head on straight lol.

90% of posts about storage servers.


2 minutes ago, leadeater said:

Milestone actually does H.265 :). We have a few hundred cameras: some older, some 3MP and some 4K ones (those eat space fast, heh). Haven't used much other software though; we kind of found a genuinely good one early and wouldn't bother with anything else unless that something else was the best thing ever.

Oh, that's really cool! And good god @ 4K cameras, whew! Not to derail too much, but do you have multiple Milestone installations or do all connections feed into a single server? I can imagine just 100 2MP cameras saturating a gigabit link, let alone 3MP + 4K on the network.
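That gigabit-saturation guess is easy to sanity-check; the ~8 Mbps per-camera figure below is an assumed typical bitrate for a 2MP H.264 stream, and real streams vary a lot with codec, FPS and motion:

```python
# Aggregate network load from many camera streams (assumed ~8 Mbps
# per 2MP H.264 stream; actual bitrates vary with codec/FPS/motion).
def aggregate_mbps(cameras, mbps_per_camera=8.0):
    return cameras * mbps_per_camera

print(aggregate_mbps(100))  # 800.0 -- most of a 1 Gbps link already
```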


I actually used to work for a company that did video surveillance, but their storage was done on Oracle and they were in the process of migrating that to Postgres.

"Only proprietary software vendors want proprietary software." - Dexter's Law

Link to comment
Share on other sites

Link to post
Share on other sites

5 hours ago, Mikensan said:

Oh, that's really cool! And good god @ 4K cameras, whew! Not to derail too much, but do you have multiple Milestone installations or do all connections feed into a single server? I can imagine just 100 2MP cameras saturating a gigabit link, let alone 3MP + 4K on the network.

There's a central admin server, and you can have as many recording servers as you like, license permitting. So basically you just add more recording servers to handle the number of cameras you need; we currently have three way-over-spec'd HPE DL380 Gen10s as recording servers.


On 17/7/2018 at 11:20 AM, leadeater said:

In fact, Milestone supports what they call Edge Storage/Scalable Video Quality Recording™ (SVQR), where you can have storage in the cameras and it'll pull the footage down when you need to archive or export the video. The caveat is the increased cost of the cameras.

 

Hmm, interesting, I didn't know about that; seems like something I will take a look at. I know that it is going to be a mixture of old cameras already mounted and new cameras we will be adding, but maybe we should look at which cameras we will be installing and re-evaluate our current selection.

 

On 17/7/2018 at 11:20 AM, leadeater said:

As nice as stuff like NetApp is, it's honestly a waste for something like surveillance footage, unless you're using all the $$$ features, which tbh you actually can't. Imagine the Snapshot bloat on this with the constant day-in, day-out writing; it would win the how-much-storage-can-you-waste award lol.

 

Indeed. The footage isn't mission-critical, so it won't be a huge loss if something should be lost. And so we wanted to see how cheaply we can do it, from rolling our own solution to some cheaper NAS or whatever. :D The problem with rolling our own is that it might be high on maintenance and hard to hand over to our operations department.

 

On 17/7/2018 at 11:20 AM, leadeater said:

Semi-update on NetApp as a whole: they are kind of branching out/away from just being a storage hardware provider running their own software. ONTAP Select can be installed on any server to turn it into a NetApp controller and disks, and they have ONTAP Cloud and StorageGRID (object storage).

 

Still can't do auto storage tiering from SSD to SATA like you can on an EMC; only FlashPool, which isn't the same thing at all.

We are trying to build a NetApp Select cluster on our VMware system but are having problems getting it to work; we have had a lot of help from NetApp all over the planet, and it seems to be network-related. We are now doing tests on a single node to move some old vFilers. We had quite a few problems with the performance of the NetApp Select systems. Apparently, the older versions of Select had their NVRAM cache virtualized as disks, so writing to the Select server caused 2x writes to disk: first to the "NVRAM" disks, then to the real disks.

 

On 17/7/2018 at 11:20 AM, leadeater said:

Also interesting you went with VSAN, we're a Nutanix shop. What's it been like?

Well, the vSAN itself has been stable; we haven't had problems as such. I'm a bit disappointed with the difference between the sales pitch about how we could just add servers and how they now say all servers should be equal in capacity in terms of disks etc.

Single-server write performance went down a bit compared to our old NetApp solution on spinning disks, because of the efficiency of NetApp's NVRAM, and because we run N+1 mirroring across multiple fault domains. Of course, when the NetApp got overburdened, the vSAN is better, and the NetApp was always overburdened, partly because of the 8-gigabit interconnect links in the metro cluster. So write latency went up a few milliseconds, and batch workloads on databases didn't really improve.

But our combined throughput/IOPS on the vSAN is 20x higher, so in that regard it is really great, and the databases don't experience uneven performance if a new workload is running late somewhere.

 

 

I really like what I am hearing from Nutanix, especially that they tell you how their system and algorithms work and have it documented. I also like the concept that when you move a VM with vMotion, Nutanix will move the data to the server as it is requested and written. You can't get an answer from VMware on how vSAN works; it is magic and "just works, accept it". And even though I am not going to tinker with the details, I need to know how it operates and functions so that I can design new setups and also help with a bit of problem-solving when issues occur.

 


On 17/7/2018 at 3:40 PM, Mikensan said:

Oh I see, that definitely makes a difference. That is a lot of storage for 2-3 weeks, so I'm assuming it's either recording the full stream from all cameras or there's just a boatload of cameras? Is it possible to re-encode older videos to something like H.265 to save space and gain room for more storage? I don't have much experience in the ways of surveillance, just food for thought.

Yeah, it could be up to 15,000 cameras in the end. The old ones are most likely running MJPEG, and the newer ones H.264 (I am guessing). It will be something like 10 FPS, and only when there is movement.

I am just reading up on how these surveillance systems work myself, and it seems they do have storage tiering where you can re-encode the streams when you archive them.
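Camera counts like that make the capacity target easy to sanity-check; every figure in this sketch is an assumption (a ~2 Mbps average effective bitrate that already folds in motion-only 10 FPS recording, and a 21-day retention window):

```python
# Retention storage sketch: cameras * average recorded bitrate * days.
# All inputs are assumptions (2 Mbps average effective bitrate folds
# in motion-only 10 FPS recording; 21-day retention).
def retention_pb(cameras, avg_mbps, days):
    bytes_total = cameras * avg_mbps * 1e6 / 8 * 86400 * days
    return bytes_total / 1e15  # decimal petabytes

print(round(retention_pb(15000, 2.0, 21), 2))  # ~6.8 PB
```

Under those assumptions, 15,000 cameras land in the same ballpark as the 5 PB target, so the average per-camera bitrate is the number worth pinning down first.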

 

On 17/7/2018 at 3:40 PM, Mikensan said:

The one nice thing about solutions like NetApp is how easy it is to add a shelf and keep on trucking. Not to say adding storage to GlusterFS or FreeNAS is painful in any way, but if you're already familiar with NetApp, and I don't foresee them going anywhere... there's value in that.

I agree that they are stable; some of our old NetApps are over 7 years old. I am trying to do a calculation to find the sweet spot between rolling our own solution and buying off-the-shelf, factoring in hardware and software support, man-hours and software licenses.

 

On 17/7/2018 at 3:40 PM, Mikensan said:

I'm so very used to the people who post on this forum with big aspirations with a $1 budget but it sounds like you've got your head on straight lol.

 

Hah, thanks. :) It is also sort of what I do right now: I design and build IT infrastructure for projects. Usually, they only wanted the "classical" solutions from the traditional vendors, so it is quite refreshing that I get to do a completely greenfield solution.

On 17/7/2018 at 3:40 PM, Mikensan said:

45 Drives isn't terribly cheap, but they do save you on licensing/feature costs. I think, given the use case, that any solution you choose would be fine with good documentation + purchased support. For FreeNAS, iXsystems would provide support for both ZFS and their OS, and I ~believe~ 45 Drives supports GlusterFS troubleshooting as well, though it can't hurt to call and verify. Just switching vendors for any solution can mean re-learning everything, so nothing new.

Yes, I think I will get a quote from them as well; it could be really interesting. I have been looking at the HPE Apollo 4500 with room for 64 drives, and since we already buy HPE servers, it would make sense just to buy more from them. But I really like 45 Drives and they seem to be the first movers in high-density storage, so yeah, I'll include them also.



6 hours ago, Martin J said:

Yes, I think I will get a quote from them as well; could be really interesting. I have been looking at the HPE Apollo 4500 with room for 64 drives, and since we buy HPE servers already it would make sense just to buy more servers from them. But I really like 45 Drives, and they seem to be the first movers in high-density storage, so, yeah, I'll include them also.

One difference to consider about HPE vs the rest is that HPE warrant their disks, all classes including SATA, for the full time the server itself is under warranty. This isn't the case with everyone else; IBM (now Lenovo) would only warrant their SATA disks for 1 year, which is a right pain.


9 hours ago, Martin J said:

Well, the VSAN itself has been stable; we haven't had problems as such. A bit disappointed with the gap between the sales pitch, which said we could just add servers, and what they say now, that all servers should be equal in capacity in terms of disks etc.

Single-server write performance went down a bit compared to our old NetApp solution on spinning disks, because of the efficiency of NetApp's NVRAM and because we run N+1 mirroring across multiple fault domains. Of course, when the NetApp got overburdened the VSAN is better, and the NetApp was always overburdened, partly because of the 8-gigabit interconnect links in the metro cluster. So write latency went up a few milliseconds, and batch workloads on databases didn't really improve.

But our combined throughput/IOPS on the VSAN is 20x higher, so in that regard it is really great, and the databases don't experience uneven performance if a new workload is running late somewhere.

 

 

I really like what I am hearing from Nutanix, especially that they tell you how their system and algorithms work and have it documented. I also like the concept that when you move a VM with vMotion, Nutanix will move the data to the new server as it is requested and written. You can't get an answer from VMware on how VSAN works; it is magic and "just works, accept it". And even though I am not going to tinker with the details, I need to know how it operates and functions so that I can design new setups and also help with a bit of problem-solving when problems occur.

Very similar experiences with Nutanix as well, though that differs between the Hybrid nodes/clusters and our All Flash nodes/clusters. Hybrid clusters generally have lower VM performance than when hosted on the NetApps, for the same reason you pointed to, though the All Flash stuff is blistering fast. The disappointing part, though, was that we never got to compare Nutanix to NetApp All Flash performance, only FlashPool SAS. Same issue/advice about keeping node storage and disk counts the same as well.

 

We have a NetApp AFF A300 for our SQL workloads, which was way, way faster than the trays of SAS they were on before.

 

What we've both experienced is what I like to think of as the untold truth about "Web Scale"/"Scale-out"/"Hyper-converged"/"HCI", or whatever buzzword of the day someone is using: while there is some truth in it, there is certainly hype. If a single node is not performing well, adding more nodes will not increase its performance at all, so never under-spec a node; scale-up is still just as valid as scale-out. Also, since Nutanix has the CVM VM running on every node, which uses ~12 vCPU and 48GB of RAM, I'd never buy anything less than 512GB of RAM per node, since so much overhead goes into running the platform itself.

