SAN/DAS disk shelves - expanding to multiple servers connected to one disk shelf

Hi.

First post here.

 

I have an HP D2600 dumb disk shelf, and it can only connect to one server at a time. That has mostly been okay, but it guzzles electricity at 200 W idle.

For a Ceph project I was looking to run multiple servers (HP ProLiant DL360p Gen8). They're all SFF servers and the bays only take 1 TB drives, so small disks mean more and more nodes at 1 TB each, which gets costly. Being able to put cheaper 3.5" drives in a shared storage box would be ideal.

I was looking at the HP MSA2000 G2 (the MSA2000sa G1 only supports DAS); the other models do iSCSI, switches, Ethernet, etc.

 

The MSA2000fc G1 and MSA2000sa G1 can attach 4 hosts via DAS - I only have 3 right now. I'm hoping I won't end up with more than 4; otherwise this option is no good, since it only supports 4, and I would need to look at a SAN (MSA2000fc G1 or MSA2000i G1). That gets pricey for sure, especially the switches.

 

I'd be up for Ethernet or DAS; even if I had 4 machines attached to the disk shelf, I probably wouldn't have enough disks to warrant any more. I hope....

 

In any case, does anyone know if the MSA2000sa G1 supports daisy-chaining additional enclosures?

I'm expecting that eventually I'll outgrow the 12 LFF drive bays, but I don't expect to need all 4 servers connected at any given time - chaining, plus at least 2 computers connected (to some of the disks), is probably what I'm looking for.

 

So, does anyone know which option might be best for me, so I can keep scouring eBay for a deal? (All the P2000s are around the £1K mark! Trying not to laugh - these units are OLD!) The MSA2000sa seems to be in a reasonable £200-600 range (let's not forget Chia driving up prices as well...).

 

FYI:

The HP StorageWorks MSA2000i G1 documentation mentions:

"The MSA2000i offers flexibility and is available in two models: A single controller version for lowest price with future expansion and a dual controller model for the more demanding entry-level situations that require higher availability. Each model comes standard with 12 drive bays that can simultaneously accommodate 3.5-inch enterprise-class SAS drives and archival-class SATA drives. Additional capacity can be easily added when needed, by attaching up to three MSA2000 12 bay drive enclosures." - This is similar to my D2600 - it can daisy chain, but only supports ISCSI, I've not worked with that.

 

TL;DR:

Looking for up to 4 hosts to attach to a disk shelf. DAS/Fibre preferred - or do I bite the bullet and move away from DAS so I can have 2+ hosts connected with the ability to daisy-chain disk shelves?

 

Thanks

Attachment: MSA2000_G1_White_Paper.pdf


Honestly, what you're looking for is plain Ethernet at whatever speed you need or are willing to spend money on.

My guess is 10Gig LAN would be fine, but you can even get 25 or 40 now for only about 10 times the price, if that's worth it to you.

 

Generally, depending on your budget, you're probably much better off buying one modern, fast server instead of building all that expensive infrastructure just so you can use old hardware for the same job.


7 hours ago, Pixel5 said:

Generally, depending on your budget, you're probably much better off buying one modern, fast server instead of building all that expensive infrastructure just so you can use old hardware for the same job.

I hear what you're saying, but I sort of need to stick with enterprise hardware. I have plenty of modern systems for compute.

Can you give an example of more modern equipment that can work with what I've already got?

There are reasons that people still use these.

 

10Gig is pricey though.... switches, NICs, all to replace....


7 hours ago, Voarsh said:

I hear what you're saying, but I sort of need to stick with enterprise hardware. I have plenty of modern systems for compute.

Can you give an example of more modern equipment that can work with what I've already got?

There are reasons that people still use these.

 

10Gig is pricey though.... switches, NICs, all to replace....

What is your budget here? How much storage do you need? How much bandwidth and IOPS do you need?

 

I'd get a NAS like a Synology here and use NFS/CIFS. Much easier to set up, and plug and play.

 

You can connect many DAS models to multiple hosts, but you need a filesystem and setup that know about this - most filesystems assume only one host is touching the blocks. You need the same thing with iSCSI when multiple initiators are using one target.
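
To make that concrete, here's a toy Python sketch - a local file standing in for the shared blocks, not a real DAS. Two uncoordinated writers tear each other's writes, which is what two hosts do to a non-cluster filesystem; flip USE_LOCK on and the corruption goes away, which is the coordination job a cluster filesystem (or a clustered iSCSI setup) performs:

```python
# Toy illustration, not a real DAS: a local file stands in for the shared
# block device. Two uncoordinated writers update the same 512-byte "block"
# and tear each other's writes; set USE_LOCK = True to serialise them,
# which is the coordination a cluster filesystem provides.
import fcntl
import multiprocessing

PATH = "shared.img"   # stand-in for the shared block device
USE_LOCK = False      # flip to True and the torn blocks disappear

def writer(tag: bytes) -> None:
    for _ in range(5000):
        with open(PATH, "r+b") as f:
            if USE_LOCK:
                fcntl.flock(f, fcntl.LOCK_EX)  # exclusive lock on the "device"
            f.seek(0)
            f.write(tag * 256)                 # first half of the block
            f.flush()                          # let the other writer see it
            f.seek(256)
            f.write(tag * 256)                 # second half - torn if interleaved
            # closing the file releases the lock

if __name__ == "__main__":
    with open(PATH, "wb") as f:
        f.write(b"\0" * 512)
    procs = [multiprocessing.Process(target=writer, args=(t,)) for t in (b"A", b"B")]
    for p in procs:
        p.start()
    torn = 0
    while any(p.is_alive() for p in procs):    # watch for half-A/half-B blocks
        with open(PATH, "rb") as f:
            if USE_LOCK:
                fcntl.flock(f, fcntl.LOCK_SH)
            block = f.read(512)
        torn += len(set(block) - {0}) > 1      # both A and B present = torn
    for p in procs:
        p.join()
    print(f"observed torn blocks {torn} times")
```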

 

For your parts, and to keep it cheap, I'd just get a cheap server (or use one of them) to have the DAS plugged into, and then have the rest of the systems access files via NFS/CIFS.

 

 


10 hours ago, Voarsh said:

I hear what you're saying, but I sort of need to stick with enterprise hardware. I have plenty of modern systems for compute.

Can you give an example of more modern equipment that can work with what I've already got?

There are reasons that people still use these.

 

10Gig is pricey though.... switches, NICs, all to replace....

The cost of a 10Gig network is trivial compared to the cost of the enterprise-grade hardware you would need for a good result.


5 hours ago, Pixel5 said:

The cost of a 10Gig network is trivial compared to the cost of the enterprise-grade hardware you would need for a good result.

I guess that's a bit subjective. The "old" hardware that I am using is performant enough with slow HDDs.

100 TB+, and I expect this to expand as well, including needing more enclosures.

 

Costs involve buying another shelf unit and probably one more mini-SAS cable: £250-700.

 

I will tell you, the only thing I would probably consider is a low-wattage Gen8/Gen9 server with LFF bays....

8 hours ago, Electronics Wizardy said:

I'd get a NAS like a Synology here and use NFS/CIFS. Much easier to set up, and plug and play.

 

For more modern hardware, I still need examples. Are we talking DIY enclosures, or buying a Synology NAS with only 5 bays? Lol

I'm not throwing away my existing hardware for a shiny new label, just because people think they need a new iPhone.

FYI, NFS is neither fast nor stable enough for my workload. The only way NFS has been stable for me is Kubernetes with ReadWriteMany PVC volumes across multiple NFS servers....
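
For anyone curious, that setup boils down to claiming a ReadWriteMany volume. A rough sketch using the official kubernetes Python client - the "nfs-client" storage class name is my assumption, since it depends on which NFS provisioner is installed:

```python
# Hedged sketch: request a ReadWriteMany (RWX) volume so many pods/nodes
# can mount it at once. Assumes the official `kubernetes` client package
# and an NFS-backed storage class; "nfs-client" is a placeholder name.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="shared-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],       # the RWX mode mentioned above
        storage_class_name="nfs-client",      # assumption: NFS provisioner class
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim("default", pvc)
```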

8 hours ago, Electronics Wizardy said:

For your parts, and to keep it cheap, I'd just get a cheap server (or use one of them) to have the DAS plugged into, and then have the rest of the systems access files via NFS/CIFS.

That is sort of what I have done, although I wouldn't say cheap because of the specs.

I wrote that I already have a disk enclosure. A single host isn't very fault tolerant - shutdown, breakage, maintenance - that's why enterprise gear (albeit "older", because the newer stuff costs several zeros more....) exists....

8 hours ago, Electronics Wizardy said:

What is your budget here? How much storage do you need? How much bandwidth and IOPS do you need?

6Gb/s and up. All spinning disks. I need chainable enclosures and multiple hosts connected to shared storage - this was sort of covered when I gave you three disk-shelf options, and all I get is "buy new hardware". Doesn't Synology do things other than just disk enclosures? Aren't they trying to be a little OS with VMs and stuff? A load of bloat that I don't need.


The thing is, what you want is simply not possible - and especially not easily.

You could build a single server with multiple disk shelves attached to it without any problem, but your requirement to access this from multiple clients is the deal breaker, because basically all enterprise-grade hardware is simply built to connect to everything else over 25 or 40G LAN - that is exactly what it is meant for.


32 minutes ago, Pixel5 said:

You could build a single server with multiple disk shelves attached to it without any problem, but your requirement to access this from multiple clients

Yes - however, unless you have x8 and x16 PCIe slots free, you can't. And assuming you do have one free, you can have at most two or four shelves connected (depending on whether you have the 2- or 4-port model). Also, if that machine needs powering off, no other host can use the disks - going back to fault tolerance.

 

32 minutes ago, Pixel5 said:

The thing is, what you want is simply not possible - and especially not easily.

You could build a single server with multiple disk shelves attached to it without any problem, but your requirement to access this from multiple clients is the deal breaker, because basically all enterprise-grade hardware is simply built to connect to everything else over 25 or 40G LAN - that is exactly what it is meant for.

OK, so there's at least one model of those I mentioned that does DAS - nothing about LAN at all.... There's fibre too.

You're conflating the two.

 

I could get a LAN model, or go DIY (it might be more cost-effective to build it myself), and upgrade to 10G+ LAN - but that would mean replacing all the NICs, switches, etc.

But your first suggestion - basically throwing away my investment for a Synology NAS (24 bays is £4K+, or several hundred for a measly 5-bay that maxes out at 15 bays with an expansion.....) with all sorts of SSD/M.2 caching that I simply don't need, and an OS trying to be a cheap hypervisor and all that....... is not exactly what I was looking for. 😄

I don't need more than 4 hosts connected, but I do need chaining - going back to my original post, which was about which model to go for...

 

TL;DR:

4 hosts to connect to a disk shelf (DAS connected)

LAN is not preferred, since network upgrades would be needed everywhere.

Chaining disk shelves to expand to 48+ bays is a plus - however, I am not sure if the DAS model lets you connect 4 hosts and chain at the same time.

 

Failing that:

An LFF host with chained DAS boxes... although that is not fault tolerant.

 


1 hour ago, Voarsh said:

For more modern hardware, I still need examples. Are we talking DIY enclosures, or buying a Synology NAS with only 5 bays? Lol

I'm not throwing away my existing hardware for a shiny new label, just because people think they need a new iPhone.

FYI, NFS is neither fast nor stable enough for my workload. The only way NFS has been stable for me is Kubernetes with ReadWriteMany PVC volumes across multiple NFS servers....

Synology makes some multi-host drive enclosures - pretty high-end, redundant, and expandable to hundreds of bays. They're a pretty good solution for storage here.

 

1 hour ago, Voarsh said:

That is sort of what I have done, although I wouldn't say cheap because of the specs.

I wrote that I already have a disk enclosure. A single host isn't very fault tolerant - shutdown, breakage, maintenance - that's why enterprise gear (albeit "older", because the newer stuff costs several zeros more....) exists....

How reliable do you need it? What drives do you have? Are they multi-port SAS drives?

 

Normally, cheap and reliable don't go together. If you want reliable, get a premade solution from a vendor; then you also get support if something bad happens.

 

24 minutes ago, Voarsh said:

4 hosts to connect to a disk shelf (DAS connected)

LAN is not preferred, since network upgrades would be needed everywhere.

Chaining disk shelves to expand to 48+ bays is a plus - however, I am not sure if the DAS model lets you connect 4 hosts and chain at the same time.

Basically no DAS allows 4 clients to connect.

 

Your options here are:

 

Get a premade SAN/NAS. The most reliable and the most expensive, and you get good support.

 

Get a DAS with a server as a file-sharing box. Cheaper, more DIY. If you have 2 nodes and dual-port SAS drives, it's fully redundant.

 

Go hyperconverged and connect some drives to every host, then use something like Ceph, Storage Spaces Direct, vSAN, GlusterFS, or others to make it into one redundant volume - see the sketch below.

 

Networking isn't as expensive as you think. You can set up a mesh with 10GbE or 40GbE pretty cheap (less than $1K USD total).
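
To give a flavour of the hyperconverged route: once Ceph is running across the three DL360p nodes, one replicated pool spans all their disks and survives any single node going down. A minimal sketch, assuming a working cluster with the `ceph` CLI on the PATH - the pool name "shared", the PG count, and the replica settings are illustrative choices, not prescriptive:

```python
# Hedged sketch: carve out a 3-way replicated pool on an existing Ceph
# cluster. Assumes the `ceph` CLI is installed and the cluster is healthy;
# the pool name and PG count below are illustrative choices.
import subprocess

def ceph(*args: str) -> None:
    cmd = ["ceph", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

ceph("osd", "pool", "create", "shared", "128")         # 128 placement groups
ceph("osd", "pool", "set", "shared", "size", "3")      # one copy per node
ceph("osd", "pool", "set", "shared", "min_size", "2")  # keep serving I/O with 2
```

That last setting is the fault-tolerance point: with min_size 2, any one host can be shut down for maintenance and the volume stays writable - the thing a single DAS head node can't give you.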


3 minutes ago, Electronics Wizardy said:

 

Get a DAS with a server as a file-sharing box. Cheaper, more DIY. If you have 2 nodes and dual-port SAS drives, it's fully redundant

This route is looking more probable, it seems.

 

3 minutes ago, Electronics Wizardy said:

Go hyperconverged and connect some drives to every host, then use something like Ceph, Storage Spaces Direct, vSAN, GlusterFS, or others to make it into one redundant volume.

Again, this could probably work.

I would look to use my SFF 2.5" drives independently, get an LFF server, connect my existing DAS box to the LFF model, and then pool it all together with Ceph/GlusterFS. The redundancy situation is a bit annoying, but I suppose software redundancy (over hardware redundancy) would be cheaper, spreading the data across more nodes/smaller SFF drives....

 

I was thinking along these lines before posting; I'll have to go away and think. Thanks. I definitely don't want to make the mistake of buying more equipment and outgrowing it like I did with the D2600, because now it's more costly to fix that oversight.

