270TB Storinator

zoombay

Hi,

Here at the company we are planning to buy a Storinator from 45Drives. It contains 45 x 6 TB WD 5K4000 (WD60EFRX) drives.

Storinator Direct-Wired Redundant POD 4.0, redundant boot drive (500 GB SSD) & redundant power

32 GB RAM

Xeon E5-2687W v3, Supermicro X10SRA motherboard

2 x dual-port 10 Gbit copper NICs

Rocket 750 HBA card

CentOS

Mainly it will be used for cloud storage (FTP-based) for around 10K users.

What matters most for us is reliability and good speed; we thought we would do RAID-1 to keep as much usable space as possible.

22 x 6 TB (132 TB) will be usable.

My question is: will this give us good read/write speed?

What will happen if, say, 100 users try to read/write from one 6 TB RAID-1 array at the same time?

What would the speed be like?
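For a quick sanity check on the capacity and per-user bandwidth in this question, here is a back-of-envelope sketch in Python. The ~150 MB/s per-drive sequential rate is an assumption for a 6 TB NAS-class drive, not a quoted spec, and the network is treated as 20 Gbit usable, as later replies also assume:

```python
# Back-of-envelope numbers for the proposed build (a sketch only).
DRIVES = 45
DRIVE_TB = 6
NET_GBIT = 20                            # treating the NICs as 20 Gbit usable

raw_tb = DRIVES * DRIVE_TB               # 270 TB raw
raid1_pairs = DRIVES // 2                # 22 mirror pairs, 1 drive spare
usable_tb = raid1_pairs * DRIVE_TB       # 132 TB usable under RAID-1

net_mb_s = NET_GBIT * 1000 / 8           # 2500 MB/s network ceiling
per_user_mb_s = net_mb_s / 100           # 25 MB/s each for 100 users

# Caveat: 100 users whose files all sit on ONE 6 TB mirror pair are
# limited by that pair's random I/O, which will be far below 25 MB/s.
print(raw_tb, usable_tb, per_user_mb_s)  # 270 132 25.0
```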

 

 

 


Run RAID 10, which combines RAID 1 (mirroring the data) and RAID 0 (striping the data), giving you both reliability and speed.
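A toy illustration of that layout (a sketch only; real implementations such as Linux md differ in detail): logical blocks are striped across mirror pairs, so load spreads over every spindle while each block still lives on two disks.

```python
# Toy RAID 10 layout: stripe logical blocks across mirrored pairs.
N_PAIRS = 22  # 44 drives arranged as 22 two-disk mirrors

def raid10_location(logical_block: int):
    """Return the mirror pair and the two physical disks holding the block."""
    pair = logical_block % N_PAIRS    # striping spreads the load
    disks = (2 * pair, 2 * pair + 1)  # mirroring duplicates the block
    return pair, disks

print(raid10_location(0))   # (0, (0, 1))
print(raid10_location(23))  # (1, (2, 3))
```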


We are planning to buy a Storinator from 45Drives; it contains 45 x 6 TB WD 5K4000 (WD60EFRX)

Sounds fishy to me. The 5K4000 is a drive from Hitachi; the product code you've listed is for a WD Red. HGST is a WD company, but they don't share product codes.



Hey zoombay,

For such large deployments, I would strongly suggest using enterprise-level drives such as WD Re, WD Se or WD Re+. These drives are designed for such large drive pools and for this type of usage. WD Red is designed for consumer usage, and having these in such large deployments may compromise the safety of your data. WD Red Pro is another option, but it also wouldn't be appropriate for a 45-drive array.

I'd check out the enterprise drives and see which of them would fit your needs.

As for the RAID type, I would look into the more complex types such as RAID 50, RAID 60 or even RAID 600 for really optimized speed and safety. RAID 10 or RAID 100 would also be good options. It really depends on what software you'll be using, what OS and what RAID controllers. :)

Captain_WD.

WDC Representative, http://www.wdc.com/ 
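For scale, here is a rough usable-capacity comparison of the nested levels mentioned above, using the standard formulas on a 44-drive pool. The four-group splits are illustrative assumptions, not recommendations; real layouts vary:

```python
# Usable-capacity comparison for 44 x 6 TB drives under nested RAID levels.
DRIVE_TB = 6

def raid10(n):            # two-way mirrors, striped together
    return (n // 2) * DRIVE_TB

def raid50(n, groups):    # RAID 5 groups (1 parity disk each), striped
    return (n - groups) * DRIVE_TB

def raid60(n, groups):    # RAID 6 groups (2 parity disks each), striped
    return (n - 2 * groups) * DRIVE_TB

n = 44
print(raid10(n))       # 132 TB
print(raid50(n, 4))    # 240 TB with four 11-disk RAID 5 groups
print(raid60(n, 4))    # 216 TB with four 11-disk RAID 6 groups
```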


 


Yes, it's agreed here: we will use WD Re for its reliability and 5-year warranty.

We are looking to have as much usable space as possible.

Is there a big, noticeable difference between SW RAID-10 and SW RAID-1 in terms of read/write speed?

Assume 100 users uploading/downloading files from the CentOS FTP server.
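For intuition on why the difference can be large, a hedged sketch (the per-drive throughput is assumed, and random-I/O overhead is ignored): with independent RAID-1 pairs, each file lives on one pair, so users sharing a hot pair split that single pair's bandwidth, while RAID-10 stripes every file across all pairs.

```python
# Assumed ~150 MB/s sequential per drive; real FTP load is more random.
PER_DRIVE = 150
PAIRS = 22
USERS = 100

# Separate RAID-1 pairs: a user's file lives on ONE two-disk mirror.
# Writes see ~one drive's speed; mirrored reads can approach 2x.
print(PER_DRIVE / USERS)          # 1.5 MB/s each if all 100 hit one pair

# RAID-10: blocks are striped over all 22 pairs, so every user can
# draw on the whole spindle count (before seek overhead and the NIC cap).
print(PER_DRIVE * PAIRS / USERS)  # 33.0 MB/s each
```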


Sounds fishy to me. The 5K4000 is a drive from Hitachi; the product code you've listed is for a WD Red. HGST is a WD company, but they don't share product codes.

 

Sorry, that was a typo.


Run RAID 10, which combines RAID 1 (mirroring the data) and RAID 0 (striping the data), giving you both reliability and speed.

 

With RAID-10 across 44 drives there would be 132 TB of usable space, the same 50% capacity as our planned RAID-1 mirror pairs (22 x 6 TB = 132 TB), so the mirroring cost is the same either way.


If you're serious about this, I wouldn't build it yourself. I would contact your local SuperMicro, Dell or HP distributor and get them to provide it. It's all well and good arranging the hardware yourself, however what happens when there's an issue? You've got '10,000' potential users on this thing and they've got to wait 24+ hours for a replacement RAID controller to arrive? They'll leave pretty much instantly. Dell, HP and SuperMicro can all cater to as little as a 4-hour response window for most major areas.

Furthermore, the drive configuration you're talking about is a terrible idea, to be honest. The rebuild times on RAIDs 1, 5 and 6 are quite terrible, which means downtime for the service. Your write speeds are also going to be horrible. The controller in that system you mentioned is an HBA, meaning that you're going to be running software RAID, which isn't all that great for real-time RAID environments.

You'll have better luck with a proper server with dedicated support, where parts are easily available, as well as 'software' add-ons like CacheCade and FastPath. One server, to be honest, is definitely not suitable for such a task if you're legitimately going to have 10k users. Have you also looked into how you're going to manage that many FTP accounts?


I'm very much in agreement with @Windspeed36 here. This really sounds like a job for professionals. They should be able to work with you to figure out precisely what your needs are, and then provide a suitable solution tailored to that.

Doing things yourself is awesome, but when we're talking about a 10k userbase, there isn't much room for trial and error and DIY solutions IMHO. I recommend looking into proper enterprise-grade solutions, both with regards to hardware, infrastructure and support contracts.



The local SuperMicro, Dell or HP distributors here provide such a setup at almost +70% additional cost. :(

 

 


There is a reason for that: better-quality parts, more tailored specs and a better support package. For what you want to do, the box you mentioned will fall flat on its face. The controller is absolutely terrible, there is no hardware RAID functionality or SSD cache capability, no BBU setup, no 4-hour or NBD support available, and overall it isn't a great product.


The local SuperMicro, Dell or HP distributors here provide such a setup at almost +70% additional cost. :(

Yes, enterprise-grade setups are hilariously expensive. Despite that, with a 10k userbase, this really isn't something I would recommend skimping on.

As an example: what do you do if your HBA breaks? Your entire machine is offline, boom, 10k people can't access their stuff anymore. What do you do when there's a power outage? Or if there's an issue with your internet connection? Do you have a secondary machine somewhere in another location as a mirrored failover which can detect if the first one has gone offline and can seamlessly take over for it? What do you do if there's an update which breaks something? What's your plan in case you (or whoever ends up being sysadmin for that machine) makes an error and takes the machine down? Or when they make an error and accidentally delete the data? Do you have snapshots of the data to restore from? If so, how frequently do you snapshot? What happens if your server's security gets compromised? And so on and so forth.

Providing reliable service for 10,000 users is not a trivial matter, and if you want to do it right, it will not be cheap. You don't just pay for the hardware, you also pay for the support, as Windspeed mentioned. And for a setup like this, good support is critical.

Lastly, I haven't really gone into whether the setup you've listed would be adequate performance-wise, since honestly I just don't know, so I'll leave that to others.



Thank you for the replies, I really appreciate your contributions.

The Rocket 750 HBA used in the Storinator is also used by Backblaze, which is one of the largest, highest-profile customers for the Rocket 750, and they seem successful in their business.

Please review our plan:

  • We plan to pay $10K annually to a local IT team to be on standby for hardware replacements
  • We plan to use WD Re 6 TB HDDs, which are used by Dell, HP, etc.
  • We plan to buy extra spare parts for each component
  • Our software is smart enough to manage all the FTP accounts
  • In our DC we have a UPS in case the power goes out
  • There is a redundant switch from MPLS to support the internet connection
  • There will be a DR site for backup
  • We perform any updates on a test bed first and monitor the behavior
  • Human errors are controlled by internal process
  • All data is backed up on a weekly basis and we offer restore points for users
  • For security, public access to the data pods is not allowed

After all of the above, is going for the Storinator still a bad choice for what we want to achieve?



If this server goes down, do you have another server that can instantly replace it? If yes, then cheap out and run a clustered system (like most big companies these days). If you don't have an active backup, then get good Dell/HP/SuperMicro kit. When you build your own stuff there can be problems; what happens if some part fails? If it's a Dell server, you call Dell.

Don't get the Xeon E5-2687W v3s; they are built for workstations and run hot.

Consider something like the SuperMicro http://www.supermicro.com/products/chassis/4U/946/SC946ED-R2KJBOD.cfm. It can hold 90 drives in 4U and connects to another server over SAS. Then get something like an R520 with a SAS card and a 10GbE NIC in it.


What storage protocol do you plan on using?


The Rocket 750 HBA used in the Storinator is also used by Backblaze, which is one of the largest, highest-profile customers for the Rocket 750, and they seem successful in their business.

Backblaze is a joke, let's be honest. The 750 is a terrible controller: it's got 10 SFF-8087 connectors yet is only capable of 40 drives. Furthermore, it's an older PCIe 2.0 card, which can become a bottleneck with that many drives attached. You'd be better off with 1-2 SuperMicro 846 chassis. If you're using software RAID, then something like a 9207-4i or 9207-4i4e will work. If you want hardware RAID, then a 9271-8i with an LSIiBBU09 BBU plus Intel 3600-series 400 GB drives for CacheCade. How much storage do you actually need?


...mm, maybe this forum is not the best place for help on how to run your own company. Maybe you need to hire a consultant. Also, why would you run RAID 10 or 1 in an enterprise environment? Why not RAID 6?

If you had 100 people accessing it at once, theoretically you should be able to get ~200 Mbit/s per person based on your 20 Gbit of ports.

Also, warranty does not matter in a true enterprise situation. A drive goes bad, you crush it and throw it away. It sounds like you have no idea what you are doing.

This is all theory, by the way, using the drives you mentioned:

RAID 0 - 6165 MB/s

RAID 1 - 137 MB/s (can't do RAID 1 with 45 drives!)

RAID 10 - 4018 MB/s (can only use 44 drives)

RAID 5 - 2466 MB/s

RAID 6 - 1761 MB/s

20 Gbps = 2500 MB/s

I would recommend RAID 5 or 6. Google it if you have no idea. A lot of this info is available online; you need to do a lot more research.
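Writing out the bandwidth arithmetic behind those last two figures (just a unit check, nothing more):

```python
# Unit check for the network figures quoted above.
gbit = 20                  # two 10 Gbit links
print(gbit * 1000 / 8)     # 2500.0 MB/s total network throughput
print(gbit * 1000 / 100)   # 200.0 Mbit/s per user across 100 users
```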


What storage protocol do you plan on using?

 

It depends. If it were my choice, I would go for 10 Gb iSCSI and separate the storage traffic from the regular network traffic.


Thanks for the suggestions. To be honest, we planned to buy 4 pods of 270 TB each and do RAID-1 to get 0.5 petabytes, but after reading this thread I guess we might change our plans.

 



The plan is to get the project done in as cost-effective a way as possible, and frankly speaking, we are not an IT company.

Since you're not an IT company, have you considered outsourcing this? I know it's sort of a bad word these days, and I would say that often things are outsourced just because management wants the cheapest possible solution without taking quality into account, when it actually would be better done in-house, but there are situations where it makes sense. If you can find a good partner for it (and I'm not talking about a partner from halfway around the world, it can be a local-ish company), you might end up with a good solution for an acceptable price (easier said than done, of course, I know). Just a thought.




I'd be moving away from having a single box at this point. 45 drives in RAID 5 have only a ~80% chance of rebuilding after a disk failure (http://www.raid-failure.com/raid5-failure.aspx), assuming 1 unrecoverable read error per 10^16 bits (if you're looking at the WD Re SATA drives it is 1 in 10^15 bits, or an 11.6% chance of recovering). RAID 6 is better, but not by much in terms of redundancy, so I'd be using RAID purely for throughput/IOPS.

In fact, I would be going down the dedicated SAN route: have one box back up/duplicate to the other, or use a distributed file system.
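Those percentages fall straight out of the URE arithmetic; here is a minimal reproduction (exact figures shift slightly depending on how a terabyte is counted, which is why the calculator quotes 11.6% where this prints ~12%):

```python
import math

# Probability that a RAID 5 rebuild reads every remaining bit without
# hitting an unrecoverable read error (URE).
def rebuild_success(drives_left: int, drive_tb: float, ure_per_bit: float) -> float:
    bits = drives_left * drive_tb * 1e12 * 8  # bits that must be read back
    return math.exp(-ure_per_bit * bits)      # (1 - p)^N for tiny p

print(rebuild_success(44, 6, 1e-16))  # ~0.81 -> the ~80% figure
print(rebuild_success(44, 6, 1e-15))  # ~0.12 -> near the 11.6% quoted
```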


Yes, this is the right way to do it.


As someone who works as a support engineer for a large ISP that also provides outsourced IT support to businesses, I'm also in agreement that it's a good idea to consider outsourcing IT if you're asking these types of questions.

A storage system such as this has a huge number of concerns:

- How are you securing the data at a network level?

- How are you managing security rights/access to the data?

- How are you creating backups (shadow copies, etc.) of the data?

- How are you testing regular backup recovery?

- Who is monitoring the server and ensuring software issues affecting accessibility are addressed?

- Who is managing hardware failures and ensuring reliable rebuilds?

- What happens in the event that a major component like the RAID card or motherboard fails and you require uptime now?

These are some of the benefits of having outsourced IT with service contracts: they look after these things.

This is also the benefit of having a built server from the likes of Dell, HP, Supermicro, etc., who have SLAs to get you back up and running ASAP; hence the extra cost.


...These are some of the benefits of having outsourced IT with service contracts: they look after these things...

I would say this is more about having someone fully qualified to manage your network: not helpdesk, but an actual sysadmin. Most MSPs will charge more than just hiring a dedicated sysadmin. The benefit of an MSP is so your in-house IT staff can point a finger when things go wrong, have time to work on other projects (e.g. finally leaving that terrible PBX system for a shiny new VoIP platform, or taking annual leave), or cover for when your team doesn't have any specialists for tech X (e.g. SAP; no one-man band is going to be able to support SAP and be a sysadmin, or vice versa).
