20 x 8TB in Raid6 -> Sloooooow

Go to solution: Solved by Electronics Wizardy

Hello,
I had 20 Seagate IronWolf 8TB HDDs available at work. Now I wanted to put them all in our Windows Server 2016 and configure them in a Raid6 using the Adaptec 81605zq and the Adaptec 82885T expander card. No sooner said than done. All 20 disks installed in the LianLi D8000 in decoupled slots and connected. Raid6 set up and ... 10MB/s in writing. Reading, however, goes at almost 2GB/s and even in a Raid0 I have speeds in the GB/s range. I have tried everything. Operation with two power supplies, all HDDs laid out on the floor... The data sheet says 1-8 Bays. The IronWolf Pro models can handle 1-24 Bays. Should it really have such a dramatic effect? Does anyone have any experience? Otherwise I'll have to go shopping again. Thank you very much!

[Attached screenshot: AA894E3D-5B1B-4A76-9B70-E27B945A9083.jpeg]


Do you have the battery? The parity performance on those normally sucks without the battery.

 

 

Are you able to use software RAID? I'd argue it's a much better solution, especially if you can use ZFS on Linux.

 

I'd also say a RAID 5 on 20 drives isn't a good config; I'd probably go for a RAID 60 here.


12 minutes ago, havok2 said:

Hello,
I had 20 Seagate IronWolf 8TB HDDs available at work. Now I wanted to put them all in our Windows Server 2016 and configure them in a Raid6 using the Adaptec 81605zq and the Adaptec 82885T expander card. No sooner said than done. All 20 disks installed in the LianLi D8000 in decoupled slots and connected. Raid6 set up and ... 10MB/s in writing. Reading, however, goes at almost 2GB/s and even in a Raid0 I have speeds in the GB/s range. I have tried everything. Operation with two power supplies, all HDDs laid out on the floor... The data sheet says 1-8 Bays. The IronWolf Pro models can handle 1-24 Bays. Should it really have such a dramatic effect? Does anyone have any experience? Otherwise I'll have to go shopping again. Thank you very much!

[Attached screenshot: AA894E3D-5B1B-4A76-9B70-E27B945A9083.jpeg]

 

 

https://searchstorage.techtarget.com/definition/RAID-6-redundant-array-of-independent-disks

 


Disadvantages of RAID 6
Each set of parities must be calculated separately using RAID 6. This slows write performance. RAID 6 is also more expensive because of the two extra disks required for parity. RAID controller coprocessors are often employed to handle parity calculations and to improve RAID 6 write speed.

It takes a long time to rebuild the array after a disk failure because of RAID 6's slow write times. With even a moderate-sized array, rebuild times can stretch to 24 hours.

RAID 6 requires special hardware; it is important to use a controller specifically designed to support it.
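
To make the article's "each set of parities must be calculated separately" point concrete, here is a minimal sketch of how RAID 6 P and Q parity are commonly computed: P is a plain XOR across the data chunks, and Q is a Reed-Solomon style sum over GF(2^8) with generator 2 and polynomial 0x11D, as in the Linux md implementation. It is purely illustrative, not the Adaptec firmware's actual code. Every full-stripe write has to run both calculations, and every small write first has to read back the old data and both old parity blocks.

```python
# Minimal RAID 6 P/Q parity sketch (illustrative only, not the controller's
# actual implementation). P is XOR parity; Q is a Reed-Solomon style sum with
# coefficients 2^i over GF(2^8), polynomial 0x11D, like the Linux md layout.

def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11D)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return result

def raid6_parity(data_chunks: list[bytes]) -> tuple[bytes, bytes]:
    """Compute (P, Q) for one stripe made of equally sized data chunks."""
    chunk_len = len(data_chunks[0])
    p = bytearray(chunk_len)
    q = bytearray(chunk_len)
    coeff = 1                                # 2^0 for the first data disk
    for chunk in data_chunks:
        for j, byte in enumerate(chunk):
            p[j] ^= byte                     # P: simple XOR across data disks
            q[j] ^= gf_mul(coeff, byte)      # Q: weighted XOR (Reed-Solomon)
        coeff = gf_mul(coeff, 2)             # next coefficient 2^(i+1)
    return bytes(p), bytes(q)

# 18 data chunks plus P and Q corresponds to one stripe of a 20-disk RAID 6.
stripe = [bytes([d] * 4096) for d in range(18)]
p, q = raid6_parity(stripe)
```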


8 minutes ago, Biomecanoid said:

RAID 6 requires special hardware; it is important to use a controller specifically designed to support it.

Nowadays this is completely wrong.

Modern CPUs can easily deal with the math, usually much faster than a dedicated controller, unless it is a modern and expensive one.

 

For the OP's problem, it also seems to me like the controller is working without cache for whatever reason, the most probable one being no battery or a dead battery. If that is the case, it is sometimes possible to force the cache on without a battery for testing purposes; just be sure not to use the array like this, especially for any work-related task.


9 minutes ago, Archer42 said:

Nowadays this is completely wrong.

Modern CPUs can easily deal with the math, usually much faster than a dedicated controller, unless it is a modern and expensive one.

 

For the OP's problem, it also seems to me like the controller is working without cache for whatever reason, the most probable one being no battery or a dead battery. If that is the case, it is sometimes possible to force the cache on without a battery for testing purposes; just be sure not to use the array like this, especially for any work-related task.

Software solutions do not compare with the dedicated controller solutions found in servers. You never see a server with software RAID.


4 minutes ago, Biomecanoid said:

Software solutions do not compare with the dedicated controller solutions found in servers. You never see a server with software RAID.

You see a lot of servers with software RAID. There are tons of software RAID solutions, like ZFS and Storage Spaces, that are made for servers. They provide similar performance and are used in many production servers. Also, with things like NVMe, software RAID is getting much more common because hardware RAID can't be used.
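
If you do want to try the software route, here is a rough sketch of what that could look like on Linux; the device names and pool name are placeholders, the disks would need to be passed through to the OS as plain JBOD first, and you would pick only one of the two options since they can't share the same disks.

```python
# Rough sketch: building the 20-disk array as Linux software RAID instead of
# on the Adaptec card. Device names are placeholders for illustration; in
# practice you'd use the stable /dev/disk/by-id paths.
import subprocess

DISKS = [f"/dev/sd{letter}" for letter in "bcdefghijklmnopqrstu"]  # 20 drives

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Option 1: mdadm RAID 6 (two-drive redundancy), 256 KiB chunk like the card.
run(["mdadm", "--create", "/dev/md0", "--level=6",
     f"--raid-devices={len(DISKS)}", "--chunk=256", *DISKS])

# Option 2: ZFS raidz2 pool (also two-drive redundancy) on 4K-sector drives.
run(["zpool", "create", "-o", "ashift=12", "tank", "raidz2", *DISKS])
```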


18 minutes ago, Electronics Wizardy said:

You see a lot of servers with software RAID. There are tons of software RAID solutions, like ZFS and Storage Spaces, that are made for servers. They provide similar performance and are used in many production servers. Also, with things like NVMe, software RAID is getting much more common because hardware RAID can't be used.

Just because there are workarounds to avoid doing it the proper way does not mean you should use them. Servers in an enterprise environment should have a controller for RAID; otherwise they are not servers, just plain PCs.


2 minutes ago, Biomecanoid said:

Just because there are workarounds to avoid doing it the proper way does not mean you should use them. Servers in an enterprise environment should have a controller for RAID; otherwise they are not servers, just plain PCs.

Software RAID isn't a workaround; it is often the better solution. It's more flexible and gives more features.

 

Also, lots of large companies, like Oracle, Facebook and others, are using software RAID, and many more, like VMware and Microsoft, are selling solutions built on software RAID. It's not the worse solution, and hardware RAID cards seem to be slowly going away.


30 minutes ago, Electronics Wizardy said:

Do you have the battery? The parity performance on those normally sucks without the battery.

 

 

Are you able to use software RAID? I'd argue it's a much better solution, especially if you can use ZFS on Linux.

 

I'd also say a RAID 5 on 20 drives isn't a good config; I'd probably go for a RAID 60 here.

Hello,
yes I have the capacitor backup. We previously had 16x4TB SSD on the controller in Raid6 with >5GB/s 😄 So I think the controller should actually manage that. Even if I pass all 20 HDDs through to Windows as RAW and set up a Raid5 in Windows, the performance is just as subterranean. I have also not activated the write cache because I have already shot a drive completely to RAW here. As I said, the performance was still excellent with 8 and 12 HDDs. It is also only an additional backup. We have others. So the demand for reliability is lower. In the last 10 years of my company's history, I have not had a single defective HDD in my raids and with a Raid6 where two can fail, I remain relaxed. As I said, I think it's more about resonances that cause the HDDs to cycle. Here is an interesting article about this: https://www.ept.ca/features/everything-need-know-hard-drive-vibration/

It's also more about the question of whether anyone has ever seen something like this in the wild. There must be a reason why Seagate specifies a maximum of 8 drives for the IronWolf and 24 for the IronWolf Pro, and even Linus talks about it in the Petabyte video at minute 4:20: https://youtu.be/EtZXMj_gUjU

Thanks 🙂


5 minutes ago, Electronics Wizardy said:

Software RAID isn't a workaround; it is often the better solution. It's more flexible and gives more features.

 

Also, lots of large companies, like Oracle, Facebook and others, are using software RAID, and many more, like VMware and Microsoft, are selling solutions built on software RAID. It's not the worse solution, and hardware RAID cards seem to be slowly going away.

It's just cheaper and easier to do. These days people are kinda lazy and go for the cheap/easy solution.


Just now, Biomecanoid said:

It's just cheaper and easier to do. These days people are kinda lazy and go for the cheap/easy solution.

How is software RAID the lazy/cheaper solution if it's better and offers more features? I mean, you have to plan it right, but it's not the inferior solution, and there is a reason why lots of large production systems are using software RAID. Hardware RAID also isn't really good with newer storage standards like NVMe, either.


3 minutes ago, havok2 said:

Hello,
yes I have the capacitor backup. We previously had 16x4TB SSD on the controller in Raid6 with >5GB/s 😄 So I think the controller should actually manage that. Even if I pass all 20 HDDs through to Windows as RAW and set up a Raid5 in Windows, the performance is just as subterranean. I have also not activated the write cache because I have already shot a drive completely to RAW here. As I said, the performance was still excellent with 8 and 12 HDDs. It is also only an additional backup. We have others. So the demand for reliability is lower. In the last 10 years of my company's history, I have not had a single defective HDD in my raids and with a Raid6 where two can fail, I remain relaxed. As I said, I think it's more about resonances that cause the HDDs to cycle. Here is an interesting article about this: https://www.ept.ca/features/everything-need-know-hard-drive-vibration/

It's also more about the question of whether anyone has ever seen something like this in the wild. There must be a reason why Seagate specifies a maximum of 8 drives for the IronWolf and 24 for the IronWolf Pro, and even Linus talks about it in the Petabyte video at minute 4:20: https://youtu.be/EtZXMj_gUjU

Thanks 🙂

What RAID in Windows are you using? Are you using Storage Spaces, and what commands did you use in Storage Spaces?

 

Have you tried Linux to rule out an OS issue?

 

What settings are you using on the RAID setup? Can you post a screenshot of your config?

 

The 8-drives-per-enclosure limit is mostly just marketing and warranty, and won't make a difference in terms of performance, especially with a case like yours where vibration won't be a big issue.

 

 

 

 


11 minutes ago, Electronics Wizardy said:

What RAID in Windows are you using? Are you using Storage Spaces, and what commands did you use in Storage Spaces?

 

Have you tried Linux to rule out an OS issue?

 

What settings are you using on the RAID setup? Can you post a screenshot of your config?

 

The 8-drives-per-enclosure limit is mostly just marketing and warranty, and won't make a difference in terms of performance, especially with a case like yours where vibration won't be a big issue.

 

 

 

 

Hey,

I tried Linux as well... same picture. Here is my config at the moment (sorry, the screenshot is not from the Raid6 array itself, but the values are the same as in my Raid6 test; stripe size was also 256). (I used skip init because I first tried it with the full init and that takes 50h, so just for testing I'm skipping it for the moment.)

 

Yeah I used that Storage Manager thing and dynamic drives and so on.

[Attached screenshot: 46ADD965-1C53-44BB-867C-85DDE4E037CD.jpeg]


8 minutes ago, havok2 said:

I tried Linux as well... same picture. Here is my config at the moment (sorry, the screenshot is not from the Raid6 array itself, but the values are the same as in my Raid6 test; stripe size was also 256). (I used skip init because I first tried it with the full init and that takes 50h, so just for testing I'm skipping it for the moment.)

 

I know init can affect performance on some cards.

 

Try changing to write-back, which makes writes much faster, and enable the write cache on the card.

 

 

 


8 minutes ago, Electronics Wizardy said:

I know init can affect performance on some cards.

 

Try changing to write-back, which makes writes much faster, and enable the write cache on the card.

 

 

 

Hello,
Due to the discussion, I have activated the write cache (disk&controller). I thought this was only intended for frequent access? It seems much better, but are these peaks normal? (One 60GB File)

[Attached screenshot: A090C0A3-15BD-494E-AA9D-77EBE0EA3D7D.jpeg]


Just now, havok2 said:

Due to the discussion, I have activated the write cache (disk&controller). I thought this was only intended for frequent access? It seems much better, but are these peaks normal?

What IO workload were you running when the screenshot was taken?

 

That seems pretty normal to me.

 

The cache really helps with writes on parity RAID, so this is expected.
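
If you want numbers that are easier to compare than Explorer's copy graph, a quick sequential-write sanity check like the sketch below takes the network and the source array out of the picture. The target path and file size are placeholders, and it is no replacement for a proper tool like fio or diskspd, just a rough check.

```python
# Quick sequential-write sanity check (illustrative; target path and size are
# placeholders). Writes a large file in 64 MiB blocks and reports MB/s.
import os
import time

TARGET = r"E:\bench.tmp"          # somewhere on the RAID volume
TOTAL_BYTES = 20 * 1024**3        # 20 GiB, large enough to blow past caches
BLOCK = 64 * 1024 * 1024

block = os.urandom(BLOCK)
start = time.perf_counter()
with open(TARGET, "wb", buffering=0) as f:
    written = 0
    while written < TOTAL_BYTES:
        f.write(block)
        written += BLOCK
    os.fsync(f.fileno())          # make sure the data actually hit the array
elapsed = time.perf_counter() - start
print(f"{written / elapsed / 1e6:.0f} MB/s sequential write")
os.remove(TARGET)
```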


22 minutes ago, Electronics Wizardy said:

What IO workload were you running when the screenshot was taken?

 

That seems pretty normal to me.

 

The cache really helps with writes on parity RAID, so this is expected.

Hello,

I just copied a 58GB ISO file from a 40Gbit network SSD RAID to that RAID with Explorer copy & paste. I'm thinking about doing two Raid5 arrays with two of those controllers and then making a Raid0 in Windows. But I hate the idea that only one drive can fail in a Raid5... But this controller is telling me "I don't want to end up on the shelf."


5 minutes ago, havok2 said:

Hello,

I just copied a 58GB ISO file from a 40Gbit network SSD RAID to that RAID with Explorer copy & paste. I'm thinking about doing two Raid5 arrays with two of those controllers and then making a Raid0 in Windows. But I hate the idea that only one drive can fail in a Raid5... But this controller is telling me "I don't want to end up on the shelf."

That seems like a messy setup; I'd just do a RAID 50, it's basically the same thing anyway.

 

Or do a RAID 6 if performance isn't a huge issue.
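
For a rough comparison of the layouts being discussed, the back-of-the-envelope capacity and worst-case fault-tolerance math for 20 x 8TB drives looks like this (decimal TB, ignoring filesystem overhead):

```python
# Back-of-the-envelope capacity/redundancy comparison for 20 x 8 TB drives.
DRIVES, SIZE_TB = 20, 8

def usable_tb(groups: int, parity_per_group: int) -> int:
    """Usable capacity when the drives are split into equal parity groups."""
    per_group = DRIVES // groups
    return groups * (per_group - parity_per_group) * SIZE_TB

configs = {
    "RAID 6  (one 20-drive group)":   (1, 2),
    "RAID 60 (two 10-drive RAID 6s)": (2, 2),
    "RAID 50 (two 10-drive RAID 5s)": (2, 1),
}

for name, (groups, parity) in configs.items():
    # Worst case: all failures land in the same group.
    print(f"{name}: {usable_tb(groups, parity)} TB usable, "
          f"survives any {parity} simultaneous drive failure(s)")
```

That works out to 144 TB for RAID 6, 128 TB for RAID 60 and 144 TB for RAID 50, with RAID 50 only guaranteed to survive a single failure.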


11 minutes ago, Electronics Wizardy said:

That seems like a messy setup; I'd just do a RAID 50, it's basically the same thing anyway.

 

Or do a RAID 6 if performance isn't a huge issue.

This looks more like the values I wanted. I'll test a 50 for a performance comparison. Thanks!

[Attached screenshot: CC3B2AC7-9FC4-4451-BB9C-76F1DE6EE72D.jpeg]


4 hours ago, havok2 said:

Yeah I used that Storage Manager thing and dynamic drives and so on.

Just as an FYI, Storage Manager and its RAID configurations aren't the same thing as Storage Spaces; they are two different things in Windows. You won't want to use either though, because even the better of the two, Storage Spaces, requires SSDs as journals for write-back cache. You have a decent hardware RAID card that will be faster, so that's the better option here.

 

5 hours ago, havok2 said:

yes I have the capacitor backup. We previously had 16x4TB SSD on the controller in Raid6 with >5GB/s

So as an explainer here: the recommended configuration for SSD RAIDs is to force off write-back cache, as SSD RAIDs are faster than the onboard cache. It was likely disabled on purpose, which is why you were seeing such bad performance with HDDs.

 

I know you have it on now, but it's probably still good to know when to have it on or off. With LSI RAID cards they actually disable write-back dynamically on a per-array basis if the array contains SSDs, so you don't have to do it manually or turn it off globally for the entire card.

 

You have it enabled now so that's good to see.

 

But yes, write-back cache for HDD parity RAIDs is a necessity to make them at all usable; the 10MB/s write performance you were getting is pretty typical without write-back.
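
A rough back-of-the-envelope for why it collapses that far: without write-back cache, every small write to a RAID 6 volume turns into a read-modify-write of the old data plus both parity blocks, roughly six disk operations per logical write, so the spindles spend their time seeking instead of streaming. The sketch below runs that arithmetic; the per-drive IOPS and throughput figures are assumptions for illustration, not measurements from this system.

```python
# Why parity RAID crawls without write-back cache: read-modify-write math.
# The drive figures below are assumptions for illustration only.
DRIVE_IOPS = 150        # random IOPS of a typical 7200 rpm HDD
DRIVE_SEQ_MB_S = 180    # sequential throughput of one drive, MB/s
DRIVES, PARITY = 20, 2
BLOCK_KB = 64           # size of each small random write

# RAID 6 read-modify-write: read old data + old P + old Q,
# then write new data + new P + new Q  ->  6 disk ops per logical write.
WRITE_PENALTY = 6

small_write_iops = DRIVES * DRIVE_IOPS / WRITE_PENALTY
small_write_mb_s = small_write_iops * BLOCK_KB / 1024
full_stripe_mb_s = (DRIVES - PARITY) * DRIVE_SEQ_MB_S   # cache coalesces writes

print(f"~{small_write_mb_s:.0f} MB/s for uncached small writes vs "
      f"~{full_stripe_mb_s} MB/s for coalesced full-stripe writes")
```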


5 hours ago, Biomecanoid said:

Just because there are workarounds to avoid doing it the proper way does not mean you should use them. Servers in an enterprise environment should have a controller for RAID; otherwise they are not servers, just plain PCs.

That situation has drastically changed; you might have heard me say something similar a very long time ago (longer than 10 years ago), but it's just not the case anymore. NetApp FAS enterprise storage solutions are all software based, there are other enterprise storage vendors that are now software based, and all the new market entrants have only ever been software based.

 

When NVMe initially came out there were no hardware RAID cards that supported NVMe at all. I have some HPE servers with NVMe SSDs in them, but they are from that generation, so software was the only choice.

 

Then you have even more advanced software storage solutions like Ceph, Gluster, Lustre or one of the many other distributed filesystems.

 

Things change over time; it's always good to make sure you stay current and are aware of these changes.


6 minutes ago, leadeater said:

That situation has drastically changed; you might have heard me say something similar a very long time ago (longer than 10 years ago), but it's just not the case anymore. NetApp FAS enterprise storage solutions are all software based, there are other enterprise storage vendors that are now software based, and all the new market entrants have only ever been software based.

 

When NVMe initially came out there were no hardware RAID cards that supported NVMe at all. I have some HPE servers with NVMe SSDs in them, but they are from that generation, so software was the only choice.

 

Then you have even more advanced software storage solutions like Ceph, Gluster, Lustre or one of the many other distributed filesystems.

 

Things change over time; it's always good to make sure you stay current and are aware of these changes.

It's been like that on normal PCs for ages now, so nothing that new to learn here.


4 minutes ago, Biomecanoid said:

It's been like that on normal PCs for ages now, so nothing that new to learn here.

Except you specifically said servers in your comments. Storage Spaces is usable on desktops and even laptops, and if you run a Linux desktop then you have every other option I talked about, if you want to use them. You don't have to use hardware RAID anymore if you don't wish to; there are good options from both approaches on desktops/laptops and servers alike.


5 minutes ago, leadeater said:

Except you specifically said servers in your comments. Storage Spaces is usable on desktops and even laptops, and if you run a Linux desktop then you have every other option I talked about, if you want to use them. You don't have to use hardware RAID anymore if you don't wish to; there are good options from both approaches on desktops/laptops and servers alike.

Linux can do bootable RAID arrays, which Windows can do now as well. Yes, that is doable. And I do have software RAID in my laptop because there is no other option: an M.2 boot drive, plus an HDD in the laptop's default spot, plus an HDD caddy instead of the optical drive.

 

A setup with no RAID card is cheaper, since everything is done in software on the CPU, so people prefer that to save money and complexity.

 

 


Thanks to all for bringing me to that simple and cheap solution!

