
Interesting RAID testing results

Preface: I am not a pro, just showing what I found.

So I obtained a Dell R610 with dual Intel Xeon L5520s, put in 47GB of DDR3 1600 RAM (yes, I know it's a 1066 MHz FSB; this RAM somehow had a lower CAS latency than the 1066 kit and was the same price), added six Seagate Savvio 10K.6 ST300MM0026 300GB 2.5" SAS 6Gb/s 10K RPM drives LINK, installed a PERC H700, got an iDRAC 6 card plus Windows Server 2012 R2 Standard and some CALs, and fired 'er up. Keep in mind this is my first server rodeo; so far I think I'm doing pretty well. I did a lot of research on RAID configurations and decided I want to go with a RAID 10 array, since most of my workload is reads and the writes will mostly be random, but I wanted to be sure. So I first configured a single full-size RAID 6 array, did a full initialize, did a fresh install of the OS, and BAM, here are the results. Sorry this is a pic from my phone; I was on the KVM console and didn't have time to play around with the Snipping Tool, email, etc. (Ignore the 15K RPM note, I for some reason thought these were 15K drives while doing these tests lolol):

IMG_20161225_190031.jpg

Not too shabby, right? Well, I then rebooted the server, went into the RAID card, deleted the VD, created a full-size RAID 10 array, did a full initialize, installed the OS, and here are the results from that test (this one I did through the iDRAC so I could use the Snipping Tool haha):

Test 2.PNG

 

So here's what's troubling: all sequential work, reads and writes, is slower on RAID 10. At a queue depth of 32 it's dramatically slower; at a queue depth of 1 it's only a tad slower. 4K reads are, sure, twice as fast, but only 2.4MB/s? 4K writes are indeed faster by almost a factor of 4, but the 4K QD32 results are atrocious, with reads a full 1MB/s slower than the RAID 6. The only result I'm happy about is the 4K QD32 writes; 23MB/s is pretty fast, relatively speaking.

 

At the end of the day I was seriously hoping for sequential reads/writes of 1GB/s on this array. Unless I'm totally missing something, it should be able to hit those marks in a RAID 10 config, and quite possibly get close in a RAID 6 config as well. My question to you all, and the point of the thread, is this: shouldn't the RAID 10 results be better overall than the RAID 6 results at this size?

 

And for all those curious: yes, I did full initializes before installing the OS, battery-backed write-back cache is on, adaptive read-ahead is on, and the strip size is 64KB.
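For anyone who wants to put rough numbers on why the two layouts behave so differently, here's a quick back-of-the-envelope sketch in Python. The per-drive figures are assumptions, not measurements (a 10K SAS drive streams somewhere in the 150MB/s range and manages very roughly 140 small random IOPS), and it assumes the controller only streams from one disk per mirror on RAID 10 sequential reads, which is what the numbers above suggest:

# Rough model of the six 300GB Savvio drives in the two layouts being compared.
# All per-drive numbers are assumptions -- adjust to taste.
DRIVES = 6
DRIVE_GB = 300
STREAM_MBPS = 150      # assumed average sequential rate of one 10K SAS drive
RANDOM_IOPS = 140      # assumed small-block random IOPS of one 10K SAS drive

def raid10(n):
    data_spindles = n // 2                          # mirrored pairs striped together
    return {
        "usable_gb": data_spindles * DRIVE_GB,
        "seq_mbps": data_spindles * STREAM_MBPS,    # assumes one disk per mirror serves the stream
        "rand_write_iops": n * RANDOM_IOPS // 2,    # write penalty of 2: both mirror members are written
    }

def raid6(n):
    data_spindles = n - 2                           # two drives' worth of capacity go to parity
    return {
        "usable_gb": data_spindles * DRIVE_GB,
        "seq_mbps": data_spindles * STREAM_MBPS,    # full-stripe transfers, parity handled on the card
        "rand_write_iops": n * RANDOM_IOPS // 6,    # write penalty of 6: read and rewrite data, P and Q
    }

print("RAID 10:", raid10(DRIVES))
print("RAID 6: ", raid6(DRIVES))

With four data spindles against three, RAID 6 comes out ahead on big sequential transfers, while the much lower write penalty is what keeps RAID 10 ahead on small random writes, which lines up with the screenshots above.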

 

If anyone has any thoughts or questions, please post; I'm curious to see what others think! Thank you!


Yep, RAID 10 is slower for sequential read/write; it always has been and always will be. Only use RAID 10 for highly random write I/O; for anything else use RAID 6. However, if you have a RAID card that doesn't have a BBU and active write-back cache, RAID 5/6 will perform very badly.


1 minute ago, leadeater said:

Yep, RAID 10 is slower for sequential read/write; it always has been and always will be. Only use RAID 10 for highly random write I/O; for anything else use RAID 6. However, if you have a RAID card that doesn't have a BBU and active write-back cache, RAID 5/6 will perform very badly.

I was under the impression that RAID 10 was always faster than RAID 6, interesting. Assuming that's the case, it looks like that 1GB/s won't be possible unless I got faster 15K RPM drives, or SSDs (neither of which is happening; not that we need that speed, it would just be nice to have the headroom just in case).

Thanks for the post. I'm going to reach out to the software devs and see what they recommend as well; I want to do this "once and done" style.


19 minutes ago, STiCory said:

I was under the impression that RAID 10 was always faster than RAID 6, interesting. Assuming that's the case, it looks like that 1GB/s won't be possible unless I got faster 15K RPM drives, or SSDs (neither of which is happening; not that we need that speed, it would just be nice to have the headroom just in case).

Thanks for the post. I'm going to reach out to the software devs and see what they recommend as well; I want to do this "once and done" style.

Just remember there are more downsides to RAID 10 than upsides.

  • You cannot expand a RAID 10 array, meaning you can't add more disks to an existing array
  • Only half your capacity is usable
  • In much larger arrays (12+ disks, ish) RAID 10 is actually less safe than RAID 6

What makes RAID 6 faster is that more disks are active I/O spindles in both read and write, which is where the higher sequential performance comes from. The performance gap only gets larger with more disks. Modern RAID cards have very good SoC chips on them, so the parity calculation is much less of an issue than back in the parallel SCSI days.
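To illustrate the spindle point, a trivial sketch (full-stripe sequential transfers, ignoring controller overhead) showing how the number of data-carrying spindles scales for each layout:

# Data spindles available to a full-stripe sequential transfer as the array grows.
def data_spindles(n_disks):
    return {"RAID 10": n_disks // 2, "RAID 6": n_disks - 2}

for n in (4, 6, 8, 12, 16, 24):
    d = data_spindles(n)
    print(f"{n:2d} disks -> RAID 10: {d['RAID 10']:2d} spindles, RAID 6: {d['RAID 6']:2d} spindles")

At six disks it's 3 vs 4; at 24 disks it's 12 vs 22, which is why the sequential gap keeps widening.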

 

There's tons of bad information about RAID on the internet, and many people who comment/post about it have very little experience; I see people get confused about RAID 10 vs RAID 6 all the time. I've been using SCSI & RAID since about 1997/1998, so I've suffered through the million different connectors, terminators, SCSI IDs, etc. SAS is just so much better.

 

Edit:

P.S. @scottyseng was the last person I converted to RAID 6 on the forum; he'll be able to give you some valuable input and advice too.


24 minutes ago, leadeater said:

Just remember there are more downsides to RAID 10 than upsides.

  • You cannot expand a RAID 10 array, meaning you can't add more disks to an existing array
  • Only half your capacity is usable
  • In much larger arrays (12+ disks, ish) RAID 10 is actually less safe than RAID 6

What makes RAID 6 faster is that more disks are active I/O spindles in both read and write, which is where the higher sequential performance comes from. The performance gap only gets larger with more disks. Modern RAID cards have very good SoC chips on them, so the parity calculation is much less of an issue than back in the parallel SCSI days.

 

There's tons of bad information about RAID on the internet, and many people who comment/post about it have very little experience; I see people get confused about RAID 10 vs RAID 6 all the time. I've been using SCSI & RAID since about 1997/1998, so I've suffered through the million different connectors, terminators, SCSI IDs, etc. SAS is just so much better.

 

Edit:

P.S. @scottyseng was the last person I converted to RAID 6 on the forum; he'll be able to give you some valuable input and advice too.

Good information, thanks for all that. This server won't grow beyond the (6) 300GB drives, unless I end up replacing them with larger drives many years from now, but for all intents and purposes, the config/size of the VD that I create now is how it will stay until I decommission it. In your professional opinion, what would be the better move, RAID 6 or RAID 10? Or does the answer depend on the use case?


18 minutes ago, STiCory said:

Good information, thanks for all that. This server won't grow beyond the (6) 300GB drives, unless I end up replacing them with larger drives many years from now, but for all intents and purposes, the config/size of the VD that I create now is how it will stay until I decommission it. In your professional opinion, what would be the better move, RAID 6 or RAID 10? Or does the answer depend on the use case?

Mostly use-case scenario, but RAID 6 is usually better for most cases. Most people have a lot of large files to store rather than running a lot of random I/O (usually lots of VMs or finance applications). I'd say go with RAID 6.

 

You should see dramatically better sequential performance.

 

I have eight (well, nine now) 4TB WD Re SAS drives and they broke 1GB/s in RAID 6.

 

Do note that the speeds will drop as you fill up the array. I'm getting roughly 500MB/s at 65% full.


Just now, scottyseng said:

Mostly use-case scenario, but RAID 6 is usually better for most cases. Most people have a lot of large files to store rather than running a lot of random I/O (usually lots of VMs or finance applications). I'd say go with RAID 6.

 

You should see dramatically better sequential performance.

That's what I was thinking, but I was curious about the general consensus. The primary applications are QuickBooks and Fishbowl Inventory; from what I've gathered they appear to be mostly random writes, but I'm not 100% sure of this. I reached out to both dev teams and am awaiting clarification.

This server won't have any VMs, or at least not in the near future. I do want this server to double as the domain controller as well (only 10 users or so), plus it will hold all of the file sharing, product pictures and information, stored accounting information like sales orders/purchase orders, etc., as well as CNC programs and other random stuff.

Basically I want this server to be the daily driver, and then I'll build a second server to be the backup/archive server that will do hourly snapshots of the important stuff like the accounting databases, plus nightly backups or something. I'll cross that bridge when we get to it, but this one needs to be about as fast as possible for its use case while still remaining relatively reliable, if that's possible haha


15 minutes ago, STiCory said:

Good information, thanks for all that. This server won't grow beyond the (6) 300GB drives, unless I end up replacing them with larger drives many years from now, but for all intents and purposes, the config/size of the VD that I create now is how it will stay until I decommission it. In your professional opinion, what would be the better move, RAID 6 or RAID 10? Or does the answer depend on the use case?

Pretty much use case, but I'd always default to RAID 6 if in doubt or if the two come out very close to each other. The best way to know is to get the full details of the application's I/O pattern, optimize the stripe size (always bigger than the largest I/O) and then use Iometer to do the comparison. Look at throughput and IOPS, but also average latency and max latency.
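If it helps, the "stripe at least as big as the largest I/O" rule is easy to automate once you have a trace of I/O sizes. A minimal sketch; the list of candidate strip sizes is an assumption, so use whatever values your RAID card actually exposes:

# Pick the smallest strip size that is still at least as large as the biggest I/O observed.
# CANDIDATE_KB is an assumption -- substitute the values your controller actually offers.
CANDIDATE_KB = [8, 16, 32, 64, 128, 256, 512, 1024]

def pick_strip_size(io_sizes_kb):
    largest = max(io_sizes_kb)
    for strip in CANDIDATE_KB:
        if strip >= largest:
            return strip
    return CANDIDATE_KB[-1]   # nothing big enough, take the largest on offer

# e.g. a workload that mostly issues 4K/8K random I/O with the occasional 64K read
print(pick_strip_size([4, 4, 8, 8, 64]))   # -> 64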


5 minutes ago, STiCory said:

-snip-

I'd probably say RAID 6 is the better bet, but yeah, you need more information on how I/O-heavy those applications are. You might be fine.

 

Wow, that's a lot of stuff running on a single server though.

 

Your random I/O performance on your RAID 10 is better than on my RAID 6 array, but I'm using 7.2K RPM drives.


1 minute ago, STiCory said:

That's what I was thinking, but I was curious about the general consensus. The primary applications are QuickBooks and Fishbowl Inventory; from what I've gathered they appear to be mostly random writes, but I'm not 100% sure of this. I reached out to both dev teams and am awaiting clarification.

This server won't have any VMs, or at least not in the near future. I do want this server to double as the domain controller as well (only 10 users or so), plus it will hold all of the file sharing, product pictures and information, stored accounting information like sales orders/purchase orders, etc., as well as CNC programs and other random stuff.

Basically I want this server to be the daily driver, and then I'll build a second server to be the backup/archive server that will do hourly snapshots of the important stuff like the accounting databases, plus nightly backups or something. I'll cross that bridge when we get to it, but this one needs to be about as fast as possible for its use case while still remaining relatively reliable, if that's possible haha

You can use HD Tune Pro to monitor disk activity; it will give you a breakdown of I/O sizes and counts, and you can use that to figure out whether you are mostly read or write and what your stripe size needs to be.
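For example, once you've exported a capture somewhere, summarising it only takes a few lines. The CSV layout below (one row per I/O with "op" and "size_kb" columns) is purely hypothetical; adapt it to whatever format the tool actually gives you:

import csv
from collections import Counter

# Hypothetical export format: one row per I/O with columns "op" (R/W) and "size_kb".
def summarise(path):
    reads = writes = 0
    sizes = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["op"].upper().startswith("R"):
                reads += 1
            else:
                writes += 1
            sizes[int(row["size_kb"])] += 1
    total = reads + writes
    print(f"reads: {reads / total:.0%}   writes: {writes / total:.0%}")
    for size, count in sizes.most_common(5):
        print(f"{size:>5} KB : {count} I/Os")

# summarise("io_capture.csv")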

 

As for the applications, that will depend on whether you are mostly inserting data or doing lookups; if they are good database engines, most things will happen in memory.

 

I would advise against sharing anything with a domain controller OS though; when you enable the DC role it enforces encrypted and signed network sessions, putting heaps of load on the CPU. Plus it's not best practice anyway. Windows Server Standard allows you to run 2 VMs with a single license, so I would create two VMs: a DC and a file/application server. Hyper-V is pretty good and it allows you to replicate VMs to another server as standard (no license cost, unlike ESXi).

https://technet.microsoft.com/en-us/library/jj134172(v=ws.11).aspx


8 minutes ago, leadeater said:

Pretty much use case, but I'd always default to RAID 6 if in doubt or if the two come out very close to each other. The best way to know is to get the full details of the application's I/O pattern, optimize the stripe size (always bigger than the largest I/O) and then use Iometer to do the comparison. Look at throughput and IOPS, but also average latency and max latency.

Will do. I'll install the programs on the current RAID 10 VD and see how it fares, then redo it on a RAID 6 VD and check, and just pick the best. It will most likely be the RAID 6, I'm thinking, just due to the higher sequential speeds.

1 minute ago, scottyseng said:

I'd probably say RAID 6 is the better bet, but yeah, you need more information on how I/O-heavy those applications are. You might be fine.

 

Wow, that's a lot of stuff running on a single server though.

 

Your random I/O performance on your RAID 10 is better than on my RAID 6 array, but I'm using 7.2K RPM drives.

Agreed. It is a lot of stuff, but we're a small shop and I'm trying to bring it into the 21st century haha. It's my father's company, and right now we have everything running on a single 5400RPM HDD in a Windows 10 Home desktop.. O.o ..want to talk about atrocious performance? haha

Well, that's at least one positive this RAID 10 VD has going for it so far xD

2 minutes ago, leadeater said:

You can use HD Tune Pro to monitor disk activity; it will give you a breakdown of I/O sizes and counts, and you can use that to figure out whether you are mostly read or write and what your stripe size needs to be.

 

As for the applications, that will depend on whether you are mostly inserting data or doing lookups; if they are good database engines, most things will happen in memory.

 

I would advise against sharing anything with a domain controller OS though; when you enable the DC role it enforces encrypted and signed network sessions, putting heaps of load on the CPU. Plus it's not best practice anyway. Windows Server Standard allows you to run 2 VMs with a single license, so I would create two VMs: a DC and a file/application server. Hyper-V is pretty good and it allows you to replicate VMs to another server as standard (no license cost, unlike ESXi).

https://technet.microsoft.com/en-us/library/jj134172(v=ws.11).aspx

Never used that program; I'll check it out. I'm not a sysadmin by trade by any means, just a 26-year-old having fun while learning, and trying to upgrade what we have to something better in just about every category, if not all of them lol.

Interesting note about the DC role and encryption; that was something I wasn't aware of. Perhaps then I'll use this server like you're mentioning and run two VMs instead of putting it all on the physical chassis. At that point though, should I keep the single VD or split it up into two RAID 5 arrays, one for the hypervisor and the DC server and the other for the file sharing/application server? Or something else?


2 minutes ago, STiCory said:

-snip-

Yeah, I'm thinking you can easily break 1GB/s with those drives in RAID6. It is a balancing act though.

 

Ah, if you are running any Ethernet wire, I would urge you to use either shielded cable or metal conduit. I used to be an intern debugging a software program, and the IT guys they hired ran unshielded Cat5e through a metal shop with welders, multiple laser cutters, and several machines. The packet loss was over 70%; it was so bad.

 

Yeah, I'm still amazed at how some people's home NAS units on the forum here probably outspec a lot of small business servers...


1 minute ago, STiCory said:

At that point though, should I keep the single VD or split it up into two RAID 5 arrays, one for the hypervisor and the DC server and the other for the file sharing/application server? Or something else?

Hmm, I'm used to having two HDDs in a mirror for a dedicated OS, or a USB drive (ESXi). When you create a drive group and then a VD, can you have a 100GB RAID 1 VD and the rest as a RAID 6 VD? It's been a while since I've used my RAID cards.

 

@scottyseng You've done this more recently, multiple VDs good or bad?


5 minutes ago, leadeater said:

-snip-

My multiple VD setup is pretty great. Of course, if you load one VD, it'll make the others slow down, but so far no issues. Nine 4TB in RAID6, six 4TB in RAID6 (3TB / 11TB).

 

I'm liking this RAID 6 too; I can buy a drive, chuck it in, rebuild to expand, and done.


Alright, so how about this idea: given that there are six 300GB drives, what if I did two VDs, one RAID 1 for the OS, the hypervisor, and a virtual machine for just the domain controller role, and the remaining four drives in a RAID 6 or RAID 10 array (depending on testing) for the file sharing and application virtual machine?

How does that sound?


1 hour ago, STiCory said:

-snip-

That works, but I find the RAID 1 a tad overkill; that's up to you though. I personally would buy an SSD and just leave it hanging inside the server as the OS/program drive. That's sadly how my OS SSD lives in my server.

 

It should work fine though.


 

12 hours ago, scottyseng said:

That works, but I find the RAID 1 a tad overkill; that's up to you though. I personally would buy an SSD and just leave it hanging inside the server as the OS/program drive. That's sadly how my OS SSD lives in my server.

 

It should work fine though.

 

13 hours ago, STiCory said:

Alright, so how about this idea: given that there are six 300GB drives, what if I did two VDs, one RAID 1 for the OS, the hypervisor, and a virtual machine for just the domain controller role, and the remaining four drives in a RAID 6 or RAID 10 array (depending on testing) for the file sharing and application virtual machine?

How does that sound?

 

15 hours ago, leadeater said:

Hmm, I'm used to having two HDDs in a mirror for a dedicated OS, or a USB drive (ESXi). When you create a drive group and then a VD, can you have a 100GB RAID 1 VD and the rest as a RAID 6 VD? It's been a while since I've used my RAID cards.

 

@scottyseng You've done this more recently, multiple VDs good or bad?

Sorry for kind of hijacking this thread, but reading through these results I've begun to wonder: how would different file systems fare with these setups?

ZFS, Btrfs, etc. (FreeNAS, Unraid, ...)



13 minutes ago, MrKickkiller said:

Sorry for kind of hijacking this thread, but reading through these results I've begun to wonder: how would different file systems fare with these setups?

ZFS, Btrfs, etc. (FreeNAS, Unraid, ...)

ZFS, Btrfs, etc. are all quite different to hardware RAID. Their performance depends greatly on the design of the file system and then on how you configure your storage. Unlike hardware RAID, good software storage solutions can read from all disks in a RAID 10-style configuration, giving excellent performance; Windows Storage Spaces can, for example, and I'm fairly sure ZFS can too.

 

ZFS also does a lot of memory caching and can use an SSD as cache as well; some hardware RAID cards can do SSD caching, but it's nowhere near as good as ZFS.

 

Performance also scales much better with software storage solutions; with hardware RAID there really isn't much difference between 12 disks and 48 disks (not a usable difference, anyway).
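To make the mirror-read point concrete: if the controller only ever streams from one side of each mirror, as described above, a 2N-disk RAID 10 reads from N spindles, while a software mirror that reads both sides can pull from all 2N. A toy comparison, assuming every spindle contributes the same (made-up) rate:

STREAM_MBPS = 150   # assumed per-spindle sequential rate

def seq_read_estimate(n_disks, reads_both_mirror_sides):
    spindles = n_disks if reads_both_mirror_sides else n_disks // 2
    return spindles * STREAM_MBPS

print("HW RAID 10, one side per mirror:", seq_read_estimate(6, False), "MB/s")
print("Mirror reading both sides:      ", seq_read_estimate(6, True), "MB/s")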


Alright, so I've been doing some testing and some thinking, and I slightly revised my idea. How does this sound / is this possible?

VD0: RAID 1 array, (2) 300GB drives

VD1: RAID 5/6 array, (4) 300GB drives

 

Physical Machine on VD0

  Windows Server 2012 R2 Standard

    Hyper-V

Virtual Machine 0 on VD0

  Windows Server 2012 R2 Standard (VM1)

    Domain Controller

    Active Directory and all its happy horse stuff

Virtual Machine 1 on VD1

  Windows Server 2012 R2 Standard (VM2)

    File Server

    Application/ Program Server

 

This way, the only things running on the RAID 1 array would be the Hyper-V role, the DC, and the AD stuff. That leaves a whole separate OS install, on entirely separate drives in an entirely separate RAID array, to hold all of the files/applications. My only question here is: when Windows disables write-back cache, is that just at the OS level, or does it somehow disable it at the RAID card level? In other words, if the RAID card has write-back cache enabled but one virtual machine's OS disables it, do the rest of the VMs hitting that same RAID card have it disabled as well? This is what I have created for VDs on the RAID card; what do you think?

RAID Card.PNG


For fun, attached are some CrystalDiskMark tests. The first is the small two-drive RAID 1 array, the second is a separate four-drive RAID 5 array, the third replaces that RAID 5 array with a fresh RAID 6, and the last replaces that RAID 6 with a RAID 10 array. Pretty interesting results, honestly, seeing how different parity arrays are from simple stripes/mirrors, as well as the sequential results of RAID 5 vs. RAID 6!!

2 Drive RAID 1 VD.PNG

4 Drive Raid 5 VD.PNG

4 Drive RAID 6 VD.PNG

4 Drive RAID 10.PNG

 

Thanks everyone!!


Alright, some updates. @leadeater and @scottyseng, tagging you guys because you've been following along and have both been helpful!

To recap: disk group 0 is a two-drive RAID 1 array with two VDs, one 100GB and the other 126GB; the second disk group is a four-drive RAID 5 array and is 837GB.

I installed the first OS on the 100GB VD (drive letter C), formatted the 126GB VD (drive letter E) as NTFS, formatted the 837GB VD (drive letter F) as NTFS, added the Hyper-V role, and rebooted. I then created the first VM as a gen 2 with 4GB of RAM and 4 virtual cores, with its VHD and VM files on the E drive in a folder of their own. This is where it starts to get interesting!
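For anyone recreating that step, the VM as described (gen 2, 4GB of RAM, 4 vCPUs, files kept in their own folder on E:) can be scripted. The name, folder, and VHD size below are made up for illustration; the sketch just shells out to the Hyper-V PowerShell module from Python, so you could equally type the two commands straight into PowerShell on the host:

import subprocess

def ps(command):
    # Run a single PowerShell command on the Hyper-V host.
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

VM_NAME = "DC01"            # hypothetical name for the domain controller VM
VM_DIR = r"E:\VMs\DC01"     # hypothetical folder on the 126GB VD

ps(f'New-VM -Name "{VM_NAME}" -Generation 2 -MemoryStartupBytes 4GB '
   f'-Path "{VM_DIR}" -NewVHDPath "{VM_DIR}\\{VM_NAME}.vhdx" -NewVHDSizeBytes 60GB')
ps(f'Set-VMProcessor -VMName "{VM_NAME}" -Count 4')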

I did not have the DC role added at this point; right now there are only two machines, the Hyper-V server and the freshly installed VM. I did some CrystalDiskMark testing from the Hyper-V server and this is what I got:

Atlas Before DC Role.PNG

I then ran the same test on the new VM, still before adding the DC role, and this is what I got:

Zenith Before DC Role.PNG

As expected, these are almost identical to the preliminary testing from earlier. I then installed the DC role on the VM, configured AD DS, rebooted a handful of times, and ran the tests again. This is the Hyper-V server after the DC role was installed on the VM:

Atlas after DC role.PNG

And this is the VM after the DC role was installed:

Zenith after DC role.PNG

...no change.. at all... I was expecting a hit, because Windows Server 2012 R2 disables write caching when the DC role is installed, right? Well, come to find out, if the DC is virtualized it apparently can't disable this through the host. If you try to disable it manually you get a nice little error that says "your disk may not support this" or something like that, and to top it off, if you go to Event Viewer you see a yellow warning saying write caching is enabled and AD corruption may occur.

Interesting.

SO, here is what we did: bought a fresh new battery for the RAID card, made sure the UPS can support the server for at least 20 minutes under load, and will be making sure that if there is a power fault lasting more than a minute or so the server shuts down gracefully. In the event that the UPS can't keep the server up long enough, the battery on the RAID card will preserve the cache until power comes back.
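The "shut down gracefully after a minute or so on battery" part is easy to script; how you detect "on battery" depends entirely on your UPS, so the check below (psutil seeing the UPS as a Windows battery, which most USB-connected units present themselves as) is an assumption to swap out for your vendor's tooling if needed:

import subprocess
import time
import psutil   # pip install psutil

GRACE_SECONDS = 90   # how long to ride out a power blip before shutting down

def on_battery():
    # Works if the UPS shows up to Windows as a battery; otherwise replace
    # this with your UPS vendor's status tool or an apcupsd/SNMP query.
    batt = psutil.sensors_battery()
    return batt is not None and not batt.power_plugged

def watch():
    outage_started = None
    while True:
        if on_battery():
            outage_started = outage_started or time.time()
            if time.time() - outage_started >= GRACE_SECONDS:
                # Graceful Windows shutdown; the H700's battery protects any
                # cached writes that haven't hit the disks yet.
                subprocess.run(["shutdown", "/s", "/t", "60",
                                "/c", "UPS on battery, shutting down"], check=True)
                return
        else:
            outage_started = None
        time.sleep(10)

if __name__ == "__main__":
    watch()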

 

Interesting findings. This is all new to me, but I feel like I'm learning it really fast, and these aren't things I could really find anywhere online, so in case anyone doesn't know, I wanted to share my results!

 

I know my posts are really long and drawn out, but I like to be detailed.

 

Thanks for listening!

 

Happy New Year, all! :D

