veryDIMM

RAID for 8 1TB Hard Drives


Posted · Original Poster (OP)

Hi All,

 

I have just bought eight 1TB HDDs for my new storage server. It's running Windows Server 2016, and I'm trying to set the drives up as a software RAID 5 array. I started the process, but it seems to be taking forever: after more than 20 hours it is still at 14% formatting. Is there anything I can do to make it faster, and is it normal for it to take this long?

 

Thanks,

Cooper

Posted · Original Poster (OP)

That is not the point. The reasons for having 8 x 1TB are:

1. They are cheaper where I am.

2. Redundancy.

It also means that if and when I need more space, I can buy drives one at a time instead of having to buy a single, much more expensive drive.

 

But I did consider doing that at one stage.

23 minutes ago, UrbanFreestyle said:

I think that sounds like too long. I put 4 x 2TB drives into a RAID 5 and it took less than half an hour.

I have never used Windows Server, nor RAID under Windows, so I have no idea how fast or slow it is there, but one thing comes to mind: when you create a software RAID under Windows Server, do you get the option of a quick format versus a full one? Perhaps that could explain the difference between your speed and the OP's?


Hand, n. A singular instrument worn at the end of the human arm and commonly thrust into somebody’s pocket.

Posted · Original Poster (OP)
2 minutes ago, WereCatf said:

I have never used Windows Server, nor RAID under Windows, so I have no idea how fast or slow it is there, but one thing comes to mind: when you create a software RAID under Windows Server, do you get the option of a quick format versus a full one? Perhaps that could explain the difference between your speed and the OP's?

I have not ticked that box. I made sure everything was optimised for the quickest format. Still no luck...

35 minutes ago, veryDIMM said:

I have just bought eight 1TB HDDs for my new storage server. It's running Windows Server 2016, and I'm trying to set the drives up as a software RAID 5 array. I started the process, but it seems to be taking forever: after more than 20 hours it is still at 14% formatting. Is there anything I can do to make it faster, and is it normal for it to take this long?

8 drives is already so many that, personally, I feel one should rather go with RAID6 -- you'd sacrifice two drives' worth of capacity for parity, but you'd also survive two drives failing. This is important, because if one drive failed, you replaced it and started resilvering, and another drive went bad mid-resilver... well, you'd be hosed. Resilvering takes fucking forever with this many drives, so it wouldn't be far-fetched to imagine another drive failing while it runs.

 

I'm not telling you to go for RAID6, but it's something you should possibly think about.
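To put rough numbers on that trade-off (a simple sketch ignoring filesystem overhead; the helper below is purely illustrative, not any real tool):

```python
def raid_capacity(n_drives, drive_tb, parity_drives):
    """Usable space and fault tolerance for a simple striped-parity layout."""
    usable_tb = (n_drives - parity_drives) * drive_tb
    return usable_tb, parity_drives

# 8 x 1TB drives: RAID5 spends one drive's worth on parity, RAID6 two
print(raid_capacity(8, 1, 1))  # -> (7, 1): 7 TB usable, survives 1 failure
print(raid_capacity(8, 1, 2))  # -> (6, 2): 6 TB usable, survives 2 failures
```

So RAID6 costs one more terabyte up front but doubles the number of simultaneous failures the array can ride out.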


Posted · Original Poster (OP)
3 minutes ago, WereCatf said:

8 drives is already so many that, personally, I feel one should rather go with RAID6 -- you'd sacrifice two drives' worth of capacity for parity, but you'd also survive two drives failing. This is important, because if one drive failed, you replaced it and started resilvering, and another drive went bad mid-resilver... well, you'd be hosed. Resilvering takes fucking forever with this many drives, so it wouldn't be far-fetched to imagine another drive failing while it runs.

 

I'm not telling you to go for RAID6, but it's something you should possibly think about.

Thanks for the suggestion - it's a good consideration. I'm not sure whether Windows Server has that functionality.


Did you test the drives individually before putting them in RAID? The only reason I can think of, other than a normally slow format, is a dying drive dragging the whole thing down. I've had drives like that myself: they worked, but only at under 10 MB/s. After 20 hours, I'd say you've got a bad drive in there.


I have no signature


You don't want to do this. Get a hardware RAID card!

 

8TB will take forever to rebuild on Windows software RAID (especially if you have a low-end CPU)... and when it does, you'll most likely end up with another drive dead (since they're probably from the same batch) and then lose the whole array.

 

For that many drives, I would use RAID6 or RAID5 + 1 hot spare... or, if you want performance, go RAID10... on a real RAID card (or go unRAID / ZFS or something like that).
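The "second drive dies during the rebuild" risk can be sketched with the usual back-of-envelope URE model. This assumes independent bit errors at the 10^-14 per-bit rate commonly quoted for consumer drives; real-world failures are correlated, so treat the number as illustrative only:

```python
import math

def p_read_error(read_bytes, ure_per_bit=1e-14):
    """Probability of at least one unrecoverable read error (URE) while
    reading read_bytes, assuming independent bit errors at the quoted rate."""
    return 1 - math.exp(-read_bytes * 8 * ure_per_bit)

# Rebuilding a degraded 8-drive RAID5 means reading the 7 surviving
# 1TB drives end to end
print(f"{p_read_error(7 * 10**12):.0%}")  # roughly 43%
```

Even if the model overstates things, it shows why single parity gets uncomfortable as arrays grow.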

9 hours ago, veryDIMM said:

Hi All,

 

I have just bought eight 1TB HDDs for my new storage server. It's running Windows Server 2016, and I'm trying to set the drives up as a software RAID 5 array. I started the process, but it seems to be taking forever: after more than 20 hours it is still at 14% formatting. Is there anything I can do to make it faster, and is it normal for it to take this long?

 

Thanks,

Cooper

First question: Did you perform a Quick Format or a Full Format? A Full Format could take days for 8TB, though only 14% after 20 hours seems... too slow.
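For scale, here is the finish time implied by the numbers in the OP, next to a naive full-format estimate (the ~130 MB/s sustained write speed is an assumption):

```python
# 14% done after 20 hours implies a total of:
print(round(20 / 0.14))  # -> 143 hours, i.e. roughly 6 days

# Naive estimate for zeroing 7TB of usable RAID5 space at ~130 MB/s
secs = 7 * 10**12 / (130 * 10**6)
print(round(secs / 3600))  # -> 15 hours
```

Even a pessimistic full-format estimate comes in around a day, so ~6 days suggests something else (parity computation overhead, or a sick drive) is throttling the process.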

 

Next: I'd suggest you switch to RAID6 (or Dual Parity) for an array that large.


For Sale (lots of stuff):

Spoiler

[FS] [CAD] Various things

 

 

* Intel i7-4770K * ASRock Z97 Anniversary * 16GB RAM * 750w Seasonic Modular PSU *

* Crucial M4 128GB SSD (Primary) * Hitachi 500GB HDD (Secondary) *

* Gigabyte HD 7950 WF3 * SATA Blu-Ray Writer * Logitech g710+ * Windows 10 Pro x64 *

 

4 hours ago, nka said:

You don't want to do this. Get a hardware RAID card!

Why....?

 

He's using Storage Spaces - this is the Windows equivalent of ZFS. It works very well these days (parity performance is a bit lackluster, but nothing that would be a deal-breaker for most people).

4 hours ago, nka said:

8TB will take forever to rebuild on Windows software RAID (especially if you have a low-end CPU)... and when it does, you'll most likely end up with another drive dead (since they're probably from the same batch) and then lose the whole array.

8TB will take forever to rebuild on pretty much any platform.

4 hours ago, nka said:

For that many drives, I would use RAID6 or RAID5 + 1 hot spare... or, if you want performance, go RAID10...

Agree.

4 hours ago, nka said:

on a real RAID Card

Unnecessary

4 hours ago, nka said:

(or go unRAID / ZFS or something like this).

unRAID is meh - lots of people like it, but it works fundamentally very differently from ZFS, Storage Spaces, or hardware RAID. ZFS is functionally very similar to Storage Spaces.



If you are using the RAID creation tool in Disk Management, it not only creates the RAID5 array, it also writes zeros to every drive in the array.

With 8 x 1TB SATA drives, which are slow anyway, this process will take a large number of hours.

Open a command prompt as an admin and run:
diskperf -y

 

Open Task Manager (if you had it open previously, you will need to re-open it).

Hit the Performance tab.

You should see all 8 disks under the exact same load (assuming the process hasn't finished yet).

 

Also, I would NOT recommend using the built-in RAID creation utility; it's prone to serious problems, which is part of why Storage Spaces exists.

Kill off the RAID5 you have created and use Storage Spaces to create a parity storage pool instead. 8 SATA disks in 'RAID5' is going to be absolutely abysmal for write performance, so do not expect any significant IOPS out of the array. Sequential read speeds will be good, sequential write speeds will be 'acceptable', but any random disk I/O is going to make it crawl with no RAID cache present.
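That write-performance warning follows from the classic parity write penalty: with no write cache, each small random write costs roughly four physical I/Os (read old data, read old parity, write new data, write new parity). A rough model (the ~75 IOPS per SATA disk figure is an assumption):

```python
def raid5_random_write_iops(n_drives, iops_per_drive, write_penalty=4):
    """Rough small-random-write IOPS for parity RAID with no write cache:
    each logical write costs ~4 physical I/Os."""
    return n_drives * iops_per_drive / write_penalty

# 8 SATA disks at ~75 IOPS each: the whole array manages about 150 IOPS
print(raid5_random_write_iops(8, 75))  # -> 150.0
```

That is barely two disks' worth of random write throughput from eight spindles, which is why a write cache (or a non-parity layout) matters so much for random I/O.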


Please quote or tag me if you need a reply

7 hours ago, dalekphalm said:

He's using Storage Spaces

He never said that, though; from reading the post it sounds like he's talking about the Windows Server RAID-5 functionality, which can be quite horrible when something goes wrong, and which doesn't support two-disk parity - something you should really be using with more than 5 physical disks.

 

I hope OP also realises that when he does swap the disks for larger ones, he has to replace all of them before he can expand the logical volume.

Since he lives in Australia, it probably would have been better to go for 4 x 3TB drives instead - with a RAID card or Storage Spaces, you can add drives one at a time and redistribute the data to gain space.
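A quick comparison of the two layouts on single-parity usable space (a simple model, ignoring formatting overhead):

```python
def raid5_usable_tb(n_drives, drive_tb):
    """Single-parity usable capacity: one drive's worth goes to parity."""
    return (n_drives - 1) * drive_tb

print(raid5_usable_tb(8, 1))  # -> 7: 8 x 1TB gives 7 TB usable
print(raid5_usable_tb(4, 3))  # -> 9: 4 x 3TB gives 9 TB from half the bays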


Spoiler

Intel i7 3770K @ 4.6ghz | EVGA Z77 FTW | 2 x EVGA GTX1070 FTW | 32GB (4x8GB) Corsair Vengeance DDR3-1600 | Corsair H105 AIO, NZXT Sentry 3, Corsair SP120's | 2 x 256GB Samsung 850EVO, 4TB WD Black | Phanteks Enthoo Pro | OCZ ZX 1250w | Samsung 28" 4K Display | Ducky Shine 3 Keyboard, Logitech G502, MicroLab Solo 7C Speakers, Razer Goliathus Extended, X360 Controller | Windows 10 Pro | SteelSeries Siberia 350 Headphones

 

Spoiler

Corsair 400R, IcyDock MB998SP & MB455SPF, Seasonic X-Series 650w PSU, 2 x Xeon E5540's, 24GB DDR3-ECC, Asus Z8NA-D6C Motherboard, AOC-SAS2LP-MV8, LSI MegaRAID 9271-8i, RES2SV240 SAS Expander, Samsung 840Evo 120GB, 2 x 8TB Seagate Archives, 12 x 3TB WD Red

 

3 hours ago, Jarsky said:

He never said that, though; from reading the post it sounds like he's talking about the Windows Server RAID-5 functionality, which can be quite horrible when something goes wrong, and which doesn't support two-disk parity - something you should really be using with more than 5 physical disks.

Fair enough - when I read the OP saying he was running a Windows storage server, I assumed he was running Storage Spaces.

 

I wouldn't use any Windows RAID functionality outside of Storage Spaces. If I'm not running Storage Spaces, I'm running a proper hardware RAID card w/ Cache and BBU.

3 hours ago, Jarsky said:

I hope OP also realises that when he does swap the disks for larger ones, he has to replace all of them before he can expand the logical volume.

Since he lives in Australia, it probably would have been better to go for 4 x 3TB drives instead - with a RAID card or Storage Spaces, you can add drives one at a time and redistribute the data to gain space.

Good consideration. Have a plan for expanding - otherwise it may be a pain when it actually comes time to put new disks in.


For Sale (lots of stuff):

Spoiler

[FS] [CAD] Various things

 

 

* Intel i7-4770K * ASRock Z97 Anniversary * 16GB RAM * 750w Seasonic Modular PSU *

* Crucial M4 128GB SSD (Primary) * Hitachi 500GB HDD (Secondary) *

* Gigabyte HD 7950 WF3 * SATA Blu-Ray Writer * Logitech g710+ * Windows 10 Pro x64 *

 


Please don't use RAID 5... far, far too easy to lose data. At least go RAID 6 with that many drives.


Use this guide to fix text problems in your post. Go here and here for all your power supply needs.

 

New Build Currently Under Construction! See here!!!! -----> 

 

Spoiler

Deathwatch:[CPU I7 4790K @ 4.5GHz][RAM TEAM VULCAN 16 GB 1600][MB ASRock Z97 Anniversary][GPU XFX Radeon RX 480 8GB][STORAGE 250GB SAMSUNG EVO SSD Samsung 2TB HDD 2TB WD External Drive][COOLER Cooler Master Hyper 212 Evo][PSU Cooler Master 650M][Case Thermaltake Core V31]

Spoiler

Cupid:[CPU Core 2 Duo E8600 3.33GHz][RAM 3 GB DDR2][750GB Samsung 2.5" HDD/HDD Seagate 80GB SATA/Samsung 80GB IDE/WD 325GB IDE][MB Acer M1641][CASE Antec][[PSU Altec 425 Watt][GPU Radeon HD 4890 1GB][TP-Link 54MBps Wireless Card]

Spoiler

Carlile: [CPU 2x Pentium 3 1.4GHz][MB ASUS TR-DLS][RAM 2x 512MB DDR ECC Registered][GPU Nvidia TNT2 Pro][PSU Enermax][HDD 1 IDE 160GB, 4 SCSI 70GB][RAID CARD Dell Perc 3]

Spoiler

Zeonnight [CPU AMD Athlon x2 4400][GPU Sapphire Radeon 4650 1GB][RAM 2GB DDR2]

Spoiler

Server [CPU 2x Xeon L5630][PSU Dell Poweredge 850w][HDD 1 SATA 160GB, 3 SAS 146GB][RAID CARD Dell Perc 6i]

Spoiler

Kero [CPU Pentium 1 133Mhz] [GPU Cirrus Logic LCD 1MB Graphics Controller] [Ram 48MB ][HDD 1.4GB Hitachi IDE]

Spoiler

Mining Rig: [CPU Athlon 64 X2 4400+][GPUS 9 RX 560s, 2 RX 570][HDD 160GB something][RAM 8GBs DDR3][PSUs 1 Thermaltake 700w, 2 Delta 900w 120v Server modded]

RAINBOWS!!!

 

 QUOTE ME SO I CAN SEE YOUR REPLIES!!!!

4 hours ago, Jarsky said:

I hope OP also realises that when he does swap the disks for larger ones, he has to replace all of them before he can expand the logical volume.

Stuff like this is why I love the flexibility of Btrfs and ZFS: with Btrfs RAID5, I only need to swap two drives to get the expanded space, and from there on I can swap one drive at a time. It lets me mix and match drives of different sizes, as long as the two largest drives are the same size. With Btrfs RAID6, you need three.


2 minutes ago, WereCatf said:

Stuff like this is why I love the flexibility of Btrfs and ZFS: with Btrfs RAID5, I only need to swap two drives to get the expanded space, and from there on I can swap one drive at a time. It lets me mix and match drives of different sizes, as long as the two largest drives are the same size. With Btrfs RAID6, you need three.

Storage Spaces on Windows lets you do this as well, so OP can do this with his current RAID setup.

Just now, Electronics Wizardy said:

Storage Spaces on Windows lets you do this as well, so OP can do this with his current RAID setup.

Ah, okay. I was just going by what @Jarsky said; I have no experience with RAID under Windows. Well, that makes Storage Spaces more appealing than I thought.


1 minute ago, WereCatf said:

Ah, okay. I was just going by what @Jarsky said; I have no experience with RAID under Windows. Well, that makes Storage Spaces more appealing than I thought.

Yeah, Storage Spaces is pretty powerful if you know how to use it: tiering, clustering, mixed RAID levels in one pool, easy addition and removal of drives.

7 hours ago, WereCatf said:

Stuff like this is why I love the flexibility of Btrfs and ZFS: with Btrfs RAID5, I only need to swap two drives to get the expanded space, and from there on I can swap one drive at a time. It lets me mix and match drives of different sizes, as long as the two largest drives are the same size. With Btrfs RAID6, you need three.

ZFS doesn't allow you to do this? If you want to expand a ZFS pool that uses raidz, you need to create another raidz vdev and add it to the pool...

You also can't mix different-sized drives within a single vdev in the pool.

 

Storage Spaces with ReFS does pretty much everything BTRFS does. 

 


1 minute ago, Jarsky said:

ZFS doesn't allow you to do this?

I would assume it does. AFAIK ZFS and Btrfs are pretty similar in their capabilities. That said, I've never used ZFS.


Just now, WereCatf said:

I would assume it does. AFAIK ZFS and Btrfs are pretty similar in their capabilities. That said, I've never used ZFS.

They have some similarities, but ZFS works quite differently. Btrfs and ReFS Storage Spaces have more in common with each other.

ZFS is highly scalable, with its replication and the way pools can be built out in large-scale deployments. The shortcomings I mentioned above are issues you run into with home use.


1 minute ago, Jarsky said:

They have some similarities, but ZFS works quite differently. Btrfs and ReFS Storage Spaces have more in common with each other.

ZFS is highly scalable, with its replication and the way pools can be built out in large-scale deployments. The shortcomings I mentioned above are issues you run into with home use.

Well, okay then! Thanks for clarifying.


