
Old SATA 2 drives for RAID - is there a point?

At my work, they're throwing away a bunch of old server hard drives (the drives were used in a server/storage bank, but they're not necessarily server-grade drives). There are a few 160 GB SATA 2 drives (manufactured in 2010). My question is: would it be useful to put these drives in something like a RAID 0 array? Apart from the basics of array configurations, I'm completely new to RAID. SATA 2 is slow, but could an array with a few of these drives achieve usable speeds? I know reliability would be low, especially since they're used drives, but if I were to do this, I would use it for storing files that I mostly wouldn't use anyway. Is a project like this worth it, or pointless?

Thank you for your help.


If you want to learn how to get that sort of thing set up, then go for it.

 

As you have mentioned though, with the age/usage of the drives, I wouldn't rely on them for anything important, which makes their use case for anything other than a learning exercise fairly non-existent.



RAID 0, 1, and 5 are probably the most common.

RAID 0 means data is split (striped) across the drives, so with two drives you theoretically get twice the speed. But if one drive dies, all your data is kaput. The usable capacity of a two-drive RAID 0 volume is 2x the lowest-capacity drive.

A single drive from 2010 is probably good for around 50 MB/s.
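
To make the striping idea concrete, here's a toy Python sketch (not real RAID; the 64 KiB chunk size and two-drive layout are just assumptions for illustration):

```python
# Toy illustration of RAID 0 striping -- not a real implementation.
# Chunks alternate across the drives, so transfers can hit both disks
# at once, but losing either "drive" destroys every file on the volume.
CHUNK = 64 * 1024  # 64 KiB stripe size, an arbitrary assumption

def stripe(data: bytes, n_drives: int = 2) -> list[bytearray]:
    drives = [bytearray() for _ in range(n_drives)]
    for i in range(0, len(data), CHUNK):
        drives[(i // CHUNK) % n_drives] += data[i:i + CHUNK]
    return drives

volume = stripe(b"x" * (1024 * 1024))  # a 1 MiB "file"
print([len(d) for d in volume])        # half the chunks land on each drive
# Usable capacity is n_drives * the smallest drive; lose one drive and
# every other chunk of every file is gone.
```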


SATA 2 will not be a bottleneck for these slower mechanical drives. RAID 0 gets you close to n times the speed as the number of drives n increases.

However, if you were to put them into a RAID 0 array, you still have to consider the platform cost, i.e., whether the motherboard supports native RAID or whether you need a RAID controller. Also, using used drives in a RAID configuration, particularly your desired RAID 0 "striping" configuration, poses a much greater threat to data integrity, since RAID 0 is all about speed with no redundancy.

If you are going to store non-essential files like games, you can go the RAID 0 route to increase speed and storage space with those old drives.

"Mankind’s greatest mistake will be its inability to control the technology it has created."


Not really. Just one or two SATA 3 drives can beat the SATA 2 drives, and you'd need a whole lot of them in RAID 0 to match or beat SATA 3 speeds. In RAID 0, if one drive fails, all the data is gone, and being SATA 2 and most likely used or dated, these drives have been ready to fail for some time now. I'd say skip the whole SATA 2 idea. The PC market left SATA 2 behind years ago for SATA 3, and now we're leaving SATA 3 for NVMe, so SATA as a whole is starting to show its age too.


I am not sure there are any mechanical drives that can take advantage of SATA 3. The speed improvements come more from the technology in the drives, including platter density, plus the implementation of the SATA interface itself.

It could be fun to screw around with RAID on drives you don't really care about, and the price is right, but if you need the performance, there are better options out there.

If we were talking about SATA 1, then you'd see a bottleneck.
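
A rough back-of-the-envelope comparison (the ~100 MB/s figure is what these drives list for sustained transfers; the SATA caps are the usual post-encoding numbers):

```python
# Approximate usable SATA bandwidth after encoding overhead, in MB/s.
sata_caps = {"SATA 1": 150, "SATA 2": 300, "SATA 3": 600}
sustained = 100  # MB/s, roughly what these 2007-2010 drives list

for gen, cap in sata_caps.items():
    print(f"{gen}: {cap - sustained} MB/s headroom over a {sustained} MB/s drive")
# SATA 1 leaves only ~50 MB/s of margin (and bursts from the drive's
# cache would hit the cap); SATA 2 and 3 are nowhere near a bottleneck
# for a single mechanical drive like these.
```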


160 GB from 2010, I would expect it to be a 10K or 15K RPM 2.5" disk; that capacity seems a bit small to have been sold new back then.

RAID 0 is for performance with zero redundancy, so if you're going to put reproducible data on there, I'd say why not (games / movies / music, all of which you can re-download).

I wouldn't bother mirroring them; 160 GB is small enough that you wouldn't be using them as a primary disk anyway. If you have some documents you want a backup of, set up Veeam Agent for Windows to back them up nightly (or whatever) to the 160s in RAID 0.


Thank you all for your help so far.

 

I have now checked all the drives. They're not actually all from 2010; some are from 2008, but most are from 2007. There are 11 of them, all 160 GB, as follows:

 

- 2x WD1600AAJS (2010)

- 7x ST3160812AS (2007)

- 1x ST3160815AS 

- 1x WD1600AAJS (2008)

 

They're all SATA 2, 7200 RPM with 8 MB of buffer, and they all have listed internal data/transfer rates of 100 MB/s. That should be their max speed (maybe less in a real-world scenario, and not the same for reads and writes), right?

My question is: how can I expect these speeds to stack up in a RAID 0 or RAID 10 configuration? In RAID 0, are the real-world speeds of the array actually n (number of drives) times the real-world speed of an individual drive, assuming all drives perform identically? Or, for example, if I took 8 drives and put them in RAID 10, would the array perform 4 times as fast as a single RAID 1 pair (so if one RAID 1 pair could read at 100 MB/s, could the entire RAID 10 array read at 400 MB/s)? Or are there drop-offs?

 

I've checked my computer, and my motherboard (MSI Z97-G45) supports RAID 0, 1, 5 and 10. It has 6 SATA 3 ports, 4 of which are available, and I can fit 9 more drives in my case. It's here that I obviously run into an issue: how can I connect more than 4 drives to my motherboard? I'm trying to do this as cheaply as possible. Maybe I could get some "cards" or something with SATA 3 on one end and multiple SATA 2 ports on the other? I've looked into this a bit, and if I understand correctly, depending on the configuration of such a thing, I might not be able to communicate with all of the drives at once (which would hurt the performance of the RAID array). Another option might be a PCIe SATA card. In that case, would I have any trouble setting up drives connected to that PCIe card and drives connected directly to the motherboard SATA in the same RAID array? What could I do to keep as much performance as possible (I wouldn't want to bottleneck or slow down the array, because the whole point of this is to get these old drives to work a bit faster)? Are there any other areas or components I need to check, or that could be affected by having this array in the system?

 

Also, in terms of reliability, I might do RAID 10. In any case, this would serve as some extra dump storage for stuff I would rather keep, but would otherwise delete if I were forced to free up space. Even if the drives start dying in a couple of months or a year, that's totally fine. Part of why I want to do this is to see how it works, which is also why I don't want to spend too much on the setup.


6 hours ago, Giganizer300PRO said:

In RAID 0, are the real-world speeds of the array actually n (number of drives) times the real-world speed of an individual drive, assuming all drives perform identically?

Yes. The speed of a RAID 0 array increases near-linearly as the number of drives goes up, as Tom's Hardware demonstrated.

[Chart: Tom's Hardware RAID 0 scaling benchmark]

6 hours ago, Giganizer300PRO said:

Or, for example, if I took 8 drives and put them in RAID 10, would the array perform 4 times as fast as a single RAID 1 pair (so if one RAID 1 pair could read at 100 MB/s, could the entire RAID 10 array read at 400 MB/s)? Or are there drop-offs?

Your analogy is correct.

Expect roughly a 10% drop-off due to controller latency and overhead.
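
A quick sketch of that estimate in Python (the 0.9 factor is just the ~10% overhead guess above, not a measured number):

```python
# Rough throughput estimate for arrays of identical drives, applying
# the ~10% controller-overhead guess from above (not a measured figure).
def raid0_mbps(drives: int, per_drive: float, efficiency: float = 0.9) -> float:
    return drives * per_drive * efficiency

def raid10_mbps(drives: int, per_drive: float, efficiency: float = 0.9) -> float:
    # n/2 mirrored pairs striped together; throughput scales with the pairs.
    # (Reads can do somewhat better, since both halves of a mirror serve them.)
    return (drives // 2) * per_drive * efficiency

print(raid0_mbps(8, 100))   # ~720 MB/s for 8 drives in RAID 0
print(raid10_mbps(8, 100))  # ~360 MB/s for 8 drives in RAID 10 (4 pairs)
```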

6 hours ago, Giganizer300PRO said:

How can I connect more than 4 drives to my motherboard?

You will need an 8-port hardware RAID controller card if you want to plug most of the drives into the system in a RAID configuration. (I recommend a second-hand MegaRAID one with some SFF-8643 to SATA cables; more on that later.)

6 hours ago, Giganizer300PRO said:

Another option might be a PCIe SATA card.  In that case, would I have any trouble setting up drives connected to that PCIe card and drives connected directly to the motherboard SATA in the same RAID array?

No, this will not work, as Intel RAID requires the drives to be connected directly to the chipset, i.e., the 6 ports on your motherboard, rather than to any third-party adapter card.

6 hours ago, Giganizer300PRO said:

What could I do to keep as much performance as possible (wouldn't want to bottleneck or slow down the array, because the whole point of this is to get these old drives to work a bit faster)? Are there any other areas or components I need to check or that could be affected by having this array in the system.

If you want to use a RAID card, it's better to put it in a PCIe slot that connects to the CPU instead of the chipset, to avoid the DMI link bottlenecking the throughput (that should be PCI_E2, PCI_E5 or PCI_E7 for your board, depending on your exact configuration; consult your manual).
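
As a sanity check on the DMI concern (nominal numbers only; the link is shared with everything else on the chipset, which is the real reason to prefer a CPU-attached slot):

```python
# Nominal numbers only: Z97's DMI 2.0 link is about 2 GB/s each way,
# shared by SATA, USB, the NIC, etc. Eight of these drives peak well
# below that, but the sharing is why a CPU-attached slot is still safer.
dmi2_mbps = 2000            # ~2 GB/s nominal
drives, per_drive = 8, 100  # MB/s each, per the listed spec
print(f"array peak ~{drives * per_drive} MB/s vs ~{dmi2_mbps} MB/s shared DMI")
```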

You will also need some knowledge of option ROM or command-line operation. (MegaRAID cards have a GUI, MegaRAID Storage Manager, which may be easier for you.)

Also check that your power supply is sufficient to power all the drives and that your case can actually fit all of them.

Use 8 drives with the 8-port RAID card and save 3 drives of the same model as spares; in case of a drive failure, just swap one in.

 

As this is more of a hobby project, it makes sense to get some cheap stuff to work with. I estimate the whole setup will cost you less than 35 EUR (5 EUR for 2 SFF-8643 to SATA cables, 30 EUR for an OEM/used MegaRAID card; there could be better deals).

"Mankind’s greatest mistake will be its inability to control the technology it has created."


On 12/18/2018 at 8:31 PM, SkyHound0202 said:

-snip-

 

Thank you for your help.

I've looked at some used RAID cards. Is there anything I need to watch out for, or is it more or less a matter of whether it has the necessary ports?

Some of the cards I found have SAS SFF-8087 ports. Is there a relevant difference between the 2 ports (I didn't see one)? Is there anything I should watch out for with other SAS connectors?

 

From the cards I found, this one looks very promising: IBM ServeRAID M5015 (an LSI 9260-8i).

I also found some other ones: 

LSI L3-25121-79D 

LSI L3-01144-01D

LSI SAS 9210-8i 

LSI L3-01144-10A 8708EM2 

Is this the right direction?

 

I also found an Intel RAID card (INTEL SRCS28X). This one has SATA ports rather than SAS (I think). Would this also work?


Except for playing around with RAID configurations, it's basically not worth it.

SSDs are so cheap these days that it's easier to just get a 250-512 GB SSD and replace the bunch of drives with it. Or go with 3-4 cheap 128 GB SSDs in RAID 5 with a hot spare, or something like that.

It's the noise, the age, the slowness, the power consumption... The only plus for mechanical drives is that they tend not to die suddenly, so if you monitor the SMART data you can generally avoid outright complete failures (but SSDs are getting better at this these days too).

 


2 hours ago, Giganizer300PRO said:

Is there anything I need to watch out for, or is it more or less a matter of whether it has the necessary ports?

Pay attention to the difference between a host bus adapter (HBA) card and a RAID controller card. Both look similar, yet only the latter has RAID functionality (more on that later).

Depending on the vendor and OEM, driver and documentation support might vary. Broadcom (LSI) has kept archived drivers and firmware around for years; some others may not even bother to update their manuals.

You might want to pick up a RAID card with a battery backup unit if you care about data integrity.

3 hours ago, Giganizer300PRO said:

Is there a relevant difference between the 2 ports (I didn't see one)? Is there anything I should watch out for with other SAS connectors?

SFF-8087 (Mini-SAS) and SFF-8643 (Mini-SAS HD) are different connector types for the same protocol. Each can break out to 4 SATA drives.

3 hours ago, Giganizer300PRO said:

Is this the right direction?

Yes. If you want to plug in most of your drives and build a simple RAID array without using the on-board Intel chipset, then buying a dedicated RAID card is an option.

Or, you can get a cheap PCIe to SATA card (one of those ASMedia HBA cards), hang your existing drives off it, and use the 6 native SATA ports from the Z97 chipset for RAID (albeit only making a 6-disk array).

3 hours ago, Giganizer300PRO said:

LSI SAS 9210-8i 

The LSI SAS 9210-8i is an HBA, not a RAID card. It can be flashed to IT mode to act as a plain pass-through HBA, for example for use in unRAID.

(Speaking of unRAID: it's different from the standard RAID levels and accepts cheap HBA cards, but comes with an additional license cost. Linus uses it a lot, but it might not suit everyone.)

3 hours ago, Giganizer300PRO said:

INTEL SRCS28X

This is a PCI-X card. It's physically incompatible with your system.

"Mankind’s greatest mistake will be its inability to control the technology it has created."


On 12/23/2018 at 8:09 AM, Giganizer300PRO said:

-snip-

Why buy a RAID card? Just use software RAID in your OS: you have Storage Spaces on Windows, and btrfs, ZFS and md on Linux. That should work fine here, may well be faster than a hardware RAID card, and is free to test.
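
For a sense of how little is involved, here's a minimal sketch using Linux md via mdadm, assuming the disks show up as /dev/sdb through /dev/sde (hypothetical device names; --create destroys whatever is on them, and it needs root):

```python
# Minimal sketch: build a 4-drive md RAID 0 on Linux with mdadm.
# Device names are hypothetical; --create WIPES them. Run as root.
import subprocess

drives = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]
subprocess.run(["mdadm", "--create", "--verbose", "/dev/md0",
                "--level=0", f"--raid-devices={len(drives)}", *drives],
               check=True)
subprocess.run(["mkfs.ext4", "/dev/md0"], check=True)  # then mount /dev/md0
```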


  • 2 weeks later...
On 12/24/2018 at 9:17 PM, Electronics Wizardy said:

Why buy a RAID card? Just use software RAID in your OS: you have Storage Spaces on Windows, and btrfs, ZFS and md on Linux. That should work fine here, may well be faster than a hardware RAID card, and is free to test.

I only have 4 SATA ports free on the motherboard. Even if I wanted to test just 4 drives, I'd have to run out and buy SATA cables.


1 minute ago, Giganizer300PRO said:

I only have 4 SATA ports free on the motherboard. Even if I wanted to test just 4 drives, I'd have to run out and buy SATA cables.

But those cables are much cheaper than a RAID card, and there's no reason to add a RAID card if you don't have to.


1 minute ago, Electronics Wizardy said:

But those cables are much cheaper than a RAID card, and there's no reason to add a RAID card if you don't have to.

I want to hook up 8 drives to reach a reasonable capacity and higher speeds. There's no way for me to add 8 drives to my system without adding some other equipment/component.


27 minutes ago, Giganizer300PRO said:

I want to hook up 8 drives to reach a reasonable capacity and higher speeds. There's no way for me to add 8 drives to my system without adding some other equipment/component.

How about this then: https://www.ebay.com/itm/TESTED-Dell-047MCV-8-Port-PERC-H200-6Gb-s-SAS-SATA-PCI-e-x8-RAID-Card-HIGH-PRO/232894442665?_trkparms=aid%3D111001%26algo%3DREC.SEED%26ao%3D1%26asc%3D20160908105057%26meid%3D356ab8ba6ca342cc94ad8f7ed4572def%26pid%3D100675%26rk%3D1%26rkt%3D15%26sd%3D232894442665%26itm%3D232894442665&_trksid=p2481888.c100675.m4236&_trkparms=pageci%3Aff1802e6-0f98-11e9-a526-74dbd1807dc0|parentrq%3A1579f25b1680ac3cb485326affec3406|iid%3A1

It's an HBA, so run software RAID with it, and it supports as many drives as you could ever want in one system.


15 minutes ago, Electronics Wizardy said:

-snip-

Interesting. I'm assuming software RAID would work over PCIe...?

Is software RAID really as effective as controller RAID? What I mean is: will it be nearly the same in this scenario (8 drives, speeds from 54 to 77 MB/s, a hobby project for infrequently used storage), or are there any drawbacks? Is it as smooth as having a RAID card?

 

On 12/23/2018 at 9:02 PM, SkyHound0202 said:

-snip-

 

I also found some port multiplier cards, often at low price points. One converts 1 SATA port into 5 SATA ports and supports FIS-based switching. Could I take this and connect all 8 drives to the motherboard, 5 on the port multiplier and 3 directly, and use the motherboard's integrated Intel RAID or software RAID with the drives?

I read that a port multiplier is transparent to the drives, but the host knows about the multiple drives, and with FIS-based switching it can talk to all of them at once. The controller in question is SATA 2, so does that mean each of the 5 drives connected to it could do up to 60 MB/s, or do real-world speeds vary?
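
The arithmetic behind that last question, sketched out (this assumes all five drives stream at once and share the uplink evenly, which real-world access patterns won't do):

```python
# Five drives behind one SATA 2 (300 MB/s) port-multiplier uplink share
# that single link, so with all five streaming at once each one gets at
# most ~60 MB/s. A single active drive could still use the whole link.
link_mbps = 300        # usable SATA 2 bandwidth after encoding
drives_behind_pm = 5
print(f"~{link_mbps / drives_behind_pm:.0f} MB/s per drive with all active")
```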


1 hour ago, Giganizer300PRO said:

Interesting. I'm assuming software RAID would work over PCIe...?

Is software RAID really as effective as controller RAID? What I mean is: will it be nearly the same in this scenario (8 drives, speeds from 54 to 77 MB/s, a hobby project for infrequently used storage), or are there any drawbacks? Is it as smooth as having a RAID card?

The HBA just shows the disks to the OS, so software RAID will work fine. The OS doesn't care.

Software RAID will work just fine, especially as you're testing and learning. Software RAID normally has more features, is more flexible, and can be faster.

 

1 hour ago, Giganizer300PRO said:

use the motherboard's integrated Intel RAID

Intel motherboard RAID is crap; don't use it, software RAID is much better.

 

1 hour ago, Giganizer300PRO said:

I also found some port multiplier cards, often at low price points. One converts 1 SATA port into 5 SATA ports and supports FIS-based switching. Could I take this and connect all 8 drives to the motherboard,

I'd get an HBA instead. Port multipliers aren't officially supported by every SATA controller and can have issues, and they really aren't much cheaper than an HBA.

 

I'd just test 4 drives first if you want to do it cheaply.

