JCBiggs

HBA and Disk Shelf for home?


Posted · Original Poster

I acquired an older Dell Precision that has a couple of Xeons in it. I'm going to repurpose it for home server use.

I'd like to buy a disk shelf - something like a 2U unit, maybe 20 bays? But I am trying to figure out a couple of things.

 

1) What's the best way to set this up, RAID-wise, to make it easy to add drives in the future? Set it up as JBOD? Or as some sort of cluster? (I don't want to use Unraid.)

2) Is it possible to find an HBA that will run at 12 Gb/s on a Gen 2 PCIe link? I'm assuming there should be, since several of them have much higher throughput than 12 Gb/s.

3) I'm thinking about starting with 4 EXOS 8TB drives. Any reason not to use SAS drives other than cost? I'll have about 20 people hitting the server for file uploads/sharing, and a couple hundred email accounts running through it.

 

 

2 hours ago, JCBiggs said:

I acquired an older Dell Precision that has a couple of Xeons in it. I'm going to repurpose it for home server use.

I'd like to buy a disk shelf - something like a 2U unit, maybe 20 bays? But I am trying to figure out a couple of things.

I use a Dell MD1200 at home - it's old repurposed server equipment. It's paired with a Dell T410 and an LSI HBA.

2 hours ago, JCBiggs said:

1) What's the best way to set this up, RAID-wise, to make it easy to add drives in the future? Set it up as JBOD? Or as some sort of cluster? (I don't want to use Unraid.)

There are numerous ways to do this. The primary consideration is:

Hardware RAID or Software RAID?

 

If you use hardware RAID, the process is usually pretty simple, assuming your card has this capability (most proper LSI RAID cards do) - you add in more drives, then go into the BIOS RAID utility and expand the array.

 

If you use software RAID, how you do this - and how easy it is - will depend entirely on the software solution used.

 

Linux software RAID (mdadm) supports expanding by "growing":

https://raid.wiki.kernel.org/index.php/Growing

 

Windows Storage Spaces can do something similar.

 

ZFS, on the other hand, currently does not support growing RAID arrays - this is something the developers are actively working on. Apparently it's in alpha preview, but I haven't been keeping track:

https://github.com/openzfs/zfs/pull/8853

 

On ZFS (or basically any other RAID system that supports nested RAID), you can effectively create a larger volume by creating a second pool and then spanning the two pools together into a bigger volume.

 

This is like taking a RAID1 pool, creating a second RAID1 pool, and combining them into what effectively becomes a RAID10 pool.

 

Alternatively, you can do the old-school method: pull a drive, pop a larger drive in, rebuild, repeat for every drive, then expand the volume to use the extra space. This approach sucks - it can literally take days or weeks to complete, and the risk of failure is significant.
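A quick back-of-the-envelope sketch puts numbers on that claim (the 8 TB drive size and ~150 MB/s sustained rebuild rate below are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope time for the "replace one drive, rebuild, repeat" upgrade.
# Assumed figures: 8 TB drives, ~150 MB/s sustained rebuild throughput, 4 drives.
drive_bytes = 8 * 10**12
rebuild_rate = 150 * 10**6        # bytes/s, an optimistic sustained rebuild rate
drives = 4

hours_per_drive = drive_bytes / rebuild_rate / 3600
total_days = hours_per_drive * drives / 24

print(f"~{hours_per_drive:.0f} h per rebuild, ~{total_days:.1f} days for {drives} drives")
```

And every one of those hours is spent with the array degraded or resyncing, which is where the risk comes from.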

 

2 hours ago, JCBiggs said:

2) Is it possible to find an HBA that will run at 12 Gb/s on a Gen 2 PCIe link? I'm assuming there should be, since several of them have much higher throughput than 12 Gb/s.

I do know that there are plenty of 12 Gb/s HBAs, but I'm not aware of any PCIe Gen2 ones - all the LSI 9xxx Gen2 PCIe cards are 6 Gb/s as far as I'm aware.
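A rough sketch of the arithmetic suggests why 12 Gb/s HBAs tend to pair with Gen3: an x8 Gen2 host link can't feed all ports at full tilt. The 8-port card and the per-lane figures below are illustrative assumptions:

```python
# Why 12 Gb/s SAS HBAs generally pair with PCIe Gen3: a Gen2 x8 host link
# can't feed every port at full speed at once.
GEN2_LANE_MB_S = 500                      # Gen2: 5 GT/s with 8b/10b -> ~500 MB/s/lane
pcie_gen2_x8 = 8 * GEN2_LANE_MB_S / 1000  # GB/s across an x8 link

sas_ports = 8                             # assumed 8-port HBA
sas12_gb_s = 12 / 8                       # 12 Gb/s -> 1.5 GB/s per port (raw, pre-encoding)
sas_aggregate = sas_ports * sas12_gb_s    # GB/s with every port busy

print(f"PCIe Gen2 x8: ~{pcie_gen2_x8:.0f} GB/s; 8 x 12G SAS: ~{sas_aggregate:.0f} GB/s raw")
```

With spinning disks you'd never hit that aggregate anyway; it only starts to matter once SSDs enter the picture.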

2 hours ago, JCBiggs said:

3) I'm thinking about starting with 4 EXOS 8TB drives. Any reason not to use SAS drives other than cost? I'll have about 20 people hitting the server for file uploads/sharing, and a couple hundred email accounts running through it.

SAS drives are generally regarded as superior to SATA drives - especially in a RAID environment. The main reason most people don't use them is cost, plus the benefits are diminishing outside of an enterprise environment.

 

In your case, if you can afford SAS drives? Go for it. I use SATA drives for all my arrays at home (outside of a pair of SAS 15K drives used in RAID1 as my ESXi boot drive) and have no problems.


For Sale - iPhone SE 32GB - Unlocked - Rose Gold (Sold)

* Intel i7-4770K * ASRock Z97 Anniversary * 16GB RAM * 750w Seasonic Modular PSU *
* Crucial M4 128GB SSD (Primary) * Hitachi 500GB HDD (Secondary) *
* Gigabyte HD 7950 WF3 * SATA Blu-Ray Writer * Logitech g710+ * Windows 10 Pro x64 *

Posted · Original Poster
22 hours ago, dalekphalm said:

I do know that there are plenty of 12 Gbps HBAs but I'm not aware of any Gen2 PCIe ones - all the 9xxx LSI Gen2 PCIe cards are 6 Gbps as far as I'm aware.

Aren't most PCIe Gen 3 cards backwards compatible?

2 minutes ago, JCBiggs said:

Aren't most PCIe Gen 3 cards backwards compatible?

Yes - basically any Gen3 card should work in a Gen2 slot - GPUs are used this way all the time.

 

You'll just be limited to the Gen2 link speed - which may or may not matter (you'd have to do the math).
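A minimal sketch of that math, assuming an x8 card (PCIe Gen2 runs 5 GT/s per lane with 8b/10b encoding, so roughly 500 MB/s usable per lane):

```python
# Does a Gen2 x8 link bottleneck a gigabit-Ethernet file server?
gen2_x8_mb_s = 8 * 500        # ~4000 MB/s usable over PCIe Gen2 x8
gige_mb_s = 1000 / 8          # gigabit Ethernet: 125 MB/s before protocol overhead

headroom = gen2_x8_mb_s / gige_mb_s
print(f"PCIe Gen2 x8 offers ~{headroom:.0f}x the bandwidth of a gigabit link")
```

So for a gigabit network, the host link is nowhere near the bottleneck; the math only gets interesting with 10 GbE or local workloads.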



Posted · Original Poster
7 minutes ago, dalekphalm said:

Yes - basically any Gen3 card should work in a Gen2 slot - GPUs are used this way all the time.

 

You'll just be limited to the Gen2 link speed - which may or may not matter (you'd have to do the math).

I don't have it right in front of me, but the card I was looking at was Gen 3 and capable of around 5 GB/s total. So if I just assume I cut that in half and knock it down to 2.5 GB/s, that's still more than enough to saturate my gigabit connection. I think I'll grab a card and see what happens. I'm honestly only interested in 12 Gb/s to make SSDs worthwhile; I'm leaning towards just buying an SSD disk shelf from the start. Seems like SSDs are gonna at least catch up to HDDs in GB per dollar in the next several years.

1 hour ago, JCBiggs said:

I don't have it right in front of me, but the card I was looking at was Gen 3 and capable of around 5 GB/s total. So if I just assume I cut that in half and knock it down to 2.5 GB/s, that's still more than enough to saturate my gigabit connection. I think I'll grab a card and see what happens. I'm honestly only interested in 12 Gb/s to make SSDs worthwhile; I'm leaning towards just buying an SSD disk shelf from the start. Seems like SSDs are gonna at least catch up to HDDs in GB per dollar in the next several years.

For large data pools, SSDs are still significantly more expensive - even more so if you're looking at datacenter-grade SSDs (since you said you were looking at SAS HDDs, keeping the comparison fair).

 

Obviously it's your choice if you want to go that route. Even on a 6 Gb/s card, SSDs will still be plenty fast, but only you can decide whether that's fast enough.



Posted · Original Poster
15 hours ago, dalekphalm said:

For large data pools, SSDs are still significantly more expensive - even more so if you're looking at datacenter-grade SSDs (since you said you were looking at SAS HDDs, keeping the comparison fair).

 

Obviously it's your choice if you want to go that route. Even on a 6 Gb/s card, SSDs will still be plenty fast, but only you can decide whether that's fast enough.

I'm basically looking at triple redundancy for about 5 terabytes of data, scaling up to 20 over the next 3-4 years. My work just doesn't generate data that quickly. I'm gonna go ahead and go with SSDs - I was looking at the cost last night, and it's more expensive, but it's not out-of-this-world expensive. Less noise, heat, and vibration, and a smaller package, is worth it to me. I'll update the server at some point.
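A hypothetical sketch of what that costs in raw drives, assuming "triple redundancy" means three copies of every block (3-way mirror groups) and 8 TB disks - both assumptions, since the actual layout and drive size weren't specified:

```python
import math

# Raw drive count for "triple redundancy": every block stored on 3 disks
# (3-way mirror groups). Drive size and layout are illustrative assumptions.
drive_tb = 8
copies = 3

for usable_tb in (5, 20):
    groups = math.ceil(usable_tb / drive_tb)   # mirror groups needed
    drives = groups * copies
    print(f"{usable_tb} TB usable -> {groups} group(s), {drives} x {drive_tb} TB drives")
```

A parity layout like RAID-Z2/Z3 would use less raw space for comparable protection; the sizes here are just placeholders for the arithmetic.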

