
External drive array

bigjme

Hey everyone.

So this may be a little bit of a weird question but I'm going to ask it anyway.

I currently have my home server (which is also a gaming rig). I recently switched to a smaller case which has drastically limited my storage options.

My old case had support for 10 hot swap bays; my new one has only 3.

I am now at the point where my 3 HDDs and 2 SSDs are getting rather full and I am feeling the need for more space.

I also currently do not have any RAID set up, so I have no backups, which is a huge concern for me.

Right now I have two 2TB drives and a 4TB drive, but I am looking to increase my storage array to four 6TB WD Reds in RAID 10 or 6. I haven't decided yet. And I want to try and leave the option to go to 8 drives down the line.

Due to case limitations I need to make all my storage external, but as these drives store games, VMs, and other things, I don't want to go to a USB 3 enclosure, never mind the cost of a good 8-bay enclosure.

Space is key for me as I don't have a lot of it, so I am looking to find a way to get an external storage array that I can use as direct attached storage for my server without breaking the bank.

I have looked into a SAS controller paired with an external PC case holding just drives and a PSU, but it seems a little overkill to use an entire PC case and spend so much on extra RAID controllers when my motherboard supports 12 SATA 3 connectors.

So my question is, has anyone seen a method of connecting drives externally to internal headers without using a huge number of eSATA connectors or expensive RAID cards?

I love the fact that SAS can carry 4 SATA channels over a single cable, and if there were a way to do SATA to SAS to SATA with just cables, I would, but I doubt this is possible.

I may be chasing an unachievable dream, but I am really interested to see what others can think of for such a solution, as storage is not something I have spent a huge amount of time looking at.

Regards,

Jamie



There is an easier way: buy some oak and ply, make a ply frame for your drives, all going into a RAID card, with power coming in from a supply. Then finish the box off with oak and stain it :P



A simple thing you could do would be to buy a cheap case that can hold the drives and mod it so that it connects to your main case, then run the SATA cables from your mobo to the second case that just holds HDDs.

 

Depending on your budget, it may be simpler and easier to just get some eBay hardware that will do what you are wanting. An external JBOD enclosure and an enterprise-level RAID card will cost you around $200 total.

 

On a side note, RAID is not a backup.  RAID is hardware fault tolerance, not data backup.  Backup means more than one copy on more than one "system" (be that an actual separate computer or just an external drive used as backup storage).


My important data is backed up to another 2 machines and to external drives, one of which is kept off-site.

I looked at some JBOD externals, but using eSATA on a port multiplier, or USB 3, is going to cripple it. Unless I am misunderstanding your use of the JBOD enclosure?

On a side note, I am in the UK so things tend to be a little expensive here, with a cheap USB 3 8-bay JBOD enclosure being around $500.

Regards,

Jamie


This is the kind of enclosure I am talking about: 16 bays, with a SAS connection allowing dual 3Gbps connections (1 in, 1 out, for daisy chaining). I actually have 2 of these; they are fine as long as you swap the fans for something quieter.

 

145 USD

 

http://www.ebay.com/itm/3U-SGI-Rackable-SE3016-16-Caddies-3-5-SAS-SATA-Storage-Expander-JBOD-NAS-SAN-/131582701980?hash=item1ea2f0b19c

 

 

And this kind of card would be used to connect it to your system. Edit: Be sure to check compatibility between your mobo and the RAID card; this is a big deal for enterprise-level cards and can cause major issues if not properly vetted.

 

50 USD

 

http://www.ebay.com/itm/3Ware-9690sa-8E-RAID-SAS-Controller-Card-with-BBU-Module-FREE-SHIPPING-/301681505698?hash=item463d9e75a2

 

 

 

 

Edit:  I am not endorsing these sellers or specific sales.  Do your research about the seller and product quality before committing to purchase.


Those enclosures look amazing, but they are $700 shipped to the UK, which makes them a big let down.

Even used SAS controllers with external connectors are $200 or more in the UK. I can get ones with internal headers only for around $50, but externals seem to have a huge premium over here.


Have you still got your old case? Just stick some internals in it with RAID support and an OS to serve your needs.


Sadly not. My old case was huge, and I don't want to move all my drives out to a separate machine and be locked to gigabit speeds.


~snip~

 

Hey there bigjme,
 
One option is to safely store the HDDs outside the case while they are still connected to the PSU and the motherboard, and thus keep them as regular internal drives.
If you are going for an external enclosure, do keep in mind that some software programs and games can't be installed on an external drive.
 
Have you considered a NAS solution for remote access to the data that doesn't necessarily need to be stored on your main system? I would suggest checking out WD's NAS solutions and seeing if any of them fit your needs and budget: http://products.wdc.com/support/kb.ashx?id=ceWP1L
Building your own NAS is also an option, but this would involve finding space for at least another PC case (you mentioned that you don't have space for larger upgrades).
 
Feel free to ask if you happen to have any questions :)
 
Captain_WD.



Hey captain.

Direct attached is ideally what I want, and external options such as USB 3 are simply too slow.

My home server is the NAS for my house; as it's a web server it's on 24/7, and I don't want another machine running all the time, so an additional NAS is out of the question.

The external enclosure doesn't need to use the host's power; I have no issue getting a small PSU and splitting its Molex connectors to run the HDDs separately. It's only the data connection that is the problem.


Those enclosures look amazing, but they are $700 shipped to the UK, which makes them a big let down.

Even used SAS controllers with external connectors are $200 or more in the UK. I can get ones with internal headers only for around $50, but externals seem to have a huge premium over here.

 

Holy cow, no kidding about shipping being insane.  Likely due to the weight of it, they are pretty heavy.  I had no idea shipping was that much.

 

Also, you can get cards with internal hookups and then just get a cheap $20 internal-to-external converter. That is what I did, and they work just fine, if you are still interested in going that route.


Internal cards are pretty cheap on eBay, so an internal-to-external converter would be a good option, if a little messy.

I think SAS seems to be the best option now.


Internal cards are pretty cheap on eBay, so an internal-to-external converter would be a good option, if a little messy.

I think SAS seems to be the best option now.

 

I got one of these (link below) for converting from internal to external; it works perfectly. Just get a 1-2 foot internal cable to connect it inside the case, surprisingly simple. Looks like your issue will be finding a cheap used external enclosure in the UK, good luck there. If you do decide to go this route of SAS card and enclosure, be sure to do your research on the card. If it is not listed as compatible, or you can't find anyone on this forum or others who has used it and says it works with your mobo, assume it won't work. Enterprise gear can be super picky about what it works with. That said, it isn't too hard to find cards that will work just fine. I will admit that I didn't really bother with looking up my mobo, just looked for chipset compatibility, but I live in the land of cheap parts and easy refunds, so I am willing to play roulette. You should have little issue with compatibility between the enclosure and the card, but just make sure they support the same protocols.

 

http://www.newegg.com/Product/Product.aspx?Item=N82E16816133056&cm_re=8087_to_8088-_-16-133-056-_-Product


I will keep an eye out for sure. My mobo is an ASRock Fatal1ty Pro Z77, so it should be OK to find something that's compatible. This mobo seems to like anything I throw at it.


I will keep an eye out for sure. My mobo is an ASRock Fatal1ty Pro Z77, so it should be OK to find something that's compatible. This mobo seems to like anything I throw at it.

 

The issue is usually more that the card itself will not play well with your chipset. If you do find a card you like, just google the card and your chipset and see if there are any known issues; otherwise, yeah, you should be fine.


I am looking to increase my storage array to four 6TB WD Reds in RAID 10 or 6. I haven't decided yet. And I want to try and leave the option to go to 8 drives down the line.

 

More drives of lower capacity are always better for a RAID than a few large drives.

 

For example:

 

4 x 6TB drives in RAID 10 = 12TB (50% efficiency; any single drive can fail, or two drives provided they are not a mirrored pair)

4 x 6TB drives in RAID 6 = 12TB (50% efficiency; any two drives can fail)

 

6 x 4TB drives in RAID 6 = 16TB (66% efficiency; any two drives can fail)

You wouldn't RAID 10 six drives.

 

Use a RAID calculator like this one: http://www.raid-calculator.com/
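
If it helps to see the arithmetic written out, here is a minimal Python sketch of the same usable-capacity sums (just a rough illustration of the rule of thumb, not the linked calculator; it assumes equal-sized drives and standard layouts):

def usable_tb(level, drives, size_tb):
    """Usable capacity in TB for equal-sized drives in a standard RAID layout."""
    if level == "raid10":
        return (drives // 2) * size_tb      # half the drives hold mirror copies
    if level == "raid6":
        return (drives - 2) * size_tb       # two drives' worth of parity
    if level == "raid5":
        return (drives - 1) * size_tb       # one drive's worth of parity
    raise ValueError("unknown RAID level: " + level)

for level, n, size in [("raid10", 4, 6), ("raid6", 4, 6), ("raid6", 6, 4)]:
    cap = usable_tb(level, n, size)
    print(f"{n} x {size}TB {level}: {cap}TB usable ({cap / (n * size):.0%} efficiency)")
# 4 x 6TB raid10: 12TB usable (50% efficiency)
# 4 x 6TB raid6: 12TB usable (50% efficiency)
# 6 x 4TB raid6: 16TB usable (67% efficiency)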

 

 

As far as hard drives go, you would always be better off trying to source a bigger case. You can have lots of external drives wired up internally to the case, but this is going to be a royal PITA if you ever want to move it all. If you're going to spend money on something, go for a case, which is probably going to work out cheaper than controllers and other solutions. If you decided on external because you have external hard drives you want to use for storage, break them open, pull the HDDs out, and mount them inside the case internally.



I have been looking at six 4TB drives today actually, as I already have one of them. In RAID 6 of course.

I could probably fit a 1U or 2U case below my main case. I know they do ones that you can basically fill with HDDs and nothing else, but I'm not sure if those are limited to 3U or 4U cases with the HDDs connecting downwards.


~snip~

 

Well, USB 3.0 caps out at a much higher speed than any current HDD can reach, so it's not fully utilized unless you are using a RAID option across a number of drives and thus reaching the limit of USB 3.0. You can also consider other connection types such as Thunderbolt or eSATA. You just need to find the appropriate external enclosure with the proper connection and controller and you should be fine. :)
 
Captain_WD.



I have been considering a 4U rack case with a SAS expander in it and a SAS RAID card in my machine: one cable to link them, allowing 24 drives. It's overkill, but I can get a 4U rack case for the same price as a decent case in the UK.

I am planning to RAID, most likely with six 4TB WD Reds. My current WD Red by itself can do 200MB/s, so six of them in RAID 6 will saturate USB 3 easily.

It will also leave me some room to grow as 4TB drives get cheaper, and I believe SAS will allow up to an 8m external cable? So when I get round to it, I can even move my storage into another room or another section of my PC room so it is out of the way.

Can anyone recommend any good SAS controllers and SAS expanders? Preferably ones I may be able to find on eBay at a good price. If I can get PCIe 3.0 and 12Gb/s throughout, that would be great, as there is the potential for a lot of storage in this enclosure.

Could I run a boot device off this setup? Say if I did RAID 10 on my boot SSDs (I have 2), kept those on the multiplier, and used them as you normally would?

Regards,
Jamie

 

Edit:

So after doing a little bit of digging around, I have found that 12Gbps drives do not have many, if any, expanders available.

Sadly the forum doesn't seem to have a very good search other than the Google search, so it is a little hard to look into this. What controllers and expanders in general would you recommend, be it new or used on eBay?


~snip~

 

Sadly, as a Western Digital representative, I can't really recommend other brands or products, so I can't help you with controllers or expanders. 
You'd need at least 4 SSDs in order to make a RAID 10. With two you can only do RAID 0, RAID 1, or a JBOD span. You should be able to boot from a RAID array with no issues, as long as your motherboard and BIOS support it. :)
 
Captain_WD.



-snip-

 

For your main issue of card brand, this isn't really too important, as long as it is a major brand (lots of different card models). One thing though: you will need to do your research, as some card models are designed for specific systems or purposes. An example is that Dell has some great cards under the PERC model/brand, but some of them will not work without a certain type of Dell BIOS and mobo. I would say for eBay cards, stick to cards that would cost more than $200-300 new. The cards I usually go after are the ones that were in the $500-1500 range when new. They are normally going for $50-100 on eBay, simply because no enterprise is willing to use them anymore, but they are still perfectly good for your uses.

 

As far as 12Gbps vs 3Gbps, I would say unless you are really trying to push a lot of constant data, you won't really be bottlenecked by 3Gbps. If you are pushing RAID 10 with a ton of drives, you might run out of bandwidth, but even then not for long. Your bottleneck will more likely be the RAID card or the type of RAID you choose. Granted, 3Gbps is way too slow if you are using SSDs in your array, but then you are spending enough money on the SSDs themselves that you could easily afford the 12Gbps gear. The initial fill on your array will take a while, since it's only about 1.2TB an hour over 3Gbps, but after the initial fill, how often will you be moving huge amounts of data on and off the array?
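
For anyone wondering where a figure like 1.2TB an hour comes from, here is a rough back-of-the-envelope sketch. It assumes a single 3Gbps link with 8b/10b line coding (about 20% overhead), which is my assumption; the real number also depends on the drives, the card, and the RAID level:

link_gbps = 3.0
encoding_efficiency = 0.8                                   # 8b/10b line coding on 3Gbps SAS/SATA
usable_mb_s = link_gbps * 1000 / 8 * encoding_efficiency    # ~300 MB/s
tb_per_hour = usable_mb_s * 3600 / 1_000_000                # ~1.08 TB per hour

initial_fill_tb = 6                                         # example amount of data to move
print(f"~{usable_mb_s:.0f} MB/s, ~{tb_per_hour:.2f} TB per hour")
print(f"~{initial_fill_tb / tb_per_hour:.1f} hours to move {initial_fill_tb} TB")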

 

The only reason I would discourage you from going with 12Gbps is simply due to cost and difficulty of finding gear.  But if you can find gear cheap enough, definitely get it.


So you would recommend 3Gb/s over 6Gb/s as well?

I will have around 5 or 6TB of content to move initially. After that it will mostly be games being installed, VMs, and other little bits. It will mostly be used for reading after the initial setup.

I already have 2 SSDs, so if possible I will remove all drives from my system entirely. At first it will be a cluster of drives with no RAID, consisting of:

2 x 2TB drives

1 x 4TB drive

2 x 120GB SSDs

Then I plan to drop the 2TB drives for 5 more 4TB drives, leaving the 2 SSDs as a RAIDed boot device.

The SSDs will be mirrored, and the 4TB drives will be in RAID 6.

My PC will be the main one getting access, as it will be the one the drives connect directly to. The rest of the machines all have a dual gigabit link to the server, but I may switch this out for some InfiniBand later down the line to allow some proper access.

It would also allow other machines to run software directly off the server, as well as it being a network store for virtual machine images.

Regards,

Jamie


I would honestly recommend keeping the mirrored OS SSDs in the system, connected directly to the mobo. Modern mobo RAID can handle a RAID 1 pair much better than an add-on card will in regards to consumer-grade mobo booting. You may have trouble loading up Windows (or whichever OS you use) onto an array on the add-on card. Also, just be aware that these RAID cards can take a while to boot (mine takes about 2 minutes on its own with 24 drives connected). On that same note, mirrored SSDs can create problems as they may not wear evenly (or might fail at the same time due to wearing the same). I would suggest using a single SSD for the OS, put only the OS on it, and once it is up and running, use clone software to keep a prepped second drive ready to swap in should the main one fail. You can just update the clone every month or however often you feel is needed.

 

I really only suggested the 3Gbps because it should be plenty of speed after the initial load up, and the sheer prevalence of 3Gbps gear makes it cheap and easy to get high quality stuff.  That said as well, since you plan to run a RAID 6 array, you may be surprised at just how not blazing fast it is.  There is a massive write penalty for RAID 5 and 6, which drastically slows down potential write speeds.  With a nice card, it will still be plenty fast if you have enough disks; but you may find that it is not nearly as fast as you thought it might be.  For RAID 5/6, the amount of cache on the card is super extra important.

 

When I was a server admin, we had 3 different servers that were running arrays. One was a 12 drive RAID 6, one was a 4 drive RAID 5, and one was a 10 drive RAID 6. The 4 drive RAID 5 had a card with 256 MB of cache and performed surprisingly well, ~80-100 MB/s sustained. The 10 drive RAID 6 had 512 MB of cache and did similarly, but the 12 drive RAID 6 only had 256 MB of cache and did 10-25 MB/s sustained. On that same note, we had an 8 drive RAID 10. That card only had 64 MB of cache, but it screamed; that server was never the bottleneck. Another thing with RAID 5/6 is that rebuild times can be massive, on the scale of days, which is a scary time if your drives were all purchased out of the same lot. Also, if you are using your RAID 5/6 for live data (not just storage), you will see huge I/O issues. We had to run a bunch of VMs off of our 12 drive RAID 6; it was brutal. A RAID 10 could handle it just fine, but you need 2x the drives for the storage space, so that is really a budget issue.

 

Do be advised, with most of the enterprise cards you can live-expand the arrays. This process is called Online Capacity Expansion, and it usually works just fine, but it can be rather time intensive (usually more than a rebuild) and risks destroying the entire array. But it is an option, and I have done it myself a few times without issue. It's just one of those things that usually works, but if it goes wrong it goes really wrong. So it can be a bit nerve-wracking when you have to do it but don't have backups, and the boss says make it happen anyway. Scary times.

 

Sorry for the wall of text and if it seems all rambly; I'm at work and kept getting interrupted. Stupid work, wanting me to actually do stuff.

 

Edit: Here is a simple explanation of the "write penalty" for parity RAID systems.

http://theithollow.com/2012/03/understanding-raid-penalty/
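
For reference, the commonly quoted penalty values from that article can be dropped into a quick sketch like this (a very rough model; as noted above, controller cache and stripe size matter a lot in practice, and the per-drive IOPS figure here is just an assumed example):

WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4, "raid6": 6}

def functional_iops(drives, iops_per_drive, level, read_fraction):
    """Approximate random IOPS an array can deliver for a given read/write mix."""
    raw = drives * iops_per_drive
    write_fraction = 1.0 - read_fraction
    return raw * read_fraction + raw * write_fraction / WRITE_PENALTY[level]

# e.g. six 7200rpm drives (~80 random IOPS each, assumed), 70/30 read/write mix
for level in ("raid10", "raid5", "raid6"):
    print(level, round(functional_iops(6, 80, level, 0.7)))
# raid10 408, raid5 372, raid6 360 (out of 480 raw)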


Thanks for such an in-depth explanation. 80-100MB/s seems really low, even for 1 drive.

This array was supposed to hold games and VMs, but from that it seems like I would make things worse?

I can move 50GB of content from my SSD to HDD, and the other way round, at over 100MB/s sustained. I even have a 64GB USB 3.0 drive that manages 130MB/s sustained when connected to a powered USB hub.

I need to have a huge think about this now as these speeds just do not seem great.

The writes are honestly not important for this system, and I will listen to your advice about the SSDs and keeping them separate. Right now, one has my OS and the other has a web server and database server stored on it; again, it's mostly all read data.

I keep my Steam library on the bulk storage, with over 3TB of games that are installed and left there, so once written they do not need to be written again.

Thanks so much for all the advice; it seems I need to find a RAID card with a lot of cache if I go that route.


That 80-100 MB/s was for a RAID 5 array with multiple VMs running on it. Most of our bottleneck was the single Gbit connections for the servers. That said, you may find parity RAID solutions to be surprisingly disappointing in performance, though you can get some pretty impressive speeds and responsiveness out of them if you set them up right and use the right kind of gear. There is a lot of overhead when routing data through the RAID card, and it has to calculate parity for every stripe block that gets written. Reading is still super fast, usually just slightly less than a pure RAID 0. But if you want fast writing on RAID 5/6, you need lots of cache and a fast RAID card.

 

That is kind of why I recommended RAID 10, even though it will cost more: you will actually be doing I/O intensive stuff between the VMs and running games off the array. A RAID 10 would be more than able to handle it, or a low drive count (6 or fewer) RAID 5 would likely be fine. It is simply the double parity calculation of RAID 6 that hurts it the most. As a side note, our RAID 10 array maxed out the Gbit connection with ease, and internally tested in the 200-300 MB/s range.

 

High speed, low response time mass storage is not cheap or easy; unfortunately you really have to make some compromises. But if you plan it out, you can also do a pair of RAID arrays: a smaller 4 drive RAID 10 for the games, and a larger RAID 6 for data storage and security. Since you will be getting that 24 bay external (I like your style), you should have plenty of room to hold a few different arrays. And most of the enterprise cards will allow 12-24 different volumes (arrays), and usually 128-256 drives total. Your only real limiting factor is how many drives you are willing to buy, though this is where RAID fault tolerance helps by allowing you to get cheaper drives (if you are willing to deal with more rebuilds). Rebuilds aren't common, though; I normally only do one every other year or so. It scares you every time, but get good support parts and you don't need to worry about rebuild problems on your HDDs.

