
15 3TB or 4TB RAID drives

Hey guys, I'm building a new RAID array with 15 drives: 14 of them in RAID 5 and 1 just for the OS. I want at least 3TB drives in it, but if there's a good deal I'll get 4TB drives. I'm on a tight budget, so I'm thinking of only getting 3 drives every time I can afford it (because you need at least 3 drives for RAID 5).

I know I should be using WD Reds for something like this, but I'd really like to save the money for more drives. Here's what I'm looking at:

http://www.amazon.com/Toshiba-3-5-Inch-SATA3-Drive-DT01ACA300/dp/B00B229W04/ref=sr_1_6?ie=UTF8&qid=1397069989&sr=8-6&keywords=3tb+hard+drive (don't know if these are reliable, but that price.)

 

http://www.amazon.com/WD-Red-NAS-Hard-Drive/dp/B008JJLW4M/ref=sr_1_3?ie=UTF8&qid=1397069989&sr=8-3&keywords=3tb+hard+drive

 

So if you know of any good deals, let me know. Thanks. :)

 

OK, I've decided which drives I'll get: WD Red 3TB. Does anyone know of any deals on them?




You're building a 45TB NAS but are on a tight budget? ._.

 

Get the WD Reds, it's worth the extra. What you'd save buying the Toshibas you'll end up spending on replacing the dead ones anyway.

 

Why not an SSD for the OS? 840 Evo?



Thanks for the reply, I think you're right, I'll go with the Reds. As for an SSD for the OS, I was thinking about it; a 120GB 840 Evo is probably what I'll do. I'm not on a super tight budget, but after buying the server case, mobo, CPU, and RAM, you know I'll be a little short. So I think I'll just get 3 drives now and keep throwing in 3 more every paycheck.

 

If you're wondering what case I'm getting:

http://www.amazon.com/dp/B0091IZ1ZG/ref=wl_it_dp_o_pC_S_img?_encoding=UTF8&colid=GHXV3MPHQ74K&coliid=II68BFF2QK4AH



I'm building a new RAID array with 15 drives, 14 of them in RAID 5

Please use WD Red/SE or Seagate NAS drives. Otherwise you have essentially zero chance of surviving a RAID rebuild.

 

I also don't recommend running RAID 5 with that many drives. RAID 6 is a much better option, though there is a write performance hit.
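A rough back-of-the-envelope on why, assuming the usual consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits: rebuilding a 14x3TB RAID 5 after a failure means reading all 13 surviving drives, about 39TB, or roughly 3x10^14 bits. At that error rate you'd statistically expect around three UREs during the rebuild, any one of which can kill it. RAID 6's second parity set is what lets a rebuild ride through exactly those errors.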

 

What will you be using for the RAID? Hardware RAID card, FreeNAS, ZFS on Linux, some other software solution?




You said I would have zero chance of surviving a RAID rebuild, what did you mean? Do you mean that if one drive fails the other ones would too? Or the thing where, if I'm not scrubbing, the RAID card might drop several drives for bad sectors?

 

I'll be using hardware RAID (I don't trust software RAID for this project), but I don't know what card I'll be using. I have one FastTrak SX4100 4-port RAID card, but I need 15 ports, so if I get 3 more (4 in total), can I combine them? Or do all the drives need to connect to the same RAID card to be put in the same RAID? If I can't do that, what RAID card would you recommend?




With consumer drives, a single bad sector will make the drive hang for a long time trying to read it. Hardware RAID controllers will deem a drive "failed" if it is unresponsive for several seconds. So a bad sector on two drives in a RAID 5 could potentially kill the whole RAID array even though only a little bit of data is damaged.
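If you do end up with desktop drives anyway, it's worth checking whether they support SCT Error Recovery Control, which caps that hang the same way TLER does on the Reds. A sketch with smartmontools; the device name is a placeholder, many desktop drives reject the command, and the setting resets on power cycle:

```bash
# Query the drive's current SCT Error Recovery Control (TLER) settings:
smartctl -l scterc /dev/sdb

# If supported, cap read/write error recovery at 7 seconds
# (values are in tenths of a second):
smartctl -l scterc,70,70 /dev/sdb
```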

 

If you're using WD Reds you'll be fine.




OK, I'll get the WD Reds. But like I said in my other post: can I use multiple RAID cards for one RAID, or does it have to be the same card? And if it does have to be the same card, do you have any recommendations?




You could if you used FreeNAS or some other software solution like Windows Storage Spaces; you would let the software RAID access the disks directly. For purely hardware RAID, no: you need a single RAID card, or one card plus SAS expanders.
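Linux md is a concrete example of that: it takes a flat list of block devices and never knows or cares which controller each disk hangs off. A minimal sketch (device names are placeholders):

```bash
# Build one RAID 6 array from 14 disks spread across any mix of
# controllers; md only sees block devices:
mdadm --create /dev/md0 --level=6 --raid-devices=14 /dev/sd[b-o]

# Persist the array definition so it assembles at boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```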



If you're going to put 14 or 15 consumer drives in RAID, PLEASE do not use hardware RAID. Use software RAID with a beefy filesystem like XFS, ZFS, or btrfs.



That doesn't make much sense. Why wouldn't you use hardware RAID?

Basically, no hardware RAID controller currently available to consumers has the features that the robust filesystems I mentioned have.



Fair enough. But enterprise grade hardware RAID > software RAID right?

Unless you're using a controller that's capable of scrubbing, then no: filesystem-level RAID is better for data safety, and the speeds are comparable, if not the same, in most situations.




So you're saying to just use software RAID unless I have an enterprise RAID card? If so, can you recommend an ATX motherboard that has 15 SATA ports, is server grade, and can do software RAID? Though I'll probably end up getting an enterprise RAID card; I actually have a few of them already, just nothing suited for this job.



The concept of modern filesystems is getting lost in this thread :(

 

A disk in RAID (or not) with just NTFS or any simple filesystem is pretty lame compared to a modern filesystem like ZFS, which actually keeps tabs on your files and makes sure they're still there tomorrow and the day after, instead of the filesystem just saying "I wrote it and it's in this location, see ya sucka" (aka NTFS). Sure, you can run chkdsk/fsck every day to do the checking, but who does that? And it still doesn't verify that your file is the same file it was yesterday (checksums).
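That ongoing verification is what a ZFS scrub does. A minimal sketch, assuming a pool named tank:

```bash
# Walk every block in the pool, verify it against its checksum, and
# repair anything damaged from the pool's redundancy:
zpool scrub tank

# Reports scrub progress and per-device read/write/checksum error counts:
zpool status tank
```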

 

Drives vs. filesystems: it's not the same thing.

 

There's more to it than that, but I think this is drifting into another thread's topic anyway. :unsure:




So you think RAID is not the way to go for file safety? I guess I should have mentioned this when I created the thread, but this is going to be my cloud, backup server, and media server. It will be holding family photos and all that, and I want to be able to rely on it not to lose any of it. That's why I thought I'd go RAID 5: a hard drive could die and none of my files would be gone.

 

WAIT, WAIT, don't read that up there, I misunderstood what you meant at first lol. Yeah, I'm not going to use NTFS; I was thinking of using ext4. Do you think that's good enough to keep my data safe for years?




Anything over 12TB and you NEED to be using RAID 6. Statistically speaking, you won't survive a rebuild if one drive dies, due to the increased risk of a bad sector on another drive.

I've used software RAID 5, mdadm with a generic ext4 filesystem. It worked fine and it was really flexible... not the best, though, but it just "worked".
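For reference, the ext4 half of that setup is just this (assuming an array already created as /dev/md0, as in the mdadm sketch earlier in the thread):

```bash
# Format the array; -m 0 drops the 5% root reserve, which is
# wasted space on a pure data volume:
mkfs.ext4 -m 0 /dev/md0

# Watch the initial sync or any rebuild as it runs:
cat /proc/mdstat
```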

ZFS, while superior, is pretty much limited to OpenSolaris... which I just don't like using. I much prefer a Debian-based OS, but that's because I'm used to it.

Btrfs is still being developed, and the developers actually recommend using the most recent version, or you run an increased risk of failure. I don't like the idea of constantly upgrading filesystem software; I'm a believer in the "if it's not broke, don't fix it" method, especially when it comes to sensitive data.

To me, this basically left only XFS. XFS is designed and optimized for RAID arrays and large-file storage. Since my current server is a media server, this was a perfect choice: features without the need to constantly update the filesystem. Make sure that when you create the filesystem you tell it what type of RAID you're using; you get performance increases that way, as in the sketch below.
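For example, with an assumed geometry of a 14-drive RAID 6 (12 data disks) and a 256KiB chunk size:

```bash
# su = stripe unit (the per-disk chunk size),
# sw = stripe width in data disks (14 drives minus 2 parity = 12):
mkfs.xfs -d su=256k,sw=12 /dev/md0
```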

Feature-wise, though, ZFS and btrfs are superior.

Due to my mediocre experience with software RAID, I opted to go with a hardware RAID solution. I purchased an LSI MegaRAID 9260-8i, and I couldn't be happier. I also purchased a BBU and disabled the built-in drive caches, which gave me a noticeable performance bump with the benefit of stability during a power outage.

If you were to get a hardware RAID card, I'd probably recommend one with a single SAS port, plus a SAS expander. Depending on the expander, you can run up to 32 drives off it, which then connects to your RAID card. Lots of expandability that way.

As for the 3TB-or-4TB dilemma, I'd personally go with the 3TB drives. 3TB drives have been around a while, so you can get one with the features that fit your needs, and the additional terabyte on 4TB drives sometimes doesn't justify the added cost, especially in a RAID setup.

Hope this helps =P



Thanks for taking the time to write that, it helps a lot. I think I'll do hardware RAID, I'll get 3TB WD Red drives, and I'll go RAID 6 (doing exactly what you recommended lol).

Do you have any RAID card recommendations, and any idea where I might get one cheap?



Sorry for the confusion about filesystems vs. storage arrays. My point was that filesystems like ZFS and btrfs provide a lot of the functionality that enterprise NAS/SANs do, and they do it a lot better than any RAID card that's available to consumers.

 

A lot of people fail to understand that with the huge hard drives we use today, you have to worry about the consistency of the data on the disks just as much as about having redundant disks. That is to say, bit rot and hardware failure together are what cause data loss; if you only protect against one or the other, you might as well be protecting against neither. Purely hardware RAID 6 is still viable, but in a few years it will go the way of RAID 5.

 

To OP: even if you do use hardware RAID (which I don't recommend), I HIGHLY recommend you use a checksumming filesystem like ZFS/btrfs if you have any kind of important files.
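For scale, the ZFS equivalent of the hardware RAID 6 being discussed is a raidz2 pool, which checksums everything from day one. A sketch only; the pool name and device names are placeholders, and in practice you'd use /dev/disk/by-id paths so the pool survives device renumbering:

```bash
# One raidz2 vdev (two disks' worth of parity) across 14 disks;
# the shell expands the glob into the individual devices:
zpool create tank raidz2 /dev/sd[b-o]

# Checksumming and self-healing are active immediately:
zpool status tank
```

(A 14-wide raidz2 vdev works, though splitting into two 7-disk raidz2 vdevs is a common compromise for rebuild times.)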

 

Edit: Also, ZFS isn't limited to Open Solaris. ZFSonLinux is alive, well, and has long been production ready.




OK, I get what you're saying about ZFS and hardware RAID. I'll take your recommendation and use ZFS if it works with the OS I pick, but I'm not sure what I'm using; I'm thinking Ubuntu Server, Windows Server 2012, or FreeNAS.



I run Ubuntu on my server and I like that I'm not really limited by the OS in what I can do. I used FreeNAS very briefly in my first pseudo-server. Everything was really nice and intuitive, and the web interface was really neat, but at the time I used it, expanding its features through software didn't seem viable.

With Ubuntu, I run a web server, file server, WINS server, numerous VMs (which I can configure over the web using the previously mentioned web server... phpVirtualBox), a MySQL server; the list goes on and on. I've been using Ubuntu for almost 8 years now, so I'm very familiar with the inner workings and the console commands.

Configuring everything just right took lots of time and tinkering. But I also really enjoy that sort of thing.

The OS you choose is just as important as the method you use to create your RAID.

The reason I started off with Ubuntu is due to the ENORMOUS amount of support you can find all over the internet. Googling anything with the word Ubuntu in the string will almost always return some sort of guide, or a forum post.

The one thing I can say Ubuntu does poorly is media playback. The ATI/Nvidia drivers are a complete mess and don't work that well, and it's not possible to enable audio passthrough over HDMI with an ATI card. Just one of the many nuances that come with the OS. If you were hoping to connect the server to a TV and play back content that way, then a Windows OS might be better, as its driver support for media is superior.

I prefer Linux because everything is free... and GUIs are overrated =P



Ubuntu Server supports ZFS (see the link in my sig about ZFS; there's a part in that tutorial about installing on Ubuntu), and ZFS is native on FreeNAS.
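At the time of writing, getting it onto Ubuntu meant the ZFS on Linux project's PPA; the PPA and package names below are the ones the project published, so double-check them against the tutorial:

```bash
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs
```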


ZFS tutorial

