Update:

 

I replaced all my 2.5" 500GB WD Blacks with 2.5" 1TB Reds.

 

DSCN0367.jpg

 

DSCN0369.jpg

 

This is just really interesting to me for some reason. :)

 

Any chance you have a picture of it in whatever rig it belongs to? :)

Desktop - CPU Core i7 4770K - MOBO ASUS Maximus VI Hero - GPU Nvidia GTX770 - RAM 16GB Corsair Dominator GT - PSU Corsair TX750V2 - Storage OCZ Vertex III 120GB - Case Fractal Design Arc Midi R2


unRAID - CPU Phenom X6 1090T - MOBO ASUS Sabertooth 990FX R1 - RAM 16GB Kingston ECC - PSU EVGA 650G1 - Storage 5x3TB WD Red - Case NZXT Source 210


This is just really interesting to me for some reason. :)

 

Any chance you have a picture of it in whatever rig it belongs to? :)

He's on the list at #58 (for now)

http://linustechtips.com/main/topic/21948-ltt-10tb-storage-show-off-topic/page-50#entry3691169


Just a general FYI, I haven't forgotten about this, but I'm in the middle of exams at the moment and will then be away for a while, so I'll update the list when I get back.

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


New 152TB server:

 

Hardware I
This is the hardware of the Storage node:
Case 1: Supermicro SuperServer 6027TR-DTRF
Case 2: Supermicro SC847 E16-RJBOD1
PSU Case 1: 1280W Redundant High-efficiency Digital Power Supplies w/ PMBus 1.2
PSU Case 2: 1400W high-efficiency (1+1) redundant power supply with PMBus
MB: Supermicro X9DRT-HF
CPU: 2x Intel Xeon E5-2609V2
HS: Stock Supermicro heatsink
RAM: 4x Kingston 8GB 1866MHz DDR3 ECC Reg CL13 DIMM
RAID CARD: LSI MegaRAID SAS 9286-8e SGL + LSIiBBU09
SSD 1: 2x Samsung 850 Pro 512GB (RAID 1)

SSD 2: 4x Samsung 850 Pro 1TB (RAID 10)
HDD: 37x 4TB Seagate Barracuda Desktop (ST4000DM000)

NIC: Supermicro AOC-STGN-I2S 10Gbit NIC 
 
Hardware II
This is the hardware of the ESX node, which is in the same chassis as the storage node (storage not counted for 10TB+):
Case: Supermicro SuperServer 6027TR-DTRF
MB: Supermicro X9DRT-HF
CPU: 2x Intel Xeon E5-2650V2
HS: Stock Supermicro heatsink
RAM: 8x Kingston 16GB 1866MHz DDR3 ECC Reg CL13 DIMM
NIC: Supermicro AOC-STGN-I2S 10Gbit NIC & 2x Quad Gbit NIC

 

Software and Configuration:
My server is running Windows Server 2012 R2, on which I have two arrays.
Array one consists of 4x 4TB drives in RAID 5 and array two consists of 32x 4TB drives in RAID 60.
The remaining 4TB drive is being used as a global hot spare.
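
For reference, a rough capacity breakdown (assuming the RAID 60 array is built as two 16-drive RAID 6 spans, which is what the drive count suggests):

RAID 5: (4 - 1) x 4TB = 12TB usable
RAID 60: 2 spans x (16 - 2) x 4TB = 112TB usable
Hot spare: 1x 4TB (not counted as usable)
Raw total: 37 x 4TB = 148TB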
 
ss+(2015-02-02+at+09.05.27).png

Usage:
I use the storage for movies and series; I have media players around the house that access it.
It's also for backing up my computers.

Backup:
The arrays are backed up to LiveDrive, an online cloud backup service. (I have a 100/100 Mbps connection.)

Photos:
IMG_0230%20(Custom).JPG
 
 
IMG_0246%20(Custom).JPG
 
 
IMG_0267%20(Custom).JPG
 
 
IMG_0529%20(Custom).JPG
 
 
IMG_0453%20(Custom).JPG
 
ss+(2015-02-02+at+09.11.53).png
 
1a1e9f3136.png
ss+(2015-02-02+at+09.14.01).png

Respect the Code of Conduct!

>> Feel free to join the unofficial LTT teamspeak 3 server TS3.schnitzel.team <<

>>LTT 10TB+ Topic<< | >>FlexRAID Tutorial<< | >>LTT Speed wave<< | >>LTT Communities and Servers<<


Very nice setup. What made you decide to go with RAID 60 for this environment? Is 16 drives the largest contiguous RAID 6 span you can get from the LSI controller?

 

That's also a lot of cores per computer for a storage server. Are you really "only" using it for streaming media and backups? You could easily run several VMs on that setup for various tasks. Nonetheless, I love lots of superfluous hardware, so don't take my questions as an insult or a negative. :-)

Workstation 1: Intel i7 4790K | Thermalright MUX-120 | Asus Maximus VII Hero | 32GB RAM Crucial Ballistix Elite 1866 9-9-9-27 ( 4 x 8GB) | 2 x EVGA GTX 980 SC | Samsung 850 Pro 512GB | Samsung 840 EVO 500GB | HGST 4TB NAS 7.2KRPM | 2 x HGST 6TB NAS 7.2KRPM | 1 x Samsung 1TB 7.2KRPM | Seasonic 1050W 80+ Gold | Fractal Design Define R4 | Win 8.1 64-bit
NAS 1: Intel Intel Xeon E3-1270V3 | SUPERMICRO MBD-X10SL7-F-O | 32GB RAM DDR3L ECC (8GBx4) | 12 x HGST 4TB Deskstar NAS | SAMSUNG 850 Pro 256GB (boot/OS) | SAMSUNG 850 Pro 128GB (ZIL + L2ARC) | Seasonic 650W 80+ Gold | Rosewill RSV-L4411 | Xubuntu 14.10

Notebook: Lenovo T500 | Intel T9600 | 8GB RAM | Crucial M4 256GB


Very nice setup. What made you decide to go with RAID 60 for this environment? Is 16 drives the largest contiguous RAID 6 span you can get from the LSI controller?

 

That's also a lot of cores per computer for a storage server. Are you really "only" using it for streaming media and backups? You could easily run several VMs on that setup for various tasks. Nonetheless, I love lots of superfluous hardware, so don't take my questions as an insult or a negative. :-)

I prefer the extra redundancy of RAID 60; the card has no problem running them all in a single RAID 6 though.
 
I also tested RAID 6, and RAID 60 did give me improved read speeds.

 

As far as the cores go, the storage node needed two CPUs to make all the components on the motherboard work, so I just got some cheap quad-core, non-hyper-threaded Xeons for it.

 

Right now I'm not virtualizing on the storage node, as I have the ESX node for that (Hardware II).

Respect the Code of Conduct!

>> Feel free to join the unofficial LTT teamspeak 3 server TS3.schnitzel.team <<

>>LTT 10TB+ Topic<< | >>FlexRAID Tutorial<< | >>LTT Speed wave<< | >>LTT Communities and Servers<<


 

Not to make @looney feel inadequate, but I'm about to upgrade my server from 2TB usable storage to 7TB usable storage, so you have some stiff competition! 

 

*Weeps in corner*

CPU i5 4430 3GHz | RAM: 16GB DDR3 1600 | GPU: GTX 650 Ti 1GB | Mobo: H87N-Wifi | Case: White Bitfenix Prodigy | Boot Drive: 120GB 840 Evo (Mac OS X) 120GB OCZ Vertex 3 (Windows) | Games Drive: 640GB WD Green | OS: Windows 8 & OS X 10.9.1

I love all technology. The perfection of Macs for my designer side, and the hardware and fun of tinkering on the PC side. We can have it all, just not at the same time.


Not to make @looney feel inadequate, but I'm about to upgrade my server from 2TB usable storage to 7TB usable storage, so you have some stiff competition! 

 

*Weeps in corner*

@looney, you better add another 30 HDDs, just to be sure. :P

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


@looney, you better add another 30 HDDs, just to be sure. :P

I'll have to get a second JBOD :P

Respect the Code of Conduct!

>> Feel free to join the unofficial LTT teamspeak 3 server TS3.schnitzel.team <<

>>LTT 10TB+ Topic<< | >>FlexRAID Tutorial<< | >>LTT Speed wave<< | >>LTT Communities and Servers<<


I'll have to get a second JBOD :P

I'm putting your new server into the system now. Question: is the old one still active and should it be left in, or is it dismantled, or has its config changed?

EDIT: Are the SSDs for caching (and should be counted), or are they for the OS only?

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


Old system is gone for now.

I'll revive it later as a test system for writing tutorials.

SSDs are OS only, so 148TB.

Respect the Code of Conduct!

>> Feel free to join the unofficial LTT teamspeak 3 server TS3.schnitzel.team <<

>>LTT 10TB+ Topic<< | >>FlexRAID Tutorial<< | >>LTT Speed wave<< | >>LTT Communities and Servers<<


Update:

 

I replaced all my 2.5" 500GB WD Blacks with 2.5" 1TB Reds.

 

Updated. :)

 

Old system is gone for now.

I'll revive it later as a test system for writing tutorials.

SSDs are OS only, so 148TB.

Added, master looney. :P

@MrBucket101: I think caching SSDs should be counted, but I'll need to figure something out first, will update you when I've done so.

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


  • 2 weeks later...

 

 

This will officially be my 2nd update.

If you don't care to read my horror story, skip down to the second bolded sentence.

 

Things weren't looking good for me. I nearly lost ALL my data, and the root cause of it all was my decision to use cheap consumer drives in an always-on RAID environment. Heed my warning: STAY AWAY FROM THE SEAGATE ST3000DM001.

 

A little background story: I originally bought 8x 3TB ST3000DM001s and was running them in RAID 6. Late November/early December I ran into a massive string of failures and lost 3 drives in 3 weeks. I had a spare available, and thankfully all of them were covered under warranty, so I got them replaced for free.

 

2 weeks ago I started to run out of space on my array; I had 1TB left out of the 16.3TB. So I got rid of my dedicated hot spare and added it to the array. This is where the shit hit the fan. The expansion should have taken only 3 days, as it had in the past, but it took a grand total of 9 days, which definitely had me worried, though I wasn't going to disturb anything. According to the logs, 10 minutes before the expansion finished, another drive died on me (that makes 4 of the original 8 within 2 years). I didn't really think much of it at the time. The expansion finished and MegaRAID reported the new size as 19TB; the array was just degraded. So this time I bought a nice HGST 3TB NAS drive and put it in to replace the failed drive. But when the rebuild finished... things took a turn for the worse. The OS could only see the array as 16.3TB, as if the expansion hadn't actually worked. I know that when you expand you also have to move the backup GPT data to the end of the disk and then expand the partition, but even so the OS reported the drive as only 16.3TB.
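
For anyone doing the same dance, the usual sequence after a RAID expansion looks something like this (the device node and mount point are placeholders, and xfs_growfs assumes an XFS filesystem like mine):

sgdisk --move-second-header /dev/sdX   # relocate the backup GPT to the new end of the virtual drive
parted /dev/sdX resizepart 1 100%      # grow the data partition into the new space
xfs_growfs /mnt/array                  # grow the filesystem (XFS can do this online)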

 

On top of this, the alarm on the card would not shut off. I scoured the logs and saw no problems that had not already been taken care of, yet something was clearly wrong: the alarm would not turn off unless I silenced it manually, and the OS could not see the expansion.

 

At this point, I had no effing idea what was going on with my array. But the OS could still run, and I could still access my data, so I left the server alone for a couple of days while I did some research. Unfortunately, by that point I was getting non-stop kernel panics. The OS was reporting that the device had taken longer than 120s to respond, and the kernel halted waiting for the array to respond. The entire OS locked up and I was forced to hard reboot.
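
For reference, that 120-second symptom is usually the kernel's hung-task warning rather than a full panic; a quick way to confirm on a stock Ubuntu kernel (like the 3.13 kernel in the spoiler below) is:

dmesg | grep -i "blocked for more than"
sysctl kernel.hung_task_timeout_secs   # the 120 s threshold the message refers to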

 

This kept up for the next 2 days, and the random kernel panics and hard resets corrupted my filesystem... then the panic set in.

 

I was able to repair the damage done and verify that my data was intact. But I just couldn't take this any longer. My stomach was in knots from nearly losing all of my data from the past 8 years.

 

I snapped and bought new drives. I don't like to do this, but I had to throw in the towel. The system had beaten me and I could not stand to lose my data.

 

SOOOOO on to the juicy stuff.

 

I bought 8x Hitachi HGST 4TB NAS drives. I picked these over the WD Red for a number of reasons. These drives are comparable to the WD Red Pro line, but they do not have the price tag to match. The drives are 7200 rpm and come with a rotational vibration sensor, something the WD Reds don't have. On top of that, the drive supports TLER, is rated for 1 million power-on hours, and comes with a 3-year warranty, all identical to the WD Red. And all this doesn't even come at much of a price premium: on Newegg the WD Red 4TB costs $166, where these drives are around $175. I managed to get 5 of my 4TB drives on sale for $160, and then I had to pay $175 for the last 3 since the coupon had a limit of 5 drives.

 

Since I was buying new drives, I decided to buy another SSD so that I could run my SSD cache in RAID 1 and safely enable write caching. I just got a Crucial MX100; my other SSD is an OCZ Vertex 4, so I wasn't too worried about matching performance. Basically, the cache drives are used to speed up read/write speeds on my array. Data read from the array once is moved onto the SSD cache for faster access times. When I am writing data, the controller writes it to the SSDs and reports back to the OS that the transfer is done, then moves the data from the SSDs to the disks at a more convenient time. This feature REALLY speeds things up. I did some rough testing and was getting around 1.2GB/s write speed and 1.7GB/s read speed. I didn't do much to validate the results, but the numbers were enough to convince me it was working. Previously, I was getting around 600MB/s write speed and 800MB/s read speed. Fun fact: according to the manual, my RAID card tops out at 1.8GB/s :P
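
If anyone wants to repeat that kind of rough test, a minimal fio run against a file on the array looks something like this (the path and size are placeholders; --direct=1 keeps the OS page cache out of the numbers so you are measuring the card and its cache SSDs):

fio --name=seqwrite --filename=/mnt/array/fio.tmp --size=16G --bs=1M --rw=write --direct=1 --ioengine=libaio --iodepth=32
fio --name=seqread --filename=/mnt/array/fio.tmp --size=16G --bs=1M --rw=read --direct=1 --ioengine=libaio --iodepth=32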

 

I put the new drives in my system, set them up in RAID 6, and used rsync to transfer the content from the bad array to the new one. rsync is nice in that it checksums each transferred file against the original to make sure there was no issue with the transfer.
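
A sketch of that kind of invocation (the mount points are placeholders; the optional second pass with --checksum forces a full re-read and compare of both sides, slow but paranoid):

rsync -aHAX --progress /mnt/old-array/ /mnt/new-array/
rsync -aHAX --checksum --dry-run --itemize-changes /mnt/old-array/ /mnt/new-array/   # lists anything that still differs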

 

Once I had migrated my data off of the bad array, I booted back into the card's BIOS and deleted the array. As soon as I did that, the alarm finally stopped going off. Whatever it was that had gone wrong had now been fixed... with money, lol.

 

My OS drive got corrupted by all the kernel panics as well, and I didn't care to go rescue my data from it. I had an image from mid-January 2015 that I just copied back onto the system.

 

I plan on taking the crappy 3TB drives and individually filling them up with my data; then I'm going to put them back in the box my new hard drives came in and store them in my basement. This will give me a dated offline backup in case of any future catastrophe.

 

I also took the 3TB HGST NAS drive and used it to replace the previous 2TB drive I was using for downloads.

 

To make things easier for those who haven't been keeping up with everything, here is my current configuration.

 

Hardware:

SSD 1: 2 x 120GB SAMSUNG 840 EVO (RAID 1 for the OS)

SSD 2: 1 x 128GB OCZ Vertex 4

SSD 3: 1 x 128GB Crucial MX100 (RAID 1 w/ SSD 2 for SSD cache)

HDD 1: 1 x 320GB WD Black 2.5" (for storing VM disks)

HDD 2: 1 x 250GB Hitachi Travelstar 5k500 2.5" (for backing up my /home/ folder and some application config files)

HDD 3: 1 x 3TB HGST Deskstar NAS

HDD 4: 8 x 4TB HGST Deskstar NAS (RAID 6)

 

(36.066TB ~36TB :D)

 

And now for the pr0n!!!

 

2015-02-10145213_zps9b297725.jpg

 

2015-02-10145224_zps52610a57.jpg

 

2015-02-10153737_zpse980587a.jpg

 

2015-02-10153830_zps09cb7c8e.jpg

 

The graveyard :P

 

2015-02-10154242_zpsd279c241.jpg

 

 

 

Spoiler

SERVER---


Server IP: 10.254.6.250
Server Name: SERVER
OS name: Linux
OS Version: 3.13.0-44-generic
OS Architecture: x86_64
Driver Name: megaraid_sas
Driver Version: 06.700.06.00-rc1
Application Version: MegaRAID Storage Manager - 14.08.01.02

HARDWARE---
Controller: LSI MegaRAID SAS 9260-8i(Bus 3,Dev 0)
Status: Optimal
Firmware Package Version:12.15.0-0205
Firmware Version: 2.130.403-3835
BBU: YES
Enclosure(s): 1
Drive(s): 12
Virtual Drive(s): 3

BBU---
BBU Type: IBBU
BBU Status: Optimal

Enclosures---
PRODUCT NAME TYPE STATUS
HPSASEXPCard Ses OK

Drives---
CONNECTOR PRODUCT ID VENDOR ID STATE DISK TYPE CAPACITY POWER STATE
Port 0 - 3 HGSTHDN724040AL ATA Online SATA 3.638 TB On
Port 0 - 3 CrucialCT128MX1 ATA Online SATA 118.719 GB On
Port 0 - 3 HGSTHDN724040AL ATA Online SATA 3.638 TB On
Port 0 - 3 HGSTHDN724040AL ATA Online SATA 3.638 TB On
Port 0 - 3 HGSTHDN724040AL ATA Online SATA 3.638 TB On
Port 0 - 3 HGSTHDN724040AL ATA Online SATA 3.638 TB On
Port 0 - 3 SamsungSSD840 ATA Online SATA 111.281 GB On
Port 0 - 3 SamsungSSD840 ATA Online SATA 111.281 GB On
Port 0 - 3 OCZVERTEX4 ATA Online SATA 118.719 GB On
Port 0 - 3 HGSTHDN724040AL ATA Online SATA 3.638 TB On
Port 0 - 3 HGSTHDN724040AL ATA Online SATA 3.638 TB On
Port 0 - 3 HGSTHDN724040AL ATA Online SATA 3.638 TB On

Virtual Drive(s):---
TARGET ID NAME CAPACITY STATE RAID LEVEL MegaRAID RECOVERY
3 - 21.829 TB Optimal RAID 6 NO
1 - 118.719 GB Optimal RAID 1 NO
2 - 111.281 GB Optimal RAID 1 NO


Great story MrBucket01. I have had a similar experience with the same drives: one just died out of nowhere, the second started getting bad sectors in September, and now the warranty replacement is showing the same behaviour!

 

So I have just replaced the final one with another 4TB Seagate (we will see how this pans out), but I am confident it should be better quality.

 

I have updated my original post, as I now have a 14TB array.

| Intel Core i7 7600K | ASRock H170M ITX | 16GB DDR4 | OCZ GTX1060 6GB  |
| Crucial M4 256GB | Samsung 850 EVO 250GB  | Antec True Power 650 | Fractal Design Node 304 | Coolermaster 212+ |


Great story MrBucket01. I have had a similar experience with the same drives: one just died out of nowhere, the second started getting bad sectors in September, and now the warranty replacement is showing the same behaviour!

 

So I have just replaced the final one with another 4TB Seagate (we will see how this pans out), but I am confident it should be better quality.

 

I have updated my original post, as I now have a 14TB array.

 

The biggest mistake I made was not reading the fine print.

 

The ST3000DM001 is only rated for 8 hours a day, 5 days a week of use, or 2080 hours a year. I don't have the exact numbers, but most of the drives died at around the 9000-hour mark, roughly 1 year of 24/7 usage and about 4x the rated usage per year, so they were definitely being abused.
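
Rough math on that rating: 8 h/day x 5 days/week x 52 weeks = 2080 h/year, while 24/7 operation is about 8760 h/year, so ~9000 h is roughly one calendar year of always-on use and about 4.3x the annual duty-cycle rating.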

 

I had 5 of the early 7200.11 1TB models from Seagate, and they all had the defective firmware that would cause the drives to drop out and make the OS hang intermittently. The firmware updates helped, but it was still pretty bad. I gave Seagate another chance because the price on these drives was competitive and, at the time of purchase, they had 4.5 stars on Newegg.

 

Needless to say, I'll pretty much never touch a Seagate drive again.


Does anybody have any experience with the new Seagate Archive 8TB drives? Curious if I would notice any performance difference, especially in my array (Storage Spaces), compared with my WD Greens.


Does anybody have any experience with the new Seagate Archive 8TB drives? Curious if I would notice any performance difference, especially in my array (Storage Spaces), compared with my WD Greens.

 

You can read above for my horrible track record with Seagate. But I'm definitely biased.

 

With that said, looking at the spec sheet, the specs don't seem terrible: 800k hours MTBF is 200k hours short of the MTBF on WD Reds and HGST NAS drives, but it is rated for a 24/7 workload and comes with a 3-year warranty. I also couldn't find any mention of TLER or a similarly named feature; that doesn't confirm whether the drives do or don't have it, but it is worth mentioning. Had I not been burned twice already, I might be willing to give one of those drives a chance.

 

As for your question on performance, the spindle RPM is not listed anywhere on their website or spec sheet, which most likely means it's not 7200 rpm, probably 5400 rpm. But given you are on WD Greens, I doubt you will see any less performance... RAID on WD Greens is usually frowned upon.

 

Sometimes being on the bleeding edge isn't a good thing. SMR is extremely new, and those drives haven't made their way to the masses yet. I would wait until the tech sites get their hands on them, do reviews, and give us some actual numbers and information.


Sorry to hear about your troubles; glad the new drives solved your problems. Hopefully we both made a good choice in drives. I have the same pr0n as you and I'm running them in the RAID 6 equivalent under ZFS. :-) I'm waiting for the price to drop back to or below $160 so I can buy four more of them to complete my NAS allocation. When I bought my first allotment, I ran into the same issue with the 5-drive limitation. I think when I purchased them, they were $165 each.

nOyE3fN.jpg


Workstation 1: Intel i7 4790K | Thermalright MUX-120 | Asus Maximus VII Hero | 32GB RAM Crucial Ballistix Elite 1866 9-9-9-27 ( 4 x 8GB) | 2 x EVGA GTX 980 SC | Samsung 850 Pro 512GB | Samsung 840 EVO 500GB | HGST 4TB NAS 7.2KRPM | 2 x HGST 6TB NAS 7.2KRPM | 1 x Samsung 1TB 7.2KRPM | Seasonic 1050W 80+ Gold | Fractal Design Define R4 | Win 8.1 64-bit
NAS 1: Intel Intel Xeon E3-1270V3 | SUPERMICRO MBD-X10SL7-F-O | 32GB RAM DDR3L ECC (8GBx4) | 12 x HGST 4TB Deskstar NAS | SAMSUNG 850 Pro 256GB (boot/OS) | SAMSUNG 850 Pro 128GB (ZIL + L2ARC) | Seasonic 650W 80+ Gold | Rosewill RSV-L4411 | Xubuntu 14.10

Notebook: Lenovo T500 | Intel T9600 | 8GB RAM | Crucial M4 256GB


Sorry to hear about your troubles; glad the new drives solved your problems. Hopefully we both made a good choice in drives. I have the same pr0n as you and I'm running them in the RAID 6 equivalent under ZFS. :-) I'm waiting for the price to drop back to or below $160 so I can buy four more of them to complete my NAS allocation. When I bought my first allotment, I ran into the same issue with the 5-drive limitation. I think when I purchased them, they were $165 each.

 

 

 

 

I went back and looked at your build log. When you do expand, you should look into a SAS expander; I got mine off eBay decently cheap. You can read about it in my UPDATE1, click the link in my sig. You would use a reverse breakout cable and go from the 8 LSI SATA ports on your mobo to the 2 inputs on the SAS expander. Then you would have 6 SAS ports to do what you want with, either SAS -> SAS or a SAS breakout cable. Now that I think about it, if you check eBay for Supermicro 4U cases, a lot of those come with built-in SAS expanders.

 

How is ZFS on Ubuntu? I considered it when I started this whole server thing, but that was 5 years ago, and at that time ZFS on Linux was still pretty new. I just didn't like the fact that I was going to be running a filesystem that recommended I always run the latest and greatest version or risk losing my data. I'm more of a fan of the "if it isn't broke, don't fix it" mentality, so I went with an XFS filesystem instead. If ZFS ever actually gets to a point of being done, I'd like to give it a try. Have you ever seen it "heal" a file?


I'm definitely familiar with SAS expanders but haven't implemented one in my own build yet. I'll certainly consider them down the road. I have four more SATA ports that I can connect drives to, which fills out the chassis for this NAS. I'll need to move into a bigger case if I want to grow beyond that, much like the Supermicro case or one of those Norco cases. I'd have preferred the breakout cables for much less clutter in my chassis, but I was able to manage it a bit better than those initial pictures in my build thread show. The issue I might run into with my setup is the built-in LSI adapter already breaks out the 8 ports as SATA ports. I'm not sure I can connect those to a SAS expander. My fall-back was to add another LSI adapter and do as you've suggested.

 

So far ZFS has run perfectly for me in this setup. I built this NAS specifically with ZFS in mind because my new job focuses on it for the product we build, and I wanted a reason to learn more at home on my own time. I like the options it offers for expansion, snapshots, and data integrity via checksums (bit-rot protection), among other things. I've done periodic zfs scrubs and they have yet to return an error that needed to be fixed ("healed"), as you asked about. When ZFS writes a block it stores a SHA-256 checksum for it and keeps a pointer to that checksum; the pointer is also checksummed for increased data integrity. There is also self-healing, which can occur because I'm using RAID-Z in my setup. No other mainstream filesystem protects against this issue that I'm aware of. Performance has been very good so far; in all cases I'm limited by my 1GbE adapter. I'm also using an SSD as L2ARC for cached reads.
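
For anyone curious, the commands behind that are simple; a minimal sketch assuming a pool named tank (the pool name is a placeholder):

zpool scrub tank        # read every block and verify it against its checksum
zpool status -v tank    # scrub progress, per-device checksum error counts, and any files ZFS could not repair

With RAID-Z (or mirrors), a block that fails its checksum is rebuilt from redundancy and rewritten in place, which is the "healing" being asked about.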

 

I'd say it's worth giving ZFS another look, even if just to experiment. It offers a nice way to detach yourself from a physical RAID card's proprietary environment, and it would allow you to export a pool and import it on another system down the road once you migrate to newer hardware.

Workstation 1: Intel i7 4790K | Thermalright MUX-120 | Asus Maximus VII Hero | 32GB RAM Crucial Ballistix Elite 1866 9-9-9-27 ( 4 x 8GB) | 2 x EVGA GTX 980 SC | Samsung 850 Pro 512GB | Samsung 840 EVO 500GB | HGST 4TB NAS 7.2KRPM | 2 x HGST 6TB NAS 7.2KRPM | 1 x Samsung 1TB 7.2KRPM | Seasonic 1050W 80+ Gold | Fractal Design Define R4 | Win 8.1 64-bit
NAS 1: Intel Intel Xeon E3-1270V3 | SUPERMICRO MBD-X10SL7-F-O | 32GB RAM DDR3L ECC (8GBx4) | 12 x HGST 4TB Deskstar NAS | SAMSUNG 850 Pro 256GB (boot/OS) | SAMSUNG 850 Pro 128GB (ZIL + L2ARC) | Seasonic 650W 80+ Gold | Rosewill RSV-L4411 | Xubuntu 14.10

Notebook: Lenovo T500 | Intel T9600 | 8GB RAM | Crucial M4 256GB


The issue I might run into with my setup is the built-in LSI adapter already breaks out the 8 ports as SATA ports.  I'm not sure I can connect those to a SAS expander.  My fall-back was to add another LSI adapter and do as you've suggested.

I believe something like this should work just fine. I have a buddy who has a board with a built-in LSI controller similar to yours. He has the same case as me, a Norco 4224, and he uses that cable connected to one of the backplanes; the other 5 backplanes are connected to a hardware RAID card.

 

It lets him pass drives through to the OS while keeping them in the hot-swap bays. His chassis looks a lot nicer than mine because of it: I have one 3TB drive actually mounted, and the other 2.5" drives I either wedged between the PSU and the wall or buried under cables to keep them from moving around, lol.


All this talk of dodgy Seagate drives... I was about to weigh in on how reliable mine have been over the last 4-5 years, especially given the unbelievable failure rate of the Samsung F3 and F4 series drives I've owned. I was about to attest that out of my 8x Seagate ST2000DM001 drives I've never had a failure...

 

seagate

 

Until today... one failed last night at 03:48 apparently, during a nightly VM backup... bit miffed.

I'm on a horse...


Gaming Rig | Storage Server | Virtual Server | HTPC

