
LTT Storage Rankings

Recommended Posts

Thank you. I know that model has a bad rep, but I weighed that against paying more for drives like the WD Red 3TB.

 

That model now seems to come with a 2-year warranty, compared to the 1-year warranty it had before, so I presume they've made some changes that allow for that. I guess I'll replace them with WD Reds as they fail (if they do fail). One of the drives I've had for 1.5 years and it's still running fine. I may actually just replace them with Seagate NAS drives as they fail, since I've always had a good experience with Seagate.

 

I had 2x ST3000 drives die while in warranty. They suddenly stopped working with no warning, and I could kiss my data goodbye.

I will never buy Seagate again.


Main Rig: i7 5960X - 16GB DDR4 LPX 2400MHz - Asus Rampage V - GTX 980 Ti - AX1200i - 650D - H110 - 1TB 840 EVO SSD - 30 inch HP 1600p IPS + 2x HP 20 inch 1600x1200 PLP setup! - Oculus DK2 - Logitech G502 - Corsair K70 RGB - Fanatec GT3 Wheel

i7 3930K - 16GB DDR3 2133MHz - Asus Rampage IV - 2x GTX 680 - AX850 - 550D - H100i - 256GB 830 Pro SSD - 2713HM 1440p IPS - G5 -

i5 3570K - MSI SLI Z87 motherboard - 4GB Corsair 1600MHz - Adaptec 31205 - Dell Perc 5/i - Intel Quad PT1000 Gigabit Network Card - Loads of 4TB Reds - ±70TB total storage - HX850 - Ri-Vier 24-bay 4U case

Link to post
Share on other sites

I had 2x ST3000 drives die while in warranty. They suddenly stopped working with no warning, and I could kiss my data goodbye.

I will never buy Seagate again.

 

Meh, all brands have certain models that have issues.

 

I work for HGST and have a bunch of Seagate 4TB drives in my server that I bought before I was employed at HGST, and I haven't had any issues with any of them. I don't plan on switching to WD or HGST drives (we are owned by WD) until these Seagates fail, and I hope (and expect) that won't be for a good, long while.


SSD Firmware Engineer

 

| Dual Boot Linux Mint and W8.1 Pro x64 with rEFInd Boot Manager | Intel Core i7-4770k | Corsair H100i | ASRock Z87 Extreme4 | 32 GB (4x8gb) 1600MHz CL8 | EVGA GTX970 FTW+ | EVGA SuperNOVA 1000 P2 | 500GB Samsung 850 Evo |250GB Samsung 840 Evo | 3x1Tb HDD | 4 LG UH12NS30 BD Drives | LSI HBA | Corsair Carbide 500R Case | Das Keyboard 4 Ultimate | Logitech M510 Mouse | Corsair Vengeance 2100 Wireless Headset | 4 Monoprice Displays - 3x27"4k bottom, 27" 1440p top | Logitech Z-2300 Speakers |

Link to post
Share on other sites

...

 

ss+(2015-02-02+at+09.05.27).png

...

lol "small data array" 101 TB

Link to post
Share on other sites

I had to shut down the NAS today due to a short in one of my wall plugs right above it (sparks flying and everything, the wires and plugs are from shortly after WW2 and that particular plug had been falling apart for some time now). 

So before I turned on the electricity again I attached a wall plug meter to the NAS' plug to finally measure it. 

 

Apparently it's using 50-55W at idle. 

I started to play a movie (full-size Blu-Ray rip) and it went to 60W. 

Then I copied a 30GB folder to it (from one of the SSDs inside the PC) and the meter was jumping between 72 and 97W while writing at 110MB/s.

 

According to Task Manager I'm maxing out the PC's gigabit Ethernet port at that speed, so until I upgrade the network there are no bottlenecks in the NAS, despite it running half the suggested RAM.
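For reference, a quick sanity check (rough numbers; iperf3 is just one way to test, assuming it's installed on both ends) shows ~110 MB/s is simply the gigabit link's ceiling rather than the NAS struggling:

# 1 Gbit/s = 125 MB/s raw; expect ~110-118 MB/s after TCP/IP + SMB overhead
echo "$(( 1000000000 / 8 / 1000000 )) MB/s raw gigabit ceiling"

# Optional: measure raw TCP throughput with the disks out of the picture
# (hostname is a placeholder)
iperf3 -s              # on the NAS
iperf3 -c nas.local    # on the PC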

 

Pretty pleased with those results. Sure, it'd be nice if I could get it to use less at idle, but before this I was running a WD EX4 as well as a LaCie 2big, and those two combined used about the same power while having less capacity, no redundancy and less than half the speed.

Link to post
Share on other sites

Oh well, here's an update from me.

 

No pics or storage space changes, but I finally reinstalled my server with Windows Server 2012 R2. It seems a bit slow; I don't know why. I've also upgraded the server from an AMD 1090T to an FX-8350, and swapped the motherboard from a 990FX to an 890GX so I can use the on-board graphics and get rid of the dedicated GPU (it was a 7770). I've also upgraded the RAM from 8GB to 16GB; I had it lying around here and thought, why not use it? The upgrade took some trouble though: the GX board didn't (and officially doesn't... yeah, I know) support the FX-8350. It initially didn't work at all, and I had to swap the old CPU back in to upgrade the BIOS before it would work.

 

It's all running great, and it's been running for weeks now. The failed drive has never failed again, so everything finally seems fine. I've been thinking of getting a better case with more drive slots so I can add another RAID card and the old 2TB drives, but I can't find a good case. So that will have to wait for now...


I have no signature

Link to post
Share on other sites

Wow, nothing's been said in here for a while.

 

I've had my RAID 6 degrade a few more times with timeouts, and it's actually driving me crazy now. This past weekend 2 drives failed. After a restart they worked fine and I rebuilt the array. Once that was done I swapped the cables for new ones (3ware). Within a day another drive failed with a timeout error. Are these cables just crap? What are good ones? Are the drives crap? How can I find out? Or could it be some other hardware problem, like the motherboard or PSU? So far it's never been the same drive twice; it's always been a different drive. The HDDs in slots 1, 4, 5 and 6 have failed so far.

 

I have no idea what the problem could be. I was thinking PSU, but 520W should be able to handle an FX-8350, on-board video, 2 PCIe cards without extra power connectors, a few case fans and 11 HDDs, right? I don't have the time at the moment to swap out everything. Could the motherboard be the problem, since it doesn't officially support the 8350? Could the Windows version be a problem? (2012 R2)
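For the "how can I find out?" part: one hedged way to rule the drives in or out is pulling SMART data with smartmontools (it also ships for Windows); the device names and 3ware port number below are only examples:

# Directly attached drive (device name is an example)
smartctl -H -A /dev/sda

# Drive behind a 3ware controller on Linux, addressed by port number
smartctl -H -A -d 3ware,0 /dev/twa0

# Reallocated/pending sectors suggest a dying drive; a rising
# UDMA_CRC_Error_Count instead points at cables or the backplane.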

 

Anyone have any ideas? Please help :(


I have no signature

Link to post
Share on other sites

All the drives are well above the SMART thresholds; there's nothing wrong with the drives themselves. They just time out and drop out of the array. After I restart the server I put them back in the array and it rebuilds successfully (it takes about 16 to 17 hours, maybe more).

 

What I'm doing with the server when I think they all dropped out (I'm 100% sure this is what happened the last 2 times) is encoding some Blu-rays I'm backing up to it. So the only thing I can think of is that the PSU is just getting old and unable to handle it all. I've just replaced the PSU with an 850W Cooler Master M2 I had lying around (the old one was a 520W Cooler Master). I'm just hoping this is the problem. If it's not, I'm going to try replacing the motherboard and hope the not-officially-supported CPU is the problem, but that seems unlikely to me.
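For what it's worth, a rough back-of-envelope power budget (nominal figures, not measurements) suggests the 520W unit should cope at steady state but gets less comfortable at spin-up or as it ages:

# Nominal estimates only:
#   FX-8350 at full encode load : ~125 W (TDP)
#   11 HDDs, ~8 W each active   : ~88 W  (spin-up surge ~25 W each)
#   Board, RAM, fans, HBA cards : ~50 W
echo "steady state : $(( 125 + 11 * 8 + 50 )) W"
echo "spin-up peak : $(( 125 + 11 * 25 + 50 )) W"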


I have no signature

Link to post
Share on other sites

After putting up with random Samba disconnects for the last 2 years or so (supposedly a NIC driver issue with ASRock motherboards), I decided to ditch Ubuntu and give Windows a shot at handling my storage. I did a clean install of Windows 10 and used a tool from DiskInternals called Linux Recovery to access the existing LVM volumes (read-only). I wiped 4 of the drives, created a storage pool, then used the recovery tool to transfer everything over from the old volume to the new space. It only took 2 days to transfer. Once everything was transferred I wiped out the remaining LVM volume and created another storage pool. I set things up with 4 drives in one pool, "Archive", and the other 4 drives in another pool, "Backup". Now I'm just waiting for the first pool to sync over to the new pool.

The next project will be compressing some of the Blu-ray backups I don't think are worth keeping uncompressed. I did a test with Yellowbeard using HandBrake and got it down from 22GB to 10GB just using the AppleTV 3 preset. Only a little over a day before I can start on that, something to do for the weekend.  ^_^
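If it helps, the same compression pass could be scripted with HandBrakeCLI; the preset name matches the GUI preset mentioned above (available in HandBrake builds of that era), and the paths are placeholders:

# Batch-compress rips with the AppleTV 3 preset (paths are examples)
for f in /mnt/archive/rips/*.mkv; do
  HandBrakeCLI -i "$f" -o "/mnt/archive/compressed/$(basename "${f%.mkv}").m4v" --preset "AppleTV 3"
done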

RSync
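A minimal rsync sketch for keeping one pool mirrored to the other (paths are placeholders; run the --dry-run line first to preview what would change):

rsync -a --delete --progress --dry-run /mnt/archive/ /mnt/backup/
rsync -a --delete --progress           /mnt/archive/ /mnt/backup/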

 

 

Link to post
Share on other sites

Had the two HGST Ultrastar 3TB drives that I put in next to each other in their rack die at the same-ish time. I'm really at a loss now for what's going on, since those drives are supposed to be reliable-as-the-sun bulletproof.

 

I'm taking several steps now to try to mitigate this. I'm preemptively replacing all but 2 of the remaining Seagates with WD Red Pros, so I'll have 4 WD Red Pros, 12 HGST Deskstars, and 10 Seagates. Next, I'll be physically decoupling the racks with 1"-thick neoprene foam. It could be my washing machines vibrating these things to death, but I can't find a correlation between me doing laundry and drives dying 5 minutes later. I'll probably also decouple the 4-bay enclosures from each other; right now 2 enclosures are stacked on each other and joined with a screw (8 drives each x 2 of those towers + 8 drives in the Node 804). Last, I'm going to change my RAID 6 span arrangement so that no two drives that are physically next to each other are in the same span. This should reduce the chance that one drive taking out the drive above or below it also takes down the entire RAID 60 array; I've noticed a couple of times now where that's been the situation.

 

I'm *this* close to doing an 8x8x8 RAID 60 configuration instead of 12x12, so I'd have 6 drives' worth of redundancy instead of 4.
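The span math behind that, assuming 24 drives and standard RAID 60 (striped RAID 6 spans, 2 parity drives per span):

# 12x12: two 12-drive RAID 6 spans
echo "12x12 usable drives: $(( 2 * (12 - 2) )), parity drives: $(( 2 * 2 ))"
# 8x8x8: three 8-drive RAID 6 spans
echo "8x8x8 usable drives: $(( 3 * (8 - 2) )), parity drives: $(( 3 * 2 ))"
# Either way, each individual span still only survives 2 simultaneous failures.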


Workstation: 8600k @ 4.6Ghz || ASRock Z390 Taichi Ultimate || Gigabyte 1080Ti || G.Skill DDR4-3800 @ 2666 4x8GB || Corsair AX1500i || 25 gallon whole-house loop.

HTPC/GuestGamingBox: Optoma HD142X 1080p Projector || 7600K@ 4.6 || Gigabyte Z270 Gaming 9  || EVGA Titan X (Maxwell) || Corsair RM650x || CPU+GPU watercooled 280 rad pull only.

Server Router (Untangle): 8350K @ 4.5Ghz || ASRock Z370 ITX || 2x8GB || EVGA G3 750W || CPU watercooled, 25 gallon whole-house loop.

Server VM/Plex/HTTPS: E5-2699v4 (22 core!) || Asus X99m WS || GT 630 || Corsair RM650x || CPU watercooled, 25 gallon whole-house loop.

Server Storage: Pent. G3220 || Z87 Gryphon mATX || || LSI 9280i + Adaptec + Intel Expander || 4x10TB Seagate Enterprise Raid 6, 3x8TB Seagate Archive Backup, Corsair AX1200i (drives) Corsair RM450 (machine) || CPU watercooled, 25 gallon whole-house loop.

On the Shelf: EVGA X99 micro2, 780, 740 GT, 210 w/ DVI port unsoldered (Hint: it can be done but it ain't easy). 

Laptop: HP Elitebook 840 G3 (Intel 8350U).

Link to post
Share on other sites

Update: also I'm saying "fuck it" and pulling out all the stops.  4790K is coming out of the server, E3-1276v3 going in.  It'll be a little bit slower but I want ECC memory.


Workstation: 8600k @ 4.6Ghz || ASRock Z390 Taichi Ultimate || Gigabyte 1080Ti || G.Skill DDR4-3800 @ 2666 4x8GB || Corsair AX1500i || 25 gallon whole-house loop.

HTPC/GuestGamingBox: Optoma HD142X 1080p Projector || 7600K@ 4.6 || Gigabyte Z270 Gaming 9  || EVGA Titan X (Maxwell) || Corsair RM650x || CPU+GPU watercooled 280 rad pull only.

Server Router (Untangle): 8350K @ 4.5Ghz || ASRock Z370 ITX || 2x8GB || EVGA G3 750W || CPU watercooled, 25 gallon whole-house loop.

Server VM/Plex/HTTPS: E5-2699v4 (22 core!) || Asus X99m WS || GT 630 || Corsair RM650x || CPU watercooled, 25 gallon whole-house loop.

Server Storage: Pent. G3220 || Z87 Gryphon mATX || || LSI 9280i + Adaptec + Intel Expander || 4x10TB Seagate Enterprise Raid 6, 3x8TB Seagate Archive Backup, Corsair AX1200i (drives) Corsair RM450 (machine) || CPU watercooled, 25 gallon whole-house loop.

On the Shelf: EVGA X99 micro2, 780, 740 GT, 210 w/ DVI port unsoldered (Hint: it can be done but it ain't easy). 

Laptop: HP Elitebook 840 G3 (Intel 8350U).

Link to post
Share on other sites

Update: also I'm saying "fuck it" and pulling out all the stops.  4790K is coming out of the server, E3-1276v3 going in.  It'll be a little bit slower but I want ECC memory.

 

+1 for the ECC memory! A must-have, personally.

Link to post
Share on other sites

Found out that only the C-series chipsets meant for server boards actually support ECC. The Z87 (or any Z/H board) might allow you to boot and run with ECC memory, but only with the ECC functionality turned off.

 

So I ordered: http://www.newegg.com/Product/Product.aspx?Item=N82E16813121781

 

It conflicts with the ECC RAM I bought because the board wants 1.35V memory (DDR3L) and I bought 1.5V memory, but according to the Intel support forum, a customer service rep said other customers have reported no issues running 1.5V memory in a 1.35V slot. I'll just memtest it to make sure it's fine... I don't care about putting a bit more load on the VRMs or the memory controller. I also like that Passmark's MemTest86 looks like it can inject ECC errors to make sure ECC is actually working.
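As a cross-check (a hedged alternative for anyone on a Linux live USB rather than Windows), the DMI tables and the kernel's EDAC counters will show whether ECC is actually wired up and active:

# 72-bit total width vs 64-bit data width indicates ECC DIMMs
sudo dmidecode -t memory | grep -E "Total Width|Data Width|Error Correction"
# Corrected-error counters appear here once the EDAC driver loads
grep . /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null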


Workstation: 8600k @ 4.6Ghz || ASRock Z390 Taichi Ultimate || Gigabyte 1080Ti || G.Skill DDR4-3800 @ 2666 4x8GB || Corsair AX1500i || 25 gallon whole-house loop.

HTPC/GuestGamingBox: Optoma HD142X 1080p Projector || 7600K@ 4.6 || Gigabyte Z270 Gaming 9  || EVGA Titan X (Maxwell) || Corsair RM650x || CPU+GPU watercooled 280 rad pull only.

Server Router (Untangle): 8350K @ 4.5Ghz || ASRock Z370 ITX || 2x8GB || EVGA G3 750W || CPU watercooled, 25 gallon whole-house loop.

Server VM/Plex/HTTPS: E5-2699v4 (22 core!) || Asus X99m WS || GT 630 || Corsair RM650x || CPU watercooled, 25 gallon whole-house loop.

Server Storage: Pent. G3220 || Z87 Gryphon mATX || || LSI 9280i + Adaptec + Intel Expander || 4x10TB Seagate Enterprise Raid 6, 3x8TB Seagate Archive Backup, Corsair AX1200i (drives) Corsair RM450 (machine) || CPU watercooled, 25 gallon whole-house loop.

On the Shelf: EVGA X99 micro2, 780, 740 GT, 210 w/ DVI port unsoldered (Hint: it can be done but it ain't easy). 

Laptop: HP Elitebook 840 G3 (Intel 8350U).

Link to post
Share on other sites

Actual OP related stuff:

OS: FreeNAS

Raid: RAIDz2

CPU: Intel Pentium G3220 @ 3.00GHz

RAM: Kingston 16GB ECC

Motherboard: Asrock server board

PSU: Enermax 430w 80+gold

HDD: 5x 3TB WD Reds = 15TB theoretical total as JBOD, but due to RAIDZ2 I only have a usable 8.5-9TB :( Just under the 10TB mark, but I'd rather have the redundancy, and I think redundancy still counts, so I can still join the 10TB club :)
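The capacity math works out roughly like this (back-of-envelope; ZFS metadata and TB-vs-TiB reporting eat a bit more):

# RAIDZ2 reserves two drives' worth of parity
echo "raw    : $(( 5 * 3 )) TB"
echo "usable : $(( (5 - 2) * 3 )) TB before filesystem overhead"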

 

Initially, when I built this home server, I only had 2 drives and 8GB of RAM, but after it worked well for a year I upgraded it to 5 drives and 16GB of RAM this summer. I'm only now posting this since I sort of forgot / didn't have time.

 

You have 35TB of storage?? I always wonder how people can fill up such a huge amount of space. I'm not even able to fill up a 1TB drive...

Once you have a tonne of storage your perception of file sizes and usage changes. Over time you care less and less about how much of it you're using, and so you don't care that you have the same file(s) in several different places. Data hoarding also becomes a problem, because you start thinking that maybe one day a file will be useful again. Although, in a way, that's kind of true: it's really nice if you manage to download a mod before the site you got it from goes down, and you have it stored somewhere in case you ever want to replay the game the mod is for.

Link to post
Share on other sites
Hardware

CASE: Fractal Define R5

PSU: 1x 660W Platinum Seasonic

MB: supermicro x10sl7-f

CPU: Intel Xeon E3-1220

RAM: 16GB DDR3 1333 (ECC)

 

Drives

4x WD Green 2TB

2x WD Green 1TB

2x Crucial M550 256gb

1x Samsung 840 pro 128gb

RAW CAPACITY: 10TB (not including SSDs)

 

Software and Configuration:

Operating System: FreeNAS 9.3

There are 2x RAID 1 (mirror) arrays configured with the 2TB drives, so I have two pools of 2TB each. The 256GB SSDs are also configured as RAID 1, and the 2x 1TB drives are configured as RAID 0.
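Roughly, those pools map onto ZFS like this (a minimal sketch; in FreeNAS you'd normally do this through the GUI, and the pool and device names here are made up):

# FreeBSD-style device names (ada0, ada1, ...) are placeholders
zpool create tank1   mirror ada0 ada1   # first pair of 2TB Greens
zpool create tank2   mirror ada2 ada3   # second pair of 2TB Greens
zpool create ssdpool mirror ada4 ada5   # 2x 256GB M550, mirrored
zpool create scratch ada6 ada7          # 2x 1TB striped, no redundancy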

 

Usage:

This array was initially meant to store all my movies and TV shows. However, I upgraded my crappy 3Mbps ADSL2+ line to a 100/40Mbps fibre line :D and I've noticed I don't store media files like before; I just stream everything. So I'm going to wait till these Greens die, then get some WD Reds and configure a backup server first.

I still need to find other uses for this server, any suggestions? :P Since loading FreeNAS onto this machine in July, the CPU has been idle 98.23% of the time. I want to make use of the Xeon somehow.

 

 

Photos:

I'm sorry I don't have any good photos of the machine; next time I open it up I will definitely take some better ones. My server is underneath the network switch and behind my pfSense box and HP MicroServer. It's a side view of the case.

 

post-2777-0-98045200-1446469811_thumb.jp

 

post-2777-0-04759700-1446469406.png

 

post-2777-0-62960200-1446469383.png

 

Link to post
Share on other sites

-snip-

 

 

Looks good. I have to admit those HP MicroServers are really quite capable; I've even deployed one as a network domain server.


Main Machine:  27inch iMac 5k Retina (Mid 2017), Core i5-7600, 1TB Fusion Drive, 8GB DDR4-2400MHz, Radeon Pro 570 4GB, MacOS Mojave

Network Gear: Dell PowerEdge T110, TP Link Gigabit 24 Port Switch, Sky Router, Asus Wireless Access Point.

Mobile Machine:  15inch MacBook Pro (Mid 2010), Core i5-540M, 240 GB SSD, 8GB DDR3-1067MHz, nVidia GeForce GT 330M 256MB, MacOS High Sierra

Other Tech: iPhone XS Max, Series 4 Apple Watch (LTE), AirPods, PS4, Nintendo Switch,PS3, Xbox 360, 20inch iMac G4, 30inch Apple HD Cinema Display

Link to post
Share on other sites

Looks good. I have to admit those HP MicroServers are really quite capable; I've even deployed one as a network domain server.

Yeah, it served me well as my old storage server; now it just collects dust, sadly :(

Link to post
Share on other sites

http://imgur.com/a/itGaB

 

I have a large media collection ripped from my DVD and BluRay collections, plus lots of backups of machines I've owned through the years that I can't bring myself to delete (it's fun to look through files from machines from 20 years ago; it's like digital archaeology).

 

My storage solution has grown organically through a number of iterations over many years, including home built Linux servers and Infrant ReadyNAS appliances before NetGear bought them.

 

However it's now time to put something together that will last me a few more years as my current solution is creaking at the seams! 

 

This is very much a work in progress migration project. I will post an update once I have migrated all the disks and data and can take some proper photos when everything is racked properly. :-)

 

My current (soon to be deprecated) solution:

  • 1x QNAP TS-859+ Pro
    • 8x 2TB WDC WD2202FYPS-01U1B04.0 disks in Linux software RAID6
    • Dual gigabit network connections
  • 2x DataTale 4-bay DAS connected via eSATA <-> USB3
    • 4x 3TB disks in RAID5
    • 4x 4TB disks in RAID5
  • 1x Tranquil PC Abel H22 i5 Plex server on top of Debian Jessie
    • 1x 512G m.2 SSD
    • 1x 2TB 2.5" HDD
nicolaw@castamere:~$ df -hTP /{u1,mnt/{esata?,multimedia}}
Filesystem         Type     Size  Used Avail Use% Mounted on
/dev/mapper/hdd-u1 ext4     1.8T  992G  749G  57% /u1
/dev/sde1          fuseblk   11T   11T  145G  99% /mnt/esata1
/dev/sdf1          ext4     8.2T  7.6T  148G  99% /mnt/esata2
nas:/Multimedia    nfs4      11T   11T   92G 100% /mnt/multimedia
nicolaw@castamere:~$

 

New (work-in-progress) solution:

  • 4U Supermicro SC847A server running FreeNAS 9.3
    • ​Gubbins:
      • 1x Supermicro X9DRH-IF motherboard
      • 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz
      • 4x Crucial 8GB DDR3-1600 ECC DIMMS
      • 5x LSI / Avago SAS 9211-8i Host Bus Adapter flashed with "IT" firmware
    • ​System disks:
      • 2x Samsung SSD 850 EVO 500GB - jails mirrored zpool (connected via SAS2)
      • 2x 120G SandForce Driven SSDs KINGSTON SV300S37A120G - freenas-boot mirrored zpool (connected via SATAIII)
    • Test disks:
      • 7x 80GB Seagate Barracuda 7200.10 ST380815AS (initially purchased for £4/ea on eBay to test the chassis)
      • 1x 4TB HGST Deskstar
    • Data pool disks:
      • 14x 3TB HGST Deskstar NAS HDN724030ALE640 (connected via SAS2) - pool1 zpool
        • 7x 3TB disks in raidz2 vdev
        • 7x 3TB disks in raidz2 vdev
        • Planning on adding 5 more 7x 3TB raidz2 vdevs to the same pool (a rough zpool add sketch follows the output below)
brienne# zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot   111G   838M   110G         -      -     0%  1.00x  ONLINE  -
jails          460G  16.4G   444G         -     2%     3%  1.00x  ONLINE  /mnt
pool1           38T  12.4T  25.6T         -    16%    32%  1.00x  ONLINE  /mnt
testpool1     3.62T  2.67T   977G         -    21%    73%  1.00x  ONLINE  /mnt
brienne#
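For anyone curious, growing that pool by one more 7-disk raidz2 vdev would look roughly like this (device names are placeholders; -n previews the resulting layout without committing):

zpool add -n pool1 raidz2 da14 da15 da16 da17 da18 da19 da20
zpool add    pool1 raidz2 da14 da15 da16 da17 da18 da19 da20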

WWW: https://nicolaw.uk   CASE: Supermicro SuperChassis 847A-R1400LPB   MB: Supermicro X9DRH-IF   CPU: 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz   RAM: 80GB ECC DDR3 1333   NIC: Intel E10G42BFSR X520-SR2   HBA: 5x LSI 9211-8i 6GB/s SAS HBA 'IT' Firmware   HDD/SSD: 2x 120GB Kingston SV300S37A120G SSD, 2x 500GB Samsung 850 EVO SSD, 8x 8TB Seagate ST8000AS0002-1NA17Z, 21x 3TB Hitachi HGST HDN724030ALE640, 4x 4TB Hitachi HGST HDS724040ALE640, 3x 3TB Western Digital Red WDC WD30EFRX-68AX9N0

Link to post
Share on other sites

-snip-

 

Just wanted to ask: why do you need five LSI HBA cards for your server in progress? You know the Supermicro chassis has an LSI expander backplane, right (all 24 drives are expanded from 2 SAS 6Gb/s connectors)? You only need to feed two SAS cables to it from a single 8i HBA card (unless you want two for redundancy).

Link to post
Share on other sites

Just wanted to ask: why do you need five LSI HBA cards for your server in progress? You know the Supermicro chassis has an LSI expander backplane, right (all 24 drives are expanded from 2 SAS 6Gb/s connectors)? You only need to feed two SAS cables to it from a single 8i HBA card (unless you want two for redundancy).

 

Full bandwidth to each disk. (I got the HBAs for £60/ea rather than full price, so it wasn't heinously expensive to get the best bang for buck out of the attached disks.)


WWW: https://nicolaw.uk   CASE: Supermicro SuperChassis 847A-R1400LPB   MB: Supermicro X9DRH-IF   CPU: 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz   RAM: 80GB ECC DDR3 1333   NIC: Intel E10G42BFSR X520-SR2   HBA: 5x LSI 9211-8i 6GB/s SAS HBA 'IT' Firmware   HDD/SSD: 2x 120GB Kingston SV300S37A120G SSD, 2x 500GB Samsung 850 EVO SSD, 8x 8TB Seagate ST8000AS0002-1NA17Z, 21x 3TB Hitachi HGST HDN724030ALE640, 4x 4TB Hitachi HGST HDS724040ALE640, 3x 3TB Western Digital Red WDC WD30EFRX-68AX9N0

Link to post
Share on other sites

Full bandwidth to each disk.

 

How do you plan to plug the HBAs to the disks though? Remove the LSI backplane?

 

I think getting full bandwidth to each disk isn't really worth it unless you're running SSDs, though. It would take quite a few hard drives to bottleneck just one of the HBA cards.
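Rough numbers behind that (nominal SAS figures, back-of-envelope only):

# One 9211-8i = 8 SAS lanes at 6 Gb/s, roughly 600 MB/s per lane after 8b/10b encoding
echo "per-drive share with 24 disks on one HBA: $(( 8 * 600 / 24 )) MB/s"
# ~200 MB/s each is already at or above what a 7200rpm drive sustains sequentially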

Link to post
Share on other sites

How do you plan to plug the HBAs to the disks though? Remove the LSI backplane?

 

I think getting full bandwidth to each disk isn't really worth it unless you're running SSDs, though. It would take quite a few hard drives to bottleneck just one of the HBA cards.

 

It's a JBOD backplane.


WWW: https://nicolaw.uk   CASE: Supermicro SuperChassis 847A-R1400LPB   MB: Supermicro X9DRH-IF   CPU: 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz   RAM: 80GB ECC DDR3 1333   NIC: Intel E10G42BFSR X520-SR2   HBA: 5x LSI 9211-8i 6GB/s SAS HBA 'IT' Firmware   HDD/SSD: 2x 120GB Kingston SV300S37A120G SSD, 2x 500GB Samsung 850 EVO SSD, 8x 8TB Seagate ST8000AS0002-1NA17Z, 21x 3TB Hitachi HGST HDN724030ALE640, 4x 4TB Hitachi HGST HDS724040ALE640, 3x 3TB Western Digital Red WDC WD30EFRX-68AX9N0

Link to post
Share on other sites

Here's my home server I built about a year ago :)

 

Hardware

CASE: Fractal Define R5

PSU: Seasonic G-series 450w 80+ Gold

MB: Supermicro X10SAE

CPU: Intel Xeon E3-1226v3

HS: Stock copper-base Intel Heat sink

RAM: Kingston 32 GB DDR3 1600 ECC

RAID Card: LSI 9271-8i

SSDs: 2x Intel DC3500 120GB

HDDs: 8x 4TB Western Digital Red

 

Software and Configuration

My server is running the Windows Server 2012 R2 OS. I'm using two RAID configurations: 2x Intel DC3500 SSDs in RAID 1 for the OS and apps, and RAID 50 for mass storage using the 8x WD Red HDDs. My total raw capacity for the HDDs is 32 TB, and 21.8 TB usable after the RAID 50 configuration.
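For anyone checking the math (two 4-drive RAID 5 spans striped together, one parity drive per span):

echo "usable: $(( 2 * (4 - 1) * 4 )) TB raw after parity (~21.8 TiB as Windows reports it)"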

 

Usage

I use this server primarily to store all my media, software and backups. I run Plex Media Server to stream content to devices on the network, and I do some virtualization for web development and game servers. The Define R5 case has sound-deadening panels and 3x 140mm fans (2x front, 1x rear), so the server runs very quiet; I even sleep in the same room as the server without issue!

 

Backup

I currently have no backup in place; however, I am planning to build a second box based on Seagate's 8TB Archive HDDs.

 

Photo

2q179k1.png


Software Developer by trade

My computer-based interests include programming, gaming, movies and TV series

(outdated) Check out my 32 TB storage server here

Link to post
Share on other sites
