
Wow, nothing said in here for a while.

 

I've had my RAID 6 degrade a few more times with timeouts. It's actually driving me crazy now. This past weekend two drives failed. After a restart they worked fine and I rebuilt the array. Once that was done I swapped the cables for new ones (3ware). Within a day another drive failed with a timeout error. Are these cables just crap? What are good ones? Are the drives crap? How can I find out? Or could it be some other hardware problem, like the motherboard? The PSU? So far it's never been the same drive twice; it's always been a different one. HDDs in slots 1, 4, 5 and 6 have failed so far.

 

I have no idea what the problem could be. I was thinking PSU, but 520W should be able to handle an FX-8350, on-board video, two PCIe cards without extra power, a few case fans and 11 HDDs, right? I don't have the time at the moment to swap out everything. Could the motherboard be a problem, since it doesn't officially support the 8350? Could the Windows version be a problem? (Server 2012 R2)
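Some rough numbers on the 520W question (every per-part wattage below is an assumption, not a measurement):

# Steady state:
#   FX-8350 under load         ~125W
#   board, RAM, fans, cards    ~ 60W
#   11 HDDs idle @ ~8W each    ~ 88W
#   total                      ~273W -> comfortably inside 520W on paper.
# The catch is spin-up: a 3.5" drive can pull ~25-30W on the 12V rail while
# starting, so 11 drives waking at once is a ~300W transient on one rail --
# exactly where an ageing PSU tends to sag first.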

 

Anyone have any idea? Please help :(

I have no signature


All drives are well above the threshold. There is nothing wrong with the drives themselves; they just time out and drop out of the array. After I restart the server I put them back in the array and it rebuilds successfully (takes about 16 to 17 hours, maybe more).
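For anyone else chasing the same thing, this is roughly how I'd check whether the drives support error recovery control (TLER/ERC); desktop drives without it can stall on an internal retry long enough for the controller to time them out and drop them. Device paths below are examples only:

# Overall SMART health and attributes:
smartctl -a /dev/sda
# Behind a 3ware controller, drives may need addressing through it (path varies by OS):
smartctl -a -d 3ware,0 /dev/twa0
# Query the error-recovery-control timeouts (many desktop drives don't support this):
smartctl -l scterc /dev/sda
# If supported, cap recovery at 7 seconds (values in 100ms units) so the drive
# gives up before the RAID controller's timeout kicks it out of the array:
smartctl -l scterc,70,70 /dev/sda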

 

What I'm doing with the server when I think they all dropped out (I'm 100% sure this is what happened the last two times) is encoding some Blu-rays I'm backing up to it. So the only thing I can think of is that the PSU is just getting old and unable to handle it all. I have now replaced the PSU with an 850W Cooler Master M2 I had lying around here (the old one was a 520W Cooler Master). I'm just hoping this is the problem. If it's not, I'm going to try replacing the motherboard and hope the not-officially-supported CPU is the problem. But that seems unlikely to me.

I have no signature


After putting up with random Samba disconnects for the last two years or so (supposedly a NIC driver issue with ASRock motherboards), I decided to ditch Ubuntu and give Windows a shot at handling my storage.

Did a clean install of Windows 10 and used a tool from DiskInternals called Linux Recovery to access the existing LVM volumes (read only). Wiped four of the drives, created a storage pool, then used the recovery tool to transfer everything over from the old volume to the new space. Only took two days to transfer. Once everything was moved I wiped out the remaining LVM and created another storage pool. I set things up with four drives in one pool ("Archive") and the other four drives in another pool ("Backup"). Now I'm just waiting for the first pool to sync over to the new pool.

Next project will be compressing some of the Blu-ray backups I don't think are worth keeping uncompressed. Did a test with Yellowbeard using HandBrake and got it down from 22GB to 10GB just using the AppleTV 3 preset. Only a little over a day before I can start on that; something to do for the weekend. ^_^
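If anyone wants to script that kind of batch compression instead of clicking through the GUI, the command-line equivalent is roughly this (file names are placeholders):

# Re-encode a rip with HandBrake's built-in "AppleTV 3" preset, the same
# one that took the test file from 22GB down to 10GB above.
HandBrakeCLI -i Yellowbeard.mkv -o Yellowbeard.m4v --preset="AppleTV 3"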

RSync

 

 


Had the two HGST Ultrastar 3TB drives that I put in next to each other in their rack die at the same-ish time.  I'm really at a loss now for what's going on since those drives are supposed to be reliable-as-the-sun bulletproof.

 

I'm taking several steps now to try to mitigate this. I'm preemptively replacing all but 2 of the remaining Seagates with WD Red Pros, which will leave me with 4 WD Red Pros, 12 HGST Deskstars, and 10 Seagates. Next, I'll be physically decoupling the racks with 1" thick neoprene foam. It could be my washing machines vibrating these things to death, but I can't find a correlation between me doing laundry and drives dying 5 minutes later. I'll probably also decouple the 4-bay enclosures from each other; right now 2 enclosures are stacked on each other and joined with a screw (8 drives each x 2 of those towers + 8 drives in the Node 804). Last, I'm going to change my RAID 6 span arrangement so that no two drives that are physically next to each other are in the same span. This should reduce the chance that one drive taking out the drive above or below it also takes down the entire RAID 60 array; I've seen exactly that happen a couple of times now.

 

I'm *this* close to switching to an 8x8x8 RAID 60 configuration instead of 12x12, so that I have 6 drives' worth of redundancy instead of 4.
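To make the span shuffle concrete, here's the kind of interleaving I mean (bay numbers and span letters are made up for illustration):

# Hypothetical 12-bay example: two 6-drive RAID 6 spans striped as RAID 60.
# With spans interleaved across bays, a drive taking out its physical
# neighbour costs each span at most one member (survivable) instead of
# one span losing two drives at once:
#   Bay:  1 2 3 4 5 6 7 8 9 10 11 12
#   Span: A B A B A B A B A B  A  B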

Workstation:  13700k @ 5.5Ghz || Gigabyte Z790 Ultra || MSI Gaming Trio 4090 Shunt || TeamGroup DDR5-7800 @ 7000 || Corsair AX1500i@240V || whole-house loop.

LANRig/GuestGamingBox: 9900nonK || Gigabyte Z390 Master || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3080Ti Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


Update: also I'm saying "fuck it" and pulling out all the stops.  4790K is coming out of the server, E3-1276v3 going in.  It'll be a little bit slower but I want ECC memory.

Workstation:  13700k @ 5.5Ghz || Gigabyte Z790 Ultra || MSI Gaming Trio 4090 Shunt || TeamGroup DDR5-7800 @ 7000 || Corsair AX1500i@240V || whole-house loop.

LANRig/GuestGamingBox: 9900nonK || Gigabyte Z390 Master || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3080Ti Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


Update: also I'm saying "fuck it" and pulling out all the stops.  4790K is coming out of the server, E3-1276v3 going in.  It'll be a little bit slower but I want ECC memory.

 

+1 for the ECC memory! A must-have, personally.


Found out that only the C-series chipsets meant for server boards actually support ECC. A Z87 (or any Z/H board) might allow you to boot and run with ECC memory, but only with the ECC functionality turned off.

 

So I ordered: http://www.newegg.com/Product/Product.aspx?Item=N82E16813121781

 

It conflicts with the ECC RAM I bought because the board wants 1.35V memory (DDR3L) and I bought 1.5V memory, but according to the Intel support forum, a customer service rep said other customers have reported no issues running 1.5V memory in a 1.35V slot. I'll just memtest it to make sure it's fine; I don't care about putting a bit more load on the VRMs or memory controller. I also like that PassMark's MemTest86 looks like it can inject ECC errors to verify ECC is working.
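Once it's running, one quick way to confirm ECC is actually active (from a Linux live USB, say; MemTest86's error injection is the more thorough check):

# "Single-bit ECC" (or "Multi-bit ECC") means it's enabled;
# "None" means the board is running the modules as plain non-ECC.
sudo dmidecode -t memory | grep -i 'error correction'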

Workstation:  13700k @ 5.5Ghz || Gigabyte Z790 Ultra || MSI Gaming Trio 4090 Shunt || TeamGroup DDR5-7800 @ 7000 || Corsair AX1500i@240V || whole-house loop.

LANRig/GuestGamingBox: 9900nonK || Gigabyte Z390 Master || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3080Ti Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


Actual OP-related stuff:

OS: FreeNAS

RAID: RAIDZ2

CPU: Intel Pentium G3220 @ 3.00GHz

RAM: Kingston 16GB ECC

Motherboard: ASRock server board

PSU: Enermax 430W 80+ Gold

HDD: 5x 3TB WD Reds = 15TB theoretical total as JBOD, but due to RAIDZ2 I only have a usable 8.5-9TB :( Just under the 10TB mark, but I'd rather have the redundancy, and I think redundancy still counts, so I can still join the 10TB club :)
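The back-of-envelope math on that, for anyone wondering where the 15TB goes (rough numbers, not exact ZFS accounting):

# RAIDZ2 keeps two drives' worth of parity, so for 5x 3TB:
#   usable ~ (5 - 2) x 3TB = 9TB decimal,
# which lands in the 8.5-9TB range quoted above once ZFS overhead
# and decimal-vs-binary reporting are factored in.
echo $(( (5 - 2) * 3 ))   # -> 9 (TB, before overhead)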

 

Initially, when I built this home server, I only had two drives and 8GB of RAM, but after it worked fine for a year I upgraded it to five drives and 16GB of RAM this summer. I'm only posting this now since I sort of forgot/didn't have time.

 

You have 35TB of storage?? I always wonder how people can fill up such a huge amount of space. I'm not even able to fill up a 1TB drive...

Once you have a tonne of storage, your perception of file sizes and usage changes. Over time you care less and less about how much of it you're using, and so you don't mind having the same file(s) in several different places. Hoarding data also becomes a problem, because you think that maybe one day a file will be useful again. Although in a way that's kind of true: it's really nice if you managed to download a mod before the site hosting it went down and you have it stored somewhere, in case you ever want to replay the game the mod is for.


Hardware

CASE: Fractal Define R5

PSU: 1x 660W Platinum Seasonic

MB: Supermicro X10SL7-F

CPU: Intel Xeon E3-1220

RAM: 16GB DDR3 1333 (ECC)

 

Drives

4x WD Green 2TB

2x WD Green 1TB

2x Crucial M550 256GB

1x Samsung 840 Pro 128GB

RAW CAPACITY: 10TB (not including SSDs)

 

Software and Configuration:

Operating System: FreeNAS 9.3

The four 2TB drives are configured as two RAID 1 mirrors, so I have two pools of 2TB each. The 256GB SSDs are also in RAID 1, and the two 1TB drives are in RAID 0.
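For reference, at the ZFS level that layout corresponds to something like the following (FreeNAS does all of this through the web UI; device names will differ):

# Two mirrored 2TB data pools, a mirrored SSD pair, and a striped 1TB pair:
zpool create pool1 mirror /dev/ada0 /dev/ada1
zpool create pool2 mirror /dev/ada2 /dev/ada3
zpool create ssd   mirror /dev/ada4 /dev/ada5
zpool create fast  /dev/ada6 /dev/ada7   # no redundancy: the RAID 0 equivalent
zpool list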

 

Usage:

This array was initially for storing all my movies and TV shows. However, I upgraded my crappy 3Mbps ADSL2+ line to a 100/40Mbps fibre line :D and I've noticed that I don't store media files like before; I just stream everything. So I'm going to wait till these Greens die, then get some WD Reds and configure a backup server first.

I still need to find other uses for this server, any suggestions? :P Since loading FreeNAS onto this machine in July, the CPU has been idle 98.23% of the time. I want to make use of the Xeon somehow.

 

 

Photos:

I'm sorry I don't have any good photos of the machine; next time I open it up I will definitely take some better ones. My server is underneath the network switch and behind my pfSense box and HP MicroServer. It's a side view of the case.

 

[photo attachments]

 


-snip-

 

 

Looks good! I have to admit those HP MicroServers are really quite capable; I've even deployed one as a network domain server.

Main Machine:  16 inch MacBook Pro (2021), Apple M1 Pro (10 CPU, 16 GPU Core), 512GB SDD, 16GB RAM

Gaming Machine:  Acer Nitro 5, Core i7 10750H, RTX 3060 (L) 6GB, 1TB SSD (Boot), 2TB SSD (Storage), 32GB DDR4 RAM

Other Tech: iPhone 15 Pro Max, Series 6 Apple Watch (LTE), AirPods Max, PS4, Nintendo Switch, PS3, Xbox 360

Network Gear:  TP Link Gigabit 24 Port Switch, TP-Link Deco M4 Mesh Wi-Fi, M1 MacMini File & Media Server with 8TB of RAID 1 Storage


Looks good! I have to admit those HP MicroServers are really quite capable; I've even deployed one as a network domain server.

Yeah, it served me well as my old storage server; now it just collects dust sadly :(


  • 2 weeks later...

http://imgur.com/a/itGaB

 

I have a large media collection ripped from my DVD and BluRay collections, plus lots of backups of machines I've owned through the years that I can't bring myself to delete (it's fun to look through files from machines from 20 years ago; it's like digital archaeology).

 

My storage solution has grown organically through a number of iterations over many years, including home built Linux servers and Infrant ReadyNAS appliances before NetGear bought them.

 

However it's now time to put something together that will last me a few more years as my current solution is creaking at the seams! 

 

This is very much a work in progress migration project. I will post an update once I have migrated all the disks and data and can take some proper photos when everything is racked properly. :-)

 

My current (soon to be deprecated) solution:

  • 1x QNAP TS-859+ Pro
    • 8x 2TB WDC WD2202FYPS-01U1B04.0 disks in Linux software RAID6
    • Dual gigabit network connections
  • 2x DataTale 4-bay DAS connected via eSATA <-> USB3
    • 4x 3TB disks in RAID5
    • 4x 4TB disks in RAID5
  • 1x Tranquil PC Abel H22 i5 Plex server on top of Debian Jessie
      • 1x 512GB M.2 SSD
    • 1x 2TB 2.5" HDD
nicolaw@castamere:~$ df -hTP /{u1,mnt/{esata?,multimedia}}
Filesystem         Type     Size  Used Avail Use% Mounted on
/dev/mapper/hdd-u1 ext4     1.8T  992G  749G  57% /u1
/dev/sde1          fuseblk   11T   11T  145G  99% /mnt/esata1
/dev/sdf1          ext4     8.2T  7.6T  148G  99% /mnt/esata2
nas:/Multimedia    nfs4      11T   11T   92G 100% /mnt/multimedia
nicolaw@castamere:~$

 

New (work-in-progress) solution:

  • 4U Supermicro SC847A server running FreeNAS 9.3
    • Gubbins:
      • 1x Supermicro X9DRH-IF motherboard
      • 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz
      • 4x Crucial 8GB DDR3-1600 ECC DIMMS
      • 5x LSI / Avago SAS 9211-8i Host Bus Adapter flashed with "IT" firmware
    • System disks:
      • 2x Samsung SSD 850 EVO 500GB - jails mirrored zpool (connected via SAS2)
      • 2x 120GB SandForce-driven Kingston SV300S37A120G SSDs - freenas-boot mirrored zpool (connected via SATA III)
    • Test disks:
      • 7x 80GB Seagate Barracuda 7200.10 ST380815AS (initially purchased for £4/ea on eBay to test the chassis)
      • 1x 4TB HGST Deskstar
    • Data pool disks:
      • 14x 3TB HGST Deskstar NAS HDN724030ALE640 (connected via SAS2) - pool1 zpool
        • 7x 3TB disks in raidz2 vdev
        • 7x 3TB disks in raidz2 vdev
        • Planning on adding 5 more 7x 3TB raidz2 vdevs to the same pool
brienne# zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot   111G   838M   110G         -      -     0%  1.00x  ONLINE  -
jails          460G  16.4G   444G         -     2%     3%  1.00x  ONLINE  /mnt
pool1           38T  12.4T  25.6T         -    16%    32%  1.00x  ONLINE  /mnt
testpool1     3.62T  2.67T   977G         -    21%    73%  1.00x  ONLINE  /mnt
brienne#
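For anyone curious, building that pool from the shell looks roughly like this (FreeNAS does it via the GUI; device names are illustrative):

# pool1: two 7-disk raidz2 vdevs striped together.
zpool create pool1 \
  raidz2 da0 da1 da2 da3 da4 da5 da6 \
  raidz2 da7 da8 da9 da10 da11 da12 da13
# Future 7-disk raidz2 vdevs get appended to the same pool:
zpool add pool1 raidz2 da14 da15 da16 da17 da18 da19 da20
zpool status pool1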

WWW: https://nicolaw.uk   CASE: Supermicro SuperChassis 847A-R1400LPB   MB: Supermicro X9DRH-IF   CPU: 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz   RAM: 80GB ECC DDR3 1333   NIC: Intel E10G42BFSR X520-SR2   HBA: 5x LSI 9211-8i 6GB/s SAS HBA 'IT' Firmware   HDD/SSD: 2x 120GB Kingston SV300S37A120G SSD, 2x 500GB Samsung 850 EVO SSD, 8x 8TB Seagate ST8000AS0002-1NA17Z, 21x 3TB Hitachi HGST HDN724030ALE640, 4x 4TB Hitachi HGST HDS724040ALE640, 3x 3TB Western Digital Red WDC WD30EFRX-68AX9N0


-snip-

 

Just wanted to ask: why do you need five LSI HBA cards for your server in progress? You know Supermicro chassis have an LSI expander backplane, right (all 24 drives are expanded from two SAS 6Gb/s connectors)? You only need to feed two SAS cables to it from a single 8i HBA card (unless you want two for redundancy).


Just wanted to ask: why do you need five LSI HBA cards for your server in progress? You know Supermicro chassis have an LSI expander backplane, right (all 24 drives are expanded from two SAS 6Gb/s connectors)? You only need to feed two SAS cables to it from a single 8i HBA card (unless you want two for redundancy).

 

Full bandwidth to each disk. (I got the HBAs for £60/ea rather than full price, so it wasn't heinously expensive to get the best bang for buck from the attached disks.)

WWW: https://nicolaw.uk   CASE: Supermicro SuperChassis 847A-R1400LPB   MBSupermicro X9DRH-IF   CPU2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz   RAM: 80GB ECC DDR3 1333   NICIntel E10G42BFSR X520-SR2   HBA: 5x LSI 9211-8i 6GB/s SAS HBA 'IT' Firmware   HDD/SSD2x 120GB Kingston SV300S37A120G SSD 2x 500GB Samsung 850 EVO SSD 8x 8TB Seagate ST8000AS0002-1NA17Z 21x 3TB Hitachi HGST HDN724030ALE640 4x 4TB Hitachi HGST HDS724040ALE640 3x 3TB Western Digital Red WDC WD30EFRX-68AX9N0


Full bandwidth to each disk.

 

How do you plan to connect the HBAs to the disks, though? Remove the LSI backplane?

 

I don't think getting full bandwidth to each disk is really worth it unless you're running SSDs, though. It would take quite a few hard drives to bottleneck just one of the HBA cards.
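Back-of-envelope on that, assuming the expander setup described above (all figures approximate):

# One SAS2 lane: 6Gb/s with 8b/10b encoding ~ 600MB/s of payload.
# An 8i HBA feeding an expander over both SFF-8087 ports = 8 lanes ~ 4800MB/s shared.
# 24 HDDs at ~180MB/s sequential ~ 4320MB/s -- close to the link budget when
# every drive is streaming flat out, but real mixed workloads rarely get near it.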


How do you plan to connect the HBAs to the disks, though? Remove the LSI backplane?

 

I don't think getting full bandwidth to each disk is really worth it unless you're running SSDs, though. It would take quite a few hard drives to bottleneck just one of the HBA cards.

 

It's a JBOD backplane.

WWW: https://nicolaw.uk   CASE: Supermicro SuperChassis 847A-R1400LPB   MB: Supermicro X9DRH-IF   CPU: 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz   RAM: 80GB ECC DDR3 1333   NIC: Intel E10G42BFSR X520-SR2   HBA: 5x LSI 9211-8i 6GB/s SAS HBA 'IT' Firmware   HDD/SSD: 2x 120GB Kingston SV300S37A120G SSD, 2x 500GB Samsung 850 EVO SSD, 8x 8TB Seagate ST8000AS0002-1NA17Z, 21x 3TB Hitachi HGST HDN724030ALE640, 4x 4TB Hitachi HGST HDS724040ALE640, 3x 3TB Western Digital Red WDC WD30EFRX-68AX9N0


Here's the home server I built about a year ago :)

 

Hardware

CASE: Fractal Define R5

PSU: Seasonic G-series 450W 80+ Gold

MB: Supermicro X10SAE

CPU: Intel Xeon E3-1226v3

HS: Stock copper-base Intel heatsink

RAM: Kingston 32 GB DDR3 1600 ECC

RAID Card: LSI 9271-8i

SSDs: 2x Intel DC S3500 120GB

HDDs: 8x 4TB Western Digital Red

 

Software and Configuration

My server runs Windows Server 2012 R2. I'm using two RAID configurations: 2x Intel DC S3500 SSDs in RAID 1 for the OS and apps, and 8x WD Red HDDs in RAID 50 for mass storage. Total raw capacity of the HDDs is 32TB, which comes to 21.8TB after the RAID 50 configuration.
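For anyone checking the numbers, the 21.8TB figure is just the parity loss plus Windows reporting in binary units:

# RAID 50 here = two 4-drive RAID 5 spans, each giving up one drive to parity:
#   usable = (4 - 1) x 2 spans x 4TB = 24TB decimal
# Windows reports binary units: 24e12 / 2^40 ~ 21.8 "TB" (really TiB).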

 

Usage

I use this server primarily to store all my media, software and backups. I run Plex Media Server to stream content to devices on the network, and I do some virtualization for web development and game servers. The Define R5 case has sound-deadening panels and 3x 140mm fans (2x front, 1x rear), so the server runs very quiet; I even sleep in the same room as it without issue!

 

Backup

I currently have no backup in place; however, I am planning to build a second box based on Seagate's 8TB Archive HDDs.

 

Photo

[photo attachment]


<snip>

8x 4TB Reds and 2 SSDs in an R5, with an LSI card ... could almost be my own NAS' older sister.

What are you using for intake fans? Only the one Fractal or did you add something in the front or side?

Are the temps okay? A few people here (myself included) attached a pair of 120mm fans to the rear of the HDD cages to push-pull air through them and aid cooling. I'm seeing nothing like that on yours.


8x 4TB Reds and 2 SSDs in an R5, with an LSI card ... could almost be my own NAS' older sister.

What are you using for intake fans? Only the one Fractal or did you add something in the front or side?

Are the temps okay? A few people here (myself included) attached a pair of 120mm fans to the rear of the HDD cages to push-pull air through them and aid cooling. I'm seeing nothing like that on yours.

 

I have 2x 140mm Fractal intake fans in the front and 1x 140mm Fractal in the back.

 

CPU temps are 25 to 50°C

RAM temps are 25 to 35°C

HDD temps are 25 to 35°C

 

I live in NZ so it rarely gets too hot here.

 



 

http://imgur.com/a/itGaB

 

New (work-in-progress) solution:

  • 4U Supermicro SC847A server running FreeNAS 9.3
    • Gubbins:
      • 1x Supermicro X9DRH-IF motherboard
      • 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz
      • 4x Crucial 8GB DDR3-1600 ECC DIMMS
      • 5x LSI / Avago SAS 9211-8i Host Bus Adapter flashed with "IT" firmware
brienne# zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot   111G   838M   110G         -      -     0%  1.00x  ONLINE  -
jails          460G  16.4G   444G         -     2%     3%  1.00x  ONLINE  /mnt
pool1           38T  12.4T  25.6T         -    16%    32%  1.00x  ONLINE  /mnt
testpool1     3.62T  2.67T   977G         -    21%    73%  1.00x  ONLINE  /mnt
brienne#

 

An update on the disks in this, along with a little Perl script to export my FreeNAS disk information to a CSV file for my own records. It exports things like device name/path, disk model, capacity, form factor, speed, firmware, serial, ZFS GPTID, etc.

https://nicolaw.uk/#freenas_disk_info.pl

An example of the output put into Google Docs (minus some hidden information like the serials and GPTIDs): http://i.imgur.com/Ob9P81V.png


 

Hope the script is useful to someone other than me.
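If you'd rather not grab the Perl, the core idea in plain shell is something like this (untested sketch; device globs and field labels vary by system):

# Dump device, model, serial and firmware for each disk as CSV,
# scraped from smartctl's identity block.
printf 'device,model,serial,firmware\n'
for d in /dev/ada?; do
  info=$(smartctl -i "$d")
  model=$(printf '%s\n' "$info"  | sed -n 's/^Device Model: *//p')
  serial=$(printf '%s\n' "$info" | sed -n 's/^Serial Number: *//p')
  fw=$(printf '%s\n' "$info"     | sed -n 's/^Firmware Version: *//p')
  printf '%s,%s,%s,%s\n' "$d" "$model" "$serial" "$fw"
done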

 

I'll be posting some half-decent pictures soon, as I'm racking it all up properly with PDUs etc. this weekend.

WWW: https://nicolaw.uk   CASE: Supermicro SuperChassis 847A-R1400LPB   MB: Supermicro X9DRH-IF   CPU: 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz   RAM: 80GB ECC DDR3 1333   NIC: Intel E10G42BFSR X520-SR2   HBA: 5x LSI 9211-8i 6GB/s SAS HBA 'IT' Firmware   HDD/SSD: 2x 120GB Kingston SV300S37A120G SSD, 2x 500GB Samsung 850 EVO SSD, 8x 8TB Seagate ST8000AS0002-1NA17Z, 21x 3TB Hitachi HGST HDN724030ALE640, 4x 4TB Hitachi HGST HDS724040ALE640, 3x 3TB Western Digital Red WDC WD30EFRX-68AX9N0


  • 2 weeks later...

Adding to my previous post

Well, that filled up quickly. Had to shove another 4TB drive in; probably the last space upgrade for a while, as I decided to get a new CPU, mobo and RAM instead.

 

20TB raw! Having to use the drive packaging as mounts until the rest of my SEDNA stands come :blush:

 

Have some LQ photos!

[photo attachments]

 

 

Web | Steam | Last.fm

 


Total Storage 

77.0TB

 

 

Server Name

Titan

 

Hardware

CASE: Lian Li 343B

PSU: Corsair HX1000i

MB: Asus TUF Sabertooth 990FX

CPU: AMD FX - 8120 @ Stock

HS: Noctua NH-D14

RAM: 32GB G.Skill DDR3 1600

RAID CARD 1: Adaptec 52445 28-Port RAID Card

SSD: Samsung 840 EVO 120GB

Enclosures: 8x StarTech 3x HDD in 2x 5.25" bay

HDDs:

16x 2TB Western Digital (Red) WD20EFRS

9x 2TB Western Digital Greens (Various Models) 

1x 3TB Western Digital WD30EZRS

4x 6TB Western Digital WD60EFRX

 

Update #1: Added 7TB to server

Update #2: Added 10TB more to server (7/12/14)

Update #3: Replaced a failed 2TB / added an additional 2TB / changed operating system / upgraded CPU and motherboard

Update #4: Changed RAID card / added an additional 20TB of storage / added 16GB more RAM (32GB total)

Update #5: Added an additional 2TB / reconfiguring fans for external fan controllers (pics to follow)

Update #6: Added an additional 6TB / replaced MB / scrapped outside fan controller / added 2U for future expansion

Update #7: Added an additional 2TB / changed to Platinum PSU HX1000i

 

 

Update #7 Added 2TB and changed PSU to HX1000i from TX850

