#6 reporting in with an update:

 

I've "downgraded" from 26 3TB drives to 5 10TB Seagate Helium Enterprise drives.  I have less capacity than before, but it's raid 6 now instead of raid 60 so I can expand it as I need more storage space (in total I can support up to 36 drives since I added a Intel RS240 expander to my 9280i card).

 

Ironically, one of the drives failed in the first 24 hours of restoring from backup.  Fuck you, Seagate.  This is the best you can do with your top-of-the-range drive?

 

Bonus: I think the reason I was killing drives so fast came down to voltage.  I didn't confirm it, but I was running 18 drives off a single plug going back to the power supply, so it was probably maxed out in terms of current capacity.  I had 8 Seagate 3TB drives in the case that were given dedicated PSU plugs and they never failed.  Could be coincidence (maybe those drives just didn't have any data on them, being at the end of the RAID array).

 

As a result, all my drives are now on a dedicated 1200W power supply (with a dongle to provide minimum load on the 5V and V_sb rails).

Workstation:  13700k @ 5.5Ghz || Gigabyte Z790 Ultra || MSI Gaming Trio 4090 Shunt || TeamGroup DDR5-7800 @ 7000 || Corsair AX1500i@240V || whole-house loop.

LANRig/GuestGamingBox: 9900nonK || Gigabyte Z390 Master || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3080Ti Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


9 minutes ago, AnonymousGuy said:

Ironically, one of the drives failed in the first 24 hours of restoring from backup.  Fuck you, Seagate.  This is the best you can do with your top-of-the-range drive?

Did you burn in the drives first? 

 

Also, it's day 1 or day 2000 that a drive is most likely to die.

PSU Tier List | CoC

Gaming Build | FreeNAS Server

Spoiler

i5-4690k || Seidon 240m || GTX780 ACX || MSI Z97s SLI Plus || 8GB 2400mhz || 250GB 840 Evo || 1TB WD Blue || H440 (Black/Blue) || Windows 10 Pro || Dell P2414H & BenQ XL2411Z || Ducky Shine Mini || Logitech G502 Proteus Core

Spoiler

FreeNAS 9.3 - Stable || Xeon E3 1230v2 || Supermicro X9SCM-F || 32GB Crucial ECC DDR3 || 3x4TB WD Red (JBOD) || SYBA SI-PEX40064 sata controller || Corsair CX500m || NZXT Source 210.


1 minute ago, djdwosk97 said:

Did you burn in the drives first? 

 

Also, it's day 1 or day 2000 that a drive is most likely to die.

I would expect that with an enterprise drive they would have done burn-in at the factory to weed out infant mortality.  It's what Intel does with Xeons: 100% of Xeons go through burn-in and get socketed in a motherboard for final validation (with consumer parts, only sampled units get socketed).


2 minutes ago, AnonymousGuy said:

I would expect that with an enterprise drive they would have done burn-in at the factory to weed out infant mortality.  It's what Intel does with Xeons: 100% of Xeons go through burn-in and get socketed in a motherboard for final validation (with consumer parts, only sampled units get socketed).

How'd that assumption work out for you.... 

 

You should always test your hardware even if you know for a fact it was tested at the factory. 


12 minutes ago, djdwosk97 said:

How'd that assumption work out for you.... 

 

You should always test your hardware even if you know for a fact it was tested at the factory. 

Does anyone make a Windows-based burn-in tool?  I could get Linux going on my workbench, but blargh, I hate working in Linux.


5 minutes ago, AnonymousGuy said:

Does anyone make a Windows-based burn-in tool?  I could get Linux going on my workbench, but blargh, I hate working in Linux.

No idea, I usually use Badblocks. 

@leadeater
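For anyone wanting to try that, a minimal badblocks run for a destructive burn-in of a blank drive looks roughly like this (a sketch only; /dev/sdX is a placeholder, and -w overwrites everything on the disk, so only point it at empty drives):

# Write-mode test: writes and read-verifies four patterns across the whole drive.
# WARNING: destroys all data on the target drive. /dev/sdX is a placeholder.
sudo badblocks -wsv -b 4096 /dev/sdX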


9 minutes ago, AnonymousGuy said:

Does anyone make a Windows-based burn-in tool?  I could get Linux going on my workbench, but blargh, I hate working in Linux.

I'm not aware of anything specifically meant for burn-ins, but I normally do a full-drive write benchmark using HDTune (go to settings > benchmark and choose Full Test) followed by an Error Check pass, also in HDTune. Compare SMART readings before and after. I've caught one drive with infant mortality that way.
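If you want to script the SMART before/after comparison instead of eyeballing it in HDTune, smartmontools can do the same thing; a rough sketch only, assuming smartctl is installed and /dev/sda is the drive under test:

# Snapshot SMART attributes, run the full write/read pass, then compare.
smartctl -A /dev/sda > smart_before.txt
# ... full-drive benchmark / error scan goes here ...
smartctl -A /dev/sda > smart_after.txt
diff smart_before.txt smart_after.txt   # watch for rising Reallocated_Sector_Ct / Current_Pending_Sector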

Looking to buy GTX690, other multi-GPU cards, or single-slot graphics cards: 

 


1 hour ago, AnonymousGuy said:

Does anyone make a Windows-based burn-in tool?  I could get Linux going on my workbench, but blargh, I hate working in Linux.

 

1 hour ago, djdwosk97 said:

No idea, I usually use Badblocks. 

@leadeater

If we're talking about doing a long-term disk stability and performance test (I haven't really read the history of this thread), IOmeter is the generally used tool for that.
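For something scriptable rather than point-and-click, fio can run a comparable long soak test; this is just a sketch, not what anyone here actually ran (/dev/sdX is a placeholder, and writing to the raw device destroys its contents):

# 24-hour mixed random read/write soak directly against the raw device.
sudo fio --name=soak --filename=/dev/sdX --direct=1 --ioengine=libaio \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 \
    --time_based --runtime=86400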


Quote

All told, for 3TB WD Green drives it will probably take about 72+ hours to run all the tests as prescribed in the link above. Testing nine drives in parallel the load on my E3-1230v3 CPU is just 0.11, so I'm guessing the number of drives isn't slowing down the testing much, if any. Thus far, no errors have been reported on either badblocks or the SMART self-tests that preceded it.

 

2 hours ago, djdwosk97 said:

No idea, I usually use Badblocks. 

@leadeater

Is that 72-hour timeframe accurate?  If I'm working with 10TB drives, spending a week stress-testing them is probably not worth it.
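Back-of-envelope, assuming badblocks -w's default behaviour (four patterns, each written and then read back, so eight full passes over the disk) and an average of roughly 200 MB/s on a 10TB drive:

# 10 TB / 200 MB/s ≈ 50,000 s ≈ 14 hours per full pass
# 8 passes (4 write + 4 read) ≈ 110 hours ≈ 4-5 days per drive
# so yes, roughly a week per 10TB drive is about right for the full badblocks procedure.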


Hardware

 

CASE: 45Drives 

MB: Supermicro X9SCM 

CPU: Intel i3-3240

HS: Standard

RAM: 32GB ECC Memory (4x 8GB PC3-12800)

HBA: HighPoint Rocket 750

SSD: Toshiba 120GB

HDD 1: 27x 8TB WD RED 5400 RPM

NIC: Onboard 2x 1Gigabit

 

Software and Configuration:

Freenas

 

Single zpool, 3 vdevs in raidz2 with 9 disks each.
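For reference, the command-line equivalent of that layout (FreeNAS normally builds this through the GUI; the pool name and da0..da26 device names here are only placeholders) would be roughly:

# One pool, three 9-disk RAIDZ2 vdevs (27 drives total).
zpool create tank \
    raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8 \
    raidz2 da9  da10 da11 da12 da13 da14 da15 da16 da17 \
    raidz2 da18 da19 da20 da21 da22 da23 da24 da25 da26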

 

Usage:

Storage.

 

Additional info:

This is my primary large NAS.  The next one is under construction. That should contain roughly 100TB.

 

Images:

I have removed serial numbers from the images.

list.jpg

Front.jpg

rear.jpg


40 minutes ago, GreasyGaming said:

Hardware

 

CASE: 45Drives 

MB: Supermicro X9SCM 

CPU: Intel i3-3240

HS: Standard

RAM: 32GB ECC Memory (4x 8GB PC3-12800)

HBA: HighPoint Rocket 750

SSD: Toshiba 120GB

HDD 1: 27x 8TB WD RED 5400 RPM

NIC: Onboard 2x 1Gigabit

 

Software and Configuration:

Freenas

 

Single zpool 3 vdevs in raidz2 with 9 disks each.

 

Usage:

Storage.

 

Additional info:

This is my primary large NAS.  The next one is under construction. That should contain roughly 100TB.

 

Images:

I have removed serial numbers from the images.

list.jpg

Please change your text color (or you can select all the text and click the "Tx" button, which removes formatting). Your text is for the most part invisible in Night Theme mode, because it is the exact same shade of grey as the background.


This is an updated version of my NASter machine. I have moved my machines into a rack, and made some hardware updates.

 

Hardware

CASE: Macase R255-8 2U rack mount

PSU: HuntKey HK600-12UEP (550W nom.)

MB: Asus P5BV-C

CPU: Intel Core 2 Quad Q9300

HS: 2U side-blowing heatsink with PWM fan upgrade

RAM: 4x Micron 2GB DDR2-800 Unregistered ECC = 8GB

RAID CARD: Broadcom MegaRAID 9271-8iCC + iBBU09

HDD 1: 4x 2TB WD Green WD20EZRX

HDD 2: 4x 2TB WD Red WD20EFRX

HDD 3: 80GB Toshiba MK8052GSX

Ethernet: HP NC360T dual-port 1000BASE-T NIC

Software and Configuration:

My server is running Ubuntu 17.04 Server on the lone 80GB boot drive.

My main array consists of 8x 2TB in RAID 6, so I end up with 12TB actual storage space on that array.
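That figure is just the usual RAID 6 arithmetic, two drives' worth of parity regardless of array size:

# usable = (number of drives - 2) x drive size
# (8 - 2) x 2 TB = 12 TB, i.e. the 10.913 "TB" (really TiB) the controller reports below.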

Usage:

This machine is a router and NAS combo. It provides a 4Gbps network backplane, acts as a PPPoEoV and IPv6 tunnel router, and serves as an SMB/AFP/NFS triple-access NAS. It holds my downloaded files (including my entire movie and TV show library) and backups of my computers.

Backup:

This array does not have a backup yet. I plan to buy a few 10TB hard disk drives in the future to perform monthly offline backups.

Photos & command-line snips:

Storage capacity report:

technix@naster:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           799M   82M  717M  11% /run
/dev/sdb2        64G  6.3G   55G  11% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sdb1       1.4G  171M  1.1G  14% /boot
/dev/sda1        11T  4.6T  5.8T  44% /var/export
tmpfs           799M     0  799M   0% /run/user/1000

RAID card report:

technix@naster:~$ sudo storcli64 /c0 show
Generating detailed summary of the adapter, it may take a while to complete.

Controller = 0
Status = Success
Description = None

Product Name = LSI MegaRAID SAS 9271-8i
Serial Number = SV51210958
SAS Address =  500605b******0b0
PCI Address = 00:05:00:00
System Time = 05/23/2017 22:30:27
Mfg. Date = 03/17/15
Controller Time = 05/23/2017 22:30:26
FW Package Build = 23.33.0-0023
BIOS Version = 5.49.03.1_4.16.08.00_0x060B0201
FW Version = 3.450.75-4319
Driver Name = megaraid_sas
Driver Version = 06.812.07.00-rc1
Vendor Id = 0x1000
Device Id = 0x5B
SubVendor Id = 0x1000
SubDevice Id = 0x9271
Host Interface = PCI-E
Device Interface = SAS-6G
Bus Number = 5
Device Number = 0
Function Number = 0
Drive Groups = 1

TOPOLOGY :
========

----------------------------------------------------------------------------
DG Arr Row EID:Slot DID Type  State BT      Size PDC  PI SED DS3  FSpace TR 
----------------------------------------------------------------------------
 0 -   -   -        -   RAID6 Dgrd  N  10.913 TB dflt N  N   none N      N  
 0 0   -   -        -   RAID6 Dgrd  N  10.913 TB dflt N  N   none N      N  
 0 0   0   252:4    13  DRIVE Onln  N   1.818 TB dflt N  N   none -      N  
 0 0   1   252:5    9   DRIVE Rbld  Y   1.818 TB dflt N  N   none -      N  
 0 0   2   252:6    11  DRIVE Rbld  Y   1.818 TB dflt N  N   none -      N  
 0 0   3   252:7    10  DRIVE Onln  N   1.818 TB dflt N  N   none -      N  
 0 0   4   252:0    12  DRIVE Onln  N   1.818 TB dflt N  N   none -      N  
 0 0   5   252:1    14  DRIVE Onln  N   1.818 TB dflt N  N   none -      N  
 0 0   6   252:2    15  DRIVE Onln  N   1.818 TB dflt N  N   none -      N  
 0 0   7   252:3    8   DRIVE Onln  N   1.818 TB dflt N  N   none -      N  
----------------------------------------------------------------------------

DG=Disk Group Index|Arr=Array Index|Row=Row Index|EID=Enclosure Device ID
DID=Device ID|Type=Drive Type|Onln=Online|Rbld=Rebuild|Dgrd=Degraded
Pdgd=Partially degraded|Offln=Offline|BT=Background Task Active
PDC=PD Cache|PI=Protection Info|SED=Self Encrypting Drive|Frgn=Foreign
DS3=Dimmer Switch 3|dflt=Default|Msng=Missing|FSpace=Free Space Present
TR=Transport Ready

Virtual Drives = 1

VD LIST :
=======

--------------------------------------------------------------
DG/VD TYPE  State Access Consist Cache Cac sCC      Size Name 
--------------------------------------------------------------
0/0   RAID6 Dgrd  RW     Yes     RAWBD -   ON  10.913 TB      
--------------------------------------------------------------

Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|Dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|
Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency

Physical Drives = 8

PD LIST :
=======

--------------------------------------------------------------------------------
EID:Slt DID State DG     Size Intf Med SED PI SeSz Model                Sp Type 
--------------------------------------------------------------------------------
252:0    12 Onln   0 1.818 TB SATA HDD N   N  512B WDC WD20EZRX-00D8PB0 U  -    
252:1    14 Onln   0 1.818 TB SATA HDD N   N  512B WDC WD20EFRX-68EUZN0 U  -    
252:2    15 Onln   0 1.818 TB SATA HDD N   N  512B WDC WD20EZRX-00D8PB0 U  -    
252:3     8 Onln   0 1.818 TB SATA HDD N   N  512B WDC WD20EZRX-00D8PB0 U  -    
252:4    13 Onln   0 1.818 TB SATA HDD N   N  512B WDC WD20EFRX-68EUZN0 U  -    
252:5     9 Rbld   0 1.818 TB SATA HDD N   N  512B WDC WD20EFRX-68EUZN0 U  -    
252:6    11 Rbld   0 1.818 TB SATA HDD N   N  512B WDC WD20EFRX-68EUZN0 U  -    
252:7    10 Onln   0 1.818 TB SATA HDD N   N  512B WDC WD20EZRX-00D8PB0 U  -    
--------------------------------------------------------------------------------

EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info
SeSz-Sector Size|Sp-Spun|U-Up|D-Down|T-Transition|F-Foreign
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded
CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded


BBU_Info :
========

------------------------------------------------------------------------
Model   State   RetentionTime Temp Mode MfgDate    Next Learn           
------------------------------------------------------------------------
iBBU-09 Optimal 48 hours +    64C  5    2012/05/29 2017/05/24  14:56:59 
------------------------------------------------------------------------

Mode 5: 48+ Hrs retention with a non-transparent learn cycle 
           and moderate service life.

Note: I am posting this in the middle of an online drive rebuild, as I just replaced a dead and an almost-dead WD Green with two new WD Reds.

The Fruit Pie: Core i7-9700K ~ 2x Team Force Vulkan 16GB DDR4-3200 ~ Gigabyte Z390 UD ~ XFX RX 480 Reference 8GB ~ WD Black NVMe 1TB ~ WD Black 2TB ~ macOS Monterey amd64

The Warship: Core i7-10700K ~ 2x G.Skill 16GB DDR4-3200 ~ Asus ROG Strix Z490-G Gaming Wi-Fi ~ PNY RTX 3060 12GB LHR ~ Samsung PM981 1.92TB ~ Windows 11 Education amd64
The ThreadStripper: 2x Xeon E5-2696v2 ~ 8x Kingston KVR 16GB DDR3-1600 Registered ECC ~ Asus Z9PE-D16 ~ Sapphire RX 480 Reference 8GB ~ WD Black NVMe 1TB ~ Ubuntu Linux 20.04 amd64

The Question Mark? Core i9-11900K ~ 2x Corsair Vengence 16GB DDR4-3000 @ DDR4-2933 ~ MSI Z590-A Pro ~ Sapphire Nitro RX 580 8GB ~ Samsung PM981A 960GB ~ Windows 11 Education amd64
Home server: Xeon E3-1231v3 ~ 2x Samsung 8GB DDR3-1600 Unbuffered ECC ~ Asus P9D-M ~ nVidia Tesla K20X 6GB ~ Broadcom MegaRAID 9271-8iCC ~ Gigabyte 480GB SATA SSD ~ 8x Mixed HDD 2TB ~ 16x Mixed HDD 3TB ~ Proxmox VE amd64

Laptop 1: Dell Latitude 3500 ~ Core i7-8565U ~ NVS 130 ~ 2x Samsung 16GB DDR4-2400 SO-DIMM ~ Samsung 960 Pro 512GB ~ Samsung 850 Evo 1TB ~ Windows 11 Education amd64
Laptop 2: Apple MacBookPro9.2 ~ Core i5-3210M ~ 2x Samsung 8GB DDR3L-1600 SO-DIMM ~ Intel SSD 520 Series 480GB ~ macOS Catalina amd64


Just thought I'd give a quick status update:

 

I have been terribly busy with offline life; working on my thesis and all that good stuff, so I basically haven't been around at all. Lectures will be finished by mid-June though, so I will have some breathing room again at that point and will trawl through the thread and update it at the latest in a few weeks.

 

Thanks for your patience, and sorry for the delays. :)

 

 

Cheers, 

aw

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


On 5/24/2017 at 3:18 PM, alpenwasser said:

Just thought I'd give a quick status update:

 

I have been terribly busy with offline life; working on my thesis and all that good stuff, so I basically haven't been around at all. Lectures will be finished by mid-June though, so I will have some breathing room again at that point and will trawl through the thread and update it at the latest in a few weeks.

 

Thanks for your patience, and sorry for the delays. :)

 

 

Cheers, 

aw

pssssh.... these damn inactive mods :dry:


small teaser for my update:

 

IMG_20170518_203412.jpg

Respect the Code of Conduct!

>> Feel free to join the unofficial LTT teamspeak 3 server TS3.schnitzel.team <<

>>LTT 10TB+ Topic<< | >>FlexRAID Tutorial<< | >>LTT Speed wave<< | >>LTT Communities and Servers<<


4 hours ago, looney said:

small teaser for my update:

I'm thinking of getting one of those and connecting it up to my Storinator. How are you finding the fan noise and drive temps on the 60-drive pod?


40 minutes ago, GreasyGaming said:

I'm thinking of getting one of those and connecting it up to my Storinator. How are you finding the fan noise and drive temps on the 60-drive pod?

I couldn't say. It isn't quiet, but the fan speed is controlled by temperature, so I personally don't find it very loud.

As far as temps go, I haven't populated mine yet.


OK, someone else is running enterprise high-speed spinners and the drive temps were astronomical!

 

Would be interested to know how you get on.


I added four 256GB 850 PROs to my NAS (link to earlier in this thread).  Now at 33.5TB of raw space.

 

8x 4TB WD Red in RAIDZ2

2x 250GB Crucial MX200 mirrored

4x 256GB Samsung 850PRO in RAIDZ1

FreeNAS is running from a 16GB PNY stick, so all the SSDs are storage. 

 

Still need to configure the Samsungs, but I'm completely wiping the NAS and doing a fresh install of FreeNAS 9.10 first (I'm still on 9.3 currently). 

I know that FreeNAS 11 will be released real soon, but I'd rather not update every 2 days because of bug fixes.  So I'll wait a little while before switching to that.

 

I had to get pretty creative though.  Because my Define R5 already had all 8 drive bays and the two 2.5" trays filled, I ended up mounting the 850s in the top grille. 

 

5933391de9427_850mounting.jpg.3226c08e02c93e2482989a7fa6c28540.jpg

(After taking the photo I put the modu-vents back on, of course)

 

At this point I'm out of SATA ports, so any more upgrades will involve a new HBA and a larger case or a rack. 

 

Methinks I'll stick with the Tentagon name, even though it should really be Fourteentagon now. 

 

59333c777c068_NASdiskoverview.jpg.c4450ac2a7773daf786a0f6661bad4ca.jpg

 


13 minutes ago, Captain Chaos said:

At this point I'm out of SATA ports, so any more upgrades will involve a new HBA and a larger case or a rack

Your card is a SAS card, which means it can support up to 255 devices connected to it.
You can use an expander such as this if you need more ports: http://www.ebay.com/itm/New-HP-24-Bay-PCI-e-SAS-Expander-Card-468405-001-487738-001-/262633260835

That of course wouldn't resolve your case size issue if you do need to expand further :P

Spoiler

Desktop: Ryzen9 5950X | ASUS ROG Crosshair VIII Hero (Wifi) | EVGA RTX 3080Ti FTW3 | 32GB (2x16GB) Corsair Dominator Platinum RGB Pro 3600Mhz | EKWB EK-AIO 360D-RGB | EKWB EK-Vardar RGB Fans | 1TB Samsung 980 Pro, 4TB Samsung 980 Pro | Corsair 5000D Airflow | Corsair HX850 Platinum PSU | Asus ROG 42" OLED PG42UQ + LG 32" 32GK850G Monitor | Roccat Vulcan TKL Pro Keyboard | Logitech G Pro X Superlight  | MicroLab Solo 7C Speakers | Audio-Technica ATH-M50xBT2 LE Headphones | TC-Helicon GoXLR | Audio-Technica AT2035 | LTT Desk Mat | XBOX-X Controller | Windows 11 Pro

 

Spoiler

Server: Fractal Design Define R6 | Ryzen 3950x | ASRock X570 Taichi | EVGA GTX1070 FTW | 64GB (4x16GB) Corsair Vengeance LPX 3000Mhz | Corsair RM850v2 PSU | Fractal S36 Triple AIO | 12 x 8TB HGST Ultrastar He10 (WD Whitelabel) | 500GB Aorus Gen4 NVMe | 2 x 2TB Samsung 970 Evo Plus NVMe | LSI 9211-8i HBA

 


Just now, Jarsky said:

Your card is a SAS card, which means it can support up to 255 devices connected to it

True, with an expander I could indeed get more drives without needing to put an extra HBA in.  I should have thought of that, but I completely forgot about expanders.  

As you said, I would still have the case problem to sort out. 

 

Right now I have around 13TB of data and 21TB of usable space (should be 25TB, but I seem to have a lot of ZFS overhead).  I don't expect to run out of disk space in the next 5 years. 

The main reason for adding the Samsungs is that my music collection has gotten out of hand ever since I switched it over to FLAC.  It just doesn't fit on the Crucials anymore and I don't want to store it on the HDDs. 


2 minutes ago, Captain Chaos said:

Right now I have around 13TB of data and 21TB of usable space (should be 25TB, but I seem to have a lot of ZFS overhead).  I don't expect to run out of disk space in the next 5 years. 

 

If you did multiple vdevs then you would have more ZFS overhead, as you essentially have multiple RAID5s.

I would assume the way you've done it is 2x 4x4TB RAIDZ1 vdevs together in a zpool; with the parity overhead of having multiple vdevs, 21TB would sound about right. That would typically be the recommended way of doing it with that number of disks, for future expansion as well, compared to a RAIDZ2 across all of them. Rebuild times would be significantly lessened as well.


2 minutes ago, Jarsky said:

I would assume the way you've done it is 2x 4x4TB RAIDZ1 vdevs together in a zpool

Whoops, forgot to write that the HDDs are in RAIDZ2.  I'm getting 19.9TB, whereas perfect 2-drive parity would give me 24TB. 
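Part of that gap is likely just units rather than real overhead; a quick way to see where the space goes (pool name 'tank' is a placeholder):

# Raw vs. usable view of the pool.
zpool list -v tank   # raw capacity per vdev, before parity
zfs list tank        # usable space after RAIDZ2 parity, padding and reservations
# 6 data drives x 4 TB = 24 TB ≈ 21.8 TiB, and FreeNAS reports binary units,
# so ~19.9 reported vs 24 "ideal" is mostly units plus RAIDZ padding/metadata.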


3 hours ago, Captain Chaos said:

 

I had to get pretty creative though.  Because my Define R5 already had all 8 drive bays and the two 2.5" trays filled, I ended up mounting the 850s in the top grille.

I like your mounting method. My solution to mounting SSDs without proper bays usually involves adhesive-backed velcro.
