
We survived!  I now have 3 HGST drives, 21 Seagates, with 1 HGST spare on my desk.  4 more Seagates are still working through RMA.   I'm eyeballing adding all 4 of them as hot spares via the SFF-8088 connector, but need to consider if it's really worth the effort to do that.

Personally, I wouldn't add them at all.

In fact I'd buy a stack of HGST or WD drives and start swapping out the Seagates before they kick the bucket, just like Backblaze did. You know they're going to die pretty soon anyway.

Do a full wipe on the removed drives and sell them on to partially fund the next few new drives.
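If it helps, a single zero-fill pass from a Linux live USB is generally enough for resale. A rough sketch, where /dev/sdX is a placeholder for the actual drive (triple-check the device first, this destroys everything on it):

    lsblk                                              # identify the right drive first
    dd if=/dev/zero of=/dev/sdX bs=1M status=progress  # overwrite the whole disk with zeros

    # or let the drive erase itself via ATA Secure Erase:
    hdparm --user-master u --security-set-pass p /dev/sdX
    hdparm --user-master u --security-erase p /dev/sdX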

---

I'm still contemplating moving from FreeNAS to Linux, and I really should shut down the NAS so I can flash that <censored> RAID card again.

The NAS is running just fine, but the risk of corrupting everything due to a firmware/driver mismatch does bother me.

It honestly blows my mind that they just change the driver, forcing everyone who owns that card (LSI 9211-8i) to reflash the firmware to a matching version.
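For anyone else stuck doing this, the usual routine with LSI's sas2flash utility (run from a DOS/UEFI boot stick or the OS build of the tool) looks roughly like the sketch below. The file names are the standard ones for the 9211-8i IT image, but match the P-release to whatever driver your OS ships:

    sas2flash -listall                         # note the controller number and current firmware version
    sas2flash -o -f 2118it.bin -b mptsas2.rom  # flash IT-mode firmware plus the boot ROM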

If I make the move to Linux, I might as well host my own domain, mail server, and cloud on the NAS so I don't need a web host etc. anymore.

My main concern is the whole UI change though. I'm really not a fan of the CLI, and that's putting it mildly. I can stand it just fine for 5 minutes when using commands that I know, but when I need to start looking things up online and typing them in, the CLI quickly makes me want to throw my keyboard at the screen.

---

-snip-

One of the great things about Linux is that you can run a program specifically as a web UI for most management needs. Check out Webmin.
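For reference, getting Webmin onto a Debian/Ubuntu box is roughly the sketch below. The repository line follows Webmin's own published instructions, but check webmin.com in case it has changed:

    # add the Webmin apt repository and signing key, then install
    echo "deb https://download.webmin.com/download/repository sarge contrib" | sudo tee /etc/apt/sources.list.d/webmin.list
    wget -qO- https://download.webmin.com/jcameron-key.asc | sudo apt-key add -
    sudo apt update && sudo apt install webmin
    # the web UI then listens on https://<server-ip>:10000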

Comb it with a brick

---

Hey, I updated some things on my server.

 

 

 

 

---

Hardware
CASE: SuperMicro 846E1-R900B (24 3.5" Bays)
PSU: SuperMicro PWS-920P-SQ 920W (Only one for now, will purchase the other one for redundancy later...I do have the two original 900W PSUs that came with the chassis, but they're too loud)
MB: SuperMicro MBD-X10SRA-O
CPU: Intel Xeon E5-2695V3 (14 core, 28 thread, 2.2GHz, 2.5GHz on Turbo)
GPU1: ZOTAC GeForce GT 1030 2GB GDDR5 (ESXi console output)

Cooler: Cooler Master Hyper D92
RAM: 64GB (16GB x 4) Crucial DDR4 ECC 2133MHz Server RAM
HBA CARD 1: LSI 9211-8i
HBA CARD 2: LSI 9311-8i
SSD 1: 2 TB Crucial MX500 (VM Storage)

SSD 2: 1 TB Crucial MX500 (Steam Game Storage)
SSD 3: 480 GB Sandisk Extreme Pro (VM Storage)
Other Storage: 64 GB Samsung Bar 3.0 Drive (ESXi boot drive)
HDD 1: 6 x 4TB Western Digital Red and 1 x 4TB Western Digital Red Pro (Hot Spare)
HDD 2: 6 x 4TB Western Digital RE SAS and 5 x 4TB HGST SAS
HDD 3: 6 x 8TB Western Digital shucked drives out of WD Elements (5 x WD80EDAZ, 1 x WD80EZAZ)
Total Drive Capacity: 120TB
NIC: 10Gb x 2 Solarflare SFN5122F

OS: VMware ESXi 6.7.0 Update 3 (Want to upgrade to 7, but need to replace the SAS2 HBA with a SAS3 one per compatibility matrix for 7.0)

UPS: Eaton 9130 2000VA with 1 external battery module (Complete overkill is even more overkill)

 

Software and Configuration:
I run VMware ESXi after getting tired of stability issues with Windows Server 2012 R2. All hard drives run off the SuperMicro BPN-SAS2-846EL1 backplane (I replaced the original SAS1 backplane). I plan to buy a 45-bay SuperMicro disk shelf next.

Two SAS2 cables from the backplane feed into the LSI HBA 9211-8i.

Previous configuration:

Six 4TB WD Reds were in RAID6. The ten 4TB WD RE SAS drives were in a second RAID6. Both arrays ran off an LSI MegaRAID 9260-8i CV.
However, due to my great luck, I lost the 10-drive RAID6 array, so I have made the decision to migrate to TrueNAS Core (formerly FreeNAS).

New configuration:
TrueNAS Core has the two HBA cards passed through to it.
Six 4TB WD Reds are in RAIDZ2 with the WD Red Pro as a hot spare. The six WD RE SAS drives and five HGST SAS drives are in RAIDZ2. The six 8TB WD Elements drives are in RAIDZ2 as one vdev in a pool. My goal is to keep expanding this pool by adding more six-drive RAIDZ2 vdevs.
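For reference, growing a pool that way is a single command per vdev. A minimal sketch, assuming the pool is named tank and the six new disks show up as da6 through da11 (both names are made up):

    zpool add tank raidz2 da6 da7 da8 da9 da10 da11  # append one more six-drive RAIDZ2 vdev
    zpool status tank                                # confirm the new vdev sits alongside the old ones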
 

Usage:
I use the server for my primary storage and for running home labs.

 

Backup:
Backups are now a priority; I regret not having one when my main array failed. I'm planning to build a perfect mirror of the main RAIDZ2 pool.
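Once a second box exists, ZFS replication makes keeping that mirror current fairly painless. A sketch with made-up pool, host, and snapshot names:

    # full send to seed the backup pool the first time
    zfs snapshot -r tank@backup1
    zfs send -R tank@backup1 | ssh backupnas zfs recv -Fdu backuptank

    # afterwards, send only the changes between snapshots
    zfs snapshot -r tank@backup2
    zfs send -R -i tank@backup1 tank@backup2 | ssh backupnas zfs recv -Fdu backuptank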

Additional info:

The ten 4TB WD Re drives in RAID6 are my new primary storage location

The six 4TB WD Red drives are also a temp array as I save up to buy the other 4TB WD Re SAS drives.

I do have the original 5 hot swap 80mm fans, but they were too loud and the SuperMicro motherboard only has options for 50%, 75%, and 100% fan speed.

I was too broke to buy a fan controller and couldn't deal with the sound, so I dropped in some be quiet! Pure Wings fans (some 120mm, some 140mm).

 

The final drive count will be 24 4TB WD Re SAS drives, but I'm saving up funds for those (might take a year or two). First priority is another SuperMicro PSU. The six WD Reds will become the backup and be moved out of the server once the above 24 drives are bought.
 

Photos:

Inside of the chassis:
z57kjMU.jpg

A fan cooling the RAID card:
jif1b9j.jpg

Flat view:
L0HnZFD.jpg

Three 120mm fans in the center behind the backplane:
05CorGl.jpg

Front view:
PlQ7bI1.jpg

The server lives on top of my art drawing supply drawer:
hw6LHIX.jpg

Configuration in LSI MegaRAID Storage Manager:
Capture2.PNG
Capture3.PNG

I must say I did not expect the RAID6 array to crack the 1GB/s barrier. This is the newly created RAID6 array of the eight WD Re SAS drives:
Capture.PNG

 

 

---

 

CPU Bench in CPU-Z:

7KkRXTA.png

 

 

 

It's interesting seeing the performance difference between two older physical CPUs and a single newer one, i.e. my dual Xeon X5650s (12 cores, 24 threads) vs. your Xeon E5-2695 V3 (14 cores, 28 threads).

 

Amazing how far CPUs have come, all that power in a single CPU!

 

post-217690-0-40002100-1442251042.png

---

-snip-

 

Yep, that's very true. I think what I still can't get over is how low the vCore is on the Xeon. At idle it uses 0.764v, under Intel Burn Test, it uses 0.934v, and under normal full load (typical render), it uses 0.890v.

 

On Cinebench R15, single thread performance is 104cb, and multicore is 1608cb.

 

By comparison, the i5-2500K in my main PC (OC'd to 4.6GHz) uses 1.114v at idle and 1.398v under load. The Xeon absolutely crushes the 2500K though.

 

However, the 2500K wins in single thread performance due to the higher clock.

 

And yes, this CPU is 100% overkill for a NAS. I have yet to see it go above 5% usage in normal use, even with my content creation programs. I still remember when my original plan was to use a Haswell i3 with a server board...

---

I'm contemplating going all YOLO-fukit and adding full array redundancy in the form of 8 x 8 TB Seagate archive drives in JBOD on a "cheap" Adaptec 2xSAS raid card adapted down to PCI-e x1 since that's all I have left in the server (GPU + 10Gbit card + 9280i only leaves the x1 slot open).  Someone talk me out of spending another $2k on my server.

Workstation:  13700k @ 5.5Ghz || Gigabyte Z790 Ultra || MSI Gaming Trio 4090 Shunt || TeamGroup DDR5-7800 @ 7000 || Corsair AX1500i@240V || whole-house loop.

LANRig/GuestGamingBox: 9900nonK || Gigabyte Z390 Master || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3080Ti Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)

---

No. You need it. Must. Have.

I agree, he shouldn't spend that money, he should spend 8k on servers instead :P

Respect the Code of Conduct!

>> Feel free to join the unofficial LTT teamspeak 3 server TS3.schnitzel.team <<

>>LTT 10TB+ Topic<< | >>FlexRAID Tutorial<< | >>LTT Speed wave<< | >>LTT Communities and Servers<<

---

-snip-

I'm all about advocating for buying more storage, redundancy, and data hoarding, but I'd have concerns about the performance of those 8TB archive drives due to their SMR nature. I get that you're doing this for redundancy, but you may have issues with the amount of time it'll take to sync/backup/restore. Have you looked into the performance characteristics of those drives?
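If you want hard numbers before committing, something like fio will expose the SMR write cliff: these drives tend to look fine on a sequential fill and collapse under sustained random writes once the on-disk CMR cache fills. A destructive sketch, with /dev/sdX as a placeholder and the block sizes/runtimes just examples:

    fio --name=seqfill --filename=/dev/sdX --rw=write --bs=1M --direct=1 --runtime=600 --time_based
    fio --name=randwr --filename=/dev/sdX --rw=randwrite --bs=64k --direct=1 --runtime=600 --time_based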

Workstation 1: Intel i7 4790K | Thermalright MUX-120 | Asus Maximus VII Hero | 32GB RAM Crucial Ballistix Elite 1866 9-9-9-27 ( 4 x 8GB) | 2 x EVGA GTX 980 SC | Samsung 850 Pro 512GB | Samsung 840 EVO 500GB | HGST 4TB NAS 7.2KRPM | 2 x HGST 6TB NAS 7.2KRPM | 1 x Samsung 1TB 7.2KRPM | Seasonic 1050W 80+ Gold | Fractal Design Define R4 | Win 8.1 64-bit
NAS 1: Intel Intel Xeon E3-1270V3 | SUPERMICRO MBD-X10SL7-F-O | 32GB RAM DDR3L ECC (8GBx4) | 12 x HGST 4TB Deskstar NAS | SAMSUNG 850 Pro 256GB (boot/OS) | SAMSUNG 850 Pro 128GB (ZIL + L2ARC) | Seasonic 650W 80+ Gold | Rosewill RSV-L4411 | Xubuntu 14.10

Notebook: Lenovo T500 | Intel T9600 | 8GB RAM | Crucial M4 256GB

---

-snip-

I over-specced the CPU on my NAS also, and I'm glad I did. I now run a virtual machine on my NAS along with a Plex server. On top of that, it uses spare CPU cycles to run HandBrake to convert movies for me.


---

Here I am again. Quick question which I don't really think needs its own topic... let me know if it's better if I make a new topic for it.

 

My 40TB array degraded 2 days ago. I've tested the drive that failed and it seems fine. The log showed it gave a timeout error. Is this drive actually dying? I've never had much of an idea of what the SMART stuff means, but it seems fine.

 

post-16166-0-87905000-1442397230.png

 

I've got everything backed up, so I'm in no way concerned; it's also RAID-6, so another drive can fail and all the data would still be fine. Should I prepare for this drive to fail more often, or is something else wrong? Was it just some glitch that made the drive fail? When I looked before, all the SMART stuff was just "-" and no numbers. I don't want to send in a perfectly fine drive for RMA ;)

 

Edit: I've got the drive back in and the array is rebuilding...
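For reading the SMART data yourself, smartctl is friendlier than most controller UIs. A quick sketch, with /dev/sda as a placeholder (behind a RAID controller like the Areca you may need its -d option to reach the disk):

    smartctl -a /dev/sda           # dump all attributes; watch 5, 187, 197 and 198 in particular
    smartctl -t long /dev/sda      # start an extended self-test (takes hours on a big drive)
    smartctl -l selftest /dev/sda  # check the self-test log once it finishes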

I have no signature

---

-snip-

What are the specs of the server and models of the drives? You might be using consumer drives that aren't made for RAID.

---

I'm sorry, but even a tiny bit of searching would have told you they're WD Reds connected to an Areca 1880i... They are made for RAID...


---

-snip-

 

Sorry, I had missed your name when looking through the server lists on page 1. Your drive there appears to be in perfect condition, and I honestly don't see why it would be dropping from the array unless it's a glitch of some kind.
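One other thing that can cause exactly this kind of timeout drop-out is the drive's error-recovery timeout (TLER/ERC). WD Reds should ship with it enabled, but it's quick to confirm; a sketch with /dev/sdX as a placeholder:

    smartctl -l scterc /dev/sdX        # show the current read/write error-recovery timers
    smartctl -l scterc,70,70 /dev/sdX  # set both to 7 seconds (values are in tenths of a second)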

---

-snip-

I've had shitty SATA cables cause drives to drop out and reappear on me before.


---

I've had shitty SATA cables cause drives to drop out and reappear on me before.

 

Just wanted to add that I had this issue with a couple of drives which were randomly coming up disconnected (which was very annoying!)... Lots of troubleshooting later, I replaced all my SATA/SAS cables with new high-quality ones and haven't had the issue since. I also use cables with clips so they latch to the motherboard and HDD.

 

Ever since, I've always used high-quality cables.

 

It may not be the cause here, but it's amazing how often people overlook good-quality cables in expensive setups.

---

Update #6: Added an additional 6TB / Replaced MB / Scrapped outside fan controller / Added 2U for future expansion

 

Updated!

 

- snip -

Very nice, welcome to the list! :)

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic

---

Nice thread.

I run 4 NAS servers with FreeBSD/ZFSguru:

1: 10x4TB and 10x3TB (2x RAIDZ2)

2: 10x4TB and 10x3TB (2x RAIDZ2)

3: 20x4TB (2x RAIDZ2)

4: 5x4TB (RAIDZ1)

The 4TB drives are HGST 7K4000s.

The 3TB drives are Toshiba DT01ACA300s (HGST's 7K3000).

I split it over multiple servers and pools as I don't want everything online all the time.

10GbE via Intel X520-DA2 cards with DAC copper cables.

---

-snip-

We need photos and screenshots :)

---

-snip-

Nice! Since we're trying to keep the thread consistent, feel free to use looney's template from this post and I'll add you to the list if you want. :)


---

No, I don't want to be on the list.

I just like reading this topic for the nice servers.

My Norco case originally had 5x 80mm fans, which were very loud, so I changed them to 3x 120mm Noctua fans.

In server 3 I took out the fan plate and put in 3x 140mm Noctua fans without a plate. The noise is better now on server 3.

So updates will follow for servers 1 and 2; perhaps the Norco cases will even be changed to Ri-vier cases.

I had DOA issues with 2 backplanes for the HDDs in the Norco cages. They were swapped under warranty, but I still have 4 HDDs with a lot of cable errors.

---

Update #63 part 2/5 :)

 

I've had 3 drives die in slot 9 of my RAID array. One was original, so that failure was probably legitimate. I swapped in a recert one I had on my desk... it died within 12 hours of the rebuild finishing... swapped in another recert one, and then slot 0 also died, putting my array in a critical state (1 more failure = data loss). Slot 0 rebuilt fine, then slot 9 tried to rebuild and instantly failed.

 

Either Seagate is churning out shit for their recert drives or my SFF-8087 SATA cable is bad. I have 2 HGST Deskstars being delivered today and another cable. We'll get to the bottom of it. I figure I'm going to mix HGST Ultrastars with HGST Deskstars and see if I notice any difference in reliability. It's 1M MTBF vs. 2M MTBF, but both have rotational vibration sensors, so it might not be worth the $50 premium to upgrade to full enterprise grade.

 

And to reiterate... holy fucking balls, the Seagate 3TBs are bad. With 20 drives and 5 cold spares on my desk, I can't even keep up with RMA-ing them (and that's *with* the $12 advance cross-ship RMA), and I now have 0 spares.

 

EDIT: When I say a drive "dies", I mean the RAID controller has marked it as failed. I have no idea what criteria the RAID controller uses to decide a drive has failed (maybe SMART, maybe communication issues, maybe corrupted data, etc.), but even when I manually try to put the drive back in service, it immediately gets kicked back out as failed.
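Before the next replacement goes into slot 9, a burn-in pass outside the array might help separate a bad batch of recerts from a bad cable. A destructive sketch, with /dev/sdX as a placeholder:

    badblocks -b 4096 -wsv /dev/sdX  # write/verify passes over the whole disk (destroys all data; the larger block size keeps the counter in range on 3TB+ drives)
    smartctl -t long /dev/sdX        # follow up with an extended SMART self-test
    smartctl -a /dev/sdX             # any new reallocated or pending sectors afterwards = RMA it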


---

I think you should be safe to save the $50 :)

SSD Firmware Engineer

 

| Dual Boot Linux Mint and W8.1 Pro x64 with rEFInd Boot Manager | Intel Core i7-4770k | Corsair H100i | ASRock Z87 Extreme4 | 32 GB (4x8gb) 1600MHz CL8 | EVGA GTX970 FTW+ | EVGA SuperNOVA 1000 P2 | 500GB Samsung 850 Evo |250GB Samsung 840 Evo | 3x1Tb HDD | 4 LG UH12NS30 BD Drives | LSI HBA | Corsair Carbide 500R Case | Das Keyboard 4 Ultimate | Logitech M510 Mouse | Corsair Vengeance 2100 Wireless Headset | 4 Monoprice Displays - 3x27"4k bottom, 27" 1440p top | Logitech Z-2300 Speakers |
