LTT Storage Rankings


On 9/5/2016 at 3:24 AM, brwainer said:

No matter what the rules say, I sure as hell want to hear about this! Maybe the individual nodes can make it on the list, and the system as a whole can be an honorable mention? I personally think that the whole system should be counted together though. But where does the line get drawn? Does SOFS or Storage Spaces Direct count as a cluster/distributed file system?

The original plan was to use Storage Spaces Direct; I was going to edit the post and add it, but the forum went crazy :P.

 

I may end up doing both, or one on top of the other; I really want to try both. The main problem with Storage Spaces Direct is that I use ESXi, and NFS services on Windows aren't that good. iSCSI does work well, but there are reasons it annoys me for my current setup (too few physical servers, the iSCSI target server looping back within itself, etc.).

 

I would say, though, that SOFS shouldn't count, as it isn't really a proper distributed file system that acts as one system.


Been meaning to post this for far too long!

 

Hardware

CASE: MS-Tech CA-0210 rev. b

PSU: Corsair VS450

MB: Foxconn H55M-S

CPU: Core i7 870

HS: Stock Intel heatsink

RAM: 16GB DDR3 1333

SSD: Sandisk Ultra 120GB

HDD 1: 6x 4TB WD Red

HDD 2: 2x 1TB Toshiba (something something)

As the MB only has 6 SATA ports, I have a Syba 4-port PCIe SATA card. There are also some gigabit NICs in there, as at some point I will be running a pfSense VM.

Following a CPU upgrade I also now have to run a graphics card, as the 870 doesn't have integrated video. This is annoying, as there is no monitor connected to it, but it won't POST without one. Any tips on getting around that would be great!

Software and Configuration:

The server runs Windows Server 2012 R2. The 4TB Reds are in a parity configuration managed by Storage Spaces; this is the main storage. The two 1TB drives are in a RAID 1 setup and are used as a scratch disk as well as the home for the Hyper-V vdisks.

I run 3 VMs: 2 Plex servers and 1 for various newsgroup utilities, plus a pfSense router VM.

Usage:

The server is used for Plex TV shows/films, family music and picture libraries, and as a NAS.

Backup:

I've gone through various options with this. At the moment I use a cloud service, Data Deposit Box.

Photos:

2016-07-24 17.29.03.jpg

2016-07-23 16.30.55.jpg

2016-07-23 16.31.00.jpg

2016-07-23 16.31.28.jpg

2016-07-23 16.31.45.jpg

SS.png


I have deployed a new server! I guess the old one should be taken off the list (the wood case, still in my sig), but I assume the post won't be removed, correct? I still have links to it, because that thing was pretty sweeeeeet!

 

Anyways, new build is finally deployed!

 

Hardware

CASE:  SUPERMICRO 4U 846E16

PSU:  R1200B

MB:  SUPERMICRO X8DTE-F

CPU:  Dual Intel XEON L5520

HS: Stock Supermicro

RAM: 32GB DDR3 1333 ECC

RAID CARD 1:  6GBPS SAS HBA (WITH LSI 9211-8I FIRMWARE)

SSD1 Boot: Kingston SSDNow V+200

SSD2: Kingston SSDNow V+200

HDD 1: 8x 4TB Western Digital Red RAID Z2

HDD 2: 8x 3TB Seagate ST3000DM001 (I know, these are total garbage) RAID Z2

 

Software and Configuration:

This server is running FreeNAS, and therefore obviously ZFS. My old NAS ran OpenMediaVault and it served me well, but without ECC RAM I couldn't make the jump to FreeNAS. Suffice it to say, this was a good move ;)

 

Additional info:

Currently still learning FreeNAS.

drive-download-20160925T060805Z.zip

Sorry I didn't get any great pics of it; I probably should have busted out the D810, but I was lazy... I really should get some sick shots of my rig at least, huh?

 

Also, don't worry, it's not actually running Windows. That was just the obligatory NEED TO SEE LOTS OF CPUs IN TASK MANAGER pic, haha. I also wanted to test thermals, so I figured some AIDA64 and Prime95 would be a good way to do that, since I'm accustomed to using them.

 

IMG_4070.JPG

IMG_4071.JPG

IMG_4086.JPG

IMG_4090.JPG

IMG_4128.JPG

IMG_4138.JPG

IMG_4148.JPG


Rig: i9 9900k @ 5Ghz - - Z390 Auros Elite  - - EVGA RTX 2080 XC Ultra @ 2025Mhz - - 4x8GB Vengeance Pro 3000Mhz @ 3600MHz - - Samsung 950 Pro 512 NVMe Boot + Main Programs - - Samsung 830 Pro 256 RAID 0 Lightroom + Photo work - - WD Blue 1 TB SSD for Games - - Corsair RM650i - - Sound BlasterX EA-5 - - EK Supremacy Evo - - XT45 X-Flow 420 + UT60 280 rads - - EK Full Cover GPU Block - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - Steel Series QcK XXL

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/ Media Server: ESXi 6.5 - - 250 GB SSD for VM's/ESXi boot - - FreeNAS 11.3-U2 - - HPE Proliant ML10 Gen 9 backbone - - i3 6100 - - 28 GB ECC - - 10x4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander

 

White Lightning (original full watercooled build) - Homelab / Media Server - The Blue Beast (my car and other expensive hobby...) - iPhone Xs - 2018 MacBook Air


Hey, I see lots of Supermicro mods going on. I am trying to get this server ^ to be quieter! Any advice? I guess I should start trawling through this thread a bit.



22 hours ago, {EAC} Shoot em UP said:

Hey, I see lots of supermicro mods goin on. I am trying to get this server ^ to be quieter! Any advice? I guess I should start trolling this thread a bit.

I actually had to take out the stock fans and stick a bunch of be quiet! Pure Wings into mine. I have to be careful, though, because the fans are just lying loose (no way to screw or otherwise hold them down). The other issue is that on some Supermicro boards the minimum fan speed is 50% or higher (I don't know why they did that), so the fans run faster than they really need to.

17 hours ago, scottyseng said:

Also, the other issue is that on some Supermicro boards the minimum fan speed is 50% or higher (I don't know why they did that), so the fans run faster than they really need to.

Because they're designed to be used in a rack probably -- so they don't want to encourage you to be stupid. (just my guess)


PSU Tier List | CoC

Gaming Build | FreeNAS Server

Spoiler

i5-4690k || Seidon 240m || GTX780 ACX || MSI Z97s SLI Plus || 8GB 2400mhz || 250GB 840 Evo || 1TB WD Blue || H440 (Black/Blue) || Windows 10 Pro || Dell P2414H & BenQ XL2411Z || Ducky Shine Mini || Logitech G502 Proteus Core

Spoiler

FreeNAS 9.3 - Stable || Xeon E3 1230v2 || Supermicro X9SCM-F || 32GB Crucial ECC DDR3 || 3x4TB WD Red (JBOD) || SYBA SI-PEX40064 sata controller || Corsair CX500m || NZXT Source 210.

48 minutes ago, djdwosk97 said:

Because they're designed to be used in a rack probably -- so they don't want to encourage you to be stupid. (just my guess)

Yeah, according to Supermicro support, they're like that because they prioritize reliability over noise. So even if everything is within temperature, they'd rather have the fans run faster just in case. Their desktop boards aren't like that, though. I've heard that if you have a board with IPMI you can hack the minimum fan speed, but I'm not sure how true that is.
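For what it's worth, the IPMI trick people usually mean is lowering the BMC's lower fan-speed thresholds with ipmitool, so slower aftermarket fans don't trip the "failed fan" full-speed ramp. A hedged sketch in Python; the sensor names and RPM values here are assumptions, so check `ipmitool sensor list` on your own board before applying anything:

```python
import subprocess

# Assumed sensor names and thresholds -- Supermicro boards typically expose
# FAN1..FAN4 plus FANA/FANB, but verify with `ipmitool sensor list` first.
FANS = ["FAN1", "FAN2", "FANA"]
LOWER_THRESHOLDS = ("100", "200", "300")  # non-recoverable / critical / non-critical RPM

def threshold_cmd(fan):
    """Build the ipmitool invocation that lowers one fan's lower thresholds."""
    return ["ipmitool", "sensor", "thresh", fan, "lower", *LOWER_THRESHOLDS]

if __name__ == "__main__":
    for fan in FANS:
        print(" ".join(threshold_cmd(fan)))
        # Uncomment to actually apply (needs root and a reachable BMC):
        # subprocess.run(threshold_cmd(fan), check=True)
```

Thresholds set this way usually survive until the next BMC reset, so people often run this from a startup script.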

4 hours ago, scottyseng said:

Yeah, according to Supermicro support, they're like that because they prioritize reliability over noise. So even if everything is within temperature, they'd rather have the fans run faster just in case. Their desktop boards aren't like that, though. I've heard that if you have a board with IPMI you can hack the minimum fan speed, but I'm not sure how true that is.

 

5 hours ago, djdwosk97 said:

Because they're designed to be used in a rack probably -- so they don't want to encourage you to be stupid. (just my guess)

Yeah, I mean... it's server hardware. There is NO need to be quiet in a server environment and I fully understand that, but since I am not in that environment (even though that was the design intention), I want to change it, as many of you do/have as well, lol.



4 hours ago, {EAC} Shoot em UP said:

 

Yeah, I mean... it's server hardware. There is NO need to be quiet in a server environment and I fully understand that, but since I am not in that environment (even though that was the design intention), I want to change it, as many of you do/have as well, lol.

Haha, I know, I have this in my room. Yeah, the Supermicro hot-swap fans are proprietary, so you can't really swap them for slower fans, only for the same model (they have a different pinout/fan end). The other option was to run a fan controller inside the server, but then the RAID or HBA controller throws an error about there being no fans on the backplane, which got annoying. The fan headers on the backplane and motherboard are normal 4-pin headers, so you can use your own fans there, but yeah, just accept that the backplane and motherboard have aggressive fan profiles.

 

The other thing to note, though, is that if you plan on filling all of the bays with hot 7200 RPM drives, you will need the stock fans. The drives do get really hot, and my ghetto be quiet! fan setup isn't strong enough to move air fast enough; I had to space out my drives because they were overheating.

15 hours ago, scottyseng said:

haha, I know. I have this in my room. Yeah the SuperMicro hotswap fans are proprietary so you can't really change them for slower fans, only the same (They have a different pin out / fan end). I think the other option was to run a fan controller inside of the server, but the RAID or HBA controller will throw an error about there being no fans on the backplane, which got annoying. The fan pins on the backplane and motherboard are normal fan 4 pins so you can use your own fans, but yeah, just accept that the backplane and motherboard have aggressive fan profiles.

 

The other thing to note, though, is that if you plan on filling all of the bays with hot 7200 RPM drives, you will need the stock fans. The drives do get really hot, and my ghetto be quiet! fan setup isn't strong enough to move air fast enough; I had to space out my drives because they were overheating.

Yeah, the 24 bays will not all be used up for quite some time, so I'm not too worried about that. Thanks for the info!



2 minutes ago, {EAC} Shoot em UP said:

Yeah, the 24 bays will not all be used up for quite some time, so I'm not too worried about that. Thanks for the info!

Have fun with the build. Yeah, I had six WD Re drives stacked in one column... and yeah... under load they got to 60°C+.

The WD Reds chug along at 45°C, even under load.


Is the ranking still a thing? It doesn't seem to have been updated in a few months...


Gaming Rig: ASUS Z87-PRO (V EDITION), Intel Core i5-4590 Haswell (Stock Cooling), HyperX FURY 8GB DDR3 1866, ASUS R9 380 2Gig, SILVERSTONE DA700, Samsung 840 500Gig SSD MZ-7TD500BW, 3 TB storage drive, ASUS DRW-2014L1T (DVD), LG WH14NS40 (Blu Ray), Sony SDM-HS95P, Windows 10 Pro
 

Home Server: - Supermicro SC826TQ-R800LPB chassis, X8DTN+, Dual Xeon E5620 (Stock), 
4x Nanya NT4GC72B4NA1NL-CG (24GB Reg ECC DDR3), 12x MICRON MT36JSZF51272PZ-1G4G1FG (48GB Reg ECC DDR3), (72GB total memory), 2x OCZ Agility (60GB) (OS & Backup), 6x WD40EFRX (4TB) in RaidZ2 (14.3 TB usable), 2x WD 10TB white label (Easystore shucks), 2x Q Series HDTS225XZSTA (256GB) ZIL & L2ARC mirrored

31 minutes ago, Ziggidy said:

Is the ranking still a thing? Doesn't seem it has been updated in a few months...

I think it got updated about a month ago? There hasn't been a huge number of submissions since then.


Looking to buy GTX690, other multi-GPU cards, or single-slot graphics cards: 

 

Might as well remove the old configuration I posted; almost everything about this system is different from the original.
 
Hardware
CASE: Fractal Define R5
PSU: EVGA SuperNOVA 650

CPU: Intel Core i3-6100 3.7 GHz

RAM: 32GB - G.SKILL Ripjaws V Series 

MB: ASRock H170M Mini-ITX

 

HBA CARD: LSI 9207-8i

INTEL SAS 36 port expander

HDD 1: 1x 500GB WD Blue 2.5" (OS)

HDD 2: 12x 1TB  WD Red 2.5"

HDD 3: 4x 3TB WD Green 3.5"

 
 
Software and Configuration:
My server is running CentOS 6.8 
I'm using mdadm to make my arrays.
The OS drive is the WD Blue (encrypted with luks).
My main array consists of 2 sets of six 1TB drives in RAID 5.
The WD Green drives will be in RAID 5 and used as storage for backups.
 
Usage:
I use the reds as storage for iscsi targets and/or kvm storage.
The greens are for keeping copies of all the files I want to have a second copy of.
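The post doesn't say how those second copies get made (rsync or a cron job would be typical); purely as an illustration of the idea, a minimal one-way mirror in Python:

```python
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> int:
    """Copy every file under src that is missing from, or newer than,
    its counterpart under dst. Returns the number of files copied."""
    copied = 0
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or target.stat().st_mtime < f.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied += 1
    return copied
```

Real tools (rsync, zfs send/receive) also handle deletions, partial transfers, and verification; this only sketches the "second copy on the backup array" idea.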
 
 
Photos:
 
DSCN21001_zpsb8lf89fc.JPG.c9f7f8f1feff72277e32490600a57c27.JPG
 
 
DSCN21021_zpsuomewi0v.thumb.JPG.f3da5da48cf2656f41bfccdd5e9bffa9.JPG
 
 
 

testa.PNG


Can Anybody Link A Virtual Machine while I go download some RAM?

 

Posted · Original Poster

Just to let you all know, I have bought a Supermicro 6017R-N3RF4+ to put my NAS system in.

I plan to buy another 8 8TB drives soon, which will get me well over 200TB :P


Respect the Code of Conduct!

>> Feel free to join the unofficial LTT teamspeak 3 server TS3.schnitzel.team <<

>>LTT 10TB+ Topic<< | >>FlexRAID Tutorial<<>>LTT Speed wave<< | >>LTT Communies and Servers<<

2 minutes ago, looney said:

Just to let you all know, I have bought a Supermicro 6017R-N3RF4+ to put my NAS system in.

I plan to buy another 8 8TB drives soon, which will get me well over 200TB :P

Haha, just when I thought someone would have you beat for the #1 spot...

 

Congratulations though.

On 9/29/2016 at 0:06 PM, brwainer said:

I think it got updated about a month ago? There hasn't been a huge amount of submissions since then

Perhaps I didn't meet some requirement. I posted just about 3 months back, but it hasn't been added to the rankings, unless I am just overlooking it.



5 hours ago, Ziggidy said:

Perhaps I didn't meet some requirement. I posted just about 3 months back, but it hasn't been added to the rankings, unless I am just overlooking it.

One of the mods usually adds a note saying when they update the listings



On 10/17/2016 at 7:55 PM, brwainer said:

One of the mods usually adds a note saying when they update the listings

Looks like it may be a while...




Hardware

CASE: Fractal Design Node 804

PSU: Seasonic SS-760XP

MB: Asus P9D-M

CPU: Intel Xeon E3-1231 v3

HS:  Noctua NH-C14S

RAM: 2 * 4GB DDR3L ECC (Kingston KVR16LE11S8/4KF)

RAID CARD 1: SAS9211-8I (flashed to IT mode)

SSD: 2 * Samsung SSD 850 PRO 128GB

HDD 1: 8 * WD 6TB red (WDC WD60EFRX-68L0BN1)

Software and Configuration:

My server is running FreeBSD 11 and I'm using ZFS to manage a RAID 10 pool. 

My array consists of 8 6TB WD drives in RAID 10. This gives about 21 TB of usable capacity from the pool.
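That usable figure is easy to sanity-check: half the drives in a striped-mirror pool hold redundant copies, and vendors quote decimal TB while ZFS tools report binary TiB. A quick back-of-the-envelope calculation:

```python
def raid10_usable_tib(n_drives: int, drive_tb: float) -> float:
    """Usable capacity of a RAID 10 (striped mirrors) pool, in TiB.

    Half the drives hold mirror copies; vendors quote decimal TB
    (10**12 bytes) while tools report binary TiB (2**40 bytes).
    """
    raw_bytes = n_drives * drive_tb * 10**12
    usable_bytes = raw_bytes / 2  # two-way mirrors
    return usable_bytes / 2**40

# 8 x 6TB in RAID 10 -> about 21.8 TiB before ZFS metadata overhead,
# which lines up with the ~21 the pool reports.
print(round(raid10_usable_tib(8, 6), 1))
```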

Currently I am using Samba to connect from my desktop; I will be setting up iSCSI to share space with an ESXi host.

Usage:

I use the storage for movies and series; Plex runs on the server and I connect to it for local playback.

It will be used as a datastore for an ESXi host. 

Backup:

Hopes and dreams.

Additional info:

I'm looking at increasing the memory and running bhyve since the CPU is woefully underutilized. I would mostly be shifting the services into VMs and possibly running related VMs on it. 

I used RAID 10 for ease of recovery and lower overhead on high write operations. 

I use identically sized partitions in my ZFS pool rather than the raw drive size, as this prevents possible issues if a replacement drive differs in size by a MB or two.
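The identical-partition trick boils down to a sizing rule: round every drive down to one fixed partition size with some slack, so a replacement that runs slightly small still fits. A sketch of that rule; the 100 MiB slack and 1 MiB alignment figures here are assumptions, not from the post:

```python
def safe_partition_bytes(drive_bytes: int,
                         slack_bytes: int = 100 * 2**20,
                         align_bytes: int = 2**20) -> int:
    """Partition size to use on every drive in the pool: the nominal drive
    size minus slack for replacements that run a little small, rounded
    down to a 1 MiB alignment boundary."""
    usable = drive_bytes - slack_bytes
    return (usable // align_bytes) * align_bytes

# A replacement "6 TB" drive that is 5 MB short of nominal still
# accommodates the shared partition size used by its siblings.
nominal = 6 * 10**12
short = nominal - 5 * 10**6
part = safe_partition_bytes(nominal)
print(part <= short)
```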

Photos:

3ZP26Dp.jpg

4 deep on each stack. The SSDs are installed in a section in the front panel. I installed the PSU upside down and couldn't be bothered to rewire it all after I realized. 

FChNz1n.jpg
On the plus side, I never have problems with heat. 

BLcn1YH.jpg

Nothing fancy, but it keeps it off the floor so it doesn't suck up as much dust and lint. 

bqZoH6d.png

lLrUUzm.png

Pool status and capacity. 

 

EDIT: Yes, I'm bad for running as root. I was lazy and had to install and modify some services.


Just for 'fun' I thought I'd throw in the SAN I'm using at work (I just upgraded our infrastructure from an old VMware nightmare to a shiny new Hyper-V beastie that doesn't fall over if you look at it too hard).

 

So yes, this is just for fun, because, well, this is small/medium business/enterprise-grade stuff:

 

Hardware

HP MSA2040
2 x Controllers
Each with: 1 x 1 Gbit Ethernet (management), 4 x SFP+ adaptors (populated with 2 x 16Gb Fibre Channel and 1 x 1 Gbit iSCSI)
Redundant power supplies
24 SFF SAS drive bays, connecting to a 12Gb/sec SAS controller. The controller has a supercapacitor and flash memory backup for writes during power loss.
 
Bays 1 - 16: HP 2.5" 600GB SAS 15K Drives.
Bays 17 - 24: HP 2.5" 1.2TB SAS 10K Drives.
 
Software and Configuration:
 
HP MSA Storage Management Utility v3 (web based). I think it's BSD based, but it might be Linux.
 
Bays 1 - 8 Configured as a storage volume, RAID 5 for about 4.2TB
Bays 9 - 16 Configured as a storage volume, RAID 5 for about 4.2TB
Bays 17 - 24 Configured for remote archive (from our DR system) for about 8.4TB.
 
Total storage about 16.8TB
 
Controllers are cross connected (no FC switch) to 2 x HP ProLiant DL 360p Gen8 servers running Windows Server 2012 R2 (each server has 1 connection from controller A and one from controller B) using 16Gb Fibre Channel HBAs.
 
Usage:
 
Storage for Hyper-V hosting 14 Windows servers, including failover clustering/AlwaysOn for SQL Server 2016, Exchange 2016, IIS and so on.
It also remotely clones itself to an identical device in our DR data centre once an hour.
 
Photo's:
I thought I had some on my phone, but apparently I don't!
 
All told it's good, solid and stable, and deals with all of the bandwidth we can throw at it. It's difficult to properly bench something like this (while it's in a live environment, anyway), but I've seen IOPS over 4000, and a 5GB file copies at about 300-500MB/second, obviously more than enough to saturate our 1Gbit network.
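The saturation claim is easy to sanity-check: a 1 Gbit/s link tops out at 125 MB/s before Ethernet/TCP overhead, well under the 300-500 MB/s the SAN sustains. A minimal check:

```python
def link_mb_per_s(gigabits_per_s: float) -> float:
    """Theoretical maximum throughput of a network link in MB/s
    (decimal megabytes), before Ethernet/TCP overhead."""
    return gigabits_per_s * 1000 / 8

line_rate = link_mb_per_s(1)   # 1 Gbit network
print(line_rate)               # 125.0
print(300 > line_rate)         # even the low end of 300-500 MB/s saturates it
```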
 
Anyway, that config (without servers, just the SAN + Drives) comes in at around $12,000USD (at today's exchange rate anyway).
 
Thought you guys might enjoy seeing how much cheaper, and probably more capable, your builds are compared to that!
 
The price we pay for 'zero' down time...

Alrighty then, time for an update!

 

Note: The list of noteworthy builds will now also hold the decommissioned builds.

 

@MyNameIsNicola Updated. I hope I got it right, so many drives. :D

@scottyseng Updated

@maxtch Added your second system, updated your first one.

@b3nno Added system to list. Nice box!

@Jonny Updated.

@Ziggidy Nope, rankings are not dead, but yes, it does usually take me a while to get around to them. Added your system to the list. Thanks for the entry!

@username6465 Updated.

@leadeater Answered your question in chat already, but just for public info: We'd count each system in a clustered file system as a single entry, if it qualified for the list.

@FattyDave Added your system to the list. Thanks!

@{EAC} Shoot em UP Added your new system to the list and relegated the old one to the secondary list (no, we don't delete posts). The pics in your new post seem to be broken though! :( As for the noise: I put noise-dampening material into our server; it actually made quite a difference. Wasn't cheap though.

@unijab Added your new system, moved old one to secondary list.

@kerradeph Added system to list. Very neat.

 

 

It would also appear that there are a few entries I've overlooked. The thread seems to be becoming a bit unwieldy; I'm going to think about whether I can come up with a solution for that. Wouldn't want this to keep happening. Apologies to everyone below.

@paps511 Added.

@brwainer Fixed that and added to list. Apologies.

@weeandykidd Overlooked your last update, fixed that as well.

@Ramaddil Updated.

 

Lastly:

@Bhav If you post the rest of the system info as per @looney's template, that seems like it would qualify for the list.

@saitir Obviously we won't be starting to add work systems to the rankings (would be a bit unfair), but that does sound pretty neat, so if you ever do get a hold of some pictures... wouldn't mind seeing those. ;)

 


BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic

On 6/1/2013 at 10:44 AM, looney said:
  • We don't count OS drives,
  • we also do not count cache drives,

Why?


PSU Tier List LTT Gender Survey

LTT's self-appointed memelord

Always up for a game of chess

See @STRMfrmXMN @Energycore or @Starelementpoke for PSU needs

See @deXxterlab97 for blue fetishes (if that's your thing)

See @ShadowTechXTS for cats

@Implosivetech's got you covered for all your graphic design needs

And lastly, never put a CX green series PSU in a high end build https://youtu.be/Oybz5Q-If9M?t=7

 

