
APOLLO (2 CPU LGA1366 Server | InWin PP689 | 24 Disks Capacity) - by alpenwasser [COMPL. 2014-MAY-10]

 

That moment when you get called "son" by a person 12 years younger than you. :P

(no worries, I get it, figure of speech and all that, plus it makes me feel young again)

 

Anyway, on topic: Yes, indeed I have very much enjoyed this build, and yes, I too

have a weakness for server builds.

 

If you want to see some more servers, I recommend you check out our 10 TB+ topic,

as well as its counterpart on HardOCP. OCN also has a nice server thread here

 

 

Less teasing, more building! :P

 

lol yeah, figure of speech :P and I will look in those places :)

Main Rig: Cpu: Intel Core i7 4790k CPU Cooler: NH_d15 GPU: Gigabyte Radeon HD7970 Windforce III MOBO: Msi Z97 Gaming 5 RAM: HyperX Fury 1600 8gb white SSD: Samsung 840 evo 250gb PSU: Corsair CX750M Case: NZXT H630 white

PC peripherals: Monitor: Samsung Syncmaster T27B550 Keyboard: Logitech G105 Mouse: Logitech G700s Headphones: Logitech G430 

 


That is awesome :)

Rig CPU Intel i5 3570K at 4.2 GHz - MB MSI Z77A-GD55 - RAM Kingston 8GB 1600 mhz - GPU XFX 7870 Double D - Keyboard Logitech G710+

Case Corsair 600T - Storage Intel 330 120GB, WD Blue 1TB - CPU Cooler Noctua NH-D14 - Displays Dell U2312HM, Asus VS228, Acer AL1715

 


Nice... hope the build goes better than Apollo 2, which was abandoned by NASA and never launched :P

The Mistress: Case: Corsair 760t   CPU:  Intel Core i7-4790K 4GHz(stock speed at the moment) - GPU: MSI 970 - MOBO: MSI Z97 Gaming 5 - RAM: Crucial Ballistic Sport 1600MHZ CL9 - PSU: Corsair AX760  - STORAGE: 128Gb Samsung EVO SSD/ 1TB WD Blue/Several older WD blacks.

                                                                                        


That is awesome :)

 

Thanks! :)

 

Nice... hope the build goes better than Apollo 2, which was abandoned by NASA and never launched :P

Lol, yeah, so far so good. It's on active duty as we speak (well,

uhm, write, I suppose). HELIOS is still suffering from massive

delays though, due to budget constraints. But I'm a very patient man,

I'll finish that beast even if it takes me another three years. :D

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


 

Thanks! :)

 

Lol, yeah, so far so good. It's on active duty as we speak (well,

uhm, write, I suppose). HELIOS is still suffering from massive

delays though, due to budget constraints. But I'm a very patient man,

I'll finish that beast even if it takes me another three years. :D

 

Don't worry man. Most people here have their delays when it comes to budgets. I just got an unexpected bill which set me back 1-1½ months :P

 

Just part of the game.



Don't worry man. Most people here have their delays when it comes to budgets. I just got an unexpected bill which set me back 1-1½ months :P

 

Just part of the game.

Yeah, I know, it's par for the course. Such is the life

of PC modding. :D



I had more requirements for it than you had. In the end I found this one:

 

 

 

The advantage of that one is that it has pinouts for activity/power LEDs. Fully hot-swappable as well... $6

 

SATA + SAS, hot swap, failover, pinouts for LEDs, all together for a small price.

This might be a really dumb question, but why are there two SATA connectors on the board?

If you tell a big enough lie and tell it frequently enough it will be believed.

-Adolf Hitler 


This might be a really dumb question, but why are there two SATA connectors on the board?

SATA and SAS connectors are the same, but SAS drives support failover in case your host bus adapter dies. Since I listed failover as a feature, here's what it means:

 

You plug one port into your first RAID card/HBA/mobo and the other into a port on a separate RAID card/HBA/mobo, and that way you get rid of a single point of failure. SAS drives only though; SATA doesn't offer this feature.

 

/edit: But you can still use SATA drives by just plugging a cable into one of the two ports, since the only difference is how the drive itself handles the data link.
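
If you ever wonder whether a host actually sees both ports of such a drive, here's a small sketch of my own (not from this thread): on Linux, both paths to a dual-attached SAS drive typically show up under /dev/disk/by-id with the same wwn-0x... identifier, just resolving to different sdX devices, so grouping by WWN reveals the pairs. The failover itself would then be handled by something like dm-multipath.

```python
# Minimal sketch, assuming both paths to a dual-attached SAS drive expose the
# same WWN symlink under /dev/disk/by-id, resolving to different sdX nodes.
from collections import defaultdict
from pathlib import Path

def disks_by_wwn(by_id_dir: str = "/dev/disk/by-id") -> dict:
    """Map each WWN symlink name to the set of block devices it resolves to."""
    groups = defaultdict(set)
    for link in Path(by_id_dir).glob("wwn-0x*"):
        if "-part" in link.name:                     # skip partition symlinks
            continue
        groups[link.name].add(link.resolve().name)   # e.g. 'sdb'
    return groups

if __name__ == "__main__":
    for wwn, devices in sorted(disks_by_wwn().items()):
        tag = "dual-path" if len(devices) > 1 else "single-path"
        print(f"{wwn}: {', '.join(sorted(devices))} ({tag})")
```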

My builds:


'Baldur' - Data Server - Build Log


'Hlin' - UTM Gateway Server - Build Log


This might be a really dumb question, but why are there two SATA connectors on the board?

Not a dumb question at all. The board has 14 ports in total:

Six from the Intel chipset itself (the normal SATA ports you

see towards the bottom edge), and eight ports from the two

SAS connectors (each of those two large connectors gets one

of those cables that fans out to four SATA ports).

The eight SAS/SATA ports are connected to a separate LSI 1068E

chipset, which would actually make them a pretty good option,

with one major caveat: That chipset does not support drives

larger than 2 TB due to its age (back then 3 TB drives hadn't

come along yet). That's why I'm not using them (plus, I haven't

yet quite figured out how I'd use them in IT mode, supposedly

there's a jumper for that somewhere on the board).

EDIT:

LOL, derp, I thought you meant the motherboard, not the PCB

@Ahnzh posted. But yeah, anyway, what he said.



Ha, how could I forget, I actually weighed it!

~33 kg currently (~73 lbs).

That's actually not too bad. I'm assuming air cooling will make it quite a bit lighter than water cooling, which can easily add 5-6 kg through the mass of the radiators and of course the water.

My Personal Rig - AMD 3970X | ASUS sTRX4-Pro | RTX 2080 Super | 64GB Corsair Vengeance Pro RGB DDR4 | CoolerMaster H500P Mesh

My Wife's Rig - AMD 3900X | MSI B450I Gaming | 5500 XT 4GB | 32GB Corsair Vengeance LPX DDR4-3200 | Silverstone SG13 White


That's actually not too bad. I'm assuming air cooling will make it quite a bit lighter than water cooling, which can easily add 5-6 kg through the mass of the radiators and of course the water.

But you would not put water cooling in a server.


 


Just going to leave this here...

 

 

Also, water does get used in professional servers as well.

Oh I guess I was wrong.


 


Just going to leave this here...

 

 

Also, water does get used in professional servers as well.

On very big scales I can see liquid cooling making sense, because it's a more efficient and manageable way to move the heat directly where you want it. But otherwise, for personal home servers and whatnot, I would just stick with regular heatpipe cooling; it's more reliable and less prone to failure.

 

That is of course if you are doing it the conventional way; having multiple servers and mounting radiators outside your house with tubing going through the walls are very clear exceptions.


That's actually not too bad. I'm assuming air cooling will make it quite a bit lighter than water cooling, which can easily add 5-6 kg through the mass of the radiators and of course the water.

Yeah, w/c can add a lot of weight to a machine, that's for sure.

  

Oh I guess I was wrong.

 

On very big scales I can see liquid cooling making sense, because it's a more efficient and manageable way to move the heat directly where you want it. But otherwise, for personal home servers and whatnot, I would just stick with regular heatpipe cooling; it's more reliable and less prone to failure.

 

That is of course if you are doing it the conventional way; having multiple servers and mounting radiators outside your house with tubing going through the walls are very clear exceptions.

Yeah, I love water cooling, and I'm really happy with ZEUS, but

for APOLLO I just didn't want to deal with that hassle TBH.

One day I want to have a proper server rack and I might use

external radiator goodness, but for now, air cooling will do. :)



On very big scales I can see liquid cooling making sense, because it's a more efficient and manageable way to move the heat directly where you want it. But otherwise, for personal home servers and whatnot, I would just stick with regular heatpipe cooling; it's more reliable and less prone to failure.

 

That is of course if you are doing it the conventional way; having multiple servers and mounting radiators outside your house with tubing going through the walls are very clear exceptions.

 

Major nope. For big scale you need modularity and consistency, and a leaky water pipe (don't get me wrong, you can make it leak-proof, but in the real world that's impossible) next to a major power source negates that. A/Cs exist for a reason, and those do involve water cooling (aka chillers), which sits in a separate place from the actual computer cooling. Air cooling is still king, especially at big scale; filtered air and controlled temperature and humidity are the way to go. But feel free to create such a thing, it may catch on (?).

 

If you look at the big datacenters, all water cooling is off in a separate place, which helps keep the datacenter at the same humidity level all the time, as it's controlled by the air's relative humidity. The last thing you want is a leak making it as humid as a swamp in the summertime; that voids most warranties.

 

There is that one company doing oil-cooled servers, but, well, I don't think it's really catching on in a big way. Sure, there are some localized installs, but talk about a mess, especially when you need to work on a node/blade/server. I personally wouldn't get such a thing; all the wires are a mess as it is, and then you add oil or a liquid into the mix.

I roll with sigs off so I have no idea what you're advertising.

 

This is NOT the signature you are looking for.


- snip -

Aquacomputer have been selling data center water cooling solutions

in cooperation with a German data center company for quite some years

now, seems to work pretty well. They make custom waterblocks for 1u

servers and everything IIRC.



Aquacomputer have been selling data center water cooling solutions

in cooperation with a German data center company for quite some years

now, seems to work pretty well. They make custom waterblocks for 1u

servers and everything IIRC.

 

Well, there's water cooling of air (chillers, etc.) and there's water-block cooling like you listed, but unless you go all-in with one vendor, or retrofit to no end and only buy what can be retrofitted with company X's water blocks and cases, then maybe. If you instead buy based on need, go with what works, and have to deal with N manufacturers, then it does not work. I could not envision limiting what we buy based on whether we could retrofit a water block and allow for water cooling ports; that would limit us to no avail, and the same goes for any datacenter that wants to stay flexible enough to meet its users' needs.



Well, there's water cooling of air (chillers, etc.) and there's water-block cooling like you listed, but unless you go all-in with one vendor, or retrofit to no end and only buy what can be retrofitted with company X's water blocks and cases, then maybe. If you instead buy based on need, go with what works, and have to deal with N manufacturers, then it does not work. I could not envision limiting what we buy based on whether we could retrofit a water block and allow for water cooling ports; that would limit us to no avail, and the same goes for any datacenter that wants to stay flexible enough to meet its users' needs.

 

Ah yeah, that makes sense. Having to deal with all the usual compatibility

hassles you have when building a custom loop in your home PC would not be

something I'd want to have to deal with when putting together a data center.

 

I suspect that's precisely why Aquacomputer is in a joint venture with a data

center company for this, so that they can avoid the hackery and provide

completely custom solutions for the entire facility, not just slap a few blocks

onto some boards here and there and mount a few radiators somewhere in

a rack.

 

You can have it tightly integrated into the building, even using the water from

the computers to heat buildings and stuff like that. The blocks are custom made

for the servers, and it's all very tightly monitored (as servers should be, of

course). As far as I can make out, you have central pump/monitoring stations

which then distribute the water to the servers via copper pipes according

to their website, each server having an inlet and an outlet connected via

quick disconnects (you can see a picture in the pdf they have on their website).

 

They have a website on the thing here

It's in German, but there's at least one pic, and in the pdf they have, there

are a few more.

 

Considering that this is most likely custom tailored to each customer, I'd

wager that you can have them make custom blocks for pretty much any

sort of machine you want.



 

They have a website on the thing here

It's in German, but there's at least one pic, and in the pdf they have, there

are a few more.

 

Considering that this is most likely custom tailored to each customer, I'd

wager that you can have them make custom blocks for pretty much any

sort of machine you want.

 

Yes, I've been looking at their site since the post where you listed it. German, but it's computers, so kinda universal ;)

 

It's interesting, and yes, it would be more energy efficient, but boy, what a pain in the rear it would be to get it done with our whole mash-up of vendors. Then there are the ones where it would be impossible because they are already so tight (blade servers), so air cooling would remain nonetheless.



Yes, I've been looking at their site since the post where you listed it. German, but it's computers, so kinda universal ;)

 

It's interesting, and yes, it would be more energy efficient, but boy, what a pain in the rear it would be to get it done with our whole mash-up of vendors. Then there are the ones where it would be impossible because they are already so tight (blade servers), so air cooling would remain nonetheless.

 

Yeah, I imagine this is something you implement when you build up a

server center from the ground up or are at least doing a major revamp

of your entire infrastructure, not something you gradually introduce

into an online server farm piece by piece.



Still can't get over the data cable management  :wub:

 

I hope to do something like this (although not as outrageous) to store

all of my family's downloaded movies, music, work files, and my

SolidWorks files, which would easily take up 10 TB. I would just have to

read up on RAID and such.

i5 4670k| Asrock H81M-ITX| EVGA Nex 650g| WD Black 500Gb| H100 with SP120s| ASUS Matrix 7970 Platinum (just sold)| Patriot Venom 1600Mhz 8Gb| Bitfenix Prodigy. Build log in progress 

Build Log here: http://linustechtips.com/main/topic/119926-yin-yang-prodigy-update-2-26-14/


aw--apollo--logo.png

Table of Contents

01. 2013-NOV-14: First Hardware Tests & The Noctua NH-U9DX 1366

02. 2013-NOV-16: Temporary Ghetto Setup, OS Installed

03. 2014-APR-01: PSU Mounting & LSI Controller Testing

04. 2014-APR-02: The Disk Racks

05. 2014-APR-08: Chipset Cooling & Adventures in Instability

06. 2014-APR-09: Disk Ventilation

07. 2014-APR-11: Fan Unit for Main Compartment Ventilation

08. 2014-APR-12: Storage Topology & Cabling

09. 2014-APR-26: Storage and Networking Performance

10. 2014-MAY-10: Sound Dampening & Final Pics

Hardware - Final Config

CASE:               InWin PP689

PSU:                Enermax Platimax 600 W

MB:                 Supermicro X8DT3-LN4F

CPU:                2 × Intel Xeon L5630 (quadcore, hyperthreaded)

HS:                 Noctua NH-U9DX - Socket LGA1366

RAM:                24 GB Hynix DDR3 1333 MHz ECC

HBA CARD 0:         LSI 9211-8i, flashed to IT mode (Tutorial)

HBA CARD 1:         LSI 9211-8i, flashed to IT mode

HBA CARD 2:         LSI 9211-8i, flashed to IT mode

SSD:                Intel 520, 120 GB

HDD 0:              WD VelociRaptor 150 GB (2.5")

HDD 1-3:            Samsung HD103UJ 1 TB F1 × 3

HDD 4-7:            WD RE4 2 TB × 4

HDD 8-13:           WD Red 3 TB × 6

Total Raw Capacity: 29 TB

 

Pics of Final Form - More in Final Post

 

(click image for full res)

aw--apollo--2014-05-10--05--open.jpeg

(click image for full res)

aw--apollo--2014-05-10--15--back-side.jp

 

Wait, What, and Why?

So,   yeah,    another   build. Another   server,    to   be

precise. Why? Well, as nice a system as ZEUS is, it does

have two major shortcomings for its use as a server.

When I  originally conceived ZEUS,  I did not plan  on using

ZFS (since it was not  yet production-ready on Linux at that

point). The  plan was  to use  ZEUS' HDDs  as single  disks,

backing up the  important stuff. In case of  a disk failure,

the loss of  non-backed up data would  have been acceptable,

since it's mostly  media files. As long as  there's an index

of  what  was  on  the  disk,  that  data  could  easily  be

reacquired.

But right  before ZEUS was  done, I  found out that  ZFS was

production-ready on Linux, having kept a bit of an eye on it

since fall  2012 when I dabbled  in FreeBSD and ZFS  for the

first time. Using  FreeBSD on the  server was not  an option

though since I was nowhere near proficient enough with it to

use it for  something that important, so it had  to be Linux

(that's why I didn't originally plan on ZFS).

So,  I deployed  ZFS on  ZEUS,  and it's  been working  very

nicely  so  far. However, that  brought  with  it two  major

drawbacks: Firstly, I was now missing 5 TB of space, since I

had been  tempted by ZFS  to use those for  redundancy, even

for our media files. Secondly, and more importantly, ZEUS is

not an ECC-memory-capable system. The reason this might be a

problem is that  when ZFS verifies the data on  the disks, a

corrupted bit in your RAM  could cause a discrepancy between

the  data in  memory and  the data  on disk,  in which  case

ZFS  would  "correct"  the  data  on  your  disk,  therefore

corrupting it. This  is not exactly optimal  IMO. How severe

the consequences of this would  be in practice is an ongoing

debate in various ZFS  threads I've read. Optimists estimate

that it would merely corrupt  the file(s) with the concerned

corrupt bit(s), pessimists are  afraid it might corrupt your

entire pool.
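
To make that failure mode a bit more concrete, here is a tiny toy model of my own (plain Python, nothing to do with real ZFS internals): a single bit flipped in RAM makes a perfectly good on-disk block fail its checksum, and that is exactly the moment where a scrub on a non-ECC box might "repair" the wrong copy.

```python
# Toy model only: shows why a RAM bit flip is scary during a checksum scrub.
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

# The block and its checksum as they sit on disk -- both are perfectly fine.
on_disk_block = b"perfectly good data"
stored_checksum = checksum(on_disk_block)

# The block is read into non-ECC RAM for a scrub, and one bit silently flips.
in_memory = bytearray(on_disk_block)
in_memory[0] ^= 0x01

# The scrub now sees a mismatch even though the disk is fine; in the feared
# scenario it then acts on a corrupted in-memory copy of good data.
print(checksum(bytes(in_memory)) == stored_checksum)   # -> False
```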

The main focus of this machine will be:

  • room to install more disks over time
  • ECC-RAM capable
  • not ridiculously expensive
  • low-maintenance, high reliability and

    availability (within reason, it's still

    a home and small business server)

Modding

Instead of some  uber-expensive W/C setup, the  main part of

actually building  this rig will  be in modifying  the PP689

for fitting as many HDDs  as halfway reasonable as neatly as

possible. I have not  yet decided if there  will be painting

and/or sleeving  and/or a window. A window  is unlikely, the

rest depends mostly  on how much time I'll have  in the next

few weeks (this  is not a long-term project, aim  is to have

it done way before HELIOS).

Also, since  costs for this  build should not spiral  out of

control, I will be trying to reuse as many of the scrap and spare

parts I have lying around as possible.

Teaser

More  pics  will  follow  as  parts  arrive  and  the  build

progresses, for now a shot of the case:

(click image for full res)

aw--apollo--2013-11-07--01--pp689.jpeg

That's all for now, thanks for stopping by, and so long. :)

All full throttle, what's your electricity consumption like?

Storage Topology & Cabling

Storage Topology

In case you can't read the text, the full res version should

be more easily readable.

(click image for full res)

aw--apollo--2014-04-13--storage-topology

The   idea  behind   the  storage   topology  is   based  on

@wpirobotbuilder's  post  about  reducing single  points  of

failure. Any one of the three LSI controllers can fail and I

still have all my data available.
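
Just to illustrate the idea (this is my own sketch with a made-up disk-to-controller mapping, not the actual pool config): a pool survives the loss of any single controller as long as no vdev has more of its member disks on that controller than its redundancy level can absorb.

```python
# Hypothetical layout for illustration: which controller each vdev member
# hangs off of, and how many member losses each vdev type can tolerate.
FAULT_TOLERANCE = {"mirror": 1, "raidz1": 1, "raidz2": 2}

vdevs = [
    ("raidz2", ["c0", "c0", "c1", "c1", "c2", "c2"]),  # six disks, two per HBA
    ("raidz1", ["c0", "c1", "c2"]),                    # three disks, one per HBA
]

def survives_any_single_controller_failure(vdevs) -> bool:
    controllers = {c for _, members in vdevs for c in members}
    for failed in controllers:
        for vtype, members in vdevs:
            lost = sum(1 for c in members if c == failed)
            if lost > FAULT_TOLERANCE[vtype]:
                return False   # this vdev, and thus the pool, would go offline
    return True

print(survives_any_single_controller_failure(vdevs))   # -> True for this layout
```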

You'll  see  below  that  I haven't  yet  gotten  around  to

installing the Velociraptor.

I use coloured zip ties to mark the cables that go to the

different controllers.

BLUE   = controller 0

YELLOW = controller 1

GREEN  = controller 2

Tidiness

There isn't really any space to hide the cables, so this was

rather  tricky  and  required  three attempts  until  I  was

satisfied with the result. In the  end I hid the extra cable

behind the triple  fan unit, good thing they're  38 mm fans,

which makes the space behind them just about large enough to

fit the extra cable bits.

The power cables for the disks are two cables that came with

the PSU  and onto  which I  just put  a lot  more connectors

while  taking off  the stock  connectors because  those were

neither placed  in the correct  locations nor facing  in the

right direction.

Looks harmless, right? Yeah...

(click image for full res)

aw--apollo--2014-04-13--01--cables-harml

And the disks:

(click image for full res)

aw--apollo--2014-04-13--02--disks.jpeg

OK then, first try:

(click image for full res)

aw--apollo--2014-04-13--03--all-cables.j

I soon realized that this  wasn't going to work. The problem

was that I had the disks arranged in the same way as they

will be set up in the storage pool layout, so the disks

which go into the same storage pool were also mounted below

each other. Sounds nice in theory, but if you want to

have disks from each pool distributed among the different

controllers, you'll get quite the cable mess.

(click image for full res)

aw--apollo--2014-04-13--04--cabling-roun

(click image for full res)

aw--apollo--2014-04-13--05--cabling-roun

Second Try

Next try; this time I arranged the disks so that the cables

to the controllers could be  better laid out. Since I wanted

to set up  all the cables for all the  disk slots, even ones

that will  stay empty for  now, I  had to shuffle  the disks

around when laying out the cables.

(click image for full res)

aw--apollo--2014-04-13--06--cabling-roun

(click image for full res)

aw--apollo--2014-04-13--08--cable-hydra.

(click image for full res)

aw--apollo--2014-04-13--09--cable-crossr

Better. But I still wasn't quite happy, mainly because...

(click image for full res)

aw--apollo--2014-04-13--10--cables-done.

(click image for full res)

aw--apollo--2014-04-13--11--disks-distri

(click image for full res)

aw--apollo--2014-04-13--12--cables-detai

... of this:

(click image for full res)

aw--apollo--2014-04-13--13--cables-nest.

Third Try

This time  I made sure the  cables stayed tidy on  both ends

while hiding  the mess  (which cannot  be avoided  since all

cables are the same length but lead to different end points,

obviously) behind the triple fan unit.

(click image for full res)

aw--apollo--2014-04-13--15--hiding-space

The loop of extra cable length for the top cable loom:

(click image for full res)

aw--apollo--2014-04-13--16--hiding-space

And the cable loom for controller 0, from the disk side...

(click image for full res)

aw--apollo--2014-04-13--17--cable-loom-c

and the M/B side. Much better IMHO. :)

(click image for full res)

aw--apollo--2014-04-13--18--cable-loom-c

The bottom controller had a bit more extra cable length to hide, so

that part is a bit messier.

(click image for full res)

aw--apollo--2014-04-13--19--cable-loom-c

And the middle one:

(click image for full res)

aw--apollo--2014-04-13--20--cable-loom-c

Tada! While not perfect (I'd need  longer cables for that to

make cleaner runs,  but I'm not buying more  cables just for

the sake of that for a  build that has a closed side panel),

with this iteration of my cabling I'm now rather happy:

(click image for full res)

aw--apollo--2014-04-13--21--disk-rack-ca

(click image for full res)

aw--apollo--2014-04-13--22--disk-rack-ca

(click image for full res)

aw--apollo--2014-04-13--23--disk-rack-ca

And the other side. Much better than before methinks. :)

(click image for full res)

aw--apollo--2014-04-13--24--mb-cmpt-over

(click image for full res)

aw--apollo--2014-04-13--25--mb-cmpt-clos

The SATA cable for the system SSD:

(click image for full res)

aw--apollo--2014-04-13--26--system-drive

And the controller LEDs when there's some activity:

(click image for full res)

aw--apollo--2014-04-13--27--controller-l

Now  if you'll  excuse me,  there's a  dinner waiting  to be

cooked. :)

Cheers,

-aw

That is some fantastic cable management.

Looking for a job


Still can't get over the data cable management  :wub:

 

I hope to do something like this (although not as outrageous) to store

all of my family's downloaded movies, music, work files, and my

SolidWorks files, which would easily take up 10 TB. I would just have to

read up on RAID and such.

 

Hehe, thanks, much appreciated! :)

For help with RAID (or storage in general), just stop by the storage

section, we have a few pretty knowledgeable people there eager to help.

 

All full throttle, what's your electricity consumption like?

It spikes at ~390 W during bootup, then drops to ~180 W in idle

while running normally (so, most of the time). I haven't actually

measured power consumption at full load in its current config,

but it's probably somewhere between 250 W and 300 W max.
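
For a rough sense of what that means on the bill, here's my own back-of-envelope calculation (the price per kWh is a placeholder assumption, not alpenwasser's actual rate):

```python
# Back-of-envelope: ~180 W idle, running 24/7.
idle_watts = 180
kwh_per_month = idle_watts * 24 * 30 / 1000      # = 129.6 kWh per month
assumed_price_per_kwh = 0.20                     # placeholder rate, adjust to taste
print(f"{kwh_per_month:.1f} kWh/month, about "
      f"{kwh_per_month * assumed_price_per_kwh:.2f} per month at 0.20/kWh")
```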

 

That is some fantastic cable management.

Thank you, I am indeed pretty happy with it. :)

Side note: I've edited your quotes to add spoiler tags to keep

the forum a bit more readable. ;)


