alpenwasser

APOLLO (2 CPU LGA1366 Server | InWin PP689 | 24 Disks Capacity) - by alpenwasser [COMPL. 2014-MAY-10]

Recommended Posts

Just out of curiosity why call it Apollo? Sorry if this has already been answered btw. 

He likes classical Greek/Roman mythology IIRC. Thus Zeus (Lightning/Leader god guy), Apollo (Music/Art god guy), Helios (Sun/Cattle god guy), etc.


† Christian Member †

For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled as such. A brief history of Unix and its relation to OS X by Builder.

 

 

Link to post
Share on other sites

Haha, yeah, I'm quite happy with it, and it still has more than 10 disk slots free. :D

 

 

At the moment it's 29 TB raw storage, so yeah, you're not too far off. ;)

Usable capacity when deducting parity is 17 TB (12 TB on the media pool,

4 TB on my personal data pool, and 1 TB on my dad's business pool).
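For anyone checking the math, the usable figures follow straight from the RAIDZ2 layouts (drive counts and sizes as mentioned elsewhere in this log; ZFS metadata overhead and TB/TiB rounding are ignored, so this is only a rough sketch):

```python
# Rough sanity check of raw vs. usable capacity for the two RAIDZ2 pools.
# In RAIDZ2, two drives' worth of space per vdev goes to parity.

def raidz_usable(drives: int, drive_tb: float, parity: int = 2) -> float:
    """Usable capacity of a RAIDZ vdev: data drives times drive size."""
    return (drives - parity) * drive_tb

media = raidz_usable(6, 3.0)     # six 3 TB WD Reds, RAIDZ2
personal = raidz_usable(4, 2.0)  # four 2 TB WD RE4s, RAIDZ2
print(f"media pool: {media:.0f} TB usable, personal pool: {personal:.0f} TB usable")
```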

 

Wow, that's a lot of media!

 

I love how you say it's a business server as well, but only 1/17 of it will be used for business purposes :P

 

Are you sure you're not just saying that to use high grade server components? ;)

 

Either way, awesome, awesome build :)


My Personal Rig - i9 7960X | GTX 1080 | AsRock X299 Taichi | NZXT S340

My Wife's Rig - i7 7820X | GTX 970 | AsRock X299 Taichi | Fractal Design Meshify C

Posted · Original Poster

Just out of curiosity why call it Apollo? Sorry if this has already been answered btw.

 

 

He likes classical Greek/Roman mythology IIRC. Thus Zeus (Lightning/Leader god guy), Apollo (Music/Art god guy), Helios (Sun/Cattle god guy), etc.

Pretty much, yeah. The trend originally started when the original Deus Ex came out in 2000: I named my very first PC after the AI in that game (HELIOS), and then just kind of stuck with it ever since, because yes, I do have a weakness for ancient Greek/Roman mythology. :)

 

 

Wow, that's a lot of media!

 

I love how you say it's a business server as well, but only 1/17 of it will be used for business purposes :P

 

Are you sure you're not just saying that to use high grade server components? ;)

 

Either way, awesome, awesome build :)

*whistles innocently* :rolleyes:

Seriously though, there are a few reasons why I went for proper server parts.

Firstly, while the business data does not take up much space on the machine, it is certainly the most important thing on it, along with my personal data (which includes my school files and work files, so I'd rank that of almost equal importance). Having that on a reliable machine matters, even though it is not much data.

The bulk of the data, i.e. the media files, is of course not that crucial. However, unlike the other data, I cannot afford to back it up, so having it on a machine that's hopefully pretty reliable gives me some peace of mind.

Lastly, I have come to very much appreciate IPMI, which is great for remote-controlling the machine. The only cables I have connected to it are five LAN cables and the power cable, that's it. I can do everything (including entering the BIOS) remotely from another machine, which I think is fantastic. :)
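As a taste of what that looks like in practice, here are the kinds of ipmitool commands I mean (the host address and credentials are placeholders, not my actual setup):

```shell
# Query power state and sensor readings of the server's BMC over the LAN:
ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret chassis power status
ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret sensor list

# Serial-over-LAN console -- this is how you can reach the BIOS remotely:
ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret sol activate
```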


BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


Lastly, I have come to very much appreciate IPMI, which is great for remote-controlling the machine. The only cables I have connected to it are five LAN cables and the power cable, that's it. I can do everything (including entering the BIOS) remotely from another machine, which I think is fantastic. :)

I cannot wait until my new IPMI-enabled board arrives. Stop making the anxiety even worse, you Evil Overlord! :angry:

Posted · Original Poster

I cannot wait until my new IPMI-enabled board arrives. Stop making the anxiety even worse, you Evil Overlord! :angry:

I think top-end consumer boards should also come with IPMI; it's just too awesome not to have, IMHO. I get why it's not done, but still, a man can dream. ;)




I think top-end consumer boards should also come with IPMI; it's just too awesome not to have, IMHO. I get why it's not done, but still, a man can dream. ;)

 

IPMI is great not only for logging in over the network or serial port; you can also query tons of system data via SNMP.

 

The bad thing about IPMI is that at times you need to access it via Java. That sounds innocent enough, but if you haven't experienced Java's security features yet, they can prevent you from logging in to IPMI as well. Sure, it will work just fine today, but eventually a Java update will render it useless, so make sure to set up all the additional login methods and test them now, especially if they offer SSH or a serial console. In theory your IPMI should be on a separate network, since it's a semi-backdoor, so keep that in mind; I know that may not be possible, especially at home.
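For example, the SNMP querying I mentioned can look something like this (the host and community string are placeholders, and which OIDs a given BMC actually exposes depends on the board's firmware):

```shell
# Walk the standard system subtree of the BMC's SNMP agent:
snmpwalk -v 2c -c public 10.0.0.50 SNMPv2-MIB::sysDescr

# Or fetch a single value, e.g. how long the BMC has been up:
snmpget -v 2c -c public 10.0.0.50 SNMPv2-MIB::sysUpTime.0
```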


I roll with sigs off so I have no idea what you're advertising.

 

This is NOT the signature you are looking for.

Posted · Original Poster

IPMI is great not only for logging in over the network or serial port; you can also query tons of system data via SNMP.

Yes indeed. :)

The bad thing about IPMI is that at times you need to access it via Java. That sounds innocent enough, but if you haven't experienced Java's security features yet, they can prevent you from logging in to IPMI as well. Sure, it will work just fine today, but eventually a Java update will render it useless, so make sure to set up all the additional login methods and test them now, especially if they offer SSH or a serial console. In theory your IPMI should be on a separate network, since it's a semi-backdoor, so keep that in mind; I know that may not be possible, especially at home.

I have actually run into that very issue with our printer's web interface. The last time I tried to access it, I needed to disable pretty much all Java security to be able to run the printer's interface, and it warned me that in future Java updates this will no longer be possible, thus making me unable to access our printer via the web interface.

With IPMI I at least still have Supermicro's proprietary software (which is Java-based, but should hopefully keep working long enough until I retire this server), and the CLI ipmitool (which I can't use for everything though; I have so far not been able to get into the M/B BIOS via that, although I've read it's possible). But with our printer, no such thing, and I've checked, there's no firmware update available either.



Posted · Original Poster

I concur with you, sir. Looney, AlpenWasser and a few others are in a whole other field in regards to data storage space for home use. Impressive to have such talent around, and for them to be sharing. They embody the LinusTechTips spirit.

 

Well thank you very much for the compliments buddy! One does what one can. :)

 

(PS: You're not exactly a slouch either when it comes to build skills if I may say so. ;) )




Nice job, can't wait to see more!


THOR D3SK V2: Asus Maximus VII Hero, Intel 4790K, Corsair HX1000i, ASUS R.O.G. Matrix Platinum R9 280X Crossfire, Samsung SSD 840 Pro RAID 0, Corsair Force LX 120 GB RAID 0, WD 2 TB Black Edition, Corsair Dominator Platinum 16 GB, custom watercooled

CUSTOM BUILD 2014


much storage. 


Specs: CPU: i7-4790K @ 4.5 GHz, 1.19 V | Cooler: H100i | Motherboard: MSI Z97 G55 SLI | RAM: Kingston HyperX Black 16 GB 1600 MHz | GPU: XFX R9 290X Core Edition | PSU: Corsair HX850 | Case: Fractal Design Define R4 | Storage: Force Series 3 120 GB SSD, SanDisk Ultra 256 GB SSD, 1 TB Blue drive | Keyboard: Rosewill RK9100x | Mouse: DeathAdder | Monitors: 3× 22-inch on a triple monitor mount

Posted · Original Poster

Nice job, can't wait to see more!

 

Thanks! There isn't all that much left to do on the hardware side, to be honest. There is one more primary thing I want to get done: make the thing less noisy. It's in a separate room in our apartment, but you can still hear it through the door, which is a bit annoying for a place where you live (it's not loud, but you can notice it).

The problem is, I don't just want to use smaller/slower 92 mm fans. The M/B and the SAS controllers require quite a bit of airflow over them, so I do need fans blowing into that compartment with some punch. I have however been thinking about replacing the 92 mm ones with some 120 mm ones, which would allow similar airflow for less noise (albeit the airflow would be slower and therefore might not cool the SAS controllers as well, since those are located towards the back of the compartment).

Alternatively, I could mount some acoustic dampening material into the build, but I'm not quite sure that would be sufficient.

Other than that, the main part of this going forward is on the software side of things.

 

 

much storage. 

 

such terabytes, very movies, wow. :D

 

Although there are home servers with far more storage than this

one, I will admit. Still, more space will be needed soon-ish anyway.

 

Those sata data cables are ridiculous. Kudos for putting in the effort.

 

Yeah, I'm pretty happy with the result, thanks! And indeed, 'twas a rather laborious task, if I may say so. :D




Top notch work on those SATA cables man, looks like something straight out of a high end data centre.

I agree with that, but even data centers won't (usually) make cabling that neat. It depends on your lab technicians.


I do not feel obliged to believe that the same God who has endowed us with sense, reason and intellect has intended us to forgo their use, and by some other means to give us knowledge which we can attain by them. - Galileo Galilei
Build Logs: Tophat (in progress), DNAF | Useful Links: How To: Choosing Your Storage Devices and Configuration, Case Study: RAID Tolerance to Failure, Reducing Single Points of Failure in Redundant Storage , Why Choose an SSD?, ZFS From A to Z (Eric1024), Advanced RAID: Survival Rates, Flashing LSI RAID Cards (alpenwasser), SAN and Storage Networking


I agree with that, but even data centers won't (usually) make cabling that neat. It depends on your lab technicians.

Whilst it's true that it does depend on the technician, anybody worth their salt will have been taught to make things exactly this neat. The reason for cable management that organised in a data centre is that maintenance costs money, as does downtime. Should anything go wrong or a part break, they need to be able to assess the situation and fix it as quickly as possible, which simply isn't an option with even the remotest quantity of spaghetti. Source: I've been working for the last month with some of the guys who build and maintain some of Facebook's servers (among other clients).


Finished Projects: Loramentum, VesperModerne, ExsectusAetos

Current Project: Parvum Argentum

Posted · Original Poster

I agree with that, but even data centers won't (usually) make cabling that neat. It depends on your lab technicians.

 

 

Whilst it's true that it does depend on the technician, anybody worth their salt will have been taught to make things exactly this neat. The reason for cable management that organised in a data centre is that maintenance costs money, as does downtime. Should anything go wrong or a part break, they need to be able to assess the situation and fix it as quickly as possible, which simply isn't an option with even the remotest quantity of spaghetti. Source: I've been working for the last month with some of the guys who build and maintain some of Facebook's servers (among other clients).

Without ever having worked in a datacenter, I'd estimate that it would depend on the situation. If you have a shortsighted boss who is primarily interested in getting everything done yesterday and doesn't care about future costs, it's probably going to be messier than if your superiors place importance on long-term thinking. As said though, just a guess. Regardless, I appreciate the compliments. ;)

In any case, while I don't have any spare HDDs to experiment with, I've been doing some benchmarking on my existing ZFS pools and my network, and also trying to tune the settings in my virtual machines for better storage and network performance.

I intend to make a proper post about it once I actually have enough data (not done with testing yet, let alone result analysis). Preliminary results are that write speeds on my two main pools are sufficient to saturate a gigabit connection in theory, but read speeds are not (yet, at least). Must do more testing. Also, with actual NFS file transfers, I'm currently topping out at ~80 MB/s for some reason, despite the ethernet connection being capable of ~940 Mbit/s (according to iperf tests).

So yeah, it's more or less up and running at this point, but I still want to run more tests and do some fine-tuning.
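As a quick sanity check on those numbers (pure unit conversion, nothing more):

```python
# What does the iperf result imply as a ceiling for file transfers,
# and how far below that ceiling is the observed ~80 MB/s over NFS?

def mbit_to_mbyte(mbit_per_s: float) -> float:
    """Megabits per second to megabytes per second (8 bits per byte)."""
    return mbit_per_s / 8.0

ceiling = mbit_to_mbyte(940)  # iperf result in Mbit/s
observed = 80.0               # NFS throughput in MB/s

print(f"theoretical ceiling: {ceiling:.1f} MB/s")
print(f"NFS is reaching {observed / ceiling:.0%} of it")
```

So the link should allow roughly 117 MB/s; NFS is leaving about a third of that on the table.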




Also, with actual NFS file transfers, I'm currently topping out at ~80 MB/s for some reason, despite the ethernet connection being capable of ~940 Mbit/s (according to iperf tests).

Have you enabled lz4 compression? With 12 cores you can get a sizeable performance increase.



Posted · Original Poster

Have you enabled lz4 compression? With 12 cores you can get a sizeable performance increase.

Not on the test shares, because I'm using /dev/zero as my data source for some of the tests, which would skew the results severely (and yeah, you're right, performance does increase noticeably).

But the performance issues are with NFS, not the pool itself: my media pool (six drives, RAIDZ2, WDC Red 3 TB) manages ~240 MB/s write speed for a single large file, and my other pool (four drives, WDC RE4 2 TB) about 140 MB/s (on the datasets which have compression disabled, using /dev/zero as my data source).
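For reference, the write tests are essentially of this form (the dataset path is a placeholder, and compression must be off on the target dataset or the stream of zeroes will inflate the numbers):

```shell
# Single-large-file sequential write test: stream zeroes into the pool.
# conv=fdatasync makes dd wait for the data to hit disk before reporting,
# so the MB/s figure isn't just the speed of filling the ARC/page cache.
dd if=/dev/zero of=/tank/media/bench.tmp bs=1M count=8192 conv=fdatasync

# Clean up the test file afterwards:
rm /tank/media/bench.tmp
```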

I have done some more testing and tried out CIFS on a suggestion from @MG2R. This test is without ZFS, just writing to an SSD on the target machine:

[screenshot: NFS transfer test to ZEUS]

Curiously, for some reason NFS performance drops very low for short intervals, which is why the average transfer speed is so low:

[screenshot: NFS throughput dropping during transfer]

The peaks are at full network speed; it just more or less stops doing any work for short moments during the transfer.

When I try out CIFS, I get this marvellous result:

[screenshot: CIFS transfer test]

Now, while this is all nice and wonderful, it is really weird in both MG2R's opinion and my own. Pretty much everything I've ever read says that CIFS is slower than NFS, but in this case CIFS actually pretty much saturates my network. NFS actually does so as well; it just more or less stops transferring data for short intervals, pulling average transfer speeds down significantly.

I suspect this is a config error on the NFS target machine, but in all honesty I have tried out pretty much everything I've been able to find via Google on the topic of NFS tuning, and I have not been able to get rid of this oddity with any config I've tried.

So yeah, on the plus side I seem to have found a solution that performs as desired; on the negative side, this is rather weird according to what I've read and been told so far.
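For completeness, the sort of knobs I've been experimenting with look like this; the hostnames, paths and values are illustrative examples from my Googling, not a known-good config:

```shell
# Server side (/etc/exports) -- 'async' and no_subtree_check are common
# first things to try for throughput (async trades safety for speed):
#   /tank/media  192.168.1.0/24(rw,async,no_subtree_check)

# Client side: request larger transfer sizes and skip atime updates,
# then remount for testing.
sudo mount -t nfs -o rsize=1048576,wsize=1048576,noatime \
    apollo:/tank/media /mnt/media
```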



Posted · Original Poster

Have you enabled lz4 compression? With 12 cores you can get a sizeable performance increase.

OK, did a quick benchmark of this. Target pool is my RAIDZ2 with four WDC RE4 disks.

compression=lz4:

[screenshots: transfer with lz4 compression]

compression=off:

[screenshots: transfer without compression]

So yeah, nice (but it would skew the NFS benches)! :D
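For anyone wanting to try it, flipping compression on a dataset is a one-liner (pool/dataset names here are placeholders for this setup):

```shell
# Enable lz4 on a dataset. Note this only affects newly written data;
# existing blocks stay uncompressed until rewritten.
zfs set compression=lz4 tank/personal

# After writing some real data, check what it actually achieved:
zfs get compression,compressratio tank/personal
```

lz4 also bails out quickly on incompressible data, which is why it's generally considered safe to leave on; a /dev/zero stream is the pathological best case, so the benchmark above overstates the benefit for real files.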




 

Do you have a ZIL? That should speed up NFS.

 

Also, such throughput.



Posted · Original Poster

Do you have a ZIL? That should speed up NFS

Well, according to what I've read, I do have a ZIL; it's just not on a separate device but on the pool vdevs (if I've understood correctly). Source:

The solution to this is to move the writes to something faster. This is called creating a Separate intent LOG, or SLOG. This is often incorrectly referred to as "ZIL", especially here on the FreeNAS forums. Without a dedicated SLOG device, the ZIL still exists, it is just stored on the ZFS pool vdevs. ZIL traffic on the pool vdevs substantially impacts pool performance in a negative manner, so if a large amount of sync writes are expected, a separate SLOG device of some sort is the way to go.

I've been considering getting two SSDs for this purpose for my personal pool (probably not for the media pool), since SLOG devices should usually be mirrored (failure of a SLOG device is a good way to lose data, from what I've read). If/when I do go that way, I'll definitely try it out and see how it impacts performance.

In the meantime, I need to do more testing with CIFS; I've only just begun looking into that.
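If I do get the SSDs, adding them as a mirrored SLOG should look something like this (device names are placeholders):

```shell
# Attach a mirrored log vdev to an existing pool. Mirroring matters:
# losing an unmirrored SLOG that still holds uncommitted sync writes
# can mean data loss.
zpool add tank log mirror /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2

# Verify the log vdev shows up in the pool layout:
zpool status tank
```

Worth noting: a SLOG only helps synchronous writes, which is precisely why it's often suggested for NFS (NFS clients frequently request sync semantics); async workloads won't see a difference.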




 

Without ever having worked in a datacenter, I'd estimate that it would depend on the situation.

 

 

Agreed, there are pigs everywhere. <_< No matter how hard you try, there's always someone who couldn't care less. You lift a panel or go to add a wire at the network closet, and all it takes is one freaking wire to ruin all your work... :angry: I've seen network cables literally thrown on top of other cables and whatnot, just because they wanted a connection; to hell with colour-coded standards and cabling common sections together... ugh.



