alpenwasser

APOLLO (2 CPU LGA1366 Server | InWin PP689 | 24 Disks Capacity) - by alpenwasser [COMPL. 2014-MAY-10]

Recommended Posts

 

Do you have a ZIL? That should speed up NFS

 

Also, such throughput.


I do not feel obliged to believe that the same God who has endowed us with sense, reason and intellect has intended us to forgo their use, and by some other means to give us knowledge which we can attain by them. - Galileo Galilei
Build Logs: Tophat (in progress), DNAF | Useful Links: How To: Choosing Your Storage Devices and Configuration, Case Study: RAID Tolerance to Failure, Reducing Single Points of Failure in Redundant Storage , Why Choose an SSD?, ZFS From A to Z (Eric1024), Advanced RAID: Survival Rates, Flashing LSI RAID Cards (alpenwasser), SAN and Storage Networking

Posted · Original Poster

Do you have a ZIL? That should speed up NFS

Well, according to what I've read, I do have a ZIL, but it's not on a separate device; it's on the pool vdevs (if I've understood correctly).

source

 

The solution to this is to move the writes to something faster. This is called creating a Separate intent LOG, or SLOG. This is often incorrectly referred to as "ZIL", especially here on the FreeNAS forums. Without a dedicated SLOG device, the ZIL still exists, it is just stored on the ZFS pool vdevs. ZIL traffic on the pool vdevs substantially impacts pool performance in a negative manner, so if a large amount of sync writes are expected, a separate SLOG device of some sort is the way to go.

I've been considering getting two SSDs for this purpose for my personal pool (probably not for the media pool), since SLOG devices should usually be mirrored (losing a SLOG device is a good way to lose data, from what I've read).

If/when I do go that way, I'll definitely try it out and see how it impacts performance.
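For reference, attaching a mirrored SLOG to an existing pool is a one-liner with zpool. A minimal sketch, assuming a pool named tank and placeholder device names (you'd want /dev/disk/by-id paths in practice):

```shell
# Attach two SSDs as a mirrored SLOG ("log" vdev) to the pool "tank".
# "tank", sdx and sdy are placeholders, not my actual pool/devices.
zpool add tank log mirror /dev/sdx /dev/sdy

# Confirm the mirrored log vdev shows up:
zpool status tank
```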

In the meantime, I need to do more testing with CIFS; I've only just begun looking into that.


BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


 

Without ever having worked in a datacenter, I'd estimate that it would depend on the situation.

 

 

Agreed, there are pigs everywhere <_< No matter how hard you try, there's always someone who couldn't care less. You lift a panel or go to add a wire at the network closet, and all it takes is one freaking wire to ruin all your work... :angry: I've seen network cables literally thrown on top of other cables and whatnot, just because someone wanted a connection, color-coded standards and common sections cabled together be damned... ugh.


I roll with sigs off so I have no idea what you're advertising.

 

This is NOT the signature you are looking for.

Posted · Original Poster

Storage and Networking Performance



Beware: This post will be of little interest to those who are primarily in it for the physical side of building. Instead, this update will be about the performance and software side of things. So, lots of text, lots of numbers. :D

These results are still somewhat preliminary, since I'm not yet 100% sure the hardware config will remain like this for an extended period of time (I really want to put another 12 GB of RAM in there, for example, and am considering adding some SSD goodness to my ZFS pools), nor am I necessarily done with tuning software parameters, but it should give some idea of what performance I'm currently getting.

As you may recall from my previous update, I'm running three VMs on this machine, two of which are pretty much always on (the media VM and my personal VM), and the third of which is only active when I'm pulling a backup of my dad's work machine (apollo-business).



NOTE: I know there's lots of text and stuff in my screenshots and it may be a bit difficult to read. Click on any image to get the full-res version for improved legibility. :)


The storage setup has been revised somewhat since the last update. I now have a mirrored ZFS pool in ZEUS for backing up my dad's business data (so, in total, his data is on six HDDs, including the one in his work machine). His data is pulled onto the apollo-business VM from his work machine, and then pulled onto ZEUS. The fact that neither the business VM nor ZEUS is online 24/7 (ZEUS is physically turned off most of the time) should provide some decent protection against most mishaps; the only thing I still need to implement is a proper off-site backup plan (which I will definitely do, in case of unforeseen disasters, break-ins/theft and so on).


(click image for full res)
aw--apollo--2014-04-26--01--apollo-zeus-


The Plan

For convenience's sake, I was planning on using NFS for sharing data between the server and its various clients on our network. Unfortunately, I was initially getting some rather disappointing benchmarking results, with only ~60 MB/s to ~70 MB/s transfer speeds between machines.


Tools

I'm not really a storage benchmarking expert, and at the moment I definitely don't have the time to become one, so for benchmarking my storage I've used dd for the time being. It's easy to use and is pretty much standard on every Linux install. I thought about using other storage benchmarks like Bonnie++ and FIO, and at some point I might still do that, but for the time being dd will suffice for my purposes.

For those not familiar with this: /dev/zero basically serves as a data source for lots of zeroes, and /dev/null is a sink into which you can write data without it being written to disk. So, if you want to run write benchmarks on your storage, you can grab data from /dev/zero without needing to worry about a bottleneck on the data-source side, and /dev/null is the equivalent when you wish to run read benchmarks. To demonstrate this, I did a quick test below, directly from /dev/zero into /dev/null.

Basically. It's a bit of a simplification, but I hope it's somewhat understandable. ;)


Baseline


Before doing storage benchmarks across the network, we should of course get a baseline for both the storage setup itself and the network.

A straight pipe from /dev/zero into /dev/null transfers at ~9 GB/s. Nothing unexpected, but it's a quick test to do and I was curious about this:


(click image for full res)
aw--apollo--2014-04-26--02--baseline--de


For measuring network throughput I used iperf; here's a screencap from one of my test runs. The machine it was running on was my personal VM.

Top to bottom:
- my dad's Windows 7 machine
- APOLLO host (Arch Linux)
- HELIOS (also Windows 7 for the time being, sadly)
- ZEUS (Arch Linux)
- My Laptop via WiFi (Arch Linux)
- APOLLO business VM (Arch Linux)
- APOLLO media VM

The bottom two results aren't really representative of typical performance; usually it's ~920 Mbit/s to ~940 Mbit/s, but as with any setup, outliers happen.


(click image for full res)
2014-04-21--17-33-30--iperf.png
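If you haven't used it, iperf is dead simple: one end runs as a server, the other as a client. This is iperf2-style syntax (iperf3 works the same way with slightly different flags), and the server address is a placeholder:

```shell
# On the machine being tested (server side):
iperf -s

# On the other machine, run a 10-second TCP throughput test:
iperf -c 192.168.1.10 -t 10
```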


The networking performance is where I hit my first hiccup. I failed to specify to the VM which networking driver it was supposed to use, and the default one does not exactly have stellar performance. It was an easy fix, though, and with the new settings I now get pretty much the same networking performance across all my machines (except the Windows ones, which are stuck at ~500 Mbit/s for some reason, as you can see above, but that's not hugely important to me at the moment, TBH).
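In case anyone hits the same wall: assuming a KVM/QEMU setup (a sketch with placeholder names, not my exact config), the fix amounts to giving the guest the paravirtualized virtio NIC instead of an emulated model:

```shell
# QEMU invocation fragment: virtio-net instead of an emulated NIC
# (e.g. e1000/rtl8139). Bridge and id names are placeholders.
qemu-system-x86_64 \
    -netdev bridge,id=net0,br=br0 \
    -device virtio-net-pci,netdev=net0
```

With libvirt, the equivalent is setting `<model type='virtio'/>` on the interface in the domain XML.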

This is representative of what I can get most of the time:

(click image for full res)
aw--apollo--2014-04-26--03--baseline--ne


I had a similar issue with the storage subsystem at first: the default caching parameters were not very conducive to high performance and resulted in some pretty bad numbers:

(click image for full res)
aw--apollo--2014-04-26--04--baseline--ca


Once I fixed that, though: much better, and sufficient to saturate a gigabit network connection.


(click image for full res)
aw--apollo--2014-04-26--05--baseline--ca
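As an illustration of the kind of knob involved (again assuming KVM/QEMU; the exact parameter depends on the hypervisor, and the image path here is a placeholder), the cache mode on a guest's drive definition is a typical culprit:

```shell
# QEMU drive fragment: virtio block device with the host page cache
# bypassed. Older QEMU versions defaulted to the much slower
# writethrough mode; cache=none is a common choice for server guests.
-drive file=/var/lib/images/personal.img,if=virtio,cache=none
```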



Networking Benchmark Results

Initially, I got only around 60 MB/s for NFS; after that, the next plateau was somewhere between 75 MB/s and 80 MB/s; and lastly, this is the current situation. I must say I find the results to be slightly... peculiar. Pretty much everything I've ever read says that NFS should offer better performance than CIFS, and yet, for some reason, in many cases that was not the result I got.

I'm not yet sure if I'll be going with NFS or CIFS in the end, to be honest. On one hand, CIFS does give me better performance for the most part, but I have found NFS more convenient to configure and use, and NFS's performance at this point is decent enough for most of my purposes.
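For anyone wanting to reproduce this kind of comparison, the client side amounts to plain mounts along these lines (hostname, share and mount-point names are made up for illustration):

```shell
# NFS mount:
mount -t nfs apollo:/export/personal /mnt/personal

# CIFS equivalent (prompts for the password):
mount -t cifs //apollo/personal /mnt/personal -o username=aw
```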

In general, I find the NFS results just rather weird, TBH. But they have been reproducible over different runs on several days, so for the time being I'll accept them as what I can get.


Anyway, behold the mother of all graphics! :D


(click image for full res)
aw--apollo--2014-04-26--06--network-benc


FTP

As an alternative, I've also tried FTP on the recommendation of @Whaler_99, but the results were not really very satisfying. This is just a screenshot from one test run, but it is representative of the various other test runs I did:

(click image for full res)
2014-04-19--19-59-00--ftp.png


ZFS Compression
Also, for those curious about ZFS compression (which was usually disabled in the above tests, because zeroes are very compressible and would therefore skew the benchmarks), I did a quick test to compare writing zeroes to a ZFS pool with and without compression.

This is CPU utilization without compression (the grey bars are CPU time spent waiting for I/O, not actual work the CPU is doing):

(click image for full res)
2014-04-21--19-41-25--zfs-nocompression-

And this was the write speed for that specific test run:
(click image for full res)
2014-04-21--19-45-01--zfs-nocompression-


With lz4 compression enabled, the CPU does quite a bit more work, as expected (though it still seems that you don't really need a very powerful CPU to make use of this):

(click image for full res)
2014-04-21--19-39-59--zfs-lz4-zeroes.png


And the write speed goes up almost to a gigabyte per second,
pretty neat if you ask me. :D

(click image for full res)
2014-04-21--19-40-47--zfs-lz4-zeroes-tra


Side note: ZFS's lz4 compression is allegedly smart enough not to try to compress incompressible data, such as media files which are already compressed, which should prevent such writes from being slowed down. Very nice, IMHO.



That's it for today. What's still left to do at this point is installing some sound-dampening materials (the rig is a bit on the loud side, even in its own room), and possibly upgrading to more RAM; the rest will probably stay like this for a while. If I really do upgrade to more RAM, I'll adjust the VMs accordingly and run the tests again, just to see if that really makes a difference. So far I have been unable to get better performance from my ZFS pools by allocating more RAM, or even by running benches directly on the host machine with the full 12 GB of RAM and eight cores/sixteen threads.


Cheers,
-aw



Posted · Original Poster

Sound Dampening, Final Pics


As mentioned previously, the 92 mm fans are rather noisy, but I didn't want to replace them. On the one hand, I do actually need some powerful fans to move air from the HDD compartment into the M/B compartment; on the other hand, I didn't feel like spending more money on expensive fans.

For this purpose, I ordered some AcoustiPack foam in various thicknesses (12 mm, 7 mm and 4 mm) and lined parts of the case with them. I wasn't quite sure how well they would work, as my past experiences with acoustic dampening materials weren't all that impressive, but to my surprise, they're actually pretty damn effective.

I have also put in another 12 GB of RAM. I was lucky enough to get six 2 GB sticks of the exact same RAM I already had for 70 USD (plus shipping and fees, but still a pretty good price, IMHO) from eBay. 24 GB should easily suffice for my purposes.


Lastly, I've repurposed the 2.5" drive cage from my Caselabs
SMH10; cleaner than the rather improvised mount from before.



For the time being, the build is now pretty much complete.


Cost Analysis

One  of the  original  goals  was to  not  have this  become
ridiculously expensive. Uhm, yeah, you know how these things
usually go. :rolleyes:

Total system cost:  ~5,000 USD
of which were HDDs: ~2,500 USD

My share of the total cost is ~42%; the remainder was covered by my dad, which is pretty fair, I think. In the long run, my share will probably rise, as I'll most likely be the one paying for most future storage expansions (at the moment I've paid for ~54% of the storage cost and ~31% of the remaining components).

One thing to keep in mind, though, is that some of these costs go back a while, as not all the HDDs were bought for this server; some were migrated into it from other machines. So the actual project cost was about 1,300 USD less.

Overall I'm still pretty happy with the price/performance ratio. There aren't really that many areas where I could have saved a lot of money without also taking noticeable hits in performance or features.

I could have gone with a single-socket motherboard, or a dual-socket one with fewer features (say, fewer onboard SAS/SATA ports, as I'm not using nearly all of the ones this one has due to the 2 TB disk limit), but most of the features this one has I wouldn't want to miss, TBH (the four LAN ports are very handy, and IPMI is just freaking awesome). And let's be honest: a dual-socket board just looks freaking awesome (OK, I'll concede that that's not the best argument, but still, it does!). :D

Other than that, I could have gone with some cheaper CPU coolers, as the 40 W CPUs (btw., core voltage is ~0.9 V :D) don't really require much in that area, but the rest is pretty much what I want and need at an acceptable price.


Anyway, enough blabbering:


Final Pics

So, some final  pics (I finally managed to  acquire our DSLR
for these):

(click image for full res)
aw--apollo--2014-05-10--01--acoustifoam-

(click image for full res)
aw--apollo--2014-05-10--02--acoustifoam-

(click image for full res)
aw--apollo--2014-05-10--03--outside.jpeg

(click image for full res)
aw--apollo--2014-05-10--04--open.jpeg

(click image for full res)
aw--apollo--2014-05-10--05--open.jpeg


That Caselabs drive  cage I mentioned. The top  drive is the
WDC VelociRaptor.

(click image for full res)
aw--apollo--2014-05-10--06--2.5-inch-cag


And some more cable shots, because why not.

(click image for full res)
aw--apollo--2014-05-10--07--cables.jpeg

(click image for full res)
aw--apollo--2014-05-10--08--cables.jpeg

 

Note: The power connectors look crooked in this pic because they're not all connected to a drive, so the ones floating freely in the air don't align perfectly. Once the build is filled up with drives, this will look much better. :D
 

(click image for full res)
aw--apollo--2014-05-10--09--cables.jpeg


Looks much better with all RAM slots filled IMHO. :D

(click image for full res)
aw--apollo--2014-05-10--10--cables-and-r

(click image for full res)
aw--apollo--2014-05-10--11--cables.jpeg

(click image for full res)
aw--apollo--2014-05-10--12--chipset-fan.

(click image for full res)
aw--apollo--2014-05-10--13--cpu-coolers.

(click image for full res)
aw--apollo--2014-05-10--14--ram.jpeg

(click image for full res)
aw--apollo--2014-05-10--15--back-side.jp


It's kinda funny: considering how large the M/B compartment actually is, it's pretty packed now with everything that's in there. The impression is even stronger in person than in the pics.

(click image for full res)
aw--apollo--2014-05-10--16--front-side.j



Thanks for tagging along everyone, and until next time! :)

 




I'm so glad you have sound dampening. Kinda sad the build is finished... but glad to see the final product!


Andres "Bluejay" Alejandro Montefusco - The Forums Favorite Bird!!!

Top Clock: 7.889 Ghz Cooled by: Liquid Helium   

#ChocolateRAM #OatmealFans #ScratchItHarder #WorstcardBestoverclocker #CrazySexStories #SchnitzelQuest TS3 SERVER


How much memory did this end up having?

 

Never mind, I just saw it. Nice!




Holy <bleep> hell.


Main Rig: -FX8150 -32gb Kingston HyperX BLUE -120gb Kingston HyperX SSD -1TB WD Black -ASUS R9 270 DCUII OC -Corsair 300r -Full specs on Profile


Other Devices: -One Plus One 64gb Sandstone Black -Canon T5 -Moto G -Pebble Smartwatch -Nintendo 2DS -G27 Racing Wheel


#PlugYourStuff - 720penis - 1080penis - #KilledMyWife - #LinusButtPlug - #HashtagsAreALifestyle - CAR BOUGHT: 2010 Corolla

Posted · Original Poster

I'm so glad you have sound dampening. Kinda sad the build is finished... but glad I see the final product!

 

Yeah, it's definitely necessary. Even though it's in its own room, you could still hear it through the door before, and since it's in our apartment, that's a no-no. Much better with the foam. :)

Look at the upside: now that this is done, I can finally turn my attention to HELIOS again. :D

 

You wouldn't have a video with before and after shots of the noise dampening stuff, would you?

 

I did make a vid of it spinning up before, but it's going to be very difficult to make a good comparison, as it wasn't really a controlled experiment.

I'll see what I can do. :)

 

How much memory did this end up having?

 

Never mind, I just saw it. Nice!

 

Yeah, I'm still lagging behind you by a few kilobytes. :D But this is sufficient for my purposes; I'm pretty happy with 24 GB.

 

Holy <bleep> hell.

 

Thanks! :D




Damn son, that's a crazy build. One of my favourites in a while. I wish more people did server builds, because I can't afford to do them myself ;D


Main Rig: Cpu: Intel Core i7 4790k CPU Cooler: NH_d15 GPU: Gigabyte Radeon HD7970 Windforce III MOBO: Msi Z97 Gaming 5 RAM: HyperX Fury 1600 8gb white SSD: Samsung 840 evo 250gb PSU: Corsair CX750M Case: NZXT H630 white

PC peripherals: Monitor: Samsung Syncmaster T27B550 Keyboard: Logitech G105 Mouse: Logitech G700s Headphones: Logitech G430 

 

Posted · Original Poster

Damn son, that's a crazy build. One of my favourites in a while. I wish more people did server builds, because I can't afford to do them myself ;D

 

That moment when you get called "son" by a person 12 years younger than you. :P

(No worries, I get it, figure of speech and all that; plus, it makes me feel young again.)

Anyway, on topic: yes, indeed I have very much enjoyed this build, and yes, I too have a weakness for server builds.

If you want to see some more servers, I recommend you check out our 10 TB+ topic, as well as its counterpart on HardOCP. OCN also has a nice server thread here.

 

If I ever get the money, I've got something in store for you :D

 

Less teasing, more building! :P



Posted · Original Poster

@alpenwasser

 

How much does that server weigh? :P

Ha, how could I forget, I actually weighed it!

~33 kg currently (~73 lbs).




Ha, how could I forget, I actually weighed it!

~33 kg currently (~73 lbs).

YEP YEP LIGHTWEIGHT, BABY, YEP LETS GO (Coleman)


My builds:


'Baldur' - Data Server - Build Log


'Hlin' - UTM Gateway Server - Build Log

Posted · Original Poster

YEP YEP LIGHTWEIGHT, BABY, YEP LETS GO (Coleman)

Yup, easy-peasy, one could almost call it a portable machine. :D

(Well, strictly speaking, it is portable; it just takes a bit of effort to move it.)




Yup, easy-peasy, one could almost call it a portable machine. :D

(Well, strictly speaking, it is portable; it just takes a bit of effort to move it.)

Wait until it's filled up with HDDs ;DD

 

+11 drives means +8 kg or so.

 

 

But I like your drive-holding config ;) I'm going with a comparable design, but I want to use drive trays, and since I don't want to have to remove a side panel to get access to the drives, my build will be accessible by sliding the drive cages out the front and then removing the trays to the side. But I have to admit your thread served as an inspiration for me.



Posted · Original Poster

Wait until it's filled up with HDDs ;DD

 

+11 drives means +8kg or so

 

 

Yup. :D

But I like your drive-holding config ;) I'm going with a comparable design, but I want to use drive trays, and since I don't want to have to remove a side panel to get access to the drives, my build will be accessible by sliding the drive cages out the front and then removing the trays to the side. But I have to admit your thread served as an inspiration for me.

Ah, I see the creative wheels have started spinning. :D

I considered avoiding the side panel by cutting holes/slots in it through which I could have taken out the drives directly (proper hot-swapping), but I couldn't find a PCB to make the hot-swap part work (and I don't have the capability to print my own hot-swap PCB, at least not yet ;)).

To be honest, though, it's not a big deal. After all, the side panel is only fixed with two thumbscrews, so taking it off is a matter of a few seconds (as is remounting it), which, considering I'm not going to be adding drives very often, is acceptable for my use case.

Looking forward to your build then. :)




 

 

Yup. :D

Ah, I see the creative wheels have started spinning. :D

I considered avoiding the side panel by cutting holes/slots in it through which I could have taken out the drives directly (proper hot-swapping), but I couldn't find a PCB to make the hot-swap part work (and I don't have the capability to print my own hot-swap PCB, at least not yet ;)).

To be honest, though, it's not a big deal. After all, the side panel is only fixed with two thumbscrews, so taking it off is a matter of a few seconds (as is remounting it), which, considering I'm not going to be adding drives very often, is acceptable for my use case.

Looking forward to your build then. :)

 

I had more requirements than you did. In the end I found this one:

rh-sas-01l.jpg

The advantage of that one is that it has pin-outs for activity/power LEDs. Fully hot-swappable as well... $6.

SATA+SAS, hot-swap, failover, pin-outs for LEDs, all together for a small price.



Posted · Original Poster

I had more requirements than you did. In the end I found this one:

rh-sas-01l.jpg

The advantage of that one is that it has pin-outs for activity/power LEDs. Fully hot-swappable as well... $6.

SATA+SAS, hot-swap, failover, pin-outs for LEDs, all together for a small price.

 

Nice, nice, nice! Where'd you find these? Model number, manufacturer, product name?

(Sorry for the questions, I might still have use for something like this at some point ;))




Nice, nice, nice! Where'd you find these? Model number, manufacturer, product name?
(Sorry for the questions, I might still have use for something like this at some point ;))

 

Here, I ordered 50, with the intention of selling some as well ;) It's hard to get something equal to that...




Nice, nice, nice! Where'd you find these? Model number, manufacturer, product name?

(Sorry for the questions, I might still have use for something like this at some point ;))

I felt like throwing a party when I found it, after searching for almost two weeks.

This should be featured on the WAN Show ;D



