
Tophat

post-653-0-68482900-1397787589.png

 

In the industry, a lot of projects are given codenames that derive from some theme. That theme could be anything from pirate ships to alcohol distilleries to plays on words. If I end up building more computers (likely), they'll continue whatever naming scheme I come up with here, something like Monocle and Moustache.

Rationale and Ramblings

 

This started out with a blog post here. I've been using LGA 1366 blade servers at work, and they've got a good amount of horsepower. Looking on eBay, I saw a pair of X5650s for $240 and started planning. When I found ASUS's Z8NR-D12 for sale on Newegg, I was very happy indeed (RMA support and a three-year manufacturer warranty). After finding cheap ECC RAM and a low-wattage PSU with dual EPS connectors (and confirming with the manufacturers that the parts should be supported), I decided to go for it.

 

I'm going with a tower case, since I can't find a reasonably priced PSU that will fit in a 2U case and still get fresh air. I suppose I could go with a fanless PSU and just use the forced air from the server chassis to make it work, but the ones I can find don't have dual CPU connectors (probably for good reason). Maybe a Molex-to-8-pin EPS adapter would work? The one I linked has a single 12V rail.

 

Or I could go with a 4U chassis and use the PSU I bought, but that seems like a waste since I don't think I'll need that many storage drives.

 

In the meantime, this will be my first foray into DIY servers. It will likely start out as a platform for me to experiment with FreeNAS/ZFS and ESXi, as well as to do some benchmarking. More knowledge of storage is better. Eventually this will become either a ZFS-based NAS or an ESXi host. I'm leaning towards the NAS because it'll be of more use to me in the short term. I will need an ESXi machine in the future for development on a project I will be working on within the next year, so maybe there'll be another guide like this :blink:.

 

Parts

 

CPU: Dual Intel Xeon X5650 @ 2.66 GHz. Six cores, twelve threads apiece. 95W TDP each.

Motherboard: ASUS Z8NR-D12 - There are plenty of dual-LGA 1366 motherboards out there, but this one was on sale for $200 from Newegg. I bought the very last one, which makes me sad, because now if I want another I'll have to either buy more expensive Tyan or Supermicro boards or step up to LGA 2011, where the CPUs are expensive.

Memory: Kingston 96GB DDR3 1333 ECC Memory - Bought off eBay at $180 per 24GB kit. I went single-socket first to make sure the board functioned, then bought a second kit so I could run both CPUs with 3 DIMMs each. All DIMMs are now in.

Power Supply: Seasonic S12II 520W Bronze - It's non-modular, cheap, 80+ Bronze rated, and comes with the CPU connectors I need.

Boot Drive: Adata SP600 64GB - It was on sale for $5 more than the 32GB version. Cheapest I could find. Will be used to test the server, then used as a cache in whatever server I end up running.

CPU Cooler: Intel BXSTS100C - It's a 2U cooler, and pretty beefy (meant to cool a 130W processor). It's reasonably quiet, but is definitely the loudest part of the system.

Case: Fractal Design Define XL R2 - I live in an apartment, so I will always have to hear whatever noise it is making. Additionally, my girlfriend might move in within the next few years, and I don't want someone else complaining about the noise. As it is, it's pretty silent, but it sure as hell won't be once a bunch of drives are in there. The case provides good enough cooling, enough drive space for my usage, a compatible motherboard standoff layout, and looks good enough to be out in the open (which it will be).

HDD: Seagate NAS 3TB x2 - On sale for less than the 3TB WD offering. Consumes slightly more power, but performs better on average. Almost all other features were identical. Will be used for media storage and general backups in a RAID 1 array.

SSD: Samsung 840 EVO 250GB (not yet ordered) - Has one of the best $/GB ratios for SSDs of similar capacity. No RAID here; it will likely be used in the future as an iSCSI extent for VM storage.

Rackmount chassis I considered before settling on the Define XL:

  • Norco RPC-270 - Fits the motherboard and gives me up to 8 drives (which is honestly fine for my purposes).
  • Norco RPC-470 - 4U version of the chassis with up to 13 drives (not excited about the 4U, but it is cheap).
  • Norco RPC-2008 - Same thing, but with external hot-swap bays (might be useful). Costs a lot more though.
  • Norco RPC-2212 - 12 hot-swap bays if I really must have more drives. Really expensive though.

 

I do have pictures today, but the way I want to arrange this build log forces me to copy the way alpenwasser did his log for Apollo.

 

When the final machine is complete, it will appear in this post at the bottom. For now, enjoy the (in progress) build log!

 

Update Log

 

01. 2014-APR-17: Birthday Stuff and First Parts!

02. 2014-APR-19: Missing the Postman: The arrival of the CPUs

03. 2014-APR-21: RAM Installation, First Power-On, and Software Installation

04. 2014-APR-28: Remote Access, Going Dual-CPU With 48GB of Memory, and a First Look at Power Consumption

05. 2014-MAY-02: Heart Failure, Maxed RAM, and Next Steps...

06. 2014-MAY-05: Initial Performance Testing w/ iozone

07. 2014-JUN-10: Chipset Cooling, Mounting Holes, and Final Case Selection

08. 2014-JUN-13: From the Desk to the Case: Building in the Define XL R2

09. 2014-JUN-22: Storage and Cable Management

10. 2014-JUN-23: Performance and Stress Testing

11. 2015-JAN-12: A (Hopefully) Useful Purchase

12. 2015-JAN-17: PIKE Photos, More PCI Cards, and Repurposing

 

Breakdown:

CPU: $235

Mobo: $200

RAM: 4x $180

PSU: $60

SSD: $45

Coolers: 2x $35

Noctua 60mm Fan: $7 (sale)

Define XL R2: $100 (sale)

HDD: 2x $120 (sale)

PSU extensions (24-pin and 8-pin EPS): $30

PIKE 2008 RAID Controller: $77 + $11 shipping

Intel 1GbE x1 NIC: $40

Intel X520-DA2: $200-350 (counted at the $200 low end in the total below).

 

Total Cost: $2035
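
For anyone who wants to sanity-check that total, here's a quick back-of-the-envelope sum in Python; it just re-adds the prices listed above, with the X520-DA2 counted at the $200 low end of its range.

# Sanity check of the breakdown above; prices are the ones quoted in this post.
parts = {
    "CPUs (pair)":              235,
    "Motherboard":              200,
    "RAM (4 x 24GB kits)":      4 * 180,
    "PSU":                      60,
    "SSD (boot)":               45,
    "CPU coolers":              2 * 35,
    "Noctua 60mm fan":          7,
    "Define XL R2":             100,
    "HDDs (2 x 3TB)":           2 * 120,
    "PSU extensions":           30,
    "PIKE 2008 + shipping":     77 + 11,
    "Intel 1GbE NIC":           40,
    "Intel X520-DA2 (low end)": 200,
}
print(f"Total: ${sum(parts.values())}")  # -> Total: $2035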


Birthday Stuff and First Parts!

 

Today was a good birthday. Because: parts. Also tax return, but mostly parts! I went to the post office to get them, but they were out for delivery. So I rushed home, hoping to beat the postman so I wouldn't have to wait until Friday.

 

I got two lovely brown boxes from Newegg, which contained the motherboard, PSU, CPU cooler, and boot drive:

 

post-653-0-89550000-1397783704_thumb.jpg

 

 

This cooler is only for LGA 1356 and LGA 1366 sockets, which conveniently have a (really good) backplate. It was meant to cool CPUs with a 130W TDP, so the X5650s should be relatively small fry.

 

post-653-0-42662400-1397784146_thumb.jpg

 

It comes wrapped in sturdy cardboard, and (surprisingly to me) a clear ABS plastic case.

 

post-653-0-51989900-1397784209_thumb.jpg

 

I'd seen a review of the cooler which complained of dusty thermal compound, so I did a quick check:

 

post-653-0-86927100-1397784228_thumb.jpg

 

All good here. Now it's time to strip the heatsink naked:

 

post-653-0-07150600-1397784284_thumb.jpg

 

Rawr. It's got two good-sized heat pipes that run through the full radiator. The fan is also removable via two screws on the bottom, in case I ever have good enough case airflow to go without it:

 

post-653-0-54910400-1397784344_thumb.jpg

 

 

 

The power supply isn't tremendously powerful at 520W, but it's got more than enough juice for a dual-CPU server, along with the necessary dual 8-pin EPS connectors. It's got six SATA connectors and six 4-pin Molex, so I can power up to twelve drives if I really want to. It's also got a 6-pin and a 6+2-pin PCIe connector, so I could reuse it for a gaming rig in another build, and it came with a Molex adapter to some other dual 4-pin connector (almost looks like a fan connector).

 

post-653-0-95168800-1397784756_thumb.jpg

 

Here it is, with all the connectors separated by type. From top to bottom: 24-pin, SATA, Molex, PCI-E, ATX/EPS

 

post-653-0-71277100-1397784822_thumb.jpg

 

These are the babies I need: Those EPS connectors.

 

post-653-0-38194800-1397784917_thumb.jpg

 

 

It's an SSD, not much else to be said about it. I didn't need the highest end SSD out there, but I didn't want to have to worry about my boot drive very much, so mechanical storage was ruled out. Plus it'll probably end up being used as a cache somewhere.

 

post-653-0-58415100-1397785228_thumb.jpg

 

The naked drive. I love the brushed finish.

 

post-653-0-45457400-1397785310_thumb.jpg

 

It comes with a 3.5" adapter and a laptop adapter.

 

post-653-0-10850000-1397785733_thumb.jpg

 

 

 

I originally wanted an ATX motherboard, but I couldn't find one. So I went with this one, the Z8NR-D12, which was $200 (originally $450). It supports up to 96GB of ECC memory with 8GB DIMMs, and has six SATA ports by default, plus eight additional ones with a PIKE card installed. It also came with the integrated KVM module, which was a very nice touch, as well as six SATA cables.

 

post-653-0-83391400-1397785982_thumb.jpg

 

The contents of the box:

 

post-653-0-52587400-1397786097_thumb.jpg

post-653-0-73504200-1397786101_thumb.jpg

post-653-0-33021200-1397786104_thumb.jpg

 

And the motherboard itself:

 

post-653-0-01218500-1397786133_thumb.jpg

 

post-653-0-18462700-1397786111_thumb.jpg

 

Verifying Compatibility

 

I didn't want to sit around and wait for the rest of the parts to get here, so I decided to get started plugging stuff in. I started by making sure the PSU would connect to the motherboard properly.

 

Yep.

post-653-0-63269300-1397786106_thumb.jpg

 

The iKVM was easy, got it set up on the proper header:

post-653-0-91312600-1397786108_thumb.jpg

 

I really wanted to make sure the CPU cooler mounted correctly. Looking at the sockets from above, you can see why:

post-653-0-49120500-1397786113_thumb.jpg

 

There is very little room between the sockets. I decided to do a test fit of the cooler, but first I checked the pins. You're not supposed to remove the socket protector unless you're actually installing a CPU.

 

Oops:

post-653-0-81390500-1397786115_thumb.jpg

 

I put the cooler on the socket. Didn't tighten it down, just verified that I could fit two of these coolers side-by-side.

 

I can! Yay!

post-653-0-82041600-1397786129_thumb.jpg

 

Once the CPU and Memory arrive I will be able to put the whole thing together and power it on! If everything works out, I'll buy the second RAM kit and CPU cooler and install them.

 

Those will be the subjects of the next two updates.


For starters, a happy belated birthday from me. :)

 

The downside? I don't have a case yet  :(. I was looking at the GD07 and GD08 from Silverstone, but I would have to sacrifice the drive cage to get an SSI EEB board to work with it. That's not okay, since this will likely start out as a storage machine. I can fit this system in a 2U chassis, but I need to find a non-server PSU with two EPS connectors that also pulls air through horizontally (not vertically) or has an open chassis. If someone can find one, PLEASE LET ME KNOW! I will love you forever.

 
I don't know how suitable this will be for you, but I've
actually been pretty happy with my Inwin for APOLLO. It
fits EEB motherboards (up to 12" x 13"), it's pretty well
built IMO, and it's quite a bit less expensive than the
usual server tower chassis from Supermicro et al.

The one downside is that you'd need to make a ventilation
opening either in the case top (as I did for APOLLO) or
into the sheet that separates the M/B compartment from
the PSU compartment if you wanted to use the PSU you have,
if I've understood correctly.


Maximum 3.5" drive capacity for that chassis (unmodded)
would be 13 drives, though you'd need to get another 4-drive
cage and a 5-disk cage for the 5.25" bays. Mine came with
one 4-disk hotswap cage in stock config. Also, the fans that
come with it are pretty awesome, albeit a bit noisy (well,
server fans, obviously). They're also all PWM.

 

More knowledge of storage is better.


+1
 

I will need an ESXi machine in the future for development on a project I will be working on within the next year, so maybe there'll be another guide like this :blink:.


Excellent.
 

I do have pictures today, but the way I want to arrange this build log forces me to copy the way alpenwasser did his log for Apollo.


Well, obviously I can't say no to that. :D

Also, side note: I did not get notification for this (if that was
your intention), if you want members to be notified, you need to
use the @ before the member name. (like @wpirobotbuilder).
 


Oh my, a properly formatted index table? :wub: :D 
 

Man 1366 is so old.

 

I prefer the term "classic". :D

 

But yeah, LGA1366 came out ages ago indeed.

 

I have two 1366 dual-socket systems, one with 2 x X5680 (hexacore, hyperthreaded, 3.33 GHz stock) and one with 2 x L5630 (quadcore, also hyperthreaded, 2.13 GHz stock). As long as you give it something that scales well with more threads, the more powerful of the two systems still chews through pretty much anything I throw at it with ease (though obviously in areas where single-core performance is significant it will lag behind today's CPUs), and the other machine is still easily powerful enough to run some VMs and storage stuff.
 

Plus, price/performance for mid-range LGA1366 Xeons on eBay is very good at the moment if you can manage to get a server pull part.

 

Having said that, those 12-core Xeons Intel has these days do look rather lovely... :wub:
 

This cooler is only for LGA 1356 and LGA 1366 sockets, which conveniently have a (really good) backplate. It was meant to cool CPUs with a 130W TDP, so the X5650s should be relatively small fry.

Yeah, I love that about the server boards: no annoying backplate stuff to deal with, so much easier than having a cooler with its own backplate that you need to bother with. Shame it took Intel so long to move that system to the desktop, and even then only with LGA2011.


Missing the Postman: The arrival of the CPUs

 

Got home from work yesterday, and instead of finding the CPUs I found a little pink slip:

 

post-653-0-02596000-1397923864_thumb.jpg

 

To my dismay, they had listed this morning as the pick-up time, rather than yesterday evening. So I played Banished and watched the WAN show to pass the evening.

 

This morning, I went to the post office and picked up another brown box, which contained the CPUs:

 

post-653-0-85198500-1397924097_thumb.jpg

 

 

I was pleased to see excellent packaging. The box made it intact (aside from me failing to remove the outer plastic), and the inside contained non-conductive bubble wrap padding, which also surrounded the CPU.

 

post-653-0-43781600-1397923869_thumb.jpg

 

I was further pleased to see that they had properly stored each CPU individually in a sealed ESD bag:

 

post-653-0-31710700-1397923872_thumb.jpg

 

Opening them up, I found the CPUs in good condition:

 

post-653-0-76376300-1397923874_thumb.jpg

post-653-0-11513500-1397923877_thumb.jpg

 

 

CPU Installation

 

Like pretty much all other sockets, these are keyed so the processor can go in one (and only one) way. The two circles on the socket are the keys for these CPUs:

 

post-653-0-78933100-1397923879_thumb.jpg

 

Gently now...

post-653-0-52742500-1397923893_thumb.jpg

post-653-0-04474900-1397923896_thumb.jpg

 

Putting the socket retention mechanism down:

post-653-0-50874600-1397924498_thumb.jpg

 

Now I can actually install the CPU cooler! First, the alignment phase:

post-653-0-73383200-1397924582_thumb.jpg

 

Beginning the tightening process, going in a cross-pattern:

post-653-0-65715100-1397924650_thumb.jpg

 

The whole thing securely tightened down. I didn't know this, but the mechanism actually prevents you from tightening too much.

post-653-0-33764200-1397923898_thumb.jpg

 

There she is, fully mounted and ready to go, just as soon as the RAM gets here.

 


subbed

moar updates please


RAM Installation, First Power-On, and Software Installations

 

Came home to find that the postman left me another brown box. This time it contained something special: my RAM!

 

post-653-0-26843200-1398117235_thumb.jpg

 

 

It's a 3x8GB ECC memory kit from Kingston; not much else to say.

 

Here it is in the packaging:

post-653-0-38790100-1398117258_thumb.jpg

 

post-653-0-13480700-1398117285_thumb.jpg

 

And installed in the server.

 

post-653-0-64377400-1398117310_thumb.jpg

 

 

First Power-On

 

I was very anxious today, because I was dreading some memory incompatibility. Here is the setup before the first power-on:

 

post-653-0-50234000-1398117384_thumb.jpg

 

She's loading...

 

post-653-0-37209700-1398117411_thumb.jpg

 

Unfortunately for Dr. McCoy, he's not dead, Jim!

 

post-653-0-88005200-1398117493_thumb.jpg

 

Everything was detected, all memory and the CPU. Sigh of relief.

 

I loaded the Ubuntu Server Installer and ran memtest:

 

post-653-0-69572000-1398117570_thumb.jpg

 

I got bored around 45%, so I exited it; it takes a long time to memtest this much memory. Unfortunately, I couldn't install Ubuntu Server. When I went to install it, my monitor came back with "Input Signal Out Of Range: Change Settings to 1600x900", which is the resolution my monitor is already set to. I wonder if the Ubuntu Server installer (which doesn't come with a graphical interface) just doesn't support the onboard ASPEED AST2050 chip. So I pulled out my Windows 7 installer. Looks like she works, Jim.
 

 

Unfortunately the Windows install I have is Home Premium, so I couldn't use two CPUs or more than 16GB of RAM. I just wanted to make sure it all worked.

 

post-653-0-81858900-1398117699_thumb.jpg

post-653-0-26965100-1398132580_thumb.jpg

post-653-0-90487400-1398132605_thumb.jpg

 

Everything was properly detected :)

 

 

Next I tried ESXi 5.5, which is something I might use this for later on.

 

 

It's a fairly simple and straightforward installation. You load a USB stick with the ISO image using UNetbootin (link here); you only need a 1GB flash drive.

post-653-0-69281500-1398132792_thumb.jpg

 

The main page:

post-653-0-43738200-1398132907_thumb.jpg

 

After some menus and drive selection, you end up here:

post-653-0-27277200-1398132952_thumb.jpg

 

Reboot, and you get here:

post-653-0-72381700-1398132974_thumb.jpg

 

You can set up your own IPs or use DHCP (I tried both, they both work).

post-653-0-23703400-1398133051_thumb.jpg

After restarting the network manager, you get this:

post-653-0-99655300-1398133106_thumb.jpg

Using VMware vSphere to connect to the IP specified, you get to this menu, which is where you can create VMs and deploy a virtual vCenter (if you have a template and a license).

post-653-0-38066100-1398133120_thumb.jpg

 

Next I wanted to try FreeNAS:

 

 

Installation instructions are here. You only need a 2GB disk, but they recommend a 4GB disk because not all 2GB disks have the same number of sectors, so some won't fit the image. As luck would have it, one of my 2GB drives had enough sectors, so I wrote the image successfully.
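
If you want to check up front whether a given stick is big enough, a quick script like this works from a Linux box; it's only a sketch, and the device name and image size below are placeholders rather than anything from the FreeNAS docs.

# Sketch: check whether a USB stick has enough capacity for an installer image.
# Assumes Linux, where /sys/block/<dev>/size reports 512-byte sectors.
# DEVICE and IMAGE_BYTES are placeholders, not values from the FreeNAS docs.
DEVICE = "sdb"                 # hypothetical: your USB stick
IMAGE_BYTES = 2 * 1000**3      # roughly the "2GB" image mentioned above

with open(f"/sys/block/{DEVICE}/size") as f:
    sectors = int(f.read().strip())

capacity = sectors * 512       # convert sector count to bytes
print(f"/dev/{DEVICE}: {sectors} sectors, {capacity / 1e9:.2f} GB")
print("Big enough" if capacity >= IMAGE_BYTES else "Too small; use a 4GB stick")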

 

post-653-0-71053900-1398133705_thumb.jpg

 

Looks like FreeNAS picked up all the RAM. The only thing that's failing is the startup of the CIFS service, but I'll probably be using iSCSI more often anyways. Still, I want to get it going so I can start doing more testing!

post-653-0-77379100-1398138412_thumb.jpg

 

Some Thoughts

 

Holy crap is it quiet. The cooler is barely audible with these CPUs, even though it is idling at 2000 RPM. I'm sure under load it'll ramp up, but I'm still very impressed.

 

The chipset gets pretty hot. Not too hot to touch, but close. Will have to make sure that there is adequate airflow when I get the chassis.

 

ESXi works flawlessly (I went through and made VMs and stuff), Windows works but only with Professional or higher, and the only thing I'm having trouble with on FreeNAS is starting the CIFS service. I don't want this to be a Windows box.

 

More configuration will be the subject of the next update, and possibly the addition of the second CPU and RAM. After that, I can get some storage and begin testing.


The chipset gets pretty hot. Not too hot to touch, but close. Will have to make sure that there is adequate airflow when I get the chassis.

Yeah, definitely pay attention to that, the 5500/5520 chipsets run very hot. Have you stress tested the machine yet (mprime or w/e)?


Yeah, definitely pay attention to that, the 5500/5520 chipsets run very hot. Have you stress tested the machine yet (mprime or w/e)?

I did a half-pass of memtest86 (I got bored waiting for it to finish), and prime95'd it when I had Windows installed. Under full load it gets to around 66 degrees on the hottest core, and the fan ramps up to around 3800 RPM, which is about as loud as a GPU under load.

 

Once I get a case and all the hardware, I'll do proper stress testing; I don't want to leave it stress testing without proper airflow over all the components and have it crash while I'm asleep.


Once I get a case and all the hardware, I'll do proper stress testing; I don't want to leave it stress testing without proper airflow over all the components and have it crash while I'm asleep.

Yeah, I managed to crash my machine multiple times with mprime before figuring out it was the chipset that was getting too hot. Was a rather annoying learning process, because after each emergency shutoff the machine would refuse to POST for the next 15 mins or so. Presumably that's a safety feature on the M/B, but it's a bit unnerving, especially the first few times, because I wasn't quite sure if something had actually broken...


Yeah, I managed to crash my machine multiple times with mprime before figuring out it was the chipset that was getting too hot. Was a rather annoying learning process, because after each emergency shutoff the machine would refuse to POST for the next 15 mins or so. Presumably that's a safety feature on the M/B, but it's a bit unnerving, especially the first few times, because I wasn't quite sure if something had actually broken...

I ended up putting a glass about 1/4 full of water on top of the chipset so I could do extended testing. Worked like a charm, but the water got pretty warm, and it definitely isn't a permanent solution :)


I ended up putting a glass about 1/4 full of water on top of the chipset so I could do extended testing. Worked like a charm, but the water got pretty warm, and it definitely isn't a permanent solution :)

Haha, that chipset family just seems to inspire inventiveness in people who use boards based on it. :D


 

Installation instructions are here. You only need a 2GB disk, but they recommend a 4GB disk because not all 2GB disks have the same number of sectors, so some won't fit the image. As luck would have it, one of my 2GB drives had enough sectors, so I wrote the image successfully.

IMG_1477.JPG

Looks like FreeNAS picked up all the RAM. The only thing that's failing is the startup of the CIFS service, but I'll probably be using iSCSI more often anyways. Still, I want to get it going so I can start doing more testing!

IMG_1480.JPG

To get CIFS to start correctly in FreeNAS, configure the settings both for CIFS [(GUI>Services>Control Services>Wrench Icon beside CIFS) or (GUI>Services>CIFS)] and the Global Config for the Network (GUI>Global Configuration).

The NetBIOS name in the CIFS configuration must match the hostname in Global Configuration; that mismatch is usually the issue when CIFS won't start.

Also, be sure to uncheck Local Master under the CIFS configuration settings so FreeNAS doesn't try to be elected as Master Browser of your network (unless you want it to be).

Then reboot FreeNAS and the CIFS service should start. If it doesn't, manually switch it to On from Services>Control Services>CIFS.

Awesome build log so far bro. :D Loving it. If mine had gone more smoothly, I might have done the same, but ... tis life. 


Awesome build log so far bro. :D Loving it. If mine had gone more smoothly, I might have done the same, but ... tis life. 

The Ubuntu installation had me pulling hair out, but I tried Windows and it worked, so I at least knew the hardware wasn't bad (my main concern). I'm just glad ESXi and FreeNAS work, since those are what I would want on it.


Remote Access, Going Dual-CPU With 48GB of Memory, and a First Look at Power Consumption

 

Up until now, I've been running a single socket only. Today both the second CPU cooler and the second 24 GB RAM kit arrived, so I of course installed them immediately.

 

post-653-0-66076300-1398720586_thumb.jpg

post-653-0-87907300-1398720588_thumb.jpg

 

Off topic, but one of the defining characteristics of this chipset is that it gets really hot. Until now, I'd been keeping a glass 1/4 filled with water on top of the chipset to keep it cool. Now I have that purple fan there, so it stays a lot cooler long term.

 

I got the remote KVM working last week, so I didn't have to shuffle my keyboard/mouse/monitor back and forth anymore. It works pretty well so far, maybe I'll go over it more in another post.

 

post-653-0-51756100-1398720591_thumb.jpg

 

From the remote console I can power the machine on and off, and even do proper shutdowns regardless of what OS is installed (for the most part), so I powered it on that way. Basically it just displays what a monitor would and gives me keyboard/mouse passthrough. I had to move my router onto my desk since I don't have a switch or a long enough cable to reach its previous location (perhaps a different post later on about the network setup...).
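
Side note: if the iKVM module exposes standard IPMI over LAN (ASUS's server management modules generally do, but check yours), the same power controls can be scripted instead of clicked. A rough sketch using ipmitool from another machine follows; the BMC address and credentials are placeholders.

# Sketch: remote power control over IPMI, assuming the BMC has IPMI-over-LAN
# enabled and ipmitool is installed locally. Host and credentials are placeholders.
import subprocess

BMC_HOST = "192.168.1.50"   # hypothetical BMC address
BMC_USER = "admin"          # placeholder credentials
BMC_PASS = "changeme"

def ipmi(*args):
    """Run an ipmitool command against the BMC and return its output."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(ipmi("chassis", "power", "status"))
# ipmi("chassis", "power", "soft")   # uncomment for a graceful (ACPI) shutdown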

 

 

Here is the BIOS system information menu, showing both CPUs and all memory recognized.

post-653-0-59787400-1398720593_thumb.jpg

 

And here is VMware ESXi recognizing the memory and CPUs as well.

post-653-0-37545500-1398720595_thumb.jpg

 

I also got out my Kill-A-Watt meter to see what power consumption looked like at idle, which is what it'll be doing most of the time.

 

 

post-653-0-25825200-1398720597_thumb.jpg

 

 

About as much as a high-power incandescent bulb. I'll probably toy around with power-saving features if this becomes a 24/7 system. My electric bill is around $35/month from early fall to late spring since I don't have much else drawing power (all my light bulbs are LED or CFL) and my regular computer pulls about 100W at full load (for now...). The most energy-intensive appliances are definitely the kitchen ones, so I don't anticipate a huge (relative) jump in my power bill.
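
For a rough sense of what running 24/7 at idle adds to the bill: taking this system's roughly 100W idle draw and an assumed electricity rate (the $0.12/kWh below is a placeholder, not my actual rate), the math works out to under ten dollars a month.

# Back-of-the-envelope monthly cost of idling 24/7.
# The ~100W idle figure is this system's; the rate is an assumed placeholder.
IDLE_WATTS = 100
RATE_PER_KWH = 0.12           # assumption: $/kWh, varies by utility
HOURS_PER_MONTH = 24 * 30

kwh = IDLE_WATTS / 1000 * HOURS_PER_MONTH   # 72 kWh
print(f"{kwh:.0f} kWh/month, about ${kwh * RATE_PER_KWH:.2f}/month at idle")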

 

I still am looking for a case, but it'll also depend on what the system ends up doing. I'm still considering a rackmount case, but I also might go with the GD07 or GD08 if this ends up being an ESXi system, where I don't need drives other than a boot drive. I'd just use network storage for the VMs and keep the SSD that's on here now as the boot drive. Plus I can always attempt to mod the drive cages to work with an SSI EEB case by making a few cuts here and there.

 

EDIT: I found a thread on hardforum showing that I could install this board in an Arc XL or Define XL R2 from Fractal. That's probably the route I'm going to go, since I get lots of drive support and can keep my current PSU. I'll just be sacrificing some cable management holes and not using all the screw mounts.

 

Next update will cover installing the last 48 GB of memory. Then I can start looking at storage, as I only have consumer-grade drives around my apartment.


I love the remote control features on proper server boards, really miss that on my SR-2.


Heart Failure, Maxed RAM, and Next Steps...

 

I got the RAM in the mail today, which is odd because I wasn't expecting it until Tuesday. The seller is definitely getting a 5-star rating for delivering working modules. However, I almost had heart failure for this reason:

 

 

The memory they sent me is the correct kit, but the packaging was labeled as 16GB LRDIMMs, which this board doesn't support (8GB DIMMs max). Then I looked at the sticker on the modules themselves, and it said all the right things.

post-653-0-95177200-1399066529_thumb.jpg 

 

My excitement returned after my heart rate settled a bit, and I quickly installed the modules.

 

Pro tip: If you want to get really good at building computers, build and rebuild a server over and over again. I'm much more comfortable building computers now that I've worked on this one.

 

post-653-0-32521700-1399066526_thumb.jpg

 

I booted it up, and the motherboard POSTed with all 96 GB recognized by both the BIOS and ESXi.

 

 

post-653-0-18081200-1399066536_thumb.jpg

post-653-0-00350800-1399066549_thumb.jpg

 

So that's it. That's all the parts to get the server fully outfitted.

 

What's Next?

 

The case:

 

I've been looking around and discovered that the Define XL R2 and Arc XL can fit the motherboard without modification, but I'll lose some cable management holes. Fortunately, the SATA ports sit in a part of the motherboard that lines up with a free cable management hole, so that won't be an issue. I'm thinking about going for the Define XL, since this will be running a lot and I don't have a closet I'd feel comfortable stashing it in, especially since it idles at around 100W and will draw even more once I get hard drives in there.

 

Power Supply Extensions:

 

My power supply cables won't reach everywhere in either of the mentioned cases. Which is probably a good thing, since they look like crap in the pictures I've put up. I was thinking about these. I can't go with many color schemes in this build, since the green PCBs of the motherboard and RAM would clash with most of them, so I plan on going with silver: first, because there is a lot of silver on the motherboard (heatsinks, component solder), and second, because silver goes okay with the green of the motherboard. There's nothing I can or would do about the orange RAM slots or the red/blue SATA connectors.

 

I would need a 24-pin, two 8-pins, and SATA data cables. The SATA power might reach the drives in the case, but I can always use extensions to get to the drives (not like anyone's going to see the back). If I really get into it, I'll sleeve the CPU fans as well.

 

Networking Equipment:

 

I need a switch, since my router is going to fill up fast: an older computer that will become a Steam client for in-home streaming and media browsing (which needs a wired connection), my main PC, and three ports for the server (2 LAN and 1 remote management), plus whatever else I want to add later. Since the router only has 4 ports, a switch it is. I'm looking for a managed switch (link aggregation, preferably QoS/traffic prioritization, and possibly VLAN support) with 8 to 16 ports. I'll also need more Ethernet cables, for which Staples and Best Buy are happy to charge $40 for a 25-foot length, so I'll be doing some more business with Newegg.

 

The router will handle DHCP, wireless, and internet access, while the switch manages the rest of the traffic.

 

That's it for now, next up will be the case followed by the power supply extensions. Network equipment will come in there somewhere.


@wpirobotbuilder 96GB of RAM? Holy batman


@wpirobotbuilder 96GB of RAM? Holy batman

Yep. It's useful for hosting lots of VMs or (more likely) to get excellent performance out of a ZFS-based NAS.


Yep. It's useful for hosting lots of VMs or (more likely) to get excellent performance out of a ZFS-based NAS.

Will this be kept in a tower or a rack mount case? Or just leave it on the floor :)


Haha, you know what's funny? You have more RAM capacity than some of my system SSDs have storage capacity.

*cries in corner*

 

Yep. It's useful for hosting lots of VMs or (more likely) to get excellent performance out of a ZFS-based NAS.

 

Yup. Very interested to see your results on that. :)

 

Will this be kept in a tower or a rack mount case? Or just leave it on the floor :)

Probably a Fractal Define XL or Arc XL.


Dang it, I've been missing all the fun!  :(  Well, subbed now. ;)


Initial Performance Testing w/ iozone

 

Got a volume set up on the SSD (which was wiped of ESXi). 50GB in size, with lz4 (default) compression enabled.

 

The file is attached. All numbers are in kilobits per second (divide the listed numbers by 8 for kilobytes per second).

iozone_initial.txt
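
Since the units take a little juggling, here's the conversion described above as a tiny helper; the sample value is made up, not a number from the attached file.

# Convert iozone figures as noted above: results are in kilobits/second,
# so divide by 8 for kilobytes/second (and by another 1000 for MB/s).
def kbits_to_kbytes(kbits_per_sec):
    return kbits_per_sec / 8

def kbits_to_mbytes(kbits_per_sec):
    return kbits_per_sec / 8 / 1000

sample = 2_400_000  # hypothetical figure in kilobits/s, not from the attachment
print(f"{sample} kbit/s = {kbits_to_kbytes(sample):,.0f} kB/s "
      f"= {kbits_to_mbytes(sample):,.0f} MB/s")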

