Posted April 18, 2014

TOPHAT

In the industry, a lot of projects are given codenames that derive from some subject. That subject could be anything from pirate ships to alcohol distilleries to plays on words. If I end up building more computers (likely), they'll continue some sort of naming scheme I come up with. Like Monocle and Moustache.

Rationale and Ramblings

This started out with a blog post here. I've been using LGA 1366 blade servers at work, and they've got a good amount of horsepower. Looking on eBay I saw a pair of X5650s for $240 and started planning. When I found ASUS's Z8NR-D12 for sale on Newegg I was very happy indeed (RMA support and a three-year manufacturer warranty). After finding cheap ECC RAM and a low-wattage PSU with dual EPS connectors (and confirming with the manufacturers that the parts should be supported), I decided to go for it.

I'm going with a tower case, since I can't find a PSU that isn't horribly expensive that will fit in a 2U case and still have access to fresh air. I suppose I could go with a fanless PSU and rely on the forced air from the server chassis, but the ones I can find don't have multiple CPU connectors on them (probably for good reason). Maybe a Molex to 8-pin EPS adapter would work? The one I linked has a single 12V rail. Or I could go with a 4U chassis and use the PSU I bought, but that seems like a waste since I don't think I'll need so many storage drives.

In the meantime, this will be my first foray into DIY servers. It will likely start out as a platform for me to experiment with FreeNAS/ZFS and ESXi, as well as to do some benchmarking. More knowledge of storage is better. Eventually this will become either a ZFS-based NAS or an ESXi host. I'm leaning towards the NAS because it'll be of more use to me in the short term. I will need an ESXi machine in the future for development on a project I'll be working on within the next year, so maybe there'll be another guide like this :blink:.
Parts

CPU: Dual Intel X5650 @ 2.66 GHz - Six cores, twelve threads apiece. 95W.
Motherboard: ASUS Z8NR-D12 - There are plenty of dual-1366 motherboards out there, but this one was on sale for $200 from Newegg. I bought the very last one, which makes me sad because now I'd have to either buy more expensive Tyan or Supermicro boards or step up to LGA 2011, where CPUs are expensive.
Memory: Kingston 96GB DDR3 1333 ECC - Bought off eBay at $180 per 24GB kit. I went single-socket first to make sure the board functioned, then bought a second kit so I could run both CPUs with 3 DIMMs each. All DIMMs are now in.
Power Supply: Seasonic S12II 520W Bronze - It's non-modular, cheap, 80+ Bronze rated, and comes with the CPU connectors I need.
Boot Drive: ADATA SP600 64GB - It was on sale for $5 more than the 32GB version, the cheapest I could find. Will be used to test the server, then used as a cache in whatever server I end up running.
CPU Cooler: Intel BXSTS100C - It's a 2U cooler, and pretty beefy (meant to cool a 130W processor). It's reasonably quiet, but is definitely the loudest part of the system.
Case: Fractal Design Define XL R2 - I live in an apartment, so I will always have to hear whatever noise it's making. Additionally, my girlfriend might move in within the next few years, and I don't want someone else complaining about the noise either. As it is, it's pretty silent, but it sure as hell won't be once a bunch of drives are in there. The case provides good enough cooling, enough drive space for my usage, a compatible motherboard standoff layout, and looks good enough to be out in the open (which it will be).
HDD: Seagate NAS 3TB x2 - On sale for less than the 3TB WD offering. Consumes slightly more power, but performs better on average; almost all other features were identical. Will be used for media storage and general backups in a RAID 1 array.
SSD: Samsung 840 EVO 250GB (not yet ordered) - Has basically one of the best $/GB ratios for SSDs of similar capacity. No RAID here; will likely be used in the future as an iSCSI extent for VM storage.

Case candidates:
Norco RPC-270 - Fits the motherboard and gives me up to 8 drives (which is honestly fine for my purposes).
Norco RPC-470 - 4U version of the chassis with up to 13 drives (not excited about the 4U, but it is cheap).
Norco RPC-2008 - Same thing, but with external hot-swap bays (might be useful). Costs a lot more though.
Norco RPC-2212 - 12 hot-swap bays if I really must have more drives. Really expensive though.

I do have pictures today, but the way I want to arrange this build log forces me to copy the way alpenwasser did his log for Apollo. When the final machine is complete, it will appear at the bottom of this post. For now, enjoy the (in progress) build log!

Update Log
01. 2014-APR-17: Birthday Stuff and First Parts!
02. 2014-APR-19: Missing the Postman: The Arrival of the CPUs
03. 2014-APR-21: RAM Installation, First Power-On, and Software Installation
04. 2014-APR-28: Remote Access, Going Dual-CPU With 48GB of Memory, and a First Look at Power Consumption
05. 2014-MAY-02: Heart Failure, Maxed RAM, and Next Steps...
06. 2014-MAY-05: Initial Performance Testing w/ iozone
07. 2014-JUN-10: Chipset Cooling, Mounting Holes, and Final Case Selection
08. 2014-JUN-13: From the Desk to the Case: Building in the Define XL R2
09. 2014-JUN-22: Storage and Cable Management
10. 2014-JUN-23: Performance and Stress Testing
11. 2015-JAN-12: A (Hopefully) Useful Purchase
12. 2015-JAN-17: PIKE Photos, More PCI Cards, and Repurposing

Breakdown:
CPU: $235
Mobo: $200
RAM: 4x $180
PSU: $60
SSD: $45
Coolers: 2x $35
Noctua 60mm Fan: $7 (sale)
Define XL R2: $100 (sale)
HDD: 2x $120 (sale)
PSU extensions (24-pin and 8-pin EPS): $30
PIKE 2008 RAID Controller: $77 + $11 shipping
Intel 1GbE x1 NIC: $40
Intel x520-DA2: $200-350
Total Cost: $2035

I do not feel obliged to believe that the same God who has endowed us with sense, reason and intellect has intended us to forgo their use, and by some other means to give us knowledge which we can attain by them. - Galileo Galilei

Build Logs: Tophat (in progress), DNAF | Useful Links: How To: Choosing Your Storage Devices and Configuration, Case Study: RAID Tolerance to Failure, Reducing Single Points of Failure in Redundant Storage, Why Choose an SSD?, ZFS From A to Z (Eric1024), Advanced RAID: Survival Rates, Flashing LSI RAID Cards (alpenwasser), SAN and Storage Networking
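As a sanity check on the breakdown above, a quick tally in Python (taking the x520-DA2 at the low end of its $200-350 range):

```python
# Tally of the parts breakdown; the x520-DA2 is taken at its
# low-end price of $200 (the listed range is $200-350).
parts = {
    "CPUs (pair)": 235,
    "Motherboard": 200,
    "RAM (4 kits)": 4 * 180,
    "PSU": 60,
    "SSD": 45,
    "Coolers (2)": 2 * 35,
    "Noctua 60mm fan": 7,
    "Define XL R2": 100,
    "HDDs (2)": 2 * 120,
    "PSU extensions": 30,
    "PIKE 2008 + shipping": 77 + 11,
    "Intel 1GbE NIC": 40,
    "Intel x520-DA2 (low end)": 200,
}
total = sum(parts.values())
print(total)  # 2035
```

So the $2035 figure checks out as long as the NIC comes in at the bottom of its price range.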
Posted April 18, 2014

Man, 1366 is so old.

My Reviews: Sennheiser HD 518, Mayflower Electronics ODAC/O2 Review | My Tutorials: How To Play Crysis on Linux in DirectX 10 with DXVK, Splinter Cell Conviction General Protection Fault Fix
Posted April 18, 2014 Author

Birthday Stuff and First Parts!

Today was a good birthday. Because: parts. Also tax return, but mostly parts! I went to the post office to get them, but they were out for delivery. So I rushed home, hoping to beat the postman so I wouldn't have to wait until Friday. I got two lovely brown boxes from Newegg, which contained the motherboard, PSU, CPU cooler, and boot drive:

This cooler is only for LGA 1356 and LGA 1366 sockets, which conveniently have a (really good) backplate. It's meant to cool CPUs with a 130W TDP, so the X5650s should be relatively small fry. It comes wrapped in sturdy cardboard and (surprisingly to me) a clear ABS plastic case. I'd seen a review of the cooler which complained of dusty thermal compound, so I did a quick check: all good here. Now it's time to strip the heatsink naked: Rawr. It's got two good-sized heat pipes that run through the full radiator. The fan is also removable via two screws on the bottom, if I ever have good enough case airflow:

The power supply isn't tremendously powerful at 520W, but it's got more than enough juice for a dual-CPU server, with the necessary dual 8-pin EPS connectors. It's got six SATA connectors and six 4-pin Molex connectors, so I can have up to twelve drives if I really want to. It also has a 6-pin and a 6+2-pin PCI-E connector, so I could reuse it for a gaming rig in another build, and it came with a Molex adapter to some unfamiliar dual 4-pin connector (almost looks like a fan connector). Here it is, with all the connectors separated by type. From top to bottom: 24-pin, SATA, Molex, PCI-E, ATX/EPS. These are the babies I need: those EPS connectors.

It's an SSD, not much else to be said about it. I didn't need the highest-end SSD out there, but I didn't want to have to worry about my boot drive very much, so mechanical storage was ruled out. Plus it'll probably end up being used as a cache somewhere. The naked drive. I love the brushed finish.
It comes with a 3.5" adapter and a laptop adapter.

I originally wanted an ATX motherboard, but I couldn't find one. So I went with the Z8NR-D12, which was $200 ($450 originally). It supports up to 96GB of ECC memory with 8GB DIMMs, and has six SATA ports by default, plus eight additional ones with a PIKE card installed. It also came with integrated KVM, which was a very nice touch, as well as 6 SATA cables. The contents of the box: And the motherboard itself:

Verifying Compatibility

I didn't want to sit around and wait for the rest of the parts to get here, so I decided to get started plugging stuff in. I started by making sure the PSU would connect to the motherboard properly. Yep. The iKVM was easy; got it set up on the proper header: I really wanted to make sure the CPU cooler mounted correctly. Looking at the sockets from above, you can see why: there is very little room between them. I decided to do a test fit of the cooler, but first I checked the pins. You aren't supposed to remove the socket protector unless you're installing a CPU. Oops: I put the cooler on the socket. Didn't tighten it down, just verified that I could fit two of these coolers side by side. I can! Yay!

Once the CPU and memory arrive I will be able to put the whole thing together and power it on! If everything works out, I'll buy the second RAM kit and CPU cooler and install them. Those will be the subjects of the next two updates.
Posted April 18, 2014

Looks good! Subbed.

PRODIGY -- SMALL TORRENT -- K'NEX SERVER -- NOCTUA PSU | Community Standards -- Moderators
Posted April 18, 2014

For starters, a happy belated birthday from me.

"The downside? I don't have a case yet. I was looking at the GD07 and GD08 from Silverstone, but I would have to sacrifice the drive cage to get an SSI EEB board to work with it. That's not okay, since this will likely start out as a storage machine. I can fit this system in a 2U chassis, but I need to find a non-server PSU with two EPS connectors that also pulls air through horizontally (not vertically) or has an open chassis. If someone can find one, PLEASE LET ME KNOW! I will love you forever."

I don't know how suitable this will be for you, but I've actually been pretty happy with my In Win for APOLLO. It fits EEB motherboards (up to 12" x 13"), it's pretty well built IMO, and it's quite a bit less expensive than the usual server tower chassis from Supermicro et al. The one downside is that you'd need to make a ventilation opening either in the case top (as I did for APOLLO) or into the sheet that separates the M/B compartment from the PSU compartment if you wanted to use the PSU you have, if I've understood correctly. Maximum 3.5" drive capacity for that chassis (unmodded) would be 13 drives, though you'd need to get another 4-drive cage and a 5-disk cage for the 5.25" bays. Mine came with one 4-disk hot-swap cage in stock config. Also, the fans that come with it are pretty awesome, albeit a bit noisy (well, server fans, obviously). They're also all PWM.

"More knowledge of storage is better." +1

"I will need an ESXi machine in the future for development on a project I will be working on within the next year, so maybe there'll be another guide like this :blink:." Excellent.

"I do have pictures today, but the way I want to arrange this build log forces me to copy the way alpenwasser did his log for Apollo." Well, obviously I can't say no to that.

Also, side note: I did not get a notification for this (if that was your intention); if you want members to be notified, you need to use the @ before the member name (like @wpirobotbuilder).

"Update Log 01. 2014-APR-17: Birthday Stuff and First Parts!" Oh my, a properly formatted index table? :wub:

"Man 1366 is so old." I prefer the term "classic". But yeah, LGA 1366 came out ages ago indeed. I have two 1366 dual-socket systems, one with 2x X5680 (hexacore, hyperthreaded, 3.33 GHz stock) and one with 2x L5630 (quadcore, also hyperthreaded, 2.13 GHz stock). As long as you give it something that scales well with more threads, the more powerful of the two systems still chews through pretty much anything I throw at it with ease (though obviously in areas where single-core performance is significant it will lag behind today's CPUs), and the other machine is still easily powerful enough to run some VMs and storage stuff. Plus, price/performance for mid-range LGA 1366 Xeons on eBay is very good at the moment if you can manage to get a server pull part. Having said that, those 12-core Xeons Intel has these days do look rather lovely... :wub:

"This cooler is only for LGA 1356 and LGA 1366 sockets, which conveniently have a (really good) backplate. It was meant to cool CPUs with a 130W TDP, so the x5650's should be relatively small fry." Yeah, I love that about the server boards: no annoying backplate stuff to deal with, so much easier than having a cooler with its own backplate that you need to bother with. Shame it took Intel so long to move that system to the desktop, and only with LGA 2011.

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10 | OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial | FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic
Posted April 19, 2014 Author

Missing the Postman: The Arrival of the CPUs

Got home from work yesterday, and instead of finding the CPUs I found a little pink slip: to my dismay, it listed this morning as the pick-up time, rather than yesterday evening. So I played Banished and watched the WAN Show to pass the evening.

This morning, I went to the post office and picked up another brown box, which contained the CPUs: I was pleased to see excellent packaging. The box made it intact (aside from me failing to remove the outer plastic cleanly), and the inside contained non-conductive bubble wrap padding, which also surrounded the CPUs. I was further pleased to see that each CPU was properly stored in its own sealed ESD bag: Opening them, I found the CPUs in good condition.

CPU Installation

Like pretty much all other sockets, these are keyed so the processor can go in one (and only one) way. The two circles on the socket are the keys for these CPUs: Gently now... Putting the socket retention mechanism down: Now I can actually install the CPU cooler! First, the alignment phase: Beginning the tightening process, going in a cross pattern: The whole thing securely tightened down. I didn't know this, but the mechanism actually prevents you from tightening too much. There she is, fully mounted and ready to go, just as soon as the RAM gets here.
Posted April 20, 2014

subbed moar updates please

HTTP 404 = Server cannot be reached please try again later, try refreshing (F5) the page or re-enable your wireless
Posted April 22, 2014 Author

RAM Installation, First Power-On, and Software Installations

Came home to find that the postman left me another brown box. This time it contained something special: my RAM! It's a 3x8GB ECC memory kit from Kingston, not much else to say. Here it is in the packaging: And installed in the server.

First Power-On

I was very anxious today, because I was dreading some memory incompatibility. Here is the setup before the first power-on: She's loading... Unfortunately for Dr. McCoy, he's not dead, Jim! Everything was detected, all memory and the CPU. Sigh of relief. I loaded the Ubuntu Server installer and ran memtest: I got bored around 45%, so I exited it. It takes a long time to memtest this much memory.

Unfortunately, I couldn't install Ubuntu Server. When I went to install it, my monitor came back with "Input Signal Out Of Range: Change Settings to 1600x900", which is the resolution my monitor is already set to. I wonder if the Ubuntu Server installer (which doesn't come with a graphical interface) just doesn't support the onboard ASpeed AST2050 chip. So I pulled out my Windows 7 installer. Looks like she works, Jim. Unfortunately the Windows install I have is Home Premium, so I couldn't use two CPUs or more than 16GB of RAM. I just wanted to make sure it all worked. Everything was properly detected.

Next I tried ESXi 5.5, which is something I might use this machine for later on. It's a fairly simple and straightforward installation: you load up a USB stick with the ISO image using UNetbootin (link here); you only need a 1GB flash drive. The main page: After some menus and drive selection, you end up here: Reboot, and you get here: You can set up your own IPs or use DHCP (I tried both, they both work). After restarting the network manager, you get this: Using VMware vSphere to connect to the IP specified, you get to this menu, where you can create VMs and deploy a virtual vCenter (if you have a template and a license).
Next I wanted to try FreeNAS. Installation instructions are here. You only need a 2GB disk, but they recommend a 4GB disk because not all 2GB disks have the same number of sectors, so some of them won't fit the image. As luck would have it, one of my 2GB drives had enough sectors, so I burned the disk successfully. Looks like FreeNAS picked up all the RAM. The only thing that's failing is the startup of the CIFS service, but I'll probably be using iSCSI more often anyways. Still, I want to get it going so I can start doing more testing!

Some Thoughts

Holy crap is it quiet. The cooler is barely audible with these CPUs, even though it idles at 2000 RPM. I'm sure it'll ramp up under load, but I'm still very impressed.

The chipset gets pretty hot. Not too hot to touch, but close. Will have to make sure that there is adequate airflow when I get the chassis.

ESXi works flawlessly (I went through and made VMs and stuff), Windows works but only with Professional or higher, and the only thing I'm having trouble with on FreeNAS is starting the CIFS service. I don't want this to be a Windows box.

More configuration will be the subject of the next update, and possibly the addition of the second CPU and RAM. After that, I can get some storage and begin testing.
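The sector-count gotcha is easy to sanity-check before burning an image. A minimal sketch in Python; the sector counts below are made up for illustration (on a real system you'd read the device size from the OS), but they show how two "2GB" sticks can differ:

```python
SECTOR_SIZE = 512  # bytes; USB sticks typically report 512-byte logical sectors

def image_fits(image_bytes, drive_sectors, sector_size=SECTOR_SIZE):
    """True if an installer image of image_bytes fits on the drive."""
    return drive_sectors * sector_size >= image_bytes

# Hypothetical numbers: a ~2 GB installer image and two "2GB" sticks
# whose firmware reports slightly different sector counts.
installer_image = 2 * 10**9  # illustrative image size, not the real FreeNAS image
stingy_stick = 3_900_000     # ~1.86 GiB worth of sectors -> image won't fit
generous_stick = 3_932_160   # a roomier "2GB" stick -> image fits

print(image_fits(installer_image, stingy_stick))    # False
print(image_fits(installer_image, generous_stick))  # True
```

This is exactly why the docs recommend a 4GB stick: it sidesteps the question entirely.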
Posted April 22, 2014

"The chipset gets pretty hot. Not too hot to touch, but close. Will have to make sure that there is adequate airflow when I get the chassis." Yeah, definitely pay attention to that, the 5500/5520 chipsets run very hot. Have you stress tested the machine yet (mprime or w/e)?
Posted April 22, 2014 Author

"Yeah, definitely pay attention to that, the 5500/5520 chipsets run very hot. Have you stress tested the machine yet (mprime or w/e)?"

I did a half-pass of memtest86 (I got bored waiting for it to finish), and ran Prime95 on it when I had Windows installed. Under full load it gets to around 66 degrees on the hottest core, and the fan ramps up to around 3800 RPM, which is about as loud as a GPU under load. Once I get a case and all the hardware I'll do proper stress testing; I don't want to leave it stress testing without proper airflow over all the components and have a crash while I'm asleep.
Posted April 22, 2014

"Once I get a case and all the hardware I'll do proper stress testing, I don't want to leave it to stress test without proper airflow over all the components and have a crash while I'm asleep." Yeah, I managed to crash my machine multiple times with mprime before figuring out it was the chipset that was getting too hot. Was a rather annoying learning process because after each emergency shutoff the machine would refuse to POST for the next 15 mins or so. Presumably that's a safety feature on the M/B, but it's a bit unnerving, especially the first few times because I wasn't quite sure if something had actually broken...
Posted April 22, 2014 Author

"Yeah, I managed to crash my machine multiple times with mprime before figuring out it was the chipset that was getting too hot. Was a rather annoying learning process because after each emergency shutoff the machine would refuse to POST for the next 15 mins or so. Presumably that's a safety feature on the M/B, but it's a bit unnerving, especially the first few times because I wasn't quite sure if something had actually broken..."

I ended up putting a glass about 1/4 full of water on top of the chipset so I could do extended testing. Worked like a charm, but the water got pretty warm, and it definitely isn't a permanent solution.
Posted April 22, 2014

"I ended up putting a glass on top of the chipset about 1/4 full of water so I could do extended testing. Worked like a charm, but the water got pretty warm, and definitely isn't a permanent solution." Haha, that chipset family just seems to inspire inventiveness in people who use boards based on it.
Posted April 22, 2014

"Installation instructions are here. You only need a 2GB disk, but they recommend a 4GB disk because not all 2GB disks have the same number of sectors, so they won't work. As luck would have it, one of my 2GB drives had enough sectors, so I burned the disk successfully. Looks like FreeNAS picked up all the RAM. The only thing that's failing is the startup of the CIFS service, but I'll probably be using iSCSI more often anyways. Still, I want to get it going so I can start doing more testing!"

To get CIFS to start correctly in FreeNAS, configure the settings both for CIFS [(GUI > Services > Control Services > wrench icon beside CIFS) or (GUI > Services > CIFS)] and the Global Configuration for the network (GUI > Global Configuration). The NetBIOS name in the CIFS configuration must equal the Hostname in Global Configuration; this is usually the issue when CIFS won't start after a fresh install. Also, be sure to uncheck Local Master under the CIFS configuration settings so FreeNAS doesn't try to get elected as Master Browser of your network (unless you want it to be). Then reboot FreeNAS and the CIFS service should start. If it doesn't, manually switch it to On from Services > Control Services > CIFS.

Awesome build log so far bro. Loving it. If mine had gone more smoothly, I might have done the same, but... tis life.

† Christian Member † For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled such. A brief history of Unix and its relation to OS X by Builder.
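For the curious, the two settings above map to ordinary Samba options in the smb.conf that FreeNAS generates under the hood. A hedged sketch (the values are illustrative; the point is that the NetBIOS name must match whatever hostname you set under Global Configuration):

```ini
[global]
    ; must match the Hostname under Network > Global Configuration
    netbios name = FREENAS
    ; keep FreeNAS out of master browser elections
    local master = no
```

You normally don't edit this file directly on FreeNAS (the GUI regenerates it), but it's handy for understanding what the checkboxes actually do.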
Posted April 22, 2014 Author

"Awesome build log so far bro. Loving it. If mine had gone more smoothly, I might have done the same, but ... tis life."

The Ubuntu installation had me pulling my hair out, but I tried Windows and it worked, so I at least knew the hardware wasn't bad (my main concern). I'm just glad ESXi and FreeNAS work, since those are what I would want on it.
Posted April 28, 2014 Author

Remote Access, Going Dual-CPU With 48GB of Memory, and a First Look at Power Consumption

Up until now, I've been running a single socket only. Today both the second CPU cooler and the second 24GB RAM kit arrived, so of course I installed them immediately.

Off topic, but one of the defining characteristics of this chipset is that it gets really hot. Until now, I'd been keeping a glass 1/4 filled with water on top of the chipset to keep it cool. Now I have that purple fan there, so it stays a lot cooler long term.

I got the remote KVM working last week, so I didn't have to shuffle my keyboard/mouse/monitor back and forth anymore. It works pretty well so far; maybe I'll go over it more in another post. From the remote console I can power on/off, and even do proper shutdowns regardless of what OS is installed (for the most part). So I powered it on that way. Basically it just displays what a monitor would and gives me keyboard/mouse passthrough. I had to move my router onto my desk since I don't have a switch or a long enough cable to reach the previous location (perhaps a different post later on about network setup...).

Here is the BIOS system information menu, showing both CPUs and all memory recognized. And here is VMware ESXi recognizing the memory and CPUs as well.

I also got out my Kill-A-Watt meter to see what power consumption looked like at idle, which is what it'll be doing most of the time. About as much as a high-power incandescent bulb. I'll probably toy around with power-saving features if this is to become a 24/7 system. My electric bill is around $35/month from early fall to late spring since I don't have much drawing power (all my lightbulbs are LED or CFL) and my regular computer pulls about 100W at full load (for now...). The most energy-intensive appliances are definitely the kitchen ones, so I don't anticipate a huge (relative) jump in my power bill.
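To put that idle number in perspective, a back-of-the-envelope estimate of what ~100W of constant draw costs per month (the $0.12/kWh rate is my assumption; substitute your own utility rate):

```python
def monthly_cost(watts, rate_per_kwh=0.12, hours=24 * 30):
    """Rough monthly electricity cost in dollars for a device running 24/7."""
    kwh = watts * hours / 1000  # watt-hours -> kilowatt-hours
    return kwh * rate_per_kwh

# ~100 W at idle, around the clock: 72 kWh/month
print(round(monthly_cost(100), 2))  # 8.64
```

Call it $8-9/month at that rate, so noticeable on a $35 bill, but nothing dramatic.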
I'm still looking for a case, but it'll also depend on what the system ends up doing. I'm still considering a rackmount case, but I also might go with the GD07 or GD08 if this ends up being an ESXi system, where I don't need drives other than a boot drive. I'd just use network storage for the VMs and keep the SSD that's in here now as the boot drive. Plus I can always attempt to mod the drive cages to make an SSI EEB board fit by making a few cuts here and there.

EDIT: I found a post on hardforum showing that I could install this board in an Arc XL or Define XL R2 from Fractal. That's probably the route I'm going to go, since I get lots of drive support and can keep my current PSU. I'll just be sacrificing some cable management holes and not using all the screw mounts.

Next update will be on the installation of the last 48GB of memory. Then I can start looking at storage, as I only have consumer-grade drives around my apartment.
Posted April 28, 2014

I love the remote control features on proper server boards, really miss that on my SR-2.

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10 | OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial | FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic
Posted April 28, 2014

Looks ballin'!

"I love the remote control features on proper server boards, really miss that on my SR-2."

One of the reasons I can't wait for my new server board to arrive.

PRODIGY -- SMALL TORRENT -- K'NEX SERVER -- NOCTUA PSU | Community Standards -- Moderators
Posted May 2, 2014 Author

Heart Failure, Maxed RAM, and Next Steps...

I got the RAM in the mail today, which was a surprise since I wasn't expecting it until Tuesday. The seller is definitely getting a 5-star rating for delivering working modules. However, I almost had heart failure first: the kit was labeled as 16GB LRDIMMs, which this board doesn't support (8GB modules max). But then I looked at the sticker on the modules themselves, and it said all the right things — the correct kit after all. My excitement returned once my heart rate settled a bit, and I quickly installed the modules.

Pro tip: if you want to get really good at building computers, build and rebuild a server over and over again. I'm much more comfortable building computers now that I've worked on this one.

I booted it up, and the motherboard POSTed with all 96 GB recognized by both the BIOS and ESXi. So that's it — those are all the parts to get the server fully outfitted.

What's Next?

The case: I've been looking around and discovered that the Define XL R2 and Arc XL can fit the motherboard without modification, though I'll lose some cable management holes. Fortunately, the drives sit near a part of the motherboard with a free cable management hole, so that won't be an issue. I'm leaning towards the Define XL, since this machine will be running a lot and I don't have a closet I'd feel comfortable stashing it in, especially since it idles at around 100W and will draw even more once I get hard drives in there.

Power supply extensions: my power supply cables won't reach everywhere in either of the mentioned cases. Which is probably a good thing, since they look like crap in the pictures I've put up. I was thinking about these. I can't go with many color schemes in this build, since the green PCB of the motherboard and RAM would make them look terrible, so I plan on going with silver.
First, because there's a lot of silver on the motherboard (the heatsinks, all the component solder), and second because silver goes okay with the green of the motherboard. There's nothing I can or would do about the orange RAM slots or the red/blue SATA connectors. I'd need a 24-pin, two 8-pins, and SATA data cables. The SATA power might reach the drives in the case, but I can always use extensions to get to them (it's not like anyone's going to see the back). If I really get into it, I'll sleeve the CPU fans as well.

Networking equipment: I need a switch regardless, since my router is going to fill up fast. I have an older computer that will become a Steam client for in-home streaming and media browsing, which will need a hardline connection, plus my main PC, plus three connections for the server (2 LAN and 1 remote management), plus whatever else I want to add later. Since the router only has 4 ports, I need a switch. I'm looking for a managed switch (link aggregation, preferably QoS/traffic prioritization, and possibly VLAN support) with between 8 and 16 ports. I'll also need more Ethernet cables, for which Staples and Best Buy are happy to charge $40 for a 25-foot length, so I'll be doing some more business with Newegg. The router will handle DHCP, wireless, and internet access, while the switch manages the rest of the traffic.

That's it for now; next up will be the case, followed by the power supply extensions. Network equipment will come in there somewhere.
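To justify the switch purchase, here's the port tally spelled out (nothing new here — the device list is just what's described above):

```python
# Tally of the wired connections planned above, to show why the 4-port
# router won't cut it. Device list comes straight from the post.
connections = {
    "main PC": 1,
    "older PC (Steam in-home streaming client)": 1,
    "server LAN ports": 2,
    "server remote management": 1,
}
needed = sum(connections.values())
print(f"{needed} ports needed now, plus an uplink to the 4-port router")
# An 8-port managed switch covers this; 16 ports leaves room to grow.
```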
Posted May 2, 2014

@wpirobotbuilder 96GB of RAM? Holy batman

Our Grace. The Feathered One. He shows us the way. His bob is majestic and shows us the path. Follow unto his guidance and His example. He knows the one true path. Our Saviour. Our Grace. Our Father Birb has taught us with His humble heart and gentle wing the way of the bob. Let us show Him our reverence and follow in His example. The True Path of the Feathered One. ~ Dimboble-dubabob III
Posted May 2, 2014 Author

"@wpirobotbuilder 96GB of RAM? Holy batman"

Yep. It's useful for hosting lots of VMs or (more likely) to get excellent performance out of a ZFS-based NAS.
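For context on why the RAM helps ZFS: the ARC read cache will happily use most of whatever memory is free. A rough rule-of-thumb check (this is the common community guideline of ~1 GB RAM per TB of pool, not a hard requirement, and the numbers are illustrative):

```python
# Community rule of thumb for ZFS sizing: ~8 GB base plus ~1 GB of RAM
# per TB of pool storage, with the ARC read cache using most free memory.
# This is a guideline, not a hard requirement.
ram_gb = 96
base_gb = 8
comfortable_pool_tb = ram_gb - base_gb
print(f"{ram_gb} GB of RAM comfortably caches a pool up to ~{comfortable_pool_tb} TB")
```

By that yardstick, 96 GB is far more than any pool I'm likely to build here would need — which is exactly why the read performance should be excellent.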
Posted May 3, 2014

"Yep. It's useful for hosting lots of VMs or (more likely) to get excellent performance out of a ZFS-based NAS."

Will this be kept in a tower or a rackmount case? Or just leave it on the floor

Our Grace. The Feathered One. He shows us the way. His bob is majestic and shows us the path. Follow unto his guidance and His example. He knows the one true path. Our Saviour. Our Grace. Our Father Birb has taught us with His humble heart and gentle wing the way of the bob. Let us show Him our reverence and follow in His example. The True Path of the Feathered One. ~ Dimboble-dubabob III
Posted May 3, 2014

Haha, you know what's funny? You have more RAM capacity than some of my system SSDs have storage capacity. *cries in corner*

"Yep. It's useful for hosting lots of VMs or (more likely) to get excellent performance out of a ZFS-based NAS."

Yup. Very interested to see your results on that.

"Will this be kept in a tower or a rackmount case? Or just leave it on the floor"

Probably a Fractal Define XL or Arc XL.
Posted May 5, 2014

Dang it, I've been missing all the fun! Well, subbed now.

I roll with sigs off so I have no idea what you're advertising. This is NOT the signature you are looking for.
Posted May 5, 2014 Author

Initial Performance Testing w/ iozone

I got a volume set up on the SSD (which was wiped of ESXi): 50GB in size, with lz4 compression (the default) enabled. The results file is attached. All numbers are in kilobits per second (divide the listed numbers by 8 for kilobytes per second).

iozone_initial.txt
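If you want the attached numbers in friendlier units, the conversion is just the rule stated above plus a divide by 1024 (a sketch — the sample value below is made up for illustration, not taken from iozone_initial.txt):

```python
# Convert a raw figure from the attached iozone output using the rule
# stated above: numbers are in kilobits/second, so divide by 8 to get
# kilobytes/second, then by 1024 for megabytes/second.
def kbits_to_mbytes_per_sec(kbits: float) -> float:
    kbytes = kbits / 8            # kilobits -> kilobytes
    return kbytes / 1024          # kilobytes -> megabytes

# Hypothetical sequential-read figure, NOT from iozone_initial.txt:
sample_kbits = 2_457_600
print(f"{sample_kbits} Kb/s = {kbits_to_mbytes_per_sec(sample_kbits):.0f} MB/s")
# -> 2457600 Kb/s = 300 MB/s
```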