The Pure Solid State Server Build Log

52 minutes ago, leadeater said:

InfiniBand has way lower latency and RDMA support; you need RDMA to get the really high throughput. The X540s don't have it.

I had looked it up earlier and found out on my own that the X540 doesn't support RDMA. I'm just saying I don't think File Explorer is my bottleneck, that's all.

 

I tried doing a local file transfer and hit 1GB/s writing to the pool. I think with some tweaking I could get close to this over the network, but it seems that without some type of faster cache, or expanding the pool, this is the best I can expect for the time being.


@leadeater So I did some tinkering and...:D


What I now see over the network.

[Screenshot: network transfer speed]

 

Full NIC utilization. Multi-channel doing its job right.

[Screenshot: per-NIC utilization]

 

What I see with one of my typical file transfers.

[Screenshot: a typical file transfer]

 

Told ya I just needed to tinker with it.

 


3 minutes ago, leadeater said:

Sooo..... what did you change?

I enabled jumbo frames on the network switch and set the MTU of all four NICs to 9014. It seems to really help when the CPU clock isn't high enough to pack 1500-MTU packets fast enough to saturate a 10Gbit link. If the X540 supported RDMA I could probably keep it at 1500, but the CPU needs a little help.
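
For reference, the same MTU change can be made from PowerShell; a minimal sketch, where the adapter names are placeholders (Get-NetAdapter lists the real ones) and the switch has to allow jumbo frames as well:

Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed

# Placeholder adapter names - substitute the names Get-NetAdapter reports
foreach ($nic in @("Ethernet 3", "Ethernet 4")) {
    Set-NetAdapterAdvancedProperty -Name $nic -RegistryKeyword "*JumboPacket" -RegistryValue 9014
}

# Confirm the setting took effect on every adapter
Get-NetAdapterAdvancedProperty -Name "*" -RegistryKeyword "*JumboPacket"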


I need to ask, and leadeater, you'll probably know more about this specifically than others. The next step is setting up software to back up the system. I plan to have an offsite backup solution. The remote server will run FreeNAS, which supports SSH/SFTP. I need some form of replication software for Windows Server (if it doesn't ship with one to begin with) that can automatically replicate the data on a schedule and send the copies to the FreeNAS box over the internet via SSH/SFTP.

 

Anybody know where I could find such software?


Have you considered replacing the 40mm fans?

(Maybe you mentioned or already did it, but I didn't see it in the four pages.)

Based on the first page pictures it looks like you could easily remove groups of 4 x 40mm fans and replace them with 80mm fans, or potentially 92mm fans.

Seems like you have exactly 5 groups of 4 x 40mm fans, so it would be quite easy to put 5 larger fans there.

You would get more airflow for less noise.

Also, if those small fans push air towards the outside of the case, how's the airflow with the two fans on the sides pushing air out?

 

You could go nuts and use 38mm thick Sanyo Denki fans along with a fan controller: https://www.digikey.com/product-detail/en/sanyo-denki-america-inc/9GA0812P1H611/1688-2174-ND/8285297

 

You'll need a fan controller to adjust RPM using PWM, otherwise the server will sound like an airplane with the fans running at 8000 RPM, but you will move air around.

 

Or you could go with thin 15mm fans like these if space is an issue: https://www.digikey.com/product-detail/en/delta-electronics/EFC0812DB-F00/603-1159-ND/1850528

 

Here are loads of fans, filtered by 12V and 80x80mm: https://www.digikey.com/short/ppq058

 


4 hours ago, Windows7ge said:

The next step is setting up software to back up the system. I plan to have an offsite backup solution. The remote server will run FreeNAS, which supports SSH/SFTP. I need some form of replication software for Windows Server (if it doesn't ship with one to begin with) that can automatically replicate the data on a schedule and send the copies to the FreeNAS box over the internet via SSH/SFTP.

rsync over SSH would probably be the go-to for that.

 

https://heejune.me/2018/08/02/setup-rsync-server-over-ssh-on-windows-server-2012-easy-way/
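
The general shape of an rsync-over-SSH copy, just for illustration (the host and paths here are placeholders):

rsync -avz -e ssh /path/to/share/ backupuser@remote-nas:/mnt/backups/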


2 hours ago, leadeater said:

It says the easy way, but it's still going over my head. And apparently it's all CLI, which means even if I manage to install rsync I still have to learn how to configure and enable it.

 

Well, I don't actually NEED this for quite some time to come, so I guess I'll just have to sit down and find the time to learn.

 

I also now have a powerful VM server running Proxmox so I can test rsync extensively in an isolated environment before I actually deploy it.

 

I hope it works out. In the meantime I'm searching for possible alternatives. Having options is always nice.


Oh nice! Glad I followed this a while back or else I would have forgotten about it :P

Looking real good so far, and I am just drooling at all those SSDs. Although I couldn't even begin to consider what I would use to saturate that massive uplink; my 1Gb teamed links are more than enough as is XD.


Nitpick: you will lose memory performance if you are using just one stick of RAM per processor. Those Xeons use quad-channel memory, so with two of them you need 8 matching sticks for best performance.


7 minutes ago, maxtch said:

Nitpick: you will lose memory performance if you are using just one stick of RAM per processor. Those Xeons use quad-channel memory, so with two of them you need 8 matching sticks for best performance.

Probably not going to matter for this; I've even seen commercial storage products do similar with respect to RAM channels. Grates me every time I see it.


9 minutes ago, leadeater said:

Probably not going to matter for this; I've even seen commercial storage products do similar with respect to RAM channels. Grates me every time I see it.

It depends on what you do. If it is a pure storage server the memory bandwidth won't matter much, but if you compute on it, that can be very taxing on memory bandwidth.

 

I have an ex-server workstation I compile code on. Compiling code with aggressive optimizations on is a strictly CPU-only, integer-only, high-throughput, high-IOPS compute task when parallelized. Not only do I need octa-channel memory, but also an NVMe SSD, to keep my dual Xeon E5-2680s fed.


13 minutes ago, maxtch said:

It depends on what you do. If it is a pure storage server the memory bandwidth won't matter much, but if you compute on it, that can be very taxing on memory bandwidth.

 

I have an ex-server workstation I compile code on. Compiling code with aggressive optimizations on is a strictly CPU-only, integer-only, high-throughput, high-IOPS compute task when parallelized. Not only do I need octa-channel memory, but also an NVMe SSD, to keep my dual Xeon E5-2680s fed.

That's why I said for this; storage isn't exactly a big compute or memory workload, though there are particular software-defined storage technologies that can be, like very high-performance erasure coding.


10 minutes ago, leadeater said:

That's why I said for this; storage isn't exactly a big compute or memory workload, though there are particular software-defined storage technologies that can be, like very high-performance erasure coding.

If, for example, you implement on-the-fly full-disk encryption, you need strong compute, although for a storage server it is better to implement that on the client side so the server never sees the cryptographic keys. Or if your server also hosts a DBMS, compute is needed to deal with the requests.


12 minutes ago, maxtch said:

Or if your server also hosts a DBMS

Then it isn't just a storage server, nor would I refer to it as one. I more commonly see remote block storage for database servers though, so compute and storage are separated. Full RAM channel allocation is best, but in a lot of cases it's far from required; I just hate not seeing it done as I would, irrespective of what the server is doing. Just size the DIMMs to meet the total capacity you're after. Coming across servers with 128GB/256GB of RAM running one or two channels down is disheartening because it's so easily avoidable.

 

Edit:

And nobody ever actually adds more RAM to the system later, even when that was the plan and the reason the channels weren't populated to start with.


8 hours ago, 8uhbbhu8 said:

Oh nice! Glad I followed this a while back or else I would have forgotten about it :P

Looking real good so far, and I am just drooling at all those SSDs. Although I couldn't even begin to consider what I would use to saturate that massive uplink; my 1Gb teamed links are more than enough as is XD.

My first real use of the full 20Gbit connection was to transfer 80GB of files.

 

The transfer took <1min.

 

5 hours ago, maxtch said:

Nitpick: you will lose memory performance if you are using just one stick of RAM per processor. Those Xeons use quad-channel memory, so with two of them you need 8 matching sticks for best performance.

If you read through the topic you'll see that one stick per CPU is in no way the final configuration. It's just that each stick costs at least $500 (usually a lot more), so I didn't plan on waiting for the spare money to buy more sticks when the system can function with just two. Network transfers aren't bottlenecked by RAM and my use case isn't RAM intensive, so it's fine for now. More RAM will definitely come in the future though; that I guarantee, and I want quad channel.


2 minutes ago, Windows7ge said:

My first real use of the full 20Gbit connection was to transfer 80GB of files.

 

The transfer took <1min.

I wish I had that kind of transfer speed when I am restoring my backups.

3 minutes ago, Windows7ge said:

If you read through the topic you'll see that one stick per CPU is in no way the final configuration. It's just that each stick costs at least $500 (usually a lot more), so I didn't plan on waiting for the spare money to buy more sticks when the system can function with just two. Network transfers aren't bottlenecked by RAM and my use case isn't RAM intensive, so it's fine for now. More RAM will definitely come in the future though; that I guarantee, and I want quad channel.

TL;DR, sorry. Just keep in mind that some workloads you might throw at that processor can be demanding on memory bandwidth, for example a DBMS, or a code compile with aggressive optimizations parallelized across every thread.


1 hour ago, maxtch said:

TL;DR, sorry. Just keep in mind that some workloads you might throw at that processor can be demanding on memory bandwidth, for example a DBMS, or a code compile with aggressive optimizations parallelized across every thread.

Shortened: This RAM is expensive but quad channel WILL be achieved in the future. I'll @ you about it.


I thought I would give an update on the whole situation with Supermicro making me put down a security deposit for a cross-shipment (no other replacement option was available).

 

After having the defective board officially back in their possession for the past 8 days, Supermicro finally approved the return and refunded me (I only had to send them an e-mail asking what was going on). They even refunded the state tax associated with the "purchase", which I did not expect to get back.

 

As much as I like the idea of having a replacement board overnighted to me the day after submitting an RMA, I REALLY did not like having to put down over $650, go through a very detailed and hard-to-navigate RMA process, and then be told:

  • We may not refund you
  • The replacement board may be refurbished
  • If the board is irreparable, you have to pay to have the defective board shipped back to you (if I had asked for repair)

Thankfully, despite the rather complicated RMA procedure that I nearly screwed up about three times, I got refunded.

 

To be honest, for future servers I'm probably going to go back to ASRock Rack, even if their boards aren't "real" server motherboards. I've worked with their RMA service; it's like any other desktop motherboard manufacturer's, with no security-deposit weirdness, and their boards have worked pretty well for me. Unless I end up building something that requires a VERY specific motherboard layout, or needs a VERY specific list of features that only Supermicro offers, I don't think I'm going to buy from them again. Not with an RMA service like this, telling me I may not even get my money back if the return is unsatisfactory.


This update is going to be a bit of a rant.

 

The past four days have been nothing but trying things and failing, then failing some more, then trying more things, until finally I got something to work between the Windows server and a FreeNAS VM (it's only in a VM for testing purposes; don't run FreeNAS in a VM for regular use).

 

To let it be known immediately: all hope of a GUI replication program that supports replication over SSH/SFTP/rsync (for free) fell through.

 

Like leadeater suggested (and Bitter had mentioned for an unrelated issue), using rsync seems to be the only real free option for cross-platform automatic synchronization over standard networking protocols.

 

Getting it working for my exact configuration wasn't easy, though. First off, the "Setup Rsync Server Easy Way" guide was in no way designed for your average user. Throughout the guide there are whole sections of information that are left out because the author expects you to already know how to do them. Most of them I did not, and ultimately I had to abandon it because near the end, when you need to fetch the rsync.exe file, the hyperlinks take you to dead sites.

 

I did get something out of it though.

 

[Screenshot: a Windows command prompt in a PuTTY SSH session]

That is a command prompt accessed via SSH using PuTTY. That is so cool. I never thought I could remote into the Windows equivalent of a Linux terminal.

 

What I found was a program called Cygwin, which installs a terminal-like environment on Windows. Two of its supported packages are rsync and SSH.
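
(Side note for anyone retracing this: the stock Cygwin installer can pull those packages non-interactively; something along the lines of the command below, assuming the standard setup-x86_64.exe from cygwin.com and the usual package names.)

setup-x86_64.exe -q -P rsync,openssh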

 

I knew understanding how to configure rsync was going to be the biggest pain in the ass, but I managed to find a website that explained most of how to set it up. A lucky find. With that, my test backup script looks like this:

#!/bin/bash
# Mirror the local share to the FreeNAS box over SSH; --delete removes files on the target that no longer exist locally, and each run writes a dated log.
rsync -avzhP -e 'ssh -p 22' --delete --stats --log-file=/cygdrive/c/Users/Administrator/Desktop/backup-log/backup-`date +"%F-%I%p"`.log /cygdrive/d/Shares/Storage/users/jon root@192.168.0.247:/mnt/test/

If you're looking at that and don't understand any of it then you know how I felt while figuring out how to write it.

 

So, great, we have a working script. We're done... right?

 

Nope. We need to automate it, but there's an issue: while authenticating the session with the FreeNAS server, it prompts you for the user's password on the server. That's a problem, because we don't want to authenticate manually to run automatic backups. The solution? Public/private key authentication without password protection on the key. That sounds bad, but it's stronger than plain password authentication and it lets the job run unattended. Getting the script to authenticate itself, though, requires a whole other set of setup instructions, without which you'll get a "Permission denied (publickey)" error.
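
Roughly, the key setup from the Cygwin terminal looks like the sketch below. The user and IP mirror the script above; treat the exact commands as an outline rather than a transcript of what I ran.

# Generate a key pair with no passphrase so the scheduled job can use it unattended
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa

# Put the public key on the FreeNAS box (if ssh-copy-id isn't available, append id_rsa.pub to ~/.ssh/authorized_keys on the server by hand)
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.0.247

# Test: this should log in without a password prompt
ssh root@192.168.0.247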

 

OK, so now we can run the backup script and the server automatically authenticates us. Great. Now we need to set up a Windows task to run it.

 

But guess what.

 

Our script isn't an .exe; in fact, it doesn't have a file extension at all, and it won't run if you tell Windows to open it with our simulated terminal. So how do we add it to a task? Well, when executing an application within Task Scheduler there is a field that lets you add an argument, so NOW we have to figure out the CLI argument that will make the program launch the script on startup.

 

And that argument is:

/bin/bash -l -e '/cygdrive/c/cygwin64/home/administrator/rsync-backup-script'
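
If you'd rather register the task from PowerShell than click through Task Scheduler, a rough sketch is below. The task name and schedule are made up, and it calls Cygwin's bash.exe directly instead of launching the terminal; adjust the paths and account to match your install.

# Run the Cygwin backup script every night at 2am (sketch only)
$action  = New-ScheduledTaskAction -Execute "C:\cygwin64\bin\bash.exe" -Argument "-l -e '/cygdrive/c/cygwin64/home/administrator/rsync-backup-script'"
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
# (Add -Password if the task has to run while the account is logged off.)
Register-ScheduledTask -TaskName "Rsync offsite backup" -Action $action -Trigger $trigger -User "Administrator"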

 

So with that figured out:

  1. Windows Task Scheduler launches our simulated Terminal
  2. The Terminal automatically runs our backup script
  3. The Rsync application automatically SSH's into the server
  4. Rsync synchronizes our data, outputs a log, closes the session, and then the terminal exits on its own.

So rsync over SSH on Windows, once you have it figured out, is a very nice tool, but I have to say that NO SINGLE WEBSITE HAD A GUIDE ON THIS! I had to go over several websites and forums to locate each next step to get the whole configuration running.

 

I am going to write a guide and post it here on the forum. Clear and collected instructions need to exist on the Internet for this.

 

Alright, I'm done ranting. Next on the list of things left to do: I need some UPSes. There's not a whole lot to them, but I'll go over them when I put in the order and they show up.


Making tools made for Linux/Unix work with Windows is always a pain, unless there's an easy Windows-side version that integrates with the Linux-side stuff. I remember mucking about with RDP about 10 years ago, getting it working both ways between Windows and *nix over gigabit for several computers in a room; it was cheaper than a KVM switch! On Windows and Linux, because I'm lazy, I use grsync, a GUI for rsync, because easy clicking. That's just for local stuff as far as I know, but if the target drive is mapped in Windows then I don't see why you couldn't use it on the local machine as a task to execute a backup. You know a lot more about this than I do, though. I just plug a big hard drive into my dumb boxes, start grsync working, and it's done when I wake up from my nap.

 

http://www.opbyte.it/grsync/


2 hours ago, Bitter said:

On Windows and Linux, because I'm lazy, I use grsync, a GUI for rsync, because easy clicking. That's just for local stuff as far as I know, but if the target drive is mapped in Windows then I don't see why you couldn't use it on the local machine as a task to execute a backup.

 

http://www.opbyte.it/grsync/

For local synchronization I use FreeFileSync. This project requires the ability to synchronize over the internet using the SSH protocol, and relies on public/private key authentication to boot, so Grsync won't work. I appreciate the suggestion though.


An unplanned update. So I had bought a cheapo Intel NIC; I didn't bother mentioning it in the build because that's all it was. It uses the Intel 82574L controller. Not only was I trying to diagnose why the two 10Gbit NICs kept disabling themselves, but this 1Gbit NIC was too. Trying to locate drivers for it led to a conundrum: the driver for the 10Gbit NICs is the same driver as for the 1Gbit NIC. That sounds like a good thing, but no. Windows refuses to recognize the driver as correct for the NIC, and with the default Windows driver it keeps disabling itself. It could also possibly be the motherboard slot; I really hope not.

So instead of spending money on possibly another NIC, in case something else is the issue, I grabbed a spare dual-port BCM57810S (because who doesn't have a spare one of those when their gig NIC misbehaves) and put it in its place. There is one issue though: it's SFP+, and our 1Gbit NIC is Ethernet. So in order to plug it into my local network I had to pass through the 10Gbit switch, effectively turning it into a $500 media converter (if you think that's bad I can tell you stories about worse things I've seen companies do).

With that, I plan to have the server run a bunch of internet-bound traffic, and if the NIC hasn't disabled itself after a couple of days I'll buy an SFP-to-Ethernet transceiver (SFP, not SFP+) and just use this dual-port NIC for my WAN access. If I ever break 2GB/s on our high-speed network I might just buy 2x SFP+-to-Ethernet transceivers, since that would be cheaper than buying a second X540.


  • 2 weeks later...

With the new NIC proving to be stable I went ahead and bought an SFP-to-Ethernet transceiver. I also picked up another 1TB SSD, this time an Intel D3-S4510. I need to expand the server's pool.

[Photo: the new Intel D3-S4510 1TB SSD]

 

I also wanted a way to power rack devices on or off easily, so I picked up a rack-mountable power strip.

[Photo: the rack-mountable power strip]

This one is kind of cheap though. I can't recommend it. The switches are all crooked and aren't fully seated in the enclosure. The labels on the back are also misleading.

 

And finally, my new batteries: the CyberPower OR1500LCDRT2U. These are middle-of-the-road UPSes, being simulated sine-wave. That's a step up from a desktop unit while being a lot cheaper than pure sine-wave.

[Photo: the CyberPower OR1500LCDRT2U UPSes]

These things are heavy, 54lbs each. I heard people complaining about the USB connection not working, but I guess we'll see how it goes.

 

I also picked up this:

[Photo: a roll of Velcro cable wrap]

82ft of Velcro. I'm done cable-managing Ethernet cables, fibre cables, power cables, etc. with zip ties; this will make everything better.

 

The first thing I did was install the new transceiver. The NIC is dual SFP+, but SFP+ transceivers are pricey and I don't need the speed for a gigabit connection. Getting it installed was easy, though I did have to disable and re-enable the interface afterwards. It protrudes quite a bit out the back of the chassis.

[Photo: the SFP-to-Ethernet transceiver installed in the NIC]

 

Next I installed the new SSD. This took a little while to figure out, along with the help of some people on the forum. A plus with Storage Spaces is the ability to add disks to an existing pool, a feature yet to be introduced to ZFS. Going through Server Manager on Server 2016, you need to go to File and Storage Services > Storage Pools > (select your pool) > under Physical Disks > TASKS > Add Physical Disk...

 

This adds the disk to the pool, but from here the storage isn't yet usable on the network share. We have to extend the virtual disk. To do that, right-click the virtual disk under the Virtual Disks menu > Extend Virtual Disk... > From there you can choose how much of the available pool capacity to add.

 

After that the capacity is technically there to use, but the actual volume now needs to be extended, like I did here (or via Disk Management):

[Screenshot: extending the volume]

 

This now makes the capacity of the new disk you added to the pool usable. However, we're still not done. We have to balance the pool by running this command in PowerShell:

Optimize-StoragePool -FriendlyName "name_of_pool"


This will balance the data across the disks including the new disk(s) you added.
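
For reference, the whole expansion can also be done from PowerShell. A rough sketch, where "Pool" and "VDisk" are placeholder friendly names and the new size is just an example:

# Add any un-pooled physical disks to the existing pool
$new = Get-PhysicalDisk -CanPool $true
Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks $new

# Grow the virtual disk into the new capacity
Resize-VirtualDisk -FriendlyName "VDisk" -Size 5TB

# Grow the data partition to fill the larger virtual disk
$part = Get-VirtualDisk -FriendlyName "VDisk" | Get-Disk | Get-Partition | Where-Object Type -eq "Basic"
$part | Resize-Partition -Size ($part | Get-PartitionSupportedSize).SizeMax

# Rebalance data across all disks, including the new one
Optimize-StoragePool -FriendlyName "Pool"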

 

After that I plugged the new batteries in. I didn't photograph this because they're just underneath the table the servers sit on, but they do more than I expected. If the server isn't doing much of anything, the estimated run time between the two of them is close to 4 hours. In comparison, the old desktop UPS I was using would run for about 10 minutes. It does seem the USB doesn't work when you plug them in until you install their software; after that it works fine. Go figure.

 

I'm not sure there's much more to post about this; I think this build has come to a close. I will be building another server, a much smaller, cheaper one, but one with its own unique perks, so I may link it here when that build log starts. Thanks for following along.

 


  • 7 months later...

@maxtch ...7 months later and endless BS dealing with NEMIX support...

 

I have acquired Quad-Channel x2. All matching sticks. >:)

[Photo: the new matching RAM sticks]


 

May your nitpick be at ease.

 

And as for YOU, leadeater: I hate opening someone else's server to find one stick of RAM, or a mishmash of different RAM, as well. This project was always intended to be a long-term project that I would build out as I found the spare income to put towards it. Quad-channel was an inevitability; I was going to make it happen... it only took 7 months. :D

 

More updates may occur should I come across a good supply of Intel DC/D3 4XXX-series 960GB SSDs. If anybody knows of any sources, it could help speed things along.

