Official Project Personal Datacentre Build for 2020/2021...
Okay - there's a new option in case I can't get the LGA2011-0 Xeons (E5-2600 v2s) on the table. It's cheap, scalable, and works with a lot of parts that I already own (including a pair of LGA1567 Xeons that I got for under 20 USD). The only downsides are that I can't use 3.5-inch HDDs with it, and that it's from HP/HPE. @Ithanul - summoning you for your opinion on this machine. While it appears to be much better than the last option I saw from them (especially for under 300 USD barebones), it may still have other issues to contend with. Yes, it's a server - I can handle the noise. This is meant to act as a replacement for my current workstation (in addition to taking on other machines/roles). Currently a part of Project Datacentre Evolution...
This is the final plan - no longer an if/conditional. I also found a way to get the 3.5-inch HDDs working with the server :3
PLAN 0 :: BETA
CSE :: HPE ProLiant DL580 G7
CPU :: 4x Intel Xeon E7-8870s (10c/20t each; 40c/80t total)
RAM :: 128GB (32x4GB) DDR3-1333 PC3-10600R ECC
STR :: 1x HP 518216-002 146GB HDD (ESXi, VMware Linux Appliance, System ISOs) +
1x 500GB Seagate Video ST500VT003 HDD (Remote Development VM) +
4x HP 507127-B21 300GB HDDs +
1x Western Digital WD Blue 3D NAND 500GB SSD (Virtual Flash) +
1x Intel 320 Series SSDSA2CW600G3 600GB SSD (Virtual Flash) +
1x LSI SAS 9201-16e HBA SAS card (4-HDD DAS) +
1x Mini-SAS SFF-8088 cable +
1x Dell EMC KTN-STL3 (15x 3.5in HDD enclosure) +
4x Hitachi Ultrastar HUH728080AL4205 8TB HDDs +
4x IBM Storwize V7000 98Y3241 4TB HDDs
GPU :: 1x NVIDIA GeForce GTX Titan Z
SFX :: 1x Creative Sound Blaster Audigy Rx
NIC :: 1x HPE NC524SFP (489892-B21)
I/O :: 1x HPE PCIe ioDuo MLC I/O Accelerator (641255-001)
ODD :: 1x Sony Optiarc Blu-ray drive
3DConnexion SpaceNavigator (replaced by a better option)
Rosewill RASA-11001 (replaced by a better option)
Mediasonic HT31-304 (replaced by a better option)
TESmart HKS0401A KVM Switch (eliminated due to current GPU choices)
HPE NC522SFP Dual-port wired NIC (replaced by a better option)
NOTE :: All other components (PSUs, heatsinks, etc.) are provided by the OEM. Items in the OTR category (currently empty - replaced by PRP) are non-essential items that have a decent likelihood of being purchased. This is in contrast to the optional/cancelled items in the spoiler directly above, which have a lower likelihood of being considered over time.
* Parts that have been acquired, but may require modifications/configuration changes to work. As of now, the fans aren't required
for functionality. They were meant to help quiet the server down, but require pin-out modification to work. This part can wait for now.
Project New Client is still active, and has even had purchases put toward it recently. It is very likely that this will run in parallel with the server project for the foreseeable future. The software configuration details I posted last time have changed significantly:
- VMware Appliance (for managing and monitoring ESXi 6.5)
- Temporary/Test VM (for when I want to host a temporary role/application, without impacting mission-critical infrastructure)
- Windows Server Datacentre (Active Directory^, AD CA^, Technitium^, NTP^, SoftEther+, hMailServer+, PXE Boot^, ejabberd+ etc.)
- macOS Mojave (Time Machine, Plex**, DaVinci Resolve, Google Drive/MEGA sync*, Deluge, HandBrake, SVP4 Pro, iTunes, F@H, etc.)
- Artix Linux (OpenSSH^, Server JRE, Docker, NextCloud** +, FreePBX+SMS+, YaCy Grid, YouPHPTube ^, Minecraft servers, etc.)
- Windows 10 LTSB (front-end tasks - OBS Studio, Moonlight**, DesignWorks, Console Game Emulators, modpack dev., etc.)
- Remote Development (my build environment - Netbeans, MinGW/64, cmake, GNUWin, CUDA SDK, Mosh, nginx+SQLite+Php, etc.) ^
Services/roles that are still under consideration/haven't been allocated yet include (in order of decreasing priority):
* Temporary task that will be replaced by a permanent, self-hosted solution
** Can benefit from port forwarding, but will be primarily tunnel-bound
^ Tunnel-bound (VPN/SSH) role - not port forwarded/exposed to the Internet
+ Active Directory enabled - Single Sign On (SSO)
On the other hand, there are certain tools that all VMs will likely be equipped with. Some of those tools include:
- UnGoogled-Chromium + KeePassXC
- Notepad++ (Windows) + Notepadqq (Linux, MacOS)
- Mozilla Thunderbird + Lightning Calendar
- Malwarebytes (Free Edition)
- HashCheck (Windows)
And those that will possibly be shared through Horizon, for a seamless experience:
- coming soon...
I think the following distribution of resources should work:
- VMware NIX Appliance :: 24/7 - true, dedicatedHDD - false, dedicatedGPU - false, 2c/4t + 12GB
- Temporary/Testing VM :: 24/7 - false, dedicatedHDD - false, dedicatedGPU - true, 12c/24t + 32GB *
- Windows Server 2016 :: 24/7 - true, dedicatedHDD - true, dedicatedGPU - false, 8c/16t + 16GB
- macOS Server 10.14.X :: 24/7 - true, dedicatedHDD - true, dedicatedGPU - true, 8c/16t + 16GB
- Artix Linux - Xfce ISO :: 24/7 - true, dedicatedHDD - true, dedicatedGPU - false, 8c/16t + 16GB
- Windows 10 Enterprise :: 24/7 - false, dedicatedHDD - true, dedicatedGPU - true, 12c/24t + 32GB *
- Remote Development VM :: 24/7 - false, dedicatedHDD - true, dedicatedGPU - true, 12c/24t + 32GB *
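The numbers above can be sanity-checked with a short script. This is just arithmetic over the figures copied from the table (host capacity from the CPU/RAM lines earlier: 40c/80t and 128GB); it doesn't query ESXi itself:

```python
# Sanity-check the provisioning plan against the host's physical resources
# (4x E7-8870 = 40c/80t, 128GB RAM). All figures are copied from the plan.
HOST_CORES, HOST_RAM_GB = 40, 128

# (name, always_on, cores, ram_gb) - the three 12-core VMs run one at a time
vms = [
    ("VMware NIX Appliance",  True,   2, 12),
    ("Temporary/Testing VM",  False, 12, 32),
    ("Windows Server 2016",   True,   8, 16),
    ("macOS Server 10.14.x",  True,   8, 16),
    ("Artix Linux",           True,   8, 16),
    ("Windows 10 Enterprise", False, 12, 32),
    ("Remote Development VM", False, 12, 32),
]

total_cores = sum(c for _, _, c, _ in vms)
total_ram   = sum(r for _, _, _, r in vms)

# Worst realistic case: every 24/7 VM plus ONE of the 12-core VMs awake.
active = [v for v in vms if v[1]] + [("one 12c VM", False, 12, 32)]
active_cores = sum(c for _, _, c, _ in active)
active_ram   = sum(r for _, _, _, r in active)

print(f"Provisioned: {total_cores}c / {total_ram}GB (overcommitted on paper)")
print(f"Active set : {active_cores}c / {active_ram}GB "
      f"(fits in {HOST_CORES}c / {HOST_RAM_GB}GB)")
```

On paper the plan overcommits (62 cores, 156GB), but the realistic active set (all 24/7 VMs plus one 12-core VM) stays within the physical 40 cores and 128GB, which is why the RAM-only thin provisioning mentioned below is safe.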
A small amount of thin provisioning/overbooking (RAM only) won't hurt, as long as I don't try using more than one 12-core VM at a time (there's no need to - just pause and switch).

macOS and Linux would have gotten Radeon/FirePro cards (e.g., RX Vega 64) for best compatibility and stability, but market forces have prevented this. Windows Server 2016 can have the server's on-board audio. Windows 10 gets the Audigy Rx and whatever GPU I can give it. The macOS and Linux VMs get whatever audio the GRID K520s provide (either that, or a software solution).

The whole purpose behind NextCloud is to phase out the use of Google Drive/Photos, iCloud, Box.com, and other externally-hosted cloud services (MEGA can stay, though). Windows 10, Remote Development, and the Temp/Testing VM will be put to sleep (or powered off) until they are needed (Wake-on-LAN), since they don't host any essential services. The reason why I'm doing this.
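Since the non-essential VMs will be woken on demand, here's a minimal sketch of the Wake-on-LAN side: a WoL "magic packet" is just 6 bytes of 0xFF followed by the target MAC repeated 16 times, broadcast over UDP. The MAC in the example is a placeholder, not a real vNIC address:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a WoL magic packet: 6x 0xFF, then the target MAC
    address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError(f"invalid MAC address: {mac}")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on UDP port 9 (the conventional WoL port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# Example (hypothetical MAC for one of the sleeping VMs):
# wake("00:50:56:ab:cd:ef")
```

The vNIC (and guest OS power settings) must have WoL enabled for the VM to actually respond; the sender side is the easy part.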
Other Project Mirrors:
@Windows7ge In due time, I will switch over to the 4XXX Xeons. Starting with a pair of 2XXX Xeons will allow me to keep the initial cost(s) low until I can afford to purchase more cores...
On a side note, just caught wind of this:
I kind of needed to see that, seeing as I'm now considering ditching the idea of getting two Vega 64s if I can just grab a single SR-IOV card instead and share it between macOS and Linux. It's not like I'll be using them for extreme gaming tasks, so I should be able to save some money by doing this. Current AIB options include:
Would also free up some PCI-e slots for other devices that I could use - maybe leaving room for a better capture card in the future?
SR-IOV is something that got brought up to me on the level1techs forum (yes, I'm cheating on the linustechtips forum) as a way to divide my multiport 10Gbit NIC between my host and the VFIO VM I created, passed a GPU through to, and got Looking Glass working on.
I have not yet explored SR-IOV for NICs, but the issue is that, being on the same card, the ports belong to the same IOMMU group, and when you pass a device through to a VM, the system really doesn't like splitting the devices in an IOMMU group. That's where SR-IOV would come in. For now I've just created a network bridge in Linux and attached the VM to the bridge. That lets me pass data between the virtual and physical networks at 10Gbit. I may or may not move forward with SR-IOV, because with a setup like this, if I make more VMs on my desktop, I can have them all share the 10Gbit NIC simultaneously.
Depending on your use case SR-IOV is definitely a route you can go. Do as much research as you can that it'll work on the hardware and OS you plan to use. I don't know about vmware/ESXi.
It's actually because of VFIO & Looking Glass that I'm able to move away from Windows and just use it in a high performance VM for the few applications I still need it for.
If you don't need the VMs for anything super graphically stressful (GPU acceleration, etc.), then yeah, go for the FirePros if you can find them for a half-decent price. The extra benefit here, as you mentioned with the DL580 G7, is that it's scalable. Need to host 20 more VMs? Pop in 1 or 2 more FirePros & spin them up.
What you could do is use the FirePros to consolidate the VMs that need some graphics power, but not an excessive amount, then just have one up-to-date Vega GPU to give to a VM that needs a lot of power.
Just eliminated one of the FirePro options, due to its relatively low performance-to-cost ratio. The S7150 x2 was barely better than a mid-range Fermi GPU, but sellers were demanding upwards of ~1k USD for it - probably solely for the VRAM size (2x 8GB GDDR5). While the FirePro that I left on the list comes with only 2x 4GB of HBM, its memory bandwidth is much higher. It also consumes less power, and is way cheaper (~300 USD on average). On a lighter note, I've picked up a pair of Intel Xeon E7-8870s - so here's to hoping that I can go full S8 in the years to come :3
Just got my second E7-2870 in the mail today. I also ordered a 3rd E7-8870, in anticipation of the upcoming purchase later this year (the DL580 G7). Might pick up some more RAM while I'm at it, to keep the DIMM slots filled. Still deciding on whether to switch every Windows machine I have to Enterprise LTSB/LTSC...
Nice. I'm currently in limbo waiting on NEMIX to return my request for RAM replacement. They sent me the wrong stuff. I'll post a Status Update about it tomorrow.
Eh, depending on what you're doing with them I don't know how worth it that'd be. That'd be some costly software that could have been spent on more hardware.
Okay - just ordered my 4th E7-8870 for 15 USD (Best Offer is nice), and am waiting on an offer for another 24GB of RAM. If that goes through, the only thing(s) I'll need to get are the server itself and the PCIe risers. I may consider getting the new sound card, to start transitioning my audio setup over (before starting with ESXi). Once I get that figured out, I'll be able to use it as a raw replacement for the T7500 until I get the GPU(s)...
@Windows7ge At prices like this, and with Best Offer available?
I remember 16GB sticks coming in at ~26 USD per unit when I last bought them.
@Windows7ge I'll consider switching to 8GB sticks when I get my next job. For now, I have no choice but to stick to 4GB sticks, so that I can still afford the purchases scheduled for later this year. There is no guarantee that I'll be able to continue the streak of serendipity I've been riding these last few days, so I have to play it safe...