
Official Project Personal Datacentre Build for 2020/2021...

Spoiler

Okay - there's a new option in case I can't get the LGA2011-0 Xeons (E5-2600 v2s) on the table. It's cheap, scalable, and works with a lot of parts that I already own (including a pair of LGA1567 Xeons that I got for under 20 USD). The only downsides are that I can't use 3.5-inch HDDs with it, and that it's from HP/HPE. @Ithanul summoning you for your opinion on this machine - while it appears to be much better than the last option I saw from them (especially for under 300 USD barebones), it may still have other issues to contend with. Yes, it's a server - I can handle the noise. This is meant to act as a replacement for my current workstation (in addition to other machines/roles). Currently a part of Project Datacentre Evolution...


This is the final plan - no longer an if/conditional. And I found a way to get the 3.5-inch HDDs working with the server :3

 

 

PLAN 0 :: BETA

 

CSE  :: HPE ProLiant DL580 G7
CPU  :: 4x Intel Xeon E7-8870 (10c/20t each; 40c/80t total)
RAM  :: 128GB (32x 4GB) DDR3-1333 PC3-10600R ECC
STR  :: 1x HP 518216-002 146GB HDD (ESXi, VMware Linux Appliance, System ISOs) +
        1x Seagate Video ST500VT003 500GB HDD (Remote Development VM) +
        4x HP 507127-B21 300GB HDDs +
        1x Western Digital WD Blue 3D NAND 500GB SSD (Virtual Flash) +
        1x Intel 320 Series SSDSA2CW600G3 600GB SSD (Virtual Flash) +
        1x LSI SAS 9201-16e HBA (4-HDD DAS) +
        1x Mini-SAS SFF-8088 cable +
        1x Dell EMC KTN-STL3 (15x 3.5in HDD enclosure) +
        4x Hitachi Ultrastar HUH728080AL4205 8TB HDDs +
        4x IBM Storwize V7000 98Y3241 4TB HDDs
PCIe :: 1x HP 512843-001/591196-001 System I/O board +
        1x HP 588137-B21/591205-001/591204-001 PCIe riser board
GPU  :: 1x nVIDIA GeForce GTX 1080 +
        1x nVIDIA GeForce GTX Titan Z
SFX  :: 1x Creative Sound Blaster Audigy Rx
NIC  :: 1x HPE NC524SFP (489892-B21)
I/O  :: 1x HPE PCIe ioDuo MLC I/O Accelerator (641255-001)
PSU  :: 4x 1200W server PSUs (HP 441830-001/438203-001)
PRP  :: 1x Dell MS819 Wired Mouse
ODD  :: 1x Sony Optiarc Blu-ray drive

 

Spoiler

            1x 3DConnexion SpaceNavigator (replaced by a better option)

            1x Rosewill RASA-11001 (replaced by a better option)

            1x Mediasonic HT31-304 (replaced by a better option)

            1x TESmart HKS0401A KVM Switch (eliminated due to current GPU choices)

            1x HPE NC522SFP Dual-port wired NIC (replaced by a better option)

            1x AMD FirePro S9300 X2 (SR-IOV) (eliminated due to poor availability and pricing)

 

NOTE :: All other components (PSUs, heatsinks, etc.) are provided by the OEM. Items in the OTR category (currently empty - replaced by PRP) are non-essential items that have a decent likelihood of being purchased. This is compared to the optional/cancelled items in the spoiler directly above, which have a lower likelihood of being considered over time.

 

* Parts that have been acquired, but may require modifications/configuration changes to work. As of now, the fans aren't required for functionality. They were meant to help quiet the server down, but require pin-out modification to work. This part can wait for now.

 

 


Project New Client is still active, and has even had purchases put toward it recently. It is very likely that this will run in parallel with the server project for the foreseeable future. The software configuration details I posted last time have changed significantly:

 

Services/roles that are still under consideration/haven't been allocated yet include (in order of decreasing priority):

 

*  Temporary task that will be replaced by a permanent, self-hosted solution

** Can benefit from port forwarding, but will be primarily tunnel-bound

^  Tunnel-bound (VPN/SSH) role - not port forwarded/exposed to the Internet

+  Active Directory enabled - Single Sign-On (SSO)
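
To illustrate the tunnel-bound (^) roles: instead of forwarding ports on the router, access goes over SSH (or the VPN). A minimal sketch of an SSH local port forward - the hostname, VM address, and ports below are placeholders, not my actual setup:

```python
import subprocess

# Forward local port 8443 to port 443 on a tunnel-bound VM, via SSH.
# Equivalent to running: ssh -N -L 8443:192.168.1.50:443 user@dl580.example.lan
# -N: don't run a remote command, just hold the tunnel open (blocks until killed).
subprocess.run([
    "ssh", "-N",
    "-L", "8443:192.168.1.50:443",   # local 8443 -> placeholder VM at 192.168.1.50:443
    "user@dl580.example.lan",        # placeholder SSH host
])
```

The role's web UI would then be reachable at https://localhost:8443 without ever being exposed to the Internet.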

 

 

On the other hand, there are certain tools that all VMs will possibly be equipped with. Some of those tools include:

 

 

And those that will possibly be shared through Horizon, for a seamless experience:

  • coming soon...

 

 

I think the following distribution of resources should work:

  • VMware NIX Appliance   :: 24/7 - true,  dedicatedHDD - false, dedicatedGPU - false,   2c/4t  + 12GB
  • Temporary/Testing VM   :: 24/7 - false, dedicatedHDD - false, dedicatedGPU - true,   12c/24t + 32GB *
  • Windows Server 2016    :: 24/7 - true,  dedicatedHDD - true,  dedicatedGPU - false,   8c/16t + 16GB
  • macOS Server 10.14.x   :: 24/7 - true,  dedicatedHDD - true,  dedicatedGPU - true,    8c/16t + 16GB
  • Artix Linux - Xfce ISO :: 24/7 - true,  dedicatedHDD - true,  dedicatedGPU - false,   8c/16t + 16GB
  • Windows 10 Enterprise  :: 24/7 - false, dedicatedHDD - true,  dedicatedGPU - true,   12c/24t + 32GB *
  • Remote Development VM  :: 24/7 - false, dedicatedHDD - true,  dedicatedGPU - true,   12c/24t + 32GB *
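
Quick sanity check on those allocations (just a rough Python sketch using the numbers from the list above - nothing fancy):

```python
# Rough tally of the plan above (numbers copied straight from the list).
# Host: 4x E7-8870 = 40c/80t, 128GB RAM.
vms = {
    # name                    (cores, ram_gb, always_on)
    "VMware NIX Appliance":   (2,  12, True),
    "Temporary/Testing VM":   (12, 32, False),
    "Windows Server 2016":    (8,  16, True),
    "macOS Server 10.14.x":   (8,  16, True),
    "Artix Linux (Xfce)":     (8,  16, True),
    "Windows 10 Enterprise":  (12, 32, False),
    "Remote Development VM":  (12, 32, False),
}

HOST_CORES, HOST_RAM_GB = 40, 128

all_cores = sum(c for c, _, _ in vms.values())
all_ram = sum(r for _, r, _ in vms.values())
print(f"Everything at once: {all_cores}c / {all_ram}GB (host: {HOST_CORES}c / {HOST_RAM_GB}GB)")

# Realistic worst case: the 24/7 VMs plus only ONE of the 12c/32GB VMs awake.
base_cores = sum(c for c, _, on in vms.values() if on)
base_ram = sum(r for _, r, on in vms.values() if on)
print(f"24/7 set + one 12c/32GB VM: {base_cores + 12}c / {base_ram + 32}GB")
```

Running everything at once would want 62c/156GB against the 40c/128GB the box actually has, but the 24/7 set plus a single 12-core VM comes to 38c/92GB - hence the note about overbooking below.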

 

 

A small amount of thin provisioning/overbooking (RAM only) won't hurt, as long as I don't try using more than one 12-core VM at a time (there's no need to - just pause and switch). macOS and Linux would have gotten Radeon/FirePro cards (i.e., an RX Vega 64) for best compatibility and stability, but market forces have prevented this. Windows Server 2016 can have the server's on-board audio. Windows 10 gets the Audigy Rx and whatever GPU I can give it. The macOS and Linux VMs get whatever audio the GRID K520s provide (either that or a software solution). The whole purpose behind NextCloud is to phase out the use of Google Drive/Photos, iCloud, Box.com, and other externally-hosted cloud services (Mega can stay, though). Windows 10, Remote Development, and the Temp/Testing VM will be put to sleep (or powered off) until they're needed (Wake on LAN), since they don't host any essential services. The reason for why I'm doing this.
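
As for waking the sleeping VMs back up: a standard Wake on LAN magic packet is just 6 bytes of 0xFF followed by the target MAC repeated 16 times, sent as a UDP broadcast. A minimal sketch (assuming the guests/vNICs are actually set up to honour WoL; the MAC and broadcast address below are placeholders, not my real ones):

```python
import socket

def wake(mac: str, broadcast: str = "192.168.1.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6x 0xFF + the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Placeholder MAC for (e.g.) the Windows 10 VM's vNIC:
wake("00:0c:29:aa:bb:cc")
```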

 

 

Other Project Mirrors:

  1. Windows7ge

    Windows7ge

    The only problem I see is that it'd be very ill-advised to use the 2XXX series Xeons on a quad-socket board.

  2. TopHatProductions115

    TopHatProductions115

    @Windows7ge In due time, I will switch over to the 4XXX Xeons. Starting with a pair of 2XXX Xeons will allow me to keep the initial cost(s) low until I can afford to purchase more cores...

  3. Windows7ge

    Windows7ge

    If you know you'll have a use for the CPUs afterwards then that's an option I suppose.

  4. TopHatProductions115

    TopHatProductions115

    On a side note, just caught wind of this:

     


    Kinda needed to see that, since I'm now considering ditching the idea of getting two Vega 64s if I can just grab a single SR-IOV card instead and share it between macOS and Linux. It's not like I'll be using them for extreme gaming tasks, so I should be able to save some money by doing this. Current AIB options include:

    It would also free up some PCIe slots for other devices I could use - maybe leaving room for a better capture card in the future?

  5. Windows7ge

    Windows7ge

    SR-IOV is something that got brought up to me on the level1techs forum (yes, I'm cheating on the linustechtips forum) as a way to divide my multi-port 10Gbit NIC between my host and the VFIO VM I created, passed a GPU through to, and got Looking Glass working on.

     

    I have not yet explored SR-IOV for NICs, but the issue is that, being on the same card, the ports belong to the same IOMMU group - and when you pass a device through to a VM, the system really doesn't like splitting the devices in an IOMMU group. That's where SR-IOV would come in. For now I've just created a network bridge in Linux and attached the VM to the bridge. That lets me pass data between the virtual and physical network at 10Gbit. I may move forward with SR-IOV, I may not, because set up like this, if I make more VMs on my desktop I can have them all share the 10Gbit NIC simultaneously.

     

    Depending on your use case, SR-IOV is definitely a route you can go. Do as much research as you can to make sure it'll work on the hardware and OS you plan to use. I don't know about VMware/ESXi.

  6. TopHatProductions115

    TopHatProductions115

    The only thing that kinda irks me is the fact that the SR-IOV cards I picked aren't Vega-based. Due to pricing (used market), I went with them over a Vega Frontier Edition (and anything newer)...

  7. Windows7ge

    Windows7ge

    Do any Vega cards exist that support the feature? It seems like something that may be specific to the FirePros. Wendell mentioned adding support to consumer cards, but I can't imagine AMD doing that unless they knew it was a feature everyday consumers wanted.

  8. TopHatProductions115

    TopHatProductions115

     

    @Windows7ge From what I'm seeing online, consumer Vega doesn't have it. Which leaves server/enterprise cards that are 2-3 years old - and still going for over 1k USD used. 

     

  9. Windows7ge

    Windows7ge

    It's actually because of VFIO & Looking Glass that I'm able to move away from Windows and just use it in a high performance VM for the few applications I still need it for.

     

    If you don't need the VMs for anything super graphically stressful, GPU acceleration, etc., then yeah, go for the FirePros if you can find them for a half-decent price. The extra benefit here, as you mentioned with the DL580 G7, is that it's scalable. Need to host 20 more VMs? Pop in 1 or 2 more FirePros & spin them up.

     

    What you could do is use the FirePros to consolidate the VMs that need some graphics power but not an excessive amount, then just have one up-to-date Vega GPU to give to a VM that needs a lot of power.

  10. TopHatProductions115

    TopHatProductions115

    Just eliminated one of the FirePro options, due to relatively poor performance-to-cost. The S7150 x2 was barely better than a mid-range Fermi GPU, but sellers were demanding upwards of ~1k USD for it - probably solely for the VRAM size (2x 8GB GDDR5). While the FirePro that I left on the list comes with only 2x 4GB of HBM, its memory bandwidth is much higher. It also consumes less power, and is way cheaper (~300 USD average). On a lighter note, I've picked up a pair of Intel Xeon E7-8870s - so here's to hoping that I can go full S8 in the years to come :3

  11. Windows7ge

    Windows7ge

    I don't even know where you can find 8 socket boards but each individual CPU can handle up to 4TB of RAM so if you need a lot of that :D.

  12. TopHatProductions115

    TopHatProductions115

    Just got my second E7-8870 in the mail today. Also ordered a 3rd E7-8870, in anticipation of the upcoming purchase later this year (DL580 G7). Might pick up some more RAM while I'm at it, to keep the DIMM slots filled ;) Still deciding on whether to switch every Windows machine I have to Enterprise LTSB/LTSC...

  13. Windows7ge

    Windows7ge

    Nice. I'm currently in limbo waiting on NEMIX to return my request for RAM replacement. They sent me the wrong stuff. I'll post a Status Update about it tomorrow.

     

    Eh, depending on what you're doing with them, I don't know how worth it that'd be. That'd be some costly software - money that could have been spent on more hardware.

  14. TopHatProductions115

    TopHatProductions115

    Okay - just ordered my 4th E7-8870 for 15 USD (Best Offer is nice), and am waiting on an offer for another 24GB of RAM. If that goes through, the only thing(s) I'll need to get are the server itself and the PCIe risers. I may consider getting the new sound card, to start transitioning my audio setup over (before starting with ESXi). Once I get that figured out, I'll be able to use it as a raw replacement for the T7500 until I get the GPU(s)...

  15. Windows7ge

    Windows7ge

    I would very much like to see pics once this comes together.

     

    With a total of 40C/80T I'd opt for denser RAM than 4GB or 8GB sticks. 16GB ECC RDIMMs are the sweet spot right now for cost-to-density.

  16. TopHatProductions115

    TopHatProductions115

    @Windows7ge At prices like this, and with Best Offer available?

    I remember 16GB sticks coming in at ~26 USD per unit when I last bought them. 

  17. Windows7ge

    Windows7ge

    Eh, the idea is density. With this many sockets & this many slots, it just seems like kind of a waste not to use at least 8GB sticks. With RDIMM modules I'd want to go no less than 16GB, especially for a VM application. You can never have too much RAM when dealing with VMs, trust me.

  18. TopHatProductions115

    TopHatProductions115

    @Windows7ge I'll consider switching to 8GB sticks when I get my next job. For now, I have no choice but to stick with 4GB sticks, so that I can still afford the purchases scheduled for later this year. There's no guarantee that I'll be able to continue the streak of serendipity I've been riding these last few days, so I have to play it safe...

  19. TopHatProductions115

    TopHatProductions115

    Ooo - the kind seller just accepted my offer. I now have 64GB of RAM ready for the server when I finally purchase it later this year :D 

  20. Windows7ge

    Windows7ge

    Fair enough. I too know what it's like to be on a tight budget. I just choose to prioritise my servers - that's why they're so overkill.

     

    64GB? Sweet. I take it that's 4x4GB per physical CPU correct?

  21. TopHatProductions115

    TopHatProductions115

    Yes. The original plan was only designed around 64GB of RAM, so no harm done there :D

  22. TopHatProductions115

    TopHatProductions115

    My current mouse likes to stop working every once in a while, so I might end up replacing it this week as well if it gets bad enough...

  23. Windows7ge

    Windows7ge

    My mouse is the only component of my computer that I haven't changed since I first built it back somewhere around 2008~2009. It's worn but still works perfectly.

  24. TopHatProductions115

    TopHatProductions115

    Might try using my current 2TB (3.5-inch) HDDs as backup drives for the ones I put in this server, if everything goes well. Or maybe set them up as extended storage for the server, if I get the funds to do so?

  25. Windows7ge

    Windows7ge

    It would be a good idea to back up the VMs and ESXi's config. Alternatively, a disk shelf and a RAID controller with external ports should let you extend the storage if you don't have enough internal bays.
