
Time for a much-needed update to Project Personal Datacentre. Due to a new development, Plans 2 and 3 are being changed around. Here's what's up:

TL;DR HPE made the GX Blades incompatible with PCI-e x16 without some serious measures. And I'm not really in the mood for B.S. from an OEM, so I'll take my chances with hand-building my first real workstation when the time comes. @Ithanul, your instincts were right - HPE ruined the seemingly perfect enclosure option (c7000). Also, the E5-4657L v2's are way cheaper than the E5-2697 v2's - which makes them a tempting option for an HPC server/workstation build. I've also decided to remove some parts that are of no interest to me as of this writing. With that, here are the current prospects:

 

 

PLAN 1 :: 2019-2022

Base Computer/Tower :: Dell Precision T7610 or Dell PowerEdge T620
CPU ::  2x Intel Xeon E5-2697 v2 or
            2x Intel Xeon E5-2695 v2 (lower cost/clock speed) or
            2x Intel Xeon E5-4657L v2 (lowest cost/clock speed)
RAM :: 16x 4GB DDR3-1333 PC3-10600 REG ECC (may use 8GB DIMMs)
STR ::  4x 2TB HITACHI HUA722020ALA330 HDDs +
            1x Western Digital WD Blue 3D NAND 500GB SATA SSD (vSphere Flash Read Cache) +
            1x 250GB HITACHI HTS542525K9SA00 HDD (for the VMware Linux Appliance)
GPU :: 2x AMD Radeon RX Vega 64 +
            1x NVIDIA GeForce GTX 1080/1080 Ti (may leave this out, depending on budget)
SFX ::  1x Creative Sound Blaster Zx with ACM
PRP ::  1x Logitech M510 Wireless Mouse or
            1x Dell MS819 Wired Mouse
            1x HITACHI GH24NSC0 optical drive
            1x TESmart HKS0401A-variant KVM Switch (for switching between live VMs)

Spoiler

        1x Blackmagic Design DeckLink 4K/6G SDI (may leave this out - software x264/x265) *

        1x 49-inch curved [4K or 5K] 60Hz, 90%+ sRGB, DisplayHDR monitor (may leave this out and keep the current monitor) *

NOTE :: All other components (not included in the linked photo) provided by OEM

 

 

PLAN 2 :: 2021-2023 - NOT GUARANTEED TO HAPPEN

Base Computer/Tower :: Custom configuration 

MTB :: AsRock Rack EP2C602-4L/D16 (SSI EEB - for E5-2600 v2) or
            Supermicro X9DRI-F-O (normal E-ATX - for E5-2600 v2) or
            Supermicro MBD-X9DR3-F-O (SSI EEB - for E5-2600 v2) or
            Supermicro X9DRI-LN4F+ (enhanced E-ATX - for E5-2600 v2) ...

Spoiler

 or  Supermicro X9QRi-F (proprietary - for E5-4600 v2) or
       Supermicro X9QR7-TF-JBOD (proprietary - for E5-4600 v2) or
       Intel S4600LT2 + F4S16 (L/R) riser cards (for E5-4600 v2)

 

The quad-socket route is not likely to happen if the dual-socket motherboards can be had for less than 560 USD, seeing that the PCI-e expansion on Intel's mobo is quite limited, and prices for mobos with more slots are in the thousands...

CPU ::  2-4x Intel Xeon E5-4657L v2's 

RAM :: 16x 4GB DDR3-1333 PC3-10600 REG ECC (probably will use 8GB DIMMs instead)

STR ::  4-6x 2TB HITACHI HUA722020ALA330 HDDs +
            1x Western Digital WD Blue 3D NAND 500GB SATA SSD (vSphere Flash Read Cache) +
            1x 250GB HITACHI HTS542525K9SA00 HDD (for the VMware Linux Appliance)
GPU ::  same mess as above*
SFX ::  1x Creative Sound Blaster Zx with ACM (may use my current audio card if I run out of x16 slots)
PRP ::  1x Logitech M510 Wireless Mouse or
            1x Dell MS819 Wired Mouse
            1-2x HITACHI GH24NSC0 optical drives
            1x TESmart HKS0401A-variant KVM Switch (for switching between live VMs)

Spoiler

        1x Blackmagic Design DeckLink 4K/6G SDI (may leave this out - software x264/x265) *

        1x 49-inch curved [4K or 5K] 60Hz, 90%+ sRGB, DisplayHDR monitor (may leave this out and keep the current monitor) *

PSU ::  2x Corsair AX1200i 1200W 80+ Platinums

Spoiler

 

             expecting a minimum of 2 CPUs/3 GPUs and a maximum of 4 CPUs/3 GPUs...

             I could borrow an Arc Reactor, or go dual-PSU like Linus just did with 6 Editors - One Tower?

 

NOTE :: Case and fans/cooler(s) are still being decided on (Corsair A70's or Noctua NH-D15's). Most likely going Noctua for the case fans. Now to pick a case...
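For a ballpark on whether two AX1200i units are really needed, here's a quick back-of-the-envelope sketch in Python. The TDP/board-power figures are nominal spec-sheet numbers (not measured draw), and the 150 W base budget for the board, RAM, drives, and fans is my own assumption:

```python
# Back-of-the-envelope PSU sizing for Plan 2.
# TDP/board-power figures are nominal spec-sheet values, and BASE_W is
# an assumed budget for motherboard, RAM, drives, and fans - estimates only.
CPU_TDP_W = 115   # Intel Xeon E5-4657L v2 (115 W TDP)
GPU_TBP_W = 295   # AMD Radeon RX Vega 64 reference board power
BASE_W = 150      # assumption: board + RAM + drives + fans

def load_estimate(cpus: int, gpus: int) -> int:
    """Rough peak system load in watts for a given CPU/GPU count."""
    return cpus * CPU_TDP_W + gpus * GPU_TBP_W + BASE_W

for cpus in (2, 4):
    watts = load_estimate(cpus, 3)
    # keep each AX1200i at or below ~1000 W so it stays in its efficiency band
    psus = -(-watts // 1000)  # ceiling division
    print(f"{cpus} CPUs / 3 GPUs: ~{watts} W -> {psus}x AX1200i")
```

Both the 2-CPU and 4-CPU cases land in the roughly 1.3-1.5 kW range at peak, which is why a single 1200 W unit looks marginal and the dual-PSU option stays on the table.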

 

 

PLAN 3 :: 2023-20XX

Base Computer/Tower :: Custom configuration 
MTB ::  ASUS ROG Zenith Extreme X399 or

             MSI MEG X399 Creation (@LukeSavenije Your motherboard suggestion)

Spoiler

 

As I stated in a previous reply, I'll see if I can make the MEG work. Hypothetically, I could:

  • use PCI-e x16, slot 0 for a GPU
  • use PCI-e x16, slot 1 with a splitter, and hope there's enough room between slots 0 and 2
  • use PCI-e x16, slot 2 for another GPU
  • use PCI-e x16, slot 3 for another GPU

 

With the ROG, I was gonna do something like either:

  • use PCI-e x16, slot 0 for a GPU
  • use PCI-e x16, slot 1 for another GPU
  • use PCI-e x16, slot 2 for another GPU
  • use PCI-e x16, slot 3 with a splitter for the audio and capture card(s)

or 

  • use PCI-e x16, slot 0 with a splitter for the audio and capture card(s)
  • use PCI-e x16, slot 1 for a GPU
  • use PCI-e x16, slot 2 for another GPU
  • use PCI-e x16, slot 3 for another GPU

and since there tends to be enough space for smaller devices near the top and bottom, that would have been easy to set up. I'm not exactly up for taking a leap and hoping there's enough space later. Does anyone on the forum own the MEG, and could take measurements?

 

CPU ::  AMD Ryzen Threadripper 2970WX or

             AMD Ryzen Threadripper 2990WX
RAM :: 4x 8GB DDR4-3200 PC4-25600 ECC, Single-Sided B (Hynix or SAMSUNG)
STR ::  4-6x 2TB HITACHI HUA722020ALA330 HDDs +
            1x Western Digital WD Blue 3D NAND 500GB SATA SSD (vSphere Flash Read Cache) +
            1x 250GB HITACHI HTS542525K9SA00 HDD (for the VMware Linux Appliance)
GPU ::  same mess as above*
SFX ::  1x Creative Sound Blaster ZxR with ACM
PRP ::  1x Logitech M510 Wireless Mouse or
            1x Dell MS819 Wired Mouse
            2x HITACHI GH24NSC0 optical drives
            1x TESmart HKS0401A-variant KVM Switch (for switching between live VMs)

Spoiler

        1x Blackmagic Design DeckLink 4K/6G SDI (may leave this out - software x264/x265) *

        1x 49-inch curved [4K or 5K] 60Hz, 90%+ sRGB, DisplayHDR monitor (may leave this out and keep the current monitor) *

PSU ::  Corsair AX1200i 1200W 80+ Platinum (You can thank @LukeSavenije for this change)

 

NOTE :: Case and fans/cooler are still being decided on. Most likely going Noctua for the case fans. Now to just decide on the case...

 

 

Can't wait to start on Plan 1 (finally)!!! :D 

  1. TopHatProductions115


    Current ESXi 6.X plans:

    • VMware Linux Appliance (for managing and monitoring ESXi - a must-have)
    • Windows Server 2016 Datacentre (Simple DNSCrypt, SoftEther VPN, Chrome Remote Desktop, a password manager, etc.)
    • Mac OS X El Capitan/Mojave (Google Drive sync *, MEGA file sync *, Plex Media Server, iTunes, DaVinci Resolve, software development, etc.)
    • Manjaro Linux (NextCloud, DNSCrypt Server, Android backups, Discord bots, F@H, BOINC, software development etc.)
    • Windows 10 Enterprise (VM for front-end tasks. Steam/GOG, OBS livestreaming, SVP4 Pro/Handbrake, Moonlight Streaming server, etc.)
    • Game Server Container (for when I want to host a temporary multiplayer server. OS and general config can vary) **
    • Deployment Container (for when new software packages need to be tested before deployment) **

    ** Temporary VMs with dynamic configuration and no long-term persistence

    *  Temporary task that will be replaced by a permanent, self-hosted solution

    ^  Shared task - not statically restricted to one container/VM

     

     

    I think the following distribution of resources might work:

    • VMware NIX Appliance   :: 24/7 - true,  dedicatedHDD - true,  2c/4t + 4GB
    • Deployment Container   :: 24/7 - false, dedicatedHDD - false, nonpersistent
    • Game Server Container  :: 24/7 - false, dedicatedHDD - false, nonpersistent
    • Windows Server 2016    :: 24/7 - true,  dedicatedHDD - true,  4c/8t + 8-12GB
    • Manjaro OS, Arch Linux :: 24/7 - true,  dedicatedHDD - true,  4c/8t + 12-16GB
    • Mac OS X - El Capitan  :: 24/7 - false, dedicatedHDD - true,  12c/24t + 16-32GB
    • Windows 10 Enterprise  :: 24/7 - false, dedicatedHDD - true,  12c/24t + 16-32GB

    A small amount of thin provisioning/overbooking (RAM only) won't hurt, as long as I don't try using more than one 12-core VM at a time (there's no need to - just pause and switch). ESXi itself is usually run from a USB thumb drive, so no harm done.

    MacOS and Linux are getting Vega 64s, for best compatibility and stability. Windows Server 2016 can have the motherboard audio. Windows 10 gets the audio card and whatever GPU I can give it (it might even get stuck with the GTX 1060 6GB I currently have), because the GTX 1080/1080 Ti might not be in the cards by the time I get the cash.

    The whole purpose behind NextCloud is to phase out the use of my Google Drive/Photos, iCloud, Box.com, and Mega accounts. MacOS is still being planned out, so it might not be in charge of multimedia when the final build starts. The reason for why I'm doing this. BTW, there are 2 spare cores left on the table for now...
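The core math above can be sanity-checked with a quick Python tally (assuming the dual E5-2697 v2 configuration, 24 cores total; the core counts are copied straight from the table):

```python
# Tally the planned allocations against 2x E5-2697 v2 (24 cores total).
# The two containers are nonpersistent with no reserved cores, so they're skipped.
HOST_CORES = 24

vms = {
    # name: (cores, runs 24/7)
    "VMware NIX Appliance":   (2,  True),
    "Windows Server 2016":    (4,  True),
    "Manjaro OS, Arch Linux": (4,  True),
    "Mac OS X - El Capitan":  (12, False),
    "Windows 10 Enterprise":  (12, False),
}

always_on = sum(cores for cores, on in vms.values() if on)
# Only one 12-core VM is awake at a time (pause and switch).
peak = always_on + max(cores for cores, on in vms.values() if not on)

print(f"always-on: {always_on}c, peak: {peak}c, spare: {HOST_CORES - peak}c")
```

That accounts for the 2 spare cores mentioned above, and on the quad E5-4657L v2 option (48 cores), doubling every figure leaves the proportions unchanged.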

     

    If I somehow end up with the quad-Xeon build (very unlikely), I'd simply double the core-count(s) allocated to each VM, since I'd have double the available cores to work with. 

    If I end up with fewer cores than expected, I'm still willing to cut out the VMware Appliance, depending on whether I've managed to find it or not. Can't install something I can't locate an image/container source for :( 

     

    For those of you who are wondering, the VMware Appliance, Windows Server, and Manjaro will be the only VMs running 24/7. MacOS, Windows 10, and the Containers will be put to sleep until they are needed.

     

     

     

    P.S. By the time I get to plan 3, DDR5 should have come out - which will drop the price of DDR4 to a usable level...

  2. TopHatProductions115


    BTW, still considering these options, depending on whether the PCI-e slots end up being too clustered together on the motherboard:

    This is mainly because the quad-socket motherboard only comes with 4 PCI-e x16 slots. And as far as I know, it appears to be the only quad-socket mobo available on the used market. Unless someone has another one in mind...

     

    Not even looking at these, due to cost:

    • iPass OSS-KIT-EXP-9000-2M
    • iPass OSS-KIT-EXP-3810-1M
    • iPass OSS-KIT-EXP-6810

    Yeah - I'll have to do everything by hand if I go down this route. Kinda wishing that HPE had gone the consumer-friendly route with their c7000...

     

    On the other hand, though, I won't need these if I get a big enough motherboard from Supermicro :3

  3. Ithanul


    HPE is not the grandest, to say the least. At work we recently had to deal with their support. Talk about wasting 30-40 minutes of our time telling them that the issue was most likely the motherboard or CPU. Finally got them to send a tech and, guess what, bad motherboard. We would have swapped the part ourselves, but the darn things have a three-year warranty we had to pay for, since we could not buy them without the warranty.

     

    Also, research those Xeons and make sure there are no BIOS issues with them and those boards. At work we just found out some random reboots are occurring because of a bad microcode release on these Xeon workstations. Overall, I'd just say avoid HPE until they get their crap straightened out.

     

    On the X399 motherboards, if you go that route, make sure to research how well ESXi works on those boards. Some X399 boards were a pain last I researched them, but that was a year or so back now. There's a good chance newer BIOS revisions have corrected the issues. Personally, I would lean toward the MSI Creation board, as those have good VRMs and the WX chips are power hungry. Along with that, 1usmus releases custom BIOSes for those boards (yep, the chap that makes the Ryzen/TR RAM calculator releases modified BIOSes for that board). You may be better off waiting for news on the next-gen TRs, as 7nm should help with power draw (along with supposedly better latency and RAM compatibility - not set-in-stone info though, so take it as rumor until official chips are released and tested).

  4. TopHatProductions115


    Do keep in mind that if I end up completing Plan 1, I may end up skipping Plan 2. This is because Ryzen Threadripper is probably going to get refreshed soon, which makes it a better pick for the next iteration of this build. That means Plan 3 will receive changes as well, due to price drops and other factors (in addition to the upcoming advent of DDR5).

  5. TopHatProductions115


    Just made some changes to Plan 2 - the motherboard options have been expanded to include more, cheaper options (dual-socket in this case). If possible, it would be nice to start out with a dual-socket build, and expand to 4 CPU sockets if I need more/if I have the spare cash :3 But I'm not counting on it. Also, the power bill...

  6. TopHatProductions115


    Just added a KVM Switch to the list, for switching between the live VMs more seamlessly (hopefully). Also made some changes to the planned storage configuration and use of the SSD in ESXi. Also, as mentioned in the VMware thread I linked, ESXi will most likely be running from a USB drive this time...

  7. TopHatProductions115


    I'm kinda tempted to have Manjaro managing my Google Drive sync (at least until I fully transition to NextCloud). If the integration with Google accounts is as nice as I think it is, MacOS will only be managing MEGA by the time I get around to implementing this new workstation/server hybrid...

  8. TopHatProductions115


    On a side note, this single-tower setup is currently being categorised as a Workstation/Server Hybrid. Also, here's part of the reason for why the current Turing cards are not under consideration for this project:

    If you want to make suggestions for better categorisations, please reply via Discord...

  9. TopHatProductions115


    I don't know why, but now I'm pretty tempted to grab a dedicated 2TB 2.5-inch 7200RPM HDD for the Plex library (since the T7610 has four 2.5-inch bays)?
    dell1.jpg

     

    Just changed some details for the audio devices as well. On a side note, the T7610 has Intel vPro - a necessary evil, I suppose. Guess I could start learning how to remotely manage the workstation at some point? Especially for updates and such...

  10. TopHatProductions115


    No longer being pursued - please move to here if you haven't already...
