
Project Olympus

mattsb42

This has been a slow burn, but things are finally coming together, so I figured I would put together a build log.

 

The Problem

 

The original inspiration for this build was several things happening at once:

  1. The CPU and SSD in my gaming rig (i7-4771 and 1TB Samsung 840 EVO) are finally starting to become bottlenecks for me.
  2. I want to move my dev activity from a laptop to a more powerful desktop, since I always use it docked for that work anyway.
  3. I haven't had or needed a NAS for several years, but I've been starting to feel the need again recently.

So I had basic requirements for three systems I wanted to build.

 

Gaming

  • 4+ cores
  • 16GB+ RAM
  • NVMe SSD
  • GTX1070 (carry-over from current build because I'm not hitting its limits any time soon)

Dev

  • 8+ cores
  • 16GB+ RAM
  • Some GPU that can drive three 2560x1440 panels

NAS

  • 10TB+ usable storage
  • Room to upgrade storage and networking later

 

I specifically wanted the gaming and dev systems to be separate because I have two separate, and very different, interfaces to each, and I wanted them to be able to function independently of each other.

I've also been burned in the past by running a NAS from the same OS as a system I actually use directly, so that needed to be its own thing.

 

However, thinking about how I'd be using these systems, they would be sitting right next to each other on a shelf, and probably never moving...and there are a lot of duplicate parts there...

 

Some of you probably see where I'm going here. ;)

 

What if I could put them all in the same system?

Historically, this has never been a real option; I've run VM hosts at home before (VMWare Server, ESXi, Xen, etc.), but interacting with VMs directly was never really feasible on those platforms. If you wanted to interact with a VM, your only option was to have another system that you used to connect to it. I've toyed with thin clients in the past, and especially for multi-display setups I always found them wanting.

 

BUT...things have changed since then, as the "${X} ${ROLE}s, ${Y} CPUs" demonstrations that keep popping up around here and elsewhere have shown.

Finally, IOMMU pass-through tech has trickled its way down from enterprise applications and is viable (well, ok, on the edge of viable) for use without having to be a full-time sysadmin to get it set up and keep it working.
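
(If you want a quick sanity check of how "viable" things are on your own hardware, here's a minimal sketch, assuming a Linux host with the IOMMU enabled in firmware and on the kernel command line; the group layout it prints is what decides which devices can be handed to which VM, since devices that share a group generally have to be passed through together.)

```python
#!/usr/bin/env python3
"""Minimal sketch: list IOMMU groups and the PCI devices in each.

Assumes a Linux host with the IOMMU turned on (e.g. amd_iommu=on on the
kernel command line); if it isn't, /sys/kernel/iommu_groups will be empty.
"""
from pathlib import Path

GROUPS = Path("/sys/kernel/iommu_groups")


def main() -> None:
    groups = sorted(GROUPS.iterdir(), key=lambda p: int(p.name)) if GROUPS.is_dir() else []
    if not groups:
        print("No IOMMU groups found -- is the IOMMU enabled in firmware and the kernel?")
        return
    for group in groups:
        print(f"IOMMU group {group.name}:")
        for dev in sorted((group / "devices").iterdir()):
            # Each entry is a PCI address (e.g. 0000:42:00.0); its uevent file
            # lists the vendor:device ID and whichever driver is bound, if any.
            fields = dict(
                line.split("=", 1)
                for line in (dev / "uevent").read_text().splitlines()
                if "=" in line
            )
            print(f"  {dev.name}  [{fields.get('PCI_ID', '?')}]  driver={fields.get('DRIVER', 'none')}")


if __name__ == "__main__":
    main()
```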

 

The Solution...maybe...

 

So could I put everything in one host?

I decided yes, I probably could...and so the initial build began (in October...I said it's been a slow burn.)

3h3yVsg.jpg?1

 

  • Threadripper 2950x
  • ASRock X399M mATX
  • 4x 16GB G.Skill Trident Z DDR4-3200 CL14-14-14-34
  • 512GB Samsung 970 Pro NVMe
  • 2x 2TB Samsung 860 EVO SATA
  • 2x 10TB Seagate IronWolf
  • Corsair AX1600i
  • Thermaltake Core V21

 

Like I said, I'm carrying over my GTX1070 from my previous rig, but the open question remained: what GPU can I give to the dev VM?

 

Well, it turns out that, despite absurd retail prices, Quadro NVS cards are super cheap on eBay. So I picked up some NVS295s (2x DP) to experiment with and an NVS450 (4x DP) as a hopeful for the dev GPU.

 

Unfortunately, as it turns out, IOMMU pass-through does not work so well if the device does not support UEFI.

...and because the NVS295 and NVS450 both hail from 2009 (despite still being manufactured, I think?), neither of them does.
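
(Side note for anyone shopping old cards: you can check a ROM dump for a UEFI image yourself. A PCI option ROM is a chain of images, each starting with a 0x55AA signature and pointing at a "PCIR" data structure whose code-type byte is 0x00 for legacy x86 BIOS and 0x03 for EFI. A rough sketch, assuming you already have a ROM dump for the card from somewhere:)

```python
#!/usr/bin/env python3
"""Rough sketch: report the image types inside a PCI option ROM dump.

Each image starts with the 0x55AA signature; bytes 0x18-0x19 point at a
"PCIR" data structure whose code-type byte (offset 0x14) is 0x00 for
legacy x86 BIOS and 0x03 for EFI.  Usage: check_rom.py <dump.rom>
"""
import sys

CODE_TYPES = {0x00: "x86 legacy BIOS", 0x03: "EFI"}


def image_types(rom: bytes):
    offset = 0
    while offset + 0x1A <= len(rom) and rom[offset:offset + 2] == b"\x55\xAA":
        pcir = offset + int.from_bytes(rom[offset + 0x18:offset + 0x1A], "little")
        if pcir + 0x16 > len(rom) or rom[pcir:pcir + 4] != b"PCIR":
            break
        code_type = rom[pcir + 0x14]
        yield CODE_TYPES.get(code_type, f"unknown (0x{code_type:02x})")
        if rom[pcir + 0x15] & 0x80:          # indicator bit: last image in the ROM
            break
        # image length (offset 0x10) is in 512-byte units; skip to the next image
        length = int.from_bytes(rom[pcir + 0x10:pcir + 0x12], "little") * 512
        if length == 0:
            break
        offset += length


def main() -> None:
    with open(sys.argv[1], "rb") as f:
        types = list(image_types(f.read()))
    for t in types:
        print("found image:", t)
    print("UEFI-capable" if "EFI" in types else "no EFI image found")


if __name__ == "__main__":
    main()
```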

 

This had an interesting effect when I tried passing them through anyway.

It would work ok at first; a bit slow on the uptake on boot, but I can live with that.

But once I turned off the VM, the card would freeze and remain unresponsive until I rebooted the host.

The best I could tell, it was getting stuck in some sort of semi-activated state that it could not be pulled out of.

Maybe there's a way around that, idk. My solution to this was to break down and upgrade to an NVS510 (4x mDP).

This was much more expensive than the older cards (295: $4.25, 450: $30, 510: $60), but still a bargain compared to any other options for a single card that can drive three 1440p displays.

 

After testing, I found that the 510 did in fact work perfectly, which finally validated that this experiment would actually work. And so the game was on.

 

So, this is where I stand today.

All of this...needs to go into that...

Ph40OeY.jpg?1

 

New to the game:

  • Black Ice GTS140
  • Black Ice GTS280
  • [waterblocks off-screen apparently]
  • Intel I340-T4
  • StarTech quad-USB3.0 controller
  • Quadro NVS295
  • Quadro NVS510
  • ...and a few more unusual things... ;)

 

But Matt, you say...there are only three PCIe slots on that board, but you mentioned five PCIe cards. What gives?

Well, this first post is already rather large, so I'll talk about that as I get to the related mods.

Eagle-eyed readers might see the answer though.


your work surface has me nervous.  Looking forward to seeing all this stuffed in a v21

"And I'll be damned if I let myself trip from a lesser man's ledge"


  • 4 weeks later...
On 6/10/2019 at 10:28 AM, Velcade said:

your work surface has me nervous

Don't worry, that's only for staging and cabling work. Anything more involved happens elsewhere (though I have been very impressed with those Worx sawhorse-tables).

 

Slowly coming along. Getting close to having the bones done...sneak shots of the revised structure for a hint at what is coming.

 

yc4gxVU.jpg?1

 

The rear is starting to take shape.

S5TOOYT.jpg?1

 

The right-hand cutout is just to get rid of the old tubing outlet to make the inner surface flush (has anyone ever actually used those? I feel like if you need to send tubing or wiring out of your case you're probably going to be cutting it up anyway...)

 

Anyway, the left-hand cutout does actually have a reason...

 

SpaADw1.jpg?1PQ380UC.jpg?1GQuFo57.jpg?1

 

The rear panel will be getting a new surface panel that covers the holes and provides cut-outs for the new expansion slots.


I noticed a small display screen (10.1 inch?) on the table with the parts

 

What are you planning to use that for?


Heheh, good eye.

 

That's actually a little 5" display[1].

 

...side note, little HDMI displays are amazingly available now...I remember looking for something like this ~5-6 years ago and there being absolutely nothing aside from some hackery around the Retina iPad panel (early eDP panel). I'm not sure what tech trend it was that made these little HDMI displays so readily available, but I heartily approve. :)

 

Anyway, what is it for, yes. On-board host display.

 

I found in testing that the viability of doing GPU passthrough of the primary GPU is...let's say finicky. It seemed to be highly dependent on the GPU and the VM OS, not to mention on coercing the GPU to let me pull a ROM off it. And of course, to top it off, it means I would lose an actual host-level display. These issues were actually some of the first things that pushed me to find a solution to the expansion slot problem: I didn't want to pass through one of the main GPUs, but I also didn't want to lose a full x16 slot to the host GPU.
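
(For the curious: on a Linux host, the usual way to pull a ROM is the card's sysfs "rom" attribute, roughly as sketched below, run as root with the card's PCI address. Whether the card actually hands you a valid image is the finicky part, especially for the GPU the host booted on.)

```python
#!/usr/bin/env python3
"""Sketch: dump a card's option ROM via Linux sysfs (run as root).

Usage: dump_rom.py 0000:42:00.0 card.rom
Works only if the card is willing to expose its ROM, which is exactly
the part that tends to be finicky with the primary/boot GPU.
"""
import sys
from pathlib import Path


def dump_rom(pci_addr: str, out_path: str) -> None:
    rom = Path("/sys/bus/pci/devices") / pci_addr / "rom"
    rom.write_text("1")           # writing 1 enables reads of the ROM BAR
    try:
        data = rom.read_bytes()
    finally:
        rom.write_text("0")       # always turn it back off
    Path(out_path).write_bytes(data)
    print(f"wrote {len(data)} bytes to {out_path}")


if __name__ == "__main__":
    dump_rom(sys.argv[1], sys.argv[2])
```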

 

After much searching and many dead-ends, I came to the conclusion that almost all of the PCIe splitters on the market are designed for mining, and so only allocate one lane per split child. This is not what I wanted, as I have some devices that will need more bandwidth than one PCIe 3.0 lane can provide. Fortunately, there is a niche product that a couple companies make: PCIe M.2 break-out boards.
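
(To put rough numbers on that: a PCIe 3.0 lane runs at 8 GT/s with 128b/130b encoding, so something like 985 MB/s each way before protocol overhead. A quad USB 3.0 controller or a future 10Gb NIC blows well past a single lane, which is why the one-lane mining risers were a non-starter. Quick back-of-the-envelope below; the device peaks are just my own rough theoretical numbers.)

```python
#!/usr/bin/env python3
"""Back-of-the-envelope PCIe 3.0 bandwidth check (rough numbers only).

A PCIe 3.0 lane runs at 8 GT/s with 128b/130b encoding, i.e. roughly
985 MB/s per lane per direction before protocol overhead.
"""

LANE_BYTES_PER_S = 8e9 * (128 / 130) / 8     # ~985 MB/s per PCIe 3.0 lane


def lanes_needed(peak_bytes_per_s: float) -> int:
    """Smallest standard link width (x1/x2/x4/x8/x16) that covers a peak rate."""
    lanes = 1
    while lanes * LANE_BYTES_PER_S < peak_bytes_per_s:
        lanes *= 2
    return lanes


if __name__ == "__main__":
    # Rough theoretical peaks for the kinds of cards going into this build.
    devices = {
        "quad gigabit NIC (I340-T4)": 4 * 1e9 / 8,
        "quad USB 3.0 controller": 4 * 5e9 / 8,
        "possible 10GbE NIC later": 10e9 / 8,
    }
    print(f"one PCIe 3.0 lane ~= {LANE_BYTES_PER_S / 1e6:.0f} MB/s")
    for name, peak in devices.items():
        print(f"{name}: ~{peak / 1e6:.0f} MB/s -> needs x{lanes_needed(peak)}")
```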

 

I went with the Asus "Hyper M.2 X16" card because it was the narrowest one available, and where I'll be mounting it, length isn't really a concern. Then, because I want to hook up desktop expansion cards, not M.2 cards, I found that the mining market isn't without its merits: there are several companies that make M.2-to-PCIe x4 converter boards[2] that do the simplest thing of just passing through all four lanes...which was exactly what I wanted. :)

 

Now, unfortunately, the design of the two boards didn't work to mount additional cards directly, so I needed to relocate the PCIe x4 slots away from the breakout board. I'm not sure who I have to thank for this, but thankfully there is at least one company that makes M.2 extension cards[3]. These are what you can see in the new half-height expansion card space I made above.

 

So, now that I have plenty of (up to eight) PCIe x4 slots, giving up one to a host GPU is not really an issue. For that, I have an NVS 295, which works wonderfully if you just need to display some pixels on a screen and care about very little else, and is dirt cheap on eBay.

 

Then, I decided that I really didn't want to have to keep another monitor around, or change around inputs on any of the monitors that will be hooked up to this, so I wanted a display onboard that would give me the host output. Thus: little 5" monitor. This will be going on the front of the case, under the front fascia, which just pops off.

 

Now, because I'm putting the host output onto an on-board display, there's not really much point in exposing those GPU connections, is there? And because I can relocate the host GPU almost anywhere in the case, there's no reason to waste space in my new externally-accessible expansion bay on this host GPU. So instead, the host GPU will be mounted inside the case, up near the front, with a short cable going from it to the onboard display.

 

[1] https://smile.amazon.com/gp/product/B013JECYF2

[2] https://smile.amazon.com/gp/product/B074Z5YKXJ

[3] https://smile.amazon.com/gp/product/B07KX86N7V


It seems impressive what you are planning to do there

 

Can't wait to see the end result!

 

 


  • 2 months later...

Ok, it's been a while, but I promise this project isn't dead. :)

I've been getting things rolling again the last few weeks:

 

----------

 

The breakout cards, host GPU, and new expansion card bay are mounted now:

 

HGIT4vR.jpg

 

and around the back:

TWXOauL.jpg

 

I ended up flipping the GPU upside down, so the ports point down, in order to clear the motherboard tray with the adapter board. In a happy accident, that actually made the cabling for the onboard monitor much more streamlined. More on that in a bit.

 

---

 

The pump mount is done now, and holds the pump securely out of the way:

V9i6ARP.jpg

O9z7bOw.jpg

 

As you can see from the above, the radiator mounts are done now too, holding one 2x140 and one 1x140 up at the top of the case.

82PXTlY.jpg

 

 

Noctua NF-A14s will sit underneath them and push air up through the radiators, after it's been pulled in at the front by an NF-A20.

Speaking of the front of the case, I said I would be putting a monitor there, didn't I?

 

Well:

 

03eEhpr.jpg

 

I routed the cabling down the outside of the case (it will be hidden by the front fascia) to meet up with the host GPU:

 

Nnl2I0c.jpg

 

Those cables are just kind of hanging there though...that won't do...
Well, I had picked up a sheet metal bender to help with some of the other parts of this case, and it made quick work of a bracket for the DP-HDMI adapter.

 

0dIpCr7.jpg

 

And installed. Muuch better. :D
 

apaM8Nn.jpg

 

---

 

So, what's next?

The framing is almost done; then it'll be on to the plumbing, electrical, and finally tearing everything down for paint.

 

Before I can move on to that, though, there is one last piece of framing that needs to be done.

One more thing (well, three...) that Thermaltake never intended me to put in this case.

 

This case originally has three 3.5" HDD bays.

I need to double that.

 

Fortunately, there's "plenty" of space in front of the original bays: Tt just left that space open. It's almost like they were trying to make this easy to work on and not a thermal death trap.

 

My plan is for two columns of three drives each, back-to-back with cables running down the front and rear, and a Noctua NF-A9 in the back pulling air over all of these drives to keep them cool.

 

RuX0g20.jpg

 

I got the main structures done today, but the motherboard tray needs to be...adjusted...in order for it to mount flush, and by the time I had this part done it was too late to start grinding.

 

NNow0TC.jpg

 

 

---

 

More on this to come soon. Once the HDD tunnel is in place, it's headed upstairs to start on the plumbing and the 100% custom wiring harness.

