mattsb42

Recent posts
  1. Ok, it's been a while, but I promise this project isn't dead. I've been getting things rolling again the last few weeks.

----------

The breakout cards, host GPU, and new expansion card bay are mounted now:

and around the back:

I ended up flipping the GPU upside down, so the ports point down, in order to clear the motherboard tray with the adapter board. In a happy accident, that actually made the cabling for the onboard monitor much more streamlined. More on that in a bit.

---

The pump mount is done now, and holds the pump securely out of the way:

As you can see from the above, the radiator mounts are done now too, holding one 2x140 and one 1x140 up at the top of the case. Noctua NF-A14s will sit underneath them and push air up through, after it's pulled in at the front by an NF-A20:

Speaking of the front of the case, I said I would be putting a monitor there, didn't I? Well:

I routed the cabling down the outside of the case (it will be hidden by the front fascia) to come down and meet up with the host GPU:

Those cables are just kind of hanging there though...that won't do...

Well, I had picked up a sheet metal bender to help with some of the other parts of this case, and it made quick work of a bracket for the DP-HDMI adapter. And installed. Much better.

---

So, what's next? The framing is almost done; then it'll be on to the plumbing, electrical, and finally tearing everything down for paint. Before I can move on to that, though, there is one last piece of framing that needs to be done. One more thing (well, three...) that Thermaltake never intended me to put in this case.

This case originally has three 3.5" HDD bays. I need to double that. Fortunately, there's "plenty" of space in front of the original bays:

Tt just left that space open. It's almost like they were trying to make this easy to work on and not a thermal death trap.

My plan is for two columns of three drives each, back-to-back, with cables running down the front and rear, and a Noctua NF-A9 in the back pulling air over all of these drives to keep them cool. I got the main structures done today, but the motherboard tray needs to be...adjusted...in order for it to mount flush, and by the time I had this part done it was too late to start grinding.

---

More on this to come soon. Once the HDD tunnel is in place, it's headed upstairs to start on the plumbing and the 100% custom wiring harness.
  2. Heheh, good eye. That's actually a little 5" display[1].

...side note, little HDMI displays are amazingly available now. I remember looking for something like this ~5-6 years ago and there being absolutely nothing aside from some hackery around the Retina iPad panel (an early eDP panel). I'm not sure what tech trend made these little HDMI displays so readily available, but I heartily approve.

Anyway, what is it for? An on-board host display.

I found in testing that passing through the primary GPU is...let's say, finicky. It seemed to be highly dependent on the GPU and the VM OS, not to mention on coercing the GPU into letting me pull a ROM off it. And of course, to top it off, it means I would lose an actual host-level display. These issues were actually some of the first things that pushed me to find a solution to the expansion slot problem: I didn't want to pass through one of the main GPUs, but I also didn't want to lose a full x16 slot to the host GPU.

After much searching and many dead ends, I came to the conclusion that almost all of the PCIe splitters on the market are designed for mining, and so only allocate one lane per split child. This is not what I wanted, as I have some devices that will need more bandwidth than one PCIe 3.0 lane can provide. Fortunately, there is a niche product that a couple of companies make: PCIe M.2 break-out boards. I went with the Asus "Hyper M.2 X16" card because it was the narrowest one available, and where I'll be mounting it, length isn't really a concern.

Then, because I want to hook up desktop expansion cards, not M.2 cards, I found that the mining market isn't without its merits: several companies make M.2-to-PCIe x4 converter boards[2] that do the simplest possible thing and just pass through all four lanes...which was exactly what I wanted. Now, unfortunately, the design of the two boards didn't allow mounting additional cards directly, so I needed to relocate the PCIe x4 slots away from the breakout board. I'm not sure who I have to thank for this, but thankfully there is at least one company that makes M.2 extension cards[3]. These are what you can see in the new half-height expansion card space I made above.

So, now that I have plenty of (up to eight) PCIe x4 slots, giving up one to a host GPU is not really an issue. For that, I have an NVS 295, which works wonderfully if you just need to put some pixels on a screen and care about very little else, and which is dirt cheap on eBay. Then, I decided that I really didn't want to have to keep another monitor around, or change inputs on any of the monitors that will be hooked up to this, so I wanted an onboard display that would give me the host output. Thus: the little 5" monitor. This will be going on the front of the case, under the front fascia, which just pops off.

Now, because I'm putting the host output onto an on-board display, there's not really much point in exposing those GPU connections, is there? And because I can relocate the host GPU almost anywhere in the case, there's no reason to waste space in my new externally-accessible expansion bay on it. So instead, the host GPU will be mounted inside the case, up near the front, with a short cable going from it to the onboard display.

[1] https://smile.amazon.com/gp/product/B013JECYF2
[2] https://smile.amazon.com/gp/product/B074Z5YKXJ
[3] https://smile.amazon.com/gp/product/B07KX86N7V
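A side note for anyone wanting to try the same breakout trick: the easiest way to confirm that each riser really negotiates its four lanes is to read the link attributes straight out of sysfs on a Linux host. A minimal sketch of that kind of check (standard sysfs attributes, nothing specific to these particular boards or to my setup):

```python
# Minimal sketch (Linux): print negotiated vs. maximum PCIe link width
# for every device that exposes the standard sysfs link attributes.
# Useful for confirming each card behind the breakout gets its x4 link.
from pathlib import Path

def read_attr(dev, name):
    try:
        return (dev / name).read_text().strip()
    except OSError:
        return ""

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    cur_w = read_attr(dev, "current_link_width")
    max_w = read_attr(dev, "max_link_width")
    if not cur_w or not max_w:
        continue  # no PCIe link attributes exposed (e.g. some bridges)
    speed = read_attr(dev, "current_link_speed") or "unknown speed"
    print(f"{dev.name}: x{cur_w} of x{max_w} @ {speed}")
```

If a riser or extension card is only training at x1 or x2, it shows up immediately here.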
  3. Don't worry, that's only for staging and cabling work. Anything more involved happens elsewhere (though I have been very impressed with those Worx sawhorse-tables).

Slowly coming along. Getting close to having the bones done...sneak shots of the revised structure for a hint at what is coming.

The rear is starting to take shape. The right-hand cutout is just to get rid of the old tubing outlet and make the inner surface flush (has anyone ever actually used those? I feel like if you need to send tubing or wiring out of your case, you're probably going to be cutting it up anyway...). Anyway, the left-hand cutout does actually have a reason... The rear panel will be getting a new surface panel that covers the holes and provides cut-outs for the new expansion slots.
  4. This has been a slow burn, but things are finally coming together, so I figured I would put together a build log.

The Problem

The original inspiration for this build was several things happening at once:

- The CPU and SSD in my gaming rig (i7-4771 and 1TB Samsung 840 EVO) are finally starting to become bottlenecks for me.
- I want to move my dev activity from a laptop to a more powerful desktop, since I always use the laptop docked for that work anyway.
- I haven't had or needed a NAS for several years, but I've been starting to feel the need again recently.

So I had basic requirements for three systems I wanted to build.

Gaming
- 4+ cores
- 16GB+ RAM
- NVMe SSD
- GTX1070 (carry-over from current build because I'm not hitting its limits any time soon)

Dev
- 8+ cores
- 16GB+ RAM
- Some GPU that can drive three 2560x1440 panels

NAS
- 10TB+ usable storage
- Room to upgrade storage and networking later

I specifically wanted the gaming and dev systems to be separate because I have two separate, and very different, interfaces to each, and I wanted them to be able to function independently of each other. I've also been burned in the past by running a NAS from the same OS as a system I actually use directly, so that needed to be its own thing.

However, thinking about how I'd be using these systems, they would be sitting right next to each other on a shelf, and probably never moving...and there are a lot of duplicate parts there... Some of you probably see where I'm going here. What if I could put them all in the same system?

Historically, this has never been a real option; I've run VM hosts at home before (VMware Server, ESXi, Xen, etc.), but interacting with VMs directly was never really feasible with those platforms. If you wanted to interact with a VM, your only option was to have another system that you used to connect to it. I've toyed with thin clients in the past, and especially for multi-display setups I always found them wanting. BUT...things have changed since then, as the series of "${X} ${ROLE}s, ${Y} CPUs" demonstrations that keep popping up around here and elsewhere have shown. Finally, IOMMU pass-through tech has trickled its way down from enterprise applications and is viable (well, ok, on the edge of viable) for use without having to be a full-time sysadmin to get it set up and keep it working.

The Solution...maybe...

So could I put everything in one host? I decided yes, I probably could...and so the initial build began (in October...I said it's been a slow burn.)

- Threadripper 2950X
- ASRock X399M mATX
- 4x 16GB G.Skill Trident Z DDR4-3200 CL14-14-14-34
- 512GB Samsung 970 Pro NVMe
- 2x 2TB Samsung 860 EVO SATA
- 2x 10TB Seagate IronWolf
- Corsair AX1600i
- Thermaltake Core V21

Like I said, I'm carrying over my GTX1070 from my previous rig, but the open question remained: what GPU can I give to the dev VM? Well, it turns out that despite absurd retail prices, Quadro NVS cards are super cheap on eBay. So I picked up some NVS295s (2x DP) to experiment with and an NVS450 (4x DP) as a hopeful for the dev GPU.

Unfortunately, as it turns out, IOMMU pass-through does not work so well if the device does not support UEFI. ...and despite still being manufactured (I think?), the NVS295 and NVS450 both hail from 2009...so neither of them does.

This had an interesting effect when I tried passing them through anyway. It would work ok at first; a bit slow on the uptake at boot, but I can live with that. But once I turned off the VM, the card would freeze and remain unresponsive until I rebooted the host. As best I could tell, it was getting stuck in some sort of semi-activated state that it could not be pulled out of. Maybe there's a way around that, idk.

My solution to this was to break down and upgrade to an NVS510 (4x mDP). This was much more expensive than the older cards (295: $4.25, 450: $30, 510: $60), but still a bargain compared to any other options for a single card that can drive three 1440p displays. After testing, I found that the 510 did in fact work perfectly, which finally validated that this experiment would actually work. And so the game was on.

So, this is where I stand today. All of this...needs to go into that...

New to the game:
- Black Ice GTS140
- Black Ice GTS280
- [waterblocks off-screen apparently]
- Intel I340-T4
- StarTech quad-USB3.0 controller
- Quadro NVS295
- Quadro NVS510
- ...and a few more unusual things...

But Matt, you say...there are only three PCIe slots on that board, but you mentioned five PCIe cards. What gives? Well, this first post is already rather large, so I'll talk about that as I get to the related mods. Eagle-eyed readers might see the answer though.
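Since the whole experiment hinges on pass-through behaving, the first thing worth checking on the host is how the IOMMU groups shake out: a card can only be handed to a VM together with everything else in its group. A minimal sketch of that check (plain Linux sysfs paths; assumes the IOMMU is already enabled in the BIOS and on the kernel command line, and this isn't my exact script):

```python
# Minimal sketch: list every PCI device by IOMMU group on a Linux host.
# If nothing is listed, the IOMMU isn't enabled and pass-through won't work.
from pathlib import Path

groups_root = Path("/sys/kernel/iommu_groups")
groups = sorted(groups_root.glob("[0-9]*"), key=lambda p: int(p.name))

if not groups:
    print("No IOMMU groups found -- is the IOMMU enabled?")
else:
    for group in groups:
        devices = sorted(d.name for d in (group / "devices").iterdir())
        print(f"group {group.name}: {', '.join(devices)}")
```

Ideally the GPU you want to pass through sits in a group by itself (or only with its own audio/USB functions); anything else in the group has to go along for the ride.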
  5. Thanks, @TVwazhere, that's a great example of one that I missed! Unfortunately, I think that the combination of the LC loop and the 3.5" HDDs would be a bit too much to squeeze into the Define/Meshify C. Maybe if I could fit the build into an 800W power window and drop down to an SFX PSU...but based on my latest config[1] (still deciding on the GPUs) it won't quite fit into that power envelope. :/ [1] https://pcpartpicker.com/user/x88x/saved/#view=kxGnHx
  6. I've been doing some planning around my next build, and while I think I've found the right case, it's been several years since I was actively paying attention to the space, so I figured I would see if anyone had any suggestions for cases I might have missed.

To sketch out my requirements, I want a case that:
- is mATX compatible
- is as small as possible
- has room* for at least 480mm of radiators
- has room* for at least three 3.5" drives (internal)
- has room* for at least two 2.5" drives (internal)
- ATX PSU compatibility preferred, but SFX is ok too

* existing mounts preferred but not necessary

I don't really care about windows or RGB; my personal aesthetic preferences are very much function over form. Think less "magic light show" and more "obsidian obelisk with impeccably arranged custom wiring". I prefer having the MB horizontal, as that lets me keep heavy cards vertical, but that's not a hard requirement.

Cost is...well, I won't say "cost is no object", but let's just say that if I build this thing as I currently have it planned (topic for a different thread) it'll be topping $6k...so tacking on another $100 or two to the case isn't really a concern if it gives me a better result.

Given the above, after searching through those of my old resources that I can remember and that are still active, I found the Thermaltake Core V21[1], which seems to handily check all the boxes. Like I said though, I've been out of the game for a while, so I wanted to see if anyone knew of anything on the market that I missed that might be a better fit.

[1] https://www.thermaltake.com/products-model.aspx?id=C_00002559