
Dual User AMD Zen 3 Gaming/Streaming/Development PC

smooph

Hi everybody,

 

Before I start describing my plan, I want to preface it by saying that this is still in the making and depends on upcoming benchmark results and other aspects of the build I am currently exploring. If I am in the wrong section, I am sorry, but in my opinion it fits best here.

 

Why: You will see that this post is less about specific components and more about the idea. I am posting it because I am looking for ideas on specific hardware, answers to the questions stated below, other ideas to solve the whole situation, problems I have not thought about, or, even better, an LTT video which showcases exactly what I want to do ;).

 

Preface: Because of COVID-19, my girlfriend and I have improved our office setup with multiple desks. Due to the extended quarantine we expect to spend more of our free time inside, and the last couple of months have shown that gaming together (and with friends) is what we like to do. This is why I want to build a dual-user gaming PC similar to the one featured in this LTT video, with the hope that it will cost less than getting two (I believe that with the higher prices of individual components and the increased core counts of modern CPUs, this should be feasible). When not used by two people, this single PC will also stay "fast enough" for longer, or allow a single user to have more performance available. Since all computers in the office are either ancient (a Pentium and an A8-3870) or work laptops, a new machine is needed to satisfy our gaming urges :D

 

The plan: Via the magic of virtualization I want to give each of us a dedicated OS and GPU. (I hear AMD has trouble with virtualization; some feedback regarding this topic would be greatly appreciated.) Our desks are about 2 meters apart, so routing a monitor cable, even though possible, wouldn't be great. I am therefore looking for a way to either use the already available network or some other means of transmitting the content to the monitor. I would greatly prefer the network solution, because it would also work with the living room TV, giving the other person some space in the office. (The solution from the video is not viable for us: cost, and we live in a rental and cannot wire cables through the apartment. But we have the old computers as potential "thin clients" to the new computer. If cloud gaming works, I should be able to game from the other room too, right?)
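For the network route, a quick back-of-the-envelope check suggests a gigabit LAN has plenty of room for two simultaneous game streams. The bitrates below are loose assumptions in the range of typical remote-play defaults, not measured values; a minimal Python sketch:

```python
# Rough sanity check: can a gigabit LAN carry two game streams at once?
# Bitrates are assumptions (typical remote-play settings), not measurements.

STREAM_BITRATES_MBPS = {
    "1080p60 H.264 (typical)": 30,
    "1080p60 H.264 (high quality)": 50,
    "4K60 H.264 (aggressive)": 80,
}

LAN_CAPACITY_MBPS = 1000   # gigabit Ethernet
USABLE_FRACTION = 0.8      # leave headroom for protocol overhead/other traffic

usable = LAN_CAPACITY_MBPS * USABLE_FRACTION
for name, mbps in STREAM_BITRATES_MBPS.items():
    two_streams = 2 * mbps
    print(f"{name}: 2 streams = {two_streams} Mbps "
          f"({two_streams / usable:.0%} of usable gigabit)")
```

Even at the aggressive end, two streams stay well under a wired gigabit link; Wi-Fi to the living room TV would be the riskier hop.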

 

The hardware for the plan:

(Because of the good numbers currently rumored and the potentially better price point, I plan on building a Zen 3 version, but this can change at any point.)

  • Motherboard: X570 or B550 (it looks like other chipsets might also support Zen 3; I am not set on a specific one) with 2x PCIe 3.0 x16 slots for the 2 GPUs (4.0 is not necessary; as far as I understand, 3.0 is more than enough for GPUs. Should I future-proof with at least one 4.0 slot? And should I look for a board with two real x16 slots? Is "electrical" the right term for that, as opposed to "mechanical", which is just the physical size of the slot? Don't some boards split the available lanes differently even if the slot is x16?), 1x PCIe 3.0 x4 slot for a capture card, 2x M.2 slots (one per VM), and LAN. (See the bandwidth sketch after this list.)
  • CPU: Ryzen 9 5950X
  • 2x GPU: As a developer I would like to have one Nvidia and one AMD GPU, but I am not set on either one; I usually buy what is most cost-effective. (Probably a low-tier card from Nvidia's 3000 series and/or AMD's upcoming 6000 series, maybe one high and one low tier, maybe simply whatever can be gotten cheap.)
  • Cooling: I have no experience with water cooling, and I am of the opinion that air is less hassle (setup and maintenance). But I understand that depending on the position of the GPUs and the size of the motherboard, water might be the only option.
  • Other: RAM, M.2 SSDs, power supply (1000W? I guess this needs to be a beefy boy if I want to run 2 GPUs off it), and a capture card
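On the bandwidth and wattage questions in the list above, here is a rough worked calculation. The per-lane PCIe figures are the commonly cited usable rates after encoding overhead; the component wattages are loose assumptions, not figures for any specific card:

```python
# Approximate usable PCIe throughput per lane (GB/s, after encoding overhead).
PCIE_GBPS_PER_LANE = {"3.0": 0.985, "4.0": 1.969}

for gen, per_lane in PCIE_GBPS_PER_LANE.items():
    for lanes in (4, 8, 16):
        print(f"PCIe {gen} x{lanes}: ~{per_lane * lanes:.1f} GB/s")

# Rough PSU sizing for a dual-GPU build. Wattages are assumptions.
parts_watts = {
    "CPU (Ryzen 9, boosting)": 200,
    "GPU 1 (upper mid-range)": 250,
    "GPU 2 (upper mid-range)": 250,
    "Motherboard/RAM/SSDs/fans": 100,
}
total = sum(parts_watts.values())
print(f"Estimated load: {total} W -> with ~40% headroom: {total * 1.4:.0f} W PSU")
```

By this rough estimate the 1000W guess is in the right ballpark, and an x8 3.0 link (~7.9 GB/s) is generally considered enough for current gaming GPUs, which matters given the slot-splitting question above.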

 

Budget (including currency): The budget is pretty much defined by the plan. Of course, it shouldn't cost more than 2 gaming computers, and as a student it can never be cheap enough. That said, I am willing to spend a little more if it is the better solution.

Country: Germany

Games, programs or workloads that it will be used for: Because of the old PCs we don't have a list of current titles we play, but the goal is to be able to play and stream all current games (not necessarily at the same time, but it would be nice).

Other details: Monitors, keyboards, mice and other peripherals are already owned. The things listed under "The hardware for the plan" need to be bought.

 

Best

Smooph


@smooph Desktop X570 is not exactly the platform to build a virtualized workstation on. There are no boards (to my knowledge) that support x16/x16 slots (they're all x8/x8 or less). None of them have both M.2 slots through the CPU, so I'm not sure you'd be able to allocate the chipset M.2 separately from everything else connected to the chipset. Basically, you run out of PCIe lanes very quickly with more than one user on this platform.
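To put numbers on the lane shortage: Ryzen 5000 desktop CPUs expose 24 PCIe 4.0 lanes (16 for graphics, 4 for one NVMe drive, 4 as the chipset downlink). A rough tally against an illustrative two-user wishlist (the line items are assumptions, not a spec for any board):

```python
# AM4 lane budget: Ryzen 5000 desktop CPUs expose 24 PCIe 4.0 lanes.
cpu_lanes = {"graphics": 16, "nvme": 4, "chipset_downlink": 4}

# Illustrative wishlist for a two-user passthrough build (assumptions):
wishlist = {
    "GPU 1": 16,
    "GPU 2": 16,
    "NVMe for VM 1": 4,
    "NVMe for VM 2": 4,
    "capture card": 4,
    "USB controller card": 2,
}

directly_attached = cpu_lanes["graphics"] + cpu_lanes["nvme"]  # 20 usable
wanted = sum(wishlist.values())
print(f"CPU-attached lanes available: {directly_attached}")
print(f"Wishlist total: {wanted} (shortfall: {wanted - directly_attached})")
# Hence the x8/x8 GPU splits and chipset-shared M.2 on desktop AM4.
```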

 

I say this to every person trying to build one of these monstrosities: is having one box worth the headache of software bugs and incompatibilities (not to mention downtime when both stations go down with a hardware failure) vs. just having two towers?

Main System (Byarlant): Ryzen 7 5800X | Asus B550-Creator ProArt | EK 240mm Basic AIO | 16GB G.Skill DDR4 3200MT/s CAS-14 | XFX Speedster SWFT 210 RX 6600 | Samsung 990 PRO 2TB / Samsung 960 PRO 512GB / 4× Crucial MX500 2TB (RAID-0) | Corsair RM750X | Mellanox ConnectX-3 10G NIC | Inateck USB 3.0 Card | Hyte Y60 Case | Dell U3415W Monitor | Keychron K4 Brown (white backlight)

 

Laptop (Narrative): Lenovo Flex 5 81X20005US | Ryzen 5 4500U | 16GB RAM (soldered) | Vega 6 Graphics | SKHynix P31 1TB NVMe SSD | Intel AX200 Wifi (all-around awesome machine)

 

Proxmox Server (Veda): Ryzen 7 3800XT | AsRock Rack X470D4U | Corsair H80i v2 | 64GB Micron DDR4 ECC 3200MT/s | 4x 10TB WD Whites / 4x 14TB Seagate Exos / 2× Samsung PM963a 960GB SSD | Seasonic Prime Fanless 500W | Intel X540-T2 10G NIC | LSI 9207-8i HBA | Fractal Design Node 804 Case (side panels swapped to show off drives) | VMs: TrueNAS Scale; Ubuntu Server (PiHole/PiVPN/NGINX?); Windows 10 Pro; Ubuntu Server (Apache/MySQL)


Media Center/Video Capture (Jesta Cannon): Ryzen 5 1600X | ASRock B450M Pro4 R2.0 | Noctua NH-L12S | 16GB Crucial DDR4 3200MT/s CAS-22 | EVGA GTX750Ti SC | UMIS NVMe SSD 256GB / Seagate 1.5TB HDD | Corsair CX450M | Viewcast Osprey 260e Video Capture | Mellanox ConnectX-2 10G NIC | LG UH12NS30 BD-ROM | Silverstone Sugo SG-11 Case | Sony XR65A80K

 

Camera: Sony ɑ7II w/ Meike Grip | Sony SEL24240 | Samyang 35mm ƒ/2.8 | Sony SEL50F18F | Sony SEL2870 (kit lens) | PNY Elite Performance 512GB SDXC card

 

Network:

                           ┌─────────────── Office/Rack ────────────────────────────────────────────────────────────────────────────┐
Google Fiber Webpass ────── UniFi Security Gateway ─── UniFi Switch 8-60W ─┬─ UniFi Switch Flex XG ═╦═ Veda (Proxmox Virtual Switch)
(500Mbps↑/500Mbps↓)                             UniFi CloudKey Gen2 (PoE) ─┴─ Veda (IPMI)           ╠═ Veda-NAS (HW Passthrough NIC)
╔═══════════════════════════════════════════════════════════════════════════════════════════════════╩═ Narrative (Asus USB 2.5G NIC)
║ ┌────── Closet ──────┐   ┌─────────────── Bedroom ──────────────────────────────────────────────────────┐
╚═ UniFi Switch Flex XG ═╤═ UniFi Switch Flex XG ═╦═ Byarlant
   (PoE)                 │                        ╠═ Narrative (Cable Matters USB-PD 2.5G Ethernet Dongle)
                         │                        ╚═ Jesta Cannon*
                         │ ┌─────────────── Media Center ──────────────────────────────────┐
Notes:                   └─ UniFi Switch 8 ─────────┬─ UniFi Access Point nanoHD (PoE)
═══ is Multi-Gigabit                                ├─ Sony Playstation 4 
─── is Gigabit                                      ├─ Pioneer VSX-S520
* = cable passed to Bedroom from Media Center       ├─ Sony XR65A80K (Google TV)
** = cable passed from Media Center to Bedroom      └─ Work Laptop** (Startech USB-PD Dock)

 

Retired/Other:


Laptop (Rozen-Zulu): Sony VAIO VPCF13WFX | Core i7-740QM | 8GB Patriot DDR3 | GT 425M | Samsung 850EVO 250GB SSD | Blu-ray Drive | Intel 7260 Wifi (lived a good life, retired with honor)

Testbed/Old Desktop (Kshatriya): Xeon X5470 @ 4.0GHz | ZALMAN CNPS9500 | Gigabyte EP45-UD3L | 8GB Nanya DDR2 400MHz | XFX HD6870 DD | OCZ Vertex 3 Max-IOPS 120GB | Corsair CX430M | HooToo USB 3.0 PCIe Card | Osprey 230 Video Capture | NZXT H230 Case

TrueNAS Server (La Vie en Rose): Xeon E3-1241v3 | Supermicro X10SLL-F | Corsair H60 | 32GB Micron DDR3L ECC 1600MHz | 1x Kingston 16GB SSD / Crucial MX500 500GB


For your case I would honestly just get 2 cheaper computers. What games are you playing?

 

Why the 2 systems on their own?

Virtualization overhead

Complexity

Easily messed up

Price

Convenience

 

For the living room thing: if you have an Ethernet cable, you can use Steam In-Home Streaming.

 


25 minutes ago, smooph said:

Motherboard: X570 or B550 (it looks like other chipsets might also support Zen 3; I am not set on a specific one) with 2x PCIe 3.0 x16 slots for the 2 GPUs (4.0 is not necessary; as far as I understand, 3.0 is more than enough for GPUs. Should I future-proof with at least one 4.0 slot? And should I look for a board with two real x16 slots? Is "electrical" the right term for that, as opposed to "mechanical", which is just the physical size of the slot? Don't some boards split the available lanes differently even if the slot is x16?), 1x PCIe 3.0 x4 slot for a capture card, 2x M.2 slots (one per VM), and LAN.

Keep in mind you don't just need the slots. You need good IOMMU groups, which, if memory serves, can change with AGESA updates? The Level1Techs forums would be a good place to ask about specific motherboards; a lot of people there run Linux hosts with VMs using IOMMU passthrough.

You'll also need more full PCIe slots than you think you do, again because of the IOMMU thing. It's usually much easier to split out PCIe slots on a good board than the built-in hardware. So you'll likely want a PCIe USB card for each VM as I/O, and if your NICs cannot be split out, you'll need PCIe cards for those too.
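For checking groups on a candidate board, a minimal sketch for a Linux host: it just walks /sys/kernel/iommu_groups, which only exists once the IOMMU is enabled in BIOS and on the kernel command line (e.g. amd_iommu=on):

```python
#!/usr/bin/env python3
"""List IOMMU groups on a Linux host, one group per line.

Devices sharing a group must be passed through together, so a GPU
sitting alone in its group is what you want to see.
"""
from pathlib import Path

GROUPS = Path("/sys/kernel/iommu_groups")

if not GROUPS.is_dir():
    raise SystemExit("No IOMMU groups found; is the IOMMU enabled in BIOS/kernel?")

for group in sorted(GROUPS.iterdir(), key=lambda p: int(p.name)):
    # each entry under devices/ is a symlink named by PCI address
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"group {group.name}: {', '.join(devices)}")
```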
 

18 minutes ago, jaslion said:

Why the 2 systems on their own?

Virtualization overhead

Complexity

Easily messed up

Price

Convenience

Overhead is much smaller these days; IIRC newer setups can get really close to bare-metal performance. But everything else is true: it can be really complicated to set up, and if the hypervisor/host OS ever goes down, both VMs go down with it.
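On getting close to bare metal: one common trick is pinning each VM's vCPUs to one CCD so the two guests don't fight over cores or cache. A hedged sketch that prints libvirt <cputune> blocks for a hypothetical 16-core/32-thread 5950X, assuming the common Linux enumeration where SMT siblings are (n, n+16); verify the actual topology with `lscpu -e` before using anything like this:

```python
# Sketch: generate libvirt <cputune> pinning for two VMs, one 8-core CCD each.
# Assumption: host threads enumerate as (core n, sibling n+16) on a 32-thread CPU.

def cputune_xml(vm_cores, smt_offset=16):
    lines = ["<cputune>"]
    for vcpu, core in enumerate(vm_cores):
        # pin each guest vCPU pair to a physical core and its SMT sibling
        lines.append(f'  <vcpupin vcpu="{2 * vcpu}" cpuset="{core}"/>')
        lines.append(f'  <vcpupin vcpu="{2 * vcpu + 1}" cpuset="{core + smt_offset}"/>')
    lines.append("</cputune>")
    return "\n".join(lines)

print("VM 1 (CCD0):\n" + cputune_xml(range(0, 8)))
print("VM 2 (CCD1):\n" + cputune_xml(range(8, 16)))
```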

Intel HEDT and Server platform enthusiasts: Intel HEDT Xeon/i7 Megathread 

 

Main PC 

CPU: i9 7980XE @4.5GHz/1.22v/-2 AVX offset 

Cooler: EKWB Supremacy Block - custom loop w/360mm +280mm rads 

Motherboard: EVGA X299 Dark 

RAM: 4x8GB HyperX Predator DDR4 @3200MHz CL16 

GPU: Nvidia FE 2060 Super/Corsair HydroX 2070 FE block 

Storage:  1TB MP34 + 1TB 970 Evo + 500GB Atom30 + 250GB 960 Evo 

Optical Drives: LG WH14NS40 

PSU: EVGA 1600W T2 

Case & Fans: Corsair 750D Airflow - 3x Noctua iPPC NF-F12 + 4x Noctua iPPC NF-A14 PWM 

OS: Windows 11

 

Display: LG 27UK650-W (4K 60Hz IPS panel)

Mouse: EVGA X17

Keyboard: Corsair K55 RGB

 

Mobile/Work Devices: 2020 M1 MacBook Air (work computer) - iPhone 13 Pro Max - Apple Watch S3

 

Other Misc Devices: iPod Video (Gen 5.5E, 128GB SD card swap, running Rockbox), Nintendo Switch


This solution is a cool demo of what's possible, but it's pretty much never a good solution in practice. Between what you need to buy to make it work, what you'll need to replace because it doesn't, the setup and maintenance hassle, etc., you're better off with 2 computers, and that'll likely be cheaper as well.

F@H
Desktop: i9-13900K, ASUS Z790-E, 64GB DDR5-6000 CL36, RTX3080, 2TB MP600 Pro XT, 2TB SX8200Pro, 2x16TB Ironwolf RAID0, Corsair HX1200, Antec Vortex 360 AIO, Thermaltake Versa H25 TG, Samsung 4K curved 49" TV, 23" secondary, Mountain Everest Max

Mobile SFF rig: i9-9900K, Noctua NH-L9i, Asrock Z390 Phantom ITX-AC, 32GB, GTX1070, 2x1TB SX8200Pro RAID0, 2x5TB 2.5" HDD RAID0, Athena 500W Flex (Noctua fan), Custom 4.7l 3D printed case

 

Asus Zenbook UM325UA, Ryzen 7 5700u, 16GB, 1TB, OLED

 

GPD Win 2


2 hours ago, Zando Bob said:

Keep in mind you don't just need the slots. You need good IOMMU groups, which, if memory serves, can change with AGESA updates? The Level1Techs forums would be a good place to ask about specific motherboards; a lot of people there run Linux hosts with VMs using IOMMU passthrough.

You'll also need more full PCIe slots than you think you do, again because of the IOMMU thing. It's usually much easier to split out PCIe slots on a good board than the built-in hardware. So you'll likely want a PCIe USB card for each VM as I/O, and if your NICs cannot be split out, you'll need PCIe cards for those too.
 

Overhead is much smaller these days; IIRC newer setups can get really close to bare-metal performance. But everything else is true: it can be really complicated to set up, and if the hypervisor/host OS ever goes down, both VMs go down with it.

Overhead is still usually about one core lost, which for the use case here is fine, but for some people (like me) it would be a dealbreaker, as I need everything my PC can give me and then some :p.

