
Anybody know if there is a manufacturer out there, like Tyan or Supermicro, that makes a PCI-E riser with a reversed Y-split?

Pinecube

Good evening, I'm trying to find specialized hardware, but Google's algorithm knows better and simply will not stop trying to give me the opposite of what I'm looking for.

 

Instead of branching a PCI-E slot into two with bifurcation, I am looking to take a PCI-E slot and branch it to two motherboards. Not to try to run a GPU in two systems at once, but rather as a switched pathway that routes to motherboard A when switched that way, then, after powering off, can be switched to motherboard B before that system is booted. It's for a very specific build. Ideally what I wanted to do was just keep the hardware of machine B bare metal and pass it completely through into a host VM on machine A, but that would turn the bare metal system into virtualized hardware, which may prevent certain features of the processor from being accessed. This is a Xeon+Epyc project.
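For what it's worth, the passthrough half of this is easy to sanity-check on a Linux host: whether a card can be handed to a VM cleanly mostly comes down to how the IOMMU groups fall. A rough sketch of that check (assuming a Linux host with the IOMMU enabled; the sysfs paths are standard, nothing here is specific to my cards):

```python
#!/usr/bin/env python3
# Rough sketch: list every PCIe device with its IOMMU group on a Linux host.
# A GPU sitting in its own group (with at most its audio function) is the easy
# passthrough case; a group shared with other devices is the painful one.
import os

PCI_DEVICES = "/sys/bus/pci/devices"

for dev in sorted(os.listdir(PCI_DEVICES)):
    dev_path = os.path.join(PCI_DEVICES, dev)
    # iommu_group is a symlink to /sys/kernel/iommu_groups/<N> when the IOMMU is on.
    group_link = os.path.join(dev_path, "iommu_group")
    group = os.path.basename(os.path.realpath(group_link)) if os.path.exists(group_link) else "none"
    with open(os.path.join(dev_path, "class")) as f:
        dev_class = f.read().strip()
    # Class 0x03xxxx is the PCI display-controller class, i.e. GPUs.
    marker = "  <-- GPU" if dev_class.startswith("0x03") else ""
    print(f"{dev}  class={dev_class}  iommu_group={group}{marker}")
```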

 

Unless somebody knows of a way to consolidate two bare metal systems under a type 1 hypervisor and have both present their hardware simultaneously, I don't think that's possible without designating one CPU as the host.

Thanks for your time.

 

 

Budget (including currency): N/A

Country:  Soviet Canuckistan

Games, programs or workloads that it will be used for: Broderbund's Stunts 3D, Creatures II, My Little Pony: Secret Weapons of WWII, NCIX Simulator 2010, The Star Ways Holiday Special.

Other details: Possibly.

 


Why don't you just pass the graphics card through?

 

Maybe you could use Thunderbolt for this instead of PCIe, if someone makes a Thunderbolt switch.

 


47 minutes ago, Pinecube said:

Budget (including currency): N/A

On a Xeon+Epyc kind of budget, buy another GPU 💀 (unless we're talking, like, $10,000-plus GPUs, not an RTX 4090).


@heimdali

It's one of 3 GPUs, it's for dev, and Thunderbolt seems like it adds a driver problem into the mix, which I'm looking to avoid. Even if I were to pass the GPUs through on PCI-E from one system to another, I don't think that's raw exposed hardware but still virtualization, which adds another possible variable I didn't want to deal with. The feature sets of the Epyc and the Xeon I am looking at differ enough that having both on hand would be helpful.

 

@NF-A12x25 

I started pricing out and designing something a while back for my dev stuff and ultimately decided on trying to create something fun and very unique that involves taking everything and the kitchen sink and making it function in a singular portable machine. Red, green and blue GPUs, blue and red CPUs, find a Jetson, stick an Edge TPU in a USB slot, mix it all, see if AMD has a Xilinx VCK5000 accelerator card they want to slip into my pocket, etc. I like the idea of hardware agnosticism, but I would also like to make sure whatever I build actually works on what it's supposed to.

 

I'm not a nut, lemme explain. I've got seven years of post-secondary education to complete at 37 years old, after spending two years getting my ass up from being a high school dropout, and this next seven years is all about soft/hard dev education, so I'll be in my mid-forties when I graduate. I figure I'm gonna miss my midlife crisis, and I'm too disabled to ride a motorcycle anymore, so I'm going to try to get two birds stoned with one bush and build my nerdbox devbox. The goal is to dev around overcoming barriers and disabilities, creating solutions so one's impairments don't prevent them from creating in any form: musical tunes, sculpts, paint, don't matter.

 

Nerve damage took away my ability to work clay; there were digital means to learn to overcome that. I realize that others don't have that luxury: parkinsonism and holding a paintbrush, for example. If a person loved to paint as their way of finding joy, and they lost that joy on top of everything else Parkinson's disease does? No way, hombre, it's time to dev an AI-compensated stylus-style brush to give them back what was unfairly taken. Guy likes jamming out on his guitar and then loses his arm? Time to dev. Know what I mean? So lots of education needed, on top of what I've taught and am teaching myself. This is all I do anymore.

 

tl;dr: A nut on the internet wants to build a 64-bit midlife crisis that can play Minesweeper at 600 fps so losing an arm doesn't mean a person has to join Def Leppard.

 

 

 


It sounds like you need to develop a mainboard that has two sockets, one for a Xeon and the other for an Epyc CPU.  That's something I'd try to avoid ...

 


@heimdali Sadly, even if a board like that existed, it wouldn't work. Intel used to sell NUC compute elements that went in a PCIe slot, Phi coprocessors too, but that's not the same thing. Who knows what is on the horizon with CXL and such, though.

 

In the end, the simplest way I could figure it out on paper is two systems sharing video cards physically but not simultaneously. Now, it would be the cat's ass to have an SoC that acted as a switch device to enable/disable/route from software, but I don't think the handshake for that kind of GPU hot-swapping is supported, eGPU only. I guess for portability a guy could just use notebook mainboards to solve the problem, but that stops the mix 'n' match.

 

 

 


Get two cards and plug one into each system?  Why would a strange approach like the one you have in mind be necessary?

 


@heimdali It's not a matter of 2 cards then, it would be six. One each of blue, red and green flavours. That's a lot more space consumed and a lot more hardware. Think of two motherboards as bread, one in the blue flavour, one in the red. The idea is the GPUs are the meat in the sandwich.

If I was one of them YouTube cowboys with all the free stuff, I'd say no problem and get somebody to build me a custom waterblock that can sandwich 3 PCBs together in the space of one 4090, then riser-cable them to each slot. I ain't one of those YouTube cowboys.


@heimdali Well, I figure that's always a possibility. The GPU miners used boards that split a PCI-E slot into multiple with a riser card so they could run more GPUs; this is just the reverse of that. I mean, physically nothing is stopping me from getting some of that miner riser kit, getting out the soldering iron and throwing things in reverse, but that leaves the problem of it being switchable. PCI-E is what, 164 pins for x16? I don't know if they make any relays to switch that many circuits. There's most likely some kind of IC solution for that, though.

 

A guy could control the switching automagically by using an unused fan header from the motherboard: whichever board is juicing the fan header gets the GPUs, with manual switching when needed.

I was looking at mATX boards today and realized ASRock makes an Epyc board with 4 full PCIe 4.0 x16 slots. Then I started thinking more about the ability for something like this to be "small": two motherboards with CPU waterblocks, each a slice of bread around an inner core that sandwiches 3 differing GPU PCBs, a small radiator on one side with fans blowing through it, fed by push fans on the other side so the airflow exhausts over the radiator and clears the sandwich of ambient heat as well. I would have to flatten the PSU, but that means only one PSU to get it to fit in 12x12x5", and now I'm just working myself backwards.  Bah.
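And if the fan-header trick were used, the software side on each board could stay dead simple: at boot, check whether the shared cards actually enumerated on this board's bus before starting anything GPU-heavy. A rough sketch, assuming a Linux host on each side; the vendor/device IDs below are placeholders, not the actual cards:

```python
#!/usr/bin/env python3
# Rough sketch: decide at boot whether the shared GPUs are currently routed to
# this board by scanning sysfs for the expected vendor:device IDs.
# The IDs below are placeholders for the three shared cards, not real values.
import os
import sys

PCI_DEVICES = "/sys/bus/pci/devices"

EXPECTED = {
    ("0x1002", "0xaaaa"),  # placeholder: the "red" card
    ("0x10de", "0xbbbb"),  # placeholder: the "green" card
    ("0x8086", "0xcccc"),  # placeholder: the "blue" card
}

def read_id(dev: str, name: str) -> str:
    with open(os.path.join(PCI_DEVICES, dev, name)) as f:
        return f.read().strip()

present = {(read_id(d, "vendor"), read_id(d, "device"))
           for d in os.listdir(PCI_DEVICES)}

missing = EXPECTED - present
if missing:
    print(f"Shared GPUs are not routed to this board; missing: {sorted(missing)}")
    sys.exit(1)

print("All shared GPUs enumerated on this board; safe to start GPU workloads.")
```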

 

 

 

 


PCI isn't hot-pluggable, so you can't just switch.  You need to forget about putting it all together into one thing and find a more feasible approach.

 


9 hours ago, heimdali said:

PCI isn't hot-pluggable, so you can't just switch.  You need to forget about putting it all together into one thing and find a more feasible approach.

 

 

PCIe does support hot swap. I believe there are some enterprise examples. 

 

This problem isn't a hot swap situation. Actual electrical isolation of one x16 PCIe bus and connection of another is required. Presumably done with no power present, theoretically allowing for a simple mechanical solution. An electrical solution would likely be much more difficult.

 


 


7 hours ago, brob said:

PCIe does support hot swap. I believe there are some enterprise examples. 

There are very few examples; usually it doesn't.  When it doesn't and you do it anyway, the computer may turn on unexpectedly if you forgot to unplug the power cable.  So no, you can't just switch ...

7 hours ago, brob said:

This problem isn't a hot swap situation. Actual electrical isolation of one x16 PCIe bus and connection of another is required. Presumably done with no power present, theoretically allowing for a simple mechanical solution. An electrical solution would likely be much more difficult.

This is a mysterious situation with a very far-fetched idea.  A simple mechanical solution would be to take one card out and plug another one in, thereby sooner or later wearing out the contacts.  That doesn't seem to be what's wanted.  Using relays might work with hot-pluggable PCIe, but will that create a reliable connection?  And what if one of the relays hangs, you wanna debug that? 🙂

