
Are 3 GPUs even a good idea?

OK, for all you software nerds out there: is this a good idea, or even possible at all? Having 3 GPUs in a PC (in my case a VR PC), two of which render content at 4K and spit it to a ridiculous amount of RAM (by the time this is a reality, 10,000+ MHz DDR5 will be the norm). The third, when asked, then frames it up at eye definition, adds color, etc., and spits it to the headset.

Would it be easier to go the traditional route and render each frame separately? How would I go about doing that?

Also, if the render-then-frame thing will work, will I need a ridiculously high-clocked CPU? Obviously I'll at least need some Epyc or other (since Threadripper is predicted to go away 😞), but will it have to be clocked at something crazy to keep up with spitting that info at the three GPUs? Not that they'll be on different boards or anything; I'm just not sure if the clocking will perfectly line up with all those variables.

Thanks for your help!


13 hours ago, UltraNerd said:

since Threadripper is predicted to go away 😞 )

Who said that?

 

I'm confused as to what you want to do... It sounds like SLI rendering on 2 GPUs, then video out on a third.

 



13 hours ago, UltraNerd said:

Having 3 GPUs in a PC

Dead concept.

 

13 hours ago, UltraNerd said:

but will it have to be clocked at something crazy to keep up with spitting that info at the GPUs?

Unless DirectX 11 games are somehow still ubiquitous in your "future", DirectX 12's and Vulkan's preference for 6-8 cores makes the "5 GHz or bust" sentiment dead. Besides, instructions-per-clock (IPC) gains matter more for hardware advancement than clock speed. The GHz war is truly dead, kid; let's face that.

 

13 hours ago, comander said:

Coordinating activities between parts requires overhead.

This, plus the other main issue for multi-GPU gaming: it's ridiculously hard to make a game learn how to use more than one of them. Nvidia gave up and handed the SLI-profile job to the developers, and look how that turned out; nobody wants to make SLI profiles for their games because only a tiny minority of users runs SLI on modern hardware.
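As a rough illustration of what handing the job to developers means in practice: under DirectX 12's explicit multi-adapter model, the application itself has to enumerate every GPU and create a separate device for each one before it can even think about splitting work. A minimal, hypothetical sketch (Windows-only, not taken from any real engine):

```cpp
// Hedged sketch: enumerate every GPU and create one D3D12 device per adapter.
// Splitting actual rendering work between devices (AFR, split-frame, etc.) is
// entirely up to the application; nothing here is automatic.
// Link against d3d12.lib and dxgi.lib.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the software rasterizer

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
        // The hard part starts after this: per-device command queues,
        // cross-adapter heaps, and synchronization all have to be written by hand.
    }
    return devices;
}
```

That per-game, hand-written work split is exactly what almost no developer wants to maintain for a tiny slice of users.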

 



19 minutes ago, UltraNerd said:

Having 3 GPUs in a PC (in my case a VR PC), two of which render content at 4K and spit it to a ridiculous amount of RAM (by the time this is a reality, 10,000+ MHz DDR5 will be the norm). The third, when asked, then frames it up at eye definition, adds color, etc., and spits it to the headset.

Mmmno. For one, you can't somehow magically alter the image for stereo-vision after the fact; it has to happen during rendering. Secondly, the same goes for colour. You can't just render an image in greyscale and add colour to it afterwards, because you need to render the image to know what colour goes where.
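For context, a minimal sketch of why stereo has to happen at render time: the scene is drawn twice per frame with two slightly different view matrices, one per eye. Everything here (GLM, the drawScene() callback, the projection numbers) is a placeholder for illustration, not any real VR runtime's API:

```cpp
// Hedged sketch of per-eye stereo rendering using GLM. drawScene() and the
// projection values are placeholders; real VR runtimes (OpenVR, OpenXR) hand
// you the per-eye poses and projection matrices instead.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

void drawScene(const glm::mat4& view, const glm::mat4& proj); // placeholder

void renderStereoFrame(const glm::mat4& headView, float ipdMeters = 0.064f)
{
    glm::mat4 proj = glm::perspective(glm::radians(100.0f), 1.0f, 0.1f, 100.0f);

    for (int eye = 0; eye < 2; ++eye)
    {
        // Offset the camera half the interpupillary distance left or right.
        float offset = (eye == 0 ? -0.5f : 0.5f) * ipdMeters;
        glm::mat4 eyeView = glm::translate(glm::mat4(1.0f),
                                           glm::vec3(-offset, 0.0f, 0.0f)) * headView;

        // The scene must be rendered once per eye; the second viewpoint cannot
        // be reconstructed from a single finished 2D image.
        drawScene(eyeView, proj);
    }
}
```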



1 hour ago, UltraNerd said:

OK, for all you software nerds out there: is this a good idea, or even possible at all? Having 3 GPUs in a PC (in my case a VR PC), two of which render content at 4K and spit it to a ridiculous amount of RAM (by the time this is a reality, 10,000+ MHz DDR5 will be the norm). The third, when asked, then frames it up at eye definition, adds color, etc., and spits it to the headset.

Would it be easier to go the traditional route and render each frame separately? How would I go about doing that?

Also, if the render-then-frame thing will work, will I need a ridiculously high-clocked CPU? Obviously I'll at least need some Epyc or other (since Threadripper is predicted to go away 😞), but will it have to be clocked at something crazy to keep up with spitting that info at the three GPUs? Not that they'll be on different boards or anything; I'm just not sure if the clocking will perfectly line up with all those variables.

Thanks for your help!

Are you asking theoretically or practically?

 

Practically, the answer is no; this is not possible and never will be.

 

Theoretically, the answer is still no, because it is not an efficient way to do this, as others have already explained.


Imagine playing VR with such latency in your processing pipeline



6 hours ago, Moonzy said:

Imagine playing VR with such latency in your processing pipeline

You'd feel sick after 10 seconds.


-> Moved to Graphics Cards



23 hours ago, WereCatf said:

Mmmno. For one, you can't somehow magically alter the image for stereo-vision after the fact; it has to happen during rendering. Secondly, the same goes for colour. You can't just render an image in greyscale and add colour to it afterwards, because you need to render the image to know what colour goes where.

O.K. thanks.


On 4/23/2021 at 6:26 PM, comander said:

Coordinating activities between parts requires overhead.  Sharing memory is particularly hard. 

 

For anything that needs to be responsive, this creates more trouble than it's worth. For things that are not time sensitive (read: you want to crunch a lot of numbers but only care about when it's all done, not when a small bit is done) it's already being done. 

 

For the type of stuff you're describing, it's probably better just to have one big expensive GPU and one big expensive CPU rather than trying to build a mini supercomputer at home. For a lot of tasks that are hard to parallelize or have concurrency issues, an overclocked gaming rig is actually about as fast as it gets (high single-threaded performance).

Well, that's the thing: even an RTX 3090 can barely keep up with 8K ("Can any graphics card really game at 8K?"), and this would be something like 2500 PPI on a 5-inch display, or whatever gives you a full 180° FOV. I really, really don't want to do anything custom for this build; it would only be an accessory to a very expensive proof of concept. I know that CPUs practically go on forever, but how do you recommend getting more GPU power than a 3090 Suprim X can provide, if adding more is a terrible idea?
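To put a rough number on that, here is a back-of-envelope calculation, assuming the 5-inch figure is the panel diagonal and a 16:9 shape (neither of which is specified above):

```cpp
// Back-of-envelope pixel count for a hypothetical 2500 PPI, 5-inch-diagonal,
// 16:9 panel, compared against 8K UHD. All inputs are assumptions, not specs.
#include <cmath>
#include <cstdio>

int main()
{
    const double ppi = 2500.0, diagonalInches = 5.0;
    const double aspectW = 16.0, aspectH = 9.0;

    // Convert the diagonal to width/height via the aspect ratio.
    const double diagUnits = std::sqrt(aspectW * aspectW + aspectH * aspectH);
    const double widthIn  = diagonalInches * aspectW / diagUnits;
    const double heightIn = diagonalInches * aspectH / diagUnits;

    const double pixels   = (widthIn * ppi) * (heightIn * ppi);
    const double pixels8K = 7680.0 * 4320.0;

    std::printf("~%.0f x %.0f px, %.1f MP (8K UHD is %.1f MP)\n",
                widthIn * ppi, heightIn * ppi, pixels / 1e6, pixels8K / 1e6);
    // Prints roughly 10,895 x 6,128 px, about 67 MP; 8K UHD is about 33 MP.
}
```

Under those assumptions that single panel is roughly twice the pixel count of 8K, before doubling for two eyes or pushing VR-grade frame rates.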


On 4/23/2021 at 6:28 PM, HelpfulTechWizard said:

Who said that?

 

I'm confused as to what you want to do... It sounds like SLI rendering on 2 GPUs, then video out on a third.

 


"AMD is dumb-like a fox" that would be Linus. The person whose company runs this website.


Yeah, probably. Most people's version of "inconsistent frame rates" is a lot less forgiving than mine, and, once again, this is a proof of concept; it's not going to work well anyway! So shaky frame rates aren't at the top of my list, just as long as you won't get sick.

The 8K thing: "If it can't handle 8K, then how is it going to run an AAA game at eye definition and frame rates?" was my thought process. Correct me if I'm wrong on that. (Not eye responsiveness; I gave up on that.)

 

Hey, another question that I didn't post: can you run a super powerful CPU for responsive number crunching with almost no RAM? That way it would only have room for the algorithms it needs to run, and would have to crunch the numbers as soon as it got them. Or is that a super dumb idea, because you can just code it to do that and only use RAM when it's overloaded?

Thanks for all your help!


On 4/24/2021 at 6:34 PM, UltraNerd said:

O.K. thanks.

Although I'm confused. What's upscaling?


1 hour ago, UltraNerd said:

Hey, another question that I didn't post: can you run a super powerful CPU for responsive number crunching with almost no RAM? That way it would only have room for the algorithms it needs to run, and would have to crunch the numbers as soon as it got them. Or is that a super dumb idea, because you can just code it to do that and only use RAM when it's overloaded?

Thanks for all your help!

CPUs already try to avoid RAM. They have some memory that is smaller but faster than RAM, called the CPU cache. https://www.makeuseof.com/tag/what-is-cpu-cache/amp/
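To illustrate why the cache matters, here is a small hypothetical example (my own, not from the linked article): summing the same matrix two ways, one walking memory contiguously, the other striding across it.

```cpp
// Hedged example of cache-friendly vs cache-hostile access. Both functions sum
// the same N x N matrix stored in row-major order; the row-major walk touches
// consecutive addresses and stays in cache, while the column-major walk jumps
// N*8 bytes every step and is typically several times slower on large matrices.
#include <vector>
#include <cstddef>

constexpr std::size_t N = 4096;  // arbitrary size for illustration

double sumRowMajor(const std::vector<double>& m)      // cache-friendly
{
    double sum = 0.0;
    for (std::size_t row = 0; row < N; ++row)
        for (std::size_t col = 0; col < N; ++col)
            sum += m[row * N + col];   // consecutive addresses
    return sum;
}

double sumColumnMajor(const std::vector<double>& m)    // cache-hostile
{
    double sum = 0.0;
    for (std::size_t col = 0; col < N; ++col)
        for (std::size_t row = 0; row < N; ++row)
            sum += m[row * N + col];   // large stride between accesses
    return sum;
}
```

The work is identical; only the access order (and therefore how often the CPU has to go out to RAM) differs.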


14 minutes ago, Craftyawesome said:

O.K., I'm not that dumb; I do know what cache is. I don't mean "avoid" DDR5 so much as "avoid at all costs short of breaking". Do you think that's possible with software, or will I have to limit the system's DDR5? (The algorithms in question are too big for even 256 MB of cache.)

PS: sorry if you got this twice. Weird data thing. 


On 4/23/2021 at 8:49 PM, Moonzy said:

Imagine playing VR with such latency in your processing pipeline

Yeah, trying to get rid of that, but.....


27 minutes ago, UltraNerd said:
58 minutes ago, Craftyawesome said:

O.K., I'm not that dumb; I do know what cache is. I don't mean "avoid" DDR5 so much as "avoid at all costs short of breaking". Do you think that's possible with software, or will I have to limit the system's DDR5? (The algorithms in question are too big for even 256 MB of cache.)

PS: sorry if you got this twice. Weird data thing. 

I mean, things would break if the CPU was unable to access RAM and what it needs isn't in the cache. And the cache's goal is to minimize RAM accesses. You can optimize your program to be more cache-friendly.

I'm not really sure what limiting RAM usage is supposed to accomplish here. And programs don't usually allocate memory for no reason, AFAIK.

 

Rereading this

4 hours ago, UltraNerd said:

it would only have room for the algorithms it needs to run

it sounds like you want an ASIC.


On 4/26/2021 at 6:22 PM, Craftyawesome said:

I mean, things would break if the CPU was unable to access RAM and what it needs isn't in the cache. And the cache's goal is to minimize RAM accesses. You can optimize your program to be more cache-friendly.

I'm not really sure what limiting RAM usage is supposed to accomplish here. And programs don't usually allocate memory for no reason, AFAIK.

 

Rereading this

it sounds like you want an ASIC.

After I sent that, I realized, "Oh yeah, you wouldn't have RAM if you don't need it." Dumb question.

 

I didn't know what an ASIC was, but I just googled it, and probably! Do you think it's possible to design one complex enough to make a very accurate 3D model with input from 8 cameras and 8 LIDAR sensors? What about 16 of each?

I'm trying to put you in VR without a green screen and without annoying "full-body trackers." This CPU or ASIC would be the pre-digester for a main PC, the one that would apparently NOT have 3 GPUs.

Thanks for all your help! 


2 hours ago, UltraNerd said:

I didn't know what an ASIC was, but I just googled it, and probably! Do you think it's possible to design one complex enough to make a very accurate 3D model with input from 8 cameras and 8 LIDAR sensors? What about 16 of each?

I'm trying to put you in VR without a green screen and without annoying "full-body trackers." This CPU or ASIC would be the pre-digester for a main PC, the one that would apparently NOT have 3 GPUs.

Thanks for all your help! 

Maybe? But keep in mind that this isn't for individual consumers. The engineering and manufacturing costs would probably be in the millions. So maybe see if a CPU/GPU is powerful enough first.

 

I'm not very experienced with recording VR, but what exactly is your end goal? If you only need one camera angle, there are far easier ways than making yourself a 3D model.


On 4/28/2021 at 4:42 PM, Craftyawesome said:

Maybe? But keep in mind that this isn't for individual consumers. The engineering and manufacturing costs would probably be in the millions. So maybe see if a CPU/GPU is powerful enough first.

 

I'm not very experienced with recording VR, but what exactly is your end goal? If you only need one camera angle, there are far easier ways than making yourself a 3D model.

Actually, after thinking about it, it wouldn't work at all, because I'm going to have to prototype the algorithms, which will involve a lot of changes.

My end goal is to put you in VR while you're on my new and improved VR ball. So I won't be recording it; the computer will be rendering YOU (creepy), deciding how you're going to interact with the environment, and then "skinning" you with your own image.

So this CPU will be pre-digesting you for the main CPU to do all that. WAY more than one angle. What CPU do you think will be powerful enough to hook both sets of sensors up and stitch them together?

The silly goal of no RAM was to force it to do that faster. Dumb.

 

