
Possible 100% scaling with dual GPU?

Mr.Meerkat

AMD are pushing dual GPU because they want small dies, to increase the number of good dies per wafer.

The problem is that, as we've seen, software takes ages to catch up. I believe that is AMD's problem: their innovation takes too long to be utilised, so by the time it matters the competition has it too.


33 minutes ago, cj09beira said:

AMD are pushing dual GPU because they want small dies, to increase the number of good dies per wafer.

The problem is that, as we've seen, software takes ages to catch up. I believe that is AMD's problem: their innovation takes too long to be utilised, so by the time it matters the competition has it too.

Yes, they are; they want multi-GPU to become as standard as multi-core CPUs.
 

 

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


On 11/2/2016 at 0:13 AM, SpaceGhostC2C said:

While it's probably impossible to completely do away with the overhead of using multiple GPUs, I guess it's possible to reach the ideal scenario in which the actual scaling is close enough to 100% to be within the margin of error of these measurements. My understanding is that GPUs and their drivers can't guarantee this by themselves, meaning the exact scaling you get will depend on the particular game you play. I wouldn't be surprised if Deus Ex works abnormally well with Crossfire.

 

And though the claim is for their latest cards, I never really benchmarked my dual-GPU setup to see how well it scales; I only cared about being able to crank this or that setting up and get a consistent 60 FPS without added problems. Maybe I should, just to see in which range it lies.

I don't think it's possible, because it takes time for information to travel between the cards. It might be possible in point-and-click games (as the frame-time budget is pretty generous), but let's say I am playing an FPS game at 144 Hz; here we have two problems:

1) FPS players are unpredictable (at least to a CPU)

2) the frame-time budget is much smaller (6.94 ms vs 16.66 ms)

 

So if we can do one of two things, we can reduce this overhead:

1) improve the architecture to cut communication time

2) make VRAM sharing much easier, or share the VRAM outright, i.e. instead of leaving the VRAM on the second chip unused (in games that don't scale properly), use it in better ways (say, as mapped memory)
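To put numbers on point 2 above: at 60 Hz a frame has about 16.67 ms, at 144 Hz only about 6.94 ms, so a fixed inter-GPU transfer cost eats a much bigger share of the budget at high refresh rates. A rough Python sketch (the 1 ms transfer figure is an assumption for illustration, not a measured value):

```python
def frame_budget_ms(refresh_hz):
    """Time available to produce one frame at a given refresh rate."""
    return 1000.0 / refresh_hz

def overhead_share(refresh_hz, transfer_ms):
    """Fraction of the frame budget consumed by inter-GPU transfer."""
    return transfer_ms / frame_budget_ms(refresh_hz)

print(round(frame_budget_ms(60), 2))   # 16.67 ms per frame at 60 Hz
print(round(frame_budget_ms(144), 2))  # 6.94 ms per frame at 144 Hz

# Assume a fixed 1 ms transfer between the cards (illustrative figure):
print(round(100 * overhead_share(60, 1.0), 1))   # 6.0% of the budget
print(round(100 * overhead_share(144, 1.0), 1))  # 14.4% of the budget
```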


8 minutes ago, Scitesh said:

 

2) make VRAM sharing much easier, or share the VRAM outright, i.e. instead of leaving the VRAM on the second chip unused (in games that don't scale properly), use it in better ways (say, as mapped memory)

Memory is never unused in either of the GPUs. The reason you don't add them up is that each GPU keeps a copy of the exact same information in its own VRAM, since both need it to render. If you merged VRAM across multiple GPUs, each card would often have to access information stored in the other GPU, either through PCIe lanes or bridges, and that would increase the overhead, as it is much slower than reading from local VRAM.
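The mirrored-VRAM point can be illustrated with a toy model: in classic alternate-frame rendering every card holds a full copy of the working set, so usable capacity is the per-card capacity, not the sum (the pooled mode below is hypothetical):

```python
def usable_vram_gb(per_card_gb, num_gpus, pooled=False):
    """Mirrored VRAM (classic SLI/Crossfire AFR) vs a hypothetical pooled mode."""
    if pooled:
        # Only useful if cards could address each other's memory cheaply.
        return per_card_gb * num_gpus
    # Each GPU keeps its own full copy of textures, geometry, etc.
    return per_card_gb

print(usable_vram_gb(8, 2))               # 8  -> two 8 GB cards still act as 8 GB
print(usable_vram_gb(8, 2, pooled=True))  # 16 -> but remote reads would be slow
```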


4 minutes ago, SpaceGhostC2C said:

Memory is never unused in either of the GPUs. The reason you don't add them up is that each GPU keeps a copy of the exact same information in its own VRAM, since both need it to render. If you merged VRAM across multiple GPUs, each card would often have to access information stored in the other GPU, either through PCIe lanes or bridges, and that would increase the overhead, as it is much slower than reading from local VRAM.

I meant something like giving them (slightly) different setups to counter the unpredictability of a gamer, and giving them better access to each other's VRAM (kind of what AMD tried with its dual-GPU cards).


3 hours ago, shdowhunt60 said:

It's an AMD title

Every DX12/ Vulkan title is an AMD title. Whether the sticker on the box is there or not.

In case the moderators do not ban me as requested, this is a notice that I have left and am not coming back.


It is possible in theory. In practice, they probably managed to reduce the overhead to undetectable levels, which is still impressive.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


2 hours ago, That Norwegian Guy said:

Every DX12/ Vulkan title is an AMD title. Whether the sticker on the box is there or not.

Every DX11 title is an Nvidia title, whether the sticker on the box is there or not.

 

Same stupid logic

Before you buy amp and dac.  My thoughts on the M50x  Ultimate Ears Reference monitor review I might have a thing for audio...

My main Headphones and IEMs:  K612 pro, HD 25 and Ultimate Ears Reference Monitor, HD 580 with HD 600 grills

DAC and AMP: RME ADI 2 DAC

Speakers: Genelec 8040, System Audio SA205

Receiver: Denon AVR-1612

Desktop: R7 1700, GTX 1080  RX 580 8GB and other stuff

Laptop: ThinkPad P50: i7 6820HQ, M2000M. ThinkPad T420s: i7 2640M, NVS 4200M

Feel free to pm me if you have a question for me or quote me. If you want to hear what I have to say about something just tag me.


On 2016.11.01 at 8:48 PM, zMeul said:

CPU bound at 1080p, probably...

PCPer has 1440p and 4K results which show pretty much 95%-100% scaling

 

Tomb Raider 2013 was pretty close to 100% with 2 GPUs and about 90%-95% even with quad GPUs
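The CPU-bound point can be sketched numerically: the delivered frame rate is capped by whichever of the CPU or the GPU pool is slower, which is why 1080p can hide the second card while 1440p/4K show near-perfect scaling. A minimal Python sketch (all FPS figures are made up for illustration, not benchmark numbers):

```python
def delivered_fps(cpu_fps_cap, single_gpu_fps, num_gpus):
    """Frame rate is limited by the slower of the CPU and the GPU pool."""
    return min(cpu_fps_cap, single_gpu_fps * num_gpus)

def scaling_pct(one_gpu_fps, two_gpu_fps):
    """Observed dual-GPU scaling relative to the ideal 2x."""
    return 100.0 * two_gpu_fps / (2 * one_gpu_fps)

# 1080p: each GPU is fast, so the CPU cap hides the second card.
low_res_1 = delivered_fps(140, 100, 1)    # 100 FPS
low_res_2 = delivered_fps(140, 100, 2)    # 140 FPS (CPU capped)
print(scaling_pct(low_res_1, low_res_2))  # 70.0 -> looks like poor scaling

# 4K: each GPU is slower, so the second card shows up fully.
high_res_1 = delivered_fps(140, 40, 1)      # 40 FPS
high_res_2 = delivered_fps(140, 40, 2)      # 80 FPS
print(scaling_pct(high_res_1, high_res_2))  # 100.0 -> near-perfect scaling
```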

CPU: Intel i7 5820K @ 4.20 GHz | MotherboardMSI X99S SLI PLUS | RAM: Corsair LPX 16GB DDR4 @ 2666MHz | GPU: Sapphire R9 Fury (x2 CrossFire)
Storage: Samsung 950Pro 512GB // OCZ Vector150 240GB // Seagate 1TB | PSU: Seasonic 1050 Snow Silent | Case: NZXT H440 | Cooling: Nepton 240M
FireStrike // Extreme // Ultra // 8K // 16K

 


This is nice, let's hope this is the start of a nice trend.

Probably not but whatever :P

Who knows, maybe they figure something out that makes multi-gpu setups a lot easier. 

Maybe some AI that can figure that out, who knows what the future brings.

If you want my attention, quote meh! D: or just stick an @samcool55 in your post :3

Spying on everyone to fight against terrorism is like shooting a mosquito with a cannon


Just now, The Benjamins said:

Every company cherry-picks stats. Also, the way I see it, 1080p was probably CPU bound, so it was being held back, and at 1440p and up it becomes GPU bound, and that is when it can scale near-perfectly.

I still don't really find it to be worth it in almost any situation. If all developers put decent SLI/Crossfire support in their products, I would happily hop on the bandwagon. But with games like Tomb Raider not even supporting SLI... I can't justify it at all.


1 hour ago, lettuce head said:

I still don't really find it to be worth it in almost any situation. If all developers put decent SLI/Crossfire support in their products, I would happily hop on the bandwagon. But with games like Tomb Raider not even supporting SLI... I can't justify it at all.

This is not SLI or Crossfire, it is DX12 multi-GPU; it is a whole new thing.

if you want to annoy me, then join my teamspeak server ts.benja.cc

