
DX12: Say Goodbye to Red vs Green?

Omon_Ra

http://www.pcworld.com/article/3036760/hardware/the-impossible-has-happened-radeon-and-geforce-come-together-in-directx-12.html

 

The weather forecast for hell is snow. And, yes, those are pigs taking flight. Indeed, one of the most unthinkable events in PC gaming is underway thanks to DirectX 12: GeForce and Radeon cards can run side-by-side in a single PC. The long-touted, but not quite public feature in DX12 that makes this possible is Explicit Multi-GPU. It lets games parcel out graphics chores to any GPU that supports a multi-GPU mode.

 


 

So it appears DirectX 12 is starting to implement multi-GPU support between AMD and Nvidia cards. Support does need to be added per game; Ashes of the Singularity is the flag-bearer for the time being. While the article doesn't go into the details of how it works, it looks like it scales and performs better than SLI. I'm interested in how VRAM works: will you be restricted to the smaller amount (think RAID, or mixing RAM sticks of varying sizes/speeds), or will it combine them? I suppose it really depends on how the game implements it and what is being processed on each card, but the possibilities! And no more SLI bridges! The article also doesn't say how many GPUs can be combined; I think the author should have tried two 980s AND the Fury X, but this support is still in beta, so there could be updates later. I can only imagine how this would help with the incoming VR wave and heavy titles like Star Citizen.
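To make the "parcel out graphics chores" idea concrete, here's a toy model of how an engine might split one frame's rendering passes across mismatched cards. This is plain Python and purely conceptual, not the actual DX12 API; the GPU names, task costs, and performance scores are all invented for illustration.

```python
# Conceptual sketch of explicit multi-GPU work splitting (NOT real
# D3D12 code). Each frame's passes are handed to whichever card would
# finish its current queue soonest, so a faster card naturally takes
# on more work. All names and numbers below are made up.

def split_frame_work(tasks, gpus):
    """Greedily assign each task to the GPU that would finish it
    soonest, measuring load in time (cost / performance score)."""
    queues = {name: [] for name, _ in gpus}
    load = {name: 0.0 for name, _ in gpus}   # queued time per card
    perf = dict(gpus)
    for task, cost in sorted(tasks, key=lambda t: -t[1]):
        # pick the card with the earliest projected finish time
        target = min(load, key=lambda n: load[n] + cost / perf[n])
        queues[target].append(task)
        load[target] += cost / perf[target]
    return queues

# Hypothetical frame: four passes with relative costs, two mismatched
# cards with hypothetical relative performance scores.
frame_tasks = [("shadow maps", 3.0), ("g-buffer", 5.0),
               ("lighting", 6.0), ("post-fx", 2.0)]
gpus = [("GTX 980", 1.0), ("Fury X", 1.05)]
print(split_frame_work(frame_tasks, gpus))
```

The point of the sketch is only that nothing in this scheme requires the two cards to be the same model or even the same vendor, which is exactly what SLI/CrossFire did require.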


This is either going to be great or flop. I so hope it'll be the former.


I didn't try mixing two cards since I only have Nvidia GPUs, but I'd guess DX12 would only give you the performance of the secondary GPU and not all of its features; I don't think you could have both FreeSync and G-Sync in one system at the same time, though I may be wrong.
And another thing: I don't think DX12 can mix two cards if they don't both support DX12. Say, an HD 5450 and a GTX 480? Or can it?


Well, I don't expect this to work very well for parallel work splitting, since the cards are not in sync, but it could work for offloading certain calculations from the main card to a second or third.

Hopefully Vulkan supports this as well and we enter a new era for PC gaming where I can keep my old GTX 670, add a new Polaris GPU, and get 2x the performance without needing matching frequency/VRAM capacity/driver support or sync between cards :D


12 minutes ago, Remon Salazar said:

I didn't try mixing two cards since I only have Nvidia GPUs, but I'd guess DX12 would only give you the performance of the secondary GPU and not all of its features; I don't think you could have both FreeSync and G-Sync in one system at the same time, though I may be wrong.
And another thing: I don't think DX12 can mix two cards if they don't both support DX12. Say, an HD 5450 and a GTX 480? Or can it?

From what the article says (it's not too technical, but oh well), it seems that the game distributes the graphics processing over the two cards:

 

It lets games parcel out graphics chores to any GPU that supports a multi-GPU mode.

 

I imagine if you have a FreeSync monitor, you can run your outputs from the AMD card and use FreeSync, and likewise for G-Sync/Nvidia. It would depend on the game and drivers, I suppose, but I don't think that would matter much. And yes, the cards need to support DX12 (it is a DX12 feature, after all).


2 minutes ago, Katorice said:

inb4 nvidia removes feature cuz "bugs" and "consumers first"

But at the same time, this could work between, say, a 960 and a 970, which could prove more cost-effective than a single 980 Ti, for example.


I doubt anyone is going to buy cards from both vendors (or completely different cards from the same vendor) knowing that the number of DX12 games right now is small, and the number of DX12 games that use Explicit Multi-GPU will be far smaller than that.


10 minutes ago, Misanthrope said:

But at the same time, this could work between, say, a 960 and a 970, which could prove more cost-effective than a single 980 Ti, for example.

A 960 and 970 should already work as they are the same GPU die.

Nvidia just doesn't allow it, unlike AMD, which lets you Crossfire a 290 with a 290X, or a 7970 with a 7950 and a 280X.

 

Nvidia = Apple

 

BTW guys, now is the time to buy used HD 7990s and GTX 690s


8 minutes ago, Misanthrope said:

But at the same time, this could work between, say, a 960 and a 970, which could prove more cost-effective than a single 980 Ti, for example.

A 960 + 970 is cheaper than a single 980 Ti. Cannibalizing the 980 Ti, ouch. I don't know if they'll roll with that :/


Just now, That Norwegian Guy said:

A 960 and 970 should already work as they are the same GPU die.

Nvidia just doesn't allow it, unlike AMD who let you Crossfire a 290 and a 290x, or a 7970 and a 7950 and a 280x.

 

Nvidia = Apple

Yeah, I realize my point wasn't too clear, but that's what I'm saying: Nvidia might try to prevent this, but not necessarily because they don't want AMD + Nvidia. It's because they don't want Nvidia + Nvidia, since it makes high-end cards mostly irrelevant.

 

Because it should also work across different chips too, i.e. a 970 paired with a future mid-range Polaris card. You'd get the async performance needed, but without the price premium of a mid-high or high-end card.


1 minute ago, Katorice said:

A 960 + 970 is cheaper than a single 980 Ti. Cannibalizing the 980 Ti, ouch. I don't know if they'll roll with that :/

They can either disable the feature and piss people off, or they can rework their pricing strategy.


1 minute ago, Katorice said:

A 960 + 970 is cheaper than a single 980 Ti. Cannibalizing the 980 Ti, ouch. I don't know if they'll roll with that :/

Considering it'll ONLY work in certain games (and even DX12 games will probably rarely support multi-GPU for the foreseeable future), this won't be a thing.


It looks tough to implement, so I think game devs are going to have a hard time integrating this into their games.


Just now, PerfectTemplar said:

Considering it'll ONLY work in certain games (and even DX12 games will probably rarely support multi-GPU for the foreseeable future), this won't be a thing.

 

3 minutes ago, Misanthrope said:

Yeah, I realize my point wasn't too clear, but that's what I'm saying: Nvidia might try to prevent this, but not necessarily because they don't want AMD + Nvidia. It's because they don't want Nvidia + Nvidia, since it makes high-end cards mostly irrelevant.

Nvidia could in theory disable it; however, it's more up to the game developers, as multi-GPU support is part of DX12.

 

While DirectX 12 enables Explicit Multi-GPU support, it’s entirely up to developers to support it...The difference with Explicit Multi-GPU is it’s now baked into DirectX 12 as a standard. By default, that means AMD and Nvidia have to support it to an extent. If more developers support Explicit Multi-GPU’s  mix-and-match capability, it may soon be feasible to buy whatever GPU is on sale to meet your needs. 


Also, sorry for borderline spamming the thread, but here's another point: Nvidia might not be able to block async compute across multiple GPUs, but they probably don't need to anyway. They're still in the overwhelming leadership position in the GPU market right now, and their influence is fairly significant. We've seen tons of Nvidia GameWorks titles as of late, and they could easily modify the program to disallow multiple GPUs, citing some technical-sounding nonsense. Traditional SLI support only.

 

That would be enough to basically stop this dead in its tracks: you can either sink effort and money into implementing this, or you can accept this big briefcase filled with Nvidia cash to make it "GameWorks" and forget the feature. Hmm, what to do? I guess I'll take the fucking money instead of spending it.


9 minutes ago, That Norwegian Guy said:

A 960 and 970 should already work as they are the same GPU die.

Uhh... I think you have the wrong chips. The 960 is GM206 and the 970 is a cut-down GM204. Do you mean 980 + 970?

 

Regardless, neither manufacturer is going to allow this. What's going to happen is AMD is going to "allow it" for marketing purposes but bank heavily on the fact that NVIDIA is going to say "oh hell no," and then NVIDIA is going to say "oh hell no."


1 hour ago, powderbanks said:

While this article doesn't go into the details of how it works, it looks like it scales and performs better than SLI. I'm interested in how VRAM works: will you be restricted to the smaller amount (think RAID, or mixing RAM sticks of varying sizes/speeds), or will it combine them?

Here: "It lets games parcel out graphics chores to any GPU that supports a multi-GPU mode."

 

I understand it like this: a GPU has to do texture filtering, rendering of objects, rendering of lighting, etc. With DX12, each GPU could be given a specific task instead of dealing with everything on its own.

 

So both cards keep their own VRAM. No adding, no combining. The GPU with less VRAM would simply do less work if it runs into its VRAM limit.

 

The multi-GPU feature would also mean the end of SLI/CrossFire bridges.

 

It all sounds perfect to me, but the real world won't be that easy. I'm sure Nvidia dislikes this a lot more than AMD does.
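The "no adding, no combining" reading above can be illustrated with a toy allocator. This is purely a conceptual sketch in Python, not anything from DX12 itself; the asset names, sizes, and card budgets are all invented for the example.

```python
# Illustrative sketch of separate (non-pooled) VRAM under explicit
# multi-GPU: every asset must fit on ONE card's memory, so the budgets
# never add together. All sizes in GB are invented.

def place_assets(assets, vram_budgets):
    """Greedy placement: each asset goes to the card with the most
    free VRAM; raises if no single card can hold it (no pooling)."""
    free = dict(vram_budgets)
    placement = {}
    for asset, size in sorted(assets, key=lambda a: -a[1]):
        card = max(free, key=free.get)
        if free[card] < size:
            raise MemoryError(f"{asset} ({size} GB) fits on no card")
        free[card] -= size
        placement[asset] = card
    return placement

assets = [("textures", 2.5), ("shadow atlas", 1.0), ("geometry", 1.5)]
budgets = {"card A (4 GB)": 4.0, "card B (2 GB)": 2.0}
print(place_assets(assets, budgets))
```

Note that a hypothetical 5 GB asset would fail here even though the two cards hold 6 GB combined, which is exactly the "restricted to what fits per card" behavior being speculated about in this thread.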


36 minutes ago, dragosudeki said:

Doubt anyone is going to buy cards from both vendors (or completely different cards from same vendor) knowing that the number of DX 12 games (right now) is small, and the number of DX 12 games that use Explicit Multi GPU will be much more smaller than that.

I saw some tests on this a while back, and IIRC some odd things were true. For one, the order mattered (980 Ti + Fury X performed differently than Fury X + 980 Ti), and (again IIRC) mixed-brand setups (or at least one of them) beat everything else, even dual 980 Tis... I really have to find that report again to make sure :D


I didn't notice anything about RAM usage in the article (I'm at work, so I can only skim before my manager notices!). I take it card #1's RAM will be used and card #2's will be redundant, as with SLI and CrossFire (or whatever it's called these days)?

 

Please tell me I'm wrong and it could theoretically be doubled...?


Perhaps if we're fortunate (like a 2% chance), both GPU companies will kiss and make up, and hold off the rise of Intel iGPUs.

 

Could happen...


Just now, xeon48 said:

I didn't notice anything about RAM usage in the article (I'm at work, so I can only skim before my manager notices!). I take it card #1's RAM will be used and card #2's will be redundant, as with SLI and CrossFire (or whatever it's called these days)?

There's no details in the article, but @Thony surmises that since processes will be distributed to the cards, depending on what each card is tasked with, it will use its VRAM as needed.


1 minute ago, powderbanks said:

There's no details in the article, but @Thony surmises that since processes will be distributed to the cards, depending on what each card is tasked with, it will use its VRAM as needed.

Cheers for the info.

 

Wow, depending on how the processes are divided, this could be the real bonus.

 

Imagine a "1080X" with 8GB(?) of HBM2 on DX12 with a supporting game; we'd be talking dual 1080s with shared memory resources..? Bring on the VR...

 

A shot in the dark: one set of RAM holds the visual graphics while the other holds and deals with processes of some sort? I'd imagine splitting the visuals between two sets of RAM on separate cards could lead to choppiness of some sort.

