
DirectX 12 - FAQ

Hello everyone.

 

I've noticed that there is quite a bit of confusion about DirectX 12, and a lot of people are asking the same questions over and over again. So I thought we could try to clear up some of that confusion with a little FAQ thread. Most of this information comes from various articles and news sources, and from official material released by Microsoft, Nvidia and AMD. I'm going to start off with some of the most frequent questions I've seen here, but this is not a complete guide to DX12 in any way. If something is missing, or if there's something you're wondering about that isn't answered here, please tell me and I'll try to add it.

 

Q: Will DX12 increase my graphics performance?

A: Yes, in theory. DX12 is a lower-level API than previous versions of DirectX. In practice, this means developers get more direct access to the hardware, with more control over commands that are normally handled by the driver, and need fewer calls from the CPU to get the same work done (we'll explain this below). In other words, you can do more with fewer resources. This is also what we mean when we talk about "reducing overhead". It does not mean that every DX12 game will suddenly run smoothly on your 5+ year old GPU, but it should make it easier for developers to use your hardware efficiently, for instance by spreading rendering work across multiple CPU cores.

 

Q: What does "fewer calls from the CPU" mean?

A: Every time something happens or changes in a game (or any other rendered graphics), your CPU sends instructions to your GPU telling it what to do. That's what we mean by a "call": it's basically the CPU saying "do something" to the GPU. One of the main advantages of DX12 over previous generations is that, since it operates at a lower level (see above), developers have more control over these calls. The result is being able to send fewer, more detailed instructions, which is good because it frees the CPU to do other things (reducing bottlenecking on lower-end CPUs).
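For the curious, this is roughly what those calls look like from the developer's side in D3D12. A minimal, hedged sketch (device, resource and pipeline setup omitted; the function and variable names are placeholders), just to show that the application itself records commands and submits them to the GPU in batches:

```cpp
// Minimal sketch of recording and submitting GPU work in D3D12.
// Device/resource/pipeline setup is omitted; names are placeholders.
#include <d3d12.h>

void RecordAndSubmit(ID3D12CommandAllocator* allocator,
                     ID3D12GraphicsCommandList* cmdList,
                     ID3D12PipelineState* pso,
                     ID3D12CommandQueue* queue)
{
    // Reuse the allocator's memory and start recording a new batch of commands.
    allocator->Reset();
    cmdList->Reset(allocator, pso);

    // ... set root signature, viewport, render target, vertex buffers here ...

    // Each draw is one of those "calls". The app decides how they are batched,
    // and several command lists can be recorded in parallel on different CPU
    // threads (this is the "multiple CPU cores" part mentioned above).
    cmdList->DrawInstanced(/*VertexCountPerInstance*/ 3,
                           /*InstanceCount*/ 1,
                           /*StartVertexLocation*/ 0,
                           /*StartInstanceLocation*/ 0);

    cmdList->Close();

    // Submission is explicit: the whole recorded batch is handed to the GPU at once.
    ID3D12CommandList* lists[] = { cmdList };
    queue->ExecuteCommandLists(1, lists);
}
```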

 

Q: Which GPUs will support DirectX 12?

A: Basically everything that currently supports DX11. On the Nvidia side that means Fermi and later (400 series and up), on the AMD side it means everything based on the GCN architecture (HD 7000 series and up), and on the Intel side it means Haswell-based or later integrated GPUs.

 

Q: Will I be able to use an Nvidia GPU and an AMD GPU at the same time? (multi-GPU configurations)

A: In theory, yes! DX12 supports two different multi-GPU modes, called implicit and explicit multiadapter. Implicit multiadapter mode works essentially like SLI and Crossfire do today: the driver splits the work, typically with AFR (alternate frame rendering), across two linked GPUs of similar power, without the game having to be aware of it.

 

Explicit multiadapter mode is where it gets really interesting. This mode is further divided into two sub-modes: linked GPUs and unlinked GPUs. With linked GPUs, cards in SLI or Crossfire are exposed to the game as a single device with multiple "nodes" (one per physical GPU). Instead of the driver deciding how to split the work, as with today's SLI/CFX, the developer distributes work and resources across the nodes through the API itself. This means that SLI profiles and per-game driver optimization could become a thing of the past, as long as the developer makes use of explicit multiadapter mode in DX12.
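To make the "nodes" idea concrete, here's a hedged sketch (setup and error handling omitted, the function name is just an example) of how a linked adapter group shows up through ID3D12Device::GetNodeCount and node masks:

```cpp
// Hedged sketch: on an explicitly linked adapter group, one ID3D12Device
// exposes multiple "nodes" (physical GPUs), and the app targets a specific
// node with a node mask. Setup and error handling are omitted.
#include <d3d12.h>

void CreateOneQueuePerNode(ID3D12Device* device, ID3D12CommandQueue** queuesOut)
{
    // Returns 2 (or more) for a linked SLI/CrossFire-style group, 1 otherwise.
    const UINT nodeCount = device->GetNodeCount();

    for (UINT node = 0; node < nodeCount; ++node)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << node;  // bit N selects physical GPU N in the link

        // queuesOut must have room for nodeCount entries.
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queuesOut[node]));
    }
}
```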

 

Now, unlinked GPUs are probably what you've all been waiting for. This mode lets you run two completely different GPUs at the same time. Yes, you could build a system with both an R9 Fury X and a GTX 980 Ti, and they would work together - in games that support this mode, at least. How many games will support it? Hard to say, but personally I don't think there will be many, at least not in the near future. Because all of this is handled by the game engine itself, the developers have to program the game to make sensible use of two very different GPUs at the same time (see the sketch below). That takes a lot more attention, time and resources, and it's much easier to just go with linked GPUs, or plain implicit multiadapter mode.
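As a rough illustration of what the unlinked case means for developers, here's a hedged sketch (error handling omitted, the function name is just an example) of enumerating every adapter and creating an independent D3D12 device for each one. Everything after that point, including how the rendering work is split between the devices, is entirely the engine's job:

```cpp
// Hedged sketch of the unlinked case: enumerate every adapter in the system
// and create an independent D3D12 device for each (for example a GeForce and
// a Radeon side by side). Splitting the actual rendering work between the
// devices is entirely the engine's job. Error handling is omitted.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDeviceForEachAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            devices.push_back(device);  // one independent device per GPU
        }
    }
    return devices;
}
```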

 

Q: Speaking of multi-GPU, what about stacked VRAM?!

A: IN THEORY, yes. In practice, no. That IN THEORY part is very important. Because DX12 is a low-level API, developers have a much greater degree of control over the hardware, including how the GPUs communicate with each other and how each GPU accesses its video RAM. In theory, this means developers could program their games to avoid mirroring the same data in the VRAM of both GPUs, which is what SLI and Crossfire do today. HOWEVER, there's one more thing in the way of stacked VRAM, and that is the hardware. Having two (or more) GPUs access VRAM on different cards at full speed simply isn't possible with today's interconnects. PCIe can't move that much data (not even PCIe 4.0), and the SLI bridge is even slower (because SLI is very old).

 

Let's have a look at the numbers, just to see why accessing VRAM on a different card at a reasonable speed isn't realistic.

PCIe 3.0 supports up to 15.75 GB/s on an x16 link. The SLI bridge adds another 1 GB/s, making the total bandwidth around 17 GB/s (I don't know why we even need the SLI bridge anymore).

In comparison, a modern high-end GPU like the GTX 980 Ti has about 336.5 GB/s of bandwidth between the GPU and its memory, on a 384-bit bus. That's quite a bit more than what PCIe is capable of. And as video memory continues to evolve, with HBM and the like, this already huge gap will only grow larger. And to those of you thinking "but what about PCIe 4.0?!": a 16-lane PCIe 4.0 link doubles the bandwidth of a similar PCIe 3.0 link, for a total of 31.5 GB/s - still about 10 times slower than the bandwidth between the GPU and the VRAM on the 980 Ti.
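For reference, here is a back-of-the-envelope check of those numbers. It assumes the 980 Ti's publicly listed 7 Gbps effective GDDR5 memory speed, which comes from the card's spec sheet rather than from this thread:

```latex
\[
\frac{384\ \text{bits}}{8\ \text{bits/byte}} \times 7.0\ \text{GT/s} \approx 336\ \text{GB/s}
\qquad\text{vs.}\qquad
2 \times 15.75\ \text{GB/s} = 31.5\ \text{GB/s (PCIe 4.0 x16)}
\]
\[
\frac{336.5}{31.5} \approx 10.7
\]
```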

 

For stacked VRAM to actually be of any use, all GPUs need equally fast access to all of the video memory, which, as you can see, is currently impossible.

Maybe we'll have stacked VRAM in the future. But for now, we won't.

 

Q: What operating system do I need?

A: Currently, DX12 is only supported on Windows 10, and that probably isn't going to change. So you'll just have to upgrade.

 

Q: Okay, this is all good, but what about my current DX11 games?

A: While developers could of course update their game engines to utilize DX12, it is highly unlikely that they will. So no, current DX11-based games (or older) will not see any of the mentioned improvements.

 

 

That's all I could think of for the moment. I hope this will be useful. As I said at the beginning of this post, please tell me if there's anything missing or incorrect. Thanks!



A lot of people keep saying "drivers will fix it". Could you add a section explaining that the game developer is now basically doing the work the driver used to do, so game-specific driver updates are no longer required?

Too many people keep saying this. All AMD/Nvidia can do with their DX12 drivers is implement the API better.

They no longer need to write game-specific drivers.



Pretty sure there will still be driver updates and optimizations even with DX12, but it's true that since game devs will have more direct access to the hardware, drivers will have less of an impact. We don't really know exactly how this will play out, and I don't want to state things as fact when I'm not 100% certain.


Don't post this as an FAQ, it's mostly wrong and misleading.

 

Q: Will DX12 increase my graphics performance?

 

Only if the game is draw-call limited. DX12 makes the calls lower level, which increases the number of draw calls needed to do the same thing; it's more flexible, but it also doesn't catch bugs and problems for you, so a lot more is pushed onto the developers. The practical draw-call ceiling goes from about 10,000 to about 80,000, which is a substantial reduction in per-call overhead, but so far it's looking like roughly 1.5x as many calls are used for the same game.

 

Q: What does "fewer calls from the CPU" mean?

 

Wrong question, because DX12 actually increases the number of draw calls made. The impact is a reduction in overhead per call, achieved by passing a lot of the complexity on to the developers, which makes games potentially buggier and means that problems which used to be solved in drivers now have to be fixed by the developers instead.

 

Q: Which GPUs will support DirectX 12?

 

There are two aspects to support: which GPUs support the new API style, and which support the features. You have only addressed API support, but it's feature support that confuses people the most, and it simply differs between vendors. Nvidia has the widest support for the other key features and genuinely supports all of DX12, but its support for async compute is more limited than AMD's.
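To illustrate the distinction: API support gets you a D3D12 device on any DX11-class GPU, but the optional features have to be queried at runtime. A hedged sketch (the options below are real D3D12 feature fields, the function itself is just an example, error handling is minimal):

```cpp
// Hedged sketch: API support gets you a device, but optional feature support
// has to be queried at runtime with CheckFeatureSupport.
#include <d3d12.h>
#include <cstdio>

void PrintSomeFeatureTiers(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options))))
    {
        std::printf("Resource binding tier:      %d\n",
                    static_cast<int>(options.ResourceBindingTier));
        std::printf("Tiled resources tier:       %d\n",
                    static_cast<int>(options.TiledResourcesTier));
        std::printf("Conservative rasterization: %d\n",
                    static_cast<int>(options.ConservativeRasterizationTier));
    }
}
```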

 

The rest is just speculation about the various levels of support. It could be correct, but we don't really know how any of this is going to play out. So -1 from me; not FAQ material, as it's flat out wrong.




There will probably still be driver updates, but I find those to be dishonest, since the game developer can bring in those improvements themselves. Driver teams can help code the game, but they shouldn't be replacing game code in drivers. Most crashes on Nvidia hardware these days are probably related to aggressive "optimisations" which introduce bugs (I've had my fair share of game crashes on my laptop).

 


Also important to note: most current games are designed around a maximum number of draw calls. This means porting them to DX12/Vulkan would not benefit them much, if at all, since those extra draw calls simply don't occur.

Exception would be Assassin's Creed Unity, that unoptimised piece of crap.

And DX12/Vulkan don't just bring less driver overhead to the table; there can be other ways to improve performance when you have a lower-level API (an example would be Carmack's inverse square root).
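For anyone who hasn't seen it, this is the routine being referred to: the fast inverse square root from the Quake III source, commonly attributed to Carmack. It's a CPU-side math trick rather than anything DX12-specific, shown here rewritten with memcpy so the type punning is well-defined in modern C++:

```cpp
// The classic fast inverse square root from the Quake III source, rewritten
// with memcpy so the type punning is well-defined in modern C++.
#include <cstdint>
#include <cstring>

float FastInvSqrt(float x)
{
    std::uint32_t i;
    std::memcpy(&i, &x, sizeof(i));        // reinterpret the float's bits as an integer
    i = 0x5f3759df - (i >> 1);             // the famous magic-constant first guess
    float y;
    std::memcpy(&y, &i, sizeof(y));
    return y * (1.5f - 0.5f * x * y * y);  // one Newton-Raphson refinement step
}
```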

Cheers.

