
DX12, what was promised and what we got .. a rainbow shitting unicorn

45 minutes ago, zMeul said:

bullshit!

look at my numbers, 1080p low quality settings - CPU bound scenario

the differences are minimal

 

I like how anything that doesn't agree with your opinion or testing is bullshit. 

 

Seriously. Get your head out of the sand

Do you even fanboy bro?


Your results are entirely different than Anandtech's; I'll take Anandtech's opinion over yours every day of the week.


Just now, Liltrekkie said:

 

I like how anything that doesn't agree with your opinion or testing is bullshit. 

 

Seriously. Get your head out of the sand

Hmmm.... Sounds a lot like politics...

YOU'RE FAKE NEWS!!!

COMPUTER: Mobile Battlestation  |  CPU: INTEL I7-8700k |  Motherboard: Asus z370-i Strix Gaming  | GPU: EVGA GTX 1080 FTW ACX 3.0 | Cooler: Scythe Big Shuriken 2 Rev. b |  PSU: Corsair SF600 | HDD: Samsung 860 evo 1tb

 


1 minute ago, FAQBytes said:

Hmmm.... Sounds a lot like politics...

YOU'RE FAKE NEWS!!!

[image: "fake news" meme]

Do you even fanboy bro?


12 minutes ago, leadeater said:

Come on guys keep it clean, you know who you are. Offending posts have been removed.

I am sorry for going off topic again, but this needs to be done.

We could all use a little bear-hugging right now. Make us less hateful to each other.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


43 minutes ago, Sniperfox47 said:

The problem is that render paths built around DX11 or OpenGL make a lot of assumptions about the GPU which aren't actually true with modern GPUs, but which the drivers are designed to effectively "emulate" to ensure compatibility.

And what assumptions would those be?

 

Quote

Vulkan and DirectX12 can take a lot more advantage of the semi-deferred, tile based nature of a modern renderer if their render pipeline is designed specifically with it in mind.

Tile-based rasterization has nothing to do with what API you use, and the API shouldn't care how the hardware rasterizes pixels.

 

EDIT: If you're referring to the technique of splitting up shading tasks into tiles, then okay. But AMD proved you could effectively do that in DX11 in immediate mode.


>Testing DX12

>At 1440p on a brand new i5 CPU

 

Now try doing it at 1080p on an older, lower-IPC CPU with more cores.


DX12 benefits in CPU-bound scenarios, specifically draw-call-limited ones. 1440p with a Skylake i5 is going to push the bottleneck onto the GPU, which is why the results are tiny.

 

Do the same test on an FX 4300 and see the difference it makes.
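For a rough sense of what "draw-call-limited" means here, below is a toy C++ sketch (an illustration, not a real graphics benchmark): SubmitDraw() is a hypothetical stand-in for the per-draw CPU work the API/driver does, and the object count is arbitrary. Frame time scales with the number of submissions rather than with resolution, which is exactly the cost a slower CPU like the FX 4300 exposes.

// Toy sketch of a draw-call-limited frame: CPU time scales with the number
// of per-object submissions, not with resolution. SubmitDraw() stands in
// for the per-draw validation/driver work an API like DX11 does on one thread.
#include <chrono>
#include <cstdio>

volatile long long sink = 0;                  // keeps the fake work from being optimized away

void SubmitDraw(int objectId) {
    for (int i = 0; i < 2000; ++i) sink += objectId ^ i;   // stand-in for per-draw overhead
}

int main() {
    const int objectsPerFrame = 20000;        // arbitrary; more objects means more draw calls
    auto start = std::chrono::steady_clock::now();
    for (int obj = 0; obj < objectsPerFrame; ++obj)
        SubmitDraw(obj);
    auto end = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(end - start).count();
    std::printf("CPU time spent issuing %d draws: %.2f ms\n", objectsPerFrame, ms);
    return 0;
}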

 

 

 

 

4K // R5 3600 // RTX2080Ti


3 hours ago, Citadelen said:

http://www.anandtech.com/show/10067/ashes-of-the-singularity-revisited-beta/5

Don't really know what's going on with HardOCP's results, none of the GPUs they tested are in the benches below, but the Fury X gains 50% in DX12 vs DX11. The 480 doesn't gain as much though due to the way GCN works: it has fewer shaders and therefore utilises them better, with fewer bubbles. Nvidia cards either lose performance or gain very little because their compute pipeline is very efficient, so it scales with cores much better than AMD's cards, one of the main reasons the Fury X was so disappointing considering its core count.

 

There's also more to async than just more performance; for example, AMD has tools that allow the GPU to process audio asynchronously with the graphics through TrueAudio 2.

[Spoiler: AnandTech Ashes of the Singularity DX11 vs DX12 benchmark charts]

 

All that really shows is how AMD's DX11 drivers are utter shit.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


2 hours ago, Sniperfox47 said:

Quantum Break was not designed for DX12 from the ground up. You're also comparing its implementation as a UWP to a native Win32 app which has other differences in performance characteristics. It's not a straight up DX11/DX12 comparison.

yeah sure, that's why the UWP app runs in DX what now?! 12

have you actually wondered why no one (Futuremark, Unigine, etc.) is building a DirectX-versus-DirectX benchmark?

 

Quote

Shaders run on the GPU. Do you know what uses the exact same shaders? DX11 and 12

and I'm now quoting AMD:

Quote

DirectX® 12 Async Shaders supercharge work completion in a compatible AMD Radeon™ GPU by interleaving these tasks across multiple threads to shorten overall render time. Async Shaders are materially important to a PC gamer’s experience because shorter rendering times reduce graphics pipeline latency, and lower latency equals greater performance. “Performance” can mean higher framerates in gameplay and better responsiveness in VR environments. Further, finer levels of granularity in breaking up the workload can yield even greater reductions in work time. As they say: work smarter, not harder.

 

why don't the shaders differ much? because most cards are still limited to DX11 feature levels - I still don't believe there's a single GPU that supports all DX12 feature levels; might be wrong

 

---

 

the difference in CPU workloads?

the DX11 driver takes care of multithreading, whereas a DX12 app needs to create the worker threads itself
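To make that last point concrete, here is a minimal C++ sketch of the threading pattern a DX12-style engine has to own itself (assumed example code, not real D3D12 setup): RecordChunk() is a hypothetical stand-in for filling a per-thread command list, which the main thread would then submit in one go.

// Minimal sketch of the DX12-style threading model described above.
// In DX11 the driver spreads this work internally; in DX12 the app owns it.
// RecordChunk() is a hypothetical stand-in for recording draw commands into
// a per-thread command list (device, allocators and lists omitted for brevity).
#include <cstdio>
#include <thread>
#include <vector>

struct DrawBatch { int firstObject; int count; };

void RecordChunk(int threadIndex, DrawBatch batch) {
    // Real code would record into its own command list here and hand it back
    // to the main thread for submission.
    std::printf("thread %d records objects %d..%d\n",
                threadIndex, batch.firstObject, batch.firstObject + batch.count - 1);
}

int main() {
    const int objectCount = 10000;
    const int threadCount = 4;
    const int perThread   = objectCount / threadCount;

    std::vector<std::thread> workers;
    for (int t = 0; t < threadCount; ++t)
        workers.emplace_back(RecordChunk, t, DrawBatch{ t * perThread, perThread });  // app-created worker threads
    for (auto& w : workers) w.join();   // then the main thread submits everything at once
    return 0;
}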


1 hour ago, sgloux3470 said:

>Testing DX12

>At 1440p on a brand new i5 CPU

 

Now try doing it at 1080p on an older, lower-IPC CPU with more cores.


DX12 benefits in CPU-bound scenarios, specifically draw-call-limited ones. 1440p with a Skylake i5 is going to push the bottleneck onto the GPU, which is why the results are tiny.

 

Do the same test on an FX 4300 and see the difference it makes.

see my Tomb Raider test a few pages back

draw call limitations occur only when the developers didn't do their jobs and pushed a shit product onto the market

 

draw calls are not limited/influenced by the resolution, but by the number of objects/polygons in the scene

the number of polygons that need to be rendered can be reduced by culling - game engines have already started to do it natively; before, they needed middleware (Umbra)
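As a rough illustration of that culling point, here is a minimal C++ sketch of bounding-sphere culling against frustum planes, so fewer objects ever reach the draw-call stage; the plane and object values are made up for the example, and a real engine (or middleware like Umbra) also does occlusion culling and much more.

// Minimal sketch: cull objects before they are submitted, so fewer draw calls
// are issued regardless of resolution. Plane/object values are illustrative only.
#include <cstdio>
#include <vector>

struct Vec3   { float x, y, z; };
struct Plane  { Vec3 n; float d; };              // dot(n, p) + d >= 0 means "inside"
struct Object { Vec3 center; float radius; };

bool PotentiallyVisible(const Object& o, const std::vector<Plane>& frustum) {
    for (const Plane& p : frustum) {
        float dist = p.n.x * o.center.x + p.n.y * o.center.y + p.n.z * o.center.z + p.d;
        if (dist < -o.radius) return false;      // bounding sphere fully outside this plane
    }
    return true;
}

int main() {
    // Toy "frustum": just a near plane at z = 1 and a far plane at z = 100.
    std::vector<Plane>  frustum = { { {0, 0,  1},  -1.0f }, { {0, 0, -1}, 100.0f } };
    std::vector<Object> scene   = { { {0, 0,   5}, 1.0f },
                                    { {0, 0, 200}, 1.0f },
                                    { {0, 0,  -3}, 1.0f } };

    int drawCalls = 0;
    for (const Object& o : scene)
        if (PotentiallyVisible(o, frustum)) ++drawCalls;   // only visible objects get a draw call
    std::printf("draw calls issued: %d of %zu\n", drawCalls, scene.size());
    return 0;
}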


I'll take a rainbow shit cone with sprinkles. Oh, and DX12 multi card support. --------- You're out? Any in the back. ---------- No need to be rude, then just the shit cone. ----------- The unicorn is out? Seriously? -------- OK, then I'll take the DX11 horse shit...

If anyone asks you never saw me.


35 minutes ago, zMeul said:

draw calls are not limited/influenced by the resolution, but by the number of objects/polygons in the scene

the number of polygons that need to be rendered can be reduced by culling - game engines have already started to do it natively; before, they needed middleware (Umbra)

And yet consoles have consistently been able to handle more draw calls, by huge factors, over many generations with much more limited hardware, where culling would have made more logical sense than on high-end computers with many times the CPU and GPU power. They have always been able to do this due to lower-level programming, and DX11 is well known for having a huge overhead penalty and many limitations.

 

However DX11, and previous versions, all solved issues that consoles did not have: many different hardware platforms and combinations from multiple different hardware vendors.

 

DX12/Vulkan are both born from improvements in tool sets that allow game developers to better handle hardware differences in PCs, meaning lower-level programming puts less of a burden on them than in the past.

 

We the gaming community are the ones that made the leap to "DX12 means more performance/higher FPS" when that was never actually a design goal of DX12/Vulkan at all.

 

Want to have some fun seeing how the narrative plays a huge part in perception? Remember back to DX9 vs DX11: what was that all about? What was the purpose? How was it portrayed in the media and to the public?

 

Guess what? DX9 consistently produced higher FPS than DX11 did, but DX11 was never pronounced dead on arrival; in fact, everyone loved it. You know why? It was about achieving higher graphics fidelity and effects. I'm not aware of a single game that had both DX9 and DX11 where DX11 had higher FPS.

 

P.S.

Yes DX10 was DOA.


6 minutes ago, leadeater said:

And yet consoles have consistently been able to handle more draw calls, by huge factors, over many generations with much more limited hardware, where culling would have made more logical sense than on high-end computers with many times the CPU and GPU power. They have always been able to do this due to lower-level programming, and DX11 is well known for having a huge overhead penalty and many limitations.

because console APIs were always low level

where do you think DX12 has its roots? in DX9.0e - the 1st low-level DX API, available only to the X360


1 minute ago, zMeul said:

because console APIs were always low level

where do you think DX12 has its roots? in DX9.0e - the 1st low-level DX API

Yes that was my point, keep reading the post and you'll find the rest of my point.


Just now, leadeater said:

Yes that was my point, keep reading the post and you'll find the rest of my point.

that's not the point

 

the point of why console games seemingly performed better is because:

  • the consoles were not following the standard x86 HW platform - eMMC cache non-existent on PCs; and so on
  • they were built for a single HW and SW platform
  • they were ported defectively, with incompatibilities with PC APIs

41 minutes ago, zMeul said:

that's not the point

 

the point of why console games seemingly performed better is because:

  • the consoles were not following the standard x86 HW platform - eMMC cache non-existent on PCs; and so on
  • they were built for a single HW and SW platform
  • they were ported defectively, with incompatibilities with PC APIs  (never talked about or implied that)

Um no, my point is my own and is written in the post that I wrote. That is the first part of it, but it does not form the entire post or my whole point.

 

What you just did there was tell me what my own point was; you cannot tell me that. Neither did I say consoles performed better; draw calls do not equal performance.

 

You have exclusively addressed only the first paragraph and then formed this discussion around that, rather than around the reasons why there are API differences and how they have evolved, which is actually what I was talking about.

 

Edit:

Also I said across many console generations, not the current or last one but across them all.


1 minute ago, leadeater said:

You have exclusively addressed only the first paragraph and then formed this discussion around that, rather than around the reasons why there are API differences and how they have evolved, which is actually what I was talking about.

they 'evolved' because

  • games were ported badly and PC could not keep up with unit sales
  • AMD's DX and OpenGL drivers were not performing, and the solution they got was a new API: Mantle
  • Mantle's release pushed MS to develop DX12
  • AMD "sold" the API to the Khronos Group - Vulkan was born

24 minutes ago, zMeul said:

they 'evolved' because

  • games were ported badly and PC could not keep up with unit sales
  • AMD's DX and OpenGL drivers were not performing, and the solution they got was a new API: Mantle
  • Mantle's release pushed MS to develop DX12
  • AMD "sold" the API to the Khronos Group - Vulkan was born

The issues existed with DX on both vendors well before porting games was as common as it is now. Game porting is not the issue; the issue is DX11 overhead, and it exists in PC-exclusive games as well, which should be obvious as it's due to the API.

 

The issue existed before DX11; it's always been there. It never actually was a significant problem until recently because of hardware limitations and other more important things to focus on; we are now at the point of looking at what is left to improve.

 

We can use APIs like DX12 and Vulkan now due to the tools available that make it easier to develop with such APIs; those helpful things aren't just in the API.

 

When I say evolved I mean over a very long time; I've been playing PC games since before DX existed, and when I was talking about many generations of consoles I was talking about that same time period. I'm not just talking about DX11 vs DX12.

 

If you go into DX12 looking purely at FPS increases, which is how this thread is framed, you will be disappointed, just like if you did the same with DX9 vs DX11.


@zMeul

If you don't like DX12 and think Vulkan should be the one we focus on, then yes, I agree with that; why not just say that? Unless you already have; I haven't read all 6 pages.

 

But I wouldn't try to use current game performance as justification for using Vulkan over DX12; there are too many counterpoints when you try to do it that way. There is one very simple and clear reason why Vulkan should be preferred: cross-platform support.

 

Over the long term I don't know which of the two is the better API for performance and graphics features, but neither of them is at a point where we can start making judgements on that.

 

We should all support Vulkan over DX12 though; it'll be better for us all if we do.


17 minutes ago, leadeater said:

@zMeul

If you don't like DX12 and think Vulkan should be the one we focus on, then yes, I agree with that; why not just say that? Unless you already have; I haven't read all 6 pages.

 

But I wouldn't try to use current game performance as justification for using Vulkan over DX12; there are too many counterpoints when you try to do it that way. There is one very simple and clear reason why Vulkan should be preferred: cross-platform support.

 

Over the long term I don't know which of the two is the better API for performance and graphics features, but neither of them is at a point where we can start making judgements on that.

 

We should all support Vulkan over DX12 though; it'll be better for us all if we do.

I haven't said that at all

is Vulkan a better choice because of the multi-platform aspect? yes

 

but the fact still remains that DX12 is not better than DX11

and fun fact, Vulkan is not better than DX11


1 minute ago, zMeul said:

I haven't said that at all

is Vulkan a better choice because of the multi-platform aspect? yes

 

but the fact still remains that DX12 is not better than DX11

and fun fact, Vulkan is not better than DX11

Well, define "better", because if it's purely FPS gains, as you are trying to show in the OP, then DX9 is better than DX11, which it isn't.

 

I simply won't say how good or bad DX12 is yet; too soon.


37 minutes ago, leadeater said:

Well, define "better", because if it's purely FPS gains, as you are trying to show in the OP, then DX9 is better than DX11, which it isn't.

 

I simply won't say how good or bad DX12 is yet; too soon.

It's not too soon. All previous versions of DirectX were picked up by devs in a similar time frame; just look at the release dates: https://en.wikipedia.org/wiki/DirectX#Release_history

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


1 minute ago, Dabombinable said:

It's not too soon. All previous versions of DirectX were picked up by devs in a similar time frame; just look at the release dates: https://en.wikipedia.org/wiki/DirectX#Release_history

I dunno, a lot of games were heavily embedded in DX9 well after the release of DX10 and even DX11.

 

There are only about 6 DX12-exclusive games released so far; I say "about" because that list isn't fully up to date. That is a very small list of games.

https://en.wikipedia.org/wiki/List_of_games_with_DirectX_12_support


11 hours ago, SamStrecker said:

It isn't DX12's fault, it is lazy devs.

As someone who is/was working on a DX12 render/game engine, I am kinda offended by that remark.

Developers porting to DX12 are not lazy; if they were lazy they wouldn't make a port at all.

 

As Microsoft has already explained multiple times during DX12 (developer) presentations, you should not be expecting a DX11 -> DX12 port to perform better in DX12.

Microsoft suggests that the only way to get the best performance is to build a DX12 only engine from scratch.

Developing a whole new engine takes waayyy too much time.

Take a look at Unreal (it's on GitHub): just the rendering part is HUGE, and they have to keep support for a whole range of APIs (including the PS4 API and OpenGL).

 

Furthermore, a team of Nvidia/AMD developers worked for years on the DX11 driver.

It is basically now expected that every single game company does the same thing.

Also, the driver devs at AMD/Nvidia have more knowledge about their respective GPUs than game devs can ever have.

 

I think the DX11 way was perfectly fine; the "low level" APIs of the Xbone, PS4 and AMD Mantle are efficient because they are written for a single architecture (all GCN).

Some of the (render) engine devs actually write inline assembly/architecture specific intrinsics.

Check out the "Optimizing the Graphics Pipeline with Compute" talk (from Dice) as an example.

You can't expect game devs to do all that work for every architecture (mainly Nvidia, who change their architecture every year).

 

TLDR: game devs now have to do all the work that driver devs have done over the past years without some of the low level access that driver devs have.
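One small example of the kind of bookkeeping that has moved from the driver onto the engine team: in DX12 the application itself has to declare resource state changes that the DX11 driver used to track for you. The fragment below is a hedged sketch of a single transition barrier; it assumes you already have a recording command list and a texture, and it is obviously not a complete renderer.

// Minimal sketch (Windows + D3D12 headers assumed): explicit resource state
// tracking that a DX11 driver used to handle internally. Here the app itself
// tells the GPU a texture is switching from render target to shader resource
// before it gets sampled.
#include <d3d12.h>

void TransitionToShaderResource(ID3D12GraphicsCommandList* cmdList,
                                ID3D12Resource* texture) {
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type  = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);   // get this wrong and it's the game's bug, not the driver's
}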

Desktop: Intel i9-10850K (R9 3900X died 😢 )| MSI Z490 Tomahawk | RTX 2080 (borrowed from work) - MSI GTX 1080 | 64GB 3600MHz CL16 memory | Corsair H100i (NF-F12 fans) | Samsung 970 EVO 512GB | Intel 665p 2TB | Samsung 830 256GB| 3TB HDD | Corsair 450D | Corsair RM550x | MG279Q

Laptop: Surface Pro 7 (i5, 16GB RAM, 256GB SSD)

Console: PlayStation 4 Pro

