DX12, what was promised and what we got... a rainbow-shitting unicorn

1 minute ago, spartaman64 said:

So you're a formalist. Well, I disagree: we can't discount the author in the interpretation of the message, and the foul language completely changes the meaning.

Oh, poor Oxide, taking AMD's money and not delivering on the product...

You should shed a tear or two ;)


This is largely caused by people (mainly over-optimistic gamers) hyping up DirectX 12 and claiming some ridiculously high and rapid adoption (which never had any grounds in reality to begin with). Ironically, most of this hype originates from AMD, who used the async compute "feature" (one of the least important features of the DX12 spec) as a weapon against Nvidia, who is apparently the world's greatest corporate supervillain according to AMD ads dating back a decade.



Just now, zMeul said:

oh poor Oxyde taking AMD's money and not delivering on the product ...

you should shed a tear or two ;)

 

For some reason I don't see you being as angry about The Way It's Meant to Be Played titles.


2 minutes ago, Colonel_Gerdauf said:

This is largely caused by people (mainly over-optimistic gamers) hyping up DirectX 12 and claiming some ridiculously high and rapid adoption (which never had any grounds in reality to begin with). Ironically, most of this hype originates from AMD, who used the async compute "feature" (one of the least important features of the DX12 spec) as a weapon against Nvidia, who is apparently the world's greatest corporate supervillain according to AMD ads dating back a decade.

Asynchronous compute isn't even a feature of DX12. That's the biggest irony.
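To put that in concrete terms: all the API actually gives you is multiple command queue types, plus fences to order them. Whether work on a compute queue overlaps the graphics queue ("async compute") is entirely up to the hardware and driver. A rough C++ sketch, assuming an already-created ID3D12Device named device, with error handling elided:

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// DX12 exposes separate queue *types*; whether a compute queue actually
// overlaps the graphics queue is up to the hardware and driver, not the API.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Nothing here asks for, or guarantees, concurrency; a driver is free
    // to serialize both queues. The API only promises ordering via fences.
}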


16 hours ago, Colonel_Gerdauf said:

This is largely caused by people (mainly over-optimistic gamers) hyping up DirectX 12

People (gamers) weren't the ones who hyped DX12!

AMD, Nvidia, and Microsoft were the ones who had dev panels all over.

 

16 hours ago, spartaman64 said:

For some reason I don't see you being as angry about The Way It's Meant to Be Played titles.

because a equals 1 for some reason?!


9 minutes ago, zMeul said:

Projecting

Once again, not actually understanding any of my posts and projecting.

Your bench is nothing I haven't seen before.

 

A 20% performance boost on low-end PCs, since they were more CPU-bound, which is what DX12 focuses on.

(NOTE: Despite being on r/AMD, this is running an Nvidia GPU and an i3.)


Just now, FAQBytes said:

A 20% performance boost on low-end PCs, since they were more CPU-bound, which is what DX12 focuses on.

Bullshit!

Look at my numbers: 1080p, low quality settings, a CPU-bound scenario.

The differences are minimal.


5 minutes ago, zMeul said:

Bullshit!

Look at my numbers: 1080p, low quality settings, a CPU-bound scenario.

The differences are minimal.

OK, let's look at your numbers. When both CPUs are running at 100%, DX12 outperforms DX11 by about 10-15 FPS. Even when they aren't, DX12 still seems to outperform DX11.

 

[image: DX12 vs DX11 benchmark comparison]


Any DX12 data that includes Nvidia GPUs is going to skew the results to the shit; do the numbers again with just AMD, and wait for Nvidia to release new hardware. And for the love of god, don't take this as pure support for AMD or something; we all know Nvidia isn't fantastic at DX12, even after the driver patch that was supposed to fix everything.


Just now, M.Yurizaki said:

Asynchronous compute isn't even a feature of DX12. That's the biggest irony.

Be careful now; you do not want to face the wrath of the Team Red keyboard engineers.

Just now, zMeul said:

People (gamers) weren't the ones who hyped DX12!

AMD, Nvidia, and Microsoft were the ones who had dev panels all over.

Huh? What does that have to do with my point?

 

Did you magically forget the absolute mess the LTTF was in a year or two ago? Yes, the companies had dev panels, and Microsoft/Nvidia advertised DX12 support (but not much beyond that), but neither was as significant a factor as the people loudly claiming that "DirectX 12 will bring Nvidia to its knees".


Just now, FAQBytes said:

OK, let's look at your numbers. When both CPUs are running at 100%, DX12 outperforms DX11 by about 10-15 FPS. Even when they aren't, DX12 still seems to outperform DX11.

The actual gameplay starts at about 0:46 and the FPS difference is marginal; it also goes out of sync quite fast.

Also, take a look at the averages from the benchmark runs.


Don't mind me, everyone. Just here to restore peace and balance to the world in the most sexy way possible... With my mouth.

 

http://vocaroo.com/i/s1Dx9QfFrZjT


1 minute ago, zMeul said:

The actual gameplay starts at about 0:46 and the FPS difference is marginal; it also goes out of sync quite fast.

Also, take a look at the averages from the benchmark runs.

Same difference between the two. And if the difference is negligible this early in development, in a game that was ported, why are you complaining?


5 minutes ago, leadeater said:

Any DX12 data that includes Nvidia GPUs is going to skew the results to the shit; do the numbers again with just AMD

You sure?! OK...

[spoiler: seven benchmark screenshots, AMD-only results]

 

So much better... oh wait!


1 minute ago, MageTank said:

Don't mind me, everyone. Just here to restore peace and balance to the world in the most sexy way possible... With my mouth.

 

http://vocaroo.com/i/s1Dx9QfFrZjT

#Nice


Just now, MageTank said:

Don't mind me, everyone. Just here to restore peace and balance to the world in the most sexy way possible... With my mouth.

 

http://vocaroo.com/i/s1Dx9QfFrZjT

I call dibs on @MageTank becoming a voice actor one day.


1 minute ago, FAQBytes said:

Same difference between the two. And if the difference is negligible this early in development, in a game that was ported, why are you complaining?

More bullshit...

Do I need to post the DF analysis of Quantum Break again? https://www.youtube.com/watch?v=-PK55-kCviA

 


Just now, Colonel_Gerdauf said:

I call dibs on @MageTank becoming a voice actor one day.

My voice is far too generic for that to ever happen, lol. I also lack range, something that tends to be required in that field. 


1 hour ago, zMeul said:

Really? OK...

Here, directly from MS' camp:

 

If MS's own camp can't do better, why should we expect greatness from others?

 

---

 

Two: the issue is with DX11 vs DX12 API draw calls, not CPU performance. When you hit the draw call limit you introduce a CPU bottleneck and GPU usage gets lower; and it's not dependent on the resolution.

The way you understand what DX12/Vulkan do is quite wrong.

 

You get frustrated? Imagine me trying to explain the actual problem to people.

I get frustrated by end users, whom these APIs were never intended for, ripping on APIs (that they've never actually written render pipelines or shaders for) over not doing something they were never designed to do.

 

I understand what OpenGL and Vulkan do quite well, actually, considering they're right in my wheelhouse and I work with them on a regular basis. DirectX 11/12 less so, admittedly, but they're not *that* different.

 

Was DX11 better than OpenGL at generating draw calls across multiple cores? Sure, but it was by no means perfect.
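DX11's answer there was deferred contexts: a worker thread records into a command list, and the immediate context replays it. A rough sketch of the pattern, assuming the device already exists, with state setup and the draws themselves elided:

#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// The catch: the driver still does most of its heavy lifting at
// ExecuteCommandList time, on the submitting thread, which is why
// this was "by no means perfect".
ComPtr<ID3D11CommandList> RecordOnWorkerThread(ID3D11Device* device)
{
    ComPtr<ID3D11DeviceContext> deferred;
    device->CreateDeferredContext(0, &deferred);

    // ... bind pipeline state, issue Draw/DrawIndexed calls on `deferred` ...

    ComPtr<ID3D11CommandList> cmdList;
    deferred->FinishCommandList(FALSE, &cmdList);
    return cmdList;
}

void SubmitOnMainThread(ID3D11DeviceContext* immediate, ID3D11CommandList* cmdList)
{
    immediate->ExecuteCommandList(cmdList, TRUE); // replay; driver cost lands here
}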

 

Was DX11 better than OpenGL at running its driver on multiple cores? Not in the slightest.

 

Were DX11 drivers lighter than OpenGL's because OpenGL was a steaming pile of $#!+? Yeah, a little, but they weren't all that light.

 

Quantum Break was not designed for DX12 from the ground up. You're also comparing its implementation as a UWP app to a native Win32 app, which brings other differences in performance characteristics. It's not a straight-up DX11/DX12 comparison.

 

How about you actually take some time to go read *any* of the developer documents on these APIs, and the parts on how they can *actually* benefit your workflow/workload?

 

DirectX 12 is meant to do the same thing Vulkan was: better utilize the CPU by lowering driver overhead and spreading that overhead across multiple CPU cores. It was never intended (or marketed) as a magic bullet that will improve GPU performance.

 

Drivers run on the CPU. Draw calls run on the CPU. The whole point of these APIs is to make drivers lighter, make drivers easier to thread, and make draw calls easier to spread among cores, by giving developers more control.
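In practice that control looks something like this: one command allocator and command list per worker thread (they are not thread-safe to share), and one cheap submit at the end. A rough D3D12 sketch, assuming the device and queue already exist, with pipeline setup and the draws themselves elided:

#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, int workers)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(workers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workers);
    std::vector<std::thread> threads;

    for (int i = 0; i < workers; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr, IID_PPV_ARGS(&lists[i]));
        threads.emplace_back([&, i] {
            // ... record this worker's share of the draw calls on lists[i] ...
            lists[i]->Close();
        });
    }
    for (auto& t : threads) t.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists((UINT)raw.size(), raw.data()); // one cheap submit
}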

 

Shaders run on the GPU. Do you know what uses the exact same shaders? DX11 and 12.
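To illustrate: you can compile one HLSL source to Shader Model 5.0 bytecode once and feed the identical blob to both APIs. A rough sketch (hypothetical helper names; error handling elided; the DX11 device and DX12 PSO desc are assumed to exist elsewhere):

#include <d3d11.h>
#include <d3d12.h>
#include <d3dcompiler.h>
#include <wrl/client.h>
#include <cstring>
using Microsoft::WRL::ComPtr;

const char* kSrc = "float4 main() : SV_Target { return float4(1,0,0,1); }";

ComPtr<ID3DBlob> CompileOnce()
{
    ComPtr<ID3DBlob> blob, errors;
    D3DCompile(kSrc, strlen(kSrc), nullptr, nullptr, nullptr,
               "main", "ps_5_0", 0, 0, &blob, &errors);
    return blob;
}

void UseInBothApis(ID3D11Device* dev11, D3D12_GRAPHICS_PIPELINE_STATE_DESC& psoDesc,
                   ID3DBlob* blob)
{
    // DX11: the runtime wraps the bytecode in a shader object.
    ComPtr<ID3D11PixelShader> ps11;
    dev11->CreatePixelShader(blob->GetBufferPointer(), blob->GetBufferSize(),
                             nullptr, &ps11);

    // DX12: the exact same bytecode goes straight into the pipeline state desc.
    psoDesc.PS = { blob->GetBufferPointer(), blob->GetBufferSize() };
}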


Just now, zMeul said:

You sure?! OK...

[spoiler: seven benchmark screenshots, AMD-only results]

 

So much better... oh wait!

If you go looking for supporting data for your point you will find it; if I go looking for supporting data for my point, I will also find it.

 

There is more to this than an RX 480, and more to it than showing a few games, most of which are rooted in DX11 development toolsets and then chose to add some level of DX12 support.

 

Also, DX12 allows for a lot of different features; it isn't just about getting higher FPS. Some of those games in DX12 mode are more demanding on the system because they are able to do more with the same hardware.

 

Making a straight FPS comparison between two games using different APIs is hard, and you can't cite the games using the same graphics settings as grounds to compare them. As stated already, they are different APIs, so you can't say the same settings produce the same system load and the same resulting visual quality.


I see I'm getting nowhere with this. I have other things I need to be doing anyway. Good luck, everyone.


1 minute ago, FAQBytes said:

I see I'm getting nowhere with this. I have other things I need to be doing anyway. Good luck, everyone.

Don't fret; the information you and others are providing to counter the OP's posts has been very enlightening, and I now have a much greater understanding of DX12 than before this thread. I know this thread was supposed to be anti-DX12, but all of the info I'm finding as a result of it makes me excited for DX12 in the future. :D


I'm struggling to see how a game that is built on DX11 and then given a DX12 render path is worse than a game that was built with a DX12 render path to begin with. There were quite a few games in the late '90s that supported something like four different render paths (software, Glide, OGL, DX) and they all did just fine. More or less.

 

The only way I see having two render paths as being bad is that it pulls resources away from one or the other.


Just now, KuJoe said:

Don't fret; the information you and others are providing to counter the OP's posts has been very enlightening, and I now have a much greater understanding of DX12 than before this thread. I know this thread was supposed to be anti-DX12, but all of the info I'm finding as a result of it makes me excited for DX12 in the future. :D

They say a person's primary goal in a debate is not to sway the opposing side, but to sway the audience. I am glad to see that we are at least somewhat achieving this goal.


1 minute ago, M.Yurizaki said:

I'm struggling to see how a game that is built on DX11 and then given a DX12 render path is worse than a game that was built with a DX12 render path to begin with. There were quite a few games in the late '90s that supported something like four different render paths (software, Glide, OGL, DX) and they all did just fine. More or less.

 

The only way I see having two render paths as being bad is that it pulls resources away from one or the other.

The problem is that render paths built around DX11 or OpenGL make a lot of assumptions about the GPU which aren't actually true of modern GPUs, but which the drivers are designed to effectively "emulate" to ensure compatibility.

 

Vulkan and DirectX 12 can take much more advantage of the semi-deferred, tile-based nature of a modern renderer if the render pipeline is designed specifically with it in mind.
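One concrete example of those assumptions: a DX11 driver silently tracks the render-to-texture-then-sample hazard and synchronizes for you, while a DX12 render path has to state the transition itself. A rough sketch, assuming the command list and texture already exist:

#include <d3d12.h>

// In DX11 the driver infers this sync point; in DX12 it is the app's job,
// which is exactly the knowledge a DX11-shaped engine never had to carry.
void TransitionToShaderRead(ID3D12GraphicsCommandList* cmdList, ID3D12Resource* tex)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = tex;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}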

