
Nvidia Pressuring Oxide developers to change DX12 Benchmark

Inimigor

Ok the situation is this:


Nvidia has a performance issue with asynchronous compute, so the feature is disabled even though their driver claims support for it, and they want Oxide's developers to change the benchmark so it doesn't hurt Nvidia's performance so badly.


What happened is this: Oxide released a DX12 benchmark where AMD gets a clear win, and according to the benchmark Nvidia's performance actually gets worse when moving to DX12.

Nvidia's PR has said that they don't consider this a proper benchmark, since it has various issues, especially with MSAA.



Oxide's response to the situation is:

 

 

There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally.

An Oxide developer claims Nvidia is placing A LOT of pressure on them to disable certain things in the benchmark in order to give Nvidia better performance, which would be bad for AMD. They also said that Nvidia wants them to disable DX12 asynchronous compute on Nvidia hardware.

 

Why is that, you ask?

 

Despite Nvidia's drivers claiming support, when Oxide's benchmark tries to use the feature it hurts performance on Nvidia GPUs EVEN MORE, so they disabled it on the Nvidia side.

 

Also, Oxide's developers say that Nvidia has a bit more CPU overhead, since their DX12 resource-binding implementation is Tier 2 while AMD's is Tier 3.

 

 

Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only ‘vendor’ specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn’t really have Async Compute so I don’t know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don’t think it ended up being very significant. This isn’t a vendor specific path, as it’s responding to capabilities the driver reports.

 

Crazy, huh?


AMD has apparently won this round so far; they have invested heavily in development and support for APIs like Mantle and DX12. Let's hope it pays off and we get some nice competition from now on!

------Personal Opinion------

While Nvidia may well be unhappy with these performance issues, I think working together with the developers instead of AGAINST them would be much better for their image as a company, and would even be far less work toward solving the issue.

 

The asynchronous compute issue seems to be a driver issue, since the GPU tries to use it but won't run it properly. Maybe it's bad code optimization (since the feature hasn't been used much) or something like that. All we can do now is wait and see what happens!


Original article from Overclock3D:
http://www.overclock3d.net/articles/gpu_displays/oxide_developer_says_nvidia_was_pressuring_them_to_change_their_dx12_benchmark/1


|CPU : Core i7 4770 (non-K :( ) | GPU : XFX RX 480 GTR 8GB @ 1385Mhz | MoBo: Gigabyte GA-Z87-HD3 | PSU: XFX 850W PRO | Case: In-Progress Silverstone TJ-07 |

Zenfone 2 ZE551ml 32GB + 64GB SD - Rooted LineageOS |

 


Why is so much shit happening? 

 


i5 2400 | ASUS RTX 4090 TUF OC | Seasonic 1200W Prime Gold | WD Green 120gb | WD Blue 1tb | some ram | a random case

 


"benchmark as a proper representation if DirectX 12 performance": that "if" should be "of". Why are they so bothered? I mean, how many people are actually going to see this? Most gamers don't care about benchmarks; sure, it gives a good representation of the game, but still... 

CPU: Intel 3570 GPUs: Nvidia GTX 660Ti Case: Fractal design Define R4  Storage: 1TB WD Caviar Black & 240GB Hyper X 3k SSD Sound: Custom One Pros Keyboard: Ducky Shine 4 Mouse: Logitech G500

 


Why does Nvidia care so much about this instead of taking these problems on board and trying to improve their drivers? Because by the looks of it, AMD right now has a good advantage over Nvidia in DX12, just like they promised.

                                                                                                                 Setup

CPU: i3 4160|Motherboard: MSI Z97 PC MATE|RAM: Kingston HyperX Blue 8GB(2x4GB)|GPU: Sapphire Nitro R9 380 4GB|PSU: Seasonic M12II EVO 620W Modular|Storage: 1TB WD Blue|Case: NZXT S340 Black|PCIe devices: TP-Link WDN4800| Montior: ASUS VE247H| Others: PS3/PS4


DX12gate incoming :D

AMD Rig - (Upgraded): FX 8320 @ 4.8 GHz, Corsair H100i GTX, ROG Crosshair V Formula, 16 GB 1866 MHz RAM, MSI R9 280x Gaming 3G @ 1150 MHz, Samsung 850 Evo 250 GB, Win 10 Home

(My first Intel + Nvidia experience - recently bought): MSI GT72S Dominator Pro G (i7 6820HK, 16 GB RAM, 980M SLI, G-Sync, 1080p, 2x128 GB SSD + 1TB HDD)... FeelsGoodMan


 

 

Also Oxide developers say that Nvidia has a bit more of a CPU overhead since their implementation of DX12 is tier 3, while AMD's is tier 2.

 

 

 

you got DX12 tiers mixed up. AMD is tier 3 and Nvidia is tier 2. 

 

Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD


Based on how I've seen other Nvidia/AMD threads on other parts go down, I'm sure this will be a civil thread with reasonable discourse from all sides and a mutual understanding that our preference in brand does not make us shills or fanboys and that we can hold mature discussion about the issue at hand without resorting to shenanigans. 


you got DX12 tiers mixed up. AMD is tier 3 and Nvidia is tier 2. 

Fixed :)


 


I think that post is partly about the same thing, but it's more about the API overhead issue; the Nvidia pressure is apparently something quite new.


 


No this is a complete repost of the other thread on this page written in one of the most inflammatory ways possible.

(And that's saying something.)

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


I think that post is partly about the same thing, but it's more about the API overhead issue; the Nvidia pressure is apparently something quite new.

Nope, this is a re-post of http://linustechtips.com/main/topic/440575-oxide-responds-to-aots-conspiracies-maxwell-20-has-no-native-a-sync-compute/

CPU: i7 4770k | GPU: Sapphire 290 Tri-X OC | RAM: Corsair Vengeance LP 2x8GB | MTB: GA-Z87X-UD5HCOOLER: Noctua NH-D14 | PSU: Corsair 760i | CASE: Corsair 550D | DISPLAY:  BenQ XL2420TE


Firestrike scores - Graphics: 10781 Physics: 9448 Combined: 4289


"Nvidia, fuck you" - Linus Torvalds


No this is a complete repost of the other thread on this page written in one of the most inflammatory ways possible.

(And that's saying something.)

How is this, or the other thread, inflammatory?

 

It simply states a personal opinion, plus sections of the article on OC.net.


This topic is now closed to further replies.
