How Nvidia's GameWorks Program Will Affect Gaming

ExtremeTech recently published an article which deeply explores the implications of Nvidia's GameWorks program: how it will affect gaming, and how it already has.

 

Understanding libraries

Simply put, a library is a collection of implemented behaviors. They are not application specific — libraries are designed to be called by multiple programs in order to simplify development. Instead of implementing a GPU feature five times in five different games, you can just point the same five titles at one library. Game engines like Unreal Engine 3 are typically capable of integrating with third party libraries to ensure maximum compatibility and flexibility. Nvidia’s GameWorks contains libraries that tell the GPU how to render shadows, implement ambient occlusion, or illuminate objects.

That in itself isn't the issue, according to ExtremeTech; the issue is that these libraries are closed.

In Nvidia’s GameWorks program, though, all the libraries are closed. You can see the files in games like Arkham City or Assassin’s Creed IV — the file names start with the GFSDK prefix. However, developers can’t see into those libraries to analyze or optimize the shader code. Since developers can’t see into the libraries, AMD can’t see into them either — and that makes it nearly impossible to optimize driver code.

 

The author goes on to explore performance in games that have implemented these libraries, compared to games from the same developer that haven't.
The games he tested were Batman: Arkham Origins and Batman: Arkham City. Both are Nvidia TWIMTBP (The Way It's Meant to be Played) titles, but only Arkham Origins utilizes Nvidia's new GameWorks libraries.

Previous Arkham titles favored Nvidia, but never to this degree. In Arkham City, the R9 290X has a 24% advantage over the GTX 770 in DX11, and a 14% improvement in DX9. In Arkham Origins, they tie. Can this be traced directly back to GameWorks? Technically, no it can’t — all of our feature-specific tests showed the GTX 770 and the R9 290X taking near-identical performance hits with GameWorks features set to various detail levels. If DX11 Enhanced Ambient Occlusion costs the GTX 770 10% of its performance, it cost the R9 290X 10% of its performance.

 

The problem with that “no,” though, is twofold. First, because AMD can’t examine or optimize the shader code, there’s no way of knowing what performance could look like. In a situation where neither the developer nor AMD ever has access to the shader code to start with, this is a valid point. Arkham Origins offers an equal performance hit to the GTX 770 and the R9 290X, but control of AMD’s performance in these features no longer rests with AMD’s driver team — it’s sitting with Nvidia.

 

The author also found similar behavior when it came to tessellation, an area where Nvidia hardware has traditionally shown strong performance.
What's more worrying is that, according to ExtremeTech, AMD offered the game studio code to improve tessellation performance and to fix some multi-GPU issues, but the studio turned AMD down.
 

AMD attempted to provide Warner Bros. Montreal with code to improve Arkham Origins performance in tessellation, as well as to fix certain multi-GPU problems with the game. The studio turned down both. Is this explicitly the fault of GameWorks? No, but it’s a splendid illustration of how developer bias, combined with unfair treatment, creates a sub-optimal consumer experience.

Under ordinary circumstances, the consumer sees none of this. The typical takeaway from these results would be “Man, AMD builds great hardware, but their driver support sucks.”

 

A fundamentally unequal playing field

Nvidia’s GameWorks program is conceptually similar to what Intel pulled on AMD 8-10 years back. In that situation, Intel’s compilers refused to optimize code for AMD processors, even though AMD had paid Intel for the right to implement SSE, SSE2, and SSE3. The compiler would search for a CPU string rather than just the ability to execute the vectorized code, and if it detected AuthenticAMD instead of GenuineIntel, it refused to use the most advantageous optimizations.

 

The situation here is different, in that we’re discussing third-party libraries and not the fundamental tools used to build executables, but the end result is similar. AMD is no longer in control of its own performance. While GameWorks doesn’t technically lock vendors into Nvidia solutions, a developer that wanted to support both companies equally would have to work with AMD and Nvidia from the beginning of the development cycle to create a vendor-specific code path. It’s impossible for AMD to provide a quick after-launch fix.

 

 

I highly encourage you to read the full article here.

 

In the end, one might wonder how this is different from what AMD has done with its Gaming Evolved partners and/or Mantle.
The answer: very different. A quick example is TressFX, AMD's hair physics and rendering technology. When it first launched with Tomb Raider, AMD hardware enjoyed a performance advantage over equivalent Nvidia hardware well into the double-digit percentages.
But not long after launch, Nvidia managed to close the gap to the point where both companies are now roughly on equal footing. That's because AMD's libraries were open for Nvidia to analyze and improve performance against; similarly, Mantle is open for Nvidia to implement its own libraries in as it sees fit.
Had AMD closed the libraries for its own technologies in Tomb Raider, such as TressFX and ambient occlusion (HDAO), Nvidia users would still be struggling with performance issues today.


Interesting article. The graphics card scene seems to be more software-centric this time around: AMD is pushing Mantle to optimize performance and allow for more draw calls, amplifying both visuals and performance, and then there's this, a set of libraries and SDKs completely controlled by NVIDIA that add more graphical prowess to games while making other GPU makers suffer (I didn't say that was their goal, though).

 

It's great for me, as I happen to have an NVIDIA card, but it certainly isn't for AMD's customers.

 

(Although, I have to point out one little detail: the Mantle API may be open for NVIDIA to exploit, but it's very different from the TressFX situation. Most of Mantle's features are designed to improve GCN-based cards, and without forking it (basically creating a whole different API optimised for Kepler, Maxwell, and such) I doubt NVIDIA would benefit much from it.)

 

 

Nvidia has been known to fight dirty, especially when they're in a rough spot.
The rougher the spot, the dirtier they'll fight.

 

At the moment, they really aren't; it's a long-term play.

Stop bloating nonsense, and reason to contribute in a constructive manner.


Interesting; I'd love to read the full article. Anyone else getting a "page not found" error?

 

 

EDIT: Got it now.

CPU: 6700k GPU: Zotac RTX 2070 S RAM: 16GB 3200MHz  SSD: 2x1TB M.2  Case: DAN Case A4


On a related note, one of the reasons I decided to go AMD this round is that I am ideologically opposed to PhysX; requiring a certain brand of GPU in order to render a given feature in hardware is against what PC gaming is all about. I hope that Nvidia adopts Mantle as well, since the opportunity is open to them from a technical standpoint.

 

Instant WAN show discussion topic.
I am very interested to hear what Linus and Luke think about this.

@LinusTech

Worthy of WAN show discussion?


Nvidia has been known to fight dirty, especially when they're in a rough spot.

The rougher the spot, the dirtier they'll fight.

Fighting dirty is one thing; in a way, it helps competition. However, after reading the whole article, especially the last paragraph, it seems that Nvidia is attempting a takeover of sorts.


Interesting, would love to read the full article, anyone else getting page not found?

 

Lol, it's true; there was nothing wrong with the links, though.

 

EDIT: it's back.

Stop bloating nonsense, and reason to contribute in a constructive manner.


It's a well-known fact that nVidia strong-arms developers; this kind of thing has been going on for years now.
This is a little different, however. It's much more nefarious: it's as if nVidia is offering developers a deal with the devil, in the sense that it seems wonderful on the surface.
Awesome visuals with no development effort whatsoever, but suddenly all of your graphics code is at the mercy of nVidia. You can't see it, you can't change it, and once it's in, you'll have to spend a lot of time if you want to replace it.


I have always had Nvidia cards, but this kind of bias makes me want to send Nvidia a strong message that these are rubbish tactics. I am all for competition, but not to the point where certain games have unfair advantages. This feels like the console war, where people complain about how one game runs better on Xbox or another has a higher resolution and better framerates on a PlayStation. At least let rival companies work hard and get better drivers out; this locking of features and libraries is absurd, in my opinion.


Hmmm a little bit scary.

says the shadowed cheetah 

Case: NZXT Phantom PSU: EVGA G2 650w Motherboard: Asus Z97-Pro (Wifi-AC) CPU: 4690K @4.2ghz/1.2V Cooler: Noctua NH-D15 Ram: Kingston HyperX FURY 16GB 1866mhz GPU: Gigabyte G1 GTX970 Storage: (2x) WD Caviar Blue 1TB, Crucial MX100 256GB SSD, Samsung 840 SSD Wifi: TP Link WDN4800

 

Donkeys are love, Donkeys are life.                    "No answer means no problem!" - Luke 2015

 


The GPU wars are as fierce as ever... I love it! Carn' the green team!

GamingPC: Intel 4770k CPU, 2xMSI 780 GTX Twin Frozr, 16 GB Corsair Vengeance Pro, Swiftech H220 CPU Cooler.

Cookie Cutter Build log


Who do you think owns OpenGL?

 

http://en.wikipedia.org/wiki/Khronos_Group

 

The Khronos *cough* Saturn group (looks at Internet Explorer logo) took over OpenGL. Nvidia and AMD/ATI are both co-owners of OpenGL, along with Intel and a bunch of other tech companies.

 

Now tell me why we don't have native OpenGL backends when Mac ports use OpenGL and the game is on Windows/Mac. Why are these companies kissing DirectX's butt? Wouldn't it be easier to just use OpenGL on all platforms? Why would they purposely make two versions when they could make one, while MS holds a gun to our heads, forcing us to upgrade to a new OS to get a new DirectX?

 

DirectX is slow as hell in things like the Dolphin emulator, while OpenGL is stupidly fast. DirectX 9 can keep up, but can't display all the effects OpenGL can. Then you have people like Carmack saying OpenGL blows DirectX away. Mantle might give us better performance than OpenGL, but why is this even an issue? Why isn't everything just OpenGL right now?

http://gearnuke.com/john-carmack-nvidia-opengl-extensions-amd-mantle-offer-similar-improvements-over-directx/

 

Something ain't right, and I hate all these companies.

CPU:24/7-4770k @ 4.5ghz/4.0 cache @ 1.22V override, 1.776 VCCIN. MB: Z87-G41 PC Mate. Cooling: Hyper 212 evo push/pull. Ram: Gskill Ares 1600 CL9 @ 2133 1.56v 10-12-10-31-T1 150 TRFC. Case: HAF 912 stock fans (no LED crap). HD: Seagate Barracuda 1 TB. Display: Dell S2340M IPS. GPU: Sapphire Tri-x R9 290. PSU:CX600M OS: Win 7 64 bit/Mac OS X Mavericks, dual boot Hackintosh.


This is the main reason why I've always shown stronger support for ATi/AMD: nVidia always, ALWAYS, resorts to crap like this when it feels cornered. AMD has always been about being open and fair. TressFX works on both brands of GPUs; it's just that AMD was better at DirectCompute, which is why it posted bigger numbers even with TressFX enabled. This reminds me of when AC first launched: with AA enabled, ATi hardware was found to do better... so what did nV do? They got the developer to patch the game to remove ATi's advantage. This is so typical of their Apple-like behavior; the "it's got to be their way, or it's the highway" mentality just ticks me off to no end.

Main Rig: AMD AM4 R9 5900X (12C/24T) + Tt Water 3.0 ARGB 360 AIO | Gigabyte X570 Aorus Xtreme | 2x 16GB Corsair Vengeance DDR4 3600C16 | XFX MERC 310 RX 7900 XTX | 256GB Sabrent Rocket NVMe M.2 PCIe Gen 3.0 (OS) | 4TB Lexar NM790 NVMe M.2 PCIe4x4 | 2TB TG Cardea Zero Z440 NVMe M.2 PCIe Gen4x4 | 4TB Samsung 860 EVO SATA SSD | 2TB Samsung 860 QVO SATA SSD | 6TB WD Black HDD | CoolerMaster H500M | Corsair HX1000 Platinum | Topre Type Heaven + Seenda Ergonomic W/L Vertical Mouse + 8BitDo Ultimate 2.4G | iFi Micro iDSD Black Label | Philips Fidelio B97 | C49HG90DME 49" 32:9 144Hz Freesync 2 | Omnidesk Pro 2020 48" | 64bit Win11 Pro 23H2

2nd Rig: AMD AM4 R9 3900X + TR PA 120 SE | Gigabyte X570S Aorus Elite AX | 2x 16GB Patriot Viper Elite II DDR4 4000MHz | Sapphire Nitro+ RX 6900 XT | 500GB Crucial P2 Plus NVMe M.2 PCIe Gen 4.0 (OS)2TB Adata Legend 850 NVMe M.2 PCIe Gen4x4 |  2TB Kingston NV2 NVMe M.2 PCIe Gen4x4 | 4TB Leven JS600 SATA SSD | 2TB Seagate HDD | Keychron K2 + Logitech G703 | SOLDAM XR-1 Black Knight | Enermax MAXREVO 1500 | 64bit Win11 Pro 23H2


I don't see how this is evil in any way. Nvidia has made it easy for game devs to implement new technologies like PhysX, TXAA, and tessellation, among many others. AMD could do the same thing, but they're too busy making their own API.

 (\__/)

 (='.'=)

(")_(")  GTX 1070 5820K 500GB Samsung EVO SSD 1TB WD Green 16GB of RAM Corsair 540 Air Black EVGA Supernova 750W Gold  Logitech G502 Fiio E10 Wharfedale Diamond 220 Yamaha A-S501 Lian Li Fan Controller NHD-15 KBTalking Keyboard


I don't see how this is evil in any way. Nvidia has made it easy for game devs to implement new technologies like PhysX, TXAA, and tessellation, among many others. AMD could do the same thing, but they're too busy making their own API.

AMD made it easy for developers to implement technologies like TressFX, MLAA, tessellation, and HDAO without hiding the code from developers or Nvidia.


Nvidia has been known to fight dirty, especially when they're in a rough spot.

The rougher the spot, the dirtier they'll fight.

 

Only thing is, I don't think they're in a rough spot at the moment: their prices on competing GPUs are lower right now, since the mining "boom" price increases. People are PISSED at AMD for raising prices like that, while Nvidia kept theirs low the entire time.

Stuff:  i7 7700k @ (dat nibba succ) | ASRock Z170M OC Formula | G.Skill TridentZ 3600 c16 | EKWB 1080 @ 2100 mhz  |  Acer X34 Predator | R4 | EVGA 1000 P2 | 1080mm Radiator Custom Loop | HD800 + Audio-GD NFB-11 | 850 Evo 1TB | 840 Pro 256GB | 3TB WD Blue | 2TB Barracuda

Hwbot: http://hwbot.org/user/lays/ 

FireStrike 980 ti @ 1800 Mhz http://hwbot.org/submission/3183338 http://www.3dmark.com/3dm/11574089


And there I was, finishing my cocoa before bed. Now I have to read up on this article, as it sounds very interesting.

Core - EVGA Classified 3 | i7 980x | 12GB Corsair Dominator GT | Lian Li P80 | Corsair 128 Neutron GTX | 2 x WD 500gb Velociraptor | Asus Xonar Xense | 2 x EVGA 590 | Enermax Platimax 1500


Water Cooling - Alphacool NexXxos 360 Monsta | TFC 360 | Alphacool D5 Vario | Alphacool 250 Tube res | EK Supreme HF Nickle Plexi | 2 x EK Nickle Plexi 590 WB | Aquaero 5 XT


AMD made it easy for developers to implement technologies like TressFX, MLAA, tessellation, and HDAO without hiding the code from developers or Nvidia.

I think the point is that Nvidia doesn't want to put all that work into making libraries only for AMD to take advantage of them. It doesn't really make sense for Nvidia to help out AMD, now does it?

 (\__/)

 (='.'=)

(")_(")  GTX 1070 5820K 500GB Samsung EVO SSD 1TB WD Green 16GB of RAM Corsair 540 Air Black EVGA Supernova 750W Gold  Logitech G502 Fiio E10 Wharfedale Diamond 220 Yamaha A-S501 Lian Li Fan Controller NHD-15 KBTalking Keyboard


I think the point is that Nvidia doesn't want to put all that work into making libraries only for AMD to take advantage of them. It doesn't really make sense for Nvidia to help out AMD, now does it?

 

This kind of mentality towards NVIDIA really ticks me off; I just don't understand how they can get a free pass on stuff like this. Is it the marketing?

 

I love competition. I like AMD and NVIDIA, and I think they both do things that benefit the GPU market as a whole, but I will not stand for it when companies LOCK OUT code that makes a game perform drastically better on an NVIDIA GPU compared to an AMD GPU, or vice versa.

 

If NVIDIA truly wanted to make the PC platform as accessible as possible, as they have been boasting for months, then this GameWorks program should be open to AMD and others, to help optimize code for ALL GPUs from both manufacturers.

 

Furthermore, is there really any point in boasting about the PC as an open platform as long as companies do bullshit like this? It doesn't sound open to me, frankly.


I think the point is that Nvidia doesn't want to put all that work into making libraries only for AMD to take advantage of them. It doesn't really make sense for Nvidia to help out AMD, now does it?

I agree: it does not make sense for one company to aid another, especially a competitor. For end users like us, however, this can be seen as a problem. AMD has already stated that it will open its low-level API, Mantle, to Nvidia, for the betterment of the PC world. Nvidia doesn't want this, and is taking advantage of everyone's trust to ensure its dominance.

Business is business; companies will demolish each other so that they may survive. But as a consumer, one should be cautious of big, bad companies waiting to eat you up and spit you out.

Granted, there is not much we can do other than wait for the inevitable.


I don't care if I start earning $100,000,000,000,000 a month and NVIDIA makes the best graphics card the world has ever seen; I will no longer support them if they do shit like this.

 

AMD have their faults, but they don't try to fuck over the entire industry this way. Fuck the miners, but fuck NVIDIA even more.

 

I will patiently wait for the Lightning version of the 290X and won't even think about getting a 780 Ti, no matter what the price. It's time we stopped talking and stopped buying from those we do not support.

