Project CARS devs address AMD performance issues, AMD drivers to blame entirely, PhysX runs on CPU only, no GPU involvement whatsoever.

You sure you understand how drivers work?

At least how they are supposed to work...

I mean, all of this GPU driver update nonsense is the fault of the game developers and only the game developers.

Seriously, GPU drivers are the only drivers you have to constantly keep updating. Have you ever tried to update your Ethernet or sound card driver? Any sane person would tell you not to mess with those for the sake of messing with them if things are working.

Nowadays GPU drivers have become a new layer where poorly written code gets optimized.

Nvidia LOVES to optimise in the driver, because that gives them more power to differentiate their GPU lineup *cough* GTX 960 performs on par with Titan and 780 *cough*

AMD on the other hand believes that it should be done the "right" way - optimisation should take place in the game code - and they actively promote it and provide all kinds of tools to assist developers. Unfortunately most devs are so bad/uneducated/greedy/lazy or whatever their motives are that it doesn't leave GPU vendors a choice but to keep doing shader substitutions and all kinds of gory stuff in the driver to deal with the poorly written code.

 

EXAMPLE #1:

#include <iostream>
using std::cout;

// Spins through roughly a million loop iterations before finally printing 1000.
void RetardedCode()
{
    int c = 0;
    int f = 0;
    while (c < 1000)
    {
        c++;
        if (c == 1000 && f < 1000)
        {
            c = 0; f++;
        }
    }
    cout << c;
}

 

EXAMPLE #2:

void OptimisedCode() { cout << 1000; }

 

Both of these yield exactly the same end result - the number '1000' gets printed on the screen - but I hope you would agree that the first example takes a bajillion times more processor time to execute.

 

Another set of examples: when was the last time a driver came out that addresses performance issues in an indie-developed game?

Those devs usually try harder to code for the hardware and solve the issues themselves, or they just don't release to market.

1) your example is horrible

2) the two pieces of code are not the same: one just prints out the number 1000, the other prints the value of c once f reaches 1000, and f is only incremented when c reaches 1000. I.e. neither of these is "unoptimised" (not even a real word), as they do different things.

3) I don't think example 2 will compile (haven't used cout in a while)


For those who still haven't read the memo: if game devs made games correctly in the first place instead of releasing half-baked shit, then AMD and Nvidia wouldn't need to fix the problems by releasing drivers that tell graphics cards how to work around them.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Well, isn't this the sad truth. Professional programmers are professionals for a reason. They passed through rigorous standards and wear that degree with pride, because they truly know better and know more.

One of my relatives did the song and dance with computer science, thought about game development, and realized just how big a joke it truly was - and that a degree was maybe even a hindrance, since studios want to hire fucking morons for 50% of the price of what they'd have to pay a properly qualified programmer. It's a monumental joke how much work drivers do to cover up the pitfalls of developers.

It's even sadder when games like these have lead times that stretch into YEARS, yet still can't get it right.

I must admit the first paragraph made me think you were mocking me, especially the line about wearing the degree with pride, but yeah, if you think academia is elitist, game studios are more so. They think most people who graduated from an engineering field are too cold and logical to be useful for the creative process of building a game.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


1) your example is horrible

2) the two pieces of code are not the same: one just prints out the number 1000, the other prints the value of c once f reaches 1000, and f is only incremented when c reaches 1000. I.e. neither of these is "unoptimised" (not even a real word), as they do different things.

3) I don't think example 2 will compile (haven't used cout in a while)

While this is in fact a terrible example, I can tell you and show you just what sorts of naive mistakes are made in game programming which sacrifice performance in sometimes obvious and sometimes very subtle ways. Take a 2-D array and give me the first algorithm which comes to mind to give you the sum of every column. I can rewrite for the same functionality and get you a 10x speed up depending on your processor.
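For illustration, here is a minimal sketch of the kind of rewrite being described - the names and types are my own, assuming a row-major vector<vector<int>> matrix; the fully benchmarked version appears further down the thread:

#include <cstddef>
#include <vector>

// First-instinct version: walks down each column, striding across rows,
// which fights the cache because consecutive accesses are far apart in memory.
std::vector<long> column_sums_naive(const std::vector<std::vector<int>>& m) {
    std::vector<long> sums(m[0].size(), 0);
    for (std::size_t col = 0; col < m[0].size(); ++col)
        for (std::size_t row = 0; row < m.size(); ++row)
            sums[col] += m[row][col];
    return sums;
}

// Same result, but traverses row by row in the order the data sits in memory,
// so the hardware prefetcher and caches can do their job.
std::vector<long> column_sums_rowwise(const std::vector<std::vector<int>>& m) {
    std::vector<long> sums(m[0].size(), 0);
    for (std::size_t row = 0; row < m.size(); ++row)
        for (std::size_t col = 0; col < m[row].size(); ++col)
            sums[col] += m[row][col];
    return sums;
}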

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


While this is in fact a terrible example, I can tell you and show you just what sorts of naive mistakes are made in game programming which sacrifice performance in sometimes obvious and sometimes very subtle ways. Take a 2-D array and give me the first algorithm which comes to mind to give you the sum of every column. I can rewrite for the same functionality and get you a 10x speed up depending on your processor.

I wasn't saying that there isn't horrible programming done in the games industry (AC Unity is the poster child of that); I was just pointing out the terribleness of his example. In spirit he is right, just not in how he tried to represent it. A better example would be the classic entry-level programmer question on how to improve a function that finds the largest value in an array.
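For reference, one common form of that exercise - my own sketch, not necessarily the exact question meant above, and both functions assume a non-empty array - contrasts a wasteful first attempt with the single-pass answer:

#include <algorithm>
#include <vector>

// Wasteful first attempt: copy and sort the whole array (O(n log n) time,
// O(n) extra memory) just to read off the last element.
int largest_by_sorting(std::vector<int> values) {
    std::sort(values.begin(), values.end());
    return values.back();
}

// The expected improvement: a single linear scan, O(n) time, no copying.
int largest_by_scan(const std::vector<int>& values) {
    int best = values.front();
    for (int v : values)
        best = std::max(best, v);
    return best;
}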


1) your example is horrible

2) the two pieces of code are not the same: one just prints out the number 1000, the other prints the value of c once f reaches 1000, and f is only incremented when c reaches 1000. I.e. neither of these is "unoptimised" (not even a real word), as they do different things.

3) I don't think example 2 will compile (haven't used cout in a while)

1) I know, but it might sadly not be far from the truth

2) I didn't say they are the same code - they just produce the same output, a one followed by three zeros, and the end user won't know the difference

3) I won't argue; I'm more of a C# person myself =(

 

This one's a fun read; mind you, the headline is misleading, because... well, get to the lawyer part...

http://www.theregister.co.uk/2003/06/03/futuremark_nvidia_didnt_cheat/

CPU: Intel i7 5820K @ 4.20 GHz | Motherboard: MSI X99S SLI PLUS | RAM: Corsair LPX 16GB DDR4 @ 2666MHz | GPU: Sapphire R9 Fury (x2 CrossFire)
Storage: Samsung 950Pro 512GB // OCZ Vector150 240GB // Seagate 1TB | PSU: Seasonic 1050 Snow Silent | Case: NZXT H440 | Cooling: Nepton 240M
FireStrike // Extreme // Ultra // 8K // 16K

 


While this is in fact a terrible example, I can tell you and show you just what sorts of naive mistakes are made in game programming which sacrifice performance in sometimes obvious and sometimes very subtle ways. Take a 2-D array and give me the first algorithm which comes to mind to give you the sum of every column. I can rewrite for the same functionality and get you a 10x speed up depending on your processor.

If we're using C#, I'd go for .NET's built-in Sum function and hope that Microsoft's developers know better :)

Alternatively, hand each core a portion of the array to sum with a lambda delegate?

I know some crazy masterminds can do it with bitwise XOR/AND/OR without using a single '+' operator.
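For the curious, that bitwise trick works by computing the carry bits separately and feeding them back in until none remain. A minimal sketch of my own (not from the post), for unsigned integers:

#include <iostream>

// Add two unsigned integers using only bitwise operations:
// XOR produces the sum without carries, AND shifted left produces the carries.
unsigned add_without_plus(unsigned a, unsigned b) {
    while (b != 0) {
        unsigned carry = (a & b) << 1; // positions where a carry is generated
        a = a ^ b;                     // carry-less partial sum
        b = carry;                     // feed the carries back in
    }
    return a;
}

int main() {
    std::cout << add_without_plus(700, 300) << "\n"; // prints 1000
}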

CPU: Intel i7 5820K @ 4.20 GHz | Motherboard: MSI X99S SLI PLUS | RAM: Corsair LPX 16GB DDR4 @ 2666MHz | GPU: Sapphire R9 Fury (x2 CrossFire)
Storage: Samsung 950Pro 512GB // OCZ Vector150 240GB // Seagate 1TB | PSU: Seasonic 1050 Snow Silent | Case: NZXT H440 | Cooling: Nepton 240M
FireStrike // Extreme // Ultra // 8K // 16K

 


For those who still haven't read the memo: if game devs made games correctly in the first place instead of releasing half-baked shit, then AMD and Nvidia wouldn't need to fix the problems by releasing drivers that tell graphics cards how to work around them.

 

A million times this ^ 

 

--------------

 

What bothers me is the contradictory nature of Gameworks. Every game so far using Gameworks has minor or major performance issues for both Nvidia cards and AMD cards, which must be addressed through heavy-handed driver optimizations. In some games the engine is so locked down that it doesn't seem to matter. 

 

If AMD stopped communicating with the developers last year, it likely means that there was nothing more to be said. I'm not calling shenanigans per se, but if the developers are using GameWorks, there's nothing stopping Nvidia from sending updates which may change how certain features that are not specific to Nvidia cards work. If AMD - a company that is incredibly open and transparent with developers and customers - stopped talking to the developers of Cars last fall, it doesn't take a genius to figure out what went down behind the scenes.

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


A million times this ^ 

 

--------------

 

What bothers me is the contradictory nature of Gameworks. Every game so far using Gameworks has minor or major performance issues, which must be addressed through heavy-handed driver optimizations. In some games the engine is so locked down that it doesn't seem to matter. 

 

If AMD stopped communicating with the developers last year, it likely means that there was nothing more to be said. I'm not calling shenanigans per se, but if the developers are using GameWorks, there's nothing stopping Nvidia from sending updates which may change how certain features that are not specific to Nvidia cards work. If AMD - a company that is incredibly open and transparent with developers and customers - stopped talking to the developers of Cars last fall, it doesn't take a genius to figure out what went down behind the scenes.

9 times out of 10, when a GameWorks game comes out it has AMD performance issues.

If AMD drivers are "shit", why are they only shit on GameWorks titles? My opinion is that they don't have the tools to properly optimize things and Nvidia's middleware is locked down behind NDA or some crap.

If I write a game with some API I made myself, then tell some programmer to optimize it and don't release the source code, they need to reverse engineer it, making their optimizations take more time and be less effective.

What people ITT can't fathom is that maybe, just maybe, AMD doesn't have the necessary tools to optimize, instead of the theory that's floating around that they are simply lazy assholes.


9 times out of 10, when a GameWorks game comes out it has AMD performance issues.

If AMD drivers are "shit", why are they only shit on GameWorks titles? My opinion is that they don't have the tools to properly optimize things and Nvidia's middleware is locked down behind NDA or some crap.

If I write a game with some API I made myself, then tell some programmer to optimize it and don't release the source code, they need to reverse engineer it, making their optimizations take more time and be less effective.

What people ITT can't fathom is that maybe, just maybe, AMD doesn't have the necessary tools to optimize, instead of the theory that's floating around that they are simply lazy assholes.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


 

If AMD stopped communicating with the developers last year, it likely means that there was nothing more to be said. I'm not calling shenanigans per se, but if the developers are using GameWorks, there's nothing stopping Nvidia from sending updates which may change how certain features that are not specific to Nvidia cards work. If AMD - a company that is incredibly open and transparent with developers and customers - stopped talking to the developers of Cars last fall, it doesn't take a genius to figure out what went down behind the scenes.

 

So many assumptions and insinuations. If AMD are "incredibly open and transparent with developers", then why haven't they openly explained the situation? There are so many different possibilities as to why things happen the way they do. I'm starting to get tired of the unrelenting negativity and bias, with people always assuming there's some heavy-handed conspiracy where companies are going out of their way to damage your personal gaming experience. It's like people see the underdog through rose-colored glasses. Guess what? They are just as evil, manipulative, cunning, and ready to oppress in order to make money as ANY other company on this planet.

 

Has anyone actually considered that AMD just isn't in a position to put the same resources into middleware and game dev support? It may not be a choice, and it may not be the result of a heavy-handed approach; it is very possible they just don't have the money for it. Another possibility: while Nvidia has 75% of the market, who are game devs going to optimize for first? The 25%? Of course not; that's bad business. Only a fool would do that. Again, in this scenario it's not GameWorks that's the issue, it's the mechanics of the marketplace. If you don't like it, then buy AMD and avoid the game devs that you think are the issue. But make sure you are right, because you may simply be supporting the only devs that can't afford to advance games the same way without GPU maker assistance.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


If we're using C#, I'd go for .NET's built-in Sum function and hope that Microsoft's developers know better :)

Alternatively, hand each core a portion of the array to sum with a lambda delegate?

I know some crazy masterminds can do it with bitwise XOR/AND/OR without using a single '+' operator.

If you're gonna do such a test, it needs to be C++. Virtual machine languages have too much variability. Now, here's my base program including the naive column sum algorithm for an MxN matrix.

 

Trials run per code base: 6

CPU: Core I7 2600K stock clocks & settings, watercooled (no thermal throttling possible)

RAM: dual-channel G.Skill Trident X 1866 (3x4GB)

OS: Ubuntu 15.04 Vivid Vervet fully updated as of 5/14/15

Compiler: icpc (ICC) 15.0.3 20150407   Copyright © 1985-2015 Intel Corporation.  All rights reserved.

Compiler Flags: -Wall -Wextra -std=c++11 -g -Ofast dummy.cpp -o dummy

Timing Specifications: /usr/bin/time -v ./dummy 115000 20000

Major (requiring disk I/O) page faults: 0 (very important that we're not hitting virtual memory on the hard drives here)

CPU Usage: 99% for all valid runs

 

Be aware that because Sandy Bridge is limited to AVX (no AVX2 or FMA), you will see much higher performance on Haswell, but you will need a lot of RAM to see how it scales. I use up most of my available 10 GB, and the system has the last 2.

 

Naive/First Instinct Serial Solution

Elapsed time average: 53.46 seconds

Standard deviation: 0.27 seconds

#include <iostream>
#include <vector>
#include <stdexcept>
#include <sstream>

std::vector<int> col_sums(std::vector<std::vector<short>> data) {
    unsigned int height = data.size(), width = data[0].size();
    std::vector<int> sums(width, 0);
    // Column-major traversal: strides down each column, jumping across rows in memory.
    for (unsigned int i = 0; i < width; i++) {
        for (unsigned int j = 0; j < height; j++) {
            sums[i] += data[j][i];
        }
    }
    return sums;
}

int main(int argc, char** argv) {
    if (argc < 3) {
        std::cout << "Run program as \"executable <rows> <columns>\"\n";
    } else {
        std::stringstream args;
        args << argv[1] << " " << argv[2];
        int rows, columns;
        args >> rows >> columns;
        std::vector<std::vector<short>> data(rows, std::vector<short>(columns));
        std::vector<int> columnSums = col_sums(data);
    }
}

 

C++ Favorable Serial Solution

Elapsed time average: 4.90 seconds (that's right, 10.91 x speedup with 1 tiny change)

Standard deviation: 0.003 seconds

std::vector<int> col_sums(std::vector<std::vector<short>> data) {
    unsigned int height = data.size(), width = data[0].size();
    std::vector<int> sums(width, 0);
    // Row-major traversal: walks across each row in the same order the data sits in memory.
    for (unsigned int i = 0; i < height; i++) {
        for (unsigned int j = 0; j < width; j++) {
            sums[j] += data[i][j];
        }
    }
    return sums;
}

 

And that's just by understanding how C++ stores multi-dimensional arrays - in row-major order - and how the caches and translation lookaside buffer try to keep spatially local data close to your CPU. Traversing by column causes a huge number of cache misses which, as you can see, severely diminishes your performance. Feel free to experiment by adding columns, narrowing row width, expanding it, etc. This second algorithm may look slightly obscure to a novice or to someone who doesn't know the nitty-gritty details of a language's implementation (COUGH* game programmers COUGH*). Believe it or not, I found an almost exact copy of this sort of naivete in the physics engine for Unity. You wouldn't believe how hard I had to fight to get it changed, including providing extensive evidence like what you can reproduce here for yourselves. I could not believe how impossible it was to convince three programmers that some upstart nobody had found something they could vastly improve in their code. If I ever come across as an elitist, it's because I'm jaded by industry people who think stuck-up college kids know nothing.

 

Now, using Intel's CilkPlus extensions (which run on AMD processors all the same if you compile with g++ or clang++; g++ 4.9.2 has some CilkPlus bugs right now though), we can do even better before going multithreaded. To my knowledge, Microsoft's Visual C/C++ compiler does not support CilkPlus at all.

 

Good CilkPlus Solution (for g++ or clang++, must use -fcilkplus flag)

Elapsed time average: 3.00 seconds (almost 40% speedup over good C++ algorithm)

Standard deviation: 0.000 seconds

std::vector<int> col_sums(std::vector<std::vector<short>> data) {
    unsigned int height = data.size(), width = data[0].size();
    std::vector<int> sums(width, 0);
    for (unsigned int i = 0; i < height; i++) {
        // Array-notation section: adds row i to the running column sums element-wise.
        sums.data()[0:width] += data[i].data()[0:width];
    }
    return sums;
}

 

Bad CilkPlus Solution (for g++ or clang++, must use -fcilkplus flag)

Elapsed time average: 50.80 seconds (better than bad algorithm in raw C++, but terrible nonetheless)

Standard deviation: 0.29 seconds

std::vector<int> col_sums(std::vector<std::vector<short>> data) {
    unsigned int height = data.size(), width = data[0].size();
    std::vector<int> sums(width, 0);
    for (unsigned int i = 0; i < width; i++) {
        // Reduces one whole column per iteration, striding across rows (cache-unfriendly).
        sums[i] = __sec_reduce_add(data.data()[0:height].data()[i]);
    }
    return sums;
}

 

CilkPlus even manages to get you something when your algorithm doesn't use your hardware resources optimally, but it doesn't get you too much.

 

OpenMP and CilkPlus Combined Solution (export OMP_NUM_THREADS=4, compile using -fopenmp (and -fcilkplus if using g++ or clang++))

CPU Usage: 195%

Elapsed time average: 1.54 seconds (almost exactly half the time of the plain CilkPlus solution. There is a barrier to scaling here, determined by the critical section, which must be resolved)

Standard deviation: 0.0045 seconds

***Note: using 2 threads results in the same elapsed time and lower (131%) CPU usage. Sometimes getting code to scale well is difficult***

Proposals: chunk-wise reduction from partials to total. Edit: no gains. Still confused.

#include <iostream>
#include <vector>
#include <stdexcept>
#include <sstream>
#include <omp.h>

std::vector<int> col_sums(const std::vector<std::vector<short>>& data) {
    unsigned int height = data.size(), width = data[0].size();
    std::vector<int> totalSums(width, 0), threadSums(width, 0);
    #pragma omp parallel firstprivate(threadSums)
    {
        #pragma omp for
        for (unsigned int i = 0; i < height; i++) {
            threadSums.data()[0:width] += data[i].data()[0:width];
        }
        // Merge each thread's partial sums into the total; this is the critical section
        // that limits scaling (an array-section update cannot be a single atomic operation).
        #pragma omp critical
        totalSums.data()[0:width] += threadSums.data()[0:width];
    }
    return totalSums;
}

 

Like I said, most game programmers are not even on this level of skill and knowledge, and I apparently have some things to review as well, but my point stands. High performance computing and extracting more from your CPU is not all that difficult, but you need to know the tools you have before you jump into a huge project headlong.

 

Edit: you know it's bad when I forget to use the reference operator for a huge data set too. You can get even greater speedups and better performance on top of what you see here, but I'm very tired. Finals week does that to you. :lol:

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


9 times out of 10, when a GameWorks game comes out it has AMD performance issues.

If AMD drivers are "shit", why are they only shit on GameWorks titles? My opinion is that they don't have the tools to properly optimize things and Nvidia's middleware is locked down behind NDA or some crap.

If I write a game with some API I made myself, then tell some programmer to optimize it and don't release the source code, they need to reverse engineer it, making their optimizations take more time and be less effective.

What people ITT can't fathom is that maybe, just maybe, AMD doesn't have the necessary tools to optimize, instead of the theory that's floating around that they are simply lazy assholes.

Anyone thinking AMD is lazy is crazy. AMD doesn't have the manpower, hence their hiring of more driver engineers recently. That said, BS on not having the tools. I've disassembled and decompiled the GameWorks binaries myself. It's not terribly hard to reverse engineer them currently. Now, if they get recompiled to use AVX2, I'm gonna be up shit creek. Reverse engineering unrolled for loops, or a bunch of different variables packed into AVX2 registers and the resulting instructions, is a nightmare.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


This "Gameworks causes AMD hinderances" bullshit needs to stop. 

 

http://linustechtips.com/main/topic/316992-eidos-amd-to-show-off-tressfx-30-in-new-deus-ex-engine/?p=4323846

Oh wait, AMD cards beating Nvidia cards in Gameworks titles. 

 

LOL, that link disproves your point! The 290X has higher performance than a Titan/780, as it should. Yet Assassin's Creed Unity runs worse on AMD, as a GameWorks title.

 

Metro Last Light only has PhysX, where the advanced version can't really run on AMD, so I doubt Anandtech activated it (as it would stutter horribly on AMD).

 

Sorry, but you are ignoring actual statements from both AMD and a Valve developer. Even Nvidia admits it:

 

Could GameWorks be used to harm AMD (and by extension, AMD gamers)?

AMD: Absolutely yes. Games that are part of the GW program have been much harder for us to optimize.

Nvidia: Theoretically yes, but that’s not the point of the program or the reason we developed it.

Developers say (Ex Valve programmer Richard Geldreich): AMD has valid reason to be concerned.

 


Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


LOL, that link disproves your point! The 290X has higher performance than a Titan/780, as it should. Yet Assassin's Creed Unity runs worse on AMD, as a GameWorks title.

 

Metro Last Light only has PhysX, where the advanced version can't really run on AMD, so I doubt Anandtech activated it (as it would stutter horribly on AMD).

 

Sorry, but you are ignoring actual statements from both AMD and a Valve developer. Even Nvidia admits it:

 

 


 

Having a valid reason to be concerned is not the same as there actually being an issue. The US has a valid reason to be concerned about nuclear weapons in North Korea, but that doesn't mean North Korea actually has nuclear weapons, let alone ones that are causing a legitimate issue. AMD has a legitimate concern with GameWorks; that does not mean GameWorks is being used to gimp their hardware.

 

What Geldreich actually said in that article:

 

I don’t think NV is purposely tilting the table here. But if you look at the big picture, I wouldn’t preclude a sort of emergent behavior that could effectively tilt the competitive advantage to whatever vendor manages to embed the best developers into the most key game teams.”

 

 

 

 

He doesn't think Nvidia purposely gimps AMD products, but he also states that anyone could tilt the scales if they embed the best developers into the key game teams. I can't believe such a benign statement has to be made to point out that people want to use the best product available. Can you blame them? I won't use a second-rate product if there is a better one on offer at no additional cost. Why would anyone expect a developer not to want the same?

 

EDIT: here's the link to the page you quote him from.

 

http://www.extremetech.com/gaming/183411-gameworks-faq-amd-nvidia-and-game-developers-weigh-in-on-the-gameworks-controversy/4

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


Having a valid reason to be concerned is not the same as there actually being an issue. The US has a valid reason to be concerned about nuclear weapons in North Korea, but that doesn't mean North Korea actually has nuclear weapons, let alone ones that are causing a legitimate issue. AMD has a legitimate concern with GameWorks; that does not mean GameWorks is being used to gimp their hardware.

 

What Geldreich actually said in that article:

 

 

 

He doesn't think Nvidia purposely gimps AMD products, but he also states that anyone could tilt the scales if they embed the best developers into the key game teams. I can't believe such a benign statement has to be made to point out that people want to use the best product available. Can you blame them? I won't use a second-rate product if there is a better one on offer at no additional cost. Why would anyone expect a developer not to want the same?

 

EDIT: here's the link to the page you quote him from.

 

http://www.extremetech.com/gaming/183411-gameworks-faq-amd-nvidia-and-game-developers-weigh-in-on-the-gameworks-controversy/4

 

But AMD claims that it is actually a problem:

 

https://youtu.be/8uoD8YKwtww?t=1546

 

And we KNOW, that you can only do very limited optimization on these black boxed proprietary DLL's.

 

Now here is where it gets interesting: (paraphrasing)

"We've seen shaders, 6000 lines of code shaders, that change in the very last build" (about 29:35 in or something).

 

Project Cars seems to have run nicely in the betas, based on what I've read in this thread. Yet when the final version comes out, suddenly all the AMD optimization is just FUBAR. And the shitty dev blames AMD for this?

 

Now remember the quote, I've used in this thread earlier from the Extremetech link:

 

There are fundamental limits to how much perf you can squeeze out of the PC graphics stack when limited to only driver-level optimizations,” Geldreich told ExtremeTech. “The PC driver devs are stuck near the very end of the graphics pipeline, and by the time the GL or D3D call stream gets to them there’s not a whole lot they can safely, sanely, and sustainably do to manipulate the callstream for better perf. Comparatively, the gains you can get by optimizing at the top or middle of the graphics pipeline (vs. the very end, inside the driver) are much larger.”

 

With all this information, and everything we've seen from other GameWorks titles painting the same picture - AMD having less performance in those titles only - is it really so impossible to imagine that there is an actual problem with GameWorks? I don't see how any rational, unbiased person could say yes. All the evidence points to a problem.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


But AMD claims that it is actually a problem:

 

https://youtu.be/8uoD8YKwtww?t=1546

 

And we KNOW, that you can only do very limited optimization on these black boxed proprietary DLL's.

 

Now here is where it gets interesting: (paraphrasing)

"We've seen shaders, 6000 lines of code shaders, that change in the very last build" (about 29:35 in or something).

 

Project Cars seems to have run nicely in the betas, based on what I've read in this thread. Yet when the final version comes out, suddenly all the AMD optimization is just FUBAR. And the shitty dev blames AMD for this?

 

Now remember the quote, I've used in this thread earlier from the Extremetech link:

 

 

With all this information, and everything we've seen from other GameWorks titles painting the same picture - AMD having less performance in those titles only - is it really so impossible to imagine that there is an actual problem with GameWorks? I don't see how any rational, unbiased person could say yes. All the evidence points to a problem.

Anyone who can use inductive reasoning correctly will tell you that you do not have a strong enough basis to claim that. It's no better than circumstantial evidence in a court case. That said, I've made it perfectly clear there are many tiers of programmers in this world (see how many good and bad ways you can implement the same simple thing in post #413), and I've already said my piece about how game devs are generally in the lower half, if not the lowest third, of all programmers if we were to grade on skill at optimization. That said, there is no direct proof that GameWorks does anything to directly hinder AMD's performance. In fact, I'd have found such proof by now, much the way it was found that Intel's C/C++/Fortran compiler checks CPUID vendor strings. You can find this stuff with a software profiler and by disassembling/decompiling. There's nothing which remotely hints that GameWorks discriminates based on brand or architecture. Hell, it's not even that much more optimized than general game libraries. When the day comes they use AVX2 on everything, I will no longer be able to reverse-engineer the code either. It's bad enough unfolding sets of 4 variables transformed together. When it becomes 8, it will be very difficult to tell the difference between an unrolled loop and a preparatory variable setup before jumping to another function.
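For context, the CPUID check referred to above can be reproduced in a few lines. This is a minimal sketch of my own for GCC/Clang on x86, not code taken from any of the binaries discussed:

#include <cpuid.h>
#include <cstring>
#include <iostream>

// Read the 12-byte CPU vendor string ("GenuineIntel", "AuthenticAMD", ...)
// via CPUID leaf 0; dispatching on this string is what the Intel compiler
// was found to do when choosing code paths.
int main() {
    unsigned eax, ebx, ecx, edx;
    char vendor[13] = {0};
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    std::memcpy(vendor + 0, &ebx, 4);
    std::memcpy(vendor + 4, &edx, 4);
    std::memcpy(vendor + 8, &ecx, 4);
    std::cout << "CPU vendor: " << vendor << "\n";
}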

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


But AMD claims that it is actually a problem:

 

https://youtu.be/8uoD8YKwtww?t=1546

 

And we KNOW, that you can only do very limited optimization on these black boxed proprietary DLL's.

 

Now here is where it gets interesting: (paraphrasing)

"We've seen shaders, 6000 lines of code shaders, that change in the very last build" (about 29:35 in or something).

 

Project Cars seems to have run nicely in the betas, based on what I've read in this thread. Yet when the final version comes out, suddenly all the AMD optimization is just FUBAR. And the shitty dev blames AMD for this?

 

Now remember the quote, I've used in this thread earlier from the Extremetech link:

 

 

With all this information, and everything we've seen from other GameWorks titles painting the same picture - AMD having less performance in those titles only - is it really so impossible to imagine that there is an actual problem with GameWorks? I don't see how any rational, unbiased person could say yes. All the evidence points to a problem.

See post #419, 

 

I cancel out your AMD claims with Nvidia claims. All that's left are the programmers and devs. 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


See post #419, 

 

I cancel out your AMD claims with Nvidia claims. All that's left are the programmers and devs. 

Who are the people that cause the need for game specific drivers.......

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


See post #419, 

 

I cancel out your AMD claims with Nvidia claims. All that's left are the programmers and devs. 

 

Sorry, but I believe AMD and the Valve dev, plus Tim Sweeney (founder of Epic Games), a lot more than Nvidia, which has a history of doing things like this (see the x87 PhysX BS on CPUs).

 

But if you want more devs, read this: http://www.pcper.com/reviews/Editorial/NVIDIA-and-AMD-Fight-over-NVIDIA-GameWorks-Program-Devil-Details/NVIDIAs-Response

 

But let me ask you this then: if closed-source proprietary middleware is NOT responsible for bad performance, why did Tomb Raider run so poorly on Nvidia when TressFX was activated? And why did we see huge performance increases with TressFX once AMD released the source code to Nvidia?

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


This "Gameworks causes AMD hinderances" bullshit needs to stop. 

 

http://linustechtips.com/main/topic/316992-eidos-amd-to-show-off-tressfx-30-in-new-deus-ex-engine/?p=4323846

Oh wait, AMD cards beating Nvidia cards in Gameworks titles. 

 

Just because a title says Gameworks on it doesn't mean they all use the exact same libraries.  Some could have more significant performance impacts than others.  It all depends on what the dev uses.  The devs are also capable of making their own internal optimizations.  

 

It's confirmed that Nvidia do not allow AMD to have the source code for Gameworks.  Devs however can get access in specific cases and may manually optimize it themselves.

 

Project Cars shows mediocre performance on Kepler and AMD, yet decent performance on Maxwell.  If that doesn't scream "Exclusive driver side optimizations" to you then that's fine.  

 

[Benchmark chart: 1080p_Rainy.png - Project CARS 1080p results in rainy conditions]

 

Dat 960 beating out the 780, 770 and 290x despite being a much weaker card than any of them.

4K // R5 3600 // RTX2080Ti


Any word from AMD yet? New drivers coming?


Sorry, but I believe AMD and the Valve dev, plus Tim Sweeney (founder of Epic Games), a lot more than Nvidia, which has a history of doing things like this (see the x87 PhysX BS on CPUs).

 

But if you want more devs, read this: http://www.pcper.com/reviews/Editorial/NVIDIA-and-AMD-Fight-over-NVIDIA-GameWorks-Program-Devil-Details/NVIDIAs-Response

 

But let me ask you this then: if closed-source proprietary middleware is NOT responsible for bad performance, why did Tomb Raider run so poorly on Nvidia when TressFX was activated? And why did we see huge performance increases with TressFX once AMD released the source code to Nvidia?

PhysX is meant to run on GPUs, which are a shitload more powerful than a CPU for this kind of workload. Of course PhysX has problems running on the CPU, as no consumer-grade CPU has anywhere near the computational throughput of a GPU.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Sorry, but I believe AMD and the Valve dev, plus Tim Sweeney (founder of Epic Games), a lot more than Nvidia, which has a history of doing things like this (see the x87 PhysX BS on CPUs).

 

But if you want more devs, read this: http://www.pcper.com/reviews/Editorial/NVIDIA-and-AMD-Fight-over-NVIDIA-GameWorks-Program-Devil-Details/NVIDIAs-Response

 

But let me ask you this then: if closed-source proprietary middleware is NOT responsible for bad performance, why did Tomb Raider run so poorly on Nvidia when TressFX was activated? And why did we see huge performance increases with TressFX once AMD released the source code to Nvidia?

 

Of course you believe AMD over everyone else, surprise surprise.

 

As has been pointed out so many, many times already, there is no one direct cause, and there is not one over-represented outcome from a singular cause, i.e. not every GameWorks game causes AMD GPU retardation. For every graph or example you pick where it looks like a card is disadvantaged by middleware, there is another that shows the opposite. You cannot pick one example out of multitudes and make solid claims.

 

AMD cards perform better than Nvidia cards in many GameWorks titles, Nvidia cards sometimes perform poorly when AMD middleware is used in games, and sometimes Batman's cape looks good and sometimes it doesn't. There is no singular, unequivocal cause for issues with AMD cards. This fantasy people have of trying to pin it solely on Nvidia, or solely on the devs, or solely on AMD is quite frankly BS.

 

I have said so many times before that there is no evidence for nearly all of the claims made in this thread.

 

Also, you mention Tim Sweeney; here's what he had to say on the record about GameWorks and middleware:

 

However, there’s not a general expectation and certainly not an obligation that a middleware providers shares their code with hardware vendors, or accept optimizations back. There are legitimate reasons why they may choose not to, for example to protect trade secrets.

Nowadays, some game middleware packages are owned or supported by hardware companies, such as Intel owning Havok Physics, Nvidia owning PhysX and GameWorks, and AMD’s past funding and contributing to the development of Bullet Physics. Here IHVs are investing millions of dollars into creating middleware that is often provided to developers for free. It’s not necessarily realistic to expect that a hardware company that owns middleware to share their code with competing hardware companies for the purpose of optimizing it for their hardware, especially when hardware vendors are often involved in competing or overlapping middleware offerings.

Game and engine developers who choose middleware for their projects go in with full awareness of the situation. With Unreal Engine 4, Epic Games is a very happy user of Nvidia’s GameWorks including PhysX and other components.

 

 

There's just no justification for some of the silly arguments people are making. Look at what he said: game devs know what they are getting into, and it is unrealistic to expect hardware companies that own middleware to share the code with the competition. Don't think for a second that if the roles were reversed AMD would do anything different.

 

 

 

Also for everyone else, can you please put those ugly arse unnecessarily pointless graphs that prove absolutely nothing in a spoiler.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


This topic is now closed to further replies.

