
Project CARS devs address AMD performance issues: AMD drivers entirely to blame; PhysX runs on the CPU only, with no GPU involvement whatsoever.

But TressFX though :S

The realistic fur that is possible with PhysX is far more advanced than that, as is what is possible in DirectX 12 with individually rendered strands of hair. AMD is far behind everyone else.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


I posted this on another related thread, and I legitimately want answers to this question; I want to know about the flaws in my logic. Anyway, here is the post:

 

Honestly I don't get why people get so butthurt about this. If it bothers you that you can't get certain features or settings, then just buy Nvidia the next time you're due for an upgrade. What is so hard about that? Also, not everything is open source, and just because people want it to be doesn't mean that it should be or will be. Is it so bad that a company wants compensation for their hard work?

Here is a list of common fallacies. Which ones have you used today?


Yep, and it doesn't even make sense. PhysX never affected AMD cards; the best example is Hitman: Absolution, which uses PhysX as its main physics engine yet is an AMD title and even runs badly on Nvidia cards:

[Hitman: Absolution benchmark chart, 1920x1080]

 

 

It didn't run badly on NVIDIA cards; AMD just optimized the in-game benchmark to run better on AMD hardware. Those results are nowhere near accurate or close to actual in-game performance:

 

[ComputerBase.de gameplay benchmark chart]

 

CB.de here benchmarks an actual gameplay run (a Luke "Slick"-style run) instead of using the in-game benchmark.

 

http://www.computerbase.de/2012-11/eigene-benchmarks-zu-hitman-absolution/


The realistic fur that is possible with PhysX is far more advanced than that, as is what is possible in DirectX 12 with individually rendered strands of hair. AMD is far behind everyone else.

But but but....

 

I posted this on another related thread, and I legitimately want answers to this question; I want to know about the flaws in my logic. Anyway, here is the post:

 

Honestly I don't get why people get so butthurt about this. If it bothers you that you can't get certain features or settings, then just buy Nvidia the next time you're due for an upgrade. What is so hard about that? Also, not everything is open source, and just because people want it to be doesn't mean that it should be or will be. Is it so bad that a company wants compensation for their hard work?

For some it's probably about supporting a company they believe in. All power to them.

Anyway, I'll be in my lurking closet a little while longer till I actually start to understand how this all ties together. *Goes back into hiding*

Everyone has a cool signature. I don't, so I thought I would write something.

- Cool right?


It didn't run badly on NVIDIA cards; AMD just optimized the in-game benchmark to run better on AMD hardware. These results are nowhere near accurate or close to actual in-game performance:

 

 

It still shows my point that it runs just fine on AMD even though it's based on PhysX.

RTX2070OC 


It still shows my point that it runs just fine on AMD even though it's based on PhysX.

 

I know, and there were no PhysX options in that game either.

 

I just don't want people to see that other benchmark and think the game didn't run well on NVIDIA hardware (being a Gaming Evolved title), when it did. But I understood your point in posting it.


I posted this on another related thread, and I legitimately want answers to this question; I want to know about the flaws in my logic. Anyway, here is the post:

 

Honestly I don't get why people get so butthurt about this. If it bothers you that you can't get certain features or settings, then just buy Nvidia the next time you're due for an upgrade. What is so hard about that? Also, not everything is open source, and just because people want it to be doesn't mean that it should be or will be. Is it so bad that a company wants compensation for their hard work?

If everyone follows that suggestion/advice, NVIDIA will have a monopoly and will be able to do whatever they want pricing-wise.

That's what would happen if 100% of consumers stopped buying AMD GPUs and bought Nvidia ones.

I don't think that's healthy; either AMD or another company, possibly Intel in the future, needs to present some competition.


They'll blame that new lighting engine that Nvidia introduced. Just you wait and see.

I get paid by Samsung and I don't even defend them as hard as the AMD defence force mobilizes for AMD. It's amusing. I wonder what they'll blame next.

On another note--it would be pretty nice to be paid by Samsung. Just ask them for SSDs. "Samsung products are impervious to ruin, and stamp out the competition through intuitive design and innovation." Bam. Three 850 Pros right there. Man, that'd be great...


Pretty much, AMD has no one to blame but itself for all of the problems they are having. It's not Nvidia's fault that AMD hasn't innovated in years.

 

This problem is related to a developer who chose to implement something in their game engine that AMD has only limited access to, due to contractual agreements (NDA)... what does this have to do with AMD innovation, or with blaming NVIDIA for it? I won't even comment on the "GameWorks open source" part of the previous comment.

Plus, AMD has innovated to the point where the next APIs to be released, DX12 and Vulkan, are a by-product of AMD's Mantle. Yes, the whole industry will have its next API foundation based in part on AMD innovation. This is just one example.

 

Just because AMD doesn't go on a stage and brag about innovation, or about rebranding innovations, doesn't mean they don't innovate. They should do this, in fact, because it's what feeds people like you: shallow amounts of incomprehensible information presented in a "cool way" that will leave you in awe. It just works.

 

 

 

The realistic fur that is possible with PhysX is far more advanced than that, as is what is possible in DirectX 12 with individually rendered strands of hair. AMD is far behind everyone else.

 

LOL... Again, part of DX12 was based on AMD's Mantle... how can they be behind if they are delivering innovation? You even have the rumor, I'll say again rumor, that AMD has Tier 3 support for DX12 while NVIDIA has Tier 2... it will be confirmed soon, though :)


If everyone follows that suggestion/advice, NVIDIA will have a monopoly and will be able to do whatever they want pricing-wise.

That's what would happen if 100% of consumers stopped buying AMD GPUs and bought Nvidia ones.

I don't think that's healthy; either AMD or another company, possibly Intel in the future, needs to present some competition.

But isn't that AMD's problem... as in, shouldn't they (as in AMD) offer a more compelling feature set so they win the business of consumers? I understand why monopolies are bad (m'kay), but shouldn't their competitors be doing more to catch up?



This problem is related to a developer who chose to implement something in their game engine that AMD has only limited access to, due to contractual agreements (NDA)... what does this have to do with AMD innovation, or with blaming NVIDIA for it? I won't even comment on the "GameWorks open source" part of the previous comment.

This is simply false.

Project CARS uses CPU PhysX, which is in fact open source and has had an SDK available for years now.

Nvidia is in no way the reason for the bad performance on AMD cards in Project CARS; it is clearly a driver issue, just as the devs have stated.

And the fact that Windows 10 gives a huge performance boost over W7/W8 on AMD only seems to prove it further.
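For anyone curious what "CPU PhysX" actually looks like from the SDK side, here's a minimal sketch against the public PhysX 3.x/4.x SDK (assuming the PhysXExtensions helpers are linked; the thread count and loop length are arbitrary). Note there is no CUDA context anywhere, so every step runs on CPU worker threads:

```cpp
#include <PxPhysicsAPI.h>

using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main()
{
    // SDK bring-up: foundation first, then the physics object.
    PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    // Scene with a CPU dispatcher: no GPU/CUDA context manager is ever created,
    // so the whole simulation runs on the CPU worker threads requested here.
    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4);   // 4 CPU worker threads (arbitrary)
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
    PxScene* scene = physics->createScene(sceneDesc);

    // Fixed-timestep loop, the same shape a game engine uses once per frame.
    for (int frame = 0; frame < 600; ++frame)
    {
        scene->simulate(1.0f / 60.0f);
        scene->fetchResults(true);   // blocks until the CPU workers finish the step
    }

    scene->release();
    physics->release();
    foundation->release();
    return 0;
}
```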

 



This is simply false.

Project CARS uses CPU PhysX, which is in fact open source and has had an SDK available for years now.

Nvidia is in no way the reason for the bad performance on AMD cards in Project CARS; it is clearly a driver issue, just as the devs have stated.

And the fact that Windows 10 gives a huge performance boost over W7/W8 on AMD only seems to prove it further.

 

 

You'll never convince them; they are hell-bent on it being everyone else's evil ways. No amount of logic, fact, or reason will sway them from their beliefs.

Grammar and spelling are not indicative of intelligence/knowledge. Not having the same opinion does not always mean a lack of understanding.


This is simply false.

Project CARS uses CPU PhysX, which is in fact open source and has had an SDK available for years now.

Nvidia is in no way the reason for the bad performance on AMD cards in Project CARS; it is clearly a driver issue, just as the devs have stated.

And the fact that Windows 10 gives a huge performance boost over W7/W8 on AMD only seems to prove it further.

 

I was actually reading that yesterday. From my understanding, the people who complain the most are fans of AMD and more than likely have an FX CPU, which everyone with common sense knows isn't good for gaming at all. Maybe for tasks which use all threads at once (mainly the FX 8*** four-module line) such as video editing etc., but with a PhysX API that runs only on a single thread, of course the AMD fans will be seeing problems. Heck, I'd expect an old Phenom II to be a lot better for CPU PhysX due to the significantly better IPC.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


This is simply false.

Project CARS uses CPU PhysX, which is in fact open source and has had an SDK available for years now.

Nvidia is in no way the reason for the bad performance on AMD cards in Project CARS; it is clearly a driver issue, just as the devs have stated.

And the fact that Windows 10 gives a huge performance boost over W7/W8 on AMD only seems to prove it further.

The most damning thing is that if you have an Nvidia card and FORCE PhysX to run on the CPU only, people still report very good performance, as was linked by another member a few pages ago.

So AMD's Defence Force had better find a new GameWorks library to bitch about.

Or they could ask why AMD still refuses to license CUDA, which would bring them on par with Nvidia and Intel AND let them access GameWorks at a deeper level, given how CUDA acceleration is the big thing for it.

Or we could bitch about the "performance decrease" that GameWorks and PhysX deliver. Even though that's been debunked.

But that's none of my business.


I was actually reading that yesterday. From my understanding, the people who complain the most are fans of AMD and more than likely have an FX CPU, which everyone with common sense knows isn't good for gaming at all. Maybe for tasks which use all threads at once (mainly the FX 8*** four-module line) such as video editing etc., but with a PhysX API that runs only on a single thread, of course the AMD fans will be seeing problems. Heck, I'd expect an old Phenom II to be a lot better for CPU PhysX due to the significantly better IPC.

PhysX acceleration is now multithreaded, BUT it's 100% on developers to either update old games (hah, never happening) or to include it in new ones. If anything, modern PhysX acceleration on the CPU should be far better.

OR AMD could just license CUDA already and avoid these issues entirely.
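To make that concrete, here's a tiny illustrative helper (my own hypothetical function, built around the PhysX 3.x/4.x PxDefaultCpuDispatcherCreate call) showing the single knob that decides whether CPU PhysX is single- or multithreaded; shipping a higher worker count is entirely the developer's choice at scene-creation time:

```cpp
#include <thread>
#include <PxPhysicsAPI.h>

// Hypothetical helper, not from any shipped engine. With 0 worker threads every simulation
// task runs on the thread that calls simulate(); any larger value spreads solver work across
// that many CPU workers. Engines that never pass a bigger number stay effectively single-threaded.
physx::PxDefaultCpuDispatcher* makeCpuDispatcher(bool multithreaded)
{
    unsigned hw      = std::thread::hardware_concurrency();      // may report 0 on exotic platforms
    unsigned workers = multithreaded ? (hw > 1 ? hw - 1 : 1) : 0;
    return physx::PxDefaultCpuDispatcherCreate(workers);
}
```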


PhysX acceleration is now multithreaded, BUT it's 100% on developers to either update old games (hah, never happening) or to include it in new ones. If anything, modern PhysX acceleration on the CPU should be far better.

OR AMD could just license CUDA already and avoid these issues entirely.

 

I think it would take Nvidia allowing PhysX to use OpenCL to get AMD to even do anything with regard to PhysX. It's a very good physics engine, but AMD is just too stubborn to take the initiative.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


I think it would take Nvidia allowing PhysX to use OpenCL to get AMD to even do anything with regard to PhysX. It's a very good physics engine, but AMD is just too stubborn to take the initiative.

Perhaps, but I can't see Nvidia doing that on the cheap. Maybe 8 years ago they could've struck a deal that was profitable for both sides. But now?

Nvidia has their balls. PhysX as a GameWorks component is one thing, but PhysX-based engines are everywhere. Nvidia played it smart and got involved where it made sense.


I'm amused that your FPS is higher running PhysX on your CPU, and that Vic and I were correct in what we said. Thanks for the data.

Also added an avg/min/max chart to my original post. I consider it within margin of error, also because you just can't do an exactly repeatable in-game run.
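For context, here's a minimal sketch of how an avg/min/max chart like that is typically derived from a frame-time log; the numbers below are placeholders, not the actual Project CARS capture:

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

int main()
{
    // Per-frame render times in milliseconds, e.g. exported from FRAPS/PresentMon (placeholder values).
    std::vector<double> frameTimesMs = { 16.4, 17.1, 15.9, 21.3, 16.8 };

    double avgMs   = std::accumulate(frameTimesMs.begin(), frameTimesMs.end(), 0.0) / frameTimesMs.size();
    double worstMs = *std::max_element(frameTimesMs.begin(), frameTimesMs.end()); // slowest frame -> min FPS
    double bestMs  = *std::min_element(frameTimesMs.begin(), frameTimesMs.end()); // fastest frame -> max FPS

    std::printf("avg %.1f fps, min %.1f fps, max %.1f fps\n",
                1000.0 / avgMs, 1000.0 / worstMs, 1000.0 / bestMs);
    return 0;
}
```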


Also added an avg/min/max chart to my original post. I consider it within margin of error, also because you just can't do an exactly repeatable in-game run.

Perfect! :)

.


Also added an avg/min/max chart to my original post. I consider it within margin of error, also because you just can't do an exactly repeatable in-game run.

Thanks!

Just saw your posts over on Reddit, very informative. I'll update my OP with the results so people know exactly what is going on; they need to know that PhysX isn't doing anything.

I think what these "enthusiasts" forgot is that PhysX can be run entirely on the CPU, which is fine. This isn't 2009 PhysX that would pillage your CPU and stunt everything. This is a better one.

The AMD problem? Driver overhead. The CPU is already running the custom-tailored physics engine that SMS wrote (simulation games are taxing as hell on a CPU), on top of that it has to manage whatever other CPU-bound additions there are, and THEN it has to deal with the drivers. We know where this is going...

If there are no optimizations, your CPU will bottleneck real good with that much being done. Once again, it's not Nvidia's problem that AMD can't get their shit together.

CPU PhysX has been used in engines forever. No big deal. GPU PhysX effects are inherently closed off since AMD has no CUDA support, and it's not like Nvidia will give them a hand and make it OpenCL-compatible, even though they damn well could.


PhysX acceleration is now multithreaded, BUT it's 100% on developers to either update old games (hah, never happening) or to include it in new ones. If anything, modern PhysX acceleration on the CPU should be far better.

OR AMD could just license CUDA already and avoid these issues entirely.

Threaded or not, physics is simply hard for any CPU to handle (this is why it's pushed to the GPU nowadays). You could spread the physics calculations across eight cores and it would still be ten times slower than running on a dozen shaders. Why would AMD want to license CUDA when they already have their own compute platform? CUDA is so 2007 and has lost a lot of traction over the years. Nowadays everyone targets open platforms such as OpenCL for mainstream software. I also doubt Nvidia would be quick to license out their own compute platform to a competitor.
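Just to illustrate what "spreading physics across cores" means in practice, here's a rough sketch (my own illustrative code, not from any real engine) that splits a naive particle integration step across CPU worker threads. Even fully threaded, a CPU runs a handful of these slices at once, whereas a GPU runs the same per-particle update across thousands of shader lanes:

```cpp
#include <cstddef>
#include <thread>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

// Advance all particles by one fixed timestep, splitting the array across numThreads workers.
void stepParticles(std::vector<Particle>& particles, float dt, unsigned numThreads)
{
    auto worker = [&](std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i) {
            particles[i].vy -= 9.81f * dt;            // gravity
            particles[i].x  += particles[i].vx * dt;  // explicit Euler integration
            particles[i].y  += particles[i].vy * dt;
            particles[i].z  += particles[i].vz * dt;
        }
    };

    std::vector<std::thread> threads;
    std::size_t chunk = particles.size() / numThreads;
    for (unsigned t = 0; t < numThreads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end   = (t + 1 == numThreads) ? particles.size() : begin + chunk;
        threads.emplace_back(worker, begin, end);
    }
    for (auto& th : threads) th.join();
}
```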


Thanks!

Just saw your posts over on Reddit, very informative. I'll update my OP with the results so people know exactly what is going on; they need to know that PhysX isn't doing anything.

I think what these "enthusiasts" forgot is that PhysX can be run entirely on the CPU, which is fine. This isn't 2009 PhysX that would pillage your CPU and stunt everything. This is a better one.

The AMD problem? Driver overhead. The CPU is already running the custom-tailored physics engine that SMS wrote (simulation games are taxing as hell on a CPU), on top of that it has to manage whatever other CPU-bound additions there are, and THEN it has to deal with the drivers. We know where this is going...

If there are no optimizations, your CPU will bottleneck real good with that much being done. Once again, it's not Nvidia's problem that AMD can't get their shit together.

CPU PhysX has been used in engines forever. No big deal. GPU PhysX effects are inherently closed off since AMD has no CUDA support, and it's not like Nvidia will give them a hand and make it OpenCL-compatible, even though they damn well could.

also added W10x64 benchmark in the mix since a few people hinted the PhysX toggle might not work

this time, the race was in light rain - but: same car, same track, same quality setting


also added W10x64 benchmark in the mix since a few people hinted the PhysX toggle might not work

this time, the race was in light rain - but: same car, same track, same quality setting

 

Added to the OP. Thanks for actually testing and giving real evidence instead of the typical conjecture. 


Added to the OP. Thanks for actually testing and giving real evidence instead of the typical conjecture. 

yeah ..  <_<

People are quick to blame Nvidia for every s**t AMD does or, in this case, doesn't do.

 

---

 

A comment on the game: even at those very low frame rates, it's quite impressive how the game doesn't tear, stutter, or anything like that. Thumbs up to Slightly Mad for this.



