Project CARS devs address AMD performance issues, AMD drivers to blame entirely, PhysX runs on CPU only, no GPU involvement whatsoever.

You don't understand how PhysX works. At all. So why do you mention something irrelevant?

Once again: parallel processors.


To be fair, Intel and Nvidia asked to get a look at Mantle, which was amusing since both were on the board working on DX12; AMD never gave them anything, and let's be honest, Mantle was stillborn.

 

I can't imagine the dollars AMD had to pay EA/DICE so that they would build Frostbite 3 to be Mantle-compatible.

I don't see why Nvidia would give a damn about Mantle if they've been working alongside Microsoft and several other companies to produce DX12 for years now.

 

Everyone knows PhysX runs MUCH better on GPUs.

This is incontestable.

 

From the Nvidia page.

 

Will running PhysX on a GPU slow down gaming performance?

Running physics on the GPU is typically significantly faster than running physics on the CPU, so overall game performance is improved and frame rates can be much faster. However, adding physics can also impact performance in much the same way that anti-aliasing impacts performance. Gamers always enable AA modes if they can because AA makes the game look better. Gamers will similarly enable physics on their GPUs so long as frame rates remain playable. As with AA, running physics on a GPU will generally be much faster than running physics on a CPU. PhysX running on a dedicated GPU allows offloading the PhysX processing from the GPU used for standard graphics rendering, resulting in an optimal usage of processing capabilities in a system.

 

http://www.geforce.com/hardware/technology/physx/faq

That is talking about PhysX particle effects. What we're talking about is strictly a physics engine like the ones used in almost every single game. Not reactionary physics, but object-based physics like a car's weight transfer.

 

Once again: parallel processors.

Yes, continue spewing random BS without any reason. Continue please.


Everyone knows PhysX runs MUCH better on GPUs.

This is incontestable.

 

From the Nvidia page.

 

Will running PhysX on a GPU slow down gaming performance?

Running physics on the GPU is typically significantly faster than running physics on the CPU, so overall game performance is improved and frame rates can be much faster. However, adding physics can also impact performance in much the same way that anti-aliasing impacts performance. Gamers always enable AA modes if they can because AA makes the game look better. Gamers will similarly enable physics on their GPUs so long as frame rates remain playable. As with AA, running physics on a GPU will generally be much faster than running physics on a CPU. PhysX running on a dedicated GPU allows offloading the PhysX processing from the GPU used for standard graphics rendering, resulting in an optimal usage of processing capabilities in a system.

 

http://www.geforce.com/hardware/technology/physx/faq

 

 

Sigh. 

 

PhysX is entirely reliant on CUDA acceleration. Intel and Nvidia are the current GPU makers who incorporate CUDA. AMD doesn't. Why the fuck are all of you getting mad that PhysX, a tech that's 100% proprietary to CUDA acceleration, doesn't work well when you don't have a CUDA-compliant GPU?

 

And before you go further, PhysX tanks CPUs, period. AMD, Intel: doesn't matter. PhysX as a physics engine works fine. PhysX as a CUDA-accelerated feature requires CUDA. CUDA CUDA CUDA. How many times do we need to say CUDA? Do you get the hint?

It is not Nvidia's problem that PhysX effects are inherently demanding on a CPU. It is not Nvidia's problem that AMD won't license CUDA to incorporate into their cards so that they too could enjoy GPU acceleration.


Well, you can understand AMD's reluctance to pay a fee to Nvidia for every GPU they sell.

Not really. Given the prevalence of CUDA-based algorithms in the HPC space, which has only just begun to wane, and given that IBM's systems exclusively use Nvidia coprocessors because of CUDA, a license would have mattered for AMD's FirePros. With their theoretical performance advantage, it would have been in AMD's best interest to get a CUDA license back in 2008.


Sigh. 

 

PhysX is entirely reliant on CUDA acceleration. Intel and Nvidia are the current GPU makers who incorporate CUDA. AMD doesn't. Why the fuck are all of you getting mad that PhysX, a tech that's 100% proprietary to CUDA acceleration, doesn't work well when you don't have a CUDA-compliant GPU?

 

And before you go further, PhysX tanks CPUs, period. AMD, Intel: doesn't matter. PhysX as a physics engine works fine. PhysX as a CUDA-accelerated feature requires CUDA. CUDA CUDA CUDA. How many times do we need to say CUDA? Do you get the hint?

It is not Nvidia's problem that PhysX effects are inherently demanding on a CPU. It is not Nvidia's problem that AMD won't license CUDA to incorporate into their cards so that they too could enjoy GPU acceleration.

You see, neither of us will ever get through to them explaining the difference between a physics engine and a reactionary particle engine.


I don't see why Nvidia would give a damn about Mantle if they've been working alongside Microsoft and several other companies to produce DX12 for years now.

 

That is talking about PhysX particle effects. What we're talking about is strictly a physics engine like the ones used in almost every single game. Not reactionary physics, but object-based physics like a car's weight transfer.

 

Yes, continue spewing random BS without any reason. Continue please.

 

Running physics on the GPU is typically significantly faster than running physics on the CPU,

 

(In general.)


Wrong. PhysX. Holy fuck, how many times do I have to say it?

 

 

 PhysX is optimized for hardware acceleration by massively parallel processors.

But no, of course you know more about PhysX than Nvidia.


How does it feel, hitting your head against a wall? Fun, eh? 

How thick are some people? PhysX on the physics-engine side is 100% compatible with any GPU, just like Havok. PhysX for your engine is just an alternative to Havok.

PhysX for effects is GPU-reliant, and it bloody well isn't Nvidia's problem if AMD doesn't want to license the tech that powers PhysX.

 

 

CUDA exists. Intel happily paid. AMD is refusing to. NOT NVIDIA'S PROBLEM. AMD wants to be cheap; that is their prerogative.

 

 

AMD are being stubborn idiots about not licensing CUDA when the door is wide open. 

 

 

Exactly. If AMD doesn't want to license CUDA like Intel did, AMD can blame themselves for their own poor performance.

 

 

 

Seriously. You AMD-defending fanboys are ignoring blatant facts. Nvidia isn't hampering fuck all. Nvidia is willing to license out the tech to allow CUDA acceleration on Radeon GPUs. Intel paid up, and they're going to employ it. AMD doesn't want to pay pennies per GPU to have equal footing; that is their problem.

Don't act like AMD wouldn't do the same to Nvidia if they could. Everyone is just getting pissy that Nvidia got AMD over a barrel first. Welcome to corporations and the marketplace: nobody is your friend, and no one gives a shit about the customers so long as you give them your money. That goes for all sides. So stop acting like AMD is some bloody saint in any of this.

Whoa whoa whoa whoa wait a minute. NVIDIA gave AMD a clear and cheap option to add more features to its graphics cards and it refused? I mean, I believe you, but wow, that seems prettttty dumb.

 

I mean, how many times have you seen people recommend NVIDIA graphics cards over AMD's because of PhysX, ShadowPlay, or CUDA acceleration being more widely supported than OpenCL or some such? I, for one, saw that quite a bit before Maxwell was released.

 

Maybe I'm just stupid, but wouldn't that have been a good business decision for AMD? It would've at least helped claw back some of the abysmal market share it has currently...


Wrong. PhysX. Holy fuck, how many times do I have to say it?

 

I am convinced that people here are just being goons and trolls at this point. 

 

CUDA-accelerated PhysX effects ≠ PhysX physics engine. Two very, very different things. PhysX being used as your physics engine is no different than using Havok, though I don't know enough about the pros/cons of either to tell you which one is better or worse (I'd argue it's all the same to me; physics is physics).

Sigh. Some people here. 


But no, of course you know more about PhysX than Nvidia.

No, you're not getting the disparity between the two. Tell me why a PhysX physics engine doesn't run like shit on a CPU when it's used solely for physics.

 

They say "physics" in that sentence, not PhysX. They are both physics-based, sigh.

This is why I'm calling PhysX a reactionary particle system. There is a difference between the two. One is used for general physics, like in every game that uses a physics engine. The other is used for particle effects via CUDA acceleration. There is a difference in design because of a difference in USE. A rough sketch of that effects path is below.
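To put code to that, here is roughly what the effects side looks like against the PhysX 3.x SDK. This is a minimal sketch, not a definitive implementation: addDebrisParticles is a made-up helper name, and I'm assuming the 3.x particle API (createParticleSystem, PxParticleBaseFlag::eGPU) behaves as the SDK documentation describes, with a silent CPU fallback when no CUDA context is available.

#include <PxPhysicsAPI.h>
using namespace physx;

// The "effects" side of PhysX: a particle system for sparks, smoke, debris.
// `physics` and `scene` are assumed to have been created elsewhere.
void addDebrisParticles(PxPhysics& physics, PxScene& scene)
{
    PxParticleSystem* particles = physics.createParticleSystem(16384);

    // Request CUDA acceleration. The flag is a hint: it only takes effect
    // if the scene was created with a GPU dispatcher backed by a CUDA
    // context. On a non-CUDA (Radeon) system, the SDK silently simulates
    // these particles on the CPU instead -- the slow path this thread is
    // arguing about.
    particles->setParticleBaseFlag(PxParticleBaseFlag::eGPU, true);

    scene.addActor(*particles);
}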


Mantle cannot harm the performance of an NVIDIA GPU. It can only help an AMD GPU. And the openness of AMD's DirectCompute effects such as TressFX means they can be debugged and analyzed to look for anything that was done to harm Nvidia.

GameWorks can also be analyzed and debugged for the same thing. You just have to disassemble and decompile the binary.


I am convinced that people here are just being goons and trolls at this point. 

 

CUDA-accelerated PhysX effects ≠ PhysX physics engine. Two very, very different things. PhysX being used as your physics engine is no different than using Havok, though I don't know enough about the pros/cons of either to tell you which one is better or worse (I'd argue it's all the same to me; physics is physics).

Sigh. Some people here. 

I'm not a complete expert by any means. But I've messed around with both, and Havok still seems like a dated 90's annoyance in the way it runs. PhysX physics engines run a lot smoother in my experience and allow more fine-tuning of multiple objects.


Intel isn't directly competing with Nvidia, while AMD is.

Intel is directly in competition with Nvidia. Did you forget Intel's Xeon Phi and Nvidia's Teslas are practically the currency of HPC technology? If they weren't in competition, Intel would have bought out Nvidia a long time ago.


I'm not a complete expert by any means. But I've messed around with both, and Havok still seems like a dated 90's annoyance in the way it runs. PhysX physics engines run a lot smoother in my experience and allow more fine-tuning of multiple objects.

 

Havok gave us the hilarious Halo physics though, and god knows that game had some of the most broken, weird, and downright lol-inducing physics of all time.

But seriously, since English and reading comprehension are difficult:

PhysX Effects = CUDA acceleration that will work on a CUDA-compliant card with appropriate drivers.

PhysX Physics = a physics engine that is 100% CPU-dependent and GPU-agnostic.

Class dismissed? 
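For the other half of the lesson, here is roughly what the engine side looks like, again a minimal sketch assuming the public PhysX 3.x setup API (PxCreateFoundation, PxCreatePhysics, PxDefaultCpuDispatcherCreate, PxCreateDynamic). Notice that nothing in it touches CUDA or a GPU: it's just CPU worker threads stepping rigid bodies, the kind of object-based simulation (weight transfer, collisions) a racing game runs every frame.

#include <PxPhysicsAPI.h>
using namespace physx;

static PxDefaultAllocator gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main()
{
    PxFoundation* foundation =
        PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics* physics =
        PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4); // CPU worker threads only
    sceneDesc.filterShader = PxDefaultSimulationFilterShader;
    PxScene* scene = physics->createScene(sceneDesc);

    // One dynamic box dropped from 10 m -- a stand-in for any gameplay object.
    PxMaterial* material = physics->createMaterial(0.5f, 0.5f, 0.1f);
    PxRigidDynamic* box = PxCreateDynamic(*physics, PxTransform(PxVec3(0, 10, 0)),
                                          PxBoxGeometry(1, 1, 1), *material, 10.0f);
    scene->addActor(*box);

    for (int i = 0; i < 600; ++i) {       // ~10 seconds at 60 Hz
        scene->simulate(1.0f / 60.0f);    // step the world on the CPU workers
        scene->fetchResults(true);        // block until the step completes
    }
    return 0;
}

This runs identically on a Radeon or a GeForce box, because the GPU is never involved.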


No, you're not getting the disparity between the two. Tell me why a PhysX physics engine doesn't run like shit on a CPU when it's used solely for physics.

 

This is why I'm calling PhysX a reactionary particle system. There is a difference between the two. One is used for general physics, like in every game that uses a physics engine. The other is used for particle effects via CUDA acceleration. There is a difference in design because of a difference in USE.

I'm talking about the particles -_-


Intel is directly in competition with Nvidia. Did you forget Intel's Xeon Phi and Nvidia's Teslas are practically the currency of HPC technology? If they weren't in competition, Intel would have bought out Nvidia a long time ago.

Patrick, you have to remember: anything that's not gaming-related is irrelevant and doesn't affect our little bubbles. /s

 

Havok gave us the hilarious Halo physics though, and god knows that game had some of the most broken, weird, and downright lol-inducing physics of all time.

But seriously, since English and reading comprehension are difficult:

PhysX Effects = CUDA acceleration that will work on a CUDA-compliant card with appropriate drivers.

PhysX Physics = a physics engine that is 100% CPU-dependent and GPU-agnostic.

Class dismissed? 

Class will commence throwing books at the instructor.


I'm talking about the particles -_-

Nobody is talking about particles, because Project CARS doesn't use them for its physics engine. Why the fuck are you going on about something irrelevant?


Intel is directly in competition with Nvidia. Did you forget Intel's Xeon Phi and Nvidia's Teslas are practically the currency of HPC technology? If they weren't in competition, Intel would have bought out Nvidia a long time ago.

Oh right, forgot about those, but those are for science and research, which I don't think uses CUDA.


And users get a gimped version, so we need a standard that works well on AMD and Nvidia GPUs.

We need AMD to buck up and just buy a CUDA license. If they want supercomputer design wins, it's a given.


Oh right, forgot about those, but those are for science and research, which I don't think uses CUDA.

Most scientific computing algorithms are written in CUDA. Now that Intel is gaining ground, it is helping computer scientists rewrite those algorithms in OpenMP, OpenACC, and OpenCL, but the old algorithms can still be run in CUDA on Intel iGPUs and Xeon Phi.
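For anyone wondering what "written in CUDA" actually means, here is a minimal sketch of the classic SAXPY operation (y = a*x + y) in CUDA C++. The __global__ kernel and the <<<blocks, threads>>> launch syntax only build with Nvidia's nvcc and only execute on CUDA hardware, which is exactly why these codebases stay tied to Nvidia until someone ports them to OpenCL/OpenMP/OpenACC.

#include <cuda_runtime.h>
#include <cstdio>

// Each GPU thread handles one array element -- "massively parallel".
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory (CUDA 6+)
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();                   // wait for the GPU to finish

    printf("y[0] = %f\n", y[0]);               // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}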


This topic is now closed to further replies.