
Project CARS devs address AMD performance issues: AMD drivers entirely to blame; PhysX runs on the CPU only, with no GPU involvement whatsoever.

Again, this isn't a problem of proprietary design; it's a problem of developers not implementing a universal solution that works on other hardware.

 

I play a stupid amount of Warframe, and I get PhysX-like effects because the developers have implemented a solution for us non-PhysX users. Now all of us get to enjoy the pretty particles. Is this an Nvidia problem? No. It's a developer problem when they don't create an alternative solution.

 

As I told @zappian, having physics run on the CPU is not supposed to be a problem for AMD's cards. There is something else at play here that might reduce the performance.

 

PhysX just runs much better on Nvidia cards than on the CPU.

 

I tried to run Mafia II's PhysX on my CPU, and that's a very old game. It dropped to something like 20 fps because the PhysX work was shuffled off to the CPU.

That's my experience with it.


:lol:

 

You simply CAN'T use Prepar3D as an example. Ever. That game was broken when they ripped the code from FSX; it was such a pile from the beginning. All they've done is fix the Fatal Error problem and add terrain under the water.

 

That was something LM could have fixed instead of Nvidia, but the game was such a broken mess in the first place that it wasn't worth it.

 

 

They bought the license from Microsoft to DEVELOP it further. Unlike what's-his-face, the ones doing the Steam Edition, who can't do much at all.

 

P3D v2 involves new code; they are actually going through it, cleaning it up, rewriting it, etc. That's why you have better things in P3D than in FSX. Maybe use it sometime instead of rehashing it. Is it based on the same game? Yes. Is it just a few fixes and some terrain? Far from it. Heck, rumor is that LM is actively looking into 64-bit support, which would require a rewrite; that's one of the reasons they haven't done it just yet.

 

Bias is as bias was.

 

Edit: I take that back, LM IS WORKING ON 64-bit. I double-checked where I read it, and it was a moderator on LM's forums.


Yeah, I'm feeling like Spartaman64 is a little uninformed/naïve and is basically just guessing at this point. He'd do well to delve into the subject matter a little more rigorously before proceeding, because this isn't helping the debate much; it's only derailing it.

 

I have to say that I'm glad that even though we don't agree, you're not getting upset about it. So I give credit where credit is due, spartaman64. You're right, though, that Intel would have a hard time creating drivers for games, as that takes a lot of experience. But I'm sure someone is willing to sell that information/experience. ;)


Sorry, basically Intel currently doesn't have the experience in making drivers for games.

I see what you did there.


PhysX just runs much better on Nvidia cards than on the CPU.

I tried to run Mafia II's PhysX on my CPU, and that's a very old game. It dropped to something like 20 fps because the PhysX work was shuffled off to the CPU.

That's my experience with it.

Wrong PhysX.

 

The PhysX that was developed to run as a physics engine has no problems running on CPUs. It's just like Havok; it simply uses a different process to produce the same result.
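
To illustrate the distinction, here's a minimal sketch of PhysX in "plain physics engine" mode. This assumes the PhysX 3.x-era SDK (the names below come from its public API, but treat the exact setup as illustrative, not authoritative): the scene is given a CPU dispatcher and no CUDA context is ever created, so every simulation step runs on the CPU, like any other CPU physics library.

```cpp
// Minimal CPU-only PhysX scene (assumes the PhysX 3.x-era SDK).
#include <PxPhysicsAPI.h>
using namespace physx;

static PxDefaultAllocator gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main() {
    PxFoundation* foundation =
        PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics* physics =
        PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity = PxVec3(0.0f, -9.81f, 0.0f);
    // Two CPU worker threads; since no GPU/CUDA context is created,
    // every rigid-body step is computed on the CPU.
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2);
    sceneDesc.filterShader = PxDefaultSimulationFilterShader;
    PxScene* scene = physics->createScene(sceneDesc);

    scene->simulate(1.0f / 60.0f); // advance the simulation one frame
    scene->fetchResults(true);     // block until the step completes

    scene->release();
    physics->release();
    foundation->release();
    return 0;
}
```

The GPU-accelerated effects (particles, cloth, and so on) are an optional layer on top of this; a game that only uses the rigid-body engine above runs entirely on the CPU regardless of which graphics card is installed.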

 

They bought the license from Microsoft to DEVELOP it further. Unlike what's-his-face, the ones doing the Steam Edition, who can't do much at all.

P3D v2 involves new code; they are actually going through it, cleaning it up, rewriting it, etc. That's why you have better things in P3D than in FSX. Maybe use it sometime instead of rehashing it. Is it based on the same game? Yes. Is it just a few fixes and some terrain? Far from it. Heck, rumor is that LM is actively looking into 64-bit support, which would require a rewrite; that's one of the reasons they haven't done it just yet.

 

Bias is as bias was.

You don't understand how to read English. I know that LM bought a license from Microsoft, because I've read pretty much every piece of information on the game's development process and have even dug through the code myself. What they did was rip the code from FSX and recompile it to make more sense. Underneath, it is essentially the same game, because everything from one works on the other.



Yup, you have no clue what we're talking about.

PhysX run as a physics engine (separate from particle PhysX) doesn't use AMD GPUs; it runs on the CPU. It has no impact on GPU performance unless the CPU can't calculate the physics fast enough, just as with every single other game that uses any physics engine. There is something else going on here that would make the game run worse on AMD cards THAT IS NOT PHYSICS.

It doesn't run as well on the CPU. Before, it was fine, because you could buy a cheap Nvidia GPU and use it just for PhysX, but Nvidia disabled that. And TressFX isn't the solution, because it relies on DirectCompute, which Nvidia cards are not that great at.

Yeah, I'm feeling like Spartaman64 is a little uninformed/naïve and is basically just guessing at this point. He'd do well to delve into the subject matter a little more rigorously before proceeding, because this isn't helping the debate much; it's only derailing it.

I have to say that I'm glad that even though we don't agree, you're not getting upset about it. So I give credit where credit is due, spartaman64. You're right, though, that Intel would have a hard time creating drivers for games, as that takes a lot of experience. But I'm sure someone is willing to sell that information/experience. ;)

OK, then inform me: what am I missing?

And it doesn't make sense to me to get upset just because we disagree on something. We probably disagree on a lot of things, and you're probably right on some while I'm right on others; we probably agree on a lot of things too.


It doesn't run as well on the CPU. Before, it was fine, because you could buy a cheap Nvidia GPU and use it just for PhysX, but Nvidia disabled that. And TressFX isn't the solution, because it relies on DirectCompute, which Nvidia cards are not that great at.

As I've said before, and I'll say it again: wrong PhysX.



OK, then inform me: what am I missing?

 

Context, mostly. Also coherence and a well-constructed argument. You're just throwing spaghetti at the wall and seeing what sticks.

 

I'm still not sure what your argument really is at this point.


Context, mostly. Also coherence and a well-constructed argument. You're just throwing spaghetti at the wall and seeing what sticks.

I'm still not sure what your argument really is at this point.

OK, which part is lacking coherence? I'm bad at debate; I notice myself throwing things out without connecting them, because it makes sense in my mind while other people are just confused.

As I've said before, and I'll say it again: wrong PhysX.

If there were no drawbacks to running it on the CPU, then Nvidia would run it on the CPU with their own cards too.


OK, which part is lacking coherence? I'm bad at debate; I notice myself throwing things out without connecting them, because it makes sense in my mind while other people are just confused.

 

Well, what stance do you take? I'm not asking you to choose a side, but at least make some sort of statement. Where do you think the fault lies, what would be your solution... that sort of stuff.

 

The reason you're not able to put it into words properly is probably your age.


If there were no drawbacks to running it on the CPU, then Nvidia would run it on the CPU with their own cards too.

Circular logic doesn't work here.

 

Nvidia has developed it for their cards, and will use it on their cards. However, there shouldn't be drawbacks to running it on a CPU compared to a GeForce card.



Well, what stance do you take? I'm not asking you to choose a side, but at least make some sort of statement. Where do you think the fault lies, what would be your solution... that sort of stuff.

The reason you're not able to put it into words properly is probably your age.

And English isn't my first language. But my current argument is that open standards are better than proprietary standards.

Circular logic doesn't work here.

Nvidia has developed it for their cards, and will use it on their cards. However, there shouldn't be drawbacks to running it on a CPU compared to a GeForce card.

If you have a top-of-the-line CPU and you're not using it for something like streaming, PhysX will probably work fine, but you can see where I'm going with this.


And English isn't my first language. But my current argument is that open standards are better than proprietary standards.

 

By what definition? Are they inherently better because they're brand-agnostic? Or because they actually provide a better experience? If the non-proprietary option is poorly made, I don't believe it's inherently better by default. Take FreeSync, for example. By your current logic you'd argue that FreeSync is better than G-Sync, but when you actually "pop the hood" and look at both technologies more closely, you'd see that G-Sync is much better than FreeSync: its range is wider and it works better at low framerates.

 

For FreeSync to do what G-Sync does, scalers are just not sufficient, which is why Nvidia knew a programmable board with sufficient computational power and memory was needed. Plus, scalers at the time weren't even supporting FreeSync-level tech, but that's beside the point. So which is the better technology? In my opinion, G-Sync. It would be nice if it could be turned into an agnostic technology, but I'm willing to bet that scaler manufacturers aren't willing to pay royalties, so we get a middle-ground option like FreeSync.


By what definition? Are they inherently better because they're brand-agnostic? Or because they actually provide a better experience? If the non-proprietary option is poorly made, I don't believe it's inherently better by default. Take FreeSync, for example. By your current logic you'd argue that FreeSync is better than G-Sync, but when you actually "pop the hood" and look at both technologies more closely, you'd see that G-Sync is much better than FreeSync: its range is wider and it works better at low framerates.

The range depends on the manufacturer, but yes, at the low end of the frame-rate range G-Sync is currently better because of multiple frame buffering. Multi-frame buffering is built into the Adaptive-Sync spec, though, and AMD would be crazy not to use it in the future. And if we don't have an open physics engine, games that use physics engines will be few and far between; but if AMD and Nvidia worked together on a single physics engine, most games would probably implement it.
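
As a side note on the "multiple frame buffering" point: the idea (often called low-framerate compensation) is that when the game's frame rate drops below the panel's minimum variable refresh rate, the driver presents each frame two or more times so the effective refresh rate stays inside the panel's range. A toy sketch, with a made-up 40-144 Hz panel range:

```cpp
// Toy illustration of low-framerate compensation (LFC).
// The 40-144 Hz panel range below is an assumed example, not a spec value.
#include <cstdio>

int main() {
    const double panelMinHz = 40.0;
    const double panelMaxHz = 144.0;

    for (double gameFps : {120.0, 60.0, 35.0, 20.0}) {
        // Repeat each frame until the presentation rate is back inside
        // the panel's supported range (while a higher multiple still fits).
        int repeats = 1;
        while (gameFps * repeats < panelMinHz &&
               gameFps * (repeats + 1) <= panelMaxHz) {
            ++repeats;
        }
        std::printf("game: %5.1f fps -> panel: %5.1f Hz (each frame shown %dx)\n",
                    gameFps, gameFps * repeats, repeats);
    }
    return 0;
}
```

So a 20 fps game on this hypothetical panel would be presented at 40 Hz with each frame shown twice, keeping the display inside its variable refresh window.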


If you have a top-of-the-line CPU and you're not using it for something like streaming, PhysX will probably work fine, but you can see where I'm going with this.

Actually, I can't, because you still haven't explained any reason why I'm wrong.



Actually, I can't, because you still haven't explained any reason why I'm wrong.

If you're streaming, or you have a bad CPU, PhysX wouldn't work for you.


The range depends on the manufacturer, but yes, at the low end of the frame-rate range G-Sync is currently better because of multiple frame buffering. Multi-frame buffering is built into the Adaptive-Sync spec, though, and AMD would be crazy not to use it in the future. And if we don't have an open physics engine, games that use physics engines will be few and far between; but if AMD and Nvidia worked together on a single physics engine, most games would probably implement it.

 

How would AMD and Nvidia working together promote the use of physics engines in video games? The engines are already there, in the GameWorks library. They cost time and money to make; that has to come from somewhere, so it's inevitable that royalties have to be paid. Them working together wouldn't change that fact; it would only complicate things.

 

And yes, that's my point: the range is totally random. There is no quality control (no good quality control, at least), and customers basically just have to hope that reviewers are capable of determining those ranges and disclosing them properly, because AMD sure as hell isn't disclosing this, nor are the display manufacturers. So as a result, the "FreeSync" service/brand is a worse one. Meaning: the open-source option isn't inherently better if no effort or quality is being promoted.

 

The same effort went into Mantle, which is why it's dead. And making these products just to not charge money for them tells me two things:

A. They aren't sure of their product, and thus don't dare ask royalties for it.

B. They're incapable of running a business.

 

The fact that AMD makes everything open source only hurts them in the process, by marginalizing their brand name and costing them money.


How would AMD and Nvidia working together promote the use of physics engines in video games? The engines are already there, in the GameWorks library. They cost time and money to make; that has to come from somewhere, so it's inevitable that royalties have to be paid. Them working together wouldn't change that fact; it would only complicate things.

And yes, that's my point: the range is totally random. There is no quality control, and customers basically just have to hope that reviewers are capable of determining those ranges and disclosing them properly, because AMD sure as hell isn't disclosing this, nor are the display manufacturers. So as a result, the "FreeSync" service/brand is a worse one. Meaning: the open-source option isn't inherently better if no effort or quality is being promoted.

Well, we saw how the PhysX engine worked out for Project CARS on AMD GPUs; if PhysX were supported by both Nvidia and AMD, it would have run well on AMD GPUs too. And of course anything will turn out poorly if no effort is put in. I feel like AMD rushed FreeSync and should have waited to give monitor manufacturers more time, but as FreeSync matures, I have high hopes for it.


If you're streaming, or you have a bad CPU, PhysX wouldn't work for you.

I don't think you understand how CPUs work, because that will happen with every single program.



I don't think you understand how CPUs work, because that will happen with every single program.

If PhysX were being done on the GPU, it wouldn't take up CPU performance.


Well, we saw how the PhysX engine worked out for Project CARS on AMD GPUs; if PhysX were supported by both Nvidia and AMD, it would have run well on AMD GPUs too. And of course anything will turn out poorly if no effort is put in. I feel like AMD rushed FreeSync and should have waited to give monitor manufacturers more time, but as FreeSync matures, I have high hopes for it.

 

I'm not trying to be a dick, but your posts are really hard to read due to the lack of punctuation. Please don't just rattle away at the keyboard; revise your comment before hitting "post". This isn't a race. I can wait 30 more seconds if it means a readable post.

 

And you still provide no reason for this alleged increase in the usage of physics engines. It would only improve usability or user experience, not net an increase in usage. People will not blame a physics engine for their game not running properly; they'll blame AMD or the developer.


If PhysX were being done on the GPU, it wouldn't take up CPU performance.

That doesn't matter. If your CPU can't handle a physics engine, then why are you playing the game in the first place? It's no different from running the dozens and dozens of other physics engines out there.



I'm not trying to be a dick, but your posts are really hard to read due to the lack of punctuation. Please don't just rattle away at the keyboard; revise your comment before hitting "post". This isn't a race. I can wait 30 more seconds if it means a readable post.

And you still provide no reason for this alleged increase in the usage of physics engines. It would only improve usability or user experience, not net an increase in usage. People will not blame a physics engine for their game not running properly; they'll blame AMD or the developer.

One of the main reasons PhysX isn't being widely adopted is that it's locked to Nvidia. Back when PhysX was with a third-party company, it seemed like it had a really bright future, with more and more adoption from devs. But when Nvidia purchased the company, PhysX adoption seems to have hit a brick wall.


Nice. I use Nvidia. I can't wait to get this game. Once I get more money.



That doesn't matter. If your CPU can't handle a physics engine, then why are you playing the game in the first place? It's no different from running the dozens and dozens of other physics engines out there.

Physics engines made by game developers run on the GPU, and it's like saying "why don't you use your CPU for graphics?" You can, but it's going to be slow. The GPU is better at these types of applications.

