AMD says Nvidia’s GameWorks “completely sabotaged” Witcher 3 performance

Am I the only one just waiting for TotalBiscuit to do a video on this topic and introduce a bit of sanity into this argument?

 

He won't. He clarified in a Q&A why he is staying away from all things Witcher 3 related: he feels that he can't be unbiased because GOG.com (which is owned by CDPR) sponsors his Axiom StarCraft esports team. He is being cautious to a fault, in fact.

-------

Current Rig

-------


But it also doesn't perform well on Nvidia cards so...I don't see the point.

i5 4670k @ 4.2GHz (Coolermaster Hyper 212 Evo); ASrock Z87 EXTREME4; 8GB Kingston HyperX Beast DDR3 RAM @ 2133MHz; Asus DirectCU GTX 560; Super Flower Golden King 550 Platinum PSU; 1TB Seagate Barracuda; Corsair 200r case. 


But it also doesn't perform well on Nvidia cards so...I don't see the point.

 

It is a non-issue. Nvidia also isn't the one who sets the default tessellation rates (which are obscenely high; go into the .ini file and crank them down for a setting that doesn't kill either brand). People just want to make a fuss about things. Maxwell and GCN have comparable tessellation power; Maxwell just has better drivers. Kepler inherently isn't as good at tessellation. So you crank those settings down. 

 

People (like Huddy) just want to hurl shit at Nvidia instead of at their own engineering teams. 


That was very whiny. Stop complaining and go make your own software.

 

  1. GLaDOS: i5 6600 EVGA GTX 1070 FE EVGA Z170 Stinger Cooler Master GeminS524 V2 With LTT Noctua NFF12 Corsair Vengeance LPX 2x8 GB 3200 MHz Corsair SF450 850 EVO 500 Gb CableMod Widebeam White LED 60cm 2x Asus VN248H-P, Dell 12" G502 Proteus Core Logitech G610 Orion Cherry Brown Logitech Z506 Sennheiser HD 518 MSX
  2. Lenovo Z40 i5-4200U GT 820M 6 GB RAM 840 EVO 120 GB
  3. Moto X4 G.Skill 32 GB Micro SD Spigen Case Project Fi

 


Word on the street is that the 390/X that people were looking forward to is nothing more than a 290/X rebrand. The HBM card is going after the Titan, which is nice, but maybe 5% of the market is even remotely interested in those cards, much less buying them. 

 

So I will probably get really, really cheap 290s (maybe even 8GB versions) and call it a day. AMD gets no money from me, I get cheap cards. Win win? 

 

Hopefully the 980 Ti comes out close to, if not at, the same time and drops the price of the 970/980. A 980 Ti would be nice, but we shall see.

"if nothing is impossible, try slamming a revolving door....." - unknown

my new rig bob https://uk.pcpartpicker.com/b/sGRG3C#cx710255

Kumaresh - "Judging whether something is alive by it's capability to live is one of the most idiotic arguments I've ever seen." - jan 2017


It is a non-issue. Nvidia also isn't the one who sets the default tessellation rates (which are obscenely high; go into the .ini file and crank them down for a setting that doesn't kill either brand). People just want to make a fuss about things. Maxwell and GCN have comparable tessellation power; Maxwell just has better drivers. Kepler inherently isn't as good at tessellation. So you crank those settings down. 

 

People (like Huddy) just want to hurl shit at Nvidia instead of at their own engineering teams. 

 

You have no proof to back that up. Considering CDPR states they cannot optimize HairWorks, it's safe to assume they only have the black-boxed version of GameWorks, which means they only get DLLs whose functions they can call. CDPR would have no interest in running the tessellation level at 64x; Nvidia, trying to sell 900-series graphics cards, does.
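
To be clear about what "black boxed" means in practice: you get a compiled library plus headers, and all you can do is call the exported entry points. Roughly like this on Windows (a minimal sketch; the DLL and function names below are hypothetical stand-ins, not the real HairWorks API):

```c
#include <windows.h>
#include <stdio.h>

/* Hypothetical exported function; the real HairWorks interface is not public. */
typedef int (*SetSimulationQualityFn)(int level);

int main(void)
{
    /* The developer receives only a compiled binary... */
    HMODULE lib = LoadLibraryA("HairWorks.dll"); /* hypothetical file name */
    if (!lib) { fprintf(stderr, "library not found\n"); return 1; }

    /* ...and can call whatever it exports. Anything not exposed as an entry
       point, such as an internal tessellation factor, is out of reach. */
    SetSimulationQualityFn set_quality =
        (SetSimulationQualityFn)GetProcAddress(lib, "SetSimulationQuality");
    if (set_quality)
        set_quality(32);

    FreeLibrary(lib);
    return 0;
}
```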

 

The .ini settings file does not have a tessellation multiplier setting, only an anti-aliasing setting for HairWorks. Stop spreading misinformation.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


You have no proof to back that up. Considering CDPR states they cannot optimize HairWorks, it's safe to assume they only have the black-boxed version of GameWorks, which means they only get DLLs whose functions they can call. CDPR would have no interest in running the tessellation level at 64x; Nvidia, trying to sell 900-series graphics cards, does.

 

The .ini settings file does not have a tessellation multiplier setting, only an anti-aliasing setting for HairWorks. Stop spreading misinformation.

 

It is the height of irony for you to:

 

1. Tell me I have no proof

2. Tell me to stop spreading misinformation 


Holy balls this thread blew up


There is a warning on the product page and in the manual that it only supports optimization for i386 and Intel64 architectures. If you change your CPUID string, the front end will query the chip for available instructions. From there it determines the architecture family and then builds the code based on cycle-count and cache-latency rules, which were painstakingly developed with an optimization program (yes, a program was used to build the core component of another program) that likely took the form of an integer linear program or a genetic algorithm.

Did Cinebench decide to switch to GCC or Clang?

That warning is itself an effect of the trial.

 

However, as of a few years back (not sure if this remains true), the ICC compiler was also the best option for AMD (great SSE2 support at the time).

 

Not sure if they decided to switch to Clang. Could still be ICC.


We have no clue about the consequences of such a CUDA license. The only other company that has one (Intel) seems not to use it for anything.

It's not quite as optimized as Maxwell, but we don't know how well the new GCN is for that.

But is it necessary? If you just waste excessive amounts of tessellation on HairWorks at 64x, or on the world's most detailed concrete slab in Crysis 2, then what is the point of spending resources on it? There does not seem to be any difference in graphical fidelity between 64x and 32x in HairWorks, and barely any at 16x either. So what is the point of extreme tessellation when it is that taxing and the law of diminishing returns hits so hard?
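
To put rough numbers on those diminishing returns, here is a quick back-of-the-envelope, assuming the usual rule of thumb that output triangles per patch grow roughly with the square of the tessellation factor:

```c
#include <stdio.h>

/* Rough cost model only: triangles per patch ~ factor^2. Real hardware
   tessellators also depend on partitioning mode and culling. */
int main(void)
{
    const int factors[] = { 8, 16, 32, 64 };
    for (int i = 0; i < 4; i++) {
        int f = factors[i];
        printf("factor %2dx -> ~%4d triangles per patch (%2dx the work of 8x)\n",
               f, f * f, (f * f) / (8 * 8));
    }
    return 0;
}
```

By that estimate, 64x does roughly 16 times the work of 16x for a difference most people can barely see.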

All of Intel's Xeon Phi and Gen 7+ iGPUs are CUDA 6 compliant, for the record. How else do you think the Xeon Phi sold so well? Having Nvidia's algorithms run on your hardware (with superior performance), and then being able to help supercomputer engineers rewrite the same algorithms for OpenMP/OpenACC or even OpenCL (meaning the machines can run on the old code base while rebuilding it for further optimization on open platforms; kudos to Intel), is great.
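
To illustrate the kind of rewrite being described, a trivially parallel kernel maps almost one-to-one onto the open directive standards. A generic sketch (not code from any real Xeon Phi deployment):

```c
#include <stddef.h>

/* The classic saxpy loop, y[i] = a*x[i] + y[i], which a CUDA kernel would
   compute one element per thread. The directive version below lets the old
   CUDA code base keep running while this one is tuned on open platforms. */
void saxpy(size_t n, float a, const float *x, float *y)
{
    /* OpenACC equivalent: #pragma acc parallel loop */
    #pragma omp parallel for  /* OpenMP: multicore CPUs, Xeon Phi */
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```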

The difference between 16x and 32x is very noticeable for me. 64x, though, I will admit, presents nothing other than more realistic movement of hair (spline physics). Is this a next-gen game or not?

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


The Intel suit was BS too. ICC optimizes per architecture based on the clock counts per instruction. It can't be expected to optimize to the same level for AMD.

lol. Now it does, but that wasn't the case when the suit was filed.

Here's x86-64 assembly expert Agner Fog on the matter in 2009:

Sounds nice, but the truth is that the CPU dispatcher didn't support SSE or SSE2 or any higher SSE in AMD processors and still doesn't today (Intel compiler version 11.1.054)

The Intel CPU dispatcher does not only check the vendor ID string and the instruction sets supported. It also checks for specific processor models. In fact, it will fail to recognize future Intel processors with a family number different from 6.

Never trust any benchmark unless it is open source and compiled with a neutral compiler, such as Gnu or Microsoft.

It is possible to change the CPUID of AMD processors by using the AMD virtualization instructions.

Unsurprisingly, changing the CPUID of AMD and VIA processors heavily improved their performance in benchmarks.
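
For anyone curious, the vendor string and feature bits Agner describes come straight from the CPUID instruction, and checking them takes a few lines (a minimal sketch using GCC/Clang's cpuid.h; MSVC has a __cpuid intrinsic instead):

```c
#include <stdio.h>
#include <string.h>
#include <cpuid.h>  /* GCC/Clang x86 intrinsics */

int main(void)
{
    unsigned eax, ebx, ecx, edx;
    char vendor[13] = { 0 };

    /* Leaf 0 returns the vendor ID string: "GenuineIntel" / "AuthenticAMD". */
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    /* Leaf 1 returns feature flags; SSE2 is EDX bit 26. A fair dispatcher
       would branch on feature bits like this, not on the vendor string. */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    printf("vendor: %s, SSE2: %s\n", vendor, (edx & (1u << 26)) ? "yes" : "no");
    return 0;
}
```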


Of course they did.

THISGONBGUD.gif

Time for the pages and pages of evidence and AMD people stuffing their fingers in their ears and closing their eyes.

System Specs

CPU: Ryzen 5 5600x | Mobo: Gigabyte B550i Aorus Pro AX | RAM: Hyper X Fury 3600 64gb | GPU: Nvidia FE 4090 | Storage: WD Blk SN750 NVMe - 1tb, Samsung 860 Evo - 1tb, WD Blk - 6tb/5tb, WD Red - 10tb | PSU: Corsair ax860 | Cooling: AMD Wraith Stealth | Displays: 55" Samsung 4k Q80R, 24" BenQ XL2420TE/XL2411Z & Asus VG248QE | Kb: K70 RGB Blue | Mouse: Logitech G903 | Case: Fractal Torrent RGB | Extra: HTC Vive, Fanatec CSR/Shifters/CSR Elite Pedals w/ Rennsport stand, Thrustmaster Warthog HOTAS, TrackIR 5, ARCTIC Z3 Pro Triple Monitor Arm | OS: Win 10 Pro 64 bit


You have no proof to back that up. Considering CDPR states they cannot optimize HairWorks, it's safe to assume they only have the black-boxed version of GameWorks, which means they only get DLLs whose functions they can call. CDPR would have no interest in running the tessellation level at 64x; Nvidia, trying to sell 900-series graphics cards, does.

 

The .ini settings file does not have a tessellation multiplier setting, only an anti-aliasing setting for HairWorks. Stop spreading misinformation.

It is the height of irony for you to:

 

1. Tell me I have no proof

2. Tell me to stop spreading misinformation 

Well, the patch isn't out yet, so we don't know which one of you is correct. If the setting isn't there, I think it's safe to say that it is hard-coded by Nvidia. If the setting ends up in the .ini file, then it's good for all of us.


All of Intel's Xeon Phi and Gen 7+ iGPUs are CUDA 6 compliant, for the record. How else do you think the Xeon Phi sold so well? Having Nvidia's algorithms run on your hardware (with superior performance), and then being able to help supercomputer engineers rewrite the same algorithms for OpenMP/OpenACC or even OpenCL (meaning the machines can run on the old code base while rebuilding it for further optimization on open platforms; kudos to Intel), is great.

The difference between 16x and 32x is very noticeable for me. 64x, though, I will admit, presents nothing other than more realistic movement of hair (spline physics). Is this a next-gen game or not?

 

There is quite a big difference between using CUDA for compute and using it for games. You have to remember that the *Works effects in GameWorks' VisualFX don't use APEX/advanced PhysX, only DX11, so I doubt CUDA makes any difference.

 

Maybe 32x is the sweet spot, but it doesn't change the fact that this one effect is extremely taxing for very little return in graphical fidelity. You say you work a lot with optimization; do you think HairWorks is well optimized? Take a look at what TressFX did in versions 2 and 3 with master/slave strands of hair. It seems a LOT more optimized, without any noticeable downgrade in graphics.
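
The master/slave idea is simple enough to sketch: physics runs only on a small set of master strands, and every slave strand is reproduced from its master at a fixed offset, so one simulated strand drives many rendered ones. A conceptual sketch of the technique as I understand it (not actual TressFX code):

```c
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;

/* After simulating only the master strands (the expensive part), derive
   each slave strand by offsetting its master's vertices. */
void expand_slave_strands(const Vec3 *master_verts, size_t verts_per_strand,
                          size_t n_masters, size_t slaves_per_master,
                          const Vec3 *slave_offsets, /* one rest offset per slave */
                          Vec3 *out_slave_verts)
{
    for (size_t m = 0; m < n_masters; m++)
        for (size_t s = 0; s < slaves_per_master; s++) {
            size_t slave = m * slaves_per_master + s;
            for (size_t v = 0; v < verts_per_strand; v++) {
                Vec3 mv = master_verts[m * verts_per_strand + v];
                Vec3 off = slave_offsets[slave];
                out_slave_verts[slave * verts_per_strand + v] =
                    (Vec3){ mv.x + off.x, mv.y + off.y, mv.z + off.z };
            }
        }
}
```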

Of course, Deus Ex: Mankind Divided will be the exam for TressFX 3. Although I expect it to run a little badly on Nvidia at launch, I'm positive it will be optimized in a driver, since Nvidia will get its hands on the source code and/or help from the dev, like it did with Tomb Raider.

 

Well, the patch isn't out yet, so we don't know which one of you is correct. If the setting isn't there, I think it's safe to say that it is hard-coded by Nvidia. If the setting ends up in the .ini file, then it's good for all of us.

 

It's going to be interesting, of course. But we don't know what Nvidia has done with CDPR since launch, after all hell broke loose. Right now it's hard-coded, and based on official statements, I assume it's a black-boxed thing.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


I also want to add that AMD needs to do 2 things:

1) Shut the fuck up
2) Take Nvidia to court if this is as they claim

If you are not going to do step 2, quit your fucking whining.

System Specs

CPU: Ryzen 5 5600x | Mobo: Gigabyte B550i Aorus Pro AX | RAM: Hyper X Fury 3600 64gb | GPU: Nvidia FE 4090 | Storage: WD Blk SN750 NVMe - 1tb, Samsung 860 Evo - 1tb, WD Blk - 6tb/5tb, WD Red - 10tb | PSU: Corsair ax860 | Cooling: AMD Wraith Stealth | Displays: 55" Samsung 4k Q80R, 24" BenQ XL2420TE/XL2411Z & Asus VG248QE | Kb: K70 RGB Blue | Mouse: Logitech G903 | Case: Fractal Torrent RGB | Extra: HTC Vive, Fanatec CSR/Shifters/CSR Elite Pedals w/ Rennsport stand, Thrustmaster Warthog HOTAS, TrackIR 5, ARCTIC Z3 Pro Triple Monitor Arm | OS: Win 10 Pro 64 bit


I also want to add that AMD needs to do 2 things:

1) Shut the fuck up
2) Take Nvidia to court if this is as they claim

If you are not going to do step 2, quit your fucking whining.

 

They can't take it to court, and they very well bloody know they can't, so instead they posture on social media and talk shit. At least when other companies talk shit, they have products to back up their bravado. 


AMD has two problems, one of which is very easy to solve.

1) it has no CUDA license (despite being offered it for free twice)

2) it has a terrible tessellation engine, which is basically all HairWorks runs on. Is it Nvidia's fault AMD is just that bad at it?

See, I'm not 100% on this. They had the opportunity to write a driver supporting CUDA on their GPUs, but I never saw any offer of a contractual guarantee on it. If they were offered the boilerplate and none of the articles I saw mentioned it, so be it; it's on AMD. But I cannot blame them for not jumping into CUDA if they didn't have a true license agreement protecting their investment of time and resources should Nvidia change its mind on the free access after the fact. If you saw that they were in fact offered the full contractual guarantee, I defer to your reading on the subject, but what I have seen only told of free access and the opportunity to incorporate support into their driver, with no agreement or legal protections.


It is a non-issue. Nvidia also isn't the one who sets the default tessellation rates (which are obscenely high; go into the .ini file and crank them down for a setting that doesn't kill either brand).

I am not a fan of conspiracy theories based on limited info, but I can understand why people are suspicious. CD Projekt Red has not helped the situation by setting the default tessellation obscenely high for no visual benefit and then releasing statements saying that they cannot optimize it, when it turned out to be so simple that we could optimize it ourselves. Of course people will speculate.

Some clarity on the above from the devs would help settle everybody down. I appreciate the way they admitted the downgrade and laid that to rest with honest talk.

Well, at least the game runs well on both Maxwell and GCN; that's the most important thing.


What is he even talking about? The performance on AMD cards is pretty much in line with the Nvidia cards in The Witcher 3.
The only thing that has problems is HairWorks, and that is an Nvidia feature in the first place.

RTX2070OC 


I am not a fan of conspiracy theories based on limited info, but I can understand why people are suspicious. CD Projekt Red has not helped the situation by setting the default tessellation obscenely high for no visual benefit and then releasing statements saying that they cannot optimize it, when it turned out to be so simple that we could optimize it ourselves. Of course people will speculate.

Some clarity on the above from the devs would help settle everybody down. I appreciate the way they admitted the downgrade and laid that to rest with honest talk.

Well, at least the game runs well on both Maxwell and GCN; that's the most important thing.

 

When the devs themselves say they cannot optimize it, what makes you think they can set the tessellation multiplier themselves? Remember that Nvidia cards, or at least their users, have no control over this. It's either 0x (off) or 64x. There is no way for any gamer to manually set the multiplier in the game; VS is wrong when he states this. Only AMD users can do this, because it's a feature built into the drivers, due to Nvidia pulling shit like this before.

 

It's such a shame, because the game itself is really well optimized and looks amazing. CDPR are one of the best devs in the industry when it comes to being open and ethical. Why they always choose to buy into the tech of one of the least open and ethical companies in the industry is confusing.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


They can't take it to court, and they very well bloody know they can't, so instead they posture on social media and talk shit. At least when other companies talk shit, they have products to back up their bravado.

Ya, it's almost like they know they will get legally trounced, so they are doing nothing more than trying to paint Nvidia as a villain.

But no, they wouldn't do that. 

System Specs

CPU: Ryzen 5 5600x | Mobo: Gigabyte B550i Aorus Pro AX | RAM: Hyper X Fury 3600 64gb | GPU: Nvidia FE 4090 | Storage: WD Blk SN750 NVMe - 1tb, Samsung 860 Evo - 1tb, WD Blk - 6tb/5tb, WD Red - 10tb | PSU: Corsair ax860 | Cooling: AMD Wraith Stealth | Displays: 55" Samsung 4k Q80R, 24" BenQ XL2420TE/XL2411Z & Asus VG248QE | Kb: K70 RGB Blue | Mouse: Logitech G903 | Case: Fractal Torrent RGB | Extra: HTC Vive, Fanatec CSR/Shifters/CSR Elite Pedals w/ Rennsport stand, Thrustmaster Warthog HOTAS, TrackIR 5, ARCTIC Z3 Pro Triple Monitor Arm | OS: Win 10 Pro 64 bit


When the devs themselves say they cannot optimize it, what makes you think they can set the tessellation multiplier themselves? Remember that Nvidia cards, or at least their users, have no control over this. It's either 0x (off) or 64x. There is no way for any gamer to manually set the multiplier in the game; VS is wrong when he states this. Only AMD users can do this, because it's a feature built into the drivers, due to Nvidia pulling shit like this before.

It's such a shame, because the game itself is really well optimized and looks amazing. CDPR are one of the best devs in the industry when it comes to being open and ethical. Why they always choose to buy into the tech of one of the least open and ethical companies in the industry is confusing.

Is it possible that the HairWorks licensing agreement prevents the dev from reducing it? We may never know, due to NDA.

CDPR are one of the best devs in the industry when it comes to being open and ethical. Why they always choose to buy into the tech of one of the least open and ethical companies in the industry is confusing.

What is confusing about it?

You can't be serious. Hyperthreading is a marketing joke?

 

 


It is a non-issue. Nvidia also isn't the one who sets the default tessellation rates (which are obscenely high; go into the .ini file and crank them down for a setting that doesn't kill either brand). People just want to make a fuss about things. Maxwell and GCN have comparable tessellation power; Maxwell just has better drivers. Kepler inherently isn't as good at tessellation. So you crank those settings down. 

 

People (like Huddy) just want to hurl shit at Nvidia instead of at their own engineering teams. 

You're wrong; Maxwell has far better tessellation performance than GCN, and GCN is similar to Kepler.

You would think that CDPR and Nvidia would be smart enough to balance performance and looks, but that isn't the case.

So either CDPR or NVIDIA fucked up. I'm going to point to NVIDIA; otherwise the 700 series wouldn't run like shit.

 

[Attached images: benchmark chart (67750.png) and TessMark tessellation results (tessmark.gif)]

FX-8120 | ASUS Crosshair V Formula | G.Skill Sniper 8GB DDR3-1866 CL9 | Club3D Radeon R9 290 RoyalAce |Thermaltake Chaser MkIII | 128GB M4 Crucial + 2TB HDD storage | Cooler Master 850M | Scythe Mugen 3 | Corsair Strafe RGB | Logitech G500


This topic is now closed to further replies.