
[Mini-News] Metal Gear Solid V will be bundled with Nvidia graphics cards

Bouzoo

Hairworks...

AMD sucks at Hairworks because it's not optimised...

 

No, AMD cards are simply bad at tessellation. It's not some magic GameWorks code that only Nvidia has; it's tessellation. There's an option in CCC to reduce the tessellation level, and performance increases drastically. Take a look at this tessellation benchmark:

 

[image: tessellation benchmark chart]

http://anandtech.com/bench/GPU14/841

 

290X is worse than GTX 660.

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


The reason that AMD sucks with Hairworks is that Nvidia doesn't allow them to optimise for it on their hardware.

 

Optimization would help a little, but optimization isn't magic. AMD's GPUs are weaker in key areas than Nvidia's; no amount of optimization can fix that. AMD decided to cripple their high-end cards by not increasing the ROP count over the 290 cards. AMD's pixel fillrate on their top-end Fury X is lower than the 970's. AMD has a much better texture fillrate, but that isn't as important in games right now. GameWorks leverages pixel fillrate (which also explains why the 700 series cards are much slower than the 900 series in GameWorks).
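To make the fillrate argument concrete, here is a rough back-of-envelope sketch of how theoretical fillrates are usually estimated (peak pixel fillrate ≈ ROPs × core clock, peak texture fillrate ≈ TMUs × core clock). The ROP/TMU/clock figures below are approximate reference specs used only for illustration; effective fillrate in games also depends on boost clocks, memory bandwidth and colour compression, so spec-sheet math alone doesn't settle the comparison.

```python
# Back-of-envelope theoretical fillrates from spec-sheet numbers (illustrative only).
# Peak pixel fillrate   ~= ROPs * core clock
# Peak texture fillrate ~= TMUs * core clock
# Effective fillrate also depends on boost clocks, bandwidth and compression.

def pixel_fillrate_gp_s(rops: int, clock_mhz: float) -> float:
    """Theoretical peak pixel fillrate in GPixels/s."""
    return rops * clock_mhz / 1000.0

def texture_fillrate_gt_s(tmus: int, clock_mhz: float) -> float:
    """Theoretical peak texture fillrate in GTexels/s."""
    return tmus * clock_mhz / 1000.0

# Approximate reference-card specs: (ROPs, TMUs, core clock in MHz).
cards = {
    "R9 290X":   (64, 176, 1000),
    "R9 Fury X": (64, 256, 1050),
    "GTX 970":   (56, 104, 1178),  # 56 usable ROPs is the commonly quoted figure
}

for name, (rops, tmus, clock) in cards.items():
    print(f"{name:9s}  {pixel_fillrate_gp_s(rops, clock):6.1f} GP/s pixel, "
          f"{texture_fillrate_gt_s(tmus, clock):6.1f} GT/s texture")
```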


Yay so another game that will be shit on AMD gpus...

I haven't encountered a game that runs so badly on my 290X that it's actually unplayable.

I wonder how many people have actually had bad experiences with AMD and how many just repeat random negative things they've picked up from people on the internet.

I'm aware there are games out there better optimized for Nvidia hardware, but I don't know of many (if any at all) that completely ruin the experience for AMD users.

PCPartPicker link: http://pcpartpicker.com/p/R6GTGX

Привет товарищ ))))


Yay so another game that will be shit on AMD gpus...

Probably won't be, considering MGS:GZ runs fantastically on everything.


All this NVIDIA hate because AMD apparently couldn't optimise for Hairworks? AMD's tessellation processing just isn't as good; the same goes for Kepler- and Fermi-based cards.
AMD also still managed to get an optimised driver out despite this, AND only AMD users can adjust the tessellation level down from x64 in their control panel. Dropping it means they get comparable visuals without the significant performance drop. As a current NV user, I would love to be able to change tessellation levels in my driver control panel.

The Witcher 3 Patch 1.07 also added more Hairworks options directly into the game; you can now manually adjust the AA level of the hair down from the default x8.

 

[screenshot of the new Hairworks options in patch 1.07]

 

But of course it's all NVIDIA directly making CDPR handicap AMD performance; it has nothing to do with CDPR itself or AMD's lacklustre tessellation performance in current titles.

People are quick to forget that in AMD's poster game for TrueAudio and TressFX, Lichdom: Battlemage, TressFX 2.0 was disabled from running on NVIDIA hardware. Even enabling TressFX in the config files has no effect, since it was disabled on NV hardware.

I find it rather hypocritical of AMD and its users in this regard, as the developer has stated on multiple occasions that, due to their agreement with AMD, TressFX is only for AMD hardware in that game.

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


All this NVIDIA hate because AMD apparently couldn't optimise for Hairworks? AMD's tessellation processing just isn't as good; the same goes for Kepler- and Fermi-based cards.

AMD also still managed to get an optimised driver out despite this, AND only AMD users can adjust the tessellation level down from x64 in their control panel. Dropping it means they get comparable visuals without the significant performance drop. As a current NV user, I would love to be able to change tessellation levels in my driver control panel.

The Witcher 3 Patch 1.07 also added more Hairworks options directly into the game; you can now manually adjust the AA level of the hair down from the default x8.

 

 

But of course it's all NVIDIA directly making CDPR handicap AMD performance; it has nothing to do with CDPR itself or AMD's lacklustre tessellation performance in current titles.

People are quick to forget that in AMD's poster game for TrueAudio and TressFX, Lichdom: Battlemage, TressFX 2.0 was disabled from running on NVIDIA hardware. Even enabling TressFX in the config files has no effect, since it was disabled on NV hardware.

I find it rather hypocritical of AMD and its users in this regard, as the developer has stated on multiple occasions that, due to their agreement with AMD, TressFX is only for AMD hardware in that game.

While I agree with you, I heard AMD was not aware of the TressFX thing in Lichdom and they are looking into it.

System Specs

CPU: Ryzen 5 5600x | Mobo: Gigabyte B550i Aorus Pro AX | RAM: Hyper X Fury 3600 64gb | GPU: Nvidia FE 4090 | Storage: WD Blk SN750 NVMe - 1tb, Samsung 860 Evo - 1tb, WD Blk - 6tb/5tb, WD Red - 10tb | PSU: Corsair ax860 | Cooling: AMD Wraith Stealth | Displays: 55" Samsung 4k Q80R, 24" BenQ XL2420TE/XL2411Z & Asus VG248QE | Kb: K70 RGB Blue | Mouse: Logitech G903 | Case: Fractal Torrent RGB | Extra: HTC Vive, Fanatec CSR/Shifters/CSR Elite Pedals w/ Rennsport stand, Thrustmaster Warthog HOTAS, Track IR5, ARCTIC Z3 Pro Triple Monitor Arm | OS: Win 10 Pro 64 bit


While I agree with you, I heard AMD was not aware of the TressFX thing in Lichdom and they are looking into it.

 

They're only aware of it because I and others kept nagging AMD Roy about it every time he said they never hamper gamers' experiences or went off on a tangent about how NVIDIA and GameWorks are evil.

 

I also seriously doubt they weren't aware, as Xaviant said TressFX was only available on AMD hardware because of an agreement/partnership with them.

[screenshots of the developer's statements that TressFX is AMD-only in Lichdom]

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


They're only aware of it because I and others kept nagging AMD Roy about it every time he said they never hamper gamers' experiences or went off on a tangent about how NVIDIA and GameWorks are evil.

 

I also seriously doubt they weren't aware, as Xaviant said TressFX was only available on AMD hardware because of an agreement/partnership with them.

[screenshots of the developer's statements that TressFX is AMD-only in Lichdom]

It's starting to become AMD's modus operandi: point the finger at someone else and ignore the issue.

System Specs

CPU: Ryzen 5 5600x | Mobo: Gigabyte B550i Aorus Pro AX | RAM: Hyper X Fury 3600 64gb | GPU: Nvidia FE 4090 | Storage: WD Blk SN750 NVMe - 1tb, Samsung 860 Evo - 1tb, WD Blk - 6tb/5tb, WD Red - 10tb | PSU: Corsair ax860 | Cooling: AMD Wraith Stealth | Displays: 55" Samsung 4k Q80R, 24" BenQ XL2420TE/XL2411Z & Asus VG248QE | Kb: K70 RGB Blue | Mouse: Logitech G903 | Case: Fractal Torrent RGB | Extra: HTC Vive, Fanatec CSR/Shifters/CSR Elite Pedals w/ Rennsport stand, Thrustmaster Warthog HOTAS, Track IR5, ARCTIC Z3 Pro Triple Monitor Arm | OS: Win 10 Pro 64 bit


It's starting to become AMD's modus operandi: point the finger at someone else and ignore the issue.

 

That's the thing: both companies play dirty to some extent, and as a gamer I want the best possible experience without my current GPU brand coming into play.

At the same time I also love all the new tech, even if it's proprietary (I still own a first-gen Ageia PhysX PCI card).

Would I prefer it if everything were open and anyone could use it without issues? Yes, but each GPU architecture has its limitations in that regard, and not every company gives away its creations.

AMD isn't great at tessellation at the moment, hence Hairworks being punishing, but it hardly makes NVIDIA evil.

What was nasty was Crysis 2, where an invisible tessellated sea under the ground handicapped AMD cards.

Just like AMD is being nasty in having their partner Xaviant disable TressFX 2.0 on NVIDIA cards.

I simply vote with my wallet in the end, as I want the best overall experience with all the eye candy. At the moment that's GameWorks effects, and when the new Tomb Raider or Deus Ex is out, I'll probably fawn over TressFX 3.0 as well.

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


http://www.hardwarepal.com/wp-content/uploads/2015/01/MGS-V-Ground-Zeroes-Benchmark-1920x1080.jpg

AMD performance is worse vs Nvidia, but I think working with Nvidia likely improved base performance for both.

Intel i5-3570K/ Gigabyte GTX 1080/ Asus PA248Q/ Sony MDR-7506/MSI Z77A-G45/ NHD-14/Samsung 840 EVO 256GB+ Seagate Barracuda 3TB/ 16GB HyperX Blue 1600MHZ/  750w PSU/ Corsair Carbide 500R

 


Bullshit. Don't be so naive. Project CARS, for example, was known to run like crap on AMD hardware. Here's why, from a developer:

 

 
Check these threads out for more info:

 

Apparently the game ran just fine in its beta state on AMD cards; then suddenly the performance just tanked. Furthermore, AMD cards handle draw calls a LOT better than NVidia in all benchmarks and games for that matter, so using increased draw calls as a reason makes little sense.

 

No, AMD cards are simply bad at tessellation. It's not some magic GameWorks code that only Nvidia has; it's tessellation. There's an option in CCC to reduce the tessellation level, and performance increases drastically. Take a look at this tessellation benchmark:

 

http://anandtech.com/bench/GPU14/841

 

290X is worse than GTX 660.

 

Indeed, so suddenly NVidia hammers tessellation to hell with the world's greatest concrete slab, rendered water underground, and tessellation with triangles smaller than 1 pixel? That is insanely wasteful, which punishes NVidia's own users as well. 64x on HairWorks with no visual benefit over 32x and hardly any over 16x? For what reason?
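As a rough illustration of why 64x draws so much fire: for an ordinary triangle-domain patch with a uniform tessellation factor F, the tessellator emits on the order of F² triangles, so the workload climbs far faster than any visual payoff. HairWorks tessellates hair splines rather than surface patches, so its exact cost curve differs; the sketch below is only a back-of-envelope look at that scaling, not HairWorks' actual numbers.

```python
# Back-of-envelope only: a triangle-domain patch with uniform integer
# tessellation factor F produces on the order of F**2 triangles.
# HairWorks tessellates hair splines, so its cost curve is not identical,
# but the "higher factor, rapidly more work" shape is the point here.

def approx_triangles(factor: int) -> int:
    """Rough triangle count per patch at a given tessellation factor."""
    return factor ** 2

baseline = approx_triangles(16)
for f in (8, 16, 32, 64):
    tris = approx_triangles(f)
    print(f"factor {f:2d}x -> ~{tris:4d} triangles/patch "
          f"({tris / baseline:.2f}x the 16x workload)")
```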

There is a reason you can set tessellation manually in CCC, and those reasons are mainly known as Batman: Arkham Origins and the Crysis 2 DX11 version.

 

Marvel at the world's greatest concrete slab here: http://techreport.com/review/21404/crysis-2-tessellation-too-much-of-a-good-thing/2

 

All this NVIDIA hate because AMD apparently couldn't optimise for Hairworks? AMD's tessellation processing just isn't as good; the same goes for Kepler- and Fermi-based cards.

AMD also still managed to get an optimised driver out despite this, AND only AMD users can adjust the tessellation level down from x64 in their control panel. Dropping it means they get comparable visuals without the significant performance drop. As a current NV user, I would love to be able to change tessellation levels in my driver control panel.

The Witcher 3 Patch 1.07 also added more Hairworks options directly into the game; you can now manually adjust the AA level of the hair down from the default x8.

 

But of course it's all NVIDIA directly making CDPR handicap AMD performance; it has nothing to do with CDPR itself or AMD's lacklustre tessellation performance in current titles.

People are quick to forget that in AMD's poster game for TrueAudio and TressFX, Lichdom: Battlemage, TressFX 2.0 was disabled from running on NVIDIA hardware. Even enabling TressFX in the config files has no effect, since it was disabled on NV hardware.

I find it rather hypocritical of AMD and its users in this regard, as the developer has stated on multiple occasions that, due to their agreement with AMD, TressFX is only for AMD hardware in that game.

 

CDPR straight out said they could not optimize HairWorks for anything, least of all AMD, which probably means they don't have source code access themselves. Just because an effect is DX11 tessellation based doesn't mean you cannot optimize for it. How the graphics card deals with the effect is something you optimize for, which can make a huge difference.

 

I do wonder what the new HairWorks preset is about. Does it add tessellation multiplier control, or is it just a mix of HairWorks level + AA? Looks like the latter.

 

NVidia is handicapping games on all cards with excessive tessellation, but of course it hits AMD worse, as they cannot optimize properly for it. Remember that optimization at the end of the graphics stack is very limited and difficult to do without breaking everything. That's what NVidia likes to do, which is why you suddenly see a character in Assassin's Creed Unity without a face! Yup, didn't happen on AMD.

 

They're only aware of it because I and others kept nagging AMD Roy about it every time he said they never hamper gamers' experiences or went off on a tangent about how NVIDIA and GameWorks are evil.

 

I also seriously doubt they weren't aware, as Xaviant said TressFX was only available on AMD hardware because of an agreement/partnership with them.

[screenshot of the developer's statement that TressFX is AMD-only in Lichdom]

 

AMD claims they never hinder anyone from optimizing. Based on their general behaviour when it comes to their IP, I tend to believe that. As the quote above shows, the developer disabled it, and it sounds like they only included it as part of their general cooperation with AMD, which means they lived up to their contract/partner deal with AMD and nothing else. Go shout at the dev instead.

 

The fact that you yourself say AMD seemed not to be aware of this issue kind of proves the point. After all, why would AMD test a TressFX implementation in a game on NVidia hardware?

 

That being said, I fully expect TressFX in Deus Ex: Mankind Divided to work on both AMD and NVidia, as it should.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


This is BS. The reason performance suffers on AMD hardware is high DX11 driver overhead for the CPU and the fact that AMD cards are slower in tessellation-intensive scenarios, like Hairworks. This is 100% AMD's fault, so stop blaming GameWorks. GameWorks was AMD's excuse for their incompetence in optimizing their graphics driver and their poor tessellation performance. Period.

High DX11 driver overhead for the CPU? OK... then why do Nvidia drivers max out 2 cores of a Xeon X5450 with a GTX 970 and a 650 Ti in the same rig? (BTW, that's rhetorical; I know the answer: Nvidia drivers poll the CPU way too much.)

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Not another GameWorks shitstorm *shrugs*

@mr moose will love this kappa.

  ﷲ   Muslim Member  ﷲ

KennyS and ScreaM are my role models in CSGO.

CPU: i3-4130 Motherboard: Gigabyte H81M-S2PH RAM: 8GB Kingston hyperx fury HDD: WD caviar black 1TB GPU: MSI 750TI twin frozr II Case: Aerocool Xpredator X3 PSU: Corsair RM650


Apparently the game ran just fine in its beta state on AMD cards; then suddenly the performance just tanked. Furthermore, AMD cards handle draw calls a LOT better than NVidia in all benchmarks and games for that matter, so using increased draw calls as a reason makes little sense.

 

Any proof to back that up?

 

Furthermore, AMD cards handle draw calls a LOT better than NVidia in all benchmarks and games for that matter, so using increased draw calls as a reason makes little sense.

 

Nope. It's the other way around. AMD only recently improved it with the 15.7 driver. They're still behind Nvidia. Check the two threads I've linked in my previous post, and also take a look at this:

 

290X 15.4:

[3DMark API Overhead Test screenshot]

 

290X 15.7: 

[3DMark API Overhead Test screenshot]

 

GTX 970:

[3DMark API Overhead Test screenshot]

 

 

Indeed, so suddenly NVidia hammers tessellation to hell with the world's greatest concrete slab, rendered water underground, and tessellation with triangles smaller than 1 pixel? That is insanely wasteful, which punishes NVidia's own users as well. 64x on HairWorks with no visual benefit over 32x and hardly any over 16x? For what reason?

There is a reason you can set tessellation manually in CCC, and those reasons are mainly known as Batman: Arkham Origins and the Crysis 2 DX11 version.

 

Perhaps Nvidia did overuse tessellation with GameWorks, or more specifically Hairworks. There's no doubt they did in those games that have a tessellated water mesh underground and stuff like that, but that's different because you can disable HW. They certainly knew their cards perform well with tessellation, so they based some of their GW effects around it. The visual benefit over 32x with HW is arguable. I don't think we even know what the tessellation level is with HW. I had to manually lower it to 8x with my 290X to get the same performance as with my 970 at default. I do think that whatever level it is by default is excessive, even for Nvidia cards, as the framerate was still unsatisfactory imo. From what I've seen there isn't much of a visual difference between 32x and 16x, yet the performance gain is quite big. There's now an option in Witcher 3 to reduce tessellation, however, which makes the game playable.

Anyway, all it took AMD to fix it was to optimize their driver and reduce tessellation. It's their fault for having hardware that's slow at tessellation and for the amount of time it took them to add the tessellation level option and "optimize" their driver. Besides, HW is optional. Nvidia isn't obliged to let AMD users use their effects.
 
And HW is just part of GW, so if you're going to blame something, blame Hairworks. Not that you should, though.
 
 

CDPR straight out said they could not optimize HairWorks for anything, least of all AMD, which probably means they don't have source code access themselves. Just because an effect is DX11 tessellation based doesn't mean you cannot optimize for it. How the graphics card deals with the effect is something you optimize for, which can make a huge difference.

 

Of course they couldn't optimize Hairworks for AMD. It's a hardware issue. They knew performance on AMD hardware is worse, but it's a hardware issue and there was nothing they could have done. They obviously didn't want to reduce the tessellation because that's not an optimization.

 

After all, why would AMD test a TressFX implementation in a game on NVidia hardware?

 

Why would Nvidia test GW on AMD hardware?

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


Any proof to back that up?

 

 

Nope. It's the other way around. AMD only recently improved it with the 15.7 driver. They're still behind Nvidia. Check the two threads I've linked in my previous post, and also take a look at this:

 

290X 15.4:

[3DMark API Overhead Test screenshot]

 

290X 15.7: 

[3DMark API Overhead Test screenshot]

 

GTX 970:

[3DMark API Overhead Test screenshot]

 

 

 

Perhaps Nvidia did overuse tessellation with GameWorks, or more specifically Hairworks. There's no doubt they did in those games that have a tessellated water mesh underground and stuff like that, but that's different because you can disable HW. They certainly knew their cards perform well with tessellation, so they based some of their GW effects around it. The visual benefit over 32x with HW is arguable. I don't think we even know what the tessellation level is with HW. I had to manually lower it to 8x with my 290X to get the same performance as with my 970 at default. I do think that whatever level it is by default is excessive, even for Nvidia cards, as the framerate was still unsatisfactory imo. From what I've seen there isn't much of a visual difference between 32x and 16x, yet the performance gain is quite big. There's now an option in Witcher 3 to reduce tessellation, however, which makes the game playable.

Anyway, all it took AMD to fix it was to optimize their driver and reduce tessellation. It's their fault for having hardware that's slow at tessellation and for the amount of time it took them to add the tessellation level option and "optimize" their driver. Besides, HW is optional. Nvidia isn't obliged to let AMD users use their effects.
 
And HW is just part of GW, so if you're going to blame something, blame Hairworks. Not that you should, though.
 
 
 

 

Of course they couldn't optimize Hairworks for AMD. It's a hardware issue. They knew performance on AMD hardware is worse, but it's a hardware issue and there was nothing they could have done. They obviously didn't want to reduce the tessellation because that's not an optimization.

 

 

Why would Nvidia test GW on AMD hardware?

AMD's drivers can be better with older GPUs than Nvidia's: the HD 5650 and 4250 in this laptop have the CPU (Phenom II X4 P920 1.6GHz) idling, while my Quadro NVS 110M on its last Windows 7 drivers has the CPU idling at around 50% with my Core 2 Duo T7600 (and I know it was the drivers, as I compared on a clean Windows install and spent hours making sure nothing else was using the CPU). It's a similar story with my Xeon X5450 with my GTX 970 and GTX 650 Ti 2GB as well, with the CPU at around 30%. (I haven't owned an AMD graphics card outside of laptops; they didn't give me the best value for money at the time :( )

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Now that's a bold statement, considering that Arkham Knight (and a few other games for that matter) ran badly on everything except consoles. It may happen, but it may not as well; no evidence for anything yet.

 

Yeah, I hope for Nvidia's sake that this doesn't become a sort of kiss of death for games: Nvidia GameWorks? Bundled with cards? Steam sale removal!

-------

Current Rig

-------


This comes as no surprise.

Especially since said Nvidia features will be in it.

Maximums - Asus Z97-K /w i5 4690 Bclk @106.9Mhz * x39 = 4.17Ghz, 8GB of 2600Mhz DDR3,.. Gigabyte GTX970 G1-Gaming @ 1550Mhz

 


Yeah, I hope for Nvidia's sake that this doesn't become a sort of kiss of death for games: Nvidia GameWorks? Bundled with cards? Steam sale removal!

 

Ground Zeroes was great on the PC. Hopefully all the bullshit going on at Konami doesn't mean Phantom Pain ends up being a bad port.


Any proof to back that up?

 

 http://linustechtips.com/main/topic/363018-project-cars-devs-address-amd-performance-issues-amd-drivers-to-blame-entirely-physx-runs-on-cpu-only-no-gpu-involvement-whatsoever/?p=4924725 (can't quote as the thread got locked)

 

Nope. It's the other way around. AMD only recently improved it with the 15.7 driver. They're still behind Nvidia. Check the two threads I've linked in my previous post, and also take a look at this:

 

290X 15.4:

 

290X 15.7: 

 

GTX 970:

 

There is no way a game like Project CARS would use over 900,000 draw calls. Draw calls are limited by the API and the CPU, not the GPU (at least in practice). Certainly the 970 can do more in DX11, because NVidia has good DX11 drivers, but Maxwell is also a newer architecture.

According to this post http://forums.guru3d.com/showpost.php?p=5116716&postcount=901 Project CARS can use up to 13k draw calls per frame, and NVidia's DX11 drivers can handle up to 11k and AMD's up to 7k (in Windows 7, so expect higher values on Win 8.1 and even higher on Windows 10). Those numbers are based on a single 3GHz Haswell core, so a 4790K at 4.4GHz or more will increase that number almost linearly (which is why Haswell-E processors are shit for gaming in DX11).

 

Project CARS suffers from the same issue as Assassin's Creed: too many draw calls for DX11. If the dev does indeed release a DX12 update, that should make the game run better on everything. However, if you look at Mantle and DX12, AMD can handle a lot more draw calls overall.

 

Just a closing note on the draw call issue: since all your arguments are based on the 3DMark API Overhead test, remember what 3DMark itself writes:

You should not use these scores to compare systems or graphics cards.

 

So yeah.

 

Perhaps Nvidia did overuse tessellation with GameWorks, or more specifically Hairworks. There's no doubt they did in those games that have a tessellated water mesh underground and stuff like that, but that's different because you can disable HW. They certainly knew their cards perform well with tessellation, so they based some of their GW effects around it. The visual benefit over 32x with HW is arguable. I don't think we even know what the tessellation level is with HW. I had to manually lower it to 8x with my 290X to get the same performance as with my 970 at default. I do think that whatever level it is by default is excessive, even for Nvidia cards, as the framerate was still unsatisfactory imo. From what I've seen there isn't much of a visual difference between 32x and 16x, yet the performance gain is quite big. There's now an option in Witcher 3 to reduce tessellation, however, which makes the game playable.

Anyway, all it took AMD to fix it was to optimize their driver and reduce tessellation. It's their fault for having hardware that's slow at tessellation and for the amount of time it took them to add the tessellation level option and "optimize" their driver. Besides, HW is optional. Nvidia isn't obliged to let AMD users use their effects.
 
And HW is just part of GW, so if you're going to blame something, blame Hairworks. Not that you should, though.

 

Pretty sure 64x is the max for tessellation, but I could be mistaken. The issue is that it's not just Crysis 2 and Batman: Arkham Origins that use excessive amounts of tessellation. It is ALL tessellation-based GameWorks effects in ALL the games that use them. NVidia designed these effects to use the maximum amount of tessellation. That hurts both AMD and NVidia users, but it certainly hurts AMD users a lot more. Furthermore, it also "incentivizes" NVidia users to upgrade their cards more often. How convenient.

 

Pretty sure you cannot change the tessellation multiplier for Hairworks in Witcher 3. The new update looks like a preset combining Hairworks off/Geralt/everything + HairWorks anti-aliasing. At least I haven't seen anything that shows differently.

 

AMD optimized for Witcher 3, which is possible (no one claims otherwise), but with the exception of HairWorks. The point is that you still pay full price for a game with an effect it has been marketed with that your card cannot run properly. I have a problem with that.

 

Of course they couldn't optimize Hairworks for AMD. It's a hardware issue. They knew performance on AMD hardware is worse, but it's a hardware issue and there was nothing they could have done. They obviously didn't want to reduce the tessellation because that's not an optimization.

 

Why would Nvidia test GW on AMD hardware?

 

You can optimize plenty. When Tomb Raider was released and both Crystal Dynamics (the dev) and NVidia got source code access, both were able to make TressFX run equally efficiently on AMD and NVidia. Before NVidia got access, the performance sucked on their cards. OK, TressFX is based on DirectCompute, but still. Optimization is about running code better on your hardware, which means you usually get even bigger performance increases by optimizing the areas the hardware is less good at. AMD's cards have generally been better at DirectCompute, but NVidia at least got the opportunity to optimize TressFX and overcome their shortcomings.

 

They wouldn't, nor do they need to. They only need to open up the source code so AMD can optimize. GameWorks would still run like shit on everything, but a lot less so if it were vendor-agnostic.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


 

The game was obviously not finished at the time of the first screenshot. Perhaps they implemented something later on that increased draw calls. We can't be sure they increased the number of draw calls on purpose to reduce performance, unless you can prove they did something like what ExtremeTech found in Crysis 2.

 

There is no way a game like Project CARS would use over 900,000 draw calls. Draw calls are limited by the API and the CPU, not the GPU (at least in practice). Certainly the 970 can do more in DX11, because NVidia has good DX11 drivers, but Maxwell is also a newer architecture.

According to this post http://forums.guru3d.com/showpost.php?p=5116716&postcount=901 Project CARS can use up to 13k draw calls per frame, and NVidia's DX11 drivers can handle up to 11k and AMD's up to 7k (in Windows 7, so expect higher values on Win 8.1 and even higher on Windows 10). Those numbers are based on a single 3GHz Haswell core, so a 4790K at 4.4GHz or more will increase that number almost linearly (which is why Haswell-E processors are shit for gaming in DX11).

 

The API Overhead Test measures API performance by looking at the balance between frame rate and draw calls: it increases the number of draw calls until the frame rate drops to 30 fps and then reports the result as draw calls per second. 13k per frame would mean that to get 60 fps you need 780,000 draw calls per second. And from my API Overhead tests in my previous post, my 4670K can do around 900k at 30 fps, which means that number would be significantly lower at 60 fps, right? So 900k vs 1.2 million is still a big difference, and it matters.
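For anyone trying to follow the per-frame vs per-second numbers being thrown around, the conversion is just multiplication. The sketch below takes the figures quoted in this thread (13k calls per frame, roughly 900k and 1.2M calls per second from the API Overhead results) at face value, purely to show the arithmetic.

```python
# The API Overhead test reports draw calls per second (measured around 30 fps),
# while games budget draw calls per frame. Converting between the two:
#   calls_per_second = calls_per_frame * fps
# The figures below are the ones quoted in this thread, used only to show the math.

def calls_per_second(calls_per_frame: int, fps: int) -> int:
    return calls_per_frame * fps

def fps_ceiling(calls_per_second_budget: int, calls_per_frame: int) -> float:
    return calls_per_second_budget / calls_per_frame

print(calls_per_second(13_000, 60))           # 780000 calls/s needed for 13k/frame at 60 fps
print(round(fps_ceiling(900_000, 13_000)))    # ~69 fps ceiling on a ~900k calls/s result
print(round(fps_ceiling(1_200_000, 13_000)))  # ~92 fps ceiling on a ~1.2M calls/s result
```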

 

 

Just a closing note on the draw call issue: since all your arguments are based on the 3DMark API Overhead test, remember what 3DMark itself writes:

 

So yeah.

 

I am not comparing graphics cards. I'm comparing AMD's and Nvidia's API overhead using their graphics cards. I can't test Nvidia's overhead with a 290X.

 

AMD optimized for Witcher 3, which is possible (no one claims otherwise), but with the exception of HairWorks. The point is that you still pay full price for a game with an effect it has been marketed with that your card cannot run properly. I have a problem with that.

 

You're paying $60 for the game, not for GW effects. Those effects are Nvidia's tech and they are optional. Nvidia invested a lot of money and effort into creating them to provide something cool for their users. They even send out their engineers to help game devs implement them. They are in no way obliged to even let AMD users use the effects. Yet they do, and then AMD users blame them for the higher performance impact, which is 100% AMD's fault and not Nvidia's. Ridiculous, if you ask me.

 

You can optimize plenty. When Tomb Raider was released and both Crystal Dynamics (the dev) and NVidia got source code access, both were able to make TressFX run equally efficiently on AMD and NVidia. Before NVidia got access, the performance sucked on their cards. OK, TressFX is based on DirectCompute, but still. Optimization is about running code better on your hardware, which means you usually get even bigger performance increases by optimizing the areas the hardware is less good at. AMD's cards have generally been better at DirectCompute, but NVidia at least got the opportunity to optimize TressFX and overcome their shortcomings.

 

They wouldn't, nor do they need to. They only need to open up the source code so AMD can optimize. GameWorks would still run like shit on everything, but a lot less so if it were vendor-agnostic.

 

That's completely different. This has nothing to do with the source code, but with tessellation. How many times do I have to repeat this? I've already provided proof that the 290X is as slow as a GTX 660 in tessellation. You can't make a 290X perform like a 980 in tessellation through optimization. That's impossible. You can't fix hardware issues with software.

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


GameWorks automatically cuts the chance of a game being a good port in half, either due to Nvidia or due to the developer being shit and depending on Nvidia to make the port passable.


AMD's drivers can be better with older GPUs than Nvidia's: the HD 5650 and 4250 in this laptop have the CPU (Phenom II X4 P920 1.6GHz) idling, while my Quadro NVS 110M on its last Windows 7 drivers has the CPU idling at around 50% with my Core 2 Duo T7600 (and I know it was the drivers, as I compared on a clean Windows install and spent hours making sure nothing else was using the CPU). It's a similar story with my Xeon X5450 with my GTX 970 and GTX 650 Ti 2GB as well, with the CPU at around 30%. (I haven't owned an AMD graphics card outside of laptops; they didn't give me the best value for money at the time :( )

 

But that's different. The issue you described has nothing to do with DX11 driver overhead.

 

GameWorks automatically cuts the chance of a game being a good port in half, either due to Nvidia or due to the developer being shit and depending on Nvidia to make the port passable.

 

I think Nvidia made some really bad partnership choices. Games like Unity, WD, Arkham Knight were a mess because of devs/publishers. GW had nothing to do with it.

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


I think Nvidia made some really bad partnership choices. Games like Unity, WD, Arkham Knight were a mess because of devs/publishers. GW had nothing to do with it.

That's pretty much what I'm saying. Basically, don't touch a GW title within 6 months of release.

