
NVIDIA Responds to GTX 970 3.5GB Memory Issue

TheBoneyKing

I just don't get why this issue is such a big hit when AMD has had massive DirectX CPU overhead performance issues for about a year now, to the point that they released a whole new API, Mantle (which everyone hypes 24/7), to counter that CPU overhead. I made a thread about it here: http://www.overclock.net/t/1528559/directx-driver-overhead-and-why-mantle-is-a-selling-point-bunch-of-benchmarks/100_100

Mantle only does well against AMD's own DX driver; against Nvidia's, the difference is practically non-existent. AMD's DX driver has been so horrible that in CPU-bound scenarios a 760 was wrecking a 290X, which obviously includes the 295X2. The AMD rep at OCN forwarded that thread to AMD's driver team; two months later there's still no response. They most likely won't fix it, and if they do, it's bye-bye Mantle, because the whole pitch will have been a lie.

AMD got hit hard on this. The difference comes down to what hits home more for users. This issue is seen as making many games unplayable at the full ability of the GPU, which we all like to push, let's face it. Mantle, sadly, was a lot of marketing fluff with few details; many people don't know about what you highlighted and assume it's only a Windows-side thing. And well, more performance is better than no performance. To be fair, even compared to an Nvidia card Mantle goes a bit faster, just not as fast as advertised in many of the affected games (from what I remember), so fewer people complain.

Not to mention there is no problem with Mantle in itself. I mean, yes, the game runs faster with it than without it on the same card; the performance is also more consistent and doesn't "magically" drop like in the 970's case.



Mantle does do a tiny bit better, though that could just be the GPU being more powerful than whatever Nvidia card they compared against once Mantle lifted the CPU overhead and let it reach its full potential. I wouldn't call it meaningfully better, but fine, it's better. Also, about a month after Mantle's release Nvidia put out driver 337.50, which reduced roughly the same amount of CPU overhead Mantle managed, without a new API. After Nvidia released that driver, whatever hype I had for Mantle was immediately over. AMD bragged about it being a low-level API, and their PR tried to say DX12 = Mantle, when D3D (that's the actual API; DirectX is just the package) went low level back in 1999 during the development of Microsoft's first console.

"Direct3D is a low-level API that you can use to draw triangles, lines, or points per frame, or to start highly parallel operations on the GPU."

Source: https://msdn.microsoft.com/en-us/library/windows/desktop/hh769064(v=vs.85).aspx#What_is_D3D


I have been doing some testing with a VRAM-hungry game, Assassin's Creed Unity, and found something very interesting. Normally, at Ultra settings, 1080p and FXAA, VRAM usage would not increase past 3,550 MB. Even with 4x MSAA it only increased to around 3,580 MB. Then I tried 8x MSAA, dropped into single-digit FPS, and finally pushed it to around 3,990 MB.

Here is the interesting part: when I then went back to FXAA, the VRAM usage didn't drop; it stayed there. I'm beginning to think the memory bandwidth is okay and the problem is a bug in the driver or firmware that won't allocate the final 512 MB unless you force it under very stressful conditions. Once those conditions are met, even if I lower the VRAM requirements by switching from 8x MSAA to a post-processing AA like FXAA, which hardly affects VRAM, the used VRAM stays where it was, well over 3,550 MB this time around.
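For anyone who wants to watch the same behaviour without an overlay, here is a minimal sketch (assuming Python and an NVIDIA driver with nvidia-smi on the PATH; this is just an illustration, not the tooling used for the test above):

```python
# Sketch: poll reported VRAM usage once per second via nvidia-smi.
# Assumes an NVIDIA driver with nvidia-smi available on the PATH.
import subprocess
import time

def vram_used_mib(gpu_index: int = 0) -> int:
    """Return current VRAM usage of one GPU in MiB, as nvidia-smi reports it."""
    out = subprocess.check_output([
        "nvidia-smi",
        f"--id={gpu_index}",
        "--query-gpu=memory.used",
        "--format=csv,noheader,nounits",
    ])
    return int(out.decode().strip())

if __name__ == "__main__":
    # Watch whether usage ever falls back below ~3,550 MB after
    # switching from 8x MSAA down to FXAA mid-session.
    while True:
        print(f"{time.strftime('%H:%M:%S')}  {vram_used_mib()} MiB")
        time.sleep(1)
```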

I recorded this video using Shadowplay. The stutter once I went back to FXAA is probably due to Shadowplay, as I didn't experience it myself while playing the game.


http://youtu.be/cvDdrmzmS4M


Has this thread seriously gone so far off the rails that people are now trying to find faults with AMD? Please. This issue is very apparent, otherwise it wouldn't be such a hot topic. Whatever is causing it, Nvidia needs to sort it out, or they're going to be hit where it hurts the most.

 

Also, people need to get their heads out of the damn sand and stop defending them and looking for excuses. Nvidia may allegedly be perfect, but they're not, and neither is AMD or Intel for that matter.

CPU: Intel Core i7 4790K @ 4.7GHz, 1.3v with Corsair H100i - Motherboard: MSI MPOWER Z97 MAX AC - RAM: 2x4GB G.Skill Ares @ 2133 - GPU1: Sapphire Radeon R9-290X BF4 Edition with NZXT Kraken G10 with a Corsair H55 AIO @ 1140/1650 - GPU2: PowerColor Radeon R9-290X OC Edition with NZXT Kraken G10 with a Corsair H55 AIO @ 1140/1650 - SSD: 256GB OCZ Agility 4 - HDD: 1TB Samsung HD103SJ - PSU: SuperFlower Leadex GOLD 1300w - Case: NZXT Switch 810 (White) - Case fans: NZXT Blue LED Fans - Keyboard: Steelseries Apex Gaming Keyboard - Mouse: Logitech G600 - Headphones: Logitech G930 - Monitors: ASUS PB287Q and Acer G246HYLbd - Phone: Sony Xperia Z1



Mantle/DirectX 12 has yet to be ported to a truly CPU-bound game like an MMO. Hell, even in an RTS they showed pretty decent improvements. A console port designed to run on a 1.6/1.7 GHz CPU? Yeah, don't expect to see much improvement. That has nothing to do with a low-level API not being a big improvement; it has to do with the game having to run on a freakin' mobile chip that our desktop CPUs blow away.

 

As far as Direct3D? That was purchased from another company; MS didn't even create the original. There was a GROUP that included MS that was going to go low level, and MS pulled out. You have to remember that 3DFX WAS the Nvidia of that time: they had a low-level API and kicked the absolute crap out of everything as far as gaming went. The company was run into the ground through tons of stupidity, and Nvidia ended up with a lot of their intellectual property when all was said and done. BTW, just like we have idiotic fanboys now, we had idiotic fanboys then. Nvidia was considered completely budget and TRASH for a long, long time. That is why it is so funny seeing Nvidia fanboys now. Things have never been as close to equal between two companies in terms of performance, yet people want to act as if the divide is night and day, lol.

 

Also? DirectX was absolute garbage until DirectX 9. People like John Carmack wrote open letters to MS blasting it and saying what a piece of crap it was. When it FINALLY got good around 9, Carmack praised it, because it had finally caught up to OpenGL.

CPU:24/7-4770k @ 4.5ghz/4.0 cache @ 1.22V override, 1.776 VCCIN. MB: Z87-G41 PC Mate. Cooling: Hyper 212 evo push/pull. Ram: Gskill Ares 1600 CL9 @ 2133 1.56v 10-12-10-31-T1 150 TRFC. Case: HAF 912 stock fans (no LED crap). HD: Seagate Barracuda 1 TB. Display: Dell S2340M IPS. GPU: Sapphire Tri-x R9 290. PSU:CX600M OS: Win 7 64 bit/Mac OS X Mavericks, dual boot Hackintosh.


Whatever is causing it, Nvidia needs to sort it out, or they're going to be hit where it hurts the most.

Of course. The problem is that we don't know what the cause is yet, or whether it's even fixable.

 

So in the meantime, the Nvidia fanboys will find issues with AMD in order to shut up the AMD fanboys who are making biased claims about Nvidia. Unfortunately, all this is doing is making people who bought a 970 feel bad about it and think the card is somehow going to perform worse.

 

People need to stop and take a breath; the world is not going to end.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  



I just want to see benchmarks at 1440p with frametimes in many of the new games where it could actually be an issue (Shadow of Mordor would be a good test). At 1080p? You aren't hitting that VRAM amount.

 

I think the reason people are worried is that a 970 is not a 1080p card. It is a 1440p card, or a 4K card in SLI, and we are seeing games' VRAM usage go up because high-resolution textures are the one thing the consoles can do well with their unified memory. The PS4 has been in the mid-2 GB range of VRAM usage per their tech articles (like the one on Second Son). That means the next texture setting up (for the games that decide to give us one) could see problems at 1440p.

 

Add to this that we have Nvidia DSR, and a card that might do really poorly with it because VRAM usage increases with resolution, and DSR was a major selling point of the card (we had GeDoSaTo before this, but many people don't use such things until they are easily accessible and usable).
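To put rough numbers on why resolution matters so much here, a back-of-envelope sketch (the buffer count below is a made-up illustration; real engines keep different sets of render targets, plus all the textures on top):

```python
# Back-of-envelope: how raw render-target memory scales with resolution.
# ASSUMED_TARGETS is a hypothetical buffer count for a deferred renderer;
# real games keep different (often larger) sets of buffers plus textures.
BYTES_PER_PIXEL = 4   # one 32-bit RGBA render target
ASSUMED_TARGETS = 8   # illustrative G-buffer/post-process buffer count

def target_memory_mib(width: int, height: int) -> float:
    return width * height * BYTES_PER_PIXEL * ASSUMED_TARGETS / 2**20

for name, (w, h) in {
    "1080p":      (1920, 1080),
    "1440p":      (2560, 1440),
    "4K via DSR": (3840, 2160),
}.items():
    print(f"{name:11s} {target_memory_mib(w, h):7.1f} MiB")
# 4K needs 4x the buffer memory of 1080p before a single texture is
# counted, which is why DSR pushes a 4GB card toward its limit so fast.
```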

 

So for the average user at 1080p? Yeah, not an issue at all. For someone with a 1440p monitor, or running that resolution through DSR, or running two cards in SLI? Yeah, I can understand why they might be upset and feel like they got ripped off, or sold something they were not expecting.

CPU:24/7-4770k @ 4.5ghz/4.0 cache @ 1.22V override, 1.776 VCCIN. MB: Z87-G41 PC Mate. Cooling: Hyper 212 evo push/pull. Ram: Gskill Ares 1600 CL9 @ 2133 1.56v 10-12-10-31-T1 150 TRFC. Case: HAF 912 stock fans (no LED crap). HD: Seagate Barracuda 1 TB. Display: Dell S2340M IPS. GPU: Sapphire Tri-x R9 290. PSU:CX600M OS: Win 7 64 bit/Mac OS X Mavericks, dual boot Hackintosh.



"otherwise it wouldn't be such a hot topic."

 

First it was a bunch of people yelling that the card only used 3GB.

Then it was a bunch of people yelling that the card only used 3.5GB.

Now it's a bunch of people using very general and incomplete data to jump to sweeping conclusions about memory bandwidth.

The Internet is the first thing that humanity has built that humanity doesn't understand, the largest experiment in anarchy that we have ever had.


NeoGAF user bootski posted a good example: http://www.neogaf.com/forum/showpost.php?p=149081753&postcount=700

 

For anyone wondering what a good test should look like, try to follow this.

 

I said it before and I'll say it again: increasing graphical options and fiddling with many different settings just to reach 4GB of VRAM usage is going to severely hurt performance anyway. The 970 is not strong enough to run the settings needed to break the 3.5GB VRAM mark, so it runs out of horsepower first. To test the theory properly, you would need a neutral way of increasing video memory usage without hurting performance, alongside an FCAT analysis of the high-memory-usage situation, because that would reveal any stuttering. Aside from that, there is no other way to properly test whether or not the 970 has a problem when accessing the 0.5 GB memory section.
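For what it's worth, the "neutral way of increasing video memory usage" could look something like this sketch (assuming Python with the CuPy library installed; whether the driver then actually maps the game into the slow 0.5 GB segment is exactly the open question, so treat this as a hypothesis to test, not a validated procedure):

```python
# Sketch: pin down idle "ballast" VRAM so a game's working set is pushed
# toward the top of the 4GB space without adding any GPU load ourselves.
# Assumes CuPy is installed; the ballast idea is this thread's hypothesis,
# not an official or validated test procedure.
import cupy as cp

def allocate_ballast(mib: int) -> cp.ndarray:
    """Allocate `mib` MiB of device memory and never touch it again."""
    buf = cp.zeros(mib * 2**20, dtype=cp.uint8)
    cp.cuda.Stream.null.synchronize()  # ensure the allocation really happened
    return buf

if __name__ == "__main__":
    # Park ~3 GiB, then run the game at settings that normally use ~1 GiB,
    # and capture frametimes (FCAT or similar) for the stutter analysis.
    ballast = allocate_ballast(3 * 1024)
    input("Ballast held; run the benchmark, then press Enter to release.")
    del ballast
```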



It's actually pretty simple to test.

You just have to use a 980 for comparison and see whether the average fps difference changes when you go over 3.5GB.

Example:

The 970 runs at 55fps and the 980 at 65fps under 3.5GB of usage.

Now if you turn up the settings to go over 3.5GB, normally this should happen: 970 at 45fps and 980 at 55fps.

But if the result is more like 970 at 35fps and 980 at 55fps, you'll know something isn't working correctly.
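Written out as a quick sanity check, using the example fps numbers above (a trivial sketch):

```python
# Sketch: compare 970/980 fps ratios below and above the 3.5GB mark.
# If both cards lose performance proportionally, the ratio barely moves;
# a big drop in the ratio means the 970 fell off a cliff on its own.
def scaling_anomaly(fps_970_low, fps_980_low, fps_970_high, fps_980_high):
    ratio_low = fps_970_low / fps_980_low     # relative speed under 3.5GB
    ratio_high = fps_970_high / fps_980_high  # relative speed over 3.5GB
    return ratio_low - ratio_high             # ~0 means both scaled alike

print(scaling_anomaly(55, 65, 45, 55))  # ~0.03 -> normal scaling
print(scaling_anomaly(55, 65, 35, 55))  # ~0.21 -> something is wrong
```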

 

RTX2070OC 


Mantle/DirectX 12 has yet to be ported to a truly CPU-bound game like an MMO. Hell, even in an RTS they showed pretty decent improvements. A console port designed to run on a 1.6/1.7 GHz CPU.

Against their own DirectX*

Nvidia's DirectX driver is currently miles better. Mantle pushed Nvidia to release 337.50 (google what that driver did, and ask yourself why it never added any performance in a GPU-bound game at 4K/8x MSAA).

 

Exactly. Nvidia's DirectX drivers were suffering from a shitload of CPU overhead as well; they fixed that and came on par with Mantle in terms of lifting CPU bottlenecks, just 5-10% behind. I don't understand why you keep bashing DirectX when both Nvidia and AMD had horrible DX drivers; those drivers were the reason for horrible performance as well, and now it's just AMD left with horrible DX drivers in terms of CPU overhead. With DX12 coming, if MS didn't lie, we'll hopefully get a 50% CPU performance improvement over Nvidia's DX11. That's a good 40% advantage over Mantle.

Futuremark is about to release an API CPU overhead benchmark comparing DX11, DX12 and Mantle; feel free to read up: http://www.futuremark.com/benchmarks/3dmark#api-overhead

MS ran that benchmark: http://youtu.be/6cOk5AeFyqo?t=1h16m19s

Just a few more months until this benchmark is out, and everyone will know how much of a liar AMD is.

Also, games ported from the console don't have to be CPU-bound on the console to be CPU-bound on the PC. Besides, Mantle has less to do with consoles than D3D does.

 

 

Nvidia was considered completely budget and TRASH for a long, long time. That is why it is so funny seeing Nvidia fanboys now.

My arguments were about AMD's horrible DX driver, which still isn't fixed 12 months after Mantle's release. Don't tell me the driver team wasn't aware of their DX overhead issues when they had just released the "first" low-level API that was supposed to put an end to CPU bottlenecks. It's not Nvidia vs AMD here, it's MS vs AMD.

I'm not going to defend Nvidia; they deserve the criticism. But what I find absolutely weird is that the 970 has probably sold over a million units, and four months later one guy discovers it can't use more than 3.5GB and it spreads across the net in a day or two. Meanwhile, AMD made a new API to counter CPU overhead and PR'ed it like they were selling eggs door to door, a lot of people hyped the fck out of it, Nvidia's 337.50 revealed Mantle is only good against AMD's own DX, and nobody was aware of the issue.


I said it before and I'll say it again: increasing graphical options and fiddling with many different settings just to reach 4GB of VRAM usage is going to severely hurt performance anyway.

Did you take a good look at his post? I never said it was the decisive, gold-standard test; if anything, it's a good example of an "in-game" experience. And by "FCAT" you mean Nvidia's tool (http://www.geforce.com/hardware/technology/fcat)? I'm not sure how that would make a difference; could you explain, please?

 

Also, your horsepower analogy is bad. The "four-legged horse" one a few pages back was better.


 

Nvidia's DirectX driver is currently miles better. Mantle pushed Nvidia to release 337.50.

Nvidia's DirectX driver isn't quite as magic as you make it out to be (though to be fair, Mantle isn't as magic as it's made out to be either). Only DirectX 11.1 running on Windows 8.1 with an Nvidia card can really compete with Mantle's draw call performance (and Mantle still has better draw call performance), and so far the only games that actually use DX11.1 (and they have Mantle support too anyway) are Frostbite 3 games. There's only so much magic their driver can do in DX11.0 games, unless they are somehow forcing all DX11 games to use multithreaded command lists...

 

I look forward to seeing the results of that API overhead benchmark, though; it should be interesting.


 

It's actually pretty simple to test. You just have to use a 980 for comparison and see whether the average fps difference changes when you go over 3.5GB.

They already tested your theory; there is no 20fps difference:

 

[benchmark chart: mm1d5.jpg]

 

The 980 runs out of horsepower the same way the 970 does after going over 3.5GB of VRAM.

 

 

Did you take a good look at his post? And by "FCAT" you mean Nvidia's tool? Could you explain, please?

I did look at the post; I read the whole thing. Fiddling with settings to break 3.5GB of VRAM is pointless. You will run out of horsepower way before you fill up your frame buffer. Everyone knows this; it should be common sense by now. Anyone trying to do this and showing it off as some sort of source or test is being nonsensical. You will see performance degradation regardless.

 

FCAT analysis shows frame times, to see whether there's variance in the frametimes, which comes across as stuttering; that's what people are trying to blame on the 0.5 GB sector being accessed.
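Roughly what such an analysis computes, as a sketch (this assumes you already have per-frame times in milliseconds from a capture tool; FCAT itself extracts them from colored overlay bars in captured video, which is skipped here):

```python
# Sketch: detect stutter from a list of frame times in milliseconds.
# A 99th-percentile frame far above the average reads as stutter even
# when the average fps number still looks perfectly healthy.
import statistics

def stutter_report(frametimes_ms):
    times = sorted(frametimes_ms)
    n = len(times)
    avg = statistics.mean(times)
    p99 = times[min(int(0.99 * n), n - 1)]
    return {"avg_ms": round(avg, 2), "p99_ms": p99, "p99/avg": round(p99 / avg, 2)}

smooth = [16.7] * 99 + [17.0]    # steady pacing, ~60fps
spiky  = [14.0] * 99 + [120.0]   # better average, but one huge hitch
print(stutter_report(smooth))    # ratio near 1.0 -> smooth
print(stutter_report(spiky))     # ratio near 8 -> visible stutter
```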



Haven't you bothered checking the thread I made at OCN? Otherwise you wouldn't be saying it's not magic when it's on par with AMD's own API overhead test, aka Star Swarm, which was designed to sell Mantle. There's only one game that has DX11.1, and that's BF4. Yeah, according to AMD, Mantle has better draw call performance; according to Nvidia, it's dramatically worse: http://www.pcper.com/files/imagecache/article_max_width/review/2014-03-22/09.jpg

"According to AMD, it will be possible to have up to 100K render calls per frames with Mantle, while today, optimized render codes achieve only 10k draw calls/frame."

http://www.geeks3d.com/20131113/amd-mantle-first-interesting-slides-and-details-target-100k-draw-calls-per-frame/

Check MS's DX12 presentation: DX11 was at 50K draw calls per frame and DX12 at 150K per frame. They even lied about DX11 draw calls per frame; it's like five times higher than what they claimed when pitching their own API.
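To see why those per-frame figures matter, it helps to turn them into a per-call CPU budget; this is simple arithmetic on the numbers quoted in this thread, a sketch rather than a benchmark:

```python
# Sketch: draw calls per frame are really a CPU time budget per call.
# The per-frame figures below are the ones quoted in this thread
# (AMD slides and MS's DX12 presentation).
TARGET_FPS = 60
FRAME_BUDGET_US = 1_000_000 / TARGET_FPS   # ~16,667 microseconds per frame

claims = {
    "typical optimized DX11 (AMD slide)": 10_000,
    "DX11 (MS DX12 presentation)":        50_000,
    "Mantle target (AMD slide)":         100_000,
    "DX12 (MS DX12 presentation)":       150_000,
}
for api, calls in claims.items():
    print(f"{api:36s} {FRAME_BUDGET_US / calls:5.2f} us per call, max")
# At 150k calls and 60fps each call gets ~0.11 us of CPU time, so
# submission has to be nearly free -- the whole point of a thin API,
# and why driver-side overhead dominates these comparisons.
```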


 

 

Fiddling with settings to break 3.5GB of VRAM is pointless. You will run out of horsepower way before you fill up your frame buffer. Everyone knows this; it should be common sense by now. Anyone trying to do this and showing it off as some sort of source or test is being nonsensical. You will see performance degradation regardless.

 

Quote for truth

 

It's absurd that we even have to keep telling people this over and over.


 

 

 


What I would like to know now is whether the 970 starts to stutter when it goes over 3.5GB the way it normally does when you run out of VRAM, because if it doesn't, then there is literally no reason for people to get upset.

RTX2070OC 


 

 

 

I did look at the post; I read the whole thing. Fiddling with settings to break 3.5GB of VRAM is pointless. You will run out of horsepower way before you fill up your frame buffer.

[image: facepalm.jpg]


So when the 8GB versions of the 980 and maybe the 970 come out, are they only going to have 7.5GB of memory?

WTF Nvidia?!!

I'm Batman!

Steam: Rukiri89 | uPlay: Rukiri89 | Origin: XxRukiriXx | Xbox LIVE: XxRUKIRIxX89 | PSN: Ericks1989 | Nintendo Network ID: Rukiri

Project Xenos: Motherboard: MSI Z170a M9 ACK | CPU: i7 6700k | Ram: G.Skil TridentZ 16GB 3000mhz | PSU: EVGA SuperNova 850w G2 | Case: Caselabs SMA8 | Cooling: Custom Loop | Still in progress 


 

What I would like to know now is whether the 970 starts to stutter when it goes over 3.5GB the way it normally does when you run out of VRAM, because if it doesn't, then there is literally no reason for people to get upset.

 

It's a normal occurrence when you run out of VRAM; people have been talking for ages upon ages about what happens when you run out of VRAM. That is literally no reason for people to get upset.

 

[image: facepalm.jpg]

 

If that's all you got, then

 

[image: large.gif]



Don't get me wrong, I'd agree Nvidia's DX11 driver is better optimized than AMD's, but it can't magically make all DX11 games perform with as low overhead as Mantle or DX12 (if DX11 could perform with as little overhead as DX12, there wouldn't be much point in DX12). With DX11.1/multithreaded command lists it can come pretty close, though. The big reason AMD's DX11 driver performs poorly in BF4 is that AMD never bothered implementing multithreaded command lists in their DX11.1 driver.

 

Do we know whether the Star Swarm benchmark is using DX11.1/multithreaded command lists in its DX11 renderer or not? I wasn't able to find any information confirming it either way. It would certainly be interesting if it was using DX11.0 and Nvidia still managed to get it performing on par with Mantle...

