
NVIDIA Responds to GTX 970 3.5GB Memory Issue

TheBoneyKing

-anip-

Good, then you can answer my previous questions properly and respectfully. Thank you.


Don't get me wrong, I'd agree Nvidia's DX11 driver is better optimized than AMD's, but it can't magically make all DX11 games perform with as low an overhead as Mantle or DX12. With DX11.1 multithreaded command lists it can come pretty close, though. The big reason AMD's DX11 driver performs poorly in BF4 is that AMD never bothered implementing multithreaded command lists in their DX11.1 driver.
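
For the curious, here is a toy, API-agnostic sketch of the idea behind multithreaded command-list recording. This is plain Python standing in for the D3D11 deferred-context pattern, not the actual API: worker threads record their draw commands into per-thread lists in parallel, and only the final submission is serialized.

```python
import threading

# Toy model of DX11.1-style multithreaded command-list recording:
# each worker records commands into its own list in parallel, and the
# main thread (the "immediate context") replays the finished lists in order.

def record_scene_chunk(chunk_id, draw_calls, out_lists):
    commands = []
    for i in range(draw_calls):
        # Recording is cheap and happens concurrently across threads.
        commands.append(f"draw(chunk={chunk_id}, call={i})")
    out_lists[chunk_id] = commands   # analogous to "finish command list"

def main():
    num_workers, calls_per_worker = 4, 3
    command_lists = [None] * num_workers

    workers = [threading.Thread(target=record_scene_chunk,
                                args=(i, calls_per_worker, command_lists))
               for i in range(num_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

    # Only this submission loop is serialized, which is where the driver
    # overhead argument above comes in.
    for commands in command_lists:
        for cmd in commands:
            print("execute:", cmd)

if __name__ == "__main__":
    main()
```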

 

Do we know whether the Star Swarm benchmark is utilizing DX11.1 multithreaded command lists in its DX11 renderer or not? I wasn't able to find any information confirming this either way. It would certainly be interesting if it was using DX11.0 and Nvidia still managed to get it performing on par with Mantle...

Star Swarm doesn't use DX11.1, and:

http://www.overclock.net/t/1528559/directx-driver-overhead-and-why-mantle-is-a-selling-point-bunch-of-benchmarks/0_100#post_23302772


Hate to say it, but you guys are really reminding me of people who are up in arms and threatening lawsuits because they got a new 32GB phone only to find out 4GB of it is taken up by the OS already.

 

Yes, that's true. There IS technically 4 GB of VRAM on the 970. The fact that 0.5 GB of it is effectively unusable, while every other card advertised with 4 GB of VRAM can actually address the full 4 GB, isn't really a big deal at all.

4K // R5 3600 // RTX2080Ti


Good, then you can answer my previous questions properly and respectfully. Thank you.

 

I answered your questions. If you do not understand, that is not my fault. If you think I was being disrespectful when you pulled a facepalm meme on me, then I do not think you understand the meaning of disrespectful. Instead, re-read what I said until you understand it, because it seems many other people understood it and you are the only one who did not. If you still do not understand after re-reading, then ask me questions about what you do not understand, and perhaps I can explain it in more detail.


It's a normal occurrence when you run out of VRAM; people have been talking for ages about what happens when you run out of VRAM. There's literally no reason for people to get upset.

 

 

 

 

That's not what I was trying to get at.

What I mean is: does it stutter at 3.5GB already, or does it start to stutter at 4GB, which it normally should if it's working correctly?

Because if it starts to stutter at 3.5GB already, I can understand why people are upset.

I want to see if this actually affects the gaming experience when going over 3.5GB, because as of right now all I'm seeing are benchmarks that have nothing to do with actual gaming.

RTX2070OC 


 

Good, then you can answer my previous questions properly and respectfully. Thank you.

What questions? He's mainly right that you'll run out of horsepower before you need that .5GB of VRAM, and that your frametimes will be high when you run out of VRAM. Wake up: the 970 has been sold who knows how many times, over a million, it took four months to find this issue, and this is the first time I've seen people complaining about stutters (lack of VRAM). Anyway, regardless of what I just said, Nvidia is required to fix this.
 

 

That's not what I was trying to get at.
What I mean is: does it stutter at 3.5GB already, or does it start to stutter at 4GB, which it normally should?
Because if it starts to stutter at 3.5GB already, I can understand why people are upset.

I want to see if this actually affects the gaming experience when going over 3.5GB, because as of right now all I'm seeing are benchmarks that have nothing to do with actual gaming.

When you need more than 3.5GB, say 3.6GB, it stutters. GPU load is an indicator, by the way: if you run out of VRAM, GPU usage will drop below 99%.
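
A minimal sketch of the kind of check being described here, assuming an NVIDIA driver with the standard nvidia-smi tool on the PATH; the 3500 MiB and 99% figures from this thread are used purely as illustrative cutoffs:

```python
import subprocess
import time

# Poll nvidia-smi once per second and flag the symptom described above:
# VRAM usage near the card's limit while GPU utilization falls below ~99%,
# which suggests the GPU is stalling on memory rather than being fully fed.
QUERY = ["nvidia-smi",
         "--query-gpu=memory.used,memory.total,utilization.gpu",
         "--format=csv,noheader,nounits"]

while True:
    lines = subprocess.check_output(QUERY, text=True).strip().splitlines()
    for gpu_index, line in enumerate(lines):
        used_mib, total_mib, util_pct = (int(x) for x in line.split(", "))
        starved = used_mib > 3500 and util_pct < 99   # illustrative thresholds
        print(f"GPU{gpu_index}: {used_mib}/{total_mib} MiB, {util_pct}% load"
              + ("  <- possible VRAM stall" if starved else ""))
    time.sleep(1.0)
```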


 

 

 

They already tested your theory; there is no 20fps difference:

 

[Benchmark chart comparing GTX 970 and GTX 980 framerates]

 

 

"We turned on graphical settings that happen to use more VRAM while taxing the card harder, watch how both cards see a drop in framerate!"

4K // R5 3600 // RTX2080Ti


"We turned on graphical settings that happen to use more VRAM while taxing the card harder, watch how both cards see a drop in framerate!"

You missed the point: if the card actually had a problem with VRAM, the 970 would drop much further than the 980, which it doesn't.

So right now this whole thread is pointless, because there is not a single piece of evidence that the card has an actual VRAM issue.

And until someone shows me that the 970 stutters at 3.5GB, which would be the case if it had a VRAM issue, I call BS on the whole thing.

RTX2070OC 


Reacting honestly is not disrespectful; it's humorous.

I did look at the post; I read the whole thing. Fiddling with settings to push past 3.5GB of VRAM is pointless. You will run out of horsepower way before you fill up your frame buffer. Everyone knows this. It should be common sense by now. Anyone trying to do this and showing it off as some sort of source or test is being nonsensical. You will see performance degradation regardless.

What is noteworthy in these tests is the GPU usage percentage behaving abnormally, not that the frame rate decreased or performance was lowered, which is an obvious effect of increasing the filters and graphics options to achieve the desired result.
 

FCAT analysis shows frame times, to see if there's variance in frametimes, which comes across as stuttering, and which people are trying to blame on the .5 GB segment whenever it gets accessed.

The MSI Afterburner test shows frametimes, so how does FCAT differ? Elaborate for full marks.

 

Maybe try not to lie to me, that would be nice. And horsepower is still not the right phrase to use.


Sorry if this has been mentioned already, as it is a long thread, but wouldn't a game like Skyrim with mods be a more accurate portrayal?


Sorry if this has been mentioned already, as it is a long thread, but wouldn't a game like Skyrim with mods be a more accurate portrayal?

 

 

Skyrim is an unoptimized heap of coding garbage; a fair test it would not make.


Skyrim is an unoptimized heap of coding garbage; a fair test it would not make.

And it's especially great when people pile heaps of totally unoptimized texture mods on top of it.


Sorry if this has been mentioned already, as it is a long thread, but wouldn't a game like Skyrim with mods be a more accurate portrayal?

 

The best way would be to edit the config .ini files of specific games so they detect the extra VRAM, as modern games like Advanced Warfare will readily try to fill it.

 

Realistically speaking, the fact that people have 970s that will only address 3.5 GB of VRAM is the real issue.

4K // R5 3600 // RTX2080Ti


Reacting honestly is not disrespectful; it's humorous.

What is noteworthy in these tests is the GPU usage percentage behaving abnormally, not that the frame rate decreased or performance was lowered, which is an obvious effect of increasing the filters and graphics options to achieve the desired result.

 

The MSI Afterburner test shows frametimes, so how does FCAT differ? Elaborate for full marks.

 

Maybe try not to lie to me, that would be nice. And horsepower is still not the right phrase to use.

 

If you honestly didn't understand what I was saying, then that's humorous. 

 

GPU usage behaving abnormally is a side-effect of the GPU trying to swap to system memory, which in turn is a side-effect of increasing filters and graphics options, because doing so increases VRAM utilization.

 

The difference is FCAT measures frame latency at the end of the rendering pipeline, instead of tapping into the video drivers themselves.
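
Whatever the capture point, what either tool is ultimately looking for in the numbers is variance and spikes in per-frame times rather than just the average FPS. A minimal sketch of that kind of summary; the log format and spike threshold here are hypothetical, not something from either tool:

```python
import statistics

def stutter_report(frametimes_ms):
    """Summarize a list of per-frame render times in milliseconds.

    A smooth run has frametimes clustered near the mean; stutter shows up
    as a high 99th percentile and isolated spikes, even when the average
    FPS still looks fine.
    """
    mean = statistics.fmean(frametimes_ms)
    p99 = sorted(frametimes_ms)[int(0.99 * (len(frametimes_ms) - 1))]
    spikes = [t for t in frametimes_ms if t > 2 * mean]   # illustrative cutoff
    return {
        "avg_fps": 1000.0 / mean,
        "mean_ms": mean,
        "p99_ms": p99,
        "spike_count": len(spikes),
    }

# Example: mostly ~16.7 ms frames (60 FPS) with a few 50 ms hitches.
log = [16.7] * 200 + [50.0] * 4
print(stutter_report(log))
```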

 

Lie to you? Huh? Am I getting trolled? Because now it sure feels like it.  :huh:

 

It's not? Please elaborate on why it's not, because maybe you simply do not understand the comparison. Just like you don't understand basic answers to your questions.


I just want to see benchmarks at 1440p with frametimes in many of the new games where it could possibly be an issue (Shadow of Mordor would be a good test). At 1080p? You aren't hitting that VRAM amount.

 

I think the reason people are worried is that a 970 is not a 1080p card. It is a 1440p card, or a 4K card in SLI, and we are seeing games' VRAM usage go up because high textures are the one thing the consoles can do well with their unified memory. The PS4 has been in the mid-2GB range on VRAM usage per their tech articles (like the one on Second Son). That means the next texture setting up (for the games that decide to give us one) could see problems at 1440p.

 

Add to this that we have Nvidia DSR, and a card that might do really poorly with it because VRAM usage increases with resolution, and DSR was a major selling point of the card (we had GeDoSaTo before this, but many people don't use such things until they are easily accessible and usable).

 

So for the average user at 1080p? Yeah, not an issue at all. For someone with a 1440p monitor, or running that resolution through DSR, or with two cards in SLI? Yeah, I can understand why they might be upset and/or feel like they got ripped off or were sold something they were not expecting.

 

I fully understand that people are annoyed, but why? The thing people keep carrying on about is that they claim the card won't do what they thought it should. However, there were plenty of benchmarks showing what the limit of this card was long before most purchased it. Everyone knew that the card would tank when it got pushed past a certain quality setting. The only thing that has changed is they have found out why.

 

The way I see it, the worst-case scenario is that the 970 will still perform as well as it did in all benchmarks. The best-case scenario is that Nvidia finds a solution, the card performs more like a 980, and everyone ends up with a card that performs better than expected. Under these conditions I can't see how anyone can complain they have bought a dud or aren't getting what they paid for.

Grammar and spelling are not indicative of intelligence/knowledge.  Not having the same opinion does not always mean a lack of understanding.


I fully understand that people are annoyed, but why? The thing people keep carrying on about is that they claim the card won't do what they thought it should. However, there were plenty of benchmarks showing what the limit of this card was long before most purchased it. Everyone knew that the card would tank when it got pushed past a certain quality setting. The only thing that has changed is they have found out why.

 

The way I see it, the worst-case scenario is that the 970 will still perform as well as it did in all benchmarks. The best-case scenario is that Nvidia finds a solution, the card performs more like a 980, and everyone ends up with a card that performs better than expected. Under these conditions I can't see how anyone can complain they have bought a dud or aren't getting what they paid for.

 

It seems like people were happy to get a card that wasn't beating a 980 (because it's a tier lower), but now that we know WHY this specific set of cards performs lower, people are going crazy asking for 980 performance out of a 970.

That's what I'm gathering from this thread, anyway.

Also, half the people posting here are telling everyone to just buy an R9 290.


I fully understand that people are annoyed, but why? The thing people keep carrying on about is that they claim the card won't do what they thought it should. However, there were plenty of benchmarks showing what the limit of this card was long before most purchased it. Everyone knew that the card would tank when it got pushed past a certain quality setting. The only thing that has changed is they have found out why.

 

The way I see it, the worst-case scenario is that the 970 will still perform as well as it did in all benchmarks. The best-case scenario is that Nvidia finds a solution, the card performs more like a 980, and everyone ends up with a card that performs better than expected. Under these conditions I can't see how anyone can complain they have bought a dud or aren't getting what they paid for.

 

I feel like everyone just wants to find something to complain about.

 

We could cure AIDS, cancer, and other diseases all in the same day, and somebody would somehow find something about it to complain about.


If you honestly didn't understand what I was saying, then that's humorous. 

 

GPU usage behaving abnormally is a side-effect of the GPU trying to swap to system memory, which in turn is a side-effect of increasing filters and graphics options, because doing so increases VRAM utilization.

 

The difference is FCAT measures frame latency at the end of the rendering pipeline, instead of tapping into the video drivers themselves.

 

Lie to you? Huh? Am I getting trolled? Because now it sure feels like it.  :huh:

 

It's not? Please elaborate on why it's not, because maybe you simply do not understand the comparison. Just like you don't understand basic answers to your questions.

Your attempts to insult my intelligence over the internet are small, and they only deserve a response because of the order in which I chose to respond to your statements.

 

And I've heard that answer before. Hmm, where was that? Oh yeah, everywhere else. Very original, and I like how you added nothing more to the previous speculations.

 

That's better, but only just. 8/Eight.

 

And you question my intelligence... or did you state it as fact? :unsure:

 

The definition and use are already predetermined: http://en.wikipedia.org/wiki/Horsepower https://www.wordnik.com/words/horsepower The way you used it is the informal sense and therefore lacks any specific quantity. In short, it leaves me questioning the validity of your statements and preferring something more precise, given the seriousness of the topic at hand.

 

Anything else?


You all don't get it.

 

This is the design flaw behind Maxwell. It's an unintended side effect. And yes, it is false advertising.

 

"From the Nai's Benchmark, assuming if the allocation is caused by disabled of SMM units, and different bandwidth for each different gpus once Nai's Benchmark memory allocation reaches 2816MiBytes to 3500MiBytes range, I can only assume this is caused by the way SMM units being disabled.

Allow me to elaborate my assumption. As  we know, there are four raster engines for GTX 970 and GTX 980.
Each raster engine has four SMM units. GTX 980 has full SMM units for each raster engine, so there are 16 SMM units.

GTX970 is made by disabling 3 of SMM units. What nvidia refused to told us is which one of the raster engine has its SMM unit being disabled.
I found most reviewers simply modified the high level architecture overview of GTX 980 diagram by removing one SMM unit for each three raster engine with one raster engine has four SMM unit intact.

First scenario
What if the first (or the second, third, fourth) raster engine has its 3 SMM units disabled instead of evenly spread across four raster engine?

Second scenario
Or, first raster engine has two SMM units disabled and second raster engine has one SMM unit disabled?

Oh, please do notice the memory controller diagram for each of the raster engine too. >.< If we follow the first scenario, definitely, the raster engine will not be able to make fully use of the memory controller bandwidth

64bit memory controller, total 4 memory controllers = 256 bit memory controller.
Assuming if there are 3 raster engines with each three has one SMM disabled leaving 1 raster engine with 4 SMM intact.
Mathematically ;
16 SMM = 256 bit = 4096 Mb
13 SMM = 208 bit = 3328 Mb

208 bit = effective width after disabling SMM with 256 bit being actual memory controller width

IT is hardware problem=GTX970 is 208bit card."

 

Source: https://forums.geforce.com/default/topic/803518/geforce-900-series/gtx-970-3-5gb-vram-issue/post/4430735/#4430735
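
For readers who want to check the quoted post's numbers, here is a minimal sketch of its proportional arithmetic, under its own (disputed) assumption that effective bus width and addressable memory scale linearly with the number of enabled SMM units:

```python
# Reproduces the quoted post's back-of-the-envelope figures under its own
# assumption that bus width and usable memory scale with enabled SMM count.
TOTAL_SMM = 16          # full GM204 (GTX 980)
BUS_WIDTH_BITS = 256    # 4 x 64-bit memory controllers
TOTAL_VRAM_MB = 4096

def effective_resources(enabled_smm: int) -> tuple[float, float]:
    """Return (effective bus width in bits, effective VRAM in MB)."""
    scale = enabled_smm / TOTAL_SMM
    return BUS_WIDTH_BITS * scale, TOTAL_VRAM_MB * scale

print(effective_resources(16))  # (256.0, 4096.0) -> GTX 980
print(effective_resources(13))  # (208.0, 3328.0) -> the post's GTX 970 figure
```

Treat this purely as the quoted poster's model of the cut-down chip, not an established explanation of the 3.5GB behaviour.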

4790k @ 4.6 (1.25 adaptive) // 2x GTX 970 stock clocks/voltage // Dominator Platnium 4x4 16G //Maximus Formula VII // WD Black1TB + 128GB 850 PRO // RM1000 // NZXT H440 // Razer Blackwidow Ultimate 2013 (MX Blue) // Corsair M95 + Steelseries QCK // Razer Adaro DJ // AOC I2757FH


And it's especially great when people pile heaps of totally unoptimized texture mods on top of it.

 

But it would be the same unoptimized heap in both cases?


I fully understand that people are annoyed, but why? The thing people keep carrying on about is that they claim the card won't do what they thought it should. However, there were plenty of benchmarks showing what the limit of this card was long before most purchased it. Everyone knew that the card would tank when it got pushed past a certain quality setting. The only thing that has changed is they have found out why.

 

The way I see it, the worst-case scenario is that the 970 will still perform as well as it did in all benchmarks. The best-case scenario is that Nvidia finds a solution, the card performs more like a 980, and everyone ends up with a card that performs better than expected. Under these conditions I can't see how anyone can complain they have bought a dud or aren't getting what they paid for.

 

The 970 was sold as a 4K solution in SLI. It was marketed as such everywhere. It was lauded as such. The benchmarks Nvidia linked mean nothing without frame latency; we don't know WTF they are benchmarking. It could be staring at the damn ground...

 

Like I said, the only game I can think of at 1440p that has high VRAM usage atm (besides supersampling AA/downsampling, which I also have a link for) is Shadow of Mordor maxed out without the ultra texture pack. Benchmark it with frame latency; the game has a damn Nvidia logo when you load it. I don't think a fairer test could be had.

 

So in a single-card configuration? Yup, it wouldn't mean much. Also, as for someone saying you would run into a horsepower problem before a VRAM problem? That is 100 percent BS. The 6GB VRAM textures in Shadow of Mordor prove that: my R9 290 runs into a VRAM problem with them, yet the game runs like butter at 1440p without them.

 

That is the case on something like a 760 with 4GB of VRAM (stupid card), but even that card could still benefit from 3GB (if there were a 3GB version). That is why you see an R9 280/X blowing 2GB cards away in Ryse with SSAA on, which is similar to Nvidia DSR/AMD VSR. Star Citizen is going to use CryEngine, like Ryse, and I bet a ton of people bought dual GTX 970s for that game. THOSE are the people who are probably upset and feel like they were sold snake oil. Texture packs bigger than what the consoles (which can use mid-2GB amounts) handle are not going to be limited to one game, and higher AA is here to stay. If the GTX 970 had been marketed as a single-card solution you would not see nearly as many complaints.

 

[Benchmark chart: Ryse at 1080p with 2x SSAA]

CPU:24/7-4770k @ 4.5ghz/4.0 cache @ 1.22V override, 1.776 VCCIN. MB: Z87-G41 PC Mate. Cooling: Hyper 212 evo push/pull. Ram: Gskill Ares 1600 CL9 @ 2133 1.56v 10-12-10-31-T1 150 TRFC. Case: HAF 912 stock fans (no LED crap). HD: Seagate Barracuda 1 TB. Display: Dell S2340M IPS. GPU: Sapphire Tri-x R9 290. PSU:CX600M OS: Win 7 64 bit/Mac OS X Mavericks, dual boot Hackintosh.


Don't go into science... ever. He gave three data points. You need to get your head out of your ass.

 

So do you. My R9 280x doesn't drop two thirds of its performance when it's sitting at 99% VRAM usage. Stay defensive.

FX 6300 @4.8 Ghz - Club 3d R9 280x RoyalQueen @1200 core / 1700 memory - Asus M5A99X Evo R 2.0 - 8 Gb Kingston Hyper X Blu - Seasonic M12II Evo Bronze 620w - 1 Tb WD Blue, 1 Tb Seagate Barracuda - Custom water cooling


So do you. My R9 280x doesn't drop two thirds of its performance when it's sitting at 99% VRAM usage. Stay defensive.

 

Keep being an asshole then, sure. You say it doesn't drop performance? Then prove it. You say something does drop performance? Then prove it. The guy I was responding to only gave three data points; have you really never taken a basic science class, to know that's a completely bullshit way of testing for something? All performance goes down when the load increases; that's what's expected. This is how goddamned skewed and bullshit a graph based on only three data points would be:

 

[Attached image: example graph drawn from only three data points]

 

If you don't see how that's stupid to use as a model... I have no words.

The Internet is the first thing that humanity has built that humanity doesn't understand, the largest experiment in anarchy that we have ever had.

