
The 970 3.5GB bug

Cheddle

Just because it doesn't affect YOU doesn't mean it isn't a problem.

 

Did I ignore the artifacts on the R9 series just because they didn't affect ME and my particular card?

 

Hell no, I gave AMD hell when that happened; a lot of 270X cards are affected by artifacting.

 

Just saying, most people here have been giving me shit about the bad drivers AMD launched 5 years ago,

but this Nvidia debacle? Shhh, just push it under the rug, because reasons.

I know it's a problem, since I used a synthetic test to see for myself, but as I've mentioned before, at 1080p the most VRAM usage I see is 3GB, so it has no effect at all in the games I play. And I won't be the only one who knows of it but has no real problems due to not being affected.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL



Be considerate of the people trying to run 4K with SLI 970s then, please.

 

They won't have much fun with this problem.



Honestly, if you've got the money for a decent 4K monitor and 970s in SLI, you'd go with 980s, because they are a lot better for it in 2-way SLI.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Are you serious? It's a problem that doesn't affect everyone right now, yes. But it is a problem that will affect a great many people eventually, and it has already affected many. Why are people pissed? Because Nvidia effectively dicked them over and made more than 10% of their available VRAM useless. What about SLI owners? What about games that simply need a ton of VRAM, or other VRAM-intensive tasks? This is a huge deal whether you like it or not, and the people affected have every right to be as pissed off as they are right now. It's false advertising, it's dishonest, and it's simply not okay to tell the affected people that it's fine because they don't notice it "yet". No, stop it.

I'm a GTX 970 SLI owner, and I approve this message.

Now all of a sudden 4K may be out of the question, even though I already have a 4K TV.




 

Thanks for posting, I was just telling the other guy to be considerate of 4K GTX 970 SLI users.

 

I hope Nvidia fixes the issue.


All 970s have the 'problem'; it's the way it's designed.

Here's a breakdown of what's happening on the 970:

1. Each SM (or SMM) is a multiprocessing unit that consists of 128 CUDA cores.

2. The GTX 970 is basically just a GTX 980 with a trio of SM units disabled; everything else is identical.

3. Each SM uses what are called 'crossbars' to access memory channels.

Since the GTX 970 has several SMs disabled, there is more load on fewer crossbars when accessing memory channels, so the 970 cannot access all 4GB of VRAM as efficiently as the GTX 980.

For this reason, Nvidia logically segmented the GTX 970's VRAM into 3.5GB and 0.5GB chunks as a performance optimization. This way the first 3.5GB of VRAM on the 970 maintains the same ratio of SM units to VRAM as the GTX 980, and the GTX 970 can access this 3.5GB chunk with full efficiency.

There's really no 'memory bug' with the GTX 970; the card is designed this way. People also need to note that the GTX 970 is a good $200 less than the GTX 980. Of course it isn't going to perform as well at very high resolutions as the GTX 980; if it did, there would be no reason for the GTX 980 to exist.

People are basically whipping themselves up into a frenzy because a card that is a cut-down version of the GTX 980... performs like a cut-down version of the GTX 980. This was nothing unexpected, and the card still performs as well as initial benchmarks indicated.
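For illustration only: the "logical segmentation" described above can be pictured as a two-pool allocator that prefers the full-speed 3.5 GiB segment and only spills into the slower 0.5 GiB segment once the fast pool is exhausted. This is a hypothetical sketch, not anything Nvidia has published about its driver; all names and numbers are made up for the example.

// Purely illustrative (hypothetical, not NVIDIA driver code): one way an
// allocator could prefer the full-speed 3.5 GiB segment and only spill into
// the slower 0.5 GiB segment once the fast pool is full.
#include <cstdint>
#include <cstdio>

constexpr uint64_t MiB       = 1024ull * 1024;
constexpr uint64_t FAST_POOL = 3584 * MiB;   // 3.5 GiB, full-bandwidth segment
constexpr uint64_t SLOW_POOL =  512 * MiB;   // 0.5 GiB, reduced-bandwidth segment

struct VramPools { uint64_t fastUsed = 0, slowUsed = 0; };

// Decide which segment a new allocation of `size` bytes lands in.
const char* place(VramPools& p, uint64_t size) {
    if (p.fastUsed + size <= FAST_POOL) { p.fastUsed += size; return "fast 3.5 GiB segment"; }
    if (p.slowUsed + size <= SLOW_POOL) { p.slowUsed += size; return "slow 0.5 GiB segment"; }
    return "out of VRAM";
}

int main() {
    VramPools pools;
    const uint64_t chunk = 256 * MiB;        // pretend a game allocates 256 MiB at a time
    for (int i = 1; i <= 16; ++i)            // 16 x 256 MiB = 4 GiB requested in total
        std::printf("allocation %2d (%.2f GiB requested so far) -> %s\n",
                    i, i * chunk / double(1024 * MiB), place(pools, chunk));
}

Run as-is, the first 14 pretend allocations land in the fast segment and the last two spill into the slow one, which is the behaviour the post above is describing.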

The part that you don't get is that you can't SLI cards with this issue, because in SLI they have the power to use the 4GB of RAM, but performance will go to shit regardless of the combined horsepower.

Once I get home later today I will run tests.



Honestly, if you've got the money for a decent 4K monitor and 970s in SLI, you'd go with 980s, because they are a lot better for it in 2-way SLI.

You can't be this arrogant. Just because they've got the money, should they just spend it left and right? SLI 970s WAS a decent way to run 4K for a decent amount of money. WAS.



It's downright embarrassing how many people defend this kind of behaviour. Seriously. Oh, it's "by design"? The GPU is supposed to work that way? Then why didn't Nvidia tell us that when people first mentioned the issue or, even better, when they released the damn product? Why did they need to "investigate" when they knew what was up?
People like to compare it to the 660 and its asymmetrical design, but guess what: that was known prior to launch, review sites focused on the issue, and everybody who informed themselves was well aware of that fact. What we have here is a completely different issue.
Nvidia advertised this card as a 4GB model; nobody knew that it was segmented into a "working" partition with enough speed and effectively a "broken" partition that simply can't keep up with the rest of the GPU.
People have every god damn right to complain about this, and before you start with your "you should have bought a 980 for 4K" bullshit, think about SLI owners, and think about the fact that nobody actually knew this was the case just a few weeks ago. Think about how much of an asshole you come across as when you tell the consumer that it's effectively their fault when they were being misled and when Nvidia fed them wrong information. This is absolutely disgusting behaviour if you ask me; you should ALWAYS side with the consumer before you side with a multi-billion dollar corporation.
I will gladly accept that we don't have all the information yet, I am well aware that this is still being investigated, but don't you dare try to shut up people who are affected by this, and don't you dare try to put the blame on them.
By the way, for everyone saying that the specific benchmark for this problem is flawed: there is a forum thread where someone uses an official Nvidia tool to stress the VRAM, and it shows the same results on a 970 while a 4GB 670 works just fine. That user also reports worse stuttering while gaming at 1440p compared to their 670s. I'll link it later, or maybe someone knows what I'm talking about.
So, yeah. Stop it. Let people talk and discuss the issue, especially when they're affected by it. This is a serious issue for many people, and they have the right to complain and the right to ask for a proper answer from Nvidia.

 

Here is the thread: http://www.overclock.net/t/1535502/gtx-970s-can-only-use-3-5gb-of-4gb-vram-issue/650#post_23458748

And here are the results from an official Nvidia tool:

[Screenshot: results from the Nvidia tool on a 4GB GTX 670]

[Screenshot: results from the Nvidia tool on a GTX 970]

 

Consistent performance for a 4GB 670 while the 970 completely craps itself once that infamous .5GB partition is being accessed.


We need to know minimum fps numbers.

 

Because a stutter is a sudden change in fps producing very low fps for a split second.

 

That's why minimum fps numbers are important.
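To make the minimum-fps point concrete, here is a small generic sketch (not tied to any particular capture tool, and the frametime numbers are made up) showing how minimum fps and the 1% low expose a single stutter that the average-fps figure hides.

// Illustrative only: how minimum fps and 1% lows expose stutter that an
// average-fps number hides. Frametimes are hypothetical, in milliseconds.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Mostly ~16.7 ms frames (60 fps) with one 70 ms hitch in the middle.
    std::vector<double> frametimesMs(300, 16.7);
    frametimesMs[150] = 70.0;                       // the stutter

    double totalMs = 0.0, worstMs = 0.0;
    for (double ft : frametimesMs) { totalMs += ft; worstMs = std::max(worstMs, ft); }

    std::vector<double> sorted = frametimesMs;      // slowest frames first
    std::sort(sorted.rbegin(), sorted.rend());
    size_t onePercent = std::max<size_t>(1, sorted.size() / 100);
    double onePercentLowMs = 0.0;
    for (size_t i = 0; i < onePercent; ++i) onePercentLowMs += sorted[i];
    onePercentLowMs /= onePercent;

    std::printf("average fps : %.1f\n", 1000.0 * frametimesMs.size() / totalMs);
    std::printf("minimum fps : %.1f\n", 1000.0 / worstMs);
    std::printf("1%% low fps  : %.1f\n", 1000.0 / onePercentLowMs);
}

The average comes out near 59 fps, while the minimum sits around 14 fps: exactly the kind of split-second drop being described.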

You completely missed my entire point though. If one is loading up a game with crazy settings like tons of downsampling + MSAA to force the GPU to allocate over 3.5GB of VRAM, then how the hell could they possibly determine that it was going over 3.5GB of VRAM that caused the lowest min fps spike? Even with min fps numbers it would be just as inconclusive, because at such intensive settings there will be other bottlenecks that could cause spikes.

 

Honestly, to truly test the real-world effect in games we'd probably need a driver from Nvidia that doesn't segment the RAM, so a game can allocate all 4GB without having ridiculous settings forced.


It's downright embarrassing how many people defend this kind of behaviour. Seriously. Oh, it's "by design"? The GPU is supposed to work that way? Then why didn't Nvidia tell us that when people first mentioned the issue or, even better, when they released the damn product? Why did they need to "investigate" when they knew what was up?

People like to compare it to the 660 and its asymmetrical design, but guess what: that was known prior to launch, review sites focused on the issue, and everybody who informed themselves was well aware of that fact. What we have here is a completely different issue.

Nvidia advertised this card as a 4GB model; nobody knew that it was segmented into a "working" partition with enough speed and effectively a "broken" partition that simply can't keep up with the rest of the GPU.

People have every god damn right to complain about this, and before you start with your "you should have bought a 980 for 4K" bullshit, think about SLI owners, and think about the fact that nobody actually knew this was the case just a few weeks ago. Think about how much of an asshole you come across as when you tell the consumer that it's effectively their fault when they were being misled and when Nvidia fed them wrong information. This is absolutely disgusting behaviour if you ask me; you should ALWAYS side with the consumer before you side with a multi-billion dollar corporation.

I will gladly accept that we don't have all the information yet, I am well aware that this is still being investigated, but don't you dare try to shut up people who are affected by this, and don't you dare try to put the blame on them.

By the way, for everyone saying that the specific benchmark for this problem is flawed: there is a forum thread where someone uses an official Nvidia tool to stress the VRAM, and it shows the same results on a 970 while a 4GB 670 works just fine. That user also reports worse stuttering while gaming at 1440p compared to their 670s. I'll link it later, or maybe someone knows what I'm talking about.

So, yeah. Stop it. Let people talk and discuss the issue, especially when they're affected by it. This is a serious issue for many people, and they have the right to complain and the right to ask for a proper answer from Nvidia.

 

Here is the thread: http://www.overclock.net/t/1535502/gtx-970s-can-only-use-3-5gb-of-4gb-vram-issue/650#post_23458748

And here are the results from an official Nvidia tool:

[Screenshot: results from the Nvidia tool on a 4GB GTX 670]

[Screenshot: results from the Nvidia tool on a GTX 970]

 

Consistent performance for a 4GB 670 while the 970 completely craps itself once that infamous .5GB partition is being accessed.

Oh... yet another CUDA benchmark. How are we sure that this benchmark isn't subject to some of the same flaws as Nai's? Just because it's an Nvidia tool? According to Nai, the issues seem inherent in how CUDA is designed:

 

The benchmark does not actually measure the DRAM bandwidth but the bandwidth of CUDA's "global memory". Global memory in CUDA is a virtual memory space that includes not only the GPU's DRAM but also DRAM regions on the CPU side. A virtual memory space is always characterized by virtual memory addresses, which are translated *somehow* into the actual physical DRAM addresses on each memory access. One advantage of such a virtual memory space, among other things, is that it allows memory regions to be swapped out. Exactly this swapping out of the GPU's DRAM seems to be what causes the slump in the benchmark here.

The benchmark allocates as many blocks of global memory within the GPU's DRAM as possible. It then measures the read bandwidth within each of these blocks. The measurement is only good as long as the allocated blocks really reside in the GPU's DRAM; in that case the memory bandwidth is measured relatively accurately.

The problem, however, is that the benchmark does not own the GPU's DRAM entirely by itself. Windows and various programs running in the background also claim some of the GPU's DRAM. The virtual memory space nevertheless guarantees the benchmark that it may use almost the entire 4 GiB (except for the area of the primary surface). Therefore the allocations do not start failing until the upper limit of about 4 GiB is reached. But the GPU must now begin to swap memory areas: while the CUDA application is running, its data should be in the GPU's DRAM; when the Windows rendering process or another application draws, its data should be in the GPU's DRAM instead.

According to my first simple investigations, the GPU seems to do this swapping in a simply associative manner. That is, the same CUDA data or DirectX data must always reside at a given physical address in the GPU's DRAM. This leads to conflicts, which manifest themselves as the collapse of global memory bandwidth in those conflict areas.

The benchmark tries to reduce this effect by requesting the data in each memory area repeatedly in succession. That is, the first request "should" cause a page fault, the page fault "should" copy the page from CPU DRAM to GPU DRAM, and the remaining global memory accesses would then run at full DRAM bandwidth. At least that was my assumption.

Interestingly, the GPU does not behave as I expected. On a page fault in CUDA the GPU does not seem to upload the corresponding data into its own DRAM, but instead requests it directly from CPU DRAM on every single memory access. So in such cases the benchmark measures, more or less, the swapping behaviour of CUDA and not the DRAM bandwidth.

All of this is easy to verify by letting applications run in the background that consume a lot of the GPU's DRAM, so that more swapping has to take place. In that case the benchmark collapses as well.
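For anyone curious what such a test roughly looks like, below is a minimal CUDA sketch of the same general idea. This is not Nai's code and not the Nvidia tool; the chunk size, kernel and names are my own assumptions. It grabs VRAM in fixed-size chunks until allocation fails, then times a streaming read over each chunk; on a 970, chunks that land in the last ~0.5 GiB, or that get swapped out as described above, would be expected to report a much lower figure.

// Minimal sketch in the spirit of Nai's benchmark (not his actual code):
// fill the GPU with fixed-size chunks, then time a streaming read over each
// chunk and print the resulting bandwidth.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void readChunk(const float* data, size_t n, float* sink) {
    float acc = 0.0f;
    for (size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x; i < n;
         i += (size_t)gridDim.x * blockDim.x)
        acc += data[i];                            // stream-read the whole chunk
    if (threadIdx.x == 0) sink[blockIdx.x] = acc;  // keep the read from being optimized away
}

int main() {
    const size_t chunkBytes = 128ull << 20;        // 128 MiB per chunk
    const size_t n = chunkBytes / sizeof(float);

    float* dSink = nullptr;                        // allocate bookkeeping and events
    cudaMalloc(&dSink, 1024 * sizeof(float));      // before the GPU is full
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    std::vector<float*> chunks;                    // grab VRAM until allocation fails
    for (float* d = nullptr; cudaMalloc(&d, chunkBytes) == cudaSuccess; )
        chunks.push_back(d);
    cudaGetLastError();                            // clear the expected out-of-memory error

    for (size_t c = 0; c < chunks.size(); ++c) {
        cudaEventRecord(start);
        readChunk<<<1024, 256>>>(chunks[c], n, dSink);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        std::printf("chunk %3zu (%5.2f GiB allocated): %7.1f GB/s\n",
                    c + 1, (c + 1) * (chunkBytes / double(1ull << 30)),
                    (chunkBytes / 1e9) / (ms / 1e3));
    }
    return 0;
}

Compiled with nvcc, each output line shows how much VRAM has been touched so far and the measured read bandwidth for that chunk; as Nai notes, background applications using VRAM will also drag the later chunks down, which is why results from tools like this need careful interpretation.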


You completely missed my entire point though. If one is loading up a game with crazy settings like tons of downsampling + MSAA to force the GPU to allocate over 3.5GB of VRAM, then how the hell could they possibly determine that it was going over 3.5GB of VRAM that caused the lowest min fps spike? Even with min fps numbers it would be just as inconclusive, because at such intensive settings there will be other bottlenecks that could cause spikes.

 

Honestly, to truly test the real-world effect in games we'd probably need a driver from Nvidia that doesn't segment the RAM, so a game can allocate all 4GB without having ridiculous settings forced.

 

Because people with two 970s have had this problem.

 

The VRAM usage goes over 3.5GB and they experience frame drops that don't make sense considering the power of their SLI solution.


"bugged" VRAM or not you're getting 4GB of VRAM. I don't see people complaining about the way dual GPU cards like the TitanZ are marketed. It's sold as a 12GB card but only 6GB can be used effectively. Does that mean it's being falsely advertised? No, it still has 12GB on the card. Same applies to the 970. While 0.5GB of it will be slower, you still get 4GB of VRAM.

 

 

Am I saying Nvidia is right in doing this? No. I don't agree with it any more than the next guy; I think it's a dick move on Nvidia's part. But if people are going to start throwing this "false advertisement" BS around, then it's something that needs pointing out.


"bugged" VRAM or not you're getting 4GB of VRAM. I don't see people complaining about the way dual GPU cards like the TitanZ are marketed. It's sold as a 12GB card but only 6GB can be used effectively. Does that mean it's being falsely advertised? No, it still has 12GB on the card. Same applies to the 970. While 0.5GB of it will be slower, you still get 4GB of VRAM.

 

 

Am I saying Nvidia is right in doing this? No. I don't agree with it as much as the next guy, I think it's a dick move on Nvidia's part, but if people are going start throwing around this "false advertisement" bs around then it's something that needs pointing out.

 

Just recently it came to light that Nvidia lied about the cache size and the number of ROP units. The spec sheet that we all knew for the past months was wrong. "Miscommunication" my ass; if you want to tell me that not a single one of their engineers saw the publicly available spec sheet and said "erm, this is actually wrong", then you're just delusional. They did this to increase their yields: even with a broken unit of L2 cache they can still sell a chip as a 970, whereas before Maxwell they would have had to cut it down to a lower-end chip. This is all a cost-saving measure, and the consumer pays the price in the end.

 

Oh... yet another CUDA benchmark. How are we sure that this benchmark isn't subject to some of the same flaws as Nai's? Just because it's an Nvidia tool? According to Nai, the issues seem inherent in how CUDA is designed:

 

How about the fact that a 4GB GTX 670 works perfectly fine while a GTX 970 doesn't? That's by no means conclusive, but it's a strong indication that something isn't right here. Also, the major point of my post wasn't this one benchmark or that one forum post, it's the behaviour of some people here. Let the affected people do their testing and let them complain; they have all the right in the world to be fucking pissed right about now. I will not tolerate someone who is trying to shut up people who are affected by this issue. Show a bit of empathy, for fuck's sake.


Just recently it came to light that Nvidia lied about the cache size and the number of ROP units. The spec sheet that we all knew for the past months was wrong. "Miscommunication" my ass; if you want to tell me that not a single one of their engineers saw the publicly available spec sheet and said "erm, this is actually wrong", then you're just delusional. They did this to increase their yields: even with a broken unit of L2 cache they can still sell a chip as a 970, whereas before Maxwell they would have had to cut it down to a lower-end chip. This is all a cost-saving measure, and the consumer pays the price in the end.

 

 

How about the fact that a 4GB GTX 670 works perfectly fine while a GTX 970 doesn't? That's by no means conclusive, but it's a strong indication that something isn't right here. Also, the major point of my post wasn't this one benchmark or that one forum post, it's the behaviour of some people here. Let the affected people do their testing and let them complain; they have all the right in the world to be fucking pissed right about now. I will not tolerate someone who is trying to shut up people who are affected by this issue. Show a bit of empathy, for fuck's sake.

I can absolutely understand people being upset (and I am as well) about the incorrect L2 cache and ROP information on the spec sheet; intentional or not, that is false advertising, and Nvidia deserves to be in hot water over it. But there is not and never was any 'memory bug' with the 970: it does have 4GB of VRAM, and it's functioning as it was designed to.



 

I'm not denying that. Yes, it physically has 4GB of memory installed. Yes, they designed it in such a way and it is "working as intended" with its "slow" partition and its regular partition. But that doesn't change the fact that they didn't disclose it properly. People (and especially reviewers) deserve to know those technicalities, and even if it's not false advertising it is still a dick move on their part. The guy who did the benchmark (the one I linked) said that his two 970s offer a lesser experience compared to his two 4GB 670s when gaming at 1440p in some titles. He's complaining about stuttering in certain games, and the benchmark clearly shows why. The memory performance grinds to a halt when the last 0.5GB is being accessed, so even if he gets higher fps with the 970s, the game will stutter in certain scenarios. This doesn't happen in all games, and only a handful of people are actually directly affected by it, but the memory layout is an actual problem for some. And yes, some people probably wouldn't have bought 970s if they had known that they can effectively only use 3.5GB of VRAM without any problems.

