
nVidia has a GTX1060 with only 3GB of VRAM, but that's not all

zMeul

VRAM is a pointless debate. With 4GB of HBM the Fury series still works wonderfully, showing it depends on the type of VRAM and not just the size of it.

The debate should really be about the deceptive name of the card, since, as some have pointed out, it isn't a 1060 with less VRAM; it's a 1060 with less VRAM and fewer cores. Even in workloads that stay under the 3GB VRAM limit, the two 1060s won't perform the same, since one has about 10% fewer cores to render frames. It's deceptive because people buying it assume the only change is the VRAM, because that's the only change Nvidia talks about.


10 minutes ago, laminutederire said:

VRAM is a pointless debate. With 4GB of HBM the Fury series still works wonderfully, showing it depends on the type of VRAM and not just the size of it.

The debate should really be about the deceptive name of the card, since, as some have pointed out, it isn't a 1060 with less VRAM; it's a 1060 with less VRAM and fewer cores. Even in workloads that stay under the 3GB VRAM limit, the two 1060s won't perform the same, since one has about 10% fewer cores to render frames. It's deceptive because people buying it assume the only change is the VRAM, because that's the only change Nvidia talks about.

As I've said, they've done this before, back around 2004-2005. And things like this are only done to mislead people.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


27 minutes ago, Dabombinable said:

As I've said, they've done this before, back around 2004-2005. And things like this are only done to mislead people.

I know you've said it; I just wanted to get people back onto that, since it's the real issue. Unless the card only has 2.5GB of VRAM this time :P


1 hour ago, Lays said:

I was just doing the same when I was using a 780 Lightning as a temporary card until my 1080 showed up after I sold my 980 ti.

 

3GB is fine on most of these cards; you're going to run out of GPU horsepower before you run out of VRAM.

Oh, it's you.

Yes, I keep hearing ALL Nvidia users say this.

May I ask: is there a difference in the VRAM chips that AMD and Nvidia buy, and in how they fundamentally function?

Because for some odd fucking reason, AMD cards tend to do really badly once they go over their VRAM limit, whilst Nvidia cards seem to just "work flawlessly without any hitches", even though anyone with an ounce of knowledge about GPUs knows that going over your VRAM limit should make the experience horrible.

Also, check out this post and read/watch it ALL before bothering to reply.

 

 


1 hour ago, laminutederire said:

VRAM is a pointless debate. With 4GB of HBM the Fury series still works wonderfully, showing it depends on the type of VRAM and not just the size of it.

The debate should really be about the deceptive name of the card, since, as some have pointed out, it isn't a 1060 with less VRAM; it's a 1060 with less VRAM and fewer cores. Even in workloads that stay under the 3GB VRAM limit, the two 1060s won't perform the same, since one has about 10% fewer cores to render frames. It's deceptive because people buying it assume the only change is the VRAM, because that's the only change Nvidia talks about.

Check the following post.

 

 

Bandwidth only alleviates SOME of the issues with a VRAM limit. It is not a solution, nor a "hugely influential factor".
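
Some rough back-of-the-envelope numbers make the point; the figures below are assumed stock 1060-class specs, not measurements. Whatever spills out of VRAM has to travel over PCIe, which is an order of magnitude slower than the on-card memory, no matter how fast the GDDR5 is:

# Assumed stock 1060-class figures, for illustration only
gddr5_rate_gbps = 8      # GDDR5 data rate per pin (Gb/s)
bus_width_bits  = 192    # memory bus width
vram_bw_gb_s    = gddr5_rate_gbps * bus_width_bits / 8   # ~192 GB/s on-card
pcie3_x16_gb_s  = 16     # ~16 GB/s each way over PCIe 3.0 x16

print(f"On-card VRAM bandwidth : {vram_bw_gb_s:.0f} GB/s")
print(f"PCIe 3.0 x16 bandwidth : ~{pcie3_x16_gb_s} GB/s")
print(f"Overflow path is roughly {vram_bw_gb_s / pcie3_x16_gb_s:.0f}x slower")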


29 minutes ago, Prysin said:

Oh, it's you.

Yes, I keep hearing ALL Nvidia users say this.

May I ask: is there a difference in the VRAM chips that AMD and Nvidia buy, and in how they fundamentally function?

Because for some odd fucking reason, AMD cards tend to do really badly once they go over their VRAM limit, whilst Nvidia cards seem to just "work flawlessly without any hitches", even though anyone with an ounce of knowledge about GPUs knows that going over your VRAM limit should make the experience horrible.

Also, check out this post and read/watch it ALL before bothering to reply.

 

 

TurboCache was supposed to be a way to utilise system memory with minimal impact on GPU performance. That's kind of hard to do with the bandwidth of SDRAM and DDR SDRAM (which were the main types of RAM around when they released the TurboCache 6200/GeForce 6100). Perhaps Nvidia has been working on it behind the scenes over the years.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Just now, Dabombinable said:

TurboCache was supposed to be a way to utilise system memory with minimal impact on GPU performance. That's kind of hard to do with the bandwidth of SDRAM and DDR SDRAM (which were the main types of RAM around when they released the TurboCache 6200/GeForce 6100). Perhaps Nvidia has been working on it behind the scenes over the years.

No, we would know if they had, as any additional RAM or cache on their GPUs would be seen during a teardown.

If their driver reserved sections of system memory, then we would have seen higher system memory usage with Nvidia cards than with AMD cards. That is also something that sooner or later would have been picked up on by the press and reported.

In the olden days you may have been correct, but in the current era there is no evidence supporting even the possibility of Nvidia having such a system.

It would be nice if they did, but alas, the hardware does not seem to be there.


Just now, Prysin said:

No, we would know if they had, as any additional RAM or cache on their GPUs would be seen during a teardown.

If their driver reserved sections of system memory, then we would have seen higher system memory usage with Nvidia cards than with AMD cards. That is also something that sooner or later would have been picked up on by the press and reported.

In the olden days you may have been correct, but in the current era there is no evidence supporting even the possibility of Nvidia having such a system.

It would be nice if they did, but alas, the hardware does not seem to be there.

Note: I said system memory, as in the main RAM, not extra stuff tacked on. It was their way to cut corners by a shitload and sell a card with 16/32/64MB of VRAM as having 512MB, for example. It's their take on what GPUs normally do when maxing out their VRAM, just supposedly with a much smaller performance hit.

This should help clear things up:

http://www.tomshardware.com/reviews/nvidia-geforce-6200-turbocache,973-2.html
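
Roughly, the marketing math behind those cards looked like the sketch below (the exact split is illustrative, not a spec sheet):

# Hypothetical TurboCache-style arithmetic; the numbers are illustrative only
physical_vram_mb   = 64          # memory actually soldered on the card
advertised_mb      = 512         # the "supported" figure on the box
borrowed_sysram_mb = advertised_mb - physical_vram_mb   # system RAM addressed over PCIe

print(f"Physical VRAM       : {physical_vram_mb} MB")
print(f"Borrowed system RAM : {borrowed_sysram_mb} MB")
print(f"Marketed as         : {advertised_mb} MB")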

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


1 minute ago, Dabombinable said:

Note: I said system memory, as in the main RAM, not extra stuff tacked on. It was their way to cut corners by a shitload and sell a card with 16/32/64MB of VRAM as having 512MB, for example. It's their take on what GPUs normally do when maxing out their VRAM, just supposedly with a much smaller performance hit.

This should help clear things up:

http://www.tomshardware.com/reviews/nvidia-geforce-6200-turbocache,973-2.html

Yes, that is due to the GPU using DDR memory, not GDDR.

DDR has lower latency, but lower bandwidth and speed.

GDDR has higher latency, but much higher bandwidth and speed.
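
The trade-off is easy to see if you just run the bandwidth formula; the parts below are assumed examples to show the scale, not exact specs:

# Theoretical bandwidth = data rate per pin (Gb/s) * bus width (bits) / 8
def bandwidth_gb_s(rate_gbps: float, bus_bits: int) -> float:
    return rate_gbps * bus_bits / 8

ddr_era_card = bandwidth_gb_s(0.55, 64)   # roughly DDR-550 on a 64-bit bus (6200 TC era)
gddr5_1060   = bandwidth_gb_s(8.0, 192)   # 8 Gb/s GDDR5 on a 192-bit bus

print(f"DDR-era card : ~{ddr_era_card:.1f} GB/s")
print(f"GDDR5 1060   : ~{gddr5_1060:.0f} GB/s")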


3 minutes ago, Prysin said:

Yes, that is due to the GPU using DDR memory, not GDDR.

DDR has lower latency, but lower bandwidth and speed.

GDDR has higher latency, but much higher bandwidth and speed.

Just read the page; it explains how it works, and gives insight into why Nvidia's current GPUs would see minimal impact from going over their frame buffer. It's still a shitty thing, but at least they aren't completely fucking people over, in the sense that the advertised amount is the physical VRAM, not the total amount the GPU can address from system memory plus its frame buffer.

Edit: BTW, I do have an EVGA version with DDR2 and 512MB of addressable VRAM.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


5 minutes ago, Dabombinable said:

Just read the page; it explains how it works, and gives insight into why Nvidia's current GPUs would see minimal impact from going over their frame buffer. It's still a shitty thing, but at least they aren't completely fucking people over, in the sense that the advertised amount is the physical VRAM, not the total amount the GPU can address from system memory plus its frame buffer.

Edit: BTW, I do have an EVGA version with DDR2 and 512MB of addressable VRAM.

Yes, the technology COULD BE IN PLACE. But read the post I made before.

If this had been a thing with MODERN GDDR5 or GDDR5X GPUs, then we would have seen higher system memory allocation in games when looking at Nvidia vs AMD, as AMD has no such feature (that we know of).
If the memory allocation is dynamic, we should see a large change in private main memory being allocated when nearing the VRAM limit. We did not hear any reports of that during the 3.5GB drama, nor have we heard about it before.

If this is something that Nvidia added back in 2005 or earlier, then reviewers WOULD HAVE KNOWN and WOULD HAVE LOOKED FOR IT during the 3.5GB drama, because it would be one of those things the Nvidia shill reviewers out there could use to marginalize the whole issue.

It is also a technology that only makes sense with matching memory formats. Back then the GPU used the same type of memory as the system, so the memory controller ON THE GPU could easily talk to the system memory, since it would be using the same protocols, the same commands and the same fucking memory structure.
Today's memory controllers are specifically made for GDDR5. The bus is not designed for DDR, nor is the controller tuned for it. DDR also cannot be used reliably with GPUs, as it is low-latency memory and requires a faster, more precise memory controller, whereas GDDR5 controllers can be a bit slower but work on high throughput instead.

Again, while this system would be a nifty feature, there is no real basis for it existing today. It would also require Nvidia to spend A LOT of resources optimizing it in order to make every game use it.

Not to mention it would not work with DX12, where the GPU driver is much more barebones than under DX11, meaning it would be a waste of money to even have it.
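
For what it's worth, that claim is easy to check: log VRAM usage and the game's system memory usage side by side while pushing past the limit and see if main-memory use balloons. A rough sketch of the idea in Python (nvidia-smi and psutil are just my tooling assumptions here, not anything the driver exposes specifically for this):

import subprocess
import time
import psutil  # assumed installed: pip install psutil

def vram_used_mb() -> int:
    # Ask the driver how much VRAM is in use (NVIDIA only; needs nvidia-smi on PATH)
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"]
    )
    return int(out.decode().splitlines()[0])

def game_ram_mb(pid: int) -> float:
    # Resident system memory of the game process (a rough proxy for its working set)
    return psutil.Process(pid).memory_info().rss / (1024 * 1024)

def log(pid: int, seconds: int = 60) -> None:
    for _ in range(seconds):
        print(f"VRAM {vram_used_mb():5d} MB | game RAM {game_ram_mb(pid):7.0f} MB")
        time.sleep(1)

If system memory jumped as VRAM filled up, that would have been the smoking gun, and nothing like it was reported during the 3.5GB drama.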


I read through all the fuss around here, and it got me wondering.

If Nvidia is so far ahead of AMD in terms of GPU VRAM and the technology itself, then why all the fuss about the GTX 1060 3GB being a shitty card? The way I see it, it's an opportunity for Nvidia to slot this card in between the RX 460 and RX 470 at a price around 150 euros.
The other thing is, GTA V is mentioned as a benchmark here. I play it on my GTX 750 OC 2GB and it holds up pretty well at 45-60 fps on High settings with all the FXAA things turned down. Is the VRAM amount really the issue, or the technology behind it?


1 hour ago, Prysin said:

Oh, it's you.

Yes, I keep hearing ALL Nvidia users say this.

May I ask: is there a difference in the VRAM chips that AMD and Nvidia buy, and in how they fundamentally function?

Because for some odd fucking reason, AMD cards tend to do really badly once they go over their VRAM limit, whilst Nvidia cards seem to just "work flawlessly without any hitches", even though anyone with an ounce of knowledge about GPUs knows that going over your VRAM limit should make the experience horrible.

Also, check out this post and read/watch it ALL before bothering to reply.

 

 

I mean, I don't play the most demanding games ever, but usually in games where I didn't have enough VRAM, I was already experiencing low frame rates before I hit VRAM limitations. In my experience with an original Titan, two 780s, two 970s, a 290X, a 980, a 980 Ti and now a 1080, if I was near the VRAM limit with a single card, it was usually because I had so many settings cranked that I was already in "uncomfortable" frame rate territory.

This may not be the case for the 980 Ti and 1080 I've had most recently, but honestly, if I could somehow use 6 or 7GB of VRAM on my 1080, I'd probably already be on the low-FPS side of things due to the massive amount of resolution/settings I'd have to crank up.



8 hours ago, Accursed Entity said:

And here I am with a GTX 780 3GB playing at 1440p, lol. What the hell are you guys arguing about, 3GB being too low...

 

3 hours ago, Lays said:

I was just doing the same when I was using a 780 Lightning as a temporary card until my 1080 showed up after I sold my 980 ti.

 

3GB is fine on most of these cards; you're going to run out of GPU horsepower before you run out of VRAM.

Thank you, first-hand users/owners of 3GB cards!

This thread seems to be an argument for the sake of argument.

There seem to be a few points that show what 3GB of VRAM means for performance:

1: 3GB is enough VRAM... adjust settings to match your hardware, because you're on PC and you can!
2: 3GB seems a good match for the performance of the card! In most cases, having more VRAM on a card doesn't mean the card is powerful enough to use all of it.
3: AMD seems to add VRAM to their cards as a selling point...
4: I agree, and believe that this card should be called the GTX 1050 or 1050 Ti.

Because 8GB would be better than 4GB on a card like a GTX 960/GTX 1060 just like 32GB of system memory is better than 16GB for your Pentium G3258... more does not mean faster, better, or more usable.
More is not always better, and when you don't have enough in a few games... adjust settings. It's not a console.

 


Just now, Lays said:

I mean, I don't play the most demanding games ever, but usually in games where I didn't have enough VRAM, I was already experiencing low frame rates before I hit VRAM limitations. In my experience with an original Titan, two 780s, two 970s, a 290X, a 980, a 980 Ti and now a 1080, if I was near the VRAM limit with a single card, it was usually because I had so many settings cranked that I was already in "uncomfortable" frame rate territory.

This may not be the case for the 980 Ti and 1080 I've had most recently, but honestly, if I could somehow use 6 or 7GB of VRAM on my 1080, I'd probably already be on the low-FPS side of things due to the massive amount of resolution/settings I'd have to crank up.

Well, I would agree. But I did these tests at 1080p in windowed mode to eliminate the ability for CrossFire to even work (only SLI -can- work outside of fullscreen, and only in non-UWP fullscreen, aka Win 7 / Win 8 / Win 8.1).

As you can see, my FPS takes a nasty hit from going just a little over, despite having overclocked memory that allows for more memory bandwidth than even most OC'd 980 Tis.
So despite all this, despite having plenty of power for 1080p, I am getting shafted by the VRAM limit.

A 970 should be able to power 4GB of VRAM at 1080p. So should a 1060, given it is actually a bit faster than a 970.
Meaning 4GB is actually not enough.

Yes, Ultra in Rainbow Six Siege requires you to download a 4K texture DLC pack, which is free. But still, I can run Rainbow Six Siege just fine at 1080p on an A10-7870K and a GTX 950 Mini... so my i7 + 295X2 outright destroys that game. Despite this, I am getting shafted by a lack of VRAM.

I already discovered that the 3GB on my 7950s wasn't enough to play with 4K texture mods in Skyrim; I found that out all the way back in 2013...


1 hour ago, Prysin said:

Oh, it's you.

Yes, I keep hearing ALL Nvidia users say this.

May I ask: is there a difference in the VRAM chips that AMD and Nvidia buy, and in how they fundamentally function?

Because for some odd fucking reason, AMD cards tend to do really badly once they go over their VRAM limit, whilst Nvidia cards seem to just "work flawlessly without any hitches", even though anyone with an ounce of knowledge about GPUs knows that going over your VRAM limit should make the experience horrible.

Also, check out this post and read/watch it ALL before bothering to reply.

 

 

What does Lays being Lays have to do with anything? He has experience with 780s, as do I, as do a lot of people. What he says is exactly what I stated earlier in this thread.

 

19 hours ago, MageTank said:

Why these people are fighting to such an extent is beyond me. Regardless of how much VRAM you throw on the 1060, it's still a sub-1080p card these days. A 1060 won't be able to max out every title at 1080p, even if it had 8GB of VRAM. Some things are just too demanding in terms of raw horsepower, and no amount of VRAM or VRAM bandwidth will save them.

 

As others have said, the nomenclature issue is getting old, and there is no reason to give an entirely different card the same name. Once is a problem, but twice is unforgivable. If this card were an x50 card, nobody in this thread would be complaining. AMD's entry-level gaming cards have 2GB versions as well, so a 3GB entry-level gaming card wouldn't/shouldn't bother anyone. Again, these cards will hit other limitations before VRAM becomes an issue 90% of the time.

If you are going to call @Lays out on it, you need to call me out on it too. Though you are going to be in for a very difficult time, because what he and I have said is an absolute fact. Look at those notebook GPUs with twice the VRAM of their desktop counterparts, and tell me if it's enough to make those crippled GPUs strong enough to handle demanding titles.

You know I like you, Prysin, but the way you worded this post just annoys me. It makes it seem as if Lays (or I) know nothing about GPUs because, in our experience, VRAM was not the limiting factor when raw horsepower was.



1 hour ago, Prysin said:

Check the following post.

 

 

Bandwidth only alleviates SOME of the issues with a VRAM limit. It is not a solution, nor a "hugely influential factor".

First of all, I really don't care; my Fury works wonderfully well with only 4GB of VRAM, and that's all I'm asking for.

Secondly, if you read my post you would have seen that I want to make the debate about the fact that they try to pass the card off as a 1060 but with fewer cores, because that's all that matters here.


Sad to see Nvidia go full AMD-retarded with the naming schemes: what was the problem with calling it a 1050 Ti? It leaves you room for a 1050 if you want, and makes it clear it's not a 1060 but a cut-down card.

But why is Nvidia releasing so many SKUs anyway? They always had far fewer cards and made the choice a bit simpler than AMD (I mean: 280, 285, 280X, 380, 380X all within the same price and performance range: that's fucking stupid), and that was a good thing if you ask me.



6 minutes ago, Misanthrope said:

Sad to see Nvidia go full AMD-retarded with the naming schemes: what was the problem with calling it a 1050 Ti? It leaves you room for a 1050 if you want, and makes it clear it's not a 1060 but a cut-down card.

But why is Nvidia releasing so many SKUs anyway? They always had far fewer cards and made the choice a bit simpler than AMD (I mean: 280, 285, 280X, 380, 380X all within the same price and performance range: that's fucking stupid), and that was a good thing if you ask me.

Because when Nvidia does that, they put a larger price difference between the cards, which lets them justify their higher-end cards being that expensive. Maybe it's because they can't produce enough good chips, so they make cut-down versions out of the ones that don't work well enough.


Just now, laminutederire said:

Because when Nvidia does that, they put a larger price difference between the cards, which lets them justify their higher-end cards being that expensive. Maybe it's because they can't produce enough good chips, so they make cut-down versions out of the ones that don't work well enough.

That's how most cheap cards come to be regardless though.



Just now, Misanthrope said:

That's how most cheap cards come to be regardless though.

Yes, but their manufacturing process could be so bad that they have no choice but to create cut-down cards they wouldn't have released otherwise. If they only had to throw away a negligible number of chips because they can't be sold as a 1060, then it's okay; it doesn't cost them too much. But if 10 to 20% of their production isn't good enough, and within those chips only a negligible portion has fewer working cores than the 10% cut-down limit, it makes sense for them to create a new model to sell the chips they already paid to manufacture.
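
As a toy illustration of that logic (every number below is made up):

# Toy binning economics; all numbers are hypothetical
dies_per_wafer  = 200
full_1060_yield = 0.80    # dies good enough for the full 10-SM 1060
cut_down_yield  = 0.15    # dies with a defective SM, still usable as the 9-SM part
scrap_yield     = 0.05    # genuinely unusable

full_price, cut_price = 249, 199   # assumed card prices in USD

revenue_full_only = dies_per_wafer * full_1060_yield * full_price
revenue_with_cut  = revenue_full_only + dies_per_wafer * cut_down_yield * cut_price

print(f"Per wafer, full 1060 only   : ${revenue_full_only:,.0f}")
print(f"Per wafer, with cut-down SKU: ${revenue_with_cut:,.0f}")

Even with made-up numbers, the otherwise-scrapped dies turn into real revenue, which is the whole incentive for the extra SKU.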


21 hours ago, CTR640 said:

Then I'll challenge you to cripple a Titan X Pascal, cripple it so hard that the GPU explodes like a bomb.

The game to use to cripple it:

pac-man.jpg

 

200x MSAA should do it lol



23 minutes ago, Misanthrope said:

But why is Nvidia releasing so many SKUs anyway? They always had far fewer cards and made the choice a bit simpler than AMD (I mean: 280, 285, 280X, 380, 380X all within the same price and performance range: that's fucking stupid), and that was a good thing if you ask me.

Money, probably. If they were to call it a 1050 or 1050 Ti, the market would be looking for a price drop that puts it in competition with the RX 470's low end, maybe even into RX 460 range. That's probably a smaller price tag than NVIDIA wanted for it, so it's still a 1060, even though it's really not a 1060.

 

I don't hate the card. It's probably going to be a very nice, very capable card for folks living on the low end (which is more PC owners than you might think). What I do hate is NVIDIA calling it a 1060. The 970 VRAM thing didn't bug me too much because they still technically had 4GB of VRAM in there. The 1060 3GB version does bug me because they've removed cores. IMO, taking out VRAM doesn't change what a GPU is. Fundamentally altering its architecture by removing cores? That does. The 1060 3GB is a 1050/1050 Ti.



30 minutes ago, Misanthrope said:

They always had far fewer cards and made the choice a bit simpler than AMD (I mean: 280, 285, 280X, 380, 380X all within the same price and performance range: that's fucking stupid), and that was a good thing if you ask me.

230, 240, 250, 250x, 260, 260x, 270, 270x, 280, 280x, 290, 290x, 295x2 (the 265/285 were replacements for the 260/280)

710, 720, 730, 740, 750, 750Ti, 760, 770, 780, 780Ti, Titan, Titan Z (Titan Black was a replacement for the Titan) 

 

360, 370, 380, 380x, 390x, Fury, Fury X, Fury Nano

950, 960, 970, 980, 980Ti, Titan X 

 

Nvidia was a bit better, but not much 

 

460, 470, 480, 490 (?), Rage (?), Rage X (?)

1050(?), 1060-, 1060, 1070, 1080, Titan XP


