The video RAM information guide

17 hours ago, BTGbullseye said:

That is the implied situation if you read all the previous comments. Context is king.

That is not what was said. The statement read "you are never going to get a memory clock fast enough to fill that vRAM buffer on 128-bit".

 

This statement, as a whole, is false. I did say something about whether it was useful, but that was also a statement I shouldn't have made. What I should have said was "whether the card is useful", because higher resolutions, multisample AA types, etc. hurt core performance far more than memory, while shadowmaps are still a case that can eat up a vRAM buffer without particularly hurting the core or memory bandwidth except for initial loading. In other words, using the kind of low-end card you'd find with a 128-bit memory bus in a situation where it would need 8GB of vRAM is ill-advised, and there are very few cases where it makes sense. But if you want to throw an 8GB card with a 128-bit memory bus at something like Alien: Isolation, where 16k x 16k shadowmap resolution was possible via cfg edits and only your vRAM buffer size mattered, it would use it perfectly fine.

 

Edit: To clarify my statement about core performance: vRAM chokes don't usually cause hitching longer than the inherent delay of low-fps gameplay around 30fps. That is why the small-vRAM-buffer R9 Fury and R9 Fury X never showed low-vRAM hitching/stallouts in testing: the tests were run at 4k with the aim of keeping performance around 30fps, give or take, adjusting game settings down or up to hit that.

 

There have also been tests between 2GB and 4GB vRAM cards at higher resolutions where the 2GB cards didn't show much variance in frametimes, but the games were running between 25 and 32fps anyway, so any stutter was masked by the already-low FPS.
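The masking effect described here is simple arithmetic; a rough sketch (the 15 ms hitch is an assumed figure for illustration, not a measurement from any of those tests):

```python
# At low FPS, each frame already takes so long that a vRAM-swap hitch
# is a fraction of a normal frame time; at high FPS the same hitch
# spans multiple frames and is obvious.
def frame_time_ms(fps):
    return 1000.0 / fps

spike_ms = 15.0  # assumed extra delay from a hypothetical vRAM hitch

for fps in (30, 60, 144):
    base = frame_time_ms(fps)
    print(f"{fps:>3} fps: base {base:.1f} ms/frame, "
          f"a {spike_ms:.0f} ms hitch is {spike_ms / base:.2f}x a normal frame")
```

At 30fps the hitch is under half a frame time and disappears into the cadence; at 144fps it is more than two whole frames.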

Edited by D2ultima

Clevo P870DM3 (Eurocom) | i7-7700K | 32GB DDR4 2400MHz | GTX 1080N SLI | 850 Pro 256GB | 850 EVO 500GB M.2 | Samsung PM961 256GB NVMe | Crucial M4 512GB | Intel 8265ac | 120Hz Matte screen | 780W PSU

 

THE INFORMATION GUIDES: SLI INFORMATION || vRAM INFORMATION || MOBILE i7 CPU INFORMATION || Maybe more someday

On 10/12/2019 at 10:50 AM, masethekiller said:

For example why do so many video cards have around the same clock speed but a huge difference in performance?

If you're talking about core clock speed in general, it's because the higher-performing GPUs have more processing units. Graphics processing as a whole is what's known as embarrassingly parallel: the best way to increase graphics rendering performance is literally to throw more processing units at it. The main reason I can think of for why GPU clock speeds appear to have floated around the same range over time is that clock speed likely has a much stronger effect on power consumption than adding more processing units does, and hardware has to stay within a certain power envelope.
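That power argument can be made concrete with the classic dynamic-power relation (P ≈ C·V²·f) plus the simplifying assumption that voltage must scale roughly linearly with clock; the unit counts and constants below are arbitrary illustration values, not real GPU figures:

```python
# Back-of-envelope: why "more units" beats "more clock" for GPUs.
# Dynamic power ~ units * capacitance * voltage^2 * frequency, and we
# assume voltage scales linearly with frequency (a simplification).
def dynamic_power(units, freq_ghz, v_per_ghz=1.0, cap=1.0):
    voltage = v_per_ghz * freq_ghz
    return units * cap * voltage**2 * freq_ghz

base         = dynamic_power(1000, 1.5)
double_clock = dynamic_power(1000, 3.0)  # ~2x throughput via clock
double_units = dynamic_power(2000, 1.5)  # ~2x throughput via units

print(double_clock / base)  # 8.0 -> cubic cost of clock scaling
print(double_units / base)  # 2.0 -> roughly linear cost of more units
```

Under these assumptions, doubling the clock costs about 8x the dynamic power while doubling the shader count costs about 2x, which is why designs grow wider rather than faster.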


"Let's clear up a misconception about Vram"

 

Yes, I would like to do this today too, because I see a lot of people falsely saying "6GB is more than enough!" (to play at maxed-out settings, 1080p/60).

If you want to play MHW (Capcom) or RE2R (Capcom) fully maxed out (or even close to it) @ 1080p/60, you will need way more than a mere 6GB of vRAM.

Both games already use up more than that on medium-to-high settings.

And I've tested this extensively: the requirements Capcom states in-game are *very* accurate, measured with MSI Afterburner, GPU-Z and HWMonitor.

And while you could probably say "most" games don't need more than 6GB of vRAM today, it's outright false to say there are no games using that much (highly popular games, I might add).

And as a matter of fact you *can* play these games with settings that go over your vRAM limit, but you'll get massive framedrops, glitches, terrible frametimes and sometimes crashes. So, no, "only" 6GB of vRAM is not enough today. And those requirements will only go up in the not-so-far-away future, naturally.

And I bet there are more examples than just these two games already; they're all I play though, so I can't really comment on that.


 

✪Ryzen 3600✪RTX3070✪B350✪

✪Trident Z 3600 16-16-16-36✪

✪Noctua U12S Chromax✪

 

 

6 hours ago, Mark Kaine said:

-snip-

The Division 2 and Ghost Recon Breakpoint both exceed 8GB of VRAM usage if you're at maximum settings and don't drop the shadow quality down one notch. (That last quality notch increases VRAM usage by over 2GB.)

CPU: Ryzen 7 5800X Cooler: Arctic Liquid Freezer II 120mm AIO with push-pull Arctic P12 PWM fans RAM: G.Skill Ripjaws V 4x8GB 3600 16-16-16-30

Motherboard: ASRock X570M Pro4 GPU: ASRock RX 5700 XT Reference with Eiswolf GPX-Pro 240 AIO Case: Antec P5 PSU: Rosewill Capstone 750M

Monitor: MSI Optix MAG272CR Case Fans: 2x Arctic P12 PWM Storage: HP EX950 1TB NVMe, Mushkin Pilot-E 1TB NVMe, 2x Constellation ES 2TB in RAID1

https://hwbot.org/submission/4497882_btgbullseye_gpupi_v3.3___32b_radeon_rx_5700_xt_13min_37sec_848ms

On 11/15/2019 at 1:04 AM, Mark Kaine said:

-snip-

I would argue that it's more complicated than "if a game/MSI Afterburner/Task Manager is reporting X amount of VRAM being used, you need to have at least X amount of VRAM." In my own testing, and looking at other places that did VRAM measurements, I'm not seeing the "massive framedrops", "glitches", or "terrible frametimes" once VRAM is capped. Rather than post it here, I chronicled some of my testing in status updates.

Plus, looking at the graphics pipeline, I'm not convinced that every last bit of VRAM a game takes up is actually needed to render a frame. That is, if a game is using 4GB of VRAM, I don't think it's actually using all 4GB of it. If the GPU needed all of the VRAM it takes up to render a frame, it would be impossible to run, say, Final Fantasy XV on my GTX 1080 at 60 FPS, as the game uses up all of its VRAM and the GTX 1080 doesn't have anywhere near enough bandwidth to chug around 8GB of data 60 times a second.
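The bandwidth point checks out numerically; a rough sanity check using the GTX 1080's published memory specs (256-bit bus, 10 Gbps GDDR5X, roughly 320 GB/s peak):

```python
# GTX 1080 peak memory bandwidth: 256-bit bus at 10 Gbps per pin.
bus_bits = 256
gbps_per_pin = 10
peak_bandwidth_gb_s = bus_bits * gbps_per_pin / 8   # ~320 GB/s

# Bandwidth needed to re-read an entire 8 GB buffer every frame at 60 fps.
vram_gb = 8
fps = 60
required_gb_s = vram_gb * fps                        # 480 GB/s

print(peak_bandwidth_gb_s, required_gb_s)
```

480 GB/s required versus ~320 GB/s available: each frame can only touch a subset of what is resident, so reported usage overstates what rendering a single frame actually needs.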

 

Also, on another note: applications that request memory will get that memory, provided it's available. The application may not even use the memory it requested, but the OS will keep it around in case it does. So while there is going to be a minimum VRAM requirement, I think we're giving the estimates reported by applications and reporting tools too much credit compared to what happens in reality. And one more thing: Call of Duty World War II has an option that's literally called "Fill remaining VRAM." With an option like that, I'd say our initial expectations are thrown out the window, because how do I know other games aren't doing similar things?
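A toy model of the reserve-versus-use distinction being argued here (not a real driver API, just an illustration with made-up resource names and sizes):

```python
# Toy model: a game grabs a large pool up front; monitoring tools that
# report the pool size overstate what any single frame actually touches.
class MemoryPool:
    def __init__(self, reserved_mb):
        self.reserved_mb = reserved_mb   # what an Afterburner-style tool shows
        self.touched = {}                # resources actually uploaded and used

    def use(self, resource, size_mb):
        self.touched[resource] = size_mb

    def used_mb(self):
        return sum(self.touched.values())

pool = MemoryPool(reserved_mb=8000)      # hypothetical: game reserves ~8 GB
pool.use("level_textures", 2500)
pool.use("shadow_maps", 600)
pool.use("framebuffers", 300)

print(pool.reserved_mb, pool.used_mb())  # 8000 vs 3400
```

The gap between the two numbers is exactly the "credit" being given to reporting tools: 8000 MB reserved, but only 3400 MB touched in this made-up scene.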

9 hours ago, Mira Yurizaki said:

And one more thing: Call of Duty World War II has an option that's literally called "Fill remaining VRAM." With an option like that, I'd say our initial expectations are thrown out the window because how do I know other games aren't doing similar things?

I don't think that has any relevance whatsoever to the games I'm playing, and you can see that by looking at Afterburner, for example; it shows you exactly how much is used at any point in time.

 

Also, IIRC the MHW requirement for the HD texture pack, which is really just "not PS4 low-quality textures", is indeed 8GB of VRAM.

 

 

You're right, it's surely more complicated than that as a whole, though, and I only have two (Capcom) games where I could test this.

 

 

Definitely RE2: if you go even a tiny bit over the vRAM limit you *will* get framedrops. With lower settings (provided everything else in your PC is up to snuff) you won't, or at the very least they're much less severe and rarer, whereas when you're over the limit it's very predictable and always at the same spots too.

 

 

 

It obviously depends on the games and also how accurate their estimates are, but in certain games it makes a huge difference (btw MHW has very similar tendencies to RE2 when you're just a little bit over the limit, just less severe).

 

 

For example: it's true that during a quest the game generally uses between 4 and 5 GB (which already makes 4GB cards obsolete if we're really honest/realistic), but that doesn't stop it from occasionally going to 6GB - which also makes sense considering how dynamic those quests are.

 

 

PS:

Quote

I think we're giving the estimate reported by applications and reporting tools too much credit compared to what happens in reality

 

 

To be clear, I think the OP is outdated and needs updating: there are several games that need at least 8GB if you want to play them maxed out at 1080p/60, and I'm sick of people saying "you don't need more than 6GB, because games don't use more than 6GB anyway lul", which my own experience, general game requirements, and others have proven false.

 

 

 

 

 

On 11/19/2019 at 2:56 PM, Mark Kaine said:

I don't think that has any relevance whatsoever to the games I'm playing, and you can see that by looking at Afterburner, for example; it shows you exactly how much is used at any point in time.

I don't completely trust MSI Afterburner or similar programs like GPU-Z, EVGA Precision, etc. They poll the drivers for data, and the VRAM usage in that case is the total VRAM usage, including what Windows and other apps are using.

 

The method I used was polling what the application in question was using. I was also including shared memory (i.e., system RAM used as VRAM) in the mix because I was curious about how the application used that. While I didn't capture it, I did notice that games that used a lot of VRAM when less was available started to use more system RAM instead.
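The spill-over behavior described above can be sketched as a toy first-fit placement (hypothetical allocation sizes; real drivers page resources at a much finer granularity than whole-asset chunks):

```python
# Toy sketch: once dedicated VRAM is exhausted, further allocations land
# in shared memory (system RAM over PCIe) - slower, but the game keeps running.
def place_allocations(sizes_mb, vram_budget_mb):
    vram_used = 0
    shared_used = 0
    for size in sizes_mb:
        if vram_used + size <= vram_budget_mb:
            vram_used += size          # fits in dedicated VRAM
        else:
            shared_used += size        # spills to system RAM
    return vram_used, shared_used

workload = [1500, 1200, 900, 800, 700]  # made-up asset sizes in MB

print(place_allocations(workload, 4096))  # 4 GB card: (3600, 1500)
print(place_allocations(workload, 8192))  # 8 GB card: (5100, 0)
```

The same workload reports very different "VRAM usage" on the two cards, which matches the observation that games simply shift data to system RAM when less VRAM is available.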

 

Quote

Also, IIRC the MHW requirement for the HD texture pack, which is really just "not PS4 low-quality textures", is indeed 8GB of VRAM.

If this is a requirement from the store page or whatever, I take what they say with a grain of salt.

 

Quote

Definitely RE2: if you go even a tiny bit over the vRAM limit you *will* get framedrops. With lower settings (provided everything else in your PC is up to snuff) you won't, or at the very least they're much less severe and rarer, whereas when you're over the limit it's very predictable and always at the same spots too.

 

It obviously depends on the games and also how accurate their estimates are, but in certain games it makes a huge difference (btw MHW has very similar tendencies to RE2 when you're just a little bit over the limit, just less severe).

I'm not denying that frame drops will happen. But the usual assumption is that once you run out of VRAM, the game's performance starts tanking, the same way people assume it does when the system runs out of system RAM and constantly swaps. Rather than make blanket statements, I'd rather examine how games themselves actually behave.

 

Quote

For example: it's true that during a quest the game generally uses between 4 and 5 GB (which already makes 4GB cards obsolete if we're really honest/realistic), but that doesn't stop it from occasionally going to 6GB - which also makes sense considering how dynamic those quests are.

I don't think they are, as "obsolete" is purely a function of what the user wants. If you demand a minimum of 1080p 60FPS at max settings, then sure. But I don't. Most of the games I play don't even trip over 4GB of actual VRAM usage, so 4GB cards are still plenty viable for me.

 

Quote

To be clear, I think the OP is outdated and needs updating: there are several games that need at least 8GB if you want to play them maxed out at 1080p/60, and I'm sick of people saying "you don't need more than 6GB, because games don't use more than 6GB anyway lul", which my own experience, general game requirements, and others have proven false.

Anything that claims a hard limit in a computing setting is going to be subject to this. And again, your use case may be playing these games and demanding such, but it's not really good as a blanket statement. For example, if all anyone wants to play is Fortnite, Overwatch, CS:GO, or any of the other so-called esports games, they're probably going to run at the lowest quality settings anyway to maximize performance. In that case, at the time of this writing, they're never going to trip over 6GB of VRAM, and likely won't even trip over 4GB.

 

tl;dr: It looks like you're comparing your experiences to other people's experiences, and clearly yours are different from what they normally see. But just because you see something different doesn't invalidate anyone else's statements. You have to find the context in which they were saying them. So the next time you see someone go "you don't need more than 6GB", ask them what they do rather than lash out at them with "but RE2 uses 8GB of VRAM and it'll have frame drops if you don't have that much".

 

Otherwise, let's recommend Threadrippers and 64GB of RAM for everyone.

11 hours ago, Mira Yurizaki said:

-snip-

tl;dr: Many games need more than 6GB of vRAM today, and saying it ain't so is indeed a misconception at best and purposeful spreading of false information at worst.

 

 

(And I don't think I need to say that not all games need that much vRAM; an ever-increasing number does, though.)

 

 

11 hours ago, Mira Yurizaki said:

Otherwise, let's recommend Threadrippers and 64GB of RAM for everyone.

This is nonsense, stop making up stuff and show me one game with such requirements.

 

Basically you seem to (purposefully?) keep missing the point.

 

Like, if someone asks "Hey guys, I'm building a PC, should I get this 6GB card?",

then saying "yes, it's fine, games don't need more anyway" is simply misleading misinformation. And when this is pointed out, they usually just say "then just lower your settings", which again completely misses the point of building a somewhat future-proof system (for at least a couple of years).

 

If someone has such a card, or knows exactly that they won't be playing too-demanding games, or is OK with lowering settings, then it is indeed fine to say 6GB is enough - but not as a blanket statement, and it should at least be pointed out that many games already need more vRAM and that future-proofing a system should take this into consideration.

 

 

I'm really not sure why this seems so hard to accept.

 

Similar goes for "you don't need more than 4 cores", btw, when several games already make very good use of multiple cores/threads today.

 

 

Why plug outdated hardware and software like that? I really don't get it; PC gamers should, and often do, strive for the best, not barely-above-console performance.

 

 

 

 

 

 

7 hours ago, Mark Kaine said:

-snip-

I'll just leave this in bullet point form:

  • I never advocated saying "X card is fine because games don't use more than 6GB of VRAM anyway" as a serious suggestion.
  • Everyone's needs and use cases are different. Saying things like "X is obsolete" is solely based on your opinions.
  • Not every PC gamer cares about maximizing everything and making sure they beat consoles. Most of the people I know who game on PCs care more about having something that works and isn't complete garbage than about making the bestest of everything.
  • I find it somewhat ironic that someone is flinging around terms like "misinformation" when all they're doing is taking figures from a reporting tool and, from what I can tell, blindly accepting them, rather than trying to figure out how things actually behave and challenging existing assumptions.

I'll leave this at that.

9 hours ago, Mark Kaine said:

Many Games need more than 6GB Vram today

No, this is misinformation. Few games genuinely need more than 6GB; many games may reserve more than 6GB, but they don't use all of it.

 

And most programs that report VRAM usage don't see what the actual usage is. They see how programs and the OS reserve VRAM.

Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.

1 hour ago, Drak3 said:

Few games genuinely need more than 6GB

Oh yeah, they don't need it because you can just lower settings, right?

 

Ironically, this is exactly what I'm talking about. Please stop arguing in bad faith with "many" and "few".

There are popular "AAA" games that use more than 6GB Vram today at 1080p, and as such that needs to be taken into consideration. Period.

 

 

And I'm not even going to entertain the idea that we have no way of seeing current vRAM usage when there are several programs that do just that - in real time.

 

 

There's like no grounds for discussion whatsoever if you don't even acknowledge that fact.

 

 

 

1 minute ago, Mark Kaine said:

Oh yeah, they don't need it because you can just lower settings, right?

No, they don’t need it because there isn’t that much data.

 

Games get more VRAM reserved than they actually need so that no other program can eat up a section of VRAM that is needed in one sequence within a game but not another.

 

It’s something most operating systems do with every form of RAM visible to them.

3 minutes ago, Mark Kaine said:

And I'm not even going to entertain the idea that we have no way of seeing current vRAM usage when there are several programs that do just that - in real time.

Again, they don't see actual VRAM usage. They see how VRAM is reserved and report it as used, because it might as well be if you're not a high-level techie who probes around for that kind of information. @Mira Yurizaki is exactly that type of techie.

1 minute ago, Drak3 said:

-snip-

So Afterburner is reporting wrong data? And so is Capcom?

 

And the lag I'm getting as soon as I'm close to that limit, when it's clear the game would need more, is just my imagination, I suppose?

 

 

 

 

 

15 minutes ago, Mark Kaine said:

So Afterburner is reporting wrong data? And so is Capcom?

Yes. But as far as you're concerned, it's correct. You don't have the know-how to reduce how much is reserved for a buffer, nor do you have a reason to.

 

18 minutes ago, Mark Kaine said:

And the lag I'm getting as soon as I'm close to that limit, when it's clear the game would need more, is just my imagination, I suppose?

Perhaps.

But there are many details I don't know: GPU model, vBIOS version, other software on the system, CPU, RAM speed and timings, OS version, etc.

There are many possible reasons for the situation you describe.

 

But to insinuate that everyone else is wrong based only on your own experience being an exception to the rule is ignorant at best.

On 8/16/2014 at 12:16 PM, D2ultima said:

Ok. I did an SLI guide, and now it's time to do a vRAM/memory bandwidth guide. A lot of people seem to be confused about vRAM in general. Well, here we go.

 

Let's clear up some misconceptions about vRAM! 

 


 

Assumption: You need a powerful video card to make use of a lot of vRAM.

FALSE. vRAM usage is independent of GPU usage. For example, here is a screenshot of Call of Duty: Ghosts using 4GB of vRAM while happily using ~15% of my video cards, sitting at the main menu. Ghosts is a bad example, however, so here is also a screenshot of Titanfall using only 60% of my GPU while happily gobbling up 3.9GB of vRAM. The second screen was in the shot to show RAM usage to someone, so you can ignore that.

 

Assumption: vRAM amount is related to the memory bus width.

Partially true. vRAM amount is only loosely related to memory bus width. 128/256/512-bit memory buses have RAM sizes like 1GB, 2GB, 4GB, 8GB, etc. 96/192/384-bit memory buses have RAM sizes like 768MB, 1.5GB, 3GB, 6GB, etc. You can have 4GB of vRAM on a 128-bit memory bus (like HERE) and 6GB of vRAM on a 192-bit memory bus (like HERE). On the flip side, HERE is a 384-bit memory bus card with only 3GB of vRAM. This will be expanded on in its own section further down, as there is a lot of information to add.
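The size pattern in this paragraph falls out of chip count: each GDDR chip provides a 32-bit slice of the bus, so capacities come in multiples of the number of chips. A quick sketch (ignoring clamshell mode, which doubles the chips per channel and is how combinations like 8GB on 128-bit exist):

```python
# Each GDDR chip contributes a 32-bit slice of the memory bus, so total
# capacity = chip count * per-chip density (common densities shown).
def vram_options_mb(bus_bits, chip_sizes_mb=(256, 512, 1024)):
    chips = bus_bits // 32
    return [chips * size for size in chip_sizes_mb]

print(vram_options_mb(128))  # [1024, 2048, 4096]  -> 1/2/4 GB family
print(vram_options_mb(192))  # [1536, 3072, 6144]  -> 1.5/3/6 GB family
print(vram_options_mb(384))  # [3072, 6144, 12288]
```

This is why 192-bit cards land on "odd" sizes like 768MB, 1.5GB and 3GB: six chips instead of four.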

 

Assumption: You need huge amounts of vRAM if you're going to use multiple monitors or high resolution screens.

Partially true. You do not NEED it. It will help, but not in the way you may think. If you're considering gaming, especially fullscreened, your vRAM usage depends about 95% on game settings, and resolution matters very little; 2GB would work easily for triple-monitor 1080p gaming (with games prior to 2014, at least).

 

Assumption: A lot of vRAM being used must mean the textures are great.

FALSE. There's texture size and texture quality. Texture size is the resolution at which textures are rendered. Texture quality is how well they are drawn/rendered. Just because a game has very large textures does NOT automatically mean they're drawn or rendered well, and thus doesn't automatically mean it looks brilliant. Also, things like shadow maps can use lots of vRAM, and thus use up your vRAM without actually improving texture quality.

 

Assumption: Adding two cards gives me double the vRAM!

FALSE. SLI and CrossFireX copy the contents of memory across the cards, so your vRAM amount does not increase. Your memory access bandwidth, however, does (nearly) double.

 

Assumption: GDDR5 is always better than GDDR3.

FALSE. Memory bus width must be taken into account here. Per clock, GDDR3 doubles memory bandwidth, and GDDR5 doubles it again. A 1000MHz memory clock on a 512-bit GDDR3 card is EQUAL TO a 1000MHz memory clock on a 256-bit GDDR5 card. Most new cards don't touch GDDR3 anymore, but I'm including this anyway to clear up anything people may be misinformed on.
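The equality claimed above can be checked with the usual bandwidth formula, taking GDDR3 as double data rate (2 transfers per clock) and GDDR5 as effectively quad-pumped (4 transfers per clock):

```python
# bandwidth (GB/s) = command clock * transfers per clock * bus width in bytes
def bandwidth_gb_s(mem_clock_mhz, transfers_per_clock, bus_bits):
    return mem_clock_mhz * 1e6 * transfers_per_clock * (bus_bits / 8) / 1e9

gddr3 = bandwidth_gb_s(1000, 2, 512)  # 1000 MHz GDDR3 on a 512-bit bus
gddr5 = bandwidth_gb_s(1000, 4, 256)  # 1000 MHz GDDR5 on a 256-bit bus

print(gddr3, gddr5)  # 128.0 128.0 -> identical, as the guide says
```

Halving the bus width while doubling the transfers per clock is a wash, which is exactly the point being made.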

 

Assumption: I need tons more vRAM to turn my Anti Aliasing up.

Partially true. Normal levels of AA, like 4x MSAA or TXAA, are not going to use a lot of extra memory... 100MB here, 200MB there, for the most part. Anything that re-samples textures will do this. FXAA and MLAA should not boost memory drain at all, as they do not mess with textures directly... SMAA is a post-process form of AA but has a SLIGHT impact on vRAM. High levels of CSAA, however, can use a lot of vRAM. 16xQ CSAA can use 600MB of vRAM, and 32x CSAA even more (for example, my Killing Floor uses 2.1GB of vRAM due to 32x CSAA at 1080p). I DO NOT KNOW how SLI-enabled AA levels count toward vRAM usage (64x CSAA is possible). Because the AA is done on the sacrificed card itself while SLI is in fact "off", it might use the vRAM buffer of the second card for AA in that way. I'm not bothering to test this unless nVidia's Pascal GPUs can use CSAA again, as Maxwell GPUs cannot. I also do not know how MFAA works with the new Maxwell GPUs with respect to vRAM usage, and I won't know unless someone buys and sends me 970M or 980M GPUs or a desktop with Maxwell cards (which is not going to happen lel). I DO know that MFAA does not work with SLI, so forcing high levels of AA is now officially dead. If you want to volunteer for testing this, however, and have the hardware to test with, you can PM me here or send me a tweet.

 

Assumption: There's no way I'll need 4GB or more for 1080p!

FALSE. I explain a lot above and below about how vRAM is used up, and 4GB can definitely be used up at 1080p. I will also point out that games which lack sufficient compression techniques in their code CAN and WILL use up 4GB or more of your vRAM at 1080p, regardless of whether they have advanced lighting, reflections and high-quality textures (separate from high-resolution textures) or not. Because, as is said elsewhere in this guide, game resolution is NOT the main factor in how much vRAM you use. This is not to say that most games that use 3GB and 4GB at 1080p (like Evolve, or CoD: AW) actually have a need for it... they're unoptimized. No denying that. But it doesn't mean it's going to hurt to have enough vRAM to satisfy their needs. Remember: more vRAM is better than less.

 

What does vRAM size have to do with gaming?

 


 

Video games use textures. Textures have a size. I already explained this a little above, but I'll be a bit more thorough here. Texture size is pretty much native to a game, usually. Most games won't allow you to change the texture sizes much, but some do. Titanfall is an example of one such game. Modified Skyrim is another example. A larger texture size simply uses up more vRAM, and does little else in terms of performance impact in most cases. Some games are coded badly though ^_^. Anyway, texture size affects how crisp things are more than anything else. Just because your textures are crisp does NOT mean they are good; please remember this. You can have huge, crisp, badly drawn and badly rendered textures using 4GB of vRAM or more and looking like a piece of poo. Like Call of Duty: Ghosts. Or try comparing Titanfall to BF4. Worlds apart, but the former uses more vRAM. In other words, vRAM usage is just a per-game thing. Adjusting texture quality doesn't actually help vRAM much either; only adjusting the size does. BF4 on lowest graphics uses ~2.2GB of vRAM for me, just as it does on ultra.

 

Next, you have something slightly different: vRAM can be dedicated to more than just texture size! One such game which does this is Arma 2. There, one can adjust the amount of video memory the game is allowed to use. Oddly enough, the highest allowance is "default", which caps out at about 2.5GB of used memory. "Very high" is what most people would probably use, but that's a 1.5GB limitation for that game. It applies to the DayZ mod for Arma 2 as well. What that does is reduce popping by keeping more textures in memory, so when you zoom in with a powerful rifle, you don't have to wait as much for the surroundings to catch up. Giving the game more vRAM does not improve framerates at all, but the reduction in texture pop-in etc. is nice.

 

Also, shadow resolution, the number of dynamic light sources, reflections, etc. can all use up extra video memory. In this case, a fully-optimized game which looks worse than Crysis 3 but uses more vRAM than Crysis 3 can be achieved by bumping the shadow resolutions to high levels and increasing the amount of dynamic lighting and reflections available, especially if the game is more open-world and has many more objects loaded in to be affected by said lights or to cast shadows. Draw distances also take up extra vRAM in things like open world games. So don't compare apples-to-apples "texture quality" as an indicator of good vRAM usage, but please DO compare everything else. An open world game is likely to use a bit more than a corridor shooter. But the corridor shooter will likely have better textures. On the other hand, for a game like Shadow of Mordor, where bumping the textures is the MAIN drain on the system's vRAM buffer, you can quite clearly understand that it's just unoptimization there. But a game like Skyrim with very high-res texture mods and lighting overhauls eating up a solid 3GB is fine due to its large open-world nature, even though it looks worse than anything you'd find in Crysis 3.

 

Bonus: What happens if I don't have enough vRAM that a game asks for for certain settings?

 


 

I have to add this section because of all these new games where developers have decided that everybody owns two Titan Blacks with 6GB vRAM and will just throw uncompressed everything into vRAM even though the quality isn't all that great (stares at Shadow of Mordor and Evil Within and any other ID Tech 5 engine game ever).

 

Now, first and foremost, some games will lock some options away from you and you won't ever see it, unless you hack it in (though this is rare). Wolfenstein: The New Order does this; if your GPU is under 3GB in vRAM size you will never see "ultra" textures in the options menu. Forcing it on usually results in stuttering and crashing or generally undesirable behaviour on an extreme scale. For some proof, you can read this article about what happens when forcing ultra textures on Wolfenstein: TNO using a 2GB 770.

 

Next, and the most common option: the game will allow you to turn on the settings, and you may experience playable framerates, especially in fullscreen with no second monitors attached so you cut down on OS-used vRAM, but the minimum framerates may be quite low, some stutter may be apparent (depending on how much extra vRAM you would need), and you might even crash the game. When the vRAM limit is hit, Windows (or the drivers; I'm not sure) starts compressing what's in the vRAM buffer and tossing out things used for caching. Typically, you can go a couple hundred MB above what the game would require on a higher-vRAM card without issues, but there WILL come a point where you can compress no longer, and you will end up using virtual memory of some kind, and performance will decline. It may not be considered "playable" for some people (depending on what happens), but others will deal with it or not turn up settings. So be wary of this.

 

Finally, Windows, which often needs vRAM for its desktop, may bug you about having "out of memory" errors and ask if you wish to switch off Aero (Vista and 7; never seen this in 8 though I never used 8 with under 4GB of vRAM) and may pull you out of your game to do so. This will come with framedrops from a game which cannot compress vRAM usage any longer. The reason this happens is because the rest of the texture data must be held in RAM or even on the Hard Drive/Solid State Drive, and accessing these (yes, even the SSD) is much slower than pulling the raw information from the vRAM buffer, and is what usually causes the slowdowns and stutters. To this effect, I ASSUME that extremely fast & low latency system memory and installation of the game on a SSD as well as very high GPU memory bandwidth will likely reduce the stuttering/slowdown occurrences due to faster access of the data not stored in the vRAM buffer, and the higher GPU memory bandwidth means it can empty/refill the vRAM buffer faster, and thus the game won't need to "wait" as often.

 

So basically, it may work or it may not work, but no matter how it does, if you try to force buffer information built for more vRAM than you own into a card, it will decrease the performance of the game, though the game may very well still be playable.

 

 

And now resolution, Anti Aliasing and operating systems

 


 

Now, onto something more impacting: resolution. Or in this case, not so impacting at all. The rendered resolution of a game has about a 5% impact on how much vRAM the game is using. The only game I've seen make a large jump in used vRAM from a resolution increase is Watch Dogs, which (according to benchmarks) goes from ~3100MB vRAM to ~3800MB vRAM bumping from 1080p to 4K. Not a big amount either, considering that's 4x as many pixels. A ~700-750MB vRAM increase is tiny, and the game was still easily playable on the 3GB 780Ti cards they were benchmarking with too. Most other games see a tiny impact too. Upscale Dark Souls 2 to 4K? It uses about 400MB extra vRAM (and caps under 1.4GB at 4K). Sniper Elite V2 at 1080p with 4.0 SSAA (a 4K downsample) + "anti aliasing" set to "high" doesn't crack 2GB of vRAM either. BF4 on ultra uses 2.2GB and doesn't pass 2.5GB if you turn the resolution scale to 200% in-game (unless you turn on AA as well, in which case it totals near 3GB or less). Honestly? Most games up until 2013 are going to be fine with 2-3GB of vRAM, even at 4K or triple-monitor 1080p gaming. 2014-and-onward unoptimized AAA ports, though, appear to require ridiculous amounts of vRAM, so you may wish to REALLY consider that. Please note that using SSAA is akin to actually bumping the resolution up manually: 4x SSAA at 1080p would accurately show the vRAM usage of running a game at 4K resolution. PLEASE NOTE HOWEVER: multisample-type AA at higher resolutions resamples those higher resolutions, so the vRAM increase for using it is a bit bigger (though proportionally so). This does NOT apply to post-process-type and temporal-type filters such as SMAA 1x, 1Tx, 2x, 2Tx, FXAA, MLAA, etc. SMAA 4x however has a decent amount of multisampling in it, so it may impact vRAM a little more (though not as much as 8x MSAA).
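The "resolution barely matters" claim can be sanity-checked with some napkin math: render targets scale with pixel count, but textures (the bulk of vRAM) don't. Here's a sketch, assuming 4 bytes of colour + 4 bytes of depth/stencil per sample; real engines vary and keep several intermediate buffers, so treat this as illustrative only.

```python
# Raw render target cost per resolution. The 4B colour + 4B depth
# per sample figure is an assumption for illustration, not a spec.
def render_target_mb(width, height, samples=1, targets=1):
    per_pixel = (4 + 4) * samples
    return width * height * per_pixel * targets / 1024**2

print(render_target_mb(1920, 1080))             # ~15.8 MB
print(render_target_mb(3840, 2160))             # ~63.3 MB at 4K
print(render_target_mb(3840, 2160, samples=4))  # ~253 MB with 4x MSAA at 4K
```

Even a handful of such buffers at 4K only adds a few hundred MB, which lines up with the Watch Dogs jump quoted above; and because MSAA multiplies the per-sample storage, multisample AA at high resolutions costs proportionally more, exactly as described.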

 

Next, the operating system you're using has an impact on how much vRAM you're using. WHUT? Yup! Your OS has an impact. Windows 7 uses 128MB of your vRAM minimum, and can use extra vRAM if it needs to and you can spare it (once Aero is enabled). Windows 8 & 8.1 can actually use more vRAM, up to 400MB in total. Switching Aero off like some people do will work to reduce the used vRAM from your desktop, but if you're on Windows 8 or Windows 8.1, switching Aero off without causing some issues isn't possible (that I know of). So be wary of this too! Of course, fullscreening a game will remove the desktop's rendering (for that screen only) and free up your system resources some, in case you're vRAM starved with an older card or something with 1-2GB. I also believe that multiple monitors = more vRAM used while sitting at the desktop (via a correlation to number of pixels rendered), but I cannot conclusively prove this and it is harder to find the information (or test it successfully) than one would think. The reason I am unsure is because my vRAM usage seems to change every time I restart my PC and programs. I've seen it idle at 800MB in Win 8.1 and I've also seen it idle at 300MB.

 

Bonus! I recently discovered that opening pictures in Windows Photo Viewer WILL increase your vRAM usage, at least on Windows 8 (not tested on Win 7 and earlier OSes, nor on Win 10). So if you find you're vRAM starved and have a couple pictures open and minimized, you should close those.

 

 

And now about multiple monitors

 


 

So, how do multiple monitors benefit from more vRAM? Well you see, having extra screens uses vRAM even while gaming fullscreened. If you have two screens and fullscreen your game on one, then you're still rendering the desktop on the other(s), and thus using extra vRAM. This is why some games in the past (think back to 1GB cards and Just Cause 2, for example) would pop up and say you're out of memory and ask to change Windows to the basic theme (to save on vRAM). As said above, fullscreening frees up the vRAM for the screens you're taking up, but if you have 3-4 monitors connected you're gonna want some excess vRAM so that neither your games nor Windows are starved; far less if you're running in either regular or borderless windowed mode (where the desktop is still rendered behind the game on the monitor you're running the game on). PLEASE NOTE THAT THIS IS DIFFERENT FROM GAMING ON MULTIPLE SCREENS. In this case, more is better, and more helps, but it's entirely dependent on the games you are playing. If your game is one of these new unoptimized AAA titles like Watch Dogs, which can pull 3GB vRAM easily at 1080p, having three screens connected while running Watch Dogs maxed out on a 3GB card may run into some memory issues. For a game like BF4 however, which grabs at most 2.2GB of vRAM in DX11 at 1080p using the ultra preset, having two extra screens and a single 3GB card would be fine. A 2GB card might run into problems at that point, while having only one monitor attached would work on the 2GB card in that scenario. Also note: the resolution of each connected monitor is indeed a factor in how much vRAM it uses even when not running a game.

 

Now, while 2-3 monitors can use up a decent bit of vRAM (game + 256MB while fullscreened on Win 7), usually it'll get compressed if you, say, only have a 2GB GPU and are running a game using 1.8-2GB of vRAM on its own. But of course, the more headroom you have the better, and the less chance you'll get Windows giving you an out-of-memory error in your game, as this WILL happen if your game cannot compress vRAM usage any further at your current settings. This is why people recommend more vRAM for multiple monitors, though they probably blow this reason out of proportion in their heads a little (I've seen people say a weaker 4GB card will do better for multiple monitors than a stronger 3GB card when purely talking about gaming... it's very rare this would be the case).

 

Now as far as gaming on multiple monitors goes: it DOES use a bit more vRAM, but it's only an incremental increase, as you saw with resolution. 5760 x 1080 is actually FEWER pixels than 3840 x 2160, so if you can run a game at 4K res (or at 1080p with 4x supersampling), you can run it more easily on 3 monitors at 1080p, both vRAM-wise and performance-wise ^_^. In that case, you have less to worry about than the user who is NOT going to use all three screens, gaming on one and using the others for productivity. Three 1440p screens, however, is a HUGE number of pixels; more than a single 4K monitor would bring. For gaming at that resolution, under no circumstances would I suggest less than 4GB of vRAM if you plan to play any game that uses 2GB at 1080p. 1080p --> 3840 x 2160 may only be a 600-800MB increase, but 1080p --> 7680 x 1440 would likely use over a full GB extra; more if you throw even low-level multisample AA on the game, so I'd suggest having more than 3GB as a safety net for this particular setup (or larger).
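The pixel counts back this up; here's a quick comparison (plain arithmetic, nothing assumed):

```python
BASE = 1920 * 1080
resolutions = {
    "1920 x 1080": 1920 * 1080,
    "3840 x 2160": 3840 * 2160,  # 4K
    "5760 x 1080": 5760 * 1080,  # triple 1080p
    "7680 x 1440": 7680 * 1440,  # triple 1440p
}
for name, px in resolutions.items():
    print(f"{name}: {px:>10,} px ({px / BASE:.1f}x 1080p)")
```

Triple 1080p is ~6.2 million pixels against ~8.3 million for a single 4K screen, while triple 1440p is ~11.1 million, which is why that last setup warrants the bigger buffer.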

 

 

Now, onto memory bandwidth and memory bus and such. You may wanna skip this if you know already and are only here for the vRAM size portion above, but I might as well be thorough if I'm doing this. Spoiler tags save the day!

 

vRAM types & memory clocks

 


 

Usually, cards come with one of three kinds of memory today. GDDR3, GDDR5 and HBM (High Bandwidth Memory). They also have a memory clock, and a memory bus. All of these variables combine for the memory bandwidth. Just like clock speed, more memory bandwidth is a good thing. Games do not usually benefit much from increased memory bandwidth though, so don't expect huge gains from overclocking memory in most games. Some games do, but I don't remember any of their names off-hand.

 

HBM is different from GDDR3 and GDDR5 in mostly physical ways. Calculation-wise it's very similar (as I expand on below), and thus I am not giving it its own section. HBM 1.0 (currently on R9 Fury and R9 Fury X cards) is limited to 4GB. HBM 2.0 will not be. Since googling about HBM provides many articles explaining how it works physically, I will defer to those rather than attempt to explain it again here (if you've noticed, I did not explain about GDDR3/GDDR5's physical makeup more than was necessary).

 

Your memory clock is represented in multiple different ways. There is your base clock, which is usually an exceedingly low number. nVidia gaming-class cards since Kepler (GTX 600/700 series) came out have had a standard of 1500MHz for the desktop lineup in terms of memory speed, and Maxwell (GTX 900 series) has had a standard of 1750MHz. AMD has been using less than 1500MHz for the most part (with 7000 and R9 2xx series) but has bumped the speed to 1500MHz recently (R9 3xx series). This clock speed is not what you're going to be too concerned with; you should be concerned with your effective memory clock. Your effective memory clock depends on the type of video memory you have, which I will explain below:

 

- GDDR3 memory (which you won't find in midrange or high-end cards these days) doubles that clock speed. So a card with 1500MHz memory clock using GDDR3 RAM will have a 3000MHz effective memory clock.

- GDDR5 memory (which you will find everywhere in midrange and high-end cards these days) doubles GDDR3's doubler. In other words, it multiplies the clock speed by 4. So a card with 1500MHz memory clock using GDDR5 RAM will have a 6000MHz effective memory clock.

- HBM memory (only present in three cards right now) also doubles the clock speed, similarly to GDDR3. So a card with "500MHz" memory clock (like this) will have an effective memory clock of 1000MHz (despite that link ironically claiming the effective clock is 500MHz).
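The three multipliers above can be summarised in a few lines. A minimal sketch; the memory types and multipliers are exactly as listed:

```python
# Effective memory clock = base clock x type multiplier.
MULTIPLIER = {"GDDR3": 2, "GDDR5": 4, "HBM": 2}

def effective_clock_mhz(base_mhz, mem_type):
    return base_mhz * MULTIPLIER[mem_type]

print(effective_clock_mhz(1500, "GDDR3"))  # 3000
print(effective_clock_mhz(1500, "GDDR5"))  # 6000
print(effective_clock_mhz(500, "HBM"))     # 1000
```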

 

Now, there are three ways one usually reads the memory clock from a card with GDDR5 RAM. Let's use the GTX 680 as an example. Some programs and people list the actual clock, which is 1500MHz. One such program is GPU-Z. Other programs list the doubled clock speed, which would be 3000MHz. Those programs are often overclockers such as nVidia Inspector. MSI Afterburner also works on the doubled clock speed, though it does not list the clocks themselves. Then finally, the effective clock speed is often seen in the sensor sections of programs, such as GPU-Z's sensor page. Please remember which clock your program works with when overclocking. If you want to go from 6000MHz to 7000MHz effective, for example, you would need a +500MHz boost in MSI Afterburner.

 

HBM is read by both GPU-Z and MSI Afterburner at its default clock rate, and is indeed overclockable via MSI Afterburner (though not by Catalyst Control Center). I am unsure of other tools people use for AMD OCing that aren't Catalyst Control Center or MSI Afterburner, but there is a chance it may be OC-able by other programs.

 

N.B. - Apparently, monitoring software with AMD cards in CrossFire seems to add up the vRAM being used across each card. This results in two 4GB cards using something like 6000MB of vRAM and you scratching your head wondering if your PC is in fact Skynet. Don't worry. Just cut the vRAM counter in half (in your head; please don't cut your monitor) and it should accurately reflect how much vRAM is being used in CrossFire.

 

 

Next, memory bus and memory bandwidth!

 


 

Each card has a memory bus. This memory bus is like the number of lanes in a highway, and the memory speed is how fast the cars travel. Most recent gaming cards from nVidia have a 256-bit memory bus or a 384-bit memory bus. AMD's a bit similar with 256-bit and 384-bit mem bus cards; but their R9 290x has a 512-bit memory bus. Dat be a wide highway, boys.

 

Anyway, memory bandwidth is calculated pretty easily. What you do is you take the effective memory clock of a card, multiply it by the bus width and then divide by 8 (because 8 bits = 1 byte). So see those GTX 680 cards with their fancy schmancy 192GB/s memory bandwidth? Here's how you tell if it's true:

(256 / 8) * 6000MHz = 32 * 6000 = 192000 MB/s = 192 GB/s.

Simple, no?

 

It's just as easy for HBM as well. Take the R9 Fury X:

(4096 / 8) * 1000MHz = 512 * 1000 = 512000 MB/s = 512 GB/s.

 

Now, as I said before, neither the memory clock nor the memory bus alone tells the whole story. AMD's R9 290X has a 512-bit memory bus but only a 5000MHz memory clock, whereas nVidia's GTX 780Ti has only a 384-bit memory bus but a 7000MHz memory clock! So it works out like this:

290X = (512 / 8) * 5000MHz = 64 * 5000 = 320000 MB/s = 320 GB/s

780Ti = (384 / 8) * 7000MHz = 48 * 7000 = 336000 MB/s = 336 GB/s

So here we see even though the bus width is larger on AMD's bad boy, the bandwidth is in fact less due to significantly slower clock speeds. Now you know not to just buy a card just because it's got a huge memory clock over the other, or in reverse, because it has a huge memory bus over the other. Did you know the GTX 285 from like 7 years ago had a 512-bit, GDDR3 memory setup? When AMD brought out GDDR5 and nVidia hadn't moved to that yet, they competed by increasing the bus width a bunch, and it was able to compete easily. Interesting tidbit few remember =D.
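All of the calculations above boil down to one formula, sketched here in Python:

```python
def bandwidth_gbs(bus_bits, effective_mhz):
    # bits / 8 = bytes moved per clock; x MHz gives MB/s; / 1000 = GB/s
    return bus_bits / 8 * effective_mhz / 1000

print(bandwidth_gbs(256, 6000))   # GTX 680:   192.0
print(bandwidth_gbs(4096, 1000))  # R9 Fury X: 512.0
print(bandwidth_gbs(512, 5000))   # R9 290X:   320.0
print(bandwidth_gbs(384, 7000))   # GTX 780Ti: 336.0
```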

 

nVidia Maxwell GPUs differ slightly from the above formulas. They have memory engine optimizations that surpass their rated bandwidth. nVidia touted this as a feature, but it was demonstrated during the GTX 970 scandal, when users on NotebookReview took 980M cards (256-bit, 5000MHz memory) and 780M cards (256-bit, 5000MHz memory) and tested them with the CUDA benchmark designed to prove the 970 faulty. The result was that Maxwell GPUs had a higher actual bandwidth than was mathematically determinable from memory bus and clock speeds. As far as I remember, this is on average a 15% bonus in memory bandwidth (minimum of 10%). A 224GB/s card like the GTX 980 would actually have something like 257GB/s. The GTX Titan X and GTX 980Ti with 336GB/s should actually have near 386GB/s of bandwidth. I do not know if this affects all types of information usually stored in vRAM, however I must note that it has a distinct benefit and closes the gap to the fabled "HBM 512GB/s", especially with memory overclocks on the nVidia cards. 336GB/s to 512GB/s is more of a far cry than 386GB/s to 512GB/s, and if someone manages to OC a 980Ti or Titan X's memory to, say, 7400MHz from 7000MHz? That gap jumps to 355.2GB/s mathematically, and then to ~408.5GB/s with Maxwell's speed improvements.
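Plugging the ~15% figure into the bandwidth formula reproduces these numbers. Note that the 15% is the average estimate quoted above, not an official spec, so treat the output as approximate:

```python
def maxwell_effective_gbs(bus_bits, effective_mhz, bonus=0.15):
    # bonus=0.15 reflects the ~15% average estimate above, not a spec.
    raw = bus_bits / 8 * effective_mhz / 1000
    return raw, raw * (1 + bonus)

print(maxwell_effective_gbs(384, 7000))  # 980Ti stock: (336.0, ~386.4)
print(maxwell_effective_gbs(384, 7400))  # 7400MHz OC:  (355.2, ~408.5)
```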

 

Now, there's another important thing you should know. There is a difference between a vRAM bottleneck and a memory bandwidth bottleneck. Some games are not designed to use more than a certain amount of vRAM, like Crysis 1. Instead, they make heavy use of the memory bandwidth on your card to keep the relevant information in your vRAM buffer. You can tell if you need a memory bandwidth increase to help a game if your game runs your memory controller's load at high percentages. To check this, GPU-Z could possibly be used. It's important to note that you CAN run into a memory bandwidth choke without running out of vRAM, and vice versa.

 

What I discovered is that increasing memory bandwidth does not do a whole lot for games once above ~160GB/s. My GPUs default to 160GB/s, and it's easy for me to overclock them to 192GB/s (5000MHz to 6000MHz), and I've never really noticed any games actually benefiting from this. 160GB/s has been in use for years; as far back as the GTX 285 we had 160GB/s, and bandwidth did not improve massively in the years since, until recently, when nVidia's Maxwell GPUs and AMD's R9 300 series opted for larger out-of-the-box memory bandwidths. Generally, it's better to have more memory bandwidth than less, but it's more of a "slight increase if any" and "better to have more than less" situation, rather than "this memory OC will give me a solid 5-10fps in <insert game>!" or anything similar. At least, there's no tech RIGHT NOW that will need a lot more memory bandwidth than is currently available, as far as I can see.

 

Also, I mentioned this above but I'll repeat it again here: SLI and CrossFireX systems for the most part "add" the memory bandwidth for vRAM access. 2-way SLI of 192GB/s cards? 384GB/s. 3-way SLI of 192GB/s cards? A cool 576GB/s. Tossed three 980Ti cards in SLI? Enjoy a sexy ~1TB/s memory access bandwidth. Now, this doesn't affect memory "fill" time (that is still limited to each card's bandwidth, your RAM and your data storage, and likely your paging file too), and the multiGPU overhead will not allow you to see a true doubling of bandwidth, but the benefits definitely exist. Please note however that if "Split Frame Rendering" with DX12 becomes a thing (or any mode where multiple GPUs act as "one big GPU") then memory bandwidth improvements are likely going out the window (maybe why they're bumping mem bandwidth now?).

 

 

Extra: Memory bus + mismatched memory size section

 


 

Now, here's the funny part. Since writing this guide I was told that I was wrong about a couple of things. I've updated and fixed them, and you can see how stuff works below. But here's where things get tricky. Apparently, there should technically be a direct hard limit on the amount of memory one can put on a card depending on the memory bus. By this rule, cards like the GTX 660Ti should not have 2GB of vRAM, but instead ought to have 1.5GB or 3GB of vRAM attached to them. Instead, nVidia manages a 2GB configuration somehow. This apparently could have been done one of two ways.

 

The first way, as described to me, would be to use mismatched-yet-compatible memory chip sizes to get ALMOST as much vRAM out of the cards. In this case, the GTX 660Ti could use a 768MB chip + a 1152MB chip, totaling 1920MB. This is apparently NOT what nVidia uses. What nVidia apparently uses is a bit more complex. Basically, a 192-bit memory bus has three blocks of 64-bit memory controllers, and to get 1.5GB of vRAM on them you would add 512MB of vRAM to each 64-bit block, but nVidia adds an extra 512MB block to one of them. The thing about this "trick" is that not all of the memory works at the full bus speed. This means that only the first 1.5GB of vRAM runs at ~144GB/s, and the last 512MB block runs at only ~48GB/s, due to the asymmetrical design. This is present in other nVidia cards as well, such as the GTX 650Ti Boost, GTX 560 SE, GTX 550Ti and the GTX 460 v2. So hey, the more you know, right? Proof of this happening. I'd like to point out that the GTX 550Ti has a different layout to the 660/660Ti (and possibly the rest of the cards I've listed that have mismatched memory bandwidth), so thanks to a helpful forum user, I've got a picture to better explain the difference right here. The article already linked does explain it, but not well enough in my eyes.
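The two quoted speeds fall straight out of the bandwidth formula: the full 192-bit bus serves the first 1.5GB, while the extra 512MB hangs off a single 64-bit controller. A sketch, assuming the 660Ti's 6008MHz effective memory clock (its stock spec, to my knowledge):

```python
def gbs(bus_bits, effective_mhz):
    # Same bandwidth formula as earlier in the guide.
    return bus_bits / 8 * effective_mhz / 1000

print(gbs(192, 6008))  # first 1.5GB partition: ~144.2 GB/s
print(gbs(64, 6008))   # last 512MB partition:  ~48.1 GB/s
```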

 

The good thing about this is that overclocking your memory will still benefit your transfer rate even with such a memory configuration, as memory bus works with memory controller for bandwidth. But as most benchmarks pointed out, the 660Ti was able to perform fairly similarly to the 670 in a lot of games, which proves that memory bandwidth is nowhere near as important for gaming as clock speeds are. But in the games where it DOES matter, even the weaker GTX 760 has been shown to pull ahead of the 660Ti by a little bit, according to the sheer number of people who tell me that the 760 beats the 660Ti in some games but not all. Now I know the reason. Go figure huh?

 

So what can I say here? If you see mismatched memory sizes to the memory bus from nVidia, now you know what it be. If you can, get matched memory =D.

 

 

And the GTX 970 gets its own section! Hooray!

 


 

The GTX 970 is apparently wired like Frankenstein's monster. Let me try and explain. You've no doubt seen multiple articles about how it has 4GB of memory, and people are considering it a 3.5GB card. That's wrong, and you shouldn't do that, but not for the reasons you're probably thinking. See, as I mentioned earlier in the guide, when you hit the maximum vRAM on a GPU, you start to compress memory, and when it can compress no longer, some of it spills into virtual memory. The problem is that when you hit the 3.5GB vRAM mark on the GTX 970, this does not happen. Because the card actually DOES have the extra vRAM, instead of compressing what's in the buffer, it attempts to make use of the ungodly slow last bit of vRAM. This causes the stuttering and slowdowns many people witnessed when playing. It would actually be better for everyone if nVidia were to either use drivers or a vBIOS to lock off the final 512MB forever, or to somehow tweak drivers to force all of Windows' memory requirements (Aero, multiple monitors, etc.) into the slow 512MB portion, give non-Windows-native applications access to the fast 3.5GB outright, AND block them from accessing the final 512MB no matter what, causing games etc. to start compressing memory at the 3584MB vRAM mark and eventually hunt for virtual memory, like they do at 3072MB on a GTX 780, for example. For some proof of what happens at the 3.5GB vRAM mark and why it differs from a 3GB card's limit, HERE is a comparison video of a GTX 780 running Dying Light next to a GTX 970 running Dying Light. In the video, if you set it to 720p60 or 1080p60, you can clearly see that the 970's footage is somewhat less smooth than the 780's, despite having more vRAM for the game to make use of.
The software also shows only 3.5GB of vRAM being used because, as nVidia said, normal programs can't see the last 512MB as it's in a different partition, so to speak; but it's clear the game is attempting to use more, and the transfer to the slow memory bus is the problem (I wish to point out that since this video was released, the game has been updated to use less video memory, and you won't find this bug happening anymore, as the game was in an unoptimized state + had extra view distance at the time of that video). Please note: no other Maxwell card to date has this bug. The 950, 960, 960 OEM (192-bit mem bus, 3GB/6GB vRAM), 980, 980Ti, Titan X, 940M, 950M, 960M, 965M, 970M, 980M and mobile 980 ALL lack this error.

 

Also, please note: the 970 does NOT have the same issue that the 660Ti and other mismatched-memory cards do. The 28GB/s slow vRAM portion is beyond slow; even the old GeForce 7800 had more bandwidth than that. Also, the rest of the card only has access to seven 32-bit memory controllers in total (unlike the 8 blocks it should have), meaning the 3.5GB portion is actually on a 224-bit memory bus (notice it's falsely marketed as a 256-bit mem bus card?). I also do not know if the access bandwidth for the slow memory doubles in SLI, which would at LEAST alleviate the problem somewhat. The CUDA memory tester that was used to check the 970 does not test differently in SLI; the bandwidth does not double with SLI on/off in that test. So we have no real way of knowing (at least that I can test) whether or not SLI helps. At the very least, increasing the memory clock will help the slow vRAM portion of the card, but that is very little relief. In this case, I can only recommend the 970 to users who are CERTAIN of what they are going to play, and can say with 100% certainty that they will not approach that 3.5GB vRAM buffer. If you know you will, a 980 (too expensive), 980Ti, R9 390, R9 390X or R9 Fury would be a far better buy. If you are the kind of guy who's only gonna play BF4/BF Hardline and some older titles, then you'll be perfectly happy with a 970, as you'll likely never hit the vRAM bottleneck issue that only applies to the 970. Also, please note: FPS counts do NOT tell the whole story in this case. As shown in the video I linked, the 970 was actually getting higher FPS most of the time, even though the 780 had the smoother experience.
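For reference, the partition numbers mentioned here follow from the same bandwidth formula used earlier in the guide, taking the 970's 7000MHz effective memory clock:

```python
def gbs(bus_bits, effective_mhz):
    # Same bandwidth formula as earlier in the guide.
    return bus_bits / 8 * effective_mhz / 1000

print(gbs(256, 7000))  # advertised 256-bit figure:    224.0 GB/s
print(gbs(224, 7000))  # actual fast 3.5GB partition:  196.0 GB/s
print(gbs(32, 7000))   # slow 0.5GB partition:          28.0 GB/s
```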

 

Also, according to an article I was recently linked to by PCGamer, it seems that when accessing the slow portion of vRAM, the rest of the memory on the 970 slows down as the seventh 32-bit memory controller that runs in tandem with the others to make up the 224-bit memory bus has to be a "middleman" with the slow portion, and thus memory bandwidth on the card itself is crippled. To fully quote the part of the article (which was originally quoted from PCPer):

 

“if the 7th port is fully busy, and is getting twice as many requests as the other port, then the other six must be only half busy, to match with the 2:1 ratio. So the overall bandwidth would be roughly half of peak. This would cause dramatic underutilization and would prevent optimal performance and efficiency for the GPU.”

 

You can read the full article here, and that might explain other issues with accessing the slow portion with only a few games.

 

Also, a little addendum here. Many users have been claiming to me that most games don't use anywhere near 3.5GB of vRAM. This is true... but as mentioned earlier in this guide, I would like to point out that running a second monitor and/or windowed/borderless games, ESPECIALLY on Windows 8, 8.1 and 10, will also be competing for space in the vRAM buffer. If your OS is using 600MB before you even game, and you load up a 3GB game in borderless windowed mode, you *WILL* encroach on the 3.5GB vRAM mark, potentially causing hitches and stuttering in your game without actually playing a "4GB game" or whatever. Single-monitor-only gamers need not worry as much, but multitaskers who are gamers? Their worries are different and valid. I should know, because I am one of them myself. If I can help it, my game's going borderless windowed. So a 970 would be a bad buy for me if I wanted to play games that used anywhere near 3GB of vRAM, due to how much else Windows itself takes.

 

*NB - I previously recommended the 780 6GB here, but since devs seem to no longer code for current + previous gen like they usually did, Maxwell's better tessellation engine and general driver improvements have the 970 pulling far enough ahead of the 780 for me to remove it from the direct recommendation. That leaves me in the awkward position of only being able to recommend stronger (and of course, more expensive) nVidia cards or equal/stronger AMD cards; and even if you could get a 780Ti on the cheap (which would mitigate the performance gap), that card was never made with a 6GB vRAM buffer (likely because the Titan Black, which is still $1000 OR MORE at the time of this writing, was essentially a 6GB 780Ti with a double precision rendering block).

 

 

FAQ

 


 

If I could afford it, should I get the version of the card with more vRAM?

YES. A lot of games lately are coming out using much much more vRAM than is necessary, and someday soon they might actually use all that vRAM for a good purpose. When that time comes, you'll be glad you got that card with 4GB instead of 2GB a year ago =3. Look at me, I thought 4GB of vRAM was overkill for my 780Ms, until Ghosts, Titanfall, Wolfenstein, Watch Dogs.... And more games are going to do the same thing. Then I found out about Arma 2's "default" setting for vRAM, and then Skyrim with my mods uses 2.7GB in fullscreen, etc... eventually I realized I was lucky as hell to have 4GB on these cards, and how much people should go for higher vRAM cards if they could. It really can't hurt to have more. So as long as the version with extra vRAM doesn't have another downside, go for it.

 

What about those dual-GPU cards like the R9 295X2 with 8GB or the GTX Titan Z with 12GB vRAM? Should I get those instead of two separate cards?

NO. Cards like that are marketed underhandedly. Dual-GPU cards are almost unanimously listed with the sum of the vRAM on both GPUs. When you use them in their intended SLI/CrossfireX formats, the vRAM data is copied across both GPUs, so you end up with half the listed vRAM in effect. The 295X2 is simply two 290X 4GB cards. The Titan Z is even worse; it's two downclocked GTX Titan Black 6GB cards. Most often, too, they carry an inexplicable markup in price (the Titan Z launched at $3000). NEVER buy them unless you know EXACTLY what you're doing (in which case you wouldn't be reading this guide). Don't let anybody you know buy them. Stab em if you have to. Do *NOT* let them waste that kind of cash. Even if you make the argument that you could buy two of them and get 4-way SLI/Xfire going on a board not normally built for 4-way setups, the money you save ALONE from not buying them could likely get you an enthusiast Intel board and a high end CPU anyway.
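The halving above can be sketched in two lines (a toy illustration of AFR vRAM mirroring, nothing vendor-specific):

```python
# In AFR SLI/CrossfireX the frame data is mirrored on every GPU,
# so usable vRAM is the advertised total divided by the GPU count.
def effective_vram_gb(advertised_gb, gpu_count=2):
    return advertised_gb / gpu_count

print(effective_vram_gb(8))    # 4.0 -> R9 295X2 behaves like a 4GB card
print(effective_vram_gb(12))   # 6.0 -> Titan Z behaves like a 6GB card
```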

 

I have a 4GB R9 290X, should I sell this and get the 8GB R9 390X?

I CAN'T SAY. Developers have calmed down from going overboard with the vRAM buffer lately (kind of)... nope, they're still at it. 4GB appears to have become the standard "minimum" amount of vRAM necessary for AAA titles (beyond "low" graphics, anyway), and it also seems to be a bit of a sweet spot except in a handful of games. Shadow of Mordor wants 6GB for the texture quality and draw distance of a 2011 game, for example; Black Ops 3 does not allow "extra" texture quality without 6GB of vRAM or more; and it's easy to pass 4GB of vRAM in Assassin's Creed Syndicate (especially above 1080p), but 4GB cards play these mostly fine. I STILL recommend the highest amount of vRAM a card offers if you can afford it, but selling a 4GB card to buy the same one with a higher vRAM buffer may not be worth the hassle (though the 390X is overclocked a bit on core and a lot on memory). This is on a case-by-case basis, and it still might suffice to simply buy a stronger card with lots of vRAM, like a 980Ti.

 

 

Windows 10 and nVidia 353.62

 


 

Windows 10 has made changes under the hood, including some changes to something called the "WDDM". This has programs reporting vRAM usage/utilization with errors: 2GB GPUs appearing to use 2800MB of vRAM, benchmarks that use set amounts of vRAM appearing to use double, etc. If you're on Windows 10, as far as I know, there is no way AS OF THIS WRITING to properly check the amount of used vRAM. A large reason why I don't know is because I refuse to use Windows 10 for another few months, mainly because of these kinds of stupid issues, so I haven't run a million and one programs to see if things line up with how they were on Win 8.1. If anyone knows a way to accurately get the readings, let me know in the thread and I'll amend it here.

 

Apparently, Windows 10 and the 353.62 nVidia drivers are stealing vRAM that should be available to GPUs. In particular, about 1/8 of the vRAM seems to be reserved and not usable by the system for whatever reason. A user on NBR reported his 980M only having 3.5GB of vRAM available, and after some further checking, it seems all cards exhibit this behaviour, even when using CUDA programs to check how much memory is available to the system. HERE is the post with some proof. If you have a relatively low-vRAM card and are suddenly running into stuttering in games that use just about the amount of vRAM you have (let's say... BF4 on a 2GB GPU), then here's your problem. =D
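If the ~1/8 reservation holds, the arithmetic matches that 980M report exactly. A quick sketch; the 1/8 fraction is the reported observation, not a documented spec:

```python
# If ~1/8 of vRAM is reserved (the observed behaviour, not a documented spec),
# usable memory shrinks accordingly.
def usable_vram_mb(total_mb, reserved_fraction=1 / 8):
    return total_mb * (1 - reserved_fraction)

print(usable_vram_mb(4096))   # 3584.0 -> the "980M only sees 3.5GB" report
print(usable_vram_mb(2048))   # 1792.0 -> why a "2GB game" stutters on a 2GB card
```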

 

I am not aware of AMD GPUs having vRAM allocation stolen, or whether it's a DX12 thing rather than an nVidia driver thing, but if ANYONE can check for me (I don't remember exactly how you'd check on AMD cards, since you don't have access to CUDA, but there is a way I've seen before) and report it, that would be fantastic. Also, I'm not sure if AMD cards experience the vRAM error reporting bug, but I believe they might. Again, anyone who can check, let me know! Most of NBR uses nVidia because AMD has no laptop GPUs worth even remembering the existence of these days, so I haven't had much chance to ask people using AMD cards.

 

Finally, Windows 10 and DirectX 12 ARE NOT GIVING YOU EXTRA VIDEO RAM. Some users have noticed that their GPUs suddenly show a large amount of shared memory and are assuming this is DX12. No. It is not. Windows has done this as far back as Vista. It doesn't really seem to do anything in practice, actually. People are blowing this whole DX12 thing out of proportion. Here's my Windows 8.1 system showing the same thing.

 

When these Windows 10 issues are CONFIRMED resolved, I'll remove this section from the guide. Since I'm not using Windows 10, I'll need people to report to me.

 

 

Final tidbits and stuff

 


 

A lot of games are going to come out soon using a whole lot more vRAM. They're probably not gonna look 300% better even though they use 300% the vRAM, but eventually, 2GB is just gonna be a bare minimum requirement for video cards. I highly suggest you get the most vRAM you can and just... leave it there. You're probably not like me, who uses PlayClaw 5 with system info overlays all the time so I know which game uses how much memory, how much % GPU usage, how hot my parts are, etc., so you probably won't be as interested in GPU memory usage as I am. But it's better to have it when the need arises than to not have it. All the people with 2GB GPUs can't even play Wolfenstein with ultra texture size; the option doesn't even appear in the menu unless you have 3GB or more memory. Call of Duty: Black Ops 3 limits 2GB vRAM users to "medium" textures, and 4GB card users can only use "high"... you need 6GB or above to even access "extra" texture quality.

 

As for memory bandwidth? It's not that important, really, at least not for games. Most games don't really care; they're more interested in core processing power. Of course, higher resolution textures will eventually need better memory bandwidth to load quickly enough, but from what I've seen we're not quite at that point yet. Hell, some games will max out your memory controller while using small amounts of vRAM because they were designed in a time when vRAM was scarce, so they abuse the speed of the memory, like Crysis 1. So while more bandwidth is always better, don't kill your wallet for it. If there's a better, stronger card out there with more vRAM but less memory bandwidth, go for it, unless you want the other one for a specific reason. If you're wondering why there'd be a stronger card with more vRAM and less memory bandwidth for the same price or cheaper, remember that nVidia and AMD make different cards.

 

 

I started writing this guide mainly for the top section, to denounce misinformation people seem to have regarding vRAM and its relation to memory bandwidth, but I figured I might as well go the full mile and explain as best I can what most people need to know about GPU memory anyway. If I've somehow screwed up somewhere, let me know. I probably have. I'll fix whatever I get wrong. And thank you to everyone who has contributed and corrected things I didn't get right! Unlike my SLI guide, much of the information here was confirmed post-writing.

 

If you want the SLI information or the mobile i7 CPU information guide, they're in my sig!

 

Moderator note: If you believe any information found in this guide is incorrect, please message me or D2ultima and we will investigate it, thank you. - Godlygamer23

Well, you got one point, but I'll ask you a question: if SLI doesn't somehow make the vRAM cooperate, how have I seen a huge difference in vRAM usage while running (not in SLI, but combined) two GPUs, a 1050 2GB and a 1060 6GB? I have to mention that the slower 1050 is CUDA-only, whereas the rest is up to the 1060, and yet in most aspects of the game (in my case Rise of the Tomb Raider) the vRAM usage is lower but the game visually performs way better than when I was playing on the 1060 alone. I think you need to research more about how SLI actually works, and whether the game even supports it. Also, when low on vRAM, it's Windows and the driver that write the crucial data into system RAM and the data to retrieve into the page file; that can be helped with a small core clock OC on the GPU (though access/read/write times are determined by factors other than the GPU). Also, usually in fullscreen the desktop goes into the background, which helps the system keep some resources available for the game. I used to play on Vista with 4GB of RAM and a GTX 240.

Edited by the heck with this

As well, the manufacturer of the vRAM chips plays an important role, as there are three: Hynix, Samsung, and Micron. From relative experience, Samsung memory chips tend to perform about 5% better than the rest, Micron about 2%, and Hynix performs just as rated.

 

  • 1 month later...

Correct me if I'm in the wrong thread, but yesterday I had my first encounter with artifacting. It wasn't even with a GPU, but it's applicable to them, and a sneaky likely cause in some cases.

 

A dying port, or a cable dying at either end, can cause it. In my case the monitor has a dying port, so it artifacts sometimes. The cable and console in question didn't artifact on another TV; this was further proven when the monitor artifacted on a cable box too. I know it's an unlikely cause, but it is a possible one.

main rig:

CPU: 8086k @ 4.00ghz-4.3 boost

PSU: 750 watt psu gold (Corsair rm750)

gpu:axle p106-100 6gbz msi p104-100 @ 1887+150mhz oc gpu clock, 10,012 memory clock*2(sli?) on prime w coffee lake igpu

Mobo: Z390 taichi ultimate

Ram: 2x8gb corsair vengence lpx @3000mhz speed

case: focus G black

OS: ubuntu 16.04.6, and umix 20.04

Cooler: mugen 5 rev b,

Storage: 860 evo 1tb/ 120 gb corsair force nvme 500

 

backup

8gb ram celeron laptop/860 evo 500gb

  • 2 weeks later...

So, since this is supposed to be a guide and not a post about your opinion on vRAM, I think it needs some tweaks, because some of the things said are not that accurate, mainly because you make absolute statements. Something you label FALSE isn't necessarily false; maybe it's commonly not true, but that doesn't make it false.

 

e.g

 

On 8/16/2014 at 12:16 PM, D2ultima said:

Assumption: You need a powerful video card to make use of a lot of vRAM.

FALSE. vRAM usage is independent of GPU usage. For example, here is a screenshot of Call of Duty: Ghosts using 4GB of vRAM while happily using ~15% of my video cards sitting at the main menu. Ghosts is a bad example, however, so here is also a screenshot of Titanfall using only 60% of my GPU while happily gobbling up 3.9GB vRAM. The second screen was in the shot to show RAM usage to someone; so you can ignore that. 

This is not entirely true, especially for low tier cards and some modern game engines.

Just because something is saved in the vRAM doesn't mean that your GPU is going to render it, or render it on time... vRAM is just a "storage space" for elements and textures that need to be rendered.

 

A simple example take RTX 2060 6GB and a GTX 1060 6GB at any game at same settings

 

 

Notice that both show the same vRAM usage in game (plus or minus a few hundred megs depending on the game) at the same settings; in other words, the same amount of same-resolution/quality textures and whatnot is stored for both GPUs, yet the performance is different.

 

That means that a more powerful GPU is indeed going to take better advantage of your vRAM: the ability to store stuff doesn't necessarily mean you are able to render it on time as well!

 

The same holds true especially if you compare older with newer GPUs, or completely different architectures, while keeping the vRAM usage the same.

 

And that's especially true in newer game engines, where they just try to fill up vRAM for better efficiency (instead of loading something the moment it needs to render, they keep it loaded in advance so that it's ready when it needs to be rendered).

 

 

 

On 8/16/2014 at 12:16 PM, D2ultima said:

Assumption: vRAM amount is related to the memory bus width.

Partially true. vRAM amount is only loosely related to the memory bus width. 128/256/512 bit memory buses have RAM sizes like 1GB, 2GB, 4GB, 8GB, etc. 96/192/384 bit memory buses have RAM sizes like 768MB, 1.5GB, 3GB, 6GB, etc. You can have 4GB vRAM on a 128-bit memory bus (like HERE) and 6GB vRAM on a 192-bit memory bus (like HERE). On the flip side, HERE is a 384-bit memory bus card with only 3GB vRAM. This question will be extrapolated under the its own section later down, as there is a lot of information to add.

It's not loosely related... it's just that your definition of vRAM is not specific enough.

 

It's directly related to two factors: chip architecture and number of chips.

 

Usually when one speaks of vRAM, one focuses only on the amount of storage it can provide... but different vRAM chips have different characteristics.

 

The end result (when we are speaking about memory bus width) is directly correlated with the vRAM chip architecture/technology and the number of chips on the board.

 

In other words, if you see a "naked" graphics card PCB at a resolution clear enough to read the part numbers of the vRAM chips, you can calculate the memory bus width to the GPU just by knowing what exact type of chips it uses and how many of them are present on the PCB.
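As a sketch of that chip-counting calculation, assuming the common case of a 32-bit interface per GDDR5 chip (other memory types use different per-chip widths):

```python
# Memory bus width from the chip count.
# Assumption: each GDDR5 chip exposes a 32-bit interface (the common case).
def bus_width_bits(chip_count, per_chip_bits=32):
    return chip_count * per_chip_bits

print(bus_width_bits(4))    # 128-bit
print(bus_width_bits(8))    # 256-bit
print(bus_width_bits(12))   # 384-bit
```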

 

On 8/16/2014 at 12:16 PM, D2ultima said:

Assumption: You need huge amounts of vRAM if you're going to use multiple monitors or high resolution screens.

Partially true. You do not NEED it. It will help, but not in the way you may think. If you're considering gaming, especially fullscreened, your vRAM usage depends about 95% on game settings and resolution matters very little; 2GB would work easy for triple monitor 1080p gaming (with games prior to 2014 at least).

You tried to say something right here, but then got it all wrong later (the bold underlined part).

 

vRAM usage correlates directly with settings and resolution.

 

More monitors are going to use more vRAM, unless you keep the same settings AND resolution.

 

So if you have 3 monitors and your game resolution is 1920x1080, then the amount of vRAM used (same settings, same game) is going to be the same as with a single 1080p monitor.

 

In practice no one does that; on a triple monitor setup you would have (3x1920)x1080 (if you are using three 1080p monitors), which bumps the resolution up quite a bit, and even if settings are kept the same, the vRAM usage would increase (on top of that, Windows will trim a little more vRAM the more monitors you have, for its desktop rendering purposes).

 

The rate of the increase depends on the game and settings, though: in some games the increase may be marginal, in others much greater.

 

Here is an example of vram usage between 1080p and 1440p

 

 

 

Notice that in some cases the difference is around or over 1GB of vRAM.

 

Also keep in mind that 3x1920x1080 is a greater resolution than 1440p, so the vRAM difference would be more pronounced than what you see in the video above.

 

3*1920*1080 (triple 1080p monitor setup) is about 6.22 million pixels

2560*1440 (single monitor 1440p) is about 3.68 million pixels so almost half the resolution. 

3840*2160 (a single UHD monitor) is about 8.29 million pixels, so greater than, but closest to, the resolution of the triple monitor setup of our example. Let's see the difference in vRAM there:
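The pixel counts above are plain arithmetic and easy to check:

```python
# Pixel counts (in millions) for the setups discussed above.
def megapixels(width, height, monitors=1):
    return monitors * width * height / 1e6

print(round(megapixels(1920, 1080, 3), 2))   # 6.22 -> triple 1080p
print(round(megapixels(2560, 1440), 2))      # 3.69 -> single 1440p
print(round(megapixels(3840, 2160), 2))      # 8.29 -> single UHD
```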

 

 

About 2 gigs of difference in GTA V, especially after the intro, where it reaches more than 2.5 gigs of difference in the city.

And here again there is a big difference from changing only the resolution, while the same settings are kept and the rest of the system is the same.

On 8/16/2014 at 12:16 PM, D2ultima said:

Assumption: GDDR5 is always better than GDDR3.

FALSE. Memory bus width is taken into account here. GDDR3 doubles memory bandwidth, and GDDR5 doubles it again. A 1000MHz mem clock on a 512-bit mem bus, GDDR3 card is EQUAL TO a 1000MHz mem clock on a 256-bit mem bus, GDDR5 card. Most new cards don't touch GDDR3 anymore, but I'm including this anyway to clear up anything people may be misinformed on.

Not true... newer GDDR generations are faster in themselves; they don't just have bigger bandwidth.
The formula goes like this:
(memory clock x bus width / 8) * GDDR type multiplier = bandwidth in GB/s


(The multiplier is 2 for GDDR3 and 4 for GDDR5; I divide by 8 to convert bits into bytes.)
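Plugging the guide's own 1000MHz example into this formula (a direct transcription of the formula above, with papajo's stated multipliers):

```python
# (memory clock x bus width / 8) * GDDR type multiplier = bandwidth in GB/s
def bandwidth_gb_s(mem_clock_mhz, bus_width_bits, gddr_multiplier):
    # MHz * bits / 8 gives MB/s; divide by 1000 for GB/s
    return mem_clock_mhz * bus_width_bits / 8 * gddr_multiplier / 1000

# The guide's example: both combinations land on the same number.
print(bandwidth_gb_s(1000, 512, 2))   # 128.0 GB/s (GDDR3, 512-bit)
print(bandwidth_gb_s(1000, 256, 4))   # 128.0 GB/s (GDDR5, 256-bit)
```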

 

In your example, both theoretical cards will indeed produce the same GB/s, yes... but it's a misconceived example, because you fail to realize (as I mentioned above in the previous quote about bus size) that the bus size is in direct correlation with the number of chips and their tech.

 

So in order for those to be equal, you would indeed need a graphics card with, e.g., one version using 4 chips of GDDR3 and another using 2 chips of GDDR5, which is not the case... most likely it will have the same number of chips in both versions, and there are other factors at play as well (such as channels).

 

And the simplest way to prove that this is actually how it's done is to just go to YouTube and look up a GPU's performance in GDDR3 vs GDDR5. You'll see that the GDDR5 implementation is always faster; that's because manufacturers don't try to cripple the new chips to perform equally to the older ones, as you do in the quote above, but rather just use the same number of chips of the newer technology, which ends up adding performance...

e.g 

 

 

 

etc

 

Furthermore, it doesn't make sense. It's like saying that if you take a 4-core i5, disable two of its cores, and lower the clock to that of the same-generation dual-core i3, the CPUs would perform equally... well yes, but why do that? All the differences you try to handicap exist exactly to differentiate the i5 as a faster product. Same with GDDR5 and GDDR3: if you use fewer GDDR5 chips (in order to have half the bus width) and equalize their clocks and so on and so forth, it's gonna be the same, but it also doesn't mean anything...

 

I could keep going, but I am already exhausted and think I've made my point... in general, what you do is use a rule of thumb one moment and go technical the next, but it can't go both ways: either you base your guide on rules of thumb or you make a technical guide, because if you pick and choose when to go rule-of-thumb and when to go technical, you end up messing up both and, if nothing else, just adding to the confusion...

 

Don't get me wrong, I admire your courage in spending so much time on this post, but that's the reason you don't see many people uploading guides: it's damn hard and time-consuming, and you need to be 100% accurate, or the guide (and thus the hours you spent on it) not only goes in vain but may also end up misinforming people.

On 4/19/2020 at 9:13 AM, papajo said:

So since this is supposed to be a guide and not a post about your opinion on vram I think it needs some tweaks because some things told are not that accurate mainly because you make absolute statements like something you say is FALSE isnt false but yes maybe its common not to be true, that doesnt make false though.

The reason the guide isn't updated for newer things is because all three guides I have written (the ones in my signature) were written before this board changed to IP v4, and when I edit them, they often completely break formatting and make me have to re-write the ENTIRE guide over and over.

 

As for what I wrote, I have indeed verified that what I wrote was true... it's just that talking about 2GB of vRAM in 2020 is far too dated. But I cannot edit the guides properly. I am aware that caching in vRAM is a totally different thing. I am aware that GDDR3 is non-existent now and GDDR5 should be compared to GDDR5X and GDDR6. I am aware that not only game settings but certain things like using NVENC inflate vRAM usage. I know that raising resolution in some games influences vRAM more (but this is because they use higher resolution assets at those higher resolutions; you can achieve similar effects without forcing higher resolution in games that let you inflate shadowmap resolutions to stupid levels, like Alien: Isolation).

 

If I had to update this guide (and I could), I would need to re-write it in an entirely fresh post, edit the original post here to link to the new guide, and get a mod to pin the new guide and unpin the old one. And I... just don't really want to? It is indeed a lot of work for quite little thanks. Also, something not in the guide is the memory subsystem of GPUs, which affects memory performance: there are situations where, even if memory bandwidth is exactly the same, you can get large performance differences, beyond what nVidia's rapidly advancing delta colour compression can provide. Turing's heavily improved texture streaming capability is one such benefit that doesn't translate mathematically.

Clevo P870DM3 (Eurocom) | i7-7700K | 32GB DDR4 2400MHz | GTX 1080N SLI | 850 Pro 256GB | 850 EVO 500GB M.2 | Samsung PM961 256GB NVMe | Crucial M4 512GB | Intel 8265ac | 120Hz Matte screen | 780W PSU

 

THE INFORMATION GUIDES: SLI INFORMATION || vRAM INFORMATION || MOBILE i7 CPU INFORMATION || Maybe more someday

  • 2 weeks later...

Think something is seriously wrong with my GPU's vRAM now. The p106 used almost all 6GB of its GDDR5 RAM in Minecraft. The p104 I upgraded to has 8GB of GDDR5X RAM, and barely used 2GB of its vRAM. Also AA lags my game to oblivion, even though I have all that vRAM. My Heaven benchmark went up over 30% with the upgrade, but still.

 

Another question: what's the difference between the video clock and the graphics clock?

My Linux nVidia driver and Heaven say I'm getting a 1887MHz graphics clock. In Green With Envy, it says the 1887 clock is "video" and the "graphics clock" is 1704 boost / 1695 clock speed.


On 5/8/2020 at 3:50 AM, Snowarch said:

think something is seriously wrong with my gpu's vram now.  p106 used almost all 6gb of gddr5 ram in minecraft. my p104 i upgraded to has 8gb of gddr5x ram, and bearly used 2gb of its vram. also aa lags my game to obliveon, even tho i have all that vram. my heaven benchmark went up over 30% with the upgarde, but still.

 

another question, whats the diffrence between video clock and graphics clock?

my linux nvidia driver and heaven say im getting speeds of 1887 mhz graphics clock.  in green with envy, its saying the 1887 clock is "video" and the "graphics clock" is 1704 boost/ 1695 clock speed. 

Can't explain Minecraft's shoddy java-ness to you.

 

The "video" clock might be the old "shader" clock, that stopped being adjustable around Fermi IIRC. I am only guessing, and I am unfamiliar with whatever linux tool you are using.


  • 3 weeks later...

My vRAM is hitting over 90C with fans set to 75%, and the GPU core is sitting at like 73C. Why is my vRAM running so hot, and what can I do? I really don't want to prematurely kill my card, but I also don't want to wait 6 months for an RMA. I have no idea how long the RMA would take, but these days you never know; I waited over a month for my new monitor to arrive.

  • 3 months later...

I just have a few quick questions, at the risk of sounding stupid.

 

I want to play Cyberpunk 2077 when it comes out. I have a GTX 980 with 4GB of VRAM. This is enough to meet the minimum requirements, but 2GB short of the recommended. Is buying a whole new graphics card with 6GB of VRAM my only option if I want the recommended level of power? I was hoping I could just buy a decent 2GB video card and add it on. Is this possible at all?

I read that DirectX games CAN utilize multiple graphics card's VRAM. Is this true? Would that work?

 

Thank you!

1 hour ago, noggen said:

I just a have a quick few questions at the risk of sounding stupid.

Better to get educated at the risk of sounding stupid, than to actually be stupid.

1 hour ago, noggen said:

Is buying a whole new graphics card with 6GB of VRAM my only option if I want the recommended level of power?

Yes.

1 hour ago, noggen said:

I was hoping I could just buy a decent 2gb video card and add it on. Is this possible at all?

That is not how VRAM works. That's like saying "can I just connect another PC to mine with a USB cable to increase the system RAM". (not being condescending, just giving an example of approximate equivalence)

1 hour ago, noggen said:

I read that DirectX games CAN utilize multiple graphics card's VRAM. Is this true? Would that work?

It's not really implemented in any games, and there is no indication that it ever will be. The process is incredibly inefficient in all but a few non-gaming workloads, (will actually reduce framerates significantly) requires enterprise grade GPU hardware, (like the $2000-$8000 Quadro GPUs) and requires software implementation of the process.

1 hour ago, noggen said:

Thank you!

Honestly, I wouldn't go for any GPU with less than 8GB of VRAM anymore, as 8GB cards are currently available in the $150 range when new. (RX 570 8GB, or $200 for similar performance as your GTX 980 on an RX 5500XT)

CPURyzen 7 5800X Cooler: Arctic Liquid Freezer II 120mm AIO with push-pull Arctic P12 PWM fans RAM: G.Skill Ripjaws V 4x8GB 3600 16-16-16-30

MotherboardASRock X570M Pro4 GPUASRock RX 5700 XT Reference with Eiswolf GPX-Pro 240 AIO Case: Antec P5 PSU: Rosewill Capstone 750M

Monitor: MSI Optix MAG272CR Case Fans: 2x Arctic P12 PWM Storage: HP EX950 1TB NVMe, Mushkin Pilot-E 1TB NVMe, 2x Constellation ES 2TB in RAID1

https://hwbot.org/submission/4497882_btgbullseye_gpupi_v3.3___32b_radeon_rx_5700_xt_13min_37sec_848ms

On 5/29/2020 at 1:54 PM, Squanchtendo said:

My VRAM is hitting over 90C with fans set to 75% and the GPU core is sitting at like 73C. why is my VRAM running so hot and what can i do i really dont want to prematurely kill my card but also dont want to wait 6 months for the RMA. I have no idea how long the RMA would take but these days you never know waited over a month for my new monitor to arrive.

Your GPU core is not where your vRAM is, and it's possible your card has bad VRM cooling and/or the thermal pads used to pull heat from your vRAM have degraded in quality. Listing which card you have would help in figuring out whether the cooler is the problem.

4 hours ago, BTGbullseye said:

Better to get educated at the risk of sounding stupid, than to actually be stupid.

Yes.

That is not how VRAM works. That's like saying "can I just connect another PC to mine with a USB cable to increase the system RAM". (not being condescending, just giving an example of approximate equivalence)

It's not really implemented in any games, and there is no indication that it ever will be. The process is incredibly inefficient in all but a few non-gaming workloads, (will actually reduce framerates significantly) requires enterprise grade GPU hardware, (like the $2000-$8000 Quadro GPUs) and requires software implementation of the process.

Honestly, I wouldn't go for any GPU with less than 8GB of VRAM anymore, as 8GB cards are currently available in the $150 range when new. (RX 570 8GB, or $200 for similar performance as your GTX 980 on an RX 5500XT)

Good answer, I agree with what you've said. I would say 6GB is still a very solid baseline for vRAM right now, but I don't know if that will hold up in 2 years, and as much as "future proofing lol, can't do it", I don't recommend GPUs that will become a crippling factor in any time under 2 years (especially in notebooks, where they cannot be swapped out).


