Posted August 16, 2014 (edited) · Original Poster

Ok. I did an SLI guide and now it's time to do a vRAM/memory bandwidth guide. A lot of people seem to be confused about vRAM in general. Well, here we go. Let's clear up some misconceptions about vRAM!

Assumption: You need a powerful video card to make use of a lot of vRAM. FALSE. vRAM usage is independent of GPU usage. For example, here is a screenshot of Call of Duty: Ghosts using 4GB of vRAM while happily using ~15% of my video cards, sitting at the main menu. Ghosts is a bad example, however, so here is also a screenshot of Titanfall using only 60% of my GPU while happily gobbling up 3.9GB vRAM. The second screen was in the shot to show RAM usage to someone, so you can ignore that.

Assumption: vRAM amount is related to the memory bus width. Partially true. vRAM amount is only loosely related to the memory bus width. 128/256/512-bit memory buses have RAM sizes like 1GB, 2GB, 4GB, 8GB, etc. 96/192/384-bit memory buses have RAM sizes like 768MB, 1.5GB, 3GB, 6GB, etc. You can have 4GB vRAM on a 128-bit memory bus (like HERE) and 6GB vRAM on a 192-bit memory bus (like HERE). On the flip side, HERE is a 384-bit memory bus card with only 3GB vRAM. This assumption will be expanded on in its own section further down, as there is a lot of information to add.

Assumption: You need huge amounts of vRAM if you're going to use multiple monitors or high resolution screens. Partially true. You do not NEED it. It will help, but not in the way you may think. If you're considering gaming, especially fullscreened, your vRAM usage depends about 95% on game settings and resolution matters very little; 2GB would work easily for triple-monitor 1080p gaming (with games prior to 2014, at least).

Assumption: A lot of vRAM being used must mean the textures are great. FALSE. There's texture size and texture quality. Texture size is the resolution at which the textures are rendered. Texture quality is how well they are drawn/rendered. Just because a game has very large textures does NOT automatically mean it's drawn or rendered well, and thus doesn't automatically mean it looks brilliant. Also, things like shadow maps can use lots of vRAM, and thus use up your vRAM without actually improving texture quality.

Assumption: Adding two cards gives me double the vRAM! FALSE. SLI and CrossfireX copy the content of their memory across the cards, therefore your vRAM amount does not increase. Your memory access bandwidth, however, does (nearly) double.

Assumption: GDDR5 is always better than GDDR3. FALSE. Memory bus width has to be taken into account here. GDDR3 doubles memory bandwidth, and GDDR5 doubles it again. A 1000MHz mem clock on a 512-bit mem bus GDDR3 card is EQUAL TO a 1000MHz mem clock on a 256-bit mem bus GDDR5 card. Most new cards don't touch GDDR3 anymore, but I'm including this anyway to clear up anything people may be misinformed on.

Assumption: I need tons more vRAM to turn my Anti Aliasing up. Partially true. Normal levels of AA, like 4x MSAA or TXAA, are not going to use a lot of extra memory... 100MB here, 200MB there, for the most part. Anything that re-samples textures will do this. FXAA and MLAA should not boost memory drain at all, as they do not mess with textures directly... SMAA is a post-process form of AA but has a SLIGHT impact on vRAM. High levels of CSAA, however, can use a lot of vRAM. 16xQ CSAA can use 600MB of vRAM and 32x CSAA is even more (for example, my Killing Floor uses 2.1GB vRAM due to 32x CSAA at 1080p).
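To put some rough numbers on that, here is a quick back-of-the-envelope sketch (in Python) of how much extra memory multisampled render targets need at 1080p. It assumes 4 bytes of colour plus 4 bytes of depth/stencil per sample on a single render target, with no framebuffer compression; real games use several targets, drivers compress MSAA surfaces, and CSAA coverage samples store less than full samples, so treat these purely as ballpark figures.

```python
# Rough back-of-the-envelope sketch (my own assumed numbers, not from any game):
# estimate the vRAM a multisampled render target needs at a given resolution,
# assuming 4 bytes of colour + 4 bytes of depth/stencil per sample and no
# framebuffer compression.

def msaa_target_mb(width, height, samples, bytes_per_sample=8):
    return width * height * samples * bytes_per_sample / (1024 ** 2)

for samples in (1, 4, 8):
    size = msaa_target_mb(1920, 1080, samples)
    print(f"1080p, {samples}x samples: ~{size:.0f} MB per render target")

# Prints roughly 16 MB (no AA), 63 MB (4x) and 127 MB (8x) for a single
# target - which is why "normal" MSAA only costs 100-200MB across a frame's
# worth of buffers, while very high sample counts (and anything that
# resamples every intermediate buffer) can eat far more.
```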
I DO NOT KNOW how SLI-enabled AA levels count toward vRAM usage (64x CSAA possible). Due to the fact that AA is done on the sacrificed card itself and SLI is in fact "off", it might use the vRAM buffer of the second card for AA in that way. I was going to test this one day, but I'm not bothering unless nVidia's Pascal GPUs can use CSAA again, as Maxwell GPUs cannot. I do not know how MFAA works with the new Maxwell GPUs with respect to vRAM usage, and I will not know unless someone buys and sends me 970M or 980M GPUs or a desktop with Maxwell cards (which is not going to happen lel). I DO know that MFAA does not work with SLI, so forcing high levels of AA is now officially dead. If you want to volunteer for testing this however, and have the hardware to test with, you can PM me here or send me a tweet.

Assumption: There's no way I'll need 4GB or more for 1080p! FALSE. I explain a lot above and below about how vRAM is used up, and 4GB can definitely be used up at 1080p. I will also point out that games which lack sufficient compression techniques in their code CAN and WILL use up 4GB or more of your vRAM at 1080p, regardless of whether they have advanced lighting and reflections and high quality textures (separate from high resolution textures) or not. As is said elsewhere in this guide, game resolution is NOT the main factor in how much vRAM you use. This is not to say that most games that use 3GB and 4GB at 1080p (like Evolve, or CoD: AW) actually have a need for it... they're unoptimized. No denying that. But it doesn't mean that it's going to hurt to have enough vRAM to satisfy their needs. Remember: more vRAM is better than less.

What does vRAM size have to do with gaming?

Video games use textures. Textures have a size. I already explained this a little above, but I'll be a bit more thorough here. Texture size is pretty much native to a game, usually. Most games won't allow you to make much of a change to the texture sizes, but some do. Titanfall is an example of one such game. Modding Skyrim is another example. A large texture size simply uses up more vRAM, and does little else in terms of performance impact in most cases. Some games are coded badly though ^_^. Anyway, texture size improvements affect how crisp things are more than anything else. Just because your textures are crisp does NOT mean they are good; please remember this. You can have huge, crisp, badly drawn and badly rendered textures, using 4GB of vRAM or more and looking like a piece of poo. Like Call of Duty Ghosts. Or try comparing Titanfall to BF4. Worlds apart, but the former uses more vRAM. In other words, vRAM is just a per-game thing. Adjusting texture quality actually doesn't change vRAM usage much either. Only adjusting the size does. BF4 on lowest graphics uses ~2.2GB vRAM for me just as it does on ultra.

Next, you have something slightly different. vRAM can be dedicated to more than just texture size! One such game which does this is Arma 2. There, one can adjust the amount of video memory the game is allowed to use. Oddly enough, the highest allowance is "default", which caps out at about 2.5GB of used memory. "Very high" is what most people would probably use, but that's a 1.5GB limitation for that game. It applies to the DayZ mod for Arma 2 as well. What that does is reduce popping by keeping more textures in memory. So when you zoom in with a powerful rifle, you don't have to wait for the surroundings to catch up as much. Giving the game more vRAM does not improve framerates at all, but the texture pop-in reduction etc is nice.
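Tying the texture-size point above to some made-up but representative numbers, here is a minimal sketch assuming uncompressed 4-bytes-per-texel textures with a full mip chain; real games use block compression (BC1/BC3 and friends), which cuts these figures by roughly 4-8x, and the texture count is purely illustrative.

```python
# A minimal sketch of why texture *size* (resolution) is what eats vRAM.
# Assumes uncompressed 4-bytes-per-texel textures with a full mip chain
# (~33% extra); block compression in real games divides these numbers by
# 4-8x, and the count of 100 textures is an assumption for illustration.

def texture_mb(side, bytes_per_texel=4, mip_overhead=1.33):
    return side * side * bytes_per_texel * mip_overhead / (1024 ** 2)

for side in (1024, 2048, 4096):
    print(f"{side}x{side}: ~{texture_mb(side):.0f} MB each, "
          f"~{texture_mb(side) * 100 / 1024:.1f} GB for 100 of them")

# Roughly 5 MB, 21 MB and 85 MB per texture: every doubling of texture
# resolution quadruples the memory, which is why texture size dwarfs most
# other settings when it comes to filling the vRAM buffer.
```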
Also, shadow resolution, number of dynamic light sources, reflections, etc can all use up extra video memory. This way, even a fully-optimized game which looks worse than Crysis 3 can end up using more vRAM than Crysis 3, just by bumping the shadow resolutions to high levels and increasing the amount of dynamic lighting and reflections available, especially if the game is more open-world and has many more objects loaded in to be affected by said lights or cast shadows or whatever. Draw distances also take up extra vRAM in things like open world games. So don't compare apples-to-apples "texture quality" as an indicator of good vRAM usage, but please DO compare everything else. An open world game is likely to use a bit more than a corridor shooter. But the corridor shooter will likely have better textures. On the other hand, for a game like Shadow of Mordor, where bumping the textures is the MAIN drain on the system's vRAM buffer, you can quite clearly understand that it's just unoptimization there. But a game like Skyrim with very high-res texture mods and lighting overhauls eating up a solid 3GB etc is fine due to the large open-world nature, even though it looks worse than anything you'd find in Crysis 3, etc.

Bonus: What happens if I don't have as much vRAM as a game asks for at certain settings?

I have to add this section because of all these new games where developers have decided that everybody owns two Titan Blacks with 6GB vRAM and will just throw uncompressed everything into vRAM even though the quality isn't all that great (stares at Shadow of Mordor, The Evil Within and every id Tech 5 engine game ever). Now, first and foremost, some games will lock some options away from you and you won't ever see them, unless you hack them in (though this is rare). Wolfenstein: The New Order does this; if your GPU is under 3GB in vRAM size you will never see "ultra" textures in the options menu. Forcing it on usually results in stuttering and crashing or generally undesirable behaviour on an extreme scale. For some proof, you can read this article about what happens when forcing ultra textures on Wolfenstein: TNO using a 2GB 770.

Next, and the most common option, the game will allow you to turn on the settings, and you may experience some playable framerates, especially in fullscreen with no second monitors attached so you cut down on OS-used vRAM, but the minimum framerates may be quite low and some stutter may be apparent (depending on how much extra vRAM you would need) and you might even crash the game. When the vRAM limit is hit, Windows (or the drivers; I'm not sure) starts compressing what's in the vRAM buffer and tossing out things used for caching. Typically, you can go a couple hundred MB above what the game would require if you had a higher vRAM card without issues, but there WILL come a point where you can compress no longer, and you will end up using virtual memory of some kind, and performance will decline. It may not be considered "playable" for some people (depending on what happens), but others will deal with it/not turn up settings. So be wary of this.

Finally, Windows, which often needs vRAM for its desktop, may bug you about having "out of memory" errors and ask if you wish to switch off Aero (Vista and 7; never seen this in 8, though I never used 8 with under 4GB of vRAM) and may pull you out of your game to do so.
This will come with framedrops from a game which cannot compress vRAM usage any longer. The reason this happens is because the rest of the texture data must be held in RAM or even on the Hard Drive/Solid State Drive, and accessing these (yes, even the SSD) is much slower than pulling the raw information from the vRAM buffer, and is what usually causes the slowdowns and stutters. To this effect, I ASSUME that extremely fast & low latency system memory and installation of the game on a SSD as well as very high GPU memory bandwidth will likely reduce the stuttering/slowdown occurrences due to faster access of the data not stored in the vRAM buffer, and the higher GPU memory bandwidth means it can empty/refill the vRAM buffer faster, and thus the game won't need to "wait" as often. So basically, it may work or it may not work, but no matter how it does, if you try to force buffer information built for more vRAM than you own into a card, it will decrease the performance of the game, though the game may very well still be playable. And now resolution, Anti Aliasing and operating systems Hide contents Now, onto something more impacting. Resolution. Or in this case, not so impacting at all. Rendered resolution of a game has about 5% of an impact on how much vRAM the game is using. The only game I've seen make a large jump in used vRAM from a resolution increase is Watch Dogs, which (according to benchmarks) goes from ~3100MB vRAM to ~3800MB vRAM bumping from 1080p to 4k. Not a big amount either; considering that's 4x as many pixels. ~700-750MB vRAM increase is tiny, and the game was still easily playable on the 3GB vRAM 780Ti cards they were benchmarking with too. Most other games have a tiny impact too. Upscale Dark Souls 2 to 4k? Use about 400MB extra vRAM (caps under 1.4GB at 4K even). Sniper Elite v2 at 1080p with 4.0 SSAA (4k res downsample) + "anti aliasing" set to "high" doesn't crack 2GB of vRAM either. BF4 on ultra uses 2.2GB and doesn't pass 2.5GB if you turn the resolution scale to 200% in-game either (unless you turn on AA as well; in which case it totals near 3GB or less). Honestly? Most games up until 2013 are going to be fine with 2-3GB of vRAM, even at 4K or triple-monitor 1080p gaming. 2014 and onward unoptimized AAA ports though appear to require ridiculous amounts of vRAM, so you may wish to REALLY consider that. Please note that using SSAA is akin to actually bumping the resolution up manually. 4x SSAA at 1080p would accurately show the vRAM usage of running a game at 4K resolution. PLEASE NOTE HOWEVER: Multisample-type AA at higher resolutions resamples those higher resolutions, so the vRAM increase for using is a bit bigger (though proportionally so). This is NOT affected by post-process-type and temporal-type filters such as SMAA 1x, 1Tx, 2x, 2Tx, FXAA, MLAA, etc. SMAA 4x however has a decent amount of multisampling in it, so it may impact vRAM a little more (not as much as 8x MSAA). Next, the operating system you're using has an impact on how much vRAM you're using. WHUT? Yup! Your OS has an impact. Windows 7 uses 128MB of your vRAM minimum, and can use extra vRAM if it needs to and you can spare it (once Aero is enabled). Windows 8 & 8.1 can actually use more vRAM, up to 400MB in total. Switching Aero off like some people do will work to reduce the used vRAM from your desktop, but if you're on Windows 8 or Windows 8.1, switching Aero off without causing some issues isn't possible (that I know of). So be wary of this too! 
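As a rough illustration of why resolution alone barely moves vRAM usage (as claimed earlier in this section), here is a small sketch that only counts screen-sized render targets. The figure of ten full-resolution buffers is an assumption for illustration, not something taken from any particular engine.

```python
# Sketch of why raw resolution barely moves vRAM usage. Assume a renderer
# keeps some number of full-resolution buffers around (back buffers, depth,
# G-buffer and post-process targets) at 4 bytes per pixel each; the buffer
# count of 10 is an assumed figure for illustration only.

def screen_buffers_mb(width, height, buffers=10, bytes_per_pixel=4):
    return width * height * buffers * bytes_per_pixel / (1024 ** 2)

for name, w, h in (("1080p", 1920, 1080), ("1440p", 2560, 1440), ("4K", 3840, 2160)):
    print(f"{name}: ~{screen_buffers_mb(w, h):.0f} MB in screen-sized buffers")

# ~79 MB at 1080p vs ~316 MB at 4K under these assumptions: a few hundred MB
# of difference at most, while the textures (which don't change with
# resolution) stay exactly the same size. That lines up with even the
# Watch Dogs-style ~700MB jump being on the large side for 1080p -> 4K.
```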
Of course, fullscreening a game will remove the desktop's rendering (for that screen only) and free up your system resources some, in case you're vRAM starved with an older card or something with 1-2GB. I also believe that multiple monitors = more vRAM used while sitting at the desktop (via a correlation to number of pixels rendered), but I cannot conclusively prove this and it is harder to find the information (or test it successfully) than one would think. The reason I am unsure is because my vRAM usage seems to change every time I restart my PC and programs. I've seen it idle at 800MB in Win 8.1 and I've also seen it idle at 300MB. Bonus! I recently discovered that opening pictures in windows photo viewer WILL increase your vRAM used, at least on Windows 8 (not tested on Win 7 and earlier OSes; nor on Win 10). So if you find you're vRAM starved and have a couple pictures open and minimized, you should close those. And now about multiple monitors Hide contents So, how do multiple monitors benefit from more vRAM? Well you see, having extra screens uses vRAM even while gaming fullscreened. If you have two screens and fullscreen your game on one, then you're still rendering the desktop on the other(s), and thus using extra vRAM. This is why some games in the past (think back to 1GB cards and Just Cause 2, for example) would pop up and say you're out of memory and ask to change Windows to the basic theme (to save on vRAM). As said above, fullscreening frees up the vRAM for the screens you're taking up, but if you have 3-4 monitors connected you're gonna want some excess vRAM so that neither your games nor windows are starved; far less if you're running in either regular or borderless windowed mode (where it still renders the desktop behind the game on the monitor you're running the game on). PLEASE NOTE THAT THIS IS DIFFERENT FROM GAMING ON MULTIPLE SCREENS. In this case, more is better, and more helps, but is entirely dependent on the games you are playing. If your game is one of these new unoptimized AAA titles like Watch Dogs, which can pull 3GB vRAM easily at 1080p, having three screens connected and running watch dogs maxed out on a 3GB card may run into some memory issues. For a game like BF4 however, which grabs at most 2.2GB of vRAM in DX11 at 1080p using the ultra preset, having two extra screens and a single 3GB card connected would be fine. A 2GB card might run into problems at that point though, while having only one monitor attached would work on the 2GB card in that scenario. Also note: the higher the resolution of the monitor when not running a game is indeed a factor in how much vRAM it uses. Now while 2-3 monitors could use up a decent bit of vRAM (game + 256MB while fullscreened for Win 7), usually it'll get compressed if you say... only have a 2GB GPU and are running a game using 1.8-2GB of vRAM on its own. But of course, the more headroom you have the better, and the less chance you'll get windows giving you an out-of-memory error in your game, as this WILL happen if your game cannot compress vRAM usage anymore for your current settings. This is why people recommend more vRAM for multiple monitors, though they probably blow this reason out of proportions in their heads a little (I've seen people say a weaker 4GB card will do better for multiple monitors than a stronger 3GB card when purely talking about gaming... very rare this would be the case). 
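For what it's worth, here is a very rough floor for how much vRAM just the composed desktop surfaces would take per monitor, assuming double-buffered 4-bytes-per-pixel surfaces. The compositor also keeps per-window surfaces and caches on top of this (which is presumably why idle readings bounce around so much), so these are lower bounds, not predictions.

```python
# Very rough floor for desktop vRAM vs. monitor count: just the composed
# desktop surface(s) at 4 bytes per pixel, double-buffered. This ignores
# per-window redirection surfaces and caches; the "2 buffers per screen"
# figure is an assumption, not a measured value.

def desktop_floor_mb(monitors, width=1920, height=1080, buffers=2):
    return monitors * width * height * buffers * 4 / (1024 ** 2)

for monitors in (1, 2, 3):
    print(f"{monitors} x 1080p desktop(s): ~{desktop_floor_mb(monitors):.0f} MB minimum")

# ~16, ~32, ~47 MB - tiny compared to a game, but every open window adds its
# own surface on top of this, so multi-monitor desktops drift upward from here.
```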
Now as far as gaming on multiple monitors goes: it DOES use a bit more vRAM, but it's only an incremental increase as you saw with resolution. 5760 x 1080 is actually LESS pixels than 3840 x 2160, so if you can run a game at 4k res (or at 1080p with 4x Supersampling), you can run it easier on 3 monitors at 1080p, both vRAM-wise and performance-wise ^_^. In that case, you have less to worry about than the user who is NOT going to use all three screens and only going to game on one and use the others for productivity. Three 1440p screens however, is a HUGE amount of pixels; more than a single 4K monitor would bring. For gaming at that resolution, under no circumstances would I suggest less than 4GB of vRAM if you plan to play any game that uses 2GB at 1080p. 1080p --> 3840 x 2160 may only be 600-800MB of an increase, but 1080p --> 7680 x 1440 would likely use over a full GB extra; and more if you throw even low-level multisample AA on the game, so I'd suggest having more than 3GB as a safety net for this particular setup (or larger). Now, onto memory bandwidth and memory bus and such. You may wanna skip this if you know already and are only here for the vRAM size portion above, but I might as well be thorough if I'm doing this. Spoiler tags save the day! vRAM types & memory clocks Hide contents Usually, cards come with one of three kinds of memory today. GDDR3, GDDR5 and HBM (High Bandwidth Memory). They also have a memory clock, and a memory bus. All of these variables combine for the memory bandwidth. Just like clock speed, more memory bandwidth is a good thing. Games do not usually benefit much from increased memory bandwidth though, so don't expect huge gains from overclocking memory in most games. Some games do, but I don't remember any of their names off-hand. HBM is different from GDDR3 and GDDR5 in mostly physical ways. Calculation-wise it's very similar (as I expand on below), and thus I am not giving it its own section. HBM 1.0 (currently on R9 Fury and R9 Fury X cards) is limited to 4GB. HBM 2.0 will not be. Since googling about HBM provides many articles explaining how it works physically, I will defer to those rather than attempt to explain it again here (if you've noticed, I did not explain about GDDR3/GDDR5's physical makeup more than was necessary). Your memory clock is represented in multiple different ways. There is your base clock, which is usually an exceedingly low number. nVidia gaming-class cards since Kepler (GTX 600/700 series) came out have had a standard of 1500MHz for the desktop lineup in terms of memory speed, and Maxwell (GTX 900 series) has had a standard of 1750MHz. AMD has been using less than 1500MHz for the most part (with 7000 and R9 2xx series) but has bumped the speed to 1500MHz recently (R9 3xx series). This clock speed is not what you're going to be too concerned with; you should be concerned with your effective memory clock. Your effective memory clock depends on the type of video memory you have, which I will explain below: - GDDR3 memory (which you won't find in midrange or high-end cards these days) doubles that clock speed. So a card with 1500MHz memory clock using GDDR3 RAM will have a 3000MHz effective memory clock. - GDDR5 memory (which you will find everywhere in midrange and high-end cards these days) doubles GDDR3's doubler. In other words, it multiplies the clock speed by 4. So a card with 1500MHz memory clock using GDDR5 RAM will have a 6000MHz effective memory clock. 
- HBM memory (only present in three cards right now) also doubles the clock speed, similarly to GDDR3. So a card with a "500MHz" memory clock (like this) will have an effective memory clock of 1000MHz (despite that link ironically claiming the effective clock is 500MHz).

Now there are three ways one usually reads the memory clock from a card with GDDR5 RAM. Let's use the GTX 680 as an example. Some programs and people list the actual clock, which is 1500MHz. One such program is GPU-Z. Other programs list the doubled clock speed, which would be 3000MHz. Those programs are often overclockers such as nVidia Inspector. MSI Afterburner also works on the doubled clock speed, though it does not list the clocks themselves. Then finally, the effective clock speed is often seen in sensor-type parts of programs, such as GPU-Z's sensor page. Please remember which clock your program works with when overclocking. If you want to go from 6000MHz to 7000MHz effective, for example, you would need a +500MHz boost in MSI Afterburner. HBM is read by both GPU-Z and MSI Afterburner at its default clock rate, and is indeed overclockable via MSI Afterburner (though not by Catalyst Control Center). I am unsure of other tools people use for AMD OCing that aren't Catalyst Control Center or MSI Afterburner, but there is a chance it may be OC-able by other programs.

N.B. - Apparently, monitoring software on AMD cards in Crossfire seems to add up the vRAM being used across each card. This results in two 4GB cards showing something like 6000MB of vRAM used and you scratching your head wondering if your PC is in fact Skynet. Don't worry. Just cut the vRAM counter in half (in your head, please don't cut your monitor) and it should accurately reflect how much vRAM is being used in Crossfire.

Next, memory bus and memory bandwidth!

Each card has a memory bus. This memory bus is like the number of lanes in a highway, and the memory speed is how fast the cars travel. Most recent gaming cards from nVidia have a 256-bit memory bus or a 384-bit memory bus. AMD's a bit similar with 256-bit and 384-bit mem bus cards, but their R9 290X has a 512-bit memory bus. Dat be a wide highway, boys. Anyway, memory bandwidth is calculated pretty easily. What you do is take the effective memory clock of a card, multiply it by the bus width and then divide by 8 (because 8 bits = 1 byte). So see those GTX 680 cards with their fancy schmancy 192GB/s memory bandwidth? Here's how you tell if it's true: 256 bits / 8 = 32, and 32 x 6000MHz = 192000MB/s = 192GB/s. Simple, no? It's just as easy for HBM as well. Take the R9 Fury X: 4096 / 8 = 512, and 512 x 1000MHz = 512000MB/s = 512GB/s.

Now as I said before, neither the memory clock nor the memory bus alone tells the whole story. AMD's R9 290X has a 512-bit memory bus but only a 5000MHz memory clock, whereas nVidia's GTX 780Ti has only a 384-bit memory bus but a 7000MHz memory clock! So it works out like this:

290X = 512 / 8 = 64, and 64 x 5000MHz = 320000MB/s = 320GB/s
780Ti = 384 / 8 = 48, and 48 x 7000MHz = 336000MB/s = 336GB/s

So here we see that even though the bus width is larger on AMD's bad boy, the bandwidth is in fact less due to significantly slower clock speeds. Now you know not to buy a card just because it's got a huge memory clock over the other, or in reverse, because it has a huge memory bus over the other. Did you know the GTX 285 from like 7 years ago had a 512-bit, GDDR3 memory setup? When AMD brought out GDDR5 and nVidia hadn't moved to it yet, they competed by increasing the bus width a bunch, and it was able to compete easily. Interesting tidbit few remember =D.
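Here is the same arithmetic from this section wrapped into a tiny helper, using the guide's own numbers: the per-type effective-clock multiplier (x2 for GDDR3, x4 for GDDR5, x2 for HBM) and bandwidth = (bus width / 8) x effective clock.

```python
# The guide's bandwidth arithmetic as a small helper: effective clock is the
# base memory clock times the per-type multiplier, and bandwidth is bus
# width / 8 bytes times the effective clock in MHz, giving MB/s.

MULTIPLIER = {"GDDR3": 2, "GDDR5": 4, "HBM": 2}

def bandwidth_gbs(bus_bits, base_mhz, mem_type):
    effective_mhz = base_mhz * MULTIPLIER[mem_type]
    return (bus_bits / 8) * effective_mhz / 1000  # MB/s -> GB/s

print(bandwidth_gbs(256, 1500, "GDDR5"))   # GTX 680:   192.0 GB/s
print(bandwidth_gbs(512, 1250, "GDDR5"))   # R9 290X:   320.0 GB/s
print(bandwidth_gbs(384, 1750, "GDDR5"))   # GTX 780Ti: 336.0 GB/s
print(bandwidth_gbs(4096, 500, "HBM"))     # R9 Fury X: 512.0 GB/s
```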
nVidia Maxwell GPUs don't quite follow the above formulas. They have memory bandwidth engine optimizations that surpass the rated bandwidth they have. nVidia touted this as a feature, but it was proved during the GTX 970 scandal when users on notebookreview took 980M cards (256-bit, 5000MHz memory) and 780M cards (256-bit, 5000MHz memory) and tested them using the CUDA benchmarker test designed to prove the 970 faulty. The result was that Maxwell GPUs had a higher actual bandwidth than was mathematically determinable using memory bus and clock speeds. As far as I remember, this is on average a 15% bonus in memory bandwidth (minimum of 10%). A 224GB/s card like the GTX 980 would actually have something like 257GB/s. The GTX Titan X and GTX 980Ti with 336GB/s should actually have near 386GB/s of bandwidth. I do not know if this affects all types of information that is usually stored in vRAM, however I must note that it has a distinct benefit and closes the gap with the fabled "HBM 512GB/s", especially with memory overclocks on the nVidia cards. 336GB/s to 512GB/s is more of a far cry than 386GB/s to 512GB/s, and if someone manages to OC a 980Ti or Titan X's memory to say... 7400MHz from 7000MHz? The bandwidth jumps to 355.2GB/s mathematically, and then to around 408.5GB/s with Maxwell's speed improvements.

Now, there's another important thing you should know. There is a difference between a vRAM bottleneck and a memory bandwidth bottleneck. Some games are not designed to use more than a certain amount of vRAM, like Crysis 1. Instead, they make heavy use of the memory bandwidth on your card to keep the relevant information in your vRAM buffer. You can tell if you need a memory bandwidth increase to help a game if the game runs your memory controller's load at high percentages. To check this, GPU-Z could possibly be used. It's important to note that you CAN run into a memory bandwidth choke without running out of vRAM, and vice versa.

What I discovered is that increasing memory bandwidth does not do a whole lot for games once above ~160GB/s. My GPUs default to 160GB/s, and it's easy for me to overclock them to 192GB/s (5000MHz to 6000MHz), and I've never really noticed any games actually benefitting from this. 160GB/s has been in use for years; the GTX 285 had 160GB/s as far back as 2009, and bandwidth did not improve massively in the years since, until recently when nVidia's Maxwell GPUs and AMD's R9 300 series opted for larger out-of-the-box memory bandwidths. Generally, it's better to have more memory bandwidth than less, but it's more of a "slight increase if any" and a "better to have more than less" rather than "this memory OC will give me a solid 5-10fps in <insert game>!" or anything similar. At least, there's no tech RIGHT NOW that will need a lot more memory bandwidth than is currently available, as far as I can see.

Also, I mentioned this above but I'll repeat it again here: SLI and CrossFireX systems for the most part "add" the memory bandwidth for vRAM access. 2-way SLI of 192GB/s cards? 384GB/s. 3-way SLI of 192GB/s cards? A cool 576GB/s. Tossed three 980Ti cards in SLI? Enjoy a sexy ~1TB/s memory access bandwidth. Now, this doesn't affect memory "fill" time (that is still limited to each card's bandwidth, your RAM and your data storage, and likely your paging file too), and the multi-GPU overhead will not allow you to see a true doubling of bandwidth, but the benefits definitely exist. Please note however that if "Split Frame Rendering" with DX12 becomes a thing (or any mode where multiple GPUs act as "one big GPU") then these memory bandwidth improvements are likely going out the window (maybe why they're bumping mem bandwidth now?).
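To make the arithmetic above explicit, here is a small sketch. Note that the ~15% Maxwell "effective bandwidth" bonus and the "SLI roughly adds bandwidth" rule are the figures quoted in this guide (from community testing), not official specifications.

```python
# Re-running the numbers from the paragraphs above. The ~15% Maxwell bonus
# and the SLI bandwidth "adding" are this guide's estimates from testing,
# not official figures; this just makes the arithmetic explicit.

def rated_gbs(bus_bits, effective_mhz):
    return (bus_bits / 8) * effective_mhz / 1000

def maxwell_effective_gbs(bus_bits, effective_mhz, bonus=0.15):
    return rated_gbs(bus_bits, effective_mhz) * (1 + bonus)

print(f"{rated_gbs(384, 7000):.1f} / {maxwell_effective_gbs(384, 7000):.1f}")  # 336.0 / 386.4 GB/s (980Ti stock)
print(f"{rated_gbs(384, 7400):.1f} / {maxwell_effective_gbs(384, 7400):.1f}")  # 355.2 / 408.5 GB/s (980Ti memory OC)
print(f"{3 * rated_gbs(384, 7000):.0f}")                                        # ~1008 GB/s aggregate access, 3-way 980Ti SLI
```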
Extra: Memory bus + mismatched memory size section

Now, here's the funny part. Since writing this guide I was told that I was wrong about a couple of things. I've updated and fixed it, and you can see how stuff works down below. But here's where things get tricky. Apparently, there should technically be a direct, hard limit to the size of memory chunks one can use on a card depending on the memory bus. By this rule, cards like the GTX 660Ti should not have 2GB of vRAM, but instead ought to have 1.5GB or 3GB of vRAM attached to them. Instead, nVidia gets 2GB onto it somehow. This apparently could have been done one of two ways. The first way, as described to me, would be to use mismatched-yet-compatible memory chip sizes to get ALMOST as much vRAM out of the cards. In this case, the GTX 660Ti could use a 768MB chip + a 1152MB chip, totaling 1920MB of memory. This is apparently NOT what nVidia uses. What nVidia apparently uses is a bit more complex. Basically, a 192-bit memory bus has three blocks of 64-bit memory controllers, and to get 1.5GB of vRAM on them you would add 512MB of vRAM to each 64-bit mem bus, but nVidia adds an extra 512MB block to one 64-bit block. The thing about this "trick" is that not all of the memory works at the full bus speed. This means that only the first 1.5GB of vRAM runs at ~144GB/s, and the last 512MB block runs at only ~48GB/s, due to the asymmetrical design. This is present in other cards as well from nVidia, such as the GTX 650Ti Boost, GTX 560 SE, GTX 550Ti and the GTX 460 v2. So hey, the more you know right? Proof of this happening. I'd like to point out that the GTX 550Ti has a different layout to the 660/660Ti (and possibly the rest of the cards I've listed that have mismatched memory bandwidth), so thanks to a helpful forum user, I've got a picture to better explain the difference right here. The article already linked does explain it, but not well enough in my eyes.

The good thing about this is that overclocking your memory will still benefit your transfer rate even with such a memory configuration, as the memory bus works with the memory controller for bandwidth. But as most benchmarks pointed out, the 660Ti was able to perform fairly similarly to the 670 in a lot of games, which proves that memory bandwidth is nowhere near as important for gaming as clock speeds are. But in the games where it DOES matter, even the weaker GTX 760 has been shown to pull ahead of the 660Ti by a little bit, according to the sheer number of people who tell me that the 760 beats the 660Ti in some games but not all. Now I know the reason. Go figure huh? So what can I say here? If you see memory sizes that don't match the memory bus from nVidia, now you know what it be. If you can, get matched memory =D.
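Here is the 660Ti arithmetic from the paragraph above made explicit, using the card's stock ~6008MHz effective memory clock. This is just the peak-rate math for each segment; how the driver actually schedules accesses across the mismatched segment is more complicated.

```python
# The 660Ti numbers from above, made explicit: a 192-bit bus is three 64-bit
# controllers. The 1.5GB striped across all three controllers sees the full
# bus, while the extra 512MB hanging off a single controller only ever sees
# 64 bits of it. Peak-rate arithmetic only; real access patterns differ.

EFFECTIVE_MHZ = 6008  # stock GTX 660Ti effective memory clock (1502MHz x4)

def segment_gbs(bus_bits, effective_mhz=EFFECTIVE_MHZ):
    return (bus_bits / 8) * effective_mhz / 1000

print(f"first 1.5GB (192-bit): ~{segment_gbs(192):.0f} GB/s")  # ~144 GB/s
print(f"last 512MB  ( 64-bit): ~{segment_gbs(64):.0f} GB/s")   # ~48 GB/s
```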
And the GTX 970 gets its own section! Hooray!

The GTX 970 is apparently wired like Frankenstein's monster. Let me try and explain. You've no doubt seen multiple articles about how it has 4GB of memory and people are considering it like a 3.5GB card. That's wrong, and you shouldn't do that, but not for the reasons you're probably thinking. See, as I mentioned earlier in the guide, when you hit the maximum vRAM on a GPU, you start to compress memory, and send some of it into virtual memory if you can compress no longer, etc etc. The problem is that when you hit the 3.5GB vRAM mark on the GTX 970, this does not happen. Because the card actually DOES have the extra vRAM, instead of compressing what's in the buffer, it attempts to make use of the ungodly slow last bit of vRAM. This causes the stuttering and slowdowns many people witnessed when playing. It would actually be better for everyone if nVidia were to either use drivers or a vBIOS to lock off the final 512MB forever, or to somehow tweak drivers to force all of Windows' memory requirements (Aero, multiple monitors, etc) into the slow 512MB portion, give non-Windows-native applications access to the fast 3.5GB outright, AND block them from accessing the final 512MB no matter what, causing the games/etc to start compressing memory at the 3584MB vRAM mark and eventually hunt for virtual memory, like it happens at 3072MB with a GTX 780, for example.

For some proof of what happens at the 3.5GB vRAM mark and why it differs from a 3GB card's limit, HERE is a comparison video of a GTX 780 running Dying Light next to a GTX 970 running Dying Light. In the video, if you set it to 720p60 or 1080p60, you can clearly see that the 970's footage is somewhat less smooth than the 780's, despite having more vRAM for the game to make use of. The software also shows only 3.5GB of vRAM being used because, as nVidia said, normal programs can't see the last 512MB as it's in a different partition so to speak; but it's clear the game is attempting to use more, and the transfer to the slow memory bus is the problem (I wish to point out that since this video was released, the game was updated to use less video memory, and you won't find this bug happening anymore, as the game was in an unoptimized state + had extra view distance at the time of that video).

Please note: No other Maxwell card to date has this bug. The 950, 960, 960 OEM (192-bit mem bus, 3GB/6GB vRAM), 980, 980Ti, Titan X, 940M, 950M, 960M, 965M, 970M, 980M and mobile 980 ALL lack this error. Also, please make note: the 970 does NOT share the same issue that the 660Ti and other mismatched-memory cards do. The 28GB/s slow vRAM portion is beyond slow; even the GeForce 7800 had higher than that. Also, the rest of the card only has access to seven total 32-bit memory controller chips (unlike the 8 blocks it should have), meaning the rest of the 3.5GB is actually on a 224-bit memory bus (notice it's falsely marketed as a 256-bit mem bus card?). I also do not know if the access bandwidth for the slow memory doubles in SLI, which would at LEAST alleviate the problem somewhat. The CUDA memory tester that was used to check the 970 does not test differently in SLI; the bandwidth does not double with SLI on/off in that test. So we have no real way of knowing (at least that I can test) whether or not SLI helps. At the very least, increasing the memory clock will help the slow vRAM portion of the card, but that is very little relief.

In this case, I can only recommend the 970 to users who are CERTAIN of what they are going to play, and can say with 100% certainty that they will not approach that 3.5GB vRAM buffer. If you know you will, a 980 (too expensive), 980Ti, R9 290/R9 390, R9 290X/R9 390X or R9 Fury would be a far better buy. If you are the kind of guy who's only gonna play BF4/BF Hardline and some older game titles, then you'll be perfectly happy with a 970, as you'll likely never hit the vRAM bottleneck issue that only applies to the 970. Also, please note: FPS counts do NOT tell the whole story in this case.
As shown in the video I linked, the 970 was actually getting higher FPS most of the time even though the 780 had the smoother experience. Also, according to an article I was recently linked to by PCGamer, it seems that when accessing the slow portion of vRAM, the rest of the memory on the 970 slows down as the seventh 32-bit memory controller that runs in tandem with the others to make up the 224-bit memory bus has to be a "middleman" with the slow portion, and thus memory bandwidth on the card itself is crippled. To fully quote the part of the article (which was originally quoted from PCPer): “if the 7th port is fully busy, and is getting twice as many requests as the other port, then the other six must be only half busy, to match with the 2:1 ratio. So the overall bandwidth would be roughly half of peak. This would cause dramatic underutilization and would prevent optimal performance and efficiency for the GPU.” You can read the full article here, and that might explain other issues with accessing the slow portion with only a few games. Also, a little addendum here. Many users have been claiming to me that most games don't use near 3.5GB of vRAM etc. This is true... but as mentioned earlier in this guide, I would like to point out that running a second monitor, and/or windowed mode/borderless games, ESPECIALLY using Windows 8, 8.1, and 10, will be also competing for space in the vRAM buffer. If your OS is using 600MB without you gaming and you load up a 3GB game in borderless windowed mode, you *WILL* encroach on the 3.5GB vRAM mark, potentially causing hitches and stuttering in your game without actually playing a "4GB game" or whatever. Single-monitor-only gamers need not worry as much, but multitaskers who are gamers? Their worries are different and valid. I should know, because I am one of them myself. If I can help it my game's going borderless windowed. So 970s would be a bad buy for me if I wanted to play games that used anywhere near 3GB of vRAM due to how much else windows itself takes. *NB - I previously recommended the 780 6GB, but since devs seem to no longer code for current + previous gen like they usually did, Maxwell's better tessellation engine and general driver improvements have the 970 pull far enough ahead of the 780 for me to remove it from the direct recommendation, leaving me in the awkward position of requiring to recommend you only stronger (and of course, more expensive) nVidia cards or equal/stronger AMD cards; and even if you could get a 780Ti on the cheap (which would mitigate the performance gap), that card was never made with a 6GB vRAM buffer (likely because the Titan Black, which is still $1000 OR MORE at the time of this writing, was essentially a 6GB 780Ti with a double precision rendering block). FAQ Hide contents If I could afford it, should I get the version of the card with more vRAM? YES. A lot of games lately are coming out using much much more vRAM than is necessary, and someday soon they might actually use all that vRAM for a good purpose. When that time comes, you'll be glad you got that card with 4GB instead of 2GB a year ago =3. Look at me, I thought 4GB of vRAM was overkill for my 780Ms, until Ghosts, Titanfall, Wolfenstein, Watch Dogs.... And more games are going to do the same thing. Then I found out about Arma 2's "default" setting for vRAM, and then Skyrim with my mods uses 2.7GB in fullscreen, etc... eventually I realized I was lucky as hell to have 4GB on these cards, and how much people should go for higher vRAM cards if they could. 
It really can't hurt to have more. So as long as the version with extra vRAM doesn't have another downside, go for it.

What about those dual-GPU cards like the R9 295X2 with 8GB or the GTX Titan Z with 12GB vRAM? Should I get those instead of two separate cards? NO. Cards like that are marketed underhandedly. Dual-GPU cards are almost unanimously listed with the sum of the vRAM on each GPU. When you use them in their intended SLI/CrossfireX formats, the vRAM data is copied across both GPUs, so you end up with 1/2 the listed vRAM in effect. The 295X2 is simply two 290X 4GB cards. The Titan Z is even worse; it's two downclocked GTX Titan Black 6GB cards. Most often too, they have an inexplicable markup in price (the Titan Z launched at $3000). NEVER buy them unless you know EXACTLY what you're doing (in which case you wouldn't be reading this guide). Don't let anybody you know buy them. Stab em if you have to. Do *NOT* let them waste that kinda cash. Even if you make the argument that you could buy 2 of them and get 4-way SLI/Xfire going on a board not normally built for 4-way card setups, the money you save ALONE from not buying most of them could likely get you an enthusiast Intel board and high end CPU anyway.

I have a 4GB R9 290X, should I sell this and get the 8GB R9 390X? I CAN'T SAY. Developers have calmed down from going overboard with the vRAM buffer lately (kind of)... nope, they're still at it. 4GB appears to have become the standard "minimum" amount of vRAM necessary for AAA titles (beyond "low" graphics, anyway), but it also seems to be a bit of a sweet spot except in a handful of games. Shadow of Mordor wants 6GB for the texture quality and draw distance of a 2011 game, for example, Black Ops 3 does not allow "extra" texture quality without 6GB vRAM or more, and it's easy to pass 4GB vRAM on Assassin's Creed Syndicate (especially above 1080p), but 4GB cards play these mostly fine. I STILL recommend the highest amount of vRAM a card offers if you can afford it, but selling a 4GB card to buy the same one with a higher vRAM buffer may not be worth the hassle for you (though the 390X is overclocked a bit on core and a lot on memory). This is on a case-by-case basis and it still might suffice to simply buy a stronger card with lots of vRAM, like a 980Ti.

Windows 10 and nVidia 353.62

Windows 10 has made changes to itself under the hood, including some changes to something called the "WDDM" (the Windows Display Driver Model). This has programs reporting vRAM usage/utilization with errors: 2GB GPUs appearing to use 2800MB of vRAM, benchmarks that use set amounts of vRAM appearing to use double, etc. If you're on Windows 10, as far as I know, there is no way AS OF THIS WRITING to properly check the amount of used vRAM. A large reason why I don't know is because I refuse to use Windows 10 for another few months, mainly because of these kinds of stupid issues, so I haven't run a million and one programs to see if things line up with how they were on Win 8.1. If anyone knows any way to accurately get the readings, you can let me know in the thread and I'll amend it here. Apparently, Windows 10 and the 353.62 nVidia drivers are also stealing from the max amount of vRAM available to GPUs. In particular, about 1/8 of the vRAM seems to be reserved and not usable to the system for whatever reason.
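For a sense of scale, here is the arithmetic for that reported reservation (the 1/8 fraction comes from the reports in this thread, not from any official nVidia or Microsoft documentation):

```python
# Quick arithmetic for the "about 1/8 reserved" report above: what each
# common vRAM size shrinks to if 1/8 of it is held back. The fraction itself
# is taken from the thread's reports, not from official documentation.

def usable_mb(total_mb, reserved_fraction=1 / 8):
    return total_mb * (1 - reserved_fraction)

for total in (2048, 3072, 4096, 6144, 8192):
    print(f"{total} MB card -> ~{usable_mb(total):.0f} MB usable")

# 4096 MB -> 3584 MB, i.e. exactly the "980M with only 3.5GB available"
# behaviour reported below.
```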
A user on NBR reported his 980M only having 3.5GB of vRAM available, and after some further checking, it seems all cards exhibit this behaviour, even when using CUDA programs to check how much memory is available to the system to use. HERE is the post with some proof. If you have a relatively low vRAM card and are suddenly running into stuttering in games that use just about the amount of vRAM you have (let's say... BF4 on a 2GB GPU), then here's your problem. =D. I am not aware of AMD GPUs having vRAM allocation stolen, or whether it's a DX12 thing and not an nVidia driver thing, but if ANYONE can check for me (I don't remember how you'd check on AMD cards exactly, since you don't have access to CUDA, but there is a way I've seen before) and report it, it would be fantastic. Also, I'm not sure if AMD cards experience the vRAM error reporting bug, but I believe they might. Again, anyone who can check, let me know! Most of NBR use nVidia because AMD has no laptop GPUs worth even remembering the existence of these days, so I haven't had much chance to ask people using AMD cards.

Finally, Windows 10 and DirectX 12 ARE NOT GIVING YOU EXTRA VIDEO RAM. There have been some users noticing that their GPUs suddenly have a large amount of shared memory, and are assuming this is DX12. No. It is not. Windows has done this as far back as Vista. It doesn't really seem to do anything in practice, actually. People are blowing this whole DX12 thing out of proportion. Here's my Windows 8.1 system showing the same thing. When these Windows 10 issues are CONFIRMED resolved, I'll remove this section from the guide. Since I'm not using Windows 10, I'll need people to report to me.

Final tidbits and stuff

A lot of games are going to come out soon using a whole lot more vRAM. They're probably not gonna look 300% better even though they use 300% the vRAM, but eventually, 2GB is just gonna be a bare minimum requirement for video cards. I highly suggest that, if you're able, you get the most vRAM you can and just... leave it at that. You're probably not like me, who uses PlayClaw 5 and has system info as overlays all the time so that you know which game uses how much memory, how much % GPU usage, how hot your parts are, etc, so you probably won't be as interested in GPU memory usage as I am. But it's better to have it when the need arises than to not have it. All the people with 2GB GPUs can't even play Wolfenstein with ultra texture size; the option doesn't even appear in the menu unless you have 3GB or more memory. Call of Duty: Black Ops 3 limits 2GB vRAM users to "medium" textures, and 4GB card users can only use "high"... you need 6GB or above to even access "extra" texture quality.

As for memory bandwidth? It's not that important really, at least not for games. Most games don't really care; they're more interested in core processing power. Of course, higher resolution textures will eventually need better memory bandwidth to load quickly enough, but we're not quite at that point yet, from what I've seen. Hell, some games will max out your memory controller while using small amounts of vRAM because they were designed in a time when vRAM was scarce, so they abuse the speed of the memory, like Crysis 1. So while more bandwidth is always better, don't kill your wallet for it. If there's a better, stronger card out there with more vRAM but less memory bandwidth, then go for it, unless you want the other one for a specific reason.
If you're wondering why there'd be a stronger card with more vRAM and less memory bandwidth for the same or cheaper price, remember that nVidia and AMD make different cards. I started writing this guide mainly for the top section, to denounce misinformation people seem to have regarding vRAM and its relation to memory bandwidth, but I figured I might as well just go the full mile and explain as best I can what most people need to know about GPU memory anyway. If I've somehow screwed up somewhere, let me know. I probably have. I'll fix whatever I get wrong. And thank you to everyone who has contributed and corrected things I didn't get right! Unlike my SLI guide, much of the information here was confirmed post-writing. If you want the SLI information or the mobile i7 CPU information guide, they're in my sig! Moderator note: If you believe any information found in this guide is incorrect, please message me or D2ultima and we will investigate it, thank you. - Godlygamer23 Edited April 14, 2016 by D2ultima Somebody fix the forum for old posts pls. Clevo P870DM3 (Eurocom) | i7-7700K | 32GB DDR4 2400MHz | GTX 1080N SLI | 850 Pro 256GB | 850 EVO 500GB M.2 | Samsung PM961 256GB NVMe | Crucial M4 512GB | Intel 8265ac | 120Hz Matte screen | 780W PSU THE INFORMATION GUIDES: SLI INFORMATION || vRAM INFORMATION || MOBILE i7 CPU INFORMATION || Maybe more someday Link to post Share on other sites
Posted August 16, 2014 · Original PosterOP Oh god blue text. I can't read that. It's 5:35am and I have no idea if you're serious or joking . Clevo P870DM3 (Eurocom) | i7-7700K | 32GB DDR4 2400MHz | GTX 1080N SLI | 850 Pro 256GB | 850 EVO 500GB M.2 | Samsung PM961 256GB NVMe | Crucial M4 512GB | Intel 8265ac | 120Hz Matte screen | 780W PSU THE INFORMATION GUIDES: SLI INFORMATION || vRAM INFORMATION || MOBILE i7 CPU INFORMATION || Maybe more someday Link to post Share on other sites
Posted August 16, 2014 It's 5:35am and I have no idea if you're serious or joking . I'm serious, I can't read text that's blue. . Link to post Share on other sites
Posted August 16, 2014 Sorry, you have some incorrect information right in the first couple of points you try to make. I'm not going to break down your entire post, as I'm sure someone else will get to that later, so I'll just tackle the first few things.

You are partially correct that VRAM usage is not 100% restricted by or related to GPU usage, but CoD Ghosts is the worst possible example you can use to make that point. CoD Ghosts will simply fill up whatever VRAM buffer your card has, but it's not actually using the VRAM. Crysis 3 on ultra and at 4K resolution does not use 4GB of VRAM, so there is no way CoD Ghosts is using it. Games also "reserve" additional VRAM that does not actually get used. Just like Windows will reserve more system RAM based on the total amount of RAM you install. The issue with VRAM is that games that actually need high amounts of VRAM will more often than not run into a limitation of the GPU's processing power.

There is no game in existence that needs and will use 4GB of VRAM where you would not first be limited by the processing power of something like a 4GB GTX 760. There are games that need and will use more than 2GB of VRAM, and a 760 can power those games, but in order to provide that extra VRAM the easiest and cheapest solution was to use an extra stick of VRAM. This leads into the second issue with VRAM totals being determined by the bus size. You can't stick 2.5GB or 3GB of VRAM on a 256bit bus, so the only option is to bump it up to 4GB.

VRAM totals are absolutely determined by the bus width. The bus size will determine the quantity in which VRAM can be added. A 128bit, 256bit or 512bit bus will use values of 128, 256, 512, 1024, 2048, 4096 or 8192 MB of VRAM. A 192bit or 384bit bus will use values of 192, 384, 768, 1536, 3072 or 6144 MB of VRAM. So yes you can stick 8192MB of VRAM on a 128bit bus, but you will never in a hundred years be capable of achieving a memory clock rate that will be capable of using all that VRAM on such a small bus size. i7 2600K @ 4.7GHz/ASUS P8Z68-V Pro/Corsair Vengeance LP 2x4GB @ 1600MHz/EVGA GTX 670 FTW SIG 2/Cooler Master HAF-X http://www.speedtest.net/my-result/3591491194 Link to post Share on other sites
Posted August 16, 2014 · Original PosterOP text See? Good. I can update that now. I only noticed real-world applications of it. And yes, I was told that Ghosts probably just filling up the buffer. The point was though, that it can still be used up. Also, seeing as how I can put titanfall on mostly medium settings but insane textures, YES, most of the 4GB vRAM buffer on a 760 can be used up. It is not practical, however, I do agree on that point, but it is possible. I'll add another point about that via screenshot. The main purpose of my showing CoD: Ghosts there was to show that the vRAM buffer can be used up without a huge amount of power being used. The rest of your post however, is getting more technical than I was aiming for. I was more meaning to make layman's terms about this; as in "what you could look for to buy". If you want to pick through the rest of the post, and tell me where I went wrong, go ahead. I won't claim to know everything, and I'm glad to fix what I don't know. Clevo P870DM3 (Eurocom) | i7-7700K | 32GB DDR4 2400MHz | GTX 1080N SLI | 850 Pro 256GB | 850 EVO 500GB M.2 | Samsung PM961 256GB NVMe | Crucial M4 512GB | Intel 8265ac | 120Hz Matte screen | 780W PSU THE INFORMATION GUIDES: SLI INFORMATION || vRAM INFORMATION || MOBILE i7 CPU INFORMATION || Maybe more someday Link to post Share on other sites
Posted August 16, 2014 · Original PosterOP I'm serious, I can't read text that's blue. Fixed it for you Clevo P870DM3 (Eurocom) | i7-7700K | 32GB DDR4 2400MHz | GTX 1080N SLI | 850 Pro 256GB | 850 EVO 500GB M.2 | Samsung PM961 256GB NVMe | Crucial M4 512GB | Intel 8265ac | 120Hz Matte screen | 780W PSU THE INFORMATION GUIDES: SLI INFORMATION || vRAM INFORMATION || MOBILE i7 CPU INFORMATION || Maybe more someday Link to post Share on other sites
Posted August 16, 2014 Fixed it for you Aw you're so kind. Thank you. . Link to post Share on other sites
Posted August 16, 2014 · Original PosterOP @jmaster299 Decided to tag you in a new post instead If what you said before is true, how did the 192-bit mem bus GTX 660Ti have 2GB of vRAM on it? I saw the trend with the mem bus sizes vs RAM sizes and where you were going, but by your logic, the 660Ti should only be capable of using 1.5GB of that vRAM. Also I covered the game using extra vRAM to prevent texture popping in a further post down the line, I believe. Clevo P870DM3 (Eurocom) | i7-7700K | 32GB DDR4 2400MHz | GTX 1080N SLI | 850 Pro 256GB | 850 EVO 500GB M.2 | Samsung PM961 256GB NVMe | Crucial M4 512GB | Intel 8265ac | 120Hz Matte screen | 780W PSU THE INFORMATION GUIDES: SLI INFORMATION || vRAM INFORMATION || MOBILE i7 CPU INFORMATION || Maybe more someday Link to post Share on other sites
Posted August 16, 2014 Thx for such a big article, that is informational. I'll copy & save it in my facebook notes, in case I'll need to dig deeper on vram theme. | CPU: i7 3770k | MOTHERBOARD: MSI Z77A-G45 Gaming | GPU: GTX 770 | RAM: 16GB G.Skill Trident X | PSU: XFX PRO 1050w | STORAGE: SSD 120GB PQI + 6TB HDD | COOLER: Thermaltake: Water 2.0 | CASE: Cooler Master: HAF 912 Plus | Link to post Share on other sites
Posted August 16, 2014 · Original PosterOP Thx for such a big article, that is informational. I'll copy & save it in my facebook notes, in case I'll need to dig deeper on vram theme. Not a problem. I just want to be sure anybody that sees if I did something wrong tells me so I can fix it. I don't wanna be the cause of spreading misinformation myself, and my SLI guide is a lot more rock solid than this one Clevo P870DM3 (Eurocom) | i7-7700K | 32GB DDR4 2400MHz | GTX 1080N SLI | 850 Pro 256GB | 850 EVO 500GB M.2 | Samsung PM961 256GB NVMe | Crucial M4 512GB | Intel 8265ac | 120Hz Matte screen | 780W PSU THE INFORMATION GUIDES: SLI INFORMATION || vRAM INFORMATION || MOBILE i7 CPU INFORMATION || Maybe more someday Link to post Share on other sites
Posted August 17, 2014 @jmaster299 Decided to tag you in a new post instead If what you said before is true, how did the 192-bit mem bus GTX 660Ti have 2GB of vRAM on it? I saw the trend with the mem bus sizes vs RAM sizes and where you were going, but by your logic, the 660Ti should only be capable of using 1.5GB of that vRAM. Also I covered the game using extra vRAM to prevent texture popping in a further post down the line, I believe. Nvidia was either lying or they were using some sort of trickery to achieve that total. 2048 can not be divided evenly by 192. You end up with 10.6666666. The only way they could have done it was with 768 + 1134, which gets them 1914MB of VRAM, which is not technically 2GB. i7 2600K @ 4.7GHz/ASUS P8Z68-V Pro/Corsair Vengeance LP 2x4GB @ 1600MHz/EVGA GTX 670 FTW SIG 2/Cooler Master HAF-X http://www.speedtest.net/my-result/3591491194 Link to post Share on other sites
Posted August 17, 2014 · Original PosterOP Nvidia was either lying or they were using some sort of trickery to achieve that total. 2048 can not be divided evenly by 192. You end up with 10.6666666. The only way they could have done it was with 768 + 1134, which gets them 1914MB of VRAM, which is not technically 2GB. @jmaster299 There have been some cards with that number. 660Ti has 2GB of vRAM according to GPU-Z screenshots And some other cards have been anomalies too: GTX 650 Ti Boost || GTX 550Ti || GTX 560 SE || GTX 460 v2 That being said, I DID see the trend you mentioned; 320-bit memory buses had vRAM amounts like 1280MB and such, which I recognized as being related to each other, but there have been 192-bit cards with 1GB and 2GB of vRAM listed and sold by nVidia. I am less familiar with AMD's lineup as their mobile offerings are not very good compared to the green side, despite being cheaper, so I don't take too much note of their desktop GPUs' power except for direct power comparisons. Clevo P870DM3 (Eurocom) | i7-7700K | 32GB DDR4 2400MHz | GTX 1080N SLI | 850 Pro 256GB | 850 EVO 500GB M.2 | Samsung PM961 256GB NVMe | Crucial M4 512GB | Intel 8265ac | 120Hz Matte screen | 780W PSU THE INFORMATION GUIDES: SLI INFORMATION || vRAM INFORMATION || MOBILE i7 CPU INFORMATION || Maybe more someday Link to post Share on other sites
Posted August 17, 2014 @jmaster299 There have been some cards with that number. 660Ti has 2GB of vRAM according to GPU-Z screenshots And some other cards have been anomalies too: GTX 650 Ti Boost || GTX 550Ti || GTX 560 SE || GTX 460 v2 That being said, I DID see the trend you mentioned; 320-bit memory buses had vRAM amounts like 1280MB and such, which I recognized as being related to each other, but there have been 192-bit cards with 1GB and 2GB of vRAM listed and sold by nVidia. I am less familiar with AMD's lineup as their mobile offerings are not very good compared to the green side, despite being cheaper, so I don't take too much note of their desktop GPUs' power except for direct power comparisons. GPU-Z says whatever the card tells it to say. Again, it is impossible to divide 2048 by 192, making it impossible for the card to actually have 2048MB of RAM. Companies like that all the time. Its why every single HDD and SSD in existence will claim it has 500GB, when in fact it has 500000MB, which is less than 500GB. i7 2600K @ 4.7GHz/ASUS P8Z68-V Pro/Corsair Vengeance LP 2x4GB @ 1600MHz/EVGA GTX 670 FTW SIG 2/Cooler Master HAF-X http://www.speedtest.net/my-result/3591491194 Link to post Share on other sites
Posted August 17, 2014 Nvidia was either lying or they were using some sort of trickery to achieve that total. 2048 cannot be divided evenly by 192; you end up with 10.6666666. The only way they could have done it was with 768 + 1134, which gets them 1914MB of VRAM, which is not technically 2GB. NVIDIA does indeed use RAM chips with mismatched capacities to achieve this.
Posted August 17, 2014 · Original PosterOP GPU-Z says whatever the card tells it to say. Again, it is impossible to divide 2048 evenly by 192, making it impossible for the card to actually have 2048MB of RAM. Companies lie like that all the time. It's why every HDD and SSD sold as 500GB shows up as less than 500GB in your OS: the manufacturer counts in decimal (500,000,000,000 bytes), while the OS counts in binary and reports roughly 465GB. @jmaster299 I know the SSDs and HDDs and such are a bit odd, and once formatted have even less space, etc. But as for the GPUs: if one uses a sensor program, it should report properly. No matter how you slice it, if I start a game like BF4, run GPU-Z's sensors or Playclaw 5's GPU sensors, etc., and the vRAM usage passes 2000MB, then we've got something. For example, I've never seen my cards pass 4080MB of vRAM being used, and even in the CoD: Ghosts screenshot above you can see it stopped at about 4016MB, so 4096MB is quite likely my actual RAM size. Unfortunately I do not own a 660 Ti or any of the affected cards to personally check, though I would love to do some testing on it. As you said, the maximum RAM limit ought to be 1914MB at best.
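For anyone who wants to watch this in real time themselves, here is a minimal sketch of the kind of sensor polling I mean. It assumes an NVIDIA card with nvidia-smi on the PATH and a driver new enough to support the --query-gpu flags; adjust it for your own setup:

```python
# Rough sketch: poll nvidia-smi once a second and print used/total vRAM per GPU.
# Assumes nvidia-smi is on the PATH and the driver supports --query-gpu.
import subprocess
import time

def vram_usage():
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=memory.used,memory.total",
        "--format=csv,noheader,nounits",
    ]).decode()
    # One line per GPU, e.g. "3912, 4096"
    return [tuple(int(x) for x in line.split(",")) for line in out.strip().splitlines()]

while True:
    for gpu_index, (used, total) in enumerate(vram_usage()):
        print(f"GPU {gpu_index}: {used} MiB / {total} MiB")
    time.sleep(1)
```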
Posted August 17, 2014 · Original PosterOP NVIDIA does indeed use RAM chips with mismatched capacities to achieve this. I find it funny: I wrote the guide, and by writing it I'm learning even more myself. I love tech.
Posted August 17, 2014 NVIDIA does indeed use RAM chips with mismatched capacities to achieve this. Which is exactly what I said in my post. But there is still no way for them to get exactly 2048MB that way. They can get close to it, a little under or over, but never exactly that amount. @jmaster299 I know the SSDs and HDDs and such are a bit odd, and once formatted have even less space, etc. But as for the GPUs: if one uses a sensor program, it should report properly. No matter how you slice it, if I start a game like BF4, run GPU-Z's sensors or Playclaw 5's GPU sensors, etc., and the vRAM usage passes 2000MB, then we've got something. For example, I've never seen my cards pass 4080MB of vRAM being used, and even in the CoD: Ghosts screenshot above you can see it stopped at about 4016MB, so 4096MB is quite likely my actual RAM size. Unfortunately I do not own a 660 Ti or any of the affected cards to personally check, though I would love to do some testing on it. As you said, the maximum RAM limit ought to be 1914MB at best. Stop using CoD Ghosts as an example; the RAM usage totals shown when playing that game are not accurate. Also, you have to remember that GPUs can use a combination of VRAM and system RAM, and that's how you can achieve a "usage" greater than the physical amount of VRAM installed on the card. Also, don't use the info tab in GPU-Z as your basis for how much VRAM is being used, because GPU-Z only displays the number that Nvidia or AMD tells it to display. You would need to do a validation with GPU-Z in order to get that information; that's why the validation function exists, to prove that the base info matches the usage info. If I run dxdiag on my system, it shows an "Approx Memory Total" of 4038MB for my GTX 670, which only has 2GB of VRAM on it. That's how Nvidia can fudge their numbers and claim that a card with 1914MB of VRAM actually has 2GB of VRAM. The only issue is that if a game actually needs to tip into that shared memory pool, you will run into performance problems, like GPU usage dropping to 0, because pulling from shared system memory is always slower than using the onboard VRAM.
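As a purely hypothetical illustration of that dxdiag behaviour (the actual shared pool size depends on your OS and installed system RAM, so the numbers below are examples, not measurements):

```python
# Hypothetical illustration: why dxdiag's "Approx Memory Total" can exceed the
# physical vRAM on the card. Windows adds an OS-managed shared system memory
# pool on top of the dedicated vRAM; the exact split varies from system to system.
dedicated_vram_mb = 2048   # physical memory on the card (example value)
shared_system_mb = 1990    # system RAM the OS lets the GPU borrow (example value)

approx_total_mb = dedicated_vram_mb + shared_system_mb
print(f"dxdiag-style total: ~{approx_total_mb} MB "
      f"({dedicated_vram_mb} MB dedicated + {shared_system_mb} MB shared)")
```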
Posted August 17, 2014 · Original PosterOP Stop using CoD Ghosts as an example; the RAM usage totals shown when playing that game are not accurate. Also, you have to remember that GPUs can use a combination of VRAM and system RAM, and that's how you can achieve a "usage" greater than the physical amount of VRAM installed on the card. Also, don't use the info tab in GPU-Z as your basis for how much VRAM is being used, because GPU-Z only displays the number that Nvidia or AMD tells it to display. You would need to do a validation with GPU-Z in order to get that information; that's why the validation function exists, to prove that the base info matches the usage info. If I run dxdiag on my system, it shows an "Approx Memory Total" of 4038MB for my GTX 670, which only has 2GB of VRAM on it. That's how Nvidia can fudge their numbers and claim that a card with 1914MB of VRAM actually has 2GB of VRAM. The only issue is that if a game actually needs to tip into that shared memory pool, you will run into performance problems, like GPU usage dropping to 0, because pulling from shared system memory is always slower than using the onboard VRAM. @jmaster299 I am fully aware of the shared memory deal. This is why I said to use the SENSORS, as in the real-time reporting of usage. As you said, once you cross the vRAM on the card, it'll significantly slow down. I am also only using the Ghosts screenshot because it is the only recorded proof I have on hand of passing 4000MB of used vRAM while remaining under the 4096MB limit. You need to read what I say. I've seen it cross 4000MB in Titanfall before, which is where I saw the maximum usage of 4080MB that one time. I do not, however, have a screenshot of it; it is simply in my memory. Next, my GPU usually reads on validation processes as having over 6GB of vRAM, just like yours does. My old 280M read as having 3800MB when it was a 1GB card. I know this happens; however, I've never seen vRAM usage in a sensor program pass what was physically on the card, which is again why I mentioned looking at SENSOR programs. Anyway, I suppose I've seen enough proof from both you and Linus to assume that their RAM sizes are close but no cigar and they round it off and fiddle with the vBIOS for the desired effect. I'm not yet sure how I'll update this to fit in the guide, but I'll think of something. Section fixed; full RAM exists, with a bit of a twist. Thanks for the information. Like I said, anything else in the guide you see that's incorrect, let me know.
Posted August 17, 2014 This is a good read on how Nvidia gets 2GB of RAM onto a 192-bit bus. Either approach mentioned there has compromises when utilizing that last 512MB, but that doesn't mean the card can't use the full 2GB it has. http://www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/2
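For anyone who doesn't want to read the whole article, here's the short version of the arithmetic as I understand it from that AnandTech piece; the chip arrangement below follows their explanation of the 2GB 660 Ti:

```python
# Sketch of the two layouts discussed above for a 192-bit bus
# (three 64-bit memory controllers). Amounts are in MB per controller.

# Symmetric: the same amount of memory behind each controller.
symmetric = [512, 512, 512]     # -> 1536 MB total (the "1.5GB" cards)

# Asymmetric (what the 2GB GTX 660 Ti does): one controller carries twice as much.
asymmetric = [512, 512, 1024]   # -> 2048 MB total

print(sum(symmetric), "MB on a symmetric 192-bit layout")
print(sum(asymmetric), "MB on an asymmetric 192-bit layout")
# The trade-off: the first 1.5GB interleaves across all three controllers at full
# speed, while the final 512MB sits behind a single 64-bit controller and is slower.
```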
Posted August 18, 2014 · Original PosterOP This is a good read on how Nvidia gets 2GB of RAM onto a 192-bit bus. Either approach mentioned there has compromises when utilizing that last 512MB, but that doesn't mean the card can't use the full 2GB it has. http://www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/2 And we have found that nVidia did not lie, but still uses tricks. Wonderful! =D Now I really know what to update my guide with. Thanks!
Posted August 18, 2014 And we have found that nVidia did not lie, but still uses tricks. Wonderful! =D Now I really know what to update my guide with. Thanks! Which is what I said: they either lied or used "trickery" to get those numbers. But you have to keep in mind that the exception does not make the rule. The bus width determining the amount of VRAM holds true 99.9% of the time; a couple of one-off cases from Nvidia don't change the rule.
Posted August 18, 2014 · Original PosterOP Which is what I said: they either lied or used "trickery" to get those numbers. But you have to keep in mind that the exception does not make the rule. The bus width determining the amount of VRAM holds true 99.9% of the time; a couple of one-off cases from Nvidia don't change the rule. I know, and if you check the section I added to the guide above, I mentioned both your method and what nVidia did for their other cards like the 550 Ti and 660 Ti, along with the link he gave us. So I think that should be cleared up. Thanks for proving me wrong, though; I'm always happy to make sure I spread as little misinformation as possible. The more people can legitimately prove me wrong, the better, because it improves the quality of guides like this and keeps me from telling people garbage elsewhere.
Posted August 22, 2014 This thread needs to get way more attention! Great stuff, good read. Thanks for the information.
Posted October 10, 2014 · Original PosterOP I'm going to bump this thread because I just updated it and more people should read it.