
ASUS Has Listed the RTX 3080 Ti 20GB and RTX 3060 12GB on Their Service and Support Website

Random_Person1234
15 hours ago, Mark Kaine said:

That's interesting... my way of checking this is (kinda) similar... first I compare what the in-game "estimate" says (if applicable), then I check what Afterburner etc. say... often a very similar number to the estimated max usage. Then I simply crank up settings until it brings me over or near my actual VRAM, and without fail, as soon as I go above it my games will start lagging, hanging, crashing... (in no particular order)

 

Of course this "evidence" isn't enough to convince the naysayers (to be fair, nothing would, probably...) 

 

But anyway, @SkilledRebuilds posted this Afterburner screenshot yesterday in another thread... I haven't seen that option and I cannot activate it; it would be interesting to know how...? (Maybe it's an AMD feature, maybe it's only on certain cards, idk.)

That is not an accurate measurement either, and the stuttering is (probably) caused by GPU bottlenecks unrelated to memory.

The reason looking at something like MSI Afterburner or an in-game tool is inaccurate is that it shows how much memory is filled, not how much memory is actually active.

 

Right now I only have 3 tabs open in Edge and nothing else on my PC. I have barely anything running in the background either. Despite that, my PC reports a memory usage of 8.1GB of RAM.

I know for a fact that my PC doesn't use 8.1GB of RAM right now. Windows (and other programs) say it does, but it doesn't. 

 

The reason Windows says it uses 8.1GB of RAM is that I had a bunch of programs open earlier. The data from those (now closed) programs is still loaded in RAM in case I want to start them again. In fact, while writing this post my memory usage dropped to 7.5GB, so Windows probably did some cleaning in the background.
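A quick way to see this caching behavior yourself is timing two reads of the same large file. This is a rough sketch (the file path is hypothetical; point it at any file of a few hundred MB):

```python
# Rough sketch: the OS keeps file data cached in RAM even after the program
# that read it is done with it. The second read is usually served from that
# cache, which is exactly the "kept around in case it's needed later" memory
# described above. BIG_FILE is a hypothetical path; use any large file.
import time

BIG_FILE = "big_test_file.bin"

def timed_read(path):
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(16 * 1024 * 1024):  # read in 16 MiB chunks
            pass
    return time.perf_counter() - start

print(f"first read:  {timed_read(BIG_FILE):.2f}s")  # from disk (if not already cached)
print(f"second read: {timed_read(BIG_FILE):.2f}s")  # typically much faster: from RAM
```

That cached copy counts toward memory that looks "filled", even though nothing currently needs it.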

 

Let me paint a scenario for you.

Let's say I have 16GB of RAM in my PC. Windows and all background programs use 2GB, so I have 14GB of free RAM.

I then open a game that uses 4GB of RAM right from the start, so I am now using 6GB out of 16GB, with 10GB to spare. At this point, Task Manager reports memory usage accurately: 6GB in use and 10GB free.

 

The problems appear when I close that 4GB game. You'd expect my memory usage to drop back down to 2GB like it was before, but it won't. Instead, Windows thinks "hold on, I have a ton of RAM to spare. Why waste CPU cycles cleaning up data in RAM when I am at no risk of running out? I'll leave some game data in RAM in case LAwLz decides to start that game again soon".

So instead of dropping down to 2GB of memory usage, I might only drop down to, let's say, 4GB. The content of my RAM now looks like this:

2GB of RAM used by Windows and background processes.

2GB of data from the game I just closed and no longer use.

 

Task Manager will report 4GB of RAM in use even though my active programs (like Windows) are only using 2GB. The other 2GB that Task Manager says is in use is just stale data kept around "in case it is needed in the future".
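In toy-accounting terms (just restating the scenario's numbers, not any real API):

```python
# Toy accounting of the scenario above. "Reported" is what Task Manager
# shows; "active" is what running programs actually need right now.
total = 16           # GB of RAM installed
windows_and_bg = 2   # GB actively used by Windows + background processes
stale_game_data = 2  # GB left behind by the closed game, kept "just in case"

reported_in_use = windows_and_bg + stale_game_data
print(f"reported in use: {reported_in_use} GB")              # 4 GB
print(f"actively needed: {windows_and_bg} GB")               # 2 GB
print(f"reclaimable at no real cost: {stale_game_data} GB")  # 2 GB
print(f"reported free: {total - reported_in_use} GB")        # 12 GB
```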

 

 

 

The same thing happens with VRAM in games. MSI Afterburner might report 8GB of VRAM usage, but it might actually be 4GB of active data and 4GB of old data there "just in case it is needed later". Loading data into memory is expensive (in terms of compute power). Flushing data from (V)RAM also costs CPU cycles, and it could cost performance if the same data is requested later (since it needs to be fetched again).

That's why GPUs (and RAM in general) try to evict data as infrequently as possible. The more (V)RAM you have, the more data your computer will keep around "just in case it is needed later".

That's why tools like MSI Afterburner and Task Manager are misleading. They do not report memory in active use; they include inactive data in their measurements.
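If you have an Nvidia card, you can query the same number Afterburner shows through NVML. A minimal sketch using the nvidia-ml-py bindings (assuming a single GPU at index 0):

```python
# Minimal sketch using NVIDIA's NVML bindings (pip install nvidia-ml-py).
# Like Afterburner, this reports how much VRAM is *allocated*, not how much
# of it holds data the GPU actively needs right now.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
gib = 1024 ** 3
print(f"VRAM used:  {info.used / gib:.1f} GiB (allocated, includes 'just in case' data)")
print(f"VRAM free:  {info.free / gib:.1f} GiB")
print(f"VRAM total: {info.total / gib:.1f} GiB")
pynvml.nvmlShutdown()
```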

 


9 hours ago, pas008 said:

I wish to see if these so-called compression techniques actually do work in the majority of situations.

Compression? Not sure why you bring that up; it wasn't mentioned anywhere that I know of yet. Compression doesn't really have much to do with anything here other than reducing the required VRAM. And comparing Nvidia to AMD in terms of VRAM usage is not really possible: there is so much difference at the driver level, and in texture optimization for each game for each vendor, that you could never isolate the effect of compression alone in regards to VRAM usage.


6 hours ago, LAwLz said:

The same thing happens with VRAM in games. MSI Afterburner might report 8GB of VRAM usage, but it might actually be 4GB of active data and 4GB of old data there "just in case it is needed later". Loading data into memory is expensive (in terms of compute power). Flushing data from (V)RAM also costs CPU cycles, and it could cost performance if the same data is requested later (since it needs to be fetched again).

That's why GPUs (and RAM in general) try to evict data as infrequently as possible. The more (V)RAM you have, the more data your computer will keep around "just in case it is needed later".

That's why tools like MSI Afterburner and Task Manager are misleading. They do not report memory in active use; they include inactive data in their measurements.

Also, more specifically, it depends on the memory management technique the game uses. MSI Afterburner can only report on allocated memory (even if looking specifically at the dedicated portion of it), and many games just claim everything you have outright, regardless of usage. "Oh, you have 8GB? Well, I'll take 7.5GB of that, thanks. Don't ask what for."
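A toy sketch of why that happens: many engines grab one big block up front and sub-allocate out of it themselves, so any tool watching allocations sees the whole pool as "used" from the first frame. (Hypothetical class and sizes, not any real engine's API.)

```python
# Hypothetical engine-style memory pool. The engine claims one huge
# allocation at startup; external tools then count the whole pool as
# "used" even though most of it holds nothing yet.
class VRAMPool:
    def __init__(self, size_bytes):
        self.size = size_bytes  # claimed from the driver immediately
        self.offset = 0         # bump-allocator watermark

    def alloc(self, nbytes):
        """Hand out a slice of the pre-claimed pool (no driver call)."""
        if self.offset + nbytes > self.size:
            raise MemoryError("pool exhausted")
        start = self.offset
        self.offset += nbytes
        return start

gib = 1024 ** 3
pool = VRAMPool(int(7.5 * gib))  # "Oh you have 8GB? I'll take 7.5GB, thanks."
pool.alloc(2 * gib)              # what the game has actually loaded so far

print(f"Afterburner-style view: {pool.size / gib:.1f} GiB allocated")
print(f"Actually in use:        {pool.offset / gib:.1f} GiB")
```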

 

Edit:

As a side note, it's much easier to see active memory usage on virtualization platforms. They let the VM claim/allocate as much memory as it is configured with inside the running OS, but the hypervisor only virtually allocates it and only backs as much as is actually active, so you can look at the active working set versus the demanded working set. It's quite interesting to look at the huge gap between them, especially on Windows.


It would be nice to see how they scale; maybe we will see a game that really pushes the VRAM department?

Like how RAGE originally did its MegaTexture system, which used one huge high-resolution texture set for the whole map and made it run worse when it came out. Or at least I think that is how it was.

It would be interesting to see games that do something more and push those areas. But can it run Crysis at 8K?


14 minutes ago, leadeater said:

Compression? Not sure why you bring that up; it wasn't mentioned anywhere that I know of yet. Compression doesn't really have much to do with anything here other than reducing the required VRAM. And comparing Nvidia to AMD in terms of VRAM usage is not really possible: there is so much difference at the driver level, and in texture optimization for each game for each vendor, that you could never isolate the effect of compression alone in regards to VRAM usage.

The compression efficiency of course varies from workload to workload, but the last time AnandTech did comparisons between Nvidia's and AMD's compression, they found that Nvidia's compression on Pascal was way ahead of AMD's. Since then, Nvidia has further improved its compression, but I haven't heard anything about AMD doing the same.

 

In Beyond3D's test, Pascal got a compression ratio of around 2.4:1. The R9 Fury X got around 1.1:1. Higher is better.

Turing is an additional 20% ahead of Pascal in memory compression.

 

I am not sure if things are compressed while residing in the VRAM buffer or if it's just for L2 cache stuff (I don't know much about GPU memory compression), but if things in the buffer are compressed, then it might actually be the case that 8GB of VRAM on an Nvidia card can store the same amount of textures as 16GB of VRAM on an AMD graphics card.

Of course that will differ from workload to workload, but it is very impressive how far ahead Nvidia is (or was, if AMD has improved) when it comes to memory compression.
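To put those ratios in perspective, here is the rough arithmetic on effective bandwidth (illustrative only: the ratios are scene-dependent and apply to compressible buffer data, not everything in VRAM):

```python
# Rough arithmetic: what the measured ratios would mean for effective
# bandwidth if they applied across the board (they don't; they vary per
# workload and only cover certain data).
raw_bandwidth = {"Pascal (GTX 1080)": 320, "R9 Fury X": 512}  # GB/s
ratio = {"Pascal (GTX 1080)": 2.4, "R9 Fury X": 1.1}          # Beyond3D's measured ratios

for card, raw in raw_bandwidth.items():
    print(f"{card}: {raw} GB/s raw -> ~{raw * ratio[card]:.0f} GB/s effective")

# Turing adds roughly another 20% on top of Pascal: 2.4 * 1.2 = 2.88
```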


14 minutes ago, LAwLz said:

The compression efficiency of course varies from workload to workload, but the last time AnandTech did comparisons between Nvidia's and AMD's compression, they found that Nvidia's compression on Pascal was way ahead of AMD's. Since then, Nvidia has further improved its compression, but I haven't heard anything about AMD doing the same.

Oh yes it for sure is, but it's easier to compare required bandwidth than VRAM due to the driver-level optimizations for games. There are actually differences in the rendered image between the two in many AAA titles due to those optimizations; there have been cases where Nvidia has replaced entire sets of textures. That's how the drivers can get so large: you don't actually need that much data for the drivers themselves or really any of the other features.

 

But the better compression Nvidia has is why they have needed less VRAM than AMD. And it's not simply that compression allows lower VRAM usage; in AMD's case, they end up putting more VRAM modules on the card to get a wider memory bus, which has the end result of more VRAM capacity. The larger capacity isn't being added just because it looks better on the spec sheet.

 

Where Nvidia needs a 128-bit bus, AMD's competing option often has 192-bit, etc.

GTX 1060 6GB: 192-bit (192 GB/s)

RX 480: 256-bit (256 GB/s)

 

Note that the GB/s figures matching the bus widths is pure coincidence; anyone reading the above, please don't get any wrong ideas about the relationship between bus width and bandwidth.
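For the record, bandwidth falls out of bus width times per-pin data rate. Both cards happen to use 8 Gbps GDDR5, and 8 Gbps divided by 8 bits per byte is exactly 1 GB/s per bus bit, which is the only reason the numbers match:

```python
# Bandwidth (GB/s) = bus width in bits / 8 bits-per-byte * data rate (Gbps per pin).
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(192, 8))  # GTX 1060 6GB -> 192.0 GB/s
print(bandwidth_gbs(256, 8))  # RX 480 8GB   -> 256.0 GB/s
print(bandwidth_gbs(192, 7))  # hypothetical 7 Gbps memory on 192-bit -> 168.0 GB/s, no match
```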


And we shouldn't let Nvidia sell game design options to developers to make AMD bite the dust and take control of how things work in the pipeline.


11 minutes ago, leadeater said:

there have been cases where Nvidia has replaced entire sets of textures. That's how the drivers can get so large: you don't actually need that much data for the drivers themselves or really any of the other features.

It's not that I don't believe you, but I would really like a source on that. I am not sure what to type into Google to get good results.

 

14 minutes ago, leadeater said:

in AMD's case, they end up putting more VRAM modules on the card to get a wider memory bus, which has the end result of more VRAM capacity. The larger capacity isn't being added just because it looks better on the spec sheet.

Judging by how many comments I've seen on this forum saying "AMD has more VRAM so they are better", it seems like a bigger number on the spec sheet is a very good reason to put in more VRAM, even if it might not be the primary reason AMD does it.


44 minutes ago, LAwLz said:

It's not that I don't believe you, but I would really like a source on that. I am not sure what to type into Google to get good results.

I might be able to find it again; I was reading that quite some time ago, I think on a game developers' forum.

This is the tool Nvidia developed for it; it has been generally released now: https://news.developer.nvidia.com/texture-tools-exporter-2020-1/


I was looking forward to the 3060, but then I heard that the 6700 will use something like 150 watts, which is 25% less than the 3060, so if it can match the performance I'd rather go with the AMD version. I wish Nvidia cared more about power consumption. PC power consumption has to go down; it was even on the news a while back.


5 hours ago, LAwLz said:

The compression efficiency of course varies from workload to workload, but the last time AnandTech did comparisons between Nvidia's and AMD's compression, they found that Nvidia's compression on Pascal was way ahead of AMD's. Since then, Nvidia has further improved its compression, but I haven't heard anything about AMD doing the same.

 

In Beyond3D's test, Pascal got a compression ratio of around 2.4:1. The R9 Fury X got around 1.1:1. Higher is better.

Turing is an additional 20% ahead of Pascal in memory compression.

 

I am not sure if things are compressed while residing in the VRAM buffer or if it's just for L2 cache stuff (I don't know much about GPU memory compression), but if things in the buffer are compressed, then it might actually be the case that 8GB of VRAM on an Nvidia card can store the same amount of textures as 16GB of VRAM on an AMD graphics card.

Of course that will differ from workload to workload, but it is very impressive how far ahead Nvidia is (or was, if AMD has improved) when it comes to memory compression.

Be aware that it was (and still is) framebuffer compression, not general VRAM compression. A game requiring 8GB of VRAM for textures still required 8GB of VRAM. The framebuffer is a smaller section of VRAM that acts as immediate rendering memory for the currently rendered frame: it's used to render a frame and do all the processing on it, and the result is then pushed to the display output.

Having the framebuffer compressed means data can be moved between the GPU and the framebuffer faster, because there is a smaller amount of it and it occupies less of the available memory bus (it's why 256-bit NVIDIA cards performed better than 512-bit AMD ones). This is especially noticeable at higher resolutions, where each frame takes up more memory.

For more general VRAM savings we use texture compression, so you can pack massive textures that take up less space in VRAM. They are still decoded on the fly during framebuffer processing, where they transition from texture compression to framebuffer compression. Before framebuffer compression was a thing, they basically transitioned from compressed textures to raw textures for framebuffer processing.

 

I'm not aware of any vendor doing general VRAM compression on the fly the way it's done for the framebuffer.
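To see why this matters more at higher resolutions, here's the uncompressed per-frame arithmetic (a simplified sketch: one RGBA8 color target, ignoring depth buffers, HDR formats, and multiple render passes, which multiply the real traffic):

```python
# Simplified framebuffer math: width * height * 4 bytes for a 32-bit (RGBA8)
# color target. Real frames also carry depth buffers and multiple render
# targets, so actual traffic is several times this.
def framebuffer_mib(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 1024 ** 2

for name, (w, h) in {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    size = framebuffer_mib(w, h)
    print(f"{name}: {size:.1f} MiB per color buffer, ~{size * 60 / 1024:.2f} GiB/s at 60 writes/s")
```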


I managed to get an RTX 3080 after 4 months, and at an inflated price.


2 hours ago, RejZoR said:

I managed to get an RTX 3080 after 4 months, and at an inflated price.

At this rate of insanity, I might just hold off until an RTX 4080 😔


29 minutes ago, StDragon said:

At this rate of insanity, I might just hold off until an RTX 4080 😔

Hold off for what? Not being able to buy one either? :D



One thing about the VRAM, though: the 6800 XT may have 16GB, but it does seem to still suffer at 4K, perhaps due to the Infinity Cache and lower bandwidth. This could affect the card's aging.

 

I personally don't think 10GB is too little, and my 3080 runs everything I need at 4K without issues. 12GB on the 3060 feels weird to me though, considering that card will never use that much at its performance level. The actual bandwidth is pretty low though, so I'm not sure.

 

Competition is good though; it's nice to see all these "reaction" SKUs from Nvidia. The 3080 Ti at its predicted $999 would be a decent buy (still not the best value, but not as bad as the 6900 XT imo).



52 minutes ago, StDragon said:

At this rate of insanity, I might just hold off until an RTX 4080 😔

Prices are inflated, but not at scalper levels. Since the GTX 980, I remember these high-end cards costing around 850€ when there was no virus going on and no cards in short supply. That's how much I paid for my GTX 1080 Ti about 3 months after release. I did get the AORUS model, which has a stupidly beefy cooler compared to my current RTX 3080 GamingPro lol. The RTX 3080 AORUS now goes for 1200€. So 980€ isn't that crazy. It's nowhere near MSRP, but since we have VAT over here in Europe, we don't get anything at MSRP prices anyway. Paying 100-150€ more is a lot, but it's not scalper "a lot". Let's just say I was willing to swallow the price hike for the sake of being able to game at stupid detail levels and ignore the BS world.

 

Seeing how things are going, even when the RTX 4080 comes out, you just won't be able to buy one. Not because of the virus again, but because of idiot scalpers again. Just be ready for this nonsense to be a regular thing in the future unless someone can somehow change it. Somehow. The best bet would be buying last gen just on its way out, when it's no longer interesting to scalpers but still relevant enough that you can actually buy it. You need good timing though, so you don't miss it.

