
Modded GeForce RTX 3070 with 16GB memory gets major 1% lows FPS boost

[Image: modded GeForce RTX 3070 with 16GB of VRAM]

 


Quote

Memory mods on graphics cards are nothing new these days. Modders and technicians have been doing this for years, and memory replacements are among the most common jobs performed on faulty video cards. However, by replacing the memory and grounding some resistors on the PCB, it is possible to trick the card into supporting a different memory altogether. In this case, with twice the capacity.

 

Paulo decided to test the card with the Resident Evil 4 remake at very high settings, which previously struggled with the 8GB VRAM limit. This is no longer the case, and memory allocation can now go beyond 8GB (shown reaching up to 11GB). Furthermore, the framerate, especially the 1% and 0.1% lows, has increased substantially.

 

 

Source: https://videocardz.com/newz/modded-geforce-rtx-3070-with-16gb-memory-gets-major-1-low-fps-boost


28 minutes ago, jamaniek said:

1%? That's not a lot

It's the 1% lows' performance that increased, not the framerate by 1%.
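
For context, "1% lows" is typically computed from per-frame frame times as the average FPS over the slowest 1% of frames (0.1% lows likewise), which is why VRAM overflow shows up there long before it dents the average. A minimal sketch of that calculation, using made-up frame-time data rather than anything from the article's benchmark:

```python
# Sketch: average FPS vs 1% / 0.1% lows from per-frame frame times.
# frame_times_ms is invented example data, not the article's benchmark.

def lows_fps(frame_times_ms, percent):
    """Average FPS over the slowest `percent` of frames."""
    worst = sorted(frame_times_ms, reverse=True)
    n = max(1, int(len(worst) * percent / 100))
    return 1000.0 / (sum(worst[:n]) / n)

frame_times_ms = [16.7] * 980 + [40.0] * 15 + [120.0] * 5  # mostly 60fps, a few stutters

print(f"average:  {1000.0 / (sum(frame_times_ms) / len(frame_times_ms)):.1f} fps")
print(f"1% low:   {lows_fps(frame_times_ms, 1):.1f} fps")
print(f"0.1% low: {lows_fps(frame_times_ms, 0.1):.1f} fps")
```

A handful of 120ms frames barely moves the average (~57fps here) but drags the 0.1% low to ~8fps, which is exactly the signature of a card spilling out of VRAM.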


Are you telling me that the RTX 3070 can take advantage of a 16GB VRAM buffer?! That Nvidia shorted it with just 8GB in a way that cripples it for newer titles that came out only 2.5 years after release, despite the fact that the GPU core is plenty fast enough?

 

I never would have guessed. In a billion years, I never could have come to the conclusion that the card deserved more VRAM than it was given. I am shooketh.

 

/s


8 minutes ago, Mr.Zixxel said:

Interesting modding to say the least, worth doubling my 24GB to 48GB perhaps? (joking) 🤔

You joke, but I did leave a comment on that channel asking that. I did some maths and I'd need to spend ~$800 to upgrade my 3090 to 48GB of GDDR6X. Less than I paid for it, not much less than getting a second one, and way less than a GPU with 48GB, but still troublesome, so I guess I'll just grab another 3090 down the line before thinking about this mod again lol
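
For anyone sanity-checking that figure: the 3090 carries 24 GDDR6X modules of 1GB each, so a 48GB mod means sourcing 24 2GB modules. A rough back-of-envelope, with the per-module price simply implied by the ~$800 estimate above rather than an actual quote:

```python
# Back-of-envelope for the 48GB 3090 mod (assumed prices, not real quotes).
modules = 24               # the 3090 carries 24x 1GB GDDR6X modules
usd_per_2gb_module = 33.0  # assumption implied by the ~$800 estimate above
print(f"~${modules * usd_per_2gb_module:.0f} for 24x 2GB GDDR6X modules")  # ~$792
```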


4 minutes ago, igormp said:

You joke, but I did leave a comment on that channel asking that. I did some maths and I'd need to spend ~$800 to upgrade my 3090 to 48GB of GDDR6X. Less than I paid for it, not much less than getting a second one, and way less than a GPU with 48GB, but still troublesome, so I guess I'll just grab another 3090 down the line before thinking about this mod again lol

The differences between soldering on 24GB of EXTRA memory and buying an additional 3090 are multiple: firstly, you "can't use" 48GB as a single pool if you run 2x 3090s with 24GB each, and secondly, SLI doesn't "work" at the moment.

 

But anyway, it also depends entirely on what you do with the computer and what kind of load/computation you would do with the card(s).

 

Interesting regardless. Cheers mate 🍻


2 minutes ago, Mr.Zixxel said:

The differences between soldering on 24GB of EXTRA memory and buying an additional 3090 are multiple: firstly, you "can't use" 48GB as a single pool if you run 2x 3090s with 24GB each, and secondly, SLI doesn't "work" at the moment.

 

But anyway, it also depends entirely on what you do with the computer and what kind of load/computation you would do with the card(s).

 

Interesting regardless. Cheers mate 🍻

I don't think anyone is dumb enough to use 2x 3090s for games lol

 

I can make use of the full 48GB of two 3090s in my applications (ML-related stuff), and don't need SLI at all (however, I did look into an NVLink bridge, since it'd net a ~10% increase in perf).

 

I guess a safer path would be getting another 3090, then upgrading both to 48GB each for a total of 96GB of VRAM. More than an A100 at a fraction of the cost; this does sound like a really interesting idea.


3 minutes ago, igormp said:

I don't think anyone is dumb enough to use 2x 3090s for games lol

 

I can make use of the full 48GB of two 3090s in my applications (ML-related stuff), and don't need SLI at all (however, I did look into an NVLink bridge, since it'd net a ~10% increase in perf).

 

I guess a safer path would be getting another 3090, then upgrading both to 48GB each for a total of 96GB of VRAM. More than an A100 at a fraction of the cost; this does sound like a really interesting idea.

No, of course it would be a very stupid scenario if you're planning to play games with the graphics card. 🍻

 

Depending on the type of load, the A100 can be significantly faster and/or more efficient in many ways, but with that said, the 3090 is still an extremely competent card for most scenarios.


13 minutes ago, Mr.Zixxel said:

Depending on the type of load, the A100 can be significantly faster and/or more efficient in many ways, but with that said, the 3090 is still an extremely competent card for most scenarios.

It sure can, but it also costs like... over 10x more? Don't think it'd be 10x faster or more efficient, so there's that.

For a local dev machine for shits and giggles it does sound like an interesting idea with a nice cost-benefit (as long as it works, ofc).


Major 1% low fps boost*

with the footnote being that they set up the benchmark to fill out 16GB of VRAM half artificially (half because the engine just lets you do that without modding in larger textures, but half because it's not realistic).
It's the same as forcing the CPU to read from the page file to do a calculation for no good reason.


1 minute ago, starsmine said:

Major 1% low fps boost*

with the footnote being that they set up the benchmark to fill out 16GB of VRAM half artificially.
It's the same as forcing the CPU to read from the page file to do a calculation for no good reason.

It is a game, running settings that are perfectly playable on the 3070 if it happens to have enough VRAM. That is not artificial - it is proving the point that it could benefit from more VRAM.

 

The variable being tested is VRAM capacity. How exactly do you propose that be tested without using a game that can push the card to use more than 8GB of VRAM?


10 minutes ago, YoungBlade said:

It is a game, running settings that are perfectly playable on the 3070 if it happens to have enough VRAM. That is not artificial - it is proving the point that it could benefit from more VRAM.

 

The variable being tested is VRAM capacity. How exactly do you propose that be tested without using a game that can push the card to use more than 8GB of VRAM?

It's artificial because you can literally do that with ANY and EVERY game on ANY GPU with modded textures.

RE4 just lets you do it without "modding" the textures, because it already ships with the larger textures if you choose to use them (hence why I said half artificial: a user can do it with minimal friction).

Retail RE4's High textures start at 0.25GB, but the game also includes 0.5GB High, 1GB High, 2GB High, 3GB High, and so on, all the way up to 8GB High.
Medium textures fit in 0.25GB.
Low textures fit in less than that.

Thing is, the 0.25/0.5/1GB settings are still High textures; the look of the game does not change in any significant way going higher, besides making you more likely to run out of VRAM.

With modded higher-detail textures, you could do the same 1% lows trick on the new 16GB 3070 and crash it back down to single digits as it has to start fetching from RAM or the page file.
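
To put rough numbers on that overflow behavior: whether a texture tier fits is just the texture budget plus everything else versus physical VRAM. A toy sketch, where the non-texture overhead figure is an assumption for illustration and the tiers mirror the RE4 options listed above:

```python
# Toy model: does a texture-cache setting fit in VRAM, or spill to system RAM?
# OTHER_GB (framebuffers, geometry, shaders, RT structures) is an assumed figure.

OTHER_GB = 5.5

def verdict(tex_gb: float, vram_gb: float) -> str:
    return "fits" if tex_gb + OTHER_GB <= vram_gb else "spills to system RAM"

for tex_gb in [0.25, 0.5, 1, 2, 3, 4, 6, 8]:
    print(f"High ({tex_gb}GB) -> 8GB card: {verdict(tex_gb, 8):20} | 16GB card: {verdict(tex_gb, 16)}")
```

On those assumptions the 8GB card falls over past the 2GB texture tier while the 16GB card takes every shipped tier, and with big enough modded textures either card can be pushed over the edge.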


27 minutes ago, starsmine said:

It's artificial because you can literally do that with ANY and EVERY game on ANY GPU with modded textures.

RE4 just lets you do it without "modding" the textures, because it already ships with the larger textures if you choose to use them (hence why I said half artificial: a user can do it with minimal friction).

Retail RE4's High textures start at 0.25GB, but the game also includes 0.5GB High, 1GB High, 2GB High, 3GB High, and so on, all the way up to 8GB High.
Medium textures fit in 0.25GB.
Low textures fit in less than that.

Thing is, the 0.25/0.5/1GB settings are still High textures; the look of the game does not change in any significant way going higher, besides making you more likely to run out of VRAM.

With modded higher-detail textures, you could do the same 1% lows trick on the new 16GB 3070 and crash it back down to single digits as it has to start fetching from RAM or the page file.

To me, an artificial benchmark is one that does something a user would not actually do, either by running a synthetic benchmark like TimeSpy or by modifying or playing a game in a way that no one actually would.

 

Someone with a 3070 will probably actually try to turn up the textures as high as they can in that game. The fact that the higher textures are more VRAM intensive than in other games does not make it artificial - it's still something a real person would actually do. Saying that this is an artificial benchmark makes literally anything artificial if you, personally, disagree with the choice of settings.

 

And this is not the only game that can do this. Hogwarts Legacy also chokes on 8GB of VRAM when doing RT at 1080p. And that's without doing anything but setting everything to Ultra - no increasing textures past the defaults like in RE4R, just Ultra everything in the Hogsmeade section of the game:

 

[Benchmark chart: Hogwarts Legacy, Hogsmeade, 1080p Ultra with RT - 1% lows by GPU]

 

Is this also "artificial?" Even though the other cards with 12GB+ of VRAM all perform as expected and give playable results if you're okay with 30fps for the 1% lows - which plenty of real gamers are fine with?


2 hours ago, YoungBlade said:

To me, an artificial benchmark is one that does something a user would not actually do, either by running a synthetic benchmark like TimeSpy or by modifying or playing a game in a way that no one actually would.

 

Someone with a 3070 will probably actually try to turn up the textures as high as they can in that game. The fact that the higher textures are more VRAM intensive than in other games does not make it artificial - it's still something a real person would actually do. Saying that this is an artificial benchmark makes literally anything artificial if you, personally, disagree with the choice of settings.

 

And this is not the only game that can do this. Hogwarts Legacy also chokes on 8GB of VRAM when doing RT at 1080p. And that's without doing anything but setting everything to Ultra - no increasing textures past the defaults like in RE4R, just Ultra everything in the Hogsmeade section of the game:

 

[Benchmark chart: Hogwarts Legacy, Hogsmeade, 1080p Ultra with RT - 1% lows by GPU]

 

Is this also "artificial?" Even though the other cards with 12GB+ of VRAM all perform as expected and give playable results if you're okay with 30fps for the 1% lows - which plenty of real gamers are fine with?

Tf, the 3060 is outperforming the 3070 and 3080? That doesn’t seem right there, the 3080 is much stronger than the 3060. 


1 minute ago, Zodiark1593 said:

Tf, the 3060 is outperforming the 3070 and 3080? That doesn’t seem right there, the 3080 is much stronger than the 3060. 

The 3080 doesn't have enough VRAM.

 

It really is that simple: that section of the game requires more than 10GB with those settings. The 3060 has enough VRAM, and the 3070/3080 do not. This is why all cards with 12GB and up scale correctly, but those with less than 12GB just choke to death.


43 minutes ago, YoungBlade said:

The 3080 doesn't have enough VRAM.

 

It really is that simple: that section of the game requires more than 10GB with those settings. The 3060 has enough VRAM, and the 3070/3080 do not. This is why all cards with 12GB and up scale correctly, but those with less than 12GB just choke to death.

Even though I'm fully aware of the reason, it's still no less startling to see the supposedly massive performance advantage of more powerful GPUs rendered completely moot simply due to lacking VRAM.


1 minute ago, Zodiark1593 said:

Even though I'm fully aware of the reason, it's still no less startling to see the supposedly massive performance advantage of more powerful GPUs rendered completely moot simply due to lacking VRAM.

It's the same as running into the page file on your PC, if you've ever done that: you stutter into a slug of a machine.

This is what happens with every cache miss, at different scales. In single-threaded programs that have to run in order, or in a thread-safe parallel path, if the data you need isn't in cache, the CPU straight up stalls until it's retrieved.

Out-of-order execution and simultaneous multithreading hide this, but you can only mitigate so much before bubbles get clocked into the pipeline.

Every time a single-threaded, in-order task can't find what it needs in any of its caches, it has to burn a few dozen clocks not progressing while it waits for RAM to respond.

So when a texture ends up stored in system RAM rather than VRAM, the GPU spends a lot of time doing nothing, waiting for data to come across the bus.
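
Rough numbers on why that bus trip is so painful; the bandwidth figures below are approximate published specs, and the 64MB texture is an arbitrary example:

```python
# Back-of-envelope: fetching a texture from VRAM vs over PCIe from system RAM.
# Bandwidths are approximate spec figures; the 64MB texture is arbitrary.

GDDR6_3070_GBS = 448.0  # RTX 3070 local memory bandwidth, ~448 GB/s
PCIE4_X16_GBS = 32.0    # PCIe 4.0 x16, ~32 GB/s per direction

tex_gb = 64 / 1024  # a 64MB texture
for name, bw in [("VRAM (GDDR6)", GDDR6_3070_GBS), ("PCIe 4.0 x16", PCIE4_X16_GBS)]:
    print(f"{name}: {tex_gb / bw * 1000:.2f} ms to move 64MB")

# A 60fps frame budget is ~16.7ms, so even a couple of PCIe-resident textures
# per frame is enough to blow the budget and crater the 1% lows.
```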


6 minutes ago, starsmine said:

It's the same as running into the page file on your PC, if you've ever done that: you stutter into a slug of a machine.

This is what happens with every cache miss, at different scales. In single-threaded programs that have to run in order, or in a thread-safe parallel path, if the data you need isn't in cache, the CPU straight up stalls until it's retrieved.

Out-of-order execution and simultaneous multithreading hide this, but you can only mitigate so much before bubbles get clocked into the pipeline.

Every time a single-threaded, in-order task can't find what it needs in any of its caches, it has to burn a few dozen clocks not progressing while it waits for RAM to respond.

So when a texture ends up stored in system RAM rather than VRAM, the GPU spends a lot of time doing nothing, waiting for data to come across the bus.

The point I was trying to get across was that knowing what occurs in the background and actually seeing the impact in practice, and so acutely, are somewhat different things. It appears to be a particular pain point with some more recent games: owing to more VRAM, certain less powerful cards will end up actually performing better than their supposedly higher-end counterparts. Generally, demand increases over time have been more balanced, so by the time you actually need more VRAM, the GPU itself probably has its hands full as well. With the specific games in question, however, this is not the case.

 

VRAM requirements have increased disproportionately compared to the need for raw compute. If games keep moving to such high VRAM requirements, we could have a very strange case in which higher-end cards actually age substantially worse, and perform worse, than their lower-end counterparts. Regardless of the reasons, this is kind of bonkers if it plays out like this.

My eyes see the past…

My camera lens sees the present…


26 minutes ago, Zodiark1593 said:

The point I was trying to get across was that knowing what occurs in the background and actually seeing the impact in practice, and so acutely, are somewhat different things. It appears to be a particular pain point with some more recent games: owing to more VRAM, certain less powerful cards will end up actually performing better than their supposedly higher-end counterparts. Generally, demand increases over time have been more balanced, so by the time you actually need more VRAM, the GPU itself probably has its hands full as well. With the specific games in question, however, this is not the case.

 

VRAM requirements have increased disproportionately compared to the need for raw compute. If games keep moving to such high VRAM requirements, we could have a very strange case in which higher-end cards actually age substantially worse, and perform worse, than their lower-end counterparts. Regardless of the reasons, this is kind of bonkers if it plays out like this.

Thing is, it's not a real pain point.
It's pure FOMO.

The devs just gave you the HD texture pack in the base release instead of releasing it as DLC or having modders do it for you. If the devs instead did what they normally do with these large texture packs, certain users would not be freaking out.


VRAM is going to be a real big issue for Unreal Engine 5: 12GB is the minimum, with 16GB recommended, once titles based on it launch.


The biggest issue is the new consoles, which have 16GB of unified memory... devs are gonna want to take advantage of that, so more and more new games are targeting 16GB of VRAM.


On 4/24/2023 at 2:35 PM, Zodiark1593 said:

Tf, the 3060 is outperforming the 3070 and 3080? That doesn’t seem right there, the 3080 is much stronger than the 3060. 

The 3060 12GB was spanking the 3070 8GB at 1440p with RT almost two years ago in Doom Eternal's RTX update:

 

https://gamegpu.com/action-/-fps-/-tps/doom-eternal-test-rtx


15 minutes ago, decolon said:

The biggest issue is the new consoles, which have 16GB of unified memory... devs are gonna want to take advantage of that, so more and more new games are targeting 16GB of VRAM.

It's used as both RAM and VRAM, though; you can't just fill it all up with textures.
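
As a rough illustration of that split, with figures that are approximations based on public console spec discussions rather than exact numbers:

```python
# Rough illustration of a console's 16GB unified-memory split (assumed figures).
total_gb = 16.0
os_reserved_gb = 2.5  # roughly what current consoles hold back for the OS
cpu_side_gb = 3.5     # assumed game code, logic, audio, streaming buffers
print(f"~{total_gb - os_reserved_gb - cpu_side_gb:.0f}GB left as a realistic GPU budget")  # ~10GB
```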

