Does the CPU affect FPS even if the GPU is at 100%?

As an example, if you had an i5-9600 and a 3060 with the 3060 at 100% usage, would an i9-13900K give you more fps with the same 3060 at 100% usage?

1 minute ago, 675409 said:

As an example, if you had an i5-9600 and a 3060 with the 3060 at 100% usage, would an i9-13900K give you more fps with the same 3060 at 100% usage?

The answer is probably yes, but most likely not worth it or even noticeable if the GPU is being fully utilized.

ROG Strix AMD

---------------

CPU: Ryzen 9 5900HX GPU: AMD RX 6800M RAM: 16GB DDR4 Storage: 512GB + 1TB Intel SSD

I was just wondering because I thought it would help 🤷‍♂️ but it's probably a negligible change for the price lol

As someone who went from an i5 9600K to an R9 5900X while keeping his RTX 2060 Super, I can tell you that, in GPU bound games, the difference is basically nothing. I couldn't tell a difference when playing Control between the two computers, and benchmarks that I did showed no differences in that game.

 

Now, for CPU limited games, things were different - PUBG and Fortnite for example performed better with the 5900X - but that's because I play those games with reduced settings to remove the GPU bottleneck as much as possible.

3 minutes ago, 675409 said:

I was just wondering because I thought it would help 🤷‍♂️ but it's probably a negligible change for the price lol

It's more that it might reduce stutters, as the GPU might occasionally become CPU bound even if 99% of the time it's maxed out. Of course, it depends on the game.
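To put a rough number on that stutter point, here's a quick sketch (purely made-up frame times, not from any real benchmark): a handful of CPU-bound frames barely moves the average FPS, but drags the 1% lows way down, which is what you feel as stutter.

```python
# Toy illustration (made-up numbers): a few slow, CPU-bound frames barely change
# the average FPS but show up clearly in the 1% lows.

def fps_stats(frame_times_ms):
    frames = sorted(frame_times_ms, reverse=True)      # slowest frames first
    avg_fps = 1000 * len(frames) / sum(frames)
    worst = frames[:max(1, len(frames) // 100)]        # slowest 1% of frames
    low_1pct_fps = 1000 * len(worst) / sum(worst)
    return round(avg_fps, 1), round(low_1pct_fps, 1)

smooth   = [10.0] * 1000                  # steady 10 ms frames (~100 fps), GPU-bound
stuttery = [10.0] * 990 + [40.0] * 10     # same, but 1% of frames hit a CPU spike

print(fps_stats(smooth))      # (100.0, 100.0)
print(fps_stats(stuttery))    # (~97.1, 25.0) - average barely moves, lows tank
```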

Router:  Intel N100 (pfSense) WiFi6: Zyxel NWA210AX (1.7Gbit peak at 160Mhz)
WiFi5: Ubiquiti NanoHD OpenWRT (~500Mbit at 80Mhz) Switches: Netgear MS510TXUP, MS510TXPP, GS110EMX
ISPs: Zen Full Fibre 900 (~930Mbit down, 115Mbit up) + Three 5G (~800Mbit down, 115Mbit up)
Upgrading Laptop/Desktop CNVIo WiFi 5 cards to PCIe WiFi6e/7

1 hour ago, 675409 said:

As an example, if you had an i5-9600 and a 3060 with the 3060 at 100% usage, would an i9-13900K give you more fps with the same 3060 at 100% usage?

It entirely depends on the game and even the given scenario in the game. Take WoW as an example: it is severely CPU bottlenecked even with a 7800X3D/13900K in dense player areas, yet the difference between those and an older CPU can be 2x the minimum/average framerate. Though at some point it's network limited, since the server still has to report player positions to your system in a timely manner, which limits how quickly the CPU can issue the draw calls for those positions and hand them to the GPU so it can render the frame.

Separately, actually playing Warframe helps to properly understand this, but it's relatively easy to simplify. Warframe is like any other 'MMO-ish' game in that there are various environments with single-player and multiplayer scenarios at various player and NPC densities. I've tested that whole spectrum over the 10+ years I've played Warframe, across dozens of different systems, and I'm the type to stare at framerates.

There is one scenario where both the 4790K and the 7950X3D reach 500 fps at 4K ultra; that is a perfect example of a GPU-limited scenario. What makes it even more interesting is that the 4790K at 1080p ultra in the same environment was also at 500 fps, while the 7950X3D was at 1200 fps. So clearly at 1080p an overclocked RTX 4090 can go higher than 500 fps, but the 4790K was incapable of doing so. Amusingly, this means that in that specific scenario the OC'd RTX 4090 (at 4K) and the 4790K (at 1080p) each cap the FPS at the same 500.
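A back-of-the-envelope way to read those numbers (a toy model, not anything pulled from the game itself): the framerate you actually get is roughly the lower of the CPU's cap and the GPU's cap for that scene.

```python
# Back-of-the-envelope model of the numbers above: delivered FPS is roughly
# min(CPU-limited FPS, GPU-limited FPS) for a given scene and settings.

def delivered_fps(cpu_fps_cap, gpu_fps_cap):
    return min(cpu_fps_cap, gpu_fps_cap)

# 4K ultra: the RTX 4090 tops out around 500 fps in this scene, so both CPUs tie.
print(delivered_fps(cpu_fps_cap=500,  gpu_fps_cap=500))    # 4790K   -> 500
print(delivered_fps(cpu_fps_cap=1200, gpu_fps_cap=500))    # 7950X3D -> 500

# 1080p ultra: the GPU has headroom (at least ~1200 fps here), so the CPU cap shows.
print(delivered_fps(cpu_fps_cap=500,  gpu_fps_cap=1200))   # 4790K   -> 500
print(delivered_fps(cpu_fps_cap=1200, gpu_fps_cap=1200))   # 7950X3D -> 1200
```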

Now if you don't play any of these types of games and exclusively play single player experiences that don't involve active communication with other players and/or a server, then you're unlikely to see a benefit from a CPU upgrade and would be better off spending that money on a new graphics card instead.

Ryzen 7950x3D PBO +200MHz / -15mV curve CPPC in 'prefer cache'

RTX 4090 @133%/+230/+1000

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017

Yes but no. Sometimes; it depends. 100% utilization is not really as meaningful as one would hope.

Extreme example: if you had a purely integer task that pushed data every single cycle, 100% utilization may be reported, but the floating-point units are just sitting there doing nothing, along with unused cache or memory bandwidth.

 

Ideally, you are talking about a situation where, as soon as the GPU is done with a frame, there is zero delay in working on the next one because the CPU has already done its job and fully queued it up. In that case a faster CPU does not matter. But say the CPU is feeding one job perfectly while stuttering on a secondary job; then the reported GPU utilization won't be fully honest.

For Ampere and Lovelace, there is an additional driver load on the CPU that obfuscates things (it can happen on any generation of GPU, it's just more pronounced here).

To answer your question more precisely you would need details, far more than is worth gathering when you can just benchmark the software in question.

For the most part, though, the simple answer is: if your GPU is reporting 100%, then a faster CPU won't meaningfully improve performance at that specific task. Exceptions may apply.
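To make the "not fully honest" part concrete, here's a crude toy simulation of the queuing idea above (all numbers invented, no real profiler involved): an occasional CPU hiccup on a secondary job lowers delivered FPS even though reported GPU utilization still reads high.

```python
# Crude toy simulation of the queuing idea above (all numbers invented).
# The GPU needs 10 ms per frame. The CPU normally has the next frame queued in
# 8 ms, so the GPU never waits, but every 30th frame a secondary job stalls the
# CPU for an extra 20 ms. Reported utilization stays high; delivered FPS still drops.

GPU_FRAME_MS = 10.0
CPU_FRAME_MS = 8.0
HICCUP_EVERY = 30
HICCUP_MS    = 20.0

busy_ms, elapsed_ms, frames = 0.0, 0.0, 3000
for frame in range(1, frames + 1):
    cpu_ms = CPU_FRAME_MS + (HICCUP_MS if frame % HICCUP_EVERY == 0 else 0.0)
    frame_ms = max(GPU_FRAME_MS, cpu_ms)   # GPU can't start until the CPU has queued the frame
    busy_ms += GPU_FRAME_MS                # time the GPU spends actually rendering
    elapsed_ms += frame_ms

print(f"GPU utilization: {100 * busy_ms / elapsed_ms:.1f}%")   # ~94%
print(f"delivered FPS:   {1000 * frames / elapsed_ms:.0f}")    # ~94, vs. the 100 fps the GPU could do
```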

It depends on the config, the game, and the game scenario, so it sure can. Good examples are CPU-heavy games, especially online games like MMOs: in WoW, for example, being in the main city with a lot of players and max graphics settings pegs even a solid GPU, but the CPU can and will be the limiting factor, especially in combat. Those min-fps lows are where the CPU will also matter.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |

21 hours ago, 675409 said:

As an example, if you had an i5-9600 and a 3060 with the 3060 at 100% usage, would an i9-13900K give you more fps with the same 3060 at 100% usage?

Short answer: Yes, but inconsequential.

 

Long answer: it depends on which part is capping it. A GPU isn't just "3D"; rendering has multiple stages, and part of the work happens on the CPU. If a game uses Vulkan you might get slightly more out of the GPU thanks to the extra threads and higher frequency on the CPU side, but we're not talking about huge gains. A DX9 game, for example, will likely see no gain because it's limited to one render thread, so only a tiny bump is possible from a CPU frequency increase, and DX10 and DX11 games have some heavy driver-side work. Vulkan/DX12 depends entirely on how well the game developer utilizes it.
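Here's a rough way to picture that render-thread point (a toy model with invented numbers, not measurements from any real game or API): when draw-call submission is pinned to one thread, the CPU-side cap barely moves with extra cores, while an API that spreads submission across several threads gets a lot more out of the same chip.

```python
# Toy model (invented numbers) of the render-thread point above: the fps the CPU
# side can feed, depending on how many threads the API lets you submit work on.

def cpu_fps_cap(draw_calls, ms_per_call, submit_threads, other_cpu_ms):
    submit_ms = draw_calls * ms_per_call / submit_threads    # draw-call recording/submission
    return 1000.0 / (submit_ms + other_cpu_ms)               # plus the game's other per-frame CPU work

# Hypothetical scene: 5000 draw calls at 0.002 ms each, plus 4 ms of other CPU work per frame.
print(round(cpu_fps_cap(5000, 0.002, submit_threads=1, other_cpu_ms=4)))  # DX9-style, 1 render thread:  ~71 fps
print(round(cpu_fps_cap(5000, 0.002, submit_threads=6, other_cpu_ms=4)))  # DX12/Vulkan-style, 6 threads: ~176 fps
```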

 

But I would never suggest upgrading the CPU over the GPU. If you are buying parts for a gaming rig, it always goes GPU >  RAM > SSD > CPU.

 

On the flip side of this argument, every GPU tier upgrade (e.g. x60 to x70 to x80 to x90) is typically a doubling of performance, whereas doubling the RAM or doubling the SSD speed (e.g. SATA to NVMe) doesn't tend to translate to a doubling of game performance, and going from an i5 to an i9 will likely only result in single-percentage-point changes unless a game is actually utilizing 8+ threads.

 

What you will always find is that the GPU is "power limited" before any other throttle kicks in. If the GPU is at 100% and you aren't getting the monitor's refresh rate (e.g. 60 fps, 120 fps, 144 fps, etc.), then your GPU is too weak in the first place.

 

Golden rule: your game should never "100%" a GPU unless vsync is off and the framerate is uncapped, because when it's uncapped it's drawing the screen as fast as possible, even if nothing has changed. If it's hitting 100% without even reaching 60 fps, then upgrading the CPU will likely not make a noticeable difference either.
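To put simple numbers on that rule (back-of-the-envelope arithmetic, assuming an ideal frame cap): with vsync or a frame limiter, GPU load is roughly the capped framerate divided by what the card could manage uncapped.

```python
# Rough arithmetic for the capping point: with vsync or a frame limiter, GPU load
# is roughly the capped framerate divided by what the card could do uncapped.

def approx_gpu_load(fps_cap, uncapped_fps):
    return min(100.0, 100.0 * fps_cap / uncapped_fps)

print(approx_gpu_load(144, 200))   # card could do 200 fps, capped at 144 -> 72% load
print(approx_gpu_load(60, 55))     # card can only manage 55 fps -> pegged at 100%, cap never reached
```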

 
