
CPU-GPU bottlenecking at high resolutions: A test

Anyone who's spent some time on these forums and browsed the various "Computer Hardware" subforums will have seen this question posted and answered many times: will my CPU bottleneck my GPU? It's commonly asked by people considering AMD CPUs (as their single-core performance hasn't measured up to Intel's for a while now) or by people wondering if they could get by with a lower-tier Intel CPU in their gaming rig. But at what point can you consider a GPU "bottlenecked"?

 

I'd like to focus here for a bit on the term "bottleneck". What do you consider bottlenecking? Would you consider any loss in framerate caused by CPU (under)performance a bottleneck? You could, but then you would have to classify every CPU besides the highest-performing one as a CPU that will bottleneck any high-end GPU. This is because of one simple truth: a better-performing CPU will always have some positive effect on your framerate. So to me that's not bottlenecking. I consider a CPU a bottleneck to a GPU when, no matter what I do to try and improve performance, I hit a wall preventing me from achieving higher framerates.
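To make that "wall" concrete, here's a toy model (all numbers below are invented purely for illustration): treat each frame as costing roughly max(CPU time, GPU time). Once the CPU side is the slower of the two, reducing the GPU load stops buying you any frames:

```python
# Toy model: the slower of CPU and GPU work per frame sets the framerate.
# Real engines overlap frames in flight, so this is only a rough sketch,
# and every number here is made up for illustration.

def fps(cpu_ms, gpu_ms):
    """Approximate FPS when each frame costs max(CPU time, GPU time)."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Fast CPU (10 ms of work per frame): FPS tracks the GPU at every resolution.
for res, gpu_ms in [("1080p", 8.0), ("1440p", 12.0), ("2160p", 25.0)]:
    print(res, round(fps(10.0, gpu_ms), 1))   # 100.0, 83.3, 40.0

# Slow CPU (20 ms per frame): below 2160p you hit a 50 FPS wall,
# no matter how much headroom lowering the resolution frees up on the GPU.
for res, gpu_ms in [("1080p", 8.0), ("1440p", 12.0), ("2160p", 25.0)]:
    print(res, round(fps(20.0, gpu_ms), 1))   # 50.0, 50.0, 40.0
```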

 

Opinions are always varied, but the general consensus seems to be that when using a high-end graphics card like a GTX 980 Ti or a Fury X, a higher-end CPU is strongly recommended. Before I go any further I would like to point out that I still agree with this. However, I've also noticed that many people have started reporting that at higher resolutions the CPU becomes less of a factor, and a benchmark recently posted on this forum showed an AMD FX 8370 trading blows with an Intel i7 5960X at 4K resolution:

 

http://linustechtips.com/main/topic/429688-amd-fx-8370-vs-intel-i7-5960x-gtx-970-sli-4k-benchmarks/

 

While I don't possess either CPU, I wanted to run some benchmarks based on the hypothesis that a high-end CPU isn't necessarily required when gaming at high resolutions. To this end, I benchmarked The Witcher 3 with my Intel Core i7-2600K overclocked to 4.6 GHz to simulate a high(er)-end quad-core CPU and underclocked to 2.1 GHz to simulate a low-end quad-core CPU. Because my 2600K will not underclock with Hyper-Threading enabled, it was turned off at both clock speeds during these benchmarks.
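As an aside, if you want to run a similar test yourself, a frame-time log can be boiled down to FPS numbers with something like the minimal sketch below. The file name and the one-frame-time-per-line format are assumptions on my part; adapt the parsing to whatever your capture tool actually writes out:

```python
# Minimal sketch: summarise a frame-time log into average and worst-case FPS.
# Assumes a plain text file with one frame time in milliseconds per line;
# the file name is hypothetical.

def summarise(path):
    with open(path) as f:
        times_ms = [float(line) for line in f if line.strip()]
    avg_fps = 1000.0 * len(times_ms) / sum(times_ms)
    worst_fps = 1000.0 / max(times_ms)  # single slowest frame of the run
    return avg_fps, worst_fps

avg_fps, worst_fps = summarise("frametimes.txt")
print(f"average: {avg_fps:.1f} FPS, worst frame: {worst_fps:.1f} FPS")
```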

 

Link to the Google Sheet with the "hard" data:

https://docs.google.com/spreadsheets/d/1c_fg3EhQCSeMhDniEedOouuQU38ASezMtBcpbgqBmDM/edit?usp=sharing

 

I ran two sets of benchmarks: one with most settings cranked up to ultra and one with all settings set to low. Below are the graphs that came out of the data I collected:

 

[Graph: The Witcher 3 benchmark results, high quality settings]

[Graph: The Witcher 3 benchmark results, low quality settings]

 

Looking at these graphs, I'm amazed to see just how close together the 2160p and 2880p results are! Of course, at any resolution of 1440p or below there is a significant performance gap, as expected.

 

I'm not going to try to draw any hard conclusions from this, but I would like to leave you with the following observations:

- At 2160p or above, there is little performance difference between the two clock speeds.

- At the lower CPU clock speed, the framerates achieved at lower resolutions hit a ceiling: no matter how far you scale down the resolution or quality settings, it doesn't make much of a difference. In other words: bottlenecking :) (a rough check for this is sketched below)
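To spell out that second observation, here's a tiny hypothetical helper: if average FPS barely moves as you step the resolution down, the GPU isn't the limit, the CPU is. The FPS values and the 5% threshold below are arbitrary choices of mine, not figures from my data:

```python
# Hypothetical check: flat FPS across resolutions suggests a CPU bottleneck.
# The tolerance of 5% and the example numbers are arbitrary.

def looks_cpu_bound(fps_by_resolution, tolerance=0.05):
    """True if FPS stays within `tolerance` of the best result across resolutions."""
    values = list(fps_by_resolution.values())
    return (max(values) - min(values)) / max(values) <= tolerance

print(looks_cpu_bound({"720p": 51.5, "1080p": 51.0, "1440p": 50.0}))   # True
print(looks_cpu_bound({"1080p": 95.0, "1440p": 70.0, "2160p": 40.0}))  # False
```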

 

I'd love to hear your opinions! What conclusions would you draw from this data? Do you feel the data isn't complete enough? And what do you call a bottleneck?

 

Cheerz,

Syfes

Desktop:     Core i7-9700K @ 5.1GHz all-core = ASRock Z390 Taichi Ultimate = 16GB HyperX Predator DDR4 @ 3600MHz = Asus ROG Strix 3060ti (non LHR) = Samsung 970 EVO 500GB M.2 SSD = ASUS PG279Q

 

Notebook:  Clevo P651RG-G = Core i7 6820HK = 16GB HyperX Impact DDR4 2133MHz = GTX 980M = 1080p IPS G-Sync = Samsung SM951 256GB M.2 SSD + Samsung 850 Pro 256GB SSD


Interesting, nice work!

Current system - ThinkPad Yoga 460

ExSystems


Laptop - ASUS FX503VD

|| Case: NZXT H440 ❤️|| MB: Gigabyte GA-Z170XP-SLI || CPU: Skylake Chip || Graphics card : GTX 970 Strix || RAM: Crucial Ballistix 16GB || Storage:1TB WD+500GB WD + 120Gb HyperX savage|| Monitor: Dell U2412M+LG 24MP55HQ+Philips TV ||  PSU CX600M || 

 


Pretty much nothing we didn't already know, but it's VERY nice to see it cast into actual numbers, rather than the usual "I'm guessing it'll bottleneck, from the back of my brain".

 

Very nicely done.


This is the reason the FX crap manages to keep up at high resolutions.

Location: Kaunas, Lithuania, Europe, Earth, Solar System, Local Interstellar Cloud, Local Bubble, Gould Belt, Orion Arm, Milky Way, Milky Way subgroup, Local Group, Virgo Supercluster, Laniakea, Pisces–Cetus Supercluster Complex, Observable universe, Universe.


12700, B660M Mortar DDR4, 32GB 3200C16 Viper Steel, 2TB SN570, EVGA Supernova G6 850W, be quiet! 500FX, EVGA 3070Ti FTW3 Ultra.

 


Very good work mate, nice to see the proof for once, as all the other colleagues here said :D

rig: i7 4770k @ 4.1 GHz (delidded), Corsair Vengeance 8GB 1600 MHz, ROG Maximus VI Hero, Noctua NH-D14, EVGA GTX 980 SC, Samsung 850 EVO 500GB, Corsair SF600, self-built wooden case, CoolerMaster QuickFire TK, Logitech G502, Blue Yeti, BenQ GW2760HS

