Mira Yurizaki

Bottlenecking: How to find out if you have one and what to do about it


Posted · Original Poster

 

I made another topic to further explain this phenomenon; you should go read it after reading this one.

 

It's graphics card upgrading season, and the one thing on a lot of people's minds is "will my system bottleneck this graphics card?" And the answer to that is... it depends. Not just on your hardware, but on the game itself.

 

What is a bottleneck?

I'll leave it up to this video to explain it.

 

But if you're the type who wants me to get to the point: a bottleneck, or specifically a CPU bottleneck, is when the CPU is so busy that it bogs down other parts of the system. With games, the CPU is too busy to tell the GPU to render things, and you get lower performance than what the GPU should be capable of.

 

How can I tell if my system will bottleneck a video card upgrade?

The basic way to tell: if your CPU load is constantly high, you'll have a bottleneck. How high? I peg it at about 80%-85% or more for almost all of the time you're playing the game. If you want to know whether your system will bottleneck a potential upgrade, run the games that you usually play or want to play, fire up Task Manager (Ctrl + Shift + Esc, if you haven't learned the other three-finger salute), and leave it monitoring the CPU utilization window.
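If you'd rather automate the Task Manager check, a small script can poll overall CPU utilization while you play and report how often it sits at or above that 80%-85% range. This is just a sketch, not a proper profiler, and it assumes you have the third-party psutil package installed (`pip install psutil`):

```python
def fraction_above(loads, threshold=85.0):
    """Fraction of CPU-load samples at or above the threshold."""
    if not loads:
        return 0.0
    return sum(1 for x in loads if x >= threshold) / len(loads)

def sample_cpu(samples=60, interval=1.0, threshold=85.0):
    """Poll overall CPU utilization once per `interval` seconds
    and return the fraction of samples at/above `threshold`.
    Requires the third-party psutil package."""
    import psutil  # pip install psutil
    loads = [psutil.cpu_percent(interval=interval) for _ in range(samples)]
    return fraction_above(loads, threshold)

# Start this in the background, then go play for a minute:
# print(f"CPU busy {sample_cpu():.0%} of the time")
# A result near 100% suggests a CPU bottleneck.
```

If the busy fraction comes back near 1.0 for the games you actually play, that's the "almost all of the time" condition described above.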

 

[gta_v_cpu.png: what my CPU utilization looks like during a GTA V run]

 

I say it has to be at 80%-85% almost all of the time because a game can be highly dynamic. If there are enough explosions going on to make Michael Bay shed a tear, then the CPU is certainly going to be hammered a lot. But oftentimes even a regular ol' shootout won't hammer the CPU all that much.

 

Keep in mind that every game is different, so a wide variety of games will need to be profiled. Don't run the most intense game available, profile it, and assume that result applies to everything. Though if it's a game you play daily, it can carry more weight in your decision making.

 

Well crap, I don't think I should upgrade because the video card will be bottlenecked.

 

The question to consider is how much of a boost the system will get anyway. What about your upgrade plans? If you plan on building a system later but can get something now, why not get the upgrade and reap the benefits? Ultimately it's up to you, but don't think that just because your system has a high chance of bottlenecking a future upgrade, you shouldn't upgrade at all. Set some (realistic) requirements, like: the system must hit at least 45 FPS at 1080p in most games on their highest quality preset (not including AA). Don't sweat it if the system still meets or exceeds them. If your system is getting 40 FPS in a game, an upgrade could do 80 FPS but only manages 70 FPS on your system, and the next card down can pull about 60 FPS without being all that much cheaper, why not still consider it? (I don't know of a real-life situation exactly like this; it's just food for thought.)

 

I don't want to bother with profiling my games, just give me some pointers

I can't seem to find many websites that aggregate CPU benchmarks on games with actual figures, are fairly recent, and have a wide sample size. What I did find were Anandtech's CPU benchmarks (http://www.anandtech.com/bench/CPU/1357) and a slew of articles from TechSpot (http://www.techspot.com/features/gaming/gaming-benchmarks/), along with various other articles or sources showing at least a large gap in performance (such as this video putting a GTX 1080 in an i5-750 machine). So based on all this, here's my conclusion:

  • In most cases, you can go down to an Intel i3-4000 or AMD FX-8000 series and still get at least 80% of the performance the card can achieve (at least with a GTX 980/980 Ti).
  • Even in this situation, you'll still likely get 60 FPS or above at 1080p on maximum, or at least high, settings (it's also getting harder lately to distinguish between the two, so I wouldn't sweat it if you have to tone down the quality a bit).
  • You're basically in the danger zone if your processor cost about $100-$120 or less at the time of release, or scores less than about 1900 on this chart. Though this only applies to DX11 titles. There's not enough data to draw a conclusion on DX12/Vulkan, but so far the trend is that more cores are better if the game uses async compute (not all games will!).
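The rules of thumb above can be rolled into a quick sanity check. The price and score cutoffs below are just the rough figures from this post, not hard limits, and the function name is my own:

```python
def bottleneck_risk(launch_price_usd, bench_score):
    """Rough DX11-era risk bucket from the rules of thumb above:
    CPUs that launched at ~$120 or less, or that score under ~1900
    on the referenced benchmark chart, are in the danger zone."""
    if launch_price_usd <= 120 or bench_score < 1900:
        return "danger zone"
    return "probably fine"

# e.g. a budget chip that scores well still trips the price rule:
print(bottleneck_risk(launch_price_usd=100, bench_score=2500))  # danger zone
```

Again, treat this as a heuristic for DX11 titles only; DX12/Vulkan behavior doesn't have enough data behind it yet.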

The tip I want to leave with you is this: bottlenecks aren't the end of the world. Determine how much you're willing to put up with.


My only bottleneck is my i5 3470, and I only notice a problem in Arma 3. Other than that I run all my other games on max.



 

2 minutes ago, M.Yurizaki said:

1440p is the new 1080p right?

 

I keep thinking the requirement I mentioned is still too high.

1440p is going to be, but I wouldn't say it is until people who don't know what 1440p is start using 1440p monitors.


 


Good guide.

You put FX-800 instead of FX-8000.

 



On 7/15/2016 at 6:14 AM, M.Yurizaki said:

SNIP

But would you consider an i5-4570 a bottleneck if I'm always at 80-100% CPU when gaming? Granted, this is while playing intensive games, but those are the games I tend to play.

 

And will it be a bottleneck for an RX 480 or future AMD or Nvidia GPUs?



Posted · Original Poster
1 hour ago, AluminiumTech said:

But would you consider an i5-4570 a bottleneck if I'm always at 80-100% CPU when gaming? Granted, this is while playing intensive games, but those are the games I tend to play.

 

And will it be a bottleneck for an RX 480 or future AMD or Nvidia GPUs?

What I neglected to mention was to check your GPU load. If your GPU load is near 100% as well, upgrading to a better video card will still be an upgrade; the bottleneck will just limit how much of an improvement you'll get. It may be nothing at all, or it may be 90%.

 

What's missing from the puzzle (and most games don't report this) is how much time each frame spent on the CPU, which would indicate the maximum frame rate you could possibly get. DOOM does this.
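The relationship here is just simple arithmetic: if the CPU needs t milliseconds of work per frame, it can never feed the GPU more than 1000/t frames per second, no matter how fast the GPU is. A minimal sketch:

```python
def max_fps_from_cpu_time(cpu_frame_ms):
    """Upper bound on frame rate when the CPU takes
    cpu_frame_ms milliseconds of work per frame."""
    if cpu_frame_ms <= 0:
        raise ValueError("frame time must be positive")
    return 1000.0 / cpu_frame_ms

# A CPU that needs 10 ms per frame caps the game at 100 FPS,
# no matter how fast the GPU is:
print(max_fps_from_cpu_time(10.0))  # 100.0
```

So a game reporting CPU frame time (as DOOM does) is effectively telling you your CPU-side FPS ceiling.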

1 minute ago, M.Yurizaki said:

What I neglected to mention was to check your GPU load. If your GPU load is near 100% as well, upgrading to a better video card will still be an upgrade; the bottleneck will just limit how much of an improvement you'll get. It may be nothing at all, or it may be 90%.

 

What's missing from the puzzle (and most games don't report this) is how much time each frame spent on the CPU, which would indicate the maximum frame rate you could possibly get. DOOM does this.

Ahhh. Cos I've noticed some games (lighter ones mostly) don't max out the GPU OR CPU.

 

Games like Minecraft will use 60% of CPU and around 60-70% GPU.

 

The GPU load for more demanding games generally is 90-100%.



Posted · Original Poster
27 minutes ago, AluminiumTech said:

Ahhh. Cos I've noticed some games (lighter ones mostly) don't max out the GPU OR CPU.

 

Games like Minecraft will use 60% of CPU and around 60-70% GPU.

 

The GPU load for more demanding games generally is 90-100%.

I'm trying to get a handle on how games really do their processing, but there's no solid information out there. So here's my theory.

 

Games require a minimum amount of processing to set everything up for the next frame. However, I'm led to believe there's also a minimum amount of time each logic pass is allowed to take. That is, the game doesn't run the next update the instant the CPU finishes the last one; it only runs the logic after a minimum amount of time has passed. Running the logic as soon as the last pass finished would recreate the DOS-era problem where games on faster processors ran too fast. Not to mention in some cases, like multiplayer, it would be unfair to those with slower processors who process things less often, though that's probably a non-issue.
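The behavior described above is usually implemented as a fixed-timestep game loop: the simulation advances in fixed increments, so a faster CPU renders more often but never runs the game logic faster. This is a generic sketch of the pattern, not any particular engine's code:

```python
import time

TICK = 1.0 / 60.0  # run game logic at a fixed 60 Hz, regardless of CPU speed

def game_loop(update, render, run_seconds=1.0):
    """Fixed-timestep loop: update() fires at most 60 times per
    second, so faster hardware renders more frames but never
    simulates the game world faster."""
    start = last = time.perf_counter()
    accumulator = 0.0
    while time.perf_counter() - start < run_seconds:
        now = time.perf_counter()
        accumulator += now - last  # bank the real time that passed
        last = now
        while accumulator >= TICK:  # catch up in fixed steps
            update(TICK)
            accumulator -= TICK
        render()  # real engines would also sleep or vsync here

# ticks = []
# game_loop(lambda dt: ticks.append(dt), lambda: None, run_seconds=0.5)
# len(ticks) will be close to 30 (0.5 s of 60 Hz updates) on any CPU
```

Whether a given engine actually works this way is the open question; this just shows why a fast CPU wouldn't make the game logic itself run any faster.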

 

Also, some games (particularly older ones) don't actually push the GPU for all it's worth. In DirectX 11 and OpenGL, commands reach the GPU as a single serial stream, and the GPU has to complete one submission of commands before working on the next. If those commands don't eat up all of the GPU's resources, it shows up as less than 100% load. I also suspect that during this time the CPU either waits for the GPU to finish before working on the next frame, or only preprocesses one frame ahead, but no more.

 

I decided to run Half-Life 2 to see how many FPS I could squeeze out of the game. I seemed to be capped at 300 (though this may be a game limitation, but we'll see). This implies the CPU can complete the game logic for that particular scene in about 3.3 ms. Now it may actually finish in 0.5 ms, but it still has to send commands to the GPU, which may take up the remaining time round trip to account for API/driver overhead, the latency of getting the data to the GPU, and the GPU signaling "okay, I'm ready again." That is to say, the ~3.3 ms looks like a hard cap, not one the game is imposing on me.

 

EDIT: although this theory falls flat when I consider that in some 3DMark runs I can get 1000+ FPS. So... maybe the propagation delays of the signals aren't a problem.

Posted · Original Poster

I finally remembered a previous experience of running into a CPU bottleneck, which makes for a good indicator of when you should upgrade the CPU.

 

Back in 2008 I was still rocking an AMD Athlon X2 3800+ rig and I upgraded the video card from a GeForce 7800 GTX to a 8800 GT. When I tested a few games, this is what happened:

 

Half-Life 2: Episode 2 - went from 30 FPS to 33 FPS.

 

Bioshock - Went from averaging 45 FPS to averaging 60 FPS (the 7800 was capable of getting 60 FPS at times)

 

Call of Duty 4 - I forget the exact numbers, but it was probably from 60 FPS to 80.

 

Not much of an upgrade there! So a few months later I built a new system and migrated the 8800 GT over.

 

And here's another story about how, even when you're hitting a bottleneck, you can still see performance gains.

 

When I built my current rig, I migrated my GTX 980 over and benchmarked the system because, well, it was a good way to see what I'd get coming into it. For most 3DMark runs, the new rig scored slightly worse than my old one on graphics tests. Yikes! I'd just introduced a bottleneck. For comparison, the old rig had an i5-4670K overclocked to 4GHz; the current one uses an i7-6700 with a max turbo of 3.7GHz.

 

However, when I upgraded to a GTX 1080, I still saw huge performance gains. A bottleneck doesn't mean that's the most performance you'll ever get, period.

 

So my advice? Keep upgrading the video card until it stops providing the boost it should. When that happens, a new computer, even a midrange one, will get you the performance boost you'd like.


My Q6600 bottlenecks my HD 5830...

I previously used a 9500 GT with my Q6600, and upgrading to an HD 5830 gave an FPS boost in some games.

Seeing how my Q6600 hovers around 80-100% usage and my HD 5830 around 40%...

I really need to OC my Q6600 from 2.4GHz to 3.3GHz or something...



