
Underutilized CPUs showing different FPS measurements?


I've seen this in A LOT of games and it's bugging me to death. 

 

You're playing a game where none of your CPU threads are even close to 100% pegged, but somehow, for some unknown reason, switching to a more powerful CPU with the same core/thread count nets you an FPS gain, despite the new CPU not being utilized any more than the old one was.

 

Why the difference?!?

 

This is all assuming the rest of the hardware is as consistent as possible elsewhere too: the same storage, RAM amount, and GPU are used in each build. It's already been shown that motherboards don't make more than a margin-of-error difference (outside of synthetics and the new X99 boost-clock stuff), and that RAM speed has at most about a 3% impact in games when using a dedicated card.

 

So will someone, please, PLEASE tell me why stuff like in that video above happens?

Does the game engine, or the game itself, have some kind of unoptimized internal performance cap that makes cache and RAM speed matter more?

 

This is extremely confusing. Before, when upgrading my CPU and seeing this, I had just assumed my FPS gain was down to new drivers, a lower ambient temperature leaving more room for GPU boost, etc. That video, and ones like it, confirm that there is something else going on. What is that something else?

 

 


IPC, clock speed, and other generational improvements.

✨PC Specs✨

AMD Ryzen 7 3800X | MSI MPG B550 Gaming Plus | 16GB Team T-Force 3400MHz | Zotac GTX 1080 AMP EXTREME

BeQuiet Dark Rock Pro 4 Samsung 850 EVO 250GB | NZXT 750W | Phanteks Eclipse P400A

Extras: ASUS Zephyrus G14 (2021) | OnePlus 7 Pro | Fully restored Robosapien V2, Omnibot 2000, Omnibot 5402


18 minutes ago, TechnoSword said:

You're playing a game where none of your CPU threads are even close to 100% pegged, but somehow switching to a more powerful CPU with the same core/thread count nets you an FPS gain, despite the new CPU not being utilized any more than the old one was. [...] That video, and ones like it, confirm that there is something else going on. What is that something else?

 

 

The first thing to do, in order to know whether there is any puzzle at all, is to look at overall CPU usage. The percentages you see in that video are highly misleading because hyperthreading is enabled: what do 4 maxed-out cores look like when the load is split across 8 "logical cores"?
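To make that concrete, here's a toy calculation (the per-core percentages are invented, not taken from the video) showing how 4 saturated physical cores can read as roughly 50% overall when the load is averaged across 8 logical cores:

```python
# Invented per-logical-core figures: 4 physical cores fully busy, but the
# scheduler spreads the work across 8 logical (hyperthreaded) cores, so
# each one reports roughly half load.

def overall_usage(logical_usages):
    """Average utilization across all logical cores, the way Task Manager reports it."""
    return sum(logical_usages) / len(logical_usages)

eight_logical = [55, 50, 48, 52, 49, 51, 47, 48]  # percent per logical core
print(f"Reported overall usage: {overall_usage(eight_logical):.0f}%")
# The physical cores underneath may already be the bottleneck even though
# no single logical core is anywhere near 100%.
```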


1 minute ago, ShadowTechXTS said:

IPC, clock speed, and other generational improvements.

Those make a CPU faster. His question is why a faster processor makes a difference when the slower one isn't fully utilized (assuming that is the case).


2 minutes ago, SpaceGhostC2C said:

Those make a CPU faster. His question is why a faster processor makes a difference when the slower one isn't fully utilized (assuming that is the case).

Ah, I misunderstood the question.



Some processes can only be done sequentially (you need the answer to one before you can start the other; physics, for example), which means you have to wait for each one to finish. Say a particular calculation only takes a small amount of CPU power but needs a minimum number of clock cycles to run. A slower CPU may only need 10% of its capability to run it, yet take 200ms; a faster CPU can do the same task in 150ms. You can't just use 20% of the slower CPU to cut the time in half, because the task can only start once the previous task is done.

This goes beyond the multi-threading problem and can exist even on a single-threaded core, since not every task can use 100% of a core. Now add in other factors: how long it takes to fetch information from cache or RAM, whether there are specialized blocks within the CPU for very specific tasks (an H.264 encoder/decoder, for example), or even how far apart things physically are inside the CPU (the speed of light is a hard limit), plus plenty of other complex factors I don't understand, and you start to see why cores, frequency, and even IPC or transistor count cannot always predict relative performance.
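The dependency-chain point can be sketched numerically. This is a minimal illustration with made-up task times, not measurements: frame time is the sum of dependent steps, so idle capacity can't shorten it.

```python
# Sketch of the dependency-chain effect: tasks that depend on each
# other must run back to back, so their latencies add up no matter
# how idle the CPU looks. All task times below are invented.

def frame_time_ms(task_times_ms):
    """Total time for a chain of dependent tasks: latencies simply add."""
    return sum(task_times_ms)

slow_chain = [10.0, 6.0, 4.0]  # ms per dependent task on the slower CPU
fast_chain = [7.5, 4.5, 3.0]   # same tasks, each finishing sooner

for name, chain in (("Slower CPU", slow_chain), ("Faster CPU", fast_chain)):
    total = frame_time_ms(chain)
    print(f"{name}: {total:.1f} ms/frame -> {1000 / total:.1f} FPS")
# Spare cores can't help: task 2 can't start until task 1 has
# produced its answer, so only per-task latency matters.
```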

Primary PC-

CPU: Intel i7-6800k @ 4.2-4.4Ghz   CPU COOLER: Bequiet Dark Rock Pro 4   MOBO: MSI X99A SLI Plus   RAM: 32GB Corsair Vengeance LPX quad-channel DDR4-2800  GPU: EVGA GTX 1080 SC2 iCX   PSU: Corsair RM1000i   CASE: Corsair 750D Obsidian   SSDs: 500GB Samsung 960 Evo + 256GB Samsung 850 Pro   HDDs: Toshiba 3TB + Seagate 1TB   Monitors: Acer Predator XB271HUC 27" 2560x1440 (165Hz G-Sync)  +  LG 29UM57 29" 2560x1080   OS: Windows 10 Pro


Other Systems:


Home HTPC/NAS-

CPU: AMD FX-8320 @ 4.4Ghz  MOBO: Gigabyte 990FXA-UD3   RAM: 16GB dual-channel DDR3-1600  GPU: Gigabyte GTX 760 OC   PSU: Rosewill 750W   CASE: Antec Gaming One   SSD: 120GB PNY CS1311   HDDs: WD Red 3TB + WD 320GB   Monitor: Samsung SyncMaster 2693HM 26" 1920x1200 -or- Steam Link to Vizio M43C1 43" 4K TV  OS: Windows 10 Pro

 

Offsite NAS/VM Server-

CPU: 2x Xeon E5645 (12-core)  Model: Dell PowerEdge T610  RAM: 16GB DDR3-1333  PSUs: 2x 570W  SSDs: 8GB Kingston Boot FD + 32GB Sandisk Cache SSD   HDDs: WD Red 4TB + Seagate 2TB + Seagate 320GB   OS: FreeNAS 11+

 

Laptop-

CPU: Intel i7-3520M   Model: Dell Latitude E6530   RAM: 8GB dual-channel DDR3-1600  GPU: Nvidia NVS 5200M   SSD: 240GB TeamGroup L5   HDD: WD Black 320GB   Monitor: Samsung SyncMaster 2693HM 26" 1920x1200   OS: Windows 10 Pro

Having issues with a Corsair AIO? Possible fix here:


Are you getting weird fan behavior, speed fluctuations, and/or other issues with Link?

Are you running AIDA64, HWinfo, CAM, or HWmonitor? (ASUS suite & other monitoring software often have the same issue.)

Corsair Link has problems with some monitoring software so you may have to change some settings to get them to work smoothly.

-For AIDA64: First make sure you have the newest update installed, then, go to Preferences>Stability and make sure the "Corsair Link sensor support" box is checked and make sure the "Asetek LC sensor support" box is UNchecked.

-For HWinfo: manually disable all monitoring of the AIO sensors/components.

-For others: Disable any monitoring of Corsair AIO sensors.

That should fix the fan issue for some Corsair AIOs (H80i GT/v2, H110i GTX/H115i, H100i GTX, and others made by Asetek). The problem is bad coding in Link that fights with other programs for control of the AIO. You can test whether this worked by setting the fan speed in Link to 100%: if it doesn't fluctuate, you're set and can change the curve to whatever you like. If that doesn't work, or you're still having other issues, you probably still have monitoring software interfering with the AIO/Link communications; find it and disable it.


Having skimmed through the video, I must say that the most noticeable FPS differences do coincide with heavy CPU usage.


They may support newer instructions that help. If you run Intel XTU on different CPUs you can really see this difference: a 6600K can be massively faster than a 3930K even though both CPUs sit at 100% on all threads, and the 3930K has two more cores plus HT and on paper should be around 30% faster.
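A rough throughput model shows why "100% usage" doesn't equalize CPUs: the figures below are invented for illustration (they are not real 3930K/6600K specs), but they capture how a newer core that retires more work per cycle (wider SIMD, better IPC) can beat an older chip with more cores.

```python
# Hypothetical model: utilization says the cores are busy, not how much
# work each cycle accomplishes. All numbers are invented for illustration.

def throughput(cores, ghz, ops_per_cycle):
    """Work per second = cores x frequency x work each core finishes per cycle."""
    return cores * ghz * ops_per_cycle

old = throughput(cores=6, ghz=3.2, ops_per_cycle=8)    # e.g. an AVX-era core
new = throughput(cores=4, ghz=3.5, ops_per_cycle=16)   # e.g. an AVX2/FMA-era core

print(f"Old CPU: {old:.0f} GOPS, New CPU: {new:.0f} GOPS")
# Both would show 100% utilization, yet on this kind of workload the
# newer 4-core out-runs the older 6-core.
```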

Rig Specs:

AMD Threadripper 5990WX@4.8Ghz

Asus Zenith III Extreme

Asrock OC Formula 7970XTX Quadfire

G.Skill Ripheartout X OC 7000Mhz C28 DDR5 4X16GB  

Super Flower Power Leadex 2000W Psu's X2

Harrynowl's 775/771 OC and mod guide: http://linustechtips.com/main/topic/232325-lga775-core2duo-core2quad-overclocking-guide/ http://linustechtips.com/main/topic/365998-mod-lga771-to-lga775-cpu-modification-tutorial/

ProKoN haswell/DC OC guide: http://linustechtips.com/main/topic/41234-intel-haswell-4670k-4770k-overclocking-guide/

 

"desperate for just a bit more money to watercool, the titan x would be thankful" Carter -2016


It's not so much about raw CPU speed or age as it is about memory bandwidth, bus speeds, and the other technologies supported by the CPU.

If you take a look at this screenshot/spreadsheet I made from the Intel ARK site comparing the i7-6700, i7-4790, and i7-870 processors, you'll see some differences:

[Image: Intel_ARK_ComparisonChart_2016_09_08.png]

 

I'm no expert on the technicalities of processor bandwidth, but the numbers alone tell me that the higher bus speed and memory bandwidth result in more FPS.

Desktop: KiRaShi-Intel-2022 (i5-12600K, RTX2060) Mobile: OnePlus 5T | Koodo - 75GB Data + Data Rollover for $45/month
Laptop: Dell XPS 15 9560 (the real 15" MacBook Pro that Apple didn't make) Tablet: iPad Mini 5 | Lenovo IdeaPad Duet 10.1
Camera: Canon M6 Mark II | Canon Rebel T1i (500D) | Canon SX280 | Panasonic TS20D Music: Spotify Premium (CIRCA '08)


Didn't get any emails about the responses... but yay, people responded ^_^

Glad pyrojoe34 found a realistic answer of some kind. Literally the only thing I've ever heard make any kind of sense relating to it. 

 

It always irritates me when most everyone says "oh, it's the memory speed" or "oh, it's PCIe 2.0 x16 vs 3.0 x16 bandwidth," as both of those have been tested extensively and shown not to matter in gaming comparisons like the one in the video.

