Ergroilnin

CPU vs GPU boost

Recommended Posts

Posted · Original Poster (OP)

Hello,

 

Before I get to my question: I wasn't sure which section this fits best, so if any mods think it belongs in the GPU or CPU section, please move it there.

 

Now to the actual question. Why don't CPUs use the same technology GPUs do for out-of-the-box boosting? As far as I know, GPUs boost as high as they can until they hit a voltage or thermal limit, while CPUs, unless manually overclocked, only go up to the specs the OEM set for them.

 

So why don't CPUs use the same algorithm for stock boosting that GPUs do?


CPUs do boost up when there's thermal and power headroom; Intel's Turbo Boost is a good example. The reason people still overclock is that a stock boost algorithm like Turbo Boost has to fit every CPU in a lineup. That means it can't fine-tune for the optimal performance of your particular chip, and it stays careful about not going overboard on thermals (generally keeping them around 75°C). Overclocking just lets you squeeze more performance out of the CPU than the stock algorithm would allow.
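The headroom idea above can be sketched in a few lines. To be clear, this is not Intel's actual Turbo Boost logic; the clocks, limits, and step size are invented for illustration:

```python
# Illustrative headroom-based boost loop (NOT the real Turbo Boost
# algorithm; all numbers below are made up for the example).

BASE_CLOCK_MHZ = 3600
MAX_BOOST_MHZ = 4600
THERMAL_LIMIT_C = 75   # conservative stock limit, as mentioned above
POWER_LIMIT_W = 95

def boost_clock(temp_c, power_w, current_mhz):
    """Step the clock up while thermal and power headroom remain;
    step back down as soon as either limit is reached."""
    if temp_c >= THERMAL_LIMIT_C or power_w >= POWER_LIMIT_W:
        return max(BASE_CLOCK_MHZ, current_mhz - 100)  # back off
    return min(MAX_BOOST_MHZ, current_mhz + 100)       # boost

print(boost_clock(55, 60, 4000))  # cool and low power -> 4100
print(boost_clock(80, 60, 4000))  # at the thermal limit -> 3900
```

The conservative part is exactly those fixed limits: they're chosen so the worst chip in the lineup stays stable, which is the headroom manual overclocking exploits.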


¯\_(ツ)_/¯



First of all, CPUs and GPUs are worlds apart, even though they share some principles. CPUs do not boost far beyond their specs unless you manually adjust them. GPUs are made of a huge number of small cores (stream processors, CUDA cores, etc.), while consumer CPUs top out at roughly 4-32 cores and so run at higher clocks; GPUs compensate for their lower clocks (around 2 GHz at most) with the sheer number of cores they contain. The general rule is: the more cores, the lower the clocks need to be to stay stable. If there is headroom, the hardware boosts higher; CPUs do this too, just to a lesser extreme.
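A rough back-of-the-envelope calculation shows why "many slow cores" wins on raw throughput. The core counts, clocks, and ops-per-cycle below are illustrative round numbers, not measurements of any specific CPU or GPU:

```python
# Idealized peak throughput: cores * clock * operations per cycle.
# All figures here are invented for the comparison, not real specs.

def peak_ops_per_second(cores, clock_ghz, ops_per_cycle):
    """Theoretical best case, assuming every core is fully busy."""
    return cores * clock_ghz * 1e9 * ops_per_cycle

cpu = peak_ops_per_second(cores=8,    clock_ghz=4.5, ops_per_cycle=2)
gpu = peak_ops_per_second(cores=2560, clock_ghz=1.8, ops_per_cycle=2)

print(f"CPU: {cpu:.2e} ops/s")  # few cores, much higher clock
print(f"GPU: {gpu:.2e} ops/s")  # many cores, lower clock, far more total
```

Even at less than half the clock, the GPU's core count puts its peak throughput two orders of magnitude higher in this toy model; the catch is that only highly parallel workloads can actually use it.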


I'm slightly above potato, but I'm getting there...

Not-so-Poor Man's Land:

Ryzen 3 2200G with Stock Cooler (CPU @ 3.8 GHz @1.312V, iGPU @ 1500MHz @ 1.1V),

TeamGroup T-Force Delta DDR4 Gaming 16GB (2x8GB) 16-18-18-32-60 @ 1.35V,

Hard Drives: WD 1TB Blue & Seagate Barracuda (old) 500GB 7200RPM,

DeepCool Wave V2 case with DeepCool DE500 V2,

Display: BenQ GL 2070 on DVI

Keyboard: Generic PS/2 Keyboard

Mouse: Generic USB Mouse

And yes, there are no fans. No fan power needed!

1 hour ago, Totally Average Gameplay said:

First of all, CPUs and GPUs are worlds apart, even though they share some principles. CPUs do not boost far beyond their specs unless you manually adjust them. GPUs are made of a huge number of small cores (stream processors, CUDA cores, etc.), while consumer CPUs top out at roughly 4-32 cores and so run at higher clocks; GPUs compensate for their lower clocks (around 2 GHz at most) with the sheer number of cores they contain. The general rule is: the more cores, the lower the clocks need to be to stay stable. If there is headroom, the hardware boosts higher; CPUs do this too, just to a lesser extreme.

This is really great.

Posted (edited)
7 hours ago, Ergroilnin said:

Now to the actual question. Why don't CPUs use the same technology GPUs do for out-of-the-box boosting? As far as I know, GPUs boost as high as they can until they hit a voltage or thermal limit, while CPUs, unless manually overclocked, only go up to the specs the OEM set for them.

AMD's Precision Boost 2 and XFR 2 are supposed to do exactly this, which is why on certain Ryzen processors you can't raise the maximum turbo clock much higher, if at all.
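That kind of opportunistic boost can be sketched as "boost until the first limit is hit." The limit names below (PPT/TDC/EDC/temperature) are real terms from AMD's Ryzen boost scheme, but the values and the check itself are a simplified illustration, not AMD's actual firmware logic:

```python
# Hedged sketch of an opportunistic boost check in the style of
# Precision Boost 2: boost only while every monitored value is under
# its limit. Limit values here are invented for the example.

LIMITS = {"ppt_w": 88, "tdc_a": 60, "edc_a": 90, "temp_c": 80}

def boost_allowed(telemetry):
    """True only while package power (PPT), sustained current (TDC),
    peak current (EDC), and temperature are all under their limits."""
    return all(telemetry[key] < LIMITS[key] for key in LIMITS)

print(boost_allowed({"ppt_w": 70, "tdc_a": 50, "edc_a": 80, "temp_c": 65}))  # True
print(boost_allowed({"ppt_w": 90, "tdc_a": 50, "edc_a": 80, "temp_c": 65}))  # False
```

Since the chip is already chasing every limit on its own, there's little fixed headroom left to expose as a manual "max boost" slider, which matches the behavior described above.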

Edited by Mira Yurizaki
