No, it's a trick board manufacturers use to win the benchmarks. They just add 2 multipliers to the boost table.
It's usually called "enhanced boost" or something along those lines. The board manufacturer also has no way of knowing how good your particular chip is, so they add a voltage offset that will guarantee the success of the overclock, usually something like 200mV. Disabling it helps with thermals significantly.
I would recommend you find it and disable it, OP. It's always a bad feature. If you want to overclock, do it manually.
!!Make sure to check out the feedback comment below this for up-to-date acknowledgements or changes.!!
Preamble and introduction
Hello everyone, I decided to make another topic to put into my signature and possibly help people with. This topic will be especially helpful for people with Founders Edition Pascal cards, or people having thermal issues in SFF cases. It is also an absolute must for people who are building a crypto mining machine, as this will severely lower power consumption. Otherwise, it will just be interesting for people like me who like to tinker with tech. The subject this time is GPU Boost 3.0.
You can also limit the card's power usage by simply lowering the power target, but this won't actually make the card any more efficient, and it will invariably hurt performance, as the card will be throttling more often to stay within the power limit you set. If you really want to get the most out of your hardware without sacrificing performance, you have to tamper with Boost 3.0 and undervolt.
Other people have already written down just how Boost 3.0 works, and you can read it here:
A TL;DR would be that the GPU takes a couple of power-related parameters into consideration to boost to the maximum allowable coreclock whilst staying within the TDP and stability regions.
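As a rough illustration of that TL;DR, here's a toy sketch of how a boost table lookup under a power and thermal limit could work. This is not NVIDIA's actual algorithm, and all the table values and numbers below are made up for illustration:

```python
# Toy sketch (NOT NVIDIA's real algorithm): GPU Boost picks the highest
# V/F point whose estimated power draw still fits inside the TDP and
# temperature limits. All numbers here are illustrative.

# (frequency MHz, voltage V) points from a hypothetical boost table
vf_table = [(1506, 0.800), (1683, 0.900), (1822, 0.975), (1924, 1.031), (2012, 1.093)]

def pick_boost_clock(tdp_watts, base_power, base_voltage, temp_c, temp_limit_c=83):
    """Return the highest (clock, voltage) pair whose estimated power fits the TDP."""
    best = vf_table[0]
    for freq, volt in vf_table:
        # Dynamic power scales roughly with f * V^2 (a common first-order model)
        est_power = base_power * (freq / vf_table[0][0]) * (volt / base_voltage) ** 2
        if est_power <= tdp_watts and temp_c < temp_limit_c:
            best = (freq, volt)
    return best

print(pick_boost_clock(tdp_watts=170, base_power=100, base_voltage=0.800, temp_c=70))
```

The point of the sketch is just that the boost clock falls out of the power and temperature budget, not out of any setting you pick directly, which is why undervolting (shifting the table left) frees up headroom.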
What do you need, and how does it work:
This topic will only be about undervolting with MSI Afterburner's frequency/voltage curve and how to potentially reduce your power consumption, to get lower noise and heat output, higher boostclocks, or both.
I will be using my own EVGA GTX 1070 FTW for this test, and will be using MSI Afterburner 4.3.0, get it here:
I've also selected the standard skin (v3) from the "user interface" options menu. Once you have MSI Afterburner installed, there are a few things you must do before starting. On the default skin, press the cogwheel to get into the settings. In the "general" tab, tick both "Unlock Voltage Control" and "Unlock Voltage Monitoring". If you want the same skin as me, go to the "user interface" tab and select "default MSI v3". In the "monitoring" tab, select "Power", "GPU Temperature", "Coreclock" and "GPU Voltage"; for each of those nodes you can also tick "show in On-Screen Display" down at the bottom. At the top, also make sure to set the polling rate to something like 100-200ms, as that will make it easier to follow the dynamic behaviour.
You can then use your favorite stability/benchmarking utility, but I suggest Unigine Heaven, as I've always found it to be both easy to boot and good at rooting out instabilities. Get it here:
The big issue with GPU Boost 3.0 is that you don't have full control over it. The GPU for the most part has its own idea of what to boost to, and you have to compensate for its behaviour. For example, say you have found your bottom voltage during stress testing; you save the profile and let the card cool. The thermal window for Boost 3.0 has suddenly expanded, so the card will start at a higher boostclock than you were originally testing with and can potentially crash. Once it heats up again, it will return to the boostclock you did the stability test with. A safe bet is to find the bottom voltage and then add a few millivolts to compensate for the higher starting boost.
Undervolting (novices/adepts can start here)
Start by booting up Heaven; run it windowed, but at extreme settings. Alt-tab and put your MSI Afterburner window in front of it, and to avoid confusion, hit the "reset" button. Now let the card run for a good 10-15 minutes until it levels out in temperature. What we are interested in at this stage are the core temperature, the clockspeed and the voltage. If you try to set a curve at a lower temperature, GPU Boost will auto-adjust the graph to hit higher clocks than you've set them at, and when it heats back up, it will land below where you want it to be. This goes a lot more smoothly whilst the GPU is being stressed and GPU Boost stops interfering.
Now whilst having MSI Afterburner in front, press CTRL+F to pull up the boost 3.0 boost table and it will look something like this:
So in my case, my 1070 FTW levels out at 69-70 degrees, with 1924MHz and 1,031V. These are now your base numbers to work from. Now look on the graph for where this GPU Boost 3.0 operating point sits; it should be in the middle of the cross. Set the white node above it to match these numbers, like so:
So 1924 on the frequency (y-axis) and 1031 on the voltage (x-axis); you can use the up/down keys to fine-tune. Now comes the fiddly bit. You need to lower all the nodes to the right of this one so they sit at or below it (so all at ≤1924). It will auto-adjust everything after you hit apply, so you don't need to set everything to exactly 1924, just make sure nothing sits higher. Hit apply after you've done so. If it looks like this, you're done:
EDIT: Thanks to @Darkseth for the keybindings:
Holding SHIFT moves the entire curve up or down, and holding CTRL tilts the curve in a direction. Makes it a lot easier to achieve the goal stated above. Thanks for that.
!!EDIT#2: For some reason this totally screws up my stability when using those SHIFT and CTRL commands on my EVGA unit. If for some reason it keeps crashing even at conservative levels, do not use those commands! I think SHIFT offsets the lower-end behaviour of the card as well and makes it unstable at desktop usage. So it's mostly SHIFT that causes the instability.
!!EDIT#3: On my GTX 1080 FTW this is no longer the case.
If it doesn't, or if the first node has shifted away, lower everything to the left a bit as shown, and readjust the 1031/1924 node again so it snaps into place. If the nodes are too close together, they can swap between predetermined slots; lowering the left nodes makes it easier for the software to distinguish what you're doing. It should now still read the correct values whilst running Heaven.
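If it helps to think of the curve edit as data, here's a small sketch of what the "cap everything to the right" step effectively does, using the 1924MHz/1,031V example from above (all the other node values are made up for illustration):

```python
# Sketch of the curve edit: clamp every node at or to the right of the
# operating point so nothing can boost past it. Voltages (V) -> frequency (MHz);
# values other than 1.031 -> 1924 are illustrative, not from a real card.

curve = {0.800: 1506, 0.900: 1683, 0.975: 1822, 1.031: 1924, 1.050: 1949, 1.093: 2012}

def cap_right_side(curve, target_voltage, target_freq):
    """Clamp all nodes at/above target_voltage to at most target_freq."""
    return {v: (min(f, target_freq) if v >= target_voltage else f)
            for v, f in sorted(curve.items())}

capped = cap_right_side(curve, 1.031, 1924)
print(capped[1.093])  # nodes right of the operating point are now flat at 1924
```

Nodes to the left are untouched at this stage; they only get raised later, during the actual undervolting.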
Now all you need to do is level the nodes (dots) to the left of your operating voltage, one at a time. So raise the node to the left of (in my case) 1031mV up to the same level. Each time you do so, you readjust a powerstate to a lower voltage (as you'll be moving left on the x-axis at the same frequency), and you should see your voltage drop in MSI Afterburner. Keep moving the nodes to 1924 until it fails, then add about 20-25mV back (so if it fails at 0,85V, set it to 0,875V or the nearest available node). Once you've found your voltage, give it a good stresstest to make sure it's really stable. Save the profile by hitting "save" and clicking one of the 5 buttons. For added security you can look at what your power target is doing when the card has levelled out and set the max power target to about +10% of that. So if it's reading 75%, set it to 85%. That way, when the card is cold and you start up a 3D application, it won't try to run 2000mhz and crash on the low voltage.
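The search described above is basically the following loop, sketched here in Python. `is_stable` stands in for an actual Heaven stress run, and the node list is illustrative:

```python
# Sketch of the undervolt search: walk the voltage down one node at a time,
# and when it first fails, keep the previous (higher) node, which matches the
# "if it fails at 0,85, set it to 0,875" rule. Node values are illustrative.

available_nodes_mv = [1031, 1000, 975, 950, 925, 900, 875, 850, 825]

def find_stable_voltage(nodes_mv, is_stable):
    """Return the lowest node that passes, keeping one node of safety margin."""
    last_good = nodes_mv[0]
    for v in nodes_mv[1:]:
        if not is_stable(v):
            # Failed here: fall back to the previous node that still passed
            return last_good
        last_good = v
    return last_good

# Pretend the card crashes below 875mV:
result = find_stable_voltage(available_nodes_mv, lambda v: v >= 875)
print(result)  # 875
```

In practice each `is_stable` check is a proper stress run, and as noted above you should re-test the final value from a cold start, since the card boosts higher before it heats up.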
Now turn off the test, let the GPU settle to its idle temperature and restart the benchmark. It will now boost higher, and the card needs to stay stable at the selected voltage. If it fails, reset the card, let it heat back up to its equilibrium point and load the stable voltage again. Move one node down again to raise the voltage a bit (doing this step at idle will mess up the graph again).
Undervolting and yet Overclocking?
And now you should be done. You can also combine this with an overclock. To do this, reset the card and let it heat up again. Set the +core clock slider to the point where you want your coreclock to be and start over from the beginning. It can look something like this:
As you can see, the power % takes a significant hit, meaning your card is running much more efficiently (so long as the coreclocks don't drop and you've done everything correctly). Note that the result can be significant: the core temperature can drop enough that the card boosts to higher clockspeeds. This is why you need to compensate a bit and not be too greedy on the voltage. Look for a nice sweet spot between stability and power saving.
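To get a feel for why the power % drops so much, here's a back-of-the-envelope estimate. Dynamic power scales roughly with V² at a fixed clock; that's a first-order approximation, not an exact GPU power model, and the voltages are just the ones from the example above:

```python
# Rough estimate of the undervolt's effect: dynamic power ~ V^2 at a fixed
# clock (first-order approximation). Voltages from the guide's example.

stock_v = 1.031      # stock voltage at 1924MHz
undervolt_v = 0.875  # stable undervolt found by the procedure above

savings = 1 - (undervolt_v / stock_v) ** 2
print(f"~{savings:.0%} less dynamic power at the same clock")
```

Which comes out to roughly a quarter less dynamic power for the same clockspeed, and in practice the lower temperature shrinks leakage on top of that.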
Compensating for the performance loss.
Thanks to @Darkseth for pointing this out. After some investigation, it turns out that using aggressive deltas between the powerstates causes a performance delta as well. It can be up to 3-5%. Nothing major, but it can be avoided to a certain extent. Once you have your curve like in the last section above, you may want to make it less abrupt and adjust the powerstates to the left of your cut-off point into a more ramp-like function. This may require a bit of finetuning to figure out what works best. Fire up Heaven again and press "camera" to set it to "free". Pick a point of view without many moving objects to keep the framerate as steady as possible and start sculpting the curve ( ͡° ͜ʖ ͡°). To give you an idea what to aim for, here are a few examples. Make the curve as smooth as possible up to the point where you level it out. Just make sure to leave the first node alone.
- Performance loss? Check above or first comment.
Interested in your results, would appreciate if you left yours below.
No, he's made the same fucking post 4 times now. What is it with this website and bot accounts making phony posts? Is it people that actually think it's funny, or Linus making his forum look more popular than it actually is?
It's a laptop; you do not want it to run at 3.9GHz if it doesn't have to. That will make it run much hotter than necessary and drain excess battery. Disabling turbo actually hurts your performance, since the CPU will only run at base frequency now. And despite the fact that a CPU can run at 105C before throttling, running it at 90C under load and back to 30C at idle several times a day is going to make it fail much quicker due to thermal cycle damage. CPUs also use more power the hotter they get; the rule of thumb is 4% every 10 degrees.
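That 4%-per-10-degrees figure compounds over a temperature swing. A quick sketch of the arithmetic (the 4% value is just the rule of thumb stated above, not a measured number):

```python
# Compound the "~4% more power per 10 degrees" rule of thumb over a
# temperature rise. The 4%/10C figure is a rule of thumb, not measured data.

def power_multiplier(delta_t_c, pct_per_10c=0.04):
    """Power multiplier from temperature alone, compounded per 10 C step."""
    return (1 + pct_per_10c) ** (delta_t_c / 10)

# Going from 30C idle to 90C load is a 60C swing:
print(f"x{power_multiplier(60):.2f} power from temperature alone")
```

So by that rule of thumb, the same chip at 90C draws on the order of a quarter more power than it would at 30C, which is one more reason running cooler pays off.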
Please undo whatever it is that you did and explain why this is important to you. Are you experiencing performance issues?
Usually VR is pretty GPU-dependent due to the high resolution, and VR games are designed with lower geometry and LOD to hit a 90fps target.
I'd say given the 8 threads and the resolution you should at least aim for the minimum required specs for VR when it comes to GPU.
A 590 is not more powerful than a GTX 1080 though. A 1060 or 590 does meet the required spec for VR, so if your budget allows for it, go with either one. And try to squeeze in 16GB instead of 8GB; 8GB is a tad on the low side these days. 16GB of 3000MHz memory would be better than high-end 3200MHz memory if that means you can only fit in 8GB.
Increasing resolution doesn't increase CPU load. In fact, in most cases it decreases it, because the GPU ends up generating fewer frames per second.
Is that the cheapest GPU with that support? Because i'd go for the cheapest.
Mate, even the least informed on this website know that increasing resolution lessens the strain on the CPU, because you generally get lower framerates due to GPU constraints.
How can you have 3.3k posts and not know this.