
GTX 900 Series Improves Efficiency by 20-30%, Not 2X, and Has Lower Overclocking Headroom Than the 700 Series

GPUXPert

As you all know, the GTX 900 series has launched and has been praised by reviewers for its performance, efficiency and pricing, and rightly so.
However, upon closer investigation the whole picture comes into view, and it's not as rosy as Nvidia led us to believe.

Nvidia touted the power efficiency of its 900 series cards, stating that the GTX 980 and 970 have a 165W and 145W TDP respectively. That's all fine and dandy, but Nvidia also claimed a 2X performance/W improvement over the previous-generation Kepler cards, and that number doesn't really pan out.

Looking at independent testing by Tom's Hardware, we find a completely different story. Note: do your best to ignore the Gigabyte GTX 980 OC, as it's got a cherry-picked core.

Gigabyte's GTX 980 WindForce OC stands out yet again, especially when it comes to our idle and gaming readings. It’s amazing what a specially-selected Maxwell GPU can do.

[Image: Tom's Hardware gaming efficiency overview chart]

 

We find that the new cards are actually about 20-30% more efficient than their previous-generation siblings, not the 2X performance/W improvement that Nvidia was shooting for.

In compute, the new cards are actually less efficient than their previous-generation Kepler siblings and significantly less efficient than AMD's entire range. The 900 series cards are purely gaming cards, and they suffer badly in compute as a direct result.

When it comes down to it, our most taxing workloads take Maxwell all the way back to Kepler-class consumption levels. In fact, the GeForce GTX 980 actually draws more power than the GeForce GTX Titan Black without really offering more performance in return.

[Image: Tom's Hardware power consumption overview chart, compute torture test]

 

Tom's Hardware's results match those of Linus Tech Tips almost to a T: in LTT's testing the GTX 980 consumed about 48W less than the 780 Ti, which is exactly the gap Tom's Hardware measured as well.
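A quick back-of-the-envelope check of that figure (a sketch only: the 229W gaming draw for the 780 Ti is an assumed round number, and I'm assuming roughly equal gaming performance between the two cards):

```python
# Rough perf/W comparison from the ~48W gap both outlets measured.
# Assumptions: ~equal gaming performance, 780 Ti drawing ~229W (illustrative).
watts_780ti = 229.0
watts_980 = watts_780ti - 48.0  # ~181W

# With equal performance, the efficiency gain is just the inverse power ratio.
gain = watts_780ti / watts_980 - 1.0
print(f"perf/W improvement: {gain:.0%}")  # ~27% -- nowhere near 2X
```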

 

As far as overclocking goes, one might mistake Maxwell's high clock speeds for higher overclocking headroom.
The GTX 980 and GTX 970 can often reach a 1,400MHz+ boost clock after overclocking, but that doesn't tell the whole story.

The cards have already been very aggressively clocked right out of Nvidia's labs, which ate through most of the overclocking headroom.
Case in point: let's compare the GTX 980 and GTX 780 Ti overclocking results.

 

Bit-Tech review:
GTX 980

After our session of trial and error, we finished on a stable base clock of 1,327MHz, giving us a rated boost clock of 1,416MHz. In practice, the card was happily boosting to between 1,425MHz and 1,450MHz, which is an incredible feat for a reference GPU, and shows just how much headroom Maxwell really has. The 200MHz base clock increase is a tasty 18 percent boost

http://www.bit-tech.net/hardware/graphics/2014/09/19/nvidia-geforce-gtx-980-review/12

GTX 780 Ti

We eventually managed to add 224MHz to the base clock, taking it to an absolutely massive 1,100MHz. The boost clock at this level was 1,152Mhz, but we regularly witnessed it hitting a whopping 1,230Mhz. This base clock gain is a whole 26 percent

http://www.bit-tech.net/hardware/graphics/2013/11/07/nvidia-gtx-780-ti-3gb-review/11

 

TechPowerUp review:

 

GTX 980

Maximum overclock of our sample is 1350 MHz GPU base clock (20% overclock) and 1970 MHz memory (12% overclock).

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_980/29.html

GTX 780 Ti

Maximum overclock of our sample is 1120 MHz GPU (base) clock (28% overclocking) and 1975 MHz memory (13% overclock).

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_780_Ti/30.html

 

You'll also find similar results comparing the GTX 780 and the 970: the 780 would overclock higher by about 10%.
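For reference, here's the headroom math behind those quotes in one place (a small sketch, using reference base clocks of 1,126MHz for the GTX 980 and 875MHz for the GTX 780 Ti):

```python
# Overclocking headroom as a percentage of the reference base clock.
reviews = {
    "GTX 980 (Bit-Tech)":       (1126, 1327),
    "GTX 980 (TechPowerUp)":    (1126, 1350),
    "GTX 780 Ti (Bit-Tech)":    (875, 1100),
    "GTX 780 Ti (TechPowerUp)": (875, 1120),
}
for card, (base, oc) in reviews.items():
    print(f"{card}: +{oc - base}MHz = {oc / base - 1:.0%}")
# GTX 980:    ~18% / ~20% headroom, matching the reviews' own figures
# GTX 780 Ti: ~26% / ~28%
```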

What makes the 900 series attractive isn't that it introduces a new level of performance, because it really doesn't.
What it does is bring existing performance levels to significantly more affordable price points, and performance/$ really is the gold standard of a graphics card's success.


I love how they omitted the R9 285 from the first graph, which mathematically would have fallen very close to the 900 series based on estimations, yet they included the so-far-unreleased 960... that's highly suspicious...

5820K @ 4GHz / 16GB (4x4) DDR4 / MSI X99 SLI+ / Corsair H105 / R9 Fury X / Corsair RM1000i / 128GB SM951 / 512GB 850 Evo / 1+2TB Seagate Barracudas


I don't think anyone who really knows the semiconductor industry would've believed the 2X performance-per-watt claim in the first place.
Simple things like, I don't know... physics get in the way of achieving that on 28nm. Had Maxwell been ported to 20nm, I believe a 60-70% power efficiency gain would've been possible.

As far as overclocking goes, according to AnandTech the GTX 980 and GTX 970 both come with a voltage setting of 1.25V right out of the factory, which is considerably higher than the 780 Ti and the 780, and even AMD's 290X and 290.
The new cards have obviously been pushed hard to be more competitive on the desktop. The GTX 980 in particular had to be very aggressively clocked to compete with the 780 Ti, which it's supposed to replace.
Overclocking and overvolting the cards from the factory obviously affected efficiency, which is why the more conservatively clocked 750 and 750 Ti scored higher on the performance/W chart.
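As a rough illustration of why factory overvolting eats into efficiency: dynamic power scales roughly with frequency times voltage squared (P ∝ f·V²). The 1.25V figure is from the AnandTech observation above; the 1.15V comparison point is an assumed, illustrative Kepler-class voltage, not a measured one:

```python
# Rough dynamic-power model: P scales with f * V^2 (switching power only).
def relative_power(f_ratio: float, v_ratio: float) -> float:
    return f_ratio * v_ratio ** 2

# Illustrative: ~10% higher clocks at 1.25V instead of an assumed 1.15V.
print(f"{relative_power(1.10, 1.25 / 1.15):.2f}x power")  # ~1.30x for ~1.10x perf
```

In this toy model, perf/W drops by roughly 15% under that push, which lines up with the more conservatively clocked 750 Ti topping the efficiency chart.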


How the actual F is my GTX 690 the second most power-hungry GPU!? I have NEVER seen my GPU drain that much power, even when I was mining like mad!

It's been at ~380W at best.

That's not power consumption under gaming; it's under a compute torture test. Nvidia cards drink a lot of juice in compute, which is why Apple chose AMD's professional FirePro cards for its Mac Pro systems.

CPU: i5 3570K @ 4.5GHz. GPU: MSI Lightning GTX 770 @ 1300MHz. 16GB 1600MHz RAM


That's not power consumption under gaming; it's under a compute torture test. Nvidia cards drink a lot of juice in compute, which is why Apple chose AMD's professional FirePro cards for its Mac Pro systems.

And mining is what... gaming? I never said gaming.


Oh wait, this is wrong because TDP is not power consumption or something, don't worry about it, justdontthinkaboutit.

 

Nvidia couldn't possibly do anything wrong, right? They definitely care about their customers since all that money they've got from being the top dog reminds them of their great customers, right? They'd never lie! Big companies never lie!

 

Edit: Before anyone tells me I'm an AMD fanboy, I'm actually getting sick of their shit too. I follow them on Twitter and all I see is "friends don't let friends buy GeForce!" and things like calling Nvidia/Intel hardware "junk". So yeah, I'm sick of both equally.

waffle waffle waffle on and on and on


Linus said 30% improvement in his video, so there is that. I don't know who still believes anything that has an "UP TO" in front of it; that guy must be an imbecile.

i7-8700K @ 5.0 GHz 1.37v, EVGA GTX 1080 FTW Hybrid, Asus z370-A Prime

G.Skill Trident Z RGB 2x8GB 3200MHz CL16, Samsung 960 Evo 256GB, Samsung 850Pro 1TB, WD 3TB HDD

NZXT H440 White, Corsair H100i GTX v2, EVGA 750G2

Corsair AF140 x1, Corsair SP120 x2, Noctua Industrial 3000 RPM x3


Well, looks like I'm going to hold off for a bit again. I do want the GTX 970, though. My R9 290 is just loud, power hungry and warmer than I would have expected.


Nvidia was comparing first-generation Maxwell to first-generation Kepler (i.e. the 600 series). If you watched the event you would know that.


Well, no wonder the power consumption ramps up under a compute torture test... it's not a bloody compute card. It's a gaming card.


Nvidia was comparing first-generation Maxwell to first-generation Kepler (i.e. the 600 series). If you watched the event you would know that.

OK?! It's nowhere near 2X perf/watt compared to first-gen Kepler.

Nvidia said it improved efficiency by 2X compared to first-generation Kepler (i.e. the 600 series). What they achieved was more like 30%. If you looked at the graph you would know that.


Nvidia said it improved efficiency by 2X compared to first-generation Kepler (i.e. the 600 series). What they achieved was more like 30%. If you looked at the graph you would know that.

 

If I remember correctly, 2X efficiency was what they said they were aiming for with Maxwell... I don't think they ever stated that was what they achieved.

 

Even their own graphs/statistics don't show a 2X performance increase: http://www.geforce.co.uk/hardware/desktop-gpus/geforce-gtx-980/performance 


Bad compute performance on the new cards?

That basically confirms a new Titan in the coming months (or next year).

144Hz goodness


If I remember correctly, 2X efficiency was what they said they were aiming for with Maxwell... I don't think they ever stated that was what they achieved.

 

Even their own graphs/statistics don't show a 2X performance increase: http://www.geforce.co.uk/hardware/desktop-gpus/geforce-gtx-980/performance 

GK104 is the GTX 680/770.

"2x perf/watt vs. Kepler"

"2x perf/watt vs. GK104"

[Images: Nvidia presentation slides with the "2x perf/watt" claims (broken links)]


I'm not sure what the problem is with Nvidia clocking aggressively from the off. Overclocking headroom should be a result of the silicon lottery and the cooler, not of their products being nerfed out of the factory.


GK104 is the GTX 680/770.

[Image: Nvidia presentation slide (broken link)]

 

 

I am aware of this... The image is broken, so I am not sure what your point is.


The stuff about the power consumption, sure, especially since I don't know much about it, but what you're saying about overclocking isn't entirely true.

The base clock of the 980 is 1,126MHz; Bit-Tech got +200MHz on their card and TechPowerUp got an overclock of +224MHz on theirs.

With the 780 Ti, Bit-Tech got an overclock of +224MHz and TechPowerUp got a +244MHz overclock.

 

The overclocking difference between the cards is only slightly smaller, if not the same, when you actually look at the increase in core clock. However, the way you're presenting it makes the Maxwell cards look like they're performing worse, because the comparison is done in percentages. If I have a 100MHz GPU that I can OC to 200MHz, that's a 100% increase; that clearly is WAAAAY better than these cards! No. That's dumb.

My point is that the 980 is getting the same OC increases as the 780 Ti, and it starts at a higher base clock. Plus, it's not even a fair comparison, as you're using the 980 and the 780 Ti, cards from different tiers in their given series. If it were the 980 and the 780, that would be fair.
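Put in numbers (same review figures quoted earlier in the thread, using 875MHz as the 780 Ti reference base):

```python
# The same four overclocks, viewed as absolute MHz gains instead of percentages.
gains = {
    "GTX 980 (Bit-Tech)":       1327 - 1126,  # +201MHz
    "GTX 980 (TechPowerUp)":    1350 - 1126,  # +224MHz
    "GTX 780 Ti (Bit-Tech)":    1100 - 875,   # +225MHz
    "GTX 780 Ti (TechPowerUp)": 1120 - 875,   # +245MHz
}
for card, mhz in gains.items():
    print(f"{card}: +{mhz}MHz")
# In absolute terms both cards land in the same ~200-245MHz band.
```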

/rant

CPU- 4690k @4.5ghz / 1.3v    Mobo- Asus Maximus VI Gene   RAM- 12GB GSkill Assorted 1600mhz   GPU- ASUS GTX 760 DCUII-OC 

Storage- 1TB 7200rpm WD Blue + Kingston SSDNow 240GB   PSU- Silverstone Strider ST75F-P

 


I'm not sure what the problem is with Nvidia clocking aggressively from the off. Overclocking headroom should be a result of the silicon lottery and the cooler, not of their products being nerfed out of the factory.

Well, if all cards are being OC'd out of the factory, it's not a disadvantage, it just means that the chips can take more. The OC headroom after that guaranteed point is where the lottery comes in.

Which makes me think, is it really overclocking if all cards handle it?

CPU- 4690k @4.5ghz / 1.3v    Mobo- Asus Maximus VI Gene   RAM- 12GB GSkill Assorted 1600mhz   GPU- ASUS GTX 760 DCUII-OC 

Storage- 1TB 7200rpm WD Blue + Kingston SSDNow 240GB   PSU- Silverstone Strider ST75F-P

 


Well, if all cards are being OC'd out of the factory, it's not a disadvantage, it just means that the chips can take more. The OC headroom after that guaranteed point is where the lottery comes in.

Which makes me think, is it really overclocking if all cards handle it?

 

If all of the GPUs have massive overclocking headroom, it tells me that either something is wrong with their binning process, something is wrong with the system they use to choose a reference clock speed, or they are hoping they can milk people for the GHz or Lightning or whatever editions when no additional binning for those was actually needed.

 

If the engineering sample they had to look at happened not to have a lot of OC headroom, then that's crappy luck, but you can't assume that that's true for anything but the engineering sample they had.


If you have a processor that hits 10GHz right out of the factory and overclocks to 11GHz, that's just a 10% gain in clock speed and performance.
On the other hand, if you have a CPU that hits 1GHz out of the factory and can be overclocked to 2GHz, that's a massive 100% gain in frequency and performance, even though it overclocked by 1GHz just like the 10GHz processor.
 

Overclocking headroom has to be defined by the percentage, not the clock speed, because not all processors are equal. A lower-frequency processor can easily beat a higher-frequency processor depending on how many instructions per clock it can do.

The early AMD CPUs from the K7 era had massive overclocking headroom. They would run at 1.1GHz out of the box but could overclock to 2.3GHz on air, which is considered massive headroom because you essentially doubled the performance. Now, though, if you overclock a 4GHz FX-8350 to 5GHz you only gain 25% in performance.
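The same point as a one-liner (the 10GHz/1GHz thought experiment above, plus the FX-8350 case):

```python
def headroom(base_ghz: float, oc_ghz: float) -> float:
    """Overclocking headroom as a fraction of the base clock."""
    return oc_ghz / base_ghz - 1.0

print(f"{headroom(10.0, 11.0):.0%}")  # 10%  -- +1GHz on a 10GHz part
print(f"{headroom(1.0, 2.0):.0%}")    # 100% -- the same +1GHz doubles a 1GHz part
print(f"{headroom(4.0, 5.0):.0%}")    # 25%  -- the FX-8350 example
```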

CPU: i5 3570K @ 4.5GHz. GPU: MSI Lightning GTX 770 @ 1300MHz. 16GB 1600MHz RAM


WELL.... IT'S STILL A FUCKING BEAST RIGHT?

Main System - 2016 13"nTB MBP 256GB

Gaming Rig - 4790K, 16GB RAM, 1080Ti
Monitor - Dell 25" U2515H

K/B & M - Ducky One TKL, Logitech MX Master & G900

Audio - JDS Labs The Element, Aktimate Mini B+, Krix Seismix 3 Mk6, Ultrasone Pro 900

 


Didn't they say that when they still thought they could shrink the die?

So maybe when they do shrink the die, we'll actually see it.

if you have to insist you think for yourself, i'm not going to believe you.

