
The GTX 1080 slides. Why do you trust them?

Prysin

Ok, so looking at Pascal's slides, namely the now infamous "GTX 1080 2x faster than Titan X" one, you must have been more than a LITTLE sceptical.

But why is that? Because if you knew how strong the Titan X is (theoretically, not just practically) you would also know that the GTX 1080 has less of almost everything. It only has more SMs and higher clock speeds (by default).

 

Which brings us to the conundrum of how it achieves this "2x" performance. And what the hell was Nvidia benchmarking these cards in?!

 

OK, first off. Let us look at COLD HARD FACTS.

 

GTX 1080 (CUDA 6)

https://www.techpowerup.com/gpudb/2839/geforce-gtx-1080

Clock Speeds

GPU Clock: 1607 MHz

Boost Clock: 1733 MHz

Memory Clock: 2500 MHz
10000 MHz effective

 

Memory

Memory Size: 8192 MB

Memory Type: GDDR5X

Memory Bus: 256 bit

Bandwidth: 320 GB/s

 

Render Config

Shading Units: 2560

TMUs: 160

ROPs: 80

SM Count: 40

Pixel Rate: 128.6 GPixel/s

Texture Rate: 257.1 GTexel/s

Floating-point performance: 8,228 GFLOPS

 

TITAN X (CUDA 5.2)

https://www.techpowerup.com/gpudb/2632/geforce-gtx-titan-x

 

Clock Speeds

GPU Clock: 1000 MHz

Boost Clock: 1089 MHz

Memory Clock: 1753 MHz
7012 MHz effective

 

Memory

Memory Size: 12288 MB

Memory Type: GDDR5

Memory Bus: 384 bit

Bandwidth: 337 GB/s

 

Render Config

Shading Units: 3072

TMUs: 192

ROPs: 96

SMM Count: 24

Pixel Rate: 96.0 GPixel/s

Texture Rate: 192.0 GTexel/s

Floating-point performance: 6,144 GFLOPS

 

But now, let us add a custom-PCB 980 Ti at much higher clocks than normal (CUDA 5.2):

https://www.techpowerup.com/gpudb/b3423/asus-gold20th-gtx-980-ti-platinum

Clock Speeds

GPU Clock: 1266 MHz (+27%)

Boost Clock: 1367 MHz (+27%)

Memory Clock: 1800 MHz (+3%)
7200 MHz effective

 

Memory

Memory Size: 6144 MB

Memory Type: GDDR5

Memory Bus: 384 bit

Bandwidth: 346 GB/s

 

Render Config

Shading Units: 2816

TMUs: 176

ROPs: 96

SMM Count: 22

Pixel Rate: 121.5 GPixel/s

Texture Rate: 222.8 GTexel/s

Floating-point performance: 7,130 GFLOPS

 

So a custom 980 Ti at roughly 30% higher clock speeds CAN outperform a Titan X. The peculiar thing is that despite having 60% faster clock speeds than a Titan X (and about 256 fewer CUDA cores than that custom 980 Ti), the GTX 1080 still only delivers a tiny bit more theoretical performance, even over the Platinum GTX 980 Ti. Which suggests that the claimed performance of the GTX 1080 is most likely not achieved through hardware but through software; the slides were most likely rigged with drivers.

 

"But Prysin, how can you say that? GFLOPS isn't a measurement of actual performance in a GPU."

Well.... yes, it sort of is. When I say that GFLOPS is a VERY good metric for theoretical performance, it is simply because GFLOPS IS the theoretical throughput of the GPU: how many floating-point operations it can perform per second, which comes down to shader count x 2 x clock speed.
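
To make that concrete, here is a minimal Python sketch (my own illustration, not Nvidia's or TechPowerUp's math, though it reproduces the numbers quoted above) of how those GFLOPS figures are derived from the specs:

# Peak FP32 throughput = shading units * 2 ops per clock (fused multiply-add) * clock speed.
# Base clocks in MHz, taken from the TechPowerUp entries quoted above; result in GFLOPS.
def peak_gflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1000

cards = {
    "GTX 1080":          (2560, 1607),
    "TITAN X":           (3072, 1000),
    "980 Ti (Platinum)": (2816, 1266),
}

for name, (shaders, clock) in cards.items():
    print(f"{name}: {peak_gflops(shaders, clock):.0f} GFLOPS")

# Prints 8228, 6144 and 7130 GFLOPS respectively, matching the figures quoted above.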

 

So, how much faster is the GTX 1080 on paper, vs the Titan X?

 

GTX 1080 -> TITAN X -> Difference

CUDA:        2560 -> 3072 -> -16.7%

TMUs:         160 ->  192 -> -16.7%

ROPs:          80 ->   96 -> -16.7%

SMs:           40 ->   24 -> +66.7%

Pix Rate:   128.6 ->   96 -> +33.9%

Tex Rate:   257.1 ->  192 -> +33.9%

 

Clock speeds:

Stock: 1607 -> 1000 -> +60.7%

Boost: 1733 -> 1089 -> +59.1%

 

GFLOPS:

8228 -> 6144 -> +33.9%

 

Even if we look at perf per watt, using TDP as our "power consumption" metric:

Titan X -> GTX 1080 -> Difference

250 W   ->  180 W   -> -28% (i.e. the Titan X's TDP is roughly 39% higher)

 

Meaning the GTX 1080 is going to use roughly 28% less power than the Titan X.... But let us look at cold hard facts again.

GFLOPS per WATT (TDP)

TITAN X = 24.576 GFLOPS/Watt

GTX 1080 = 45.711 GFLOPS/Watt

Difference: +86% more flops per watt for the GTX 1080.
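
And a small sketch of where those perf-per-watt figures come from, again just re-deriving the numbers above with TDP standing in for real power draw:

titan_x_ppw  = 6144 / 250   # 24.576 GFLOPS per watt
gtx_1080_ppw = 8228 / 180   # 45.71 GFLOPS per watt

print(f"TITAN X : {titan_x_ppw:.3f} GFLOPS/W")
print(f"GTX 1080: {gtx_1080_ppw:.3f} GFLOPS/W")
print(f"Difference: +{(gtx_1080_ppw / titan_x_ppw - 1) * 100:.0f}%")   # roughly +86%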

 

IN SHORT. THERE IS NO THEORETICAL WAY FOR THE GTX 1080 TO PERFORM 2X PERF PER WATT, OR EVEN 2X PERFORMANCE OF THE TITAN X. NOT ON A HARDWARE LEVEL.

This means that, most likely, that Nvidia slide is rigged to strongly favor the GTX 1080: either through drivers, or through using features of CUDA 6 that the Titan X simply cannot run natively on CUDA 5.2 and thus has to EMULATE.

Which would further mean that the setup was DEFINITIVELY rigged to show how much better the GTX 1080 is.

 

I DON'T KNOW WHAT NVIDIA USED TO TEST THOSE GPUS, BUT THEIR NUMBERS SEEM TO BE PULLED STRAIGHT OUTTA THEIR ASSES.

 

Remember how they nerfed the 700 series with drivers?! Remember how a lot of us said they were going to do that with the 900 series once Pascal came out?

If we see more weird results from games in the coming months, games using features that the Maxwell series fully supports but that still run WAY faster on Pascal, well, then you have absolute proof that the 900 series is getting nerfed to sell more Pascal units.

 


It's the bandwagon effect; people want the next great thing, even without having seen the product yet.

CPU: 6700k 4.6Ghz GPU: MSI GTX 1070 Gaming X MB: MSI Gaming M5 PSU: Evga 750 G2 Case: Phanteks EVOLV 


I'll admit I'm sceptical, but I'm going to pick up a 1070 in July regardless of how far off their claims were. Can't be worse than the 900 series, can it?

 

...can it?

Project White Lightning (My ITX Gaming PC): Core i5-4690K | CRYORIG H5 Ultimate | ASUS Maximus VII Impact | HyperX Savage 2x8GB DDR3 | Samsung 850 EVO 250GB | WD Black 1TB | Sapphire RX 480 8GB NITRO+ OC | Phanteks Enthoo EVOLV ITX | Corsair AX760 | LG 29UM67 | CM Storm Quickfire Ultimate | Logitech G502 Proteus Spectrum | HyperX Cloud II | Logitech Z333

Benchmark Results: 3DMark Firestrike: 10,528 | SteamVR VR Ready (avg. quality 7.1) | VRMark 7,004 (VR Ready)

 

Other systems I've built:

Core i3-6100 | CM Hyper 212 EVO | MSI H110M ECO | Corsair Vengeance LPX 1x8GB DDR4  | ADATA SP550 120GB | Seagate 500GB | EVGA ACX 2.0 GTX 1050 Ti | Fractal Design Core 1500 | Corsair CX450M

Core i5-4590 | Intel Stock Cooler | Gigabyte GA-H97N-WIFI | HyperX Savage 2x4GB DDR3 | Seagate 500GB | Intel Integrated HD Graphics | Fractal Design Arc Mini R2 | be quiet! Pure Power L8 350W

 

I am not a professional. I am not an expert. I am just a smartass. Don't try and blame me if you break something when acting upon my advice.

 
...why are you still reading this?


I agree. I was looking at this yesterday.

 

When comparing X with Y for scientific purposes, you HAVE to compare using all the information.

 

I work in a regulatory environment. There is absolutely NO WAY you could make these sorts of performance claims there and not back them up with detail.

 

Your analysis is very good and I agree: Nvidia are exaggerating the performance claims for marketing purposes.

 

In the regulatory area you legally have to back up claims with scientific data before putting things on the market.

 

The same goes for Pascal here.

 

In reality the differences are going to be smaller against a 980 Ti (once it has had aftermarket treatment from companies like Asus, EVGA etc.), and even less so with the TITAN X (which has no aftermarket variants but is drastically overpowered for its generation of cards).

 

It's pretty naughty of Nvidia to do this sort of thing.

 

Someone in Nvidia corporate compliance needs to be checking on some of the claims.

My Rig "Valiant"  Intel® Core™ i7-5930 @3.5GHz ; Asus X99 DELUXE 3.1 ; Corsair H110i ; Corsair Dominator Platinium 64GB 3200MHz CL16 DDR4 ; 2 x 6GB ASUS NVIDIA GEFORCE GTX 980 Ti Strix ; Corsair Obsidian Series 900D ; Samsung 950 Pro NVME + Samsung 850 Pro SATA + HDD Western Digital Black - 2TB ; Corsair AX1500i Professional 80 PLUS Titanium ; x3 Samsung S27D850T 27-Inch WQHD Monitor
 

Well..... nobody really should believe the "2x" PR BS.

It can probably do a (VR) task twice as fast, but that is achieved simply by not calculating things that will not be used for the final image, something ARM GPUs have done for years. This "trick" can (or could, because Nvidia wants you to buy a new card) also be applied to the old cards, and then you are back down to the 20-30% difference that the hardware actually provides.
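
Purely as an illustration of that point (the percentages below are my assumptions, not anything Nvidia or anyone here published), here is how a roughly 2x "VR" figure can fall out of a ~1.3x raw hardware gain combined with simply not shading pixels that never make it into the final lens-warped image:

# Illustrative numbers only -- assumed, not measured.
hw_speedup     = 1.34   # rough theoretical GFLOPS advantage over the Titan X (see the OP)
work_discarded = 0.35   # assumed fraction of shading work a VR-specific trick skips

effective_speedup = hw_speedup / (1 - work_discarded)
print(f"Effective 'VR speedup': {effective_speedup:.2f}x")   # about 2.06x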

Mineral oil and 40 kg aluminium heat sinks are a perfect combination: 73 cores and a Titan X, Twenty Thousand Leagues Under the Oil


You know how the 680 had 3x the CUDA cores compared to the 580, yet wasn't 3x the performance? Stop basing your entire performance chart on the idea that the 1080 having fewer CUDA cores means it can't perform better. Second, if you had even bothered to look at the slide they showed, it was specifically talking about VR performance for the 2x perf / 3x power difference. You still have no performance metrics to compare it to, and Nvidia may actually be closer to their claims than you want them to be. Get off the circle-jerk, fuck-Nvidia bandwagon for the love of god and wait for some actual benchmarks, instead of skewing information you didn't bother to read in the first place and then drawing a conclusion.

.


13 minutes ago, Prysin said:

Remember how they nerfed the 700 series with drivers?! Remember how a lot of us said they were going to do that with the 900 series once Pascal came out?

If we see more weird results from games in the coming months, games using features that the Maxwell series fully supports but that still run WAY faster on Pascal, well, then you have absolute proof that the 900 series is getting nerfed to sell more Pascal units.

 

 

Re: final two paragraphs:

 

 

If Nvidia are caught removing driver support, or writing software that reduces the performance of existing cards (newly purchased, within the warranty period) in order to force consumers to upgrade (at cost) to a new card, that would potentially be illegal and highly unethical.

 

There would probably be competition issues too, including market distortion.

 

Abuse of a dominant market position is illegal, and Nvidia would be getting into very dangerous territory if they were caught doing this, as it would be a clear abuse of their position in the market, over and above the 'hairworks' and other recent issues with AMD.

 

My Rig "Valiant"  Intel® Core™ i7-5930 @3.5GHz ; Asus X99 DELUXE 3.1 ; Corsair H110i ; Corsair Dominator Platinium 64GB 3200MHz CL16 DDR4 ; 2 x 6GB ASUS NVIDIA GEFORCE GTX 980 Ti Strix ; Corsair Obsidian Series 900D ; Samsung 950 Pro NVME + Samsung 850 Pro SATA + HDD Western Digital Black - 2TB ; Corsair AX1500i Professional 80 PLUS Titanium ; x3 Samsung S27D850T 27-Inch WQHD Monitor
 

My guess is they tested it on DX12-related benchmarks, because Maxwell didn't do too well there.

Cpu: Ryzen 2700 @ 4.0Ghz | Motherboard: Hero VI x370 | Gpu: EVGA RTX 2080 | Cooler: Custom Water loop | Ram: 16GB Trident Z 3000MHz

PSU: RM650x + Braided cables | Case:  painted Corsair c70 | Monitor: MSI 1440p 144hz VA | Drives: 500GB 850 Evo (OS)

Laptop: 2014 Razer blade 14" Desktop: http://imgur.com/AQZh2sj , http://imgur.com/ukAXerd

 


2 hours ago, AlwaysFSX said:

You know how the 680 had 3x the CUDA cores compared to the 580, yet wasn't 3x the performance? Stop basing your entire performance chart on the idea that the 1080 having fewer CUDA cores means it can't perform better. Second, if you had even bothered to look at the slide they showed, it was specifically talking about VR performance for the 2x perf / 3x power difference. You still have no performance metrics to compare it to, and Nvidia may actually be closer to their claims than you want them to be. Get off the circle-jerk, fuck-Nvidia bandwagon for the love of god and wait for some actual benchmarks, instead of skewing information you didn't bother to read in the first place and then drawing a conclusion.

I love that you're trying to bash my logic. This logic doesn't only apply to Nvidia hardware; it applies to AMD hardware too. If you compare the PS4 to the Xbox One, you realize that the PS4 has around 40% more GPU horsepower, whilst the Xbox has about 7% more CPU horsepower. Despite both being based on the same architecture, the PS4 delivers about 30% more performance.

 

Even in compute-heavy scenarios, such as VR, the GTX 1080 has NO SINGLE HARDWARE-BASED PERFORMANCE EDGE THAT WOULD LET IT PUSH 2X ANYTHING.
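
For reference, the same back-of-the-envelope FLOPS math applied to the consoles (shader counts and clocks are the commonly published figures, so treat the result as approximate):

# Peak FP32 GFLOPS = shaders * 2 * clock in GHz; published console GPU figures.
ps4_gpu   = 1152 * 2 * 0.800   # about 1843 GFLOPS
xbone_gpu =  768 * 2 * 0.853   # about 1310 GFLOPS

print(f"PS4 GPU advantage: +{(ps4_gpu / xbone_gpu - 1) * 100:.0f}%")   # roughly +41%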


1 minute ago, mark_cameron said:

 

Re: final two paragraphs:

 

 

If Nvidia are caught removing driver support, or writing software that reduces the performance of existing cards (newly purchased, within the warranty period) in order to force consumers to upgrade (at cost) to a new card, that would potentially be illegal and highly unethical.

 

There would probably be competition issues too, including market distortion.

 

Abuse of a dominant market position is illegal, and Nvidia would be getting into very dangerous territory if they were caught doing this, as it would be a clear abuse of their position in the market, over and above the 'hairworks' and other recent issues with AMD.

 

It's more that they don't work on previous generations as much... which makes sense, considering they want their new architecture to look as good as possible. I doubt they would go as far as spending time degrading the performance of an old architecture.

Cpu: Ryzen 2700 @ 4.0Ghz | Motherboard: Hero VI x370 | Gpu: EVGA RTX 2080 | Cooler: Custom Water loop | Ram: 16GB Trident Z 3000MHz

PSU: RM650x + Braided cables | Case:  painted Corsair c70 | Monitor: MSI 1440p 144hz VA | Drives: 500GB 850 Evo (OS)

Laptop: 2014 Razer blade 14" Desktop: http://imgur.com/AQZh2sj , http://imgur.com/ukAXerd

 


I'm not ENTIRELY sure what you're getting at, but it looks to me like you're comparing two different architectures basically by core count and clock speed.

It's the same with GPUs as it is with CPUs: You cannot necessarily compare the performance of two chips of different architectures with their listed specs.

 

It seems like it would be very easy to tell if Nvidia were nerfing older cards: less a conspiracy, more a collective realization that old numbers don't match new numbers. It's not like there's no massive collection of performance data to refer to.
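
A rough sketch of what that "collective realization" looks like in practice: take the FPS numbers a card scored on an old driver, compare them against a new driver, and flag big drops. The game names and numbers below are placeholders, not real measurements:

# Hypothetical average FPS for the same card and settings on two driver versions.
old_driver = {"Game A": 74.0, "Game B": 61.0, "Game C": 88.0}
new_driver = {"Game A": 73.0, "Game B": 49.0, "Game C": 87.0}

THRESHOLD = 0.05   # flag anything more than 5% slower on the new driver

for game, old_fps in old_driver.items():
    drop = 1 - new_driver[game] / old_fps
    if drop > THRESHOLD:
        print(f"{game}: {drop:.0%} slower on the new driver -- worth a closer look")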

"Do as I say, not as I do."

-Because you actually care if it makes sense.


2 hours ago, Prysin said:

I love that you're trying to bash my logic. This logic doesn't only apply to Nvidia hardware; it applies to AMD hardware too. If you compare the PS4 to the Xbox One, you realize that the PS4 has around 40% more GPU horsepower, whilst the Xbox has about 7% more CPU horsepower. Despite both being based on the same architecture, the PS4 delivers about 30% more performance.

 

Even in compute-heavy scenarios, such as VR, the GTX 1080 has NO SINGLE HARDWARE-BASED PERFORMANCE EDGE THAT WOULD LET IT PUSH 2X ANYTHING.

Pascal may have a bigger lead over Maxwell in DX12 benchmarks, as Maxwell performed quite poorly in DX12 scenarios.

Likely just cherry-picked benchmarks to get a bigger improvement than in real life...

Cpu: Ryzen 2700 @ 4.0Ghz | Motherboard: Hero VI x370 | Gpu: EVGA RTX 2080 | Cooler: Custom Water loop | Ram: 16GB Trident Z 3000MHz

PSU: RM650x + Braided cables | Case:  painted Corsair c70 | Monitor: MSI 1440p 144hz VA | Drives: 500GB 850 Evo (OS)

Laptop: 2014 Razer blade 14" Desktop: http://imgur.com/AQZh2sj , http://imgur.com/ukAXerd

 


2 hours ago, Prysin said:

I love that you're trying to bash my logic. This logic doesn't only apply to Nvidia hardware; it applies to AMD hardware too. If you compare the PS4 to the Xbox One, you realize that the PS4 has around 40% more GPU horsepower, whilst the Xbox has about 7% more CPU horsepower. Despite both being based on the same architecture, the PS4 delivers about 30% more performance.

Even in compute-heavy scenarios, such as VR, the GTX 1080 has NO SINGLE HARDWARE-BASED PERFORMANCE EDGE THAT WOULD LET IT PUSH 2X ANYTHING.

 

I've got x2 Asus 980Ti Strix and am a 'fanboy'.

 

Doesn't mean I'm blind to potential Nvidia misbehaviour and I fully agree with your skepticism in the OP.

 

It's right that someone should question the figures.

My Rig "Valiant"  Intel® Core™ i7-5930 @3.5GHz ; Asus X99 DELUXE 3.1 ; Corsair H110i ; Corsair Dominator Platinium 64GB 3200MHz CL16 DDR4 ; 2 x 6GB ASUS NVIDIA GEFORCE GTX 980 Ti Strix ; Corsair Obsidian Series 900D ; Samsung 950 Pro NVME + Samsung 850 Pro SATA + HDD Western Digital Black - 2TB ; Corsair AX1500i Professional 80 PLUS Titanium ; x3 Samsung S27D850T 27-Inch WQHD Monitor
 

1 minute ago, Lord_Karango17 said:

It's more that they don't work on previous generations as much... which makes sense, considering they want their new architecture to look as good as possible. I doubt they would go as far as spending time degrading the performance of an old architecture.

For a while they DID degrade the GTX 700 series. It lasted 1-2 months. Performance DROPPED in games it had previously performed rather decently in. This was discovered by gamers, and there was quite a bit of rage going on. Nvidia quickly realized they had fucked up and patched the 700 series back to how it was.

Now I have no sales data to back up the notion that this was done to increase sales of the 900 series, but it wouldn't shock me if that were actually the "plan" behind the nerf.


I think I'll get my 1080 regardless. I'm wanting to upgrade anyway, and no xx80 launch card has been a dud for a while. Whereas if I were to buy a 1070, I think I'd hold off for a month or two, just in case the whole VRAM crap happens again....

EVE (My Gaming Build)

Motherboard: Asus Maximus VII Ranger - CPU: Intel i5 4690k @ Stock Speeds - Cooling: Phanteks TC-12DX - PSU: Corsair RM750 - GPU: MSI GTX 1070 - Storage: 1x 120gb Samsung 830 Evo SSD (OS Drive) | 1x 120gb Hyperx 3k SSD (Audio Editing) | 1x 480gb Sandisk Ultra II SSD (Games) | 1x 1TB Hard Drive (Mass Storage) - Case: NZXT Phantom 530 - Misc: NZXT Hue RGB LED and controller


3 minutes ago, Lord_Karango17 said:

It's more that they don't work on previous generations as much... which makes sense, considering they want their new architecture to look as good as possible. I doubt they would go as far as spending time degrading the performance of an old architecture.

 

Yes. But I think we both understand it's possible, and why consumers should be wary of it. Dominant behaviour in the IT industry is a huge topic at the moment.

My Rig "Valiant"  Intel® Core™ i7-5930 @3.5GHz ; Asus X99 DELUXE 3.1 ; Corsair H110i ; Corsair Dominator Platinium 64GB 3200MHz CL16 DDR4 ; 2 x 6GB ASUS NVIDIA GEFORCE GTX 980 Ti Strix ; Corsair Obsidian Series 900D ; Samsung 950 Pro NVME + Samsung 850 Pro SATA + HDD Western Digital Black - 2TB ; Corsair AX1500i Professional 80 PLUS Titanium ; x3 Samsung S27D850T 27-Inch WQHD Monitor
 

2 minutes ago, Prysin said:

For a while they DID degrade the GTX 700 series. It lasted 1-2 months. Performance DROPPED in games it had previously performed rather decently in. This was discovered by gamers, and there was quite a bit of rage going on. Nvidia quickly realized they had fucked up and patched the 700 series back to how it was.

Now I have no sales data to back up the notion that this was done to increase sales of the 900 series, but it wouldn't shock me if that were actually the "plan" behind the nerf.

Well, I did have a bit of a shock when looking at benchmarks of Kepler in The Witcher 3, because my card was being slaughtered by a fucking 960 (with HairWorks on)... Don't know if they patched it or not, but from what I've heard there was a slight bit of merit behind the poor performance; still, Nvidia could probably have done a bit more work improving performance on the older gen.

Cpu: Ryzen 2700 @ 4.0Ghz | Motherboard: Hero VI x370 | Gpu: EVGA RTX 2080 | Cooler: Custom Water loop | Ram: 16GB Trident Z 3000MHz

PSU: RM650x + Braided cables | Case:  painted Corsair c70 | Monitor: MSI 1440p 144hz VA | Drives: 500GB 850 Evo (OS)

Laptop: 2014 Razer blade 14" Desktop: http://imgur.com/AQZh2sj , http://imgur.com/ukAXerd

 


2 hours ago, Prysin said:

I love that you're trying to bash my logic. This logic doesn't only apply to Nvidia hardware; it applies to AMD hardware too. If you compare the PS4 to the Xbox One, you realize that the PS4 has around 40% more GPU horsepower, whilst the Xbox has about 7% more CPU horsepower. Despite both being based on the same architecture, the PS4 delivers about 30% more performance.

 

Even in compute-heavy scenarios, such as VR, the GTX 1080 has NO SINGLE HARDWARE-BASED PERFORMANCE EDGE THAT WOULD LET IT PUSH 2X ANYTHING.

I'm not bashing your logic, I'm bashing your basis for argument because it's simply wrong. You are taking a method and applying it to a subjective opinion that is objectively wrong.

 

As I said above, Nvidia stated that the only scenario where the 1080 is twice the performance of the Titan X is VR, and if you had even watched the conference, they were talking about using software technologies to help improve performance in drawing and rendering frames in VR so nothing gets wasted.

 

Not once did they mention the 1080 being twice the performance in regular 3D applications. Their own website shows that it doesn't get that kind of performance there. The Titan X is roughly 30% faster than a 980, and you can clearly see that the 1080 is more than 130% faster than a 980 in VR.

 

[Image: 1080 performance.png]

 

So fill me in, oh holy unbiased determining body of goodness: where did Nvidia lie about the performance claims on the 1080? Or is it possibly you who can't read, cherry-picks a nonexistent argument, and applies outdated information to a topic you don't know the whole story on, which is the problem here?

.


2 minutes ago, ThatCarlosGuy said:

I think I'll get my 1080 regardless. I'm wanting to upgrade anyway, and no xx80 launch card has been a dud for a while. Whereas if I were to buy a 1070, I think I'd hold off for a month or two, just in case the whole VRAM crap happens again....

It's safe to say that Nvidia will never make another mistake like that again.

They'd be too scared to.

"Do as I say, not as I do."

-Because you actually care if it makes sense.


It shows in "VR". With their new stuff that only renders what is shown in vr, it is faster in vr. In normal gaming and applications, it wouldn't be that much faster. 

Star Citizen referral codes, to help support your fellow comrades!
UOLTT Discord server, come on over and chat!

i7 4790k/ Bequiet Pure Rock/Asrock h97 PRO4/ 8 GB Crucial TT/ Corsair RM 750/ H-440 Custom/  PNY GT 610

Damn you're like a modular human being. -ThatCoolBlueKidd

 

1 minute ago, AlwaysFSX said:

I'm not bashing your logic, I'm bashing your basis for argument because it's simply wrong. You are taking a method and applying it to a subjective opinion that is objectively wrong.

 

As I said above, Nvidia stated that the only scenario where the 1080 is twice the performance of the Titan X is VR, and if you had even watched the conference, they were talking about using software technologies to help improve performance in drawing and rendering frames in VR so nothing gets wasted.

 

Not once did they mention the 1080 being twice the performance in regular 3D applications. Their own website shows that it doesn't get that kind of performance there. The Titan X is roughly 30% faster than a 980, and you can clearly see that the 1080 is more than 130% faster than a 980 in VR.

 

[Image: 1080 performance.png]

 

So fill me in, oh holy unbiased determining body of goodness: where did Nvidia lie about the performance claims on the 1080? Or is it possibly you who can't read, cherry-picks a nonexistent argument, and applies outdated information to a topic you don't know the whole story on, which is the problem here?

He might've only seen the slide where it just said "2x faster" with no context. Like this one:

[Image: NVIDIA+GTX1080+event+1.jpg]


Until I see some actual game benchmarks, Nvidia can shove their bullshit graphs up their arse. On a side note, AMD had an engineering sample of Polaris up against an Nvidia GTX 900 series card, and it was able to play Star Wars Battlefront at Ultra settings @ 1080p and maintain 60 fps, whilst consuming a third of the power of Nvidia's card.

Shot through the heart and you're to blame, 30fps and i'll pirate your game - Bon Jovi

Take me down to the console city where the games are blurry and the frames are thirty - Guns N' Roses

Arguing with religious people is like explaining to your mother that online games can't be paused...


I love how people think a company (Nvidia) wouldn't reduce 900 series performance because it's unethical... You act like no company has ever lied about results (Mitsubishi) or reduced hardware performance with software (Samsung NX cameras).

