
The GTX 1080 slides. Why do you trust them?

Prysin
Just now, i_build_nanosuits said:

I don't either.

Anyway, back to VRWorks Audio.

I hope AMD can find a way to get it to work with their TrueAudio ASIC. That would give them the same functionality with essentially zero latency, as ASICs are much faster at their dedicated jobs than a GPU or CPU.


1 minute ago, Prysin said:

Anyway, back to VRWorks Audio.

I hope AMD can find a way to get it to work with their TrueAudio ASIC. That would give them the same functionality with essentially zero latency, as ASICs are much faster at their dedicated jobs than a GPU or CPU.

A GPU is an ASIC chip...

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


7 minutes ago, i_build_nanosuits said:

A GPU is an ASIC chip...

No, not even close.

 

A GPU is a highly parallelized processor. If you could be bothered, ANY GPU can run ANY CPU task via GPGPU. It may take forever due to low per-thread IPC and clock rate (especially if the task is single-threaded), but it can be done.
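To illustrate the GPGPU point, here's a minimal Python sketch, assuming CuPy and a CUDA-capable NVIDIA GPU are available (my assumption for illustration, nothing from the post above). It runs an arbitrary numeric task on the GPU, exactly the kind of general-purpose work an ASIC could never accept:

```python
# A generic computation offloaded to the GPU via GPGPU.
# Requires CuPy (pip install cupy) and a CUDA-capable GPU.
import cupy as cp

x = cp.arange(10_000_000, dtype=cp.float32)
y = cp.sqrt(x) * 2.0 + 1.0        # arbitrary arithmetic, not graphics work
total = float(y.sum())            # copy the scalar result back to the host
print(total)
```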

An ASIC is literally a chip built for one small set of algorithms.

 

That is why ASIC Bitcoin miners are so much faster and more power efficient than a GPU. The chip is designed from the ground up to contain only the exact hardware needed for a very limited set of algorithms; it literally has only what it needs, and its pipeline is fully optimized for the task at hand. Thus even with the fastest ASIC Bitcoin miner, which is literally 100,000 times faster at hashing than an R9 290X, you still couldn't run an 8-bit Mario game on it even if you had proper drivers, because the whole thing is purpose-built for only that one task.
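For scale: the entire "algorithm" a Bitcoin mining ASIC bakes into silicon is double SHA-256 over an 80-byte block header. A minimal Python sketch of that one fixed job (the header bytes here are made up for illustration):

```python
import hashlib
import struct

def sha256d(data: bytes) -> bytes:
    """Bitcoin's proof-of-work hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# A mining ASIC hard-wires exactly this loop and nothing else:
# hash the 80-byte header over and over with an incrementing nonce.
header_prefix = b"\x00" * 76              # hypothetical 76-byte header prefix
for nonce in range(3):
    header = header_prefix + struct.pack("<I", nonce)
    print(nonce, sha256d(header)[::-1].hex())
```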

 

A GPU, by contrast, has units for a huge array of workloads: computation, texture rendering, shaders, pixel scaling, and a mix of all of these through highly advanced drivers and hardware architecture.

 

EDIT: In case you have forgotten, ASIC means "Application-Specific Integrated Circuit".


This is why you should NEVER believe pre-release PR, or even think about upgrading, until the cards are out and we have actual benchmarks for them. And not just for, say, Ashes of the Singularity, where it would make sense for Nvidia to optimize their drivers to recover from the PR disaster that game was for them, but for many games at multiple settings, plus dedicated benchmark tools as well.


The only way we're going to know the real-world difference is when we finally see test results from a wide range of reviewers. Until then, I'm not getting overly hyped about anything, and even then I'm going to wait for real-world numbers from Polaris as well.

 

Speculate all you want until you're blue in the face; it will mean nothing until we see what these cards can really do.

 

I'm quite happy with my 980 and its healthy daily OC of 1459MHz until the Pascal vs Polaris war has been well documented and tested in the real world for all to see.

 

There are so many "should I sell my 9XX and get a 10XX?" threads asking for information we just don't have yet. It makes my brain hurt. Hardware specs only tell so much of the full story. ;)

My Systems:

Main - Work + Gaming:


Woodland Raven: Ryzen 2700X // AMD Wraith RGB // Asus Prime X570-P // G.Skill 2x 8GB 3600MHz DDR4 // Radeon RX Vega 56 // Crucial P1 NVMe 1TB M.2 SSD // Deepcool DQ650-M // chassis build in progress // Windows 10 // Thrustmaster TMX + G27 pedals & shifter

F@H Rig:


FX-8350 // Deepcool Neptwin // MSI 970 Gaming // AData 2x 4GB 1600 DDR3 // 2x Gigabyte RX-570 4G's // Samsung 840 120GB SSD // Cooler Master V650 // Windows 10

 

HTPC:


SNES PC (HTPC): i3-4150 @3.5 // Gigabyte GA-H87N-Wifi // G.Skill 2x 4GB DDR3 1600 // Asus Dual GTX 1050Ti 4GB OC // AData SP600 128GB SSD // Pico 160XT PSU // Custom SNES Enclosure // 55" LG LED 1080p TV  // Logitech wireless touchpad-keyboard // Windows 10 // Build Log

Laptops:


MY DAILY: Lenovo ThinkPad T410 // 14" 1440x900 // i5-540M 2.5GHz Dual-Core HT // Intel HD iGPU + Quadro NVS 3100M 512MB dGPU // 2x4GB DDR3L 1066 // Mushkin Triactor 480GB SSD // Windows 10

 

WIFE'S: Dell Latitude E5450 // 14" 1366x768 // i5-5300U 2.3GHz Dual-Core HT // Intel HD5500 // 2x4GB RAM DDR3L 1600 // 500GB 7200 HDD // Linux Mint 19.3 Cinnamon

 

EXPERIMENTAL: Pinebook // 11.6" 1080p // Manjaro KDE (ARM)

NAS:


Home NAS: Pentium G4400 @3.3 // Gigabyte GA-Z170-HD3 // 2x 4GB DDR4 2400 // Intel HD Graphics // Kingston A400 120GB SSD // 3x Seagate Barracuda 2TB 7200 HDDs in RAID-Z // Cooler Master Silent Pro M 1000w PSU // Antec Performance Plus 1080AMG // FreeNAS OS

 


In short, I don't trust them. Any and all data prior to release is either released directly by Nvidia themselves, in which case very little can be considered undeniable fact (the vast majority is generous rounding and specialized synthetic tests, though rarely outright lies with no truth behind them), or it is released by a "partner", as was the case with the Doom 4 showcase that was played on a 1080. You can make ballpark estimates if you're well versed and up to date on the GPU industry and related industries.

 

Look at the screenshot i took from Nvidia. Source: http://www.geforce.com/hardware/10series/geforce-gtx-1080

[chart: GTX 1080 relative performance vs GTX 980, from Nvidia's product page]

It's hard to make out the exact percentage, but the 1080 compared to the 980 is not magic. Let's assume the chart shows roughly a 60% improvement over the 980, and let's add the 980Ti into the mix; in my opinion, the 980Ti is the much more important card to compare against.

GTX 980 = 1

GTX 1080 = 1.6

980Ti ≈ 1.3 (see the reasoning below)

So where does the 980Ti land? It's hard to say, as Nvidia hasn't released the settings used or the rest of the system specifications. But Guru3D arrived at a 27% improvement over the 980 in their reference 980Ti review, and the MSI 980Ti 6G Gaming landed at a 39.5% improvement over the 980. I don't have hard evidence for the following, but my MSI 980Ti consistently reached +90MHz on the core and +350MHz on the memory. With a bit of estimation, a decently overclockable 980Ti can reach around a 45% improvement over the 980 without winning the silicon lottery jackpot.
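Putting those figures together (this is my reading of the numbers above, with the MSI's 39.5% taken relative to the stock 980):

```python
# Relative performance, GTX 980 = 1.00 (figures as read above).
ti_reference = 1.27      # Guru3D: reference 980Ti ~27% over the 980
ti_msi       = 1.395     # MSI 6G Gaming ~39.5% over the 980
ti_good_oc   = 1.45      # this post's estimate for a decent manual OC
gtx1080      = 1.60      # read off Nvidia's chart

print(f"1080 vs reference 980Ti: +{gtx1080 / ti_reference - 1:.0%}")  # ~+26%
print(f"1080 vs OC'd 980Ti:      +{gtx1080 / ti_good_oc - 1:.0%}")    # ~+10%
```

In other words, against a well-overclocked 980Ti, the chart's 1.6x over the 980 shrinks to roughly a 10% lead.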

 

980Ti reference: http://www.guru3d.com/articles_pages/nvidia_geforce_gtx_980_ti_review,15.html

980Ti MSI : http://www.guru3d.com/articles_pages/msi_geforce_gtx_980_ti_gaming_oc_review,15.html

 

Then comes the elephant in the room: the price. Launch prices for the noteworthy GPUs:

980 = 549 USD

980Ti = 649 USD (+18% vs the 980)

1080 = 700 USD (+27% vs the 980)
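The percentages are easy to check; a quick sketch:

```python
# Launch prices from the list above, relative to the 980.
prices = {"980": 549, "980Ti": 649, "1080 (reference)": 700}
base = prices["980"]
for card, usd in prices.items():
    print(f"{card}: {usd} USD (+{(usd / base - 1) * 100:.1f}% vs the 980)")
```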

 

Yes, I know Nvidia is releasing a "Founders Edition" for 700 USD while setting the MSRP at 600 USD. And that, by the way, is the first and last time I am going to mention that naming scheme in this post: it's the reference card, and I will call it a reference card. The thing about MSRP, however, is that it's nothing but good old bullshit. It's nothing more than a recommendation Nvidia has given its board partners as to how much they SHOULD sell the card for. They are by no means required to match the 600 USD price point, nor do I expect there to be any 600 USD 1080 this year, or even next year, unless AMD forces it by releasing a far better product.

 

Nvidia is going to sell the reference card for 700 USD, and the partners (read: EVGA, Asus, MSI, etc.) are going to release theirs for higher still. Why do I believe the board partners will exceed 700 USD? Because when have they not sold their GPUs above the reference price point? Partner cards with aftermarket coolers and higher guaranteed clock speeds consistently perform better than the reference design, and they run cooler. The 600 USD price point is nothing but marketing.

 

 

So why isn't the 16nm FinFET technology delivering better deals? Because silicon matters; more precisely, the quality of the production and how many usable chips come out of each wafer. Now, I don't know the die size of the 1080, but considering the transistor count, I suspect it's in the 300-350mm^2 area; 16nm is not quite half the size of 28nm, so I'm probably not far off. By comparison, the 980Ti is a massive 602mm^2 chip. The reason it got so big is that the 20/22nm generation was skipped in favor of jumping directly to 14/16nm. That extended 28nm production beyond its natural lifespan, allowing massive chips at affordable 600-700 USD prices.
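A back-of-the-envelope check of that guess, using transistor counts I believe are roughly right (about 8 billion for the 980Ti's chip, about 7.2 billion for the 1080's) and an assumed ~1.9x density gain from 28nm to 16nm; all inputs here are my own rough assumptions, not Nvidia figures:

```python
# Rough die-size estimate from transistor counts and density scaling.
gm200_area        = 602.0    # mm^2, 980Ti die size from the post above
gm200_transistors = 8.0e9    # ~8 billion on 28nm (my assumption)
gp104_transistors = 7.2e9    # ~7.2 billion on 16nm FinFET (my assumption)
density_gain      = 1.9      # assumed 28nm -> 16nm density improvement

est_area = gm200_area * (gp104_transistors / gm200_transistors) / density_gain
print(f"estimated 1080 die size: ~{est_area:.0f} mm^2")   # ~285 mm^2
```

That lands just under the 300-350mm^2 ballpark, so the guess holds up.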

 

It's simply not possible to release a 16nm chip with more transistors than the latest and greatest of the 28nm process and still have better price-to-performance. So how does the 1080 outperform the 980Ti, even with fewer transistors and lower memory bandwidth? The short answer is clock speed. The long answer is that FinFET transistors exercise far greater control over the flow of electricity than the older planar MOSFETs. This allows higher voltages and higher wattage without adverse effects such as current leaking through a "closed" transistor, which ultimately leads to higher achievable clock speeds.

 

The only 16nm GPU so far with a higher transistor count is the Nvidia P100 at 610mm^2. Its price is uncertain, but 10,000 USD is not an outlandish claim.

 

FinFET is NOT a marketing term. It is, in fact, an extremely useful technology.

 

 

Before anyone suggests I'm saying the 1080 is bad: it isn't. I am simply trying to dispel some of the magic Nvidia seems to have cast on consumers. The 1080 will be a good incremental increase in performance, but it's not the miracle you're hoping for or expecting.

But if you already own a 980 or 980Ti, I suggest you look at what AMD offers, or wait for Volta to come around. Unless you can get a good price for your used 980/980Ti, that is.

Motherboard: Asus X570-E
CPU: 3900x 4.3GHZ

Memory: G.skill Trident GTZR 3200mhz cl14

GPU: AMD RX 570

SSD1: Corsair MP510 1TB

SSD2: Samsung MX500 500GB

PSU: Corsair AX860i Platinum


21 minutes ago, MMKing said:

In short, I don't trust them. Any and all data prior to release is either released directly by Nvidia themselves, in which case very little can be considered undeniable fact, or released by a "partner", as was the case with the Doom 4 showcase. It's hard to make out the exact percentage, but the 1080 compared to the 980 is not magic, and in my opinion the 980Ti is the much more important card to compare against.

Isn't that because the 980Ti was such a jump over the previous iteration, the 980, as you alluded to?

 

I just cannot believe that within two iterations, 980 -> 1080, there is a 100% increase.

 

It just doesn't make sense, given that the 970 was released only 18 months ago.

 

I would believe a 50% increase from the 980 to the 1080.

 

That roughly agrees with what you said.

 

The message from Nvidia is deliberate obfuscation, meant to blind consumers.

Until independent reviewers get hold of the cards, some of these performance claims are made of straw.

 

 

My Rig "Valiant"  Intel® Core™ i7-5930 @3.5GHz ; Asus X99 DELUXE 3.1 ; Corsair H110i ; Corsair Dominator Platinium 64GB 3200MHz CL16 DDR4 ; 2 x 6GB ASUS NVIDIA GEFORCE GTX 980 Ti Strix ; Corsair Obsidian Series 900D ; Samsung 950 Pro NVME + Samsung 850 Pro SATA + HDD Western Digital Black - 2TB ; Corsair AX1500i Professional 80 PLUS Titanium ; x3 Samsung S27D850T 27-Inch WQHD Monitor
 

2 hours ago, Prysin said:

If you compare car engines, you can use hardware statistics to easily decide which engine (GPU) is faster. But when coupled with a vehicle (driver), you need to add in the weight, road grip and aerodynamics of said vehicle, and suddenly you have a whole new ballgame.

 

Why is it so hard for people to realize that it isn't just core count and MHz I am talking about, but ACTUAL throughput, meaning the ACTUAL theoretical limits of a product?

That's such an awful analogy. I've driven a Cayman S and a Mustang GT recently; both have virtually the same rated 0-60 time, yet the difference is literally night and day. Hardware can give you an idea of relative performance when compared to a similar design with a similar implementation, but even then it's still not all that accurate.

PSU Tier List | CoC

Gaming Build | FreeNAS Server


i5-4690k || Seidon 240m || GTX780 ACX || MSI Z97s SLI Plus || 8GB 2400mhz || 250GB 840 Evo || 1TB WD Blue || H440 (Black/Blue) || Windows 10 Pro || Dell P2414H & BenQ XL2411Z || Ducky Shine Mini || Logitech G502 Proteus Core


FreeNAS 9.3 - Stable || Xeon E3 1230v2 || Supermicro X9SCM-F || 32GB Crucial ECC DDR3 || 3x4TB WD Red (JBOD) || SYBA SI-PEX40064 sata controller || Corsair CX500m || NZXT Source 210.


1 hour ago, djdwosk97 said:

That's such an awful analogy. I've driven a Cayman S and a Mustang GT recently; both have virtually the same rated 0-60 time, yet the difference is literally night and day. Hardware can give you an idea of relative performance when compared to a similar design with a similar implementation, but even then it's still not all that accurate.

You compared ONE engine metric, while you have yet to compare the 30 other metrics.


2 minutes ago, Prysin said:

You compared ONE engine metric, while you have yet to compare the 30 other metrics.

0-60 is a performance metric. It should take everything else into account.

PSU Tier List | CoC

Gaming Build | FreeNAS Server


i5-4690k || Seidon 240m || GTX780 ACX || MSI Z97s SLI Plus || 8GB 2400mhz || 250GB 840 Evo || 1TB WD Blue || H440 (Black/Blue) || Windows 10 Pro || Dell P2414H & BenQ XL2411Z || Ducky Shine Mini || Logitech G502 Proteus Core


FreeNAS 9.3 - Stable || Xeon E3 1230v2 || Supermicro X9SCM-F || 32GB Crucial ECC DDR3 || 3x4TB WD Red (JBOD) || SYBA SI-PEX40064 sata controller || Corsair CX500m || NZXT Source 210.


Just now, djdwosk97 said:

0-60 is a performance metric. It should take everything else into account.

0-60 is the "AVG FPS" of car performance metrics.

 

It is based upon hundreds of other parameters, and yet it cannot properly convey how a vehicle feels.


The slides are not showing anything unusual. There is only one graph that refers to gaming performance, and on it the Titan X sits at 3.6 while the 1080 sits at 4.4. That is a 22% difference: if you get 40 FPS on a Titan X, you get about 48 on a 1080. All the other claims, like "the 1080 is 2 times faster than the Titan X", are for VR and only VR.
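The arithmetic behind that, with the bar heights as read off the slide:

```python
# Values read off Nvidia's gaming-performance graph (see above).
titan_x, gtx1080 = 3.6, 4.4
gain = gtx1080 / titan_x - 1
print(f"gain over Titan X: {gain:.1%}")                                # ~22.2%
print(f"40 FPS on a Titan X -> {40 * (1 + gain):.1f} FPS on a 1080")   # ~48.9
```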

 

There are also performance results for Tomb Raider and The Witcher 3 suggesting a 70% increase over the 980, but considering those are just marketing graphs and are certainly cranked up a bit too much, I would say the 1080 gives a 50% increase over the 980, which is a 20-25% increase over the 980 Ti.

 

The 1070 will perform around 980 Ti level, maybe a bit below.

 

This is perfectly normal for a jump from 28nm to 16nm. A standard upgrade of only one segment (1070=980 and 1080=980Ti) wouldn't convince anyone to upgrade; it would be stepping in place. Something more convincing was needed, especially with the PC market declining. And now you get a GPU that is a thousand times more powerful than a console for below $400.

 

If the Polaris 10 demo really was running Hitman maxed out at 1440p at a steady 60 FPS (which may not be exactly true; let's say 50 FPS), then it should provide around Fury X performance for around $350, and with some overclocking potential that the Fury X doesn't have.

 

If this is all true, then there are great times coming, and an amazing gaming experience will be more affordable than ever.

 

 


CPU: i7-6900K 4.5 GHz | Motherboard: X99 Sabertooth | GPU: RTX 2080 Ti SLi | RAM: 32GB DDR4-3400 CL13 | SSD: SX8200 PRO 512GB | PSU: Corsair AX1600i | Case: InWin 805C | Monitor: LG 38UC99-W 85Hz | Keyboard: Wooting One Analog | Keypad: Azeron Compact Analog | Mouse: Swiftpoint Z | Audio: Klipsch Reference 5.1

 


12 hours ago, Krzych said:

The slides are not showing anything unusual. The only graph that refers to gaming performance puts the 1080 about 22% above the Titan X; the "2 times faster" claims are for VR only. If this is all true, then there are great times coming, and an amazing gaming experience will be more affordable than ever.

 

 

And by next year, you'll need an SLI or CF setup of the best cards to run games maxed out at 3440x1440...


2x the performance of a Titan X... maybe in VR or DX12 games.

But for games in general? Maybe 20% better when OCed.

Intel i7 12700K | Gigabyte Z690 Gaming X DDR4 | Pure Loop 240mm | G.Skill 3200MHz 32GB CL14 | CM V850 G2 | RTX 3070 Phoenix | Lian Li O11 Air mini

Samsung EVO 960 M.2 250GB | Samsung EVO 860 PRO 512GB | 4x Be Quiet! Silent Wings 140mm fans

WD My Cloud 4TB


On 9/5/2016 at 10:49 AM, Prysin said:

I love that you're trying to bash my logic. This logic doesn't only apply to Nvidia hardware; it applies to AMD hardware too. If you compare the PS4 to the XBONE, you realize the PS4 has around 40% more GPU horsepower, while the Xbox has 7% more CPU horsepower. Despite both being based on the same architecture, the PS4 delivers about 30% more performance.

 

Even in compute-heavy scenes, such as VR, the GTX 1080 has NO SINGLE HARDWARE-BASED PERFORMANCE EDGE THAT WOULD LET IT PUSH 2X ANYTHING.

I think it's the Simultaneous Multi-Projection thing they talked about. It's a technique that lets them process the scene geometry once and project it to both eyes in VR, instead of rendering the whole scene once per eye. So if the 1080 achieves "2x performance" by doing half the rendering work in software, that doesn't tell us anything at all about the hardware.
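A toy cost model of that idea; the unit costs below are invented purely for illustration, the point being that a shared geometry pass can produce a big "VR speedup" without any extra hardware grunt:

```python
# Hypothetical per-frame costs in arbitrary units -- illustration only.
GEOMETRY = 1.0    # cost of processing the scene geometry once
PER_EYE  = 1.0    # cost of rasterizing/shading one eye's view

traditional = 2 * (GEOMETRY + PER_EYE)   # full render pass per eye
smp         = GEOMETRY + 2 * PER_EYE     # geometry once, two projections

print(f"VR speedup from sharing geometry: {traditional / smp:.2f}x")  # 1.33x
```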

We have a NEW and GLORIOUSER-ER-ER PSU Tier List Now. (dammit @LukeSavenije stop coming up with new ones)

You can check out the old one that gave joy to so many across the land here

 

Computer having a hard time powering on? Troubleshoot it with this guide. (Currently looking for suggestions to update it into the context of <current year> and make it its own thread)

Computer Specs:


Mathresolvermajig: Intel Xeon E3 1240 (Sandy Bridge i7 equivalent)

Chillinmachine: Noctua NH-C14S
Framepainting-inator: EVGA GTX 1080 Ti SC2 Hybrid

Attachcorethingy: Gigabyte H61M-S2V-B3

Infoholdstick: Corsair 2x4GB DDR3 1333

Computerarmor: Silverstone RL06 "Lookalike"

Rememberdoogle: 1TB HDD + 120GB TR150 + 240 SSD Plus + 1TB MX500

AdditionalPylons: Phanteks AMP! 550W (based on Seasonic GX-550)

Letterpad: Rosewill Apollo 9100 (Cherry MX Red)

Buttonrodent: Razer Viper Mini + Huion H430P drawing Tablet

Auralnterface: Sennheiser HD 6xx

Liquidrectangles: LG 27UK850-W 4K HDR

 


A 256-bit bus and 2560 shaders can't be 2x faster than a 980 Ti.

Linus is my fetish.


Not to derail the conversation, but even if I believed at face value everything NVIDIA claimed, I still wouldn't be replacing my 980 Ti anytime soon.

 

I'm not even waiting for benchmarks. The 980 Ti is still an overkill card for me and will continue to be so until I replace my monitor with something of higher resolution and refresh rate.

 

And based on the slides, as much as a 1080 could potentially outperform a 980 Ti, the difference won't be enough to justify the cost when I have a perfectly serviceable card that's barely 6 months old and still going strong with a 1500MHz overclock.

 

So I'll be waiting until the next batch of 1000-series cards comes out before I even think about a replacement. And even then, I'll be looking to upgrade my monitor long before I upgrade my GPU.

 

---


Oh, I forgot the new cards have FinFET transistors; not sure if that has any impact on performance yet, though.

Linus is my fetish.


1 hour ago, PrimeSonic said:

Not to derail the conversation, but even if I believed at face value everything NVIDIA claimed, I still wouldn't be replacing my 980 Ti anytime soon. It's still an overkill card for me and will continue to be so until I replace my monitor with something of higher resolution and refresh rate.

 

Well, if you are still gaming at 1080p, a 980Ti will be good for the max/very-high presets for another year or two.

 

If you play games at 3440x1440 ultrawide, like I do, you NEED the bleeding-edge products unless you want to reduce the resolution. And we all know how shit games look when you run below the monitor's native resolution.

