Rumor: Nvidia’s Pascal Architecture Is In Trouble With Asynchronous Compute

Mr_Troll
5 minutes ago, Zodiark1593 said:

I can't nod in agreement with anyone wishing for the failure of either AMD or Nvidia. I want both companies to bring forth excellent products. Competition drives down prices and brings superior products to market more quickly, and high profits on both sides will aid in the above goals.

That said, I don't think the lack of async will be an issue for Nvidia, as Maxwell seems to be pretty fully utilized to begin with. Unless there's a fair amount of headroom, async won't be able to add much even if the hardware were there.

Nvidia has been acting like a grade-A butthole, and they deserve to get fucked if async compute ain't working for them.

And their GameWorks bullshit is a cancer for all of us.

And there won't be any excellent hardware if Intel and nGreedia act like jealous children who want to annihilate every single form of competition.

Excellent hardware has been gone for a long time now.
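Since the whole thread turns on whether "AC" works on Nvidia hardware, it helps to pin down what async compute actually is at the API level. Below is a minimal, illustrative D3D12 sketch, assuming a valid ID3D12Device* already exists and with error handling omitted; the helper name CreateQueues is just for illustration, not part of the API. DX12 merely lets an engine submit compute work on its own queue; whether the GPU truly overlaps that work with graphics, rather than serializing it, is exactly the hardware question being argued about here.

// Minimal D3D12 sketch of what "async compute" means at the API level.
// Assumes a valid ID3D12Device* exists; error handling omitted.
// CreateQueues is an illustrative helper name, not a D3D12 API.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // The DIRECT queue accepts graphics, compute, and copy commands.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // A separate COMPUTE queue. The API allows work submitted here to run
    // concurrently with the graphics queue; whether the GPU actually
    // overlaps the two (true async compute) or serializes them is up to
    // the hardware and driver, which is the entire Maxwell-vs-GCN debate.
    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));
}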

DAC/AMPs:

Klipsch Heritage Headphone Amplifier

Headphones: Klipsch Heritage HP-3 Walnut, Meze 109 Pro, Beyerdynamic Amiron Home, Amiron Wireless Copper, Tygr 300R, DT880 600ohm Manufaktur, T90, Fidelio X2HR

CPU: Intel 4770, GPU: Asus RTX3080 TUF Gaming OC, Mobo: MSI Z87-G45, RAM: DDR3 16GB G.Skill, PC Case: Fractal Design R4 Black non-iglass, Monitor: BenQ GW2280


30 minutes ago, i_build_nanosuits said:

-snip-

How many games in 2016 are partnered with AMD? Ashes, Hitman, Deus Ex, and Star Citizen. There are probably more I'm missing, but the point is that AMD has been picking up a lot of devs recently because they offer an easier path to DX12. Also, what do you mean by old technology?

Pixelbook Go i5 Pixel 4 XL


1 minute ago, Citadelen said:

what do you mean by old technology?

https://en.wikipedia.org/wiki/Graphics_Core_Next

The first product featuring GCN was launched in 2011

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


1 minute ago, i_build_nanosuits said:

https://en.wikipedia.org/wiki/Graphics_Core_Next

The first product featuring GCN was launched in 2011

So...

Pixelbook Go i5 Pixel 4 XL


11 minutes ago, Citadelen said:

So...

2011 is 5 YEARS AGO

Edited by Glenwing
Unnecessary giant font removed

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


3 minutes ago, i_build_nanosuits said:

-snip-

Aren't you a clever little bunny, doing maths like that.

Seriously, why does that matter?

Pixelbook Go i5 Pixel 4 XL


I highly doubt this is true. While I understand excluding it from Maxwell to sell fairy tales about power consumption, excluding it from Pascal would be suicide. DX12 might have come out last year, but it wasn't made overnight, and both Nvidia and AMD have known for years how they needed to build their hardware to support it, so the assertion that asynchronous compute can't be added to Pascal at the last minute doesn't make sense. I'd ignore this rumor and wait until the architecture is actually revealed.

CPU i7 6700 Cooling Cryorig H7 Motherboard MSI H110i Pro AC RAM Kingston HyperX Fury 16GB DDR4 2133 GPU Pulse RX 5700 XT Case Fractal Design Define Mini C Storage Transcend SSD370S 256GB + WD Black 320GB + Sandisk Ultra II 480GB + WD Blue 1TB PSU EVGA GS 550 Display Nixeus Vue24B FreeSync 144 Hz Monitor (VESA mounted) Keyboard Aorus K3 Mechanical Keyboard Mouse Logitech G402 OS Windows 10 Home 64 bit


45 minutes ago, i_build_nanosuits said:

 

lol ok...

By that I meant that AMD is broke as fuck, and their hot-running, power-hungry, outdated Radeon cards with GCN do NOT sell...and they can't afford to sponsor many game developers anymore...therefore not many games can take advantage of all those raw, useless teraflops of compute horsepower, and it will very likely still be the same with async compute and all that fuzzy shit you youngsters love to beat the drum about all over the place. It's irrelevant. If it doesn't work on Nvidia GPUs, it doesn't work at all...end of story.

So you'd better pray that Nvidia gets their shit together and offers full support for async compute; otherwise your Radeon cards are screwed in 9 games out of 10.

Nvidia still holds over 80% of the GPU market share last time I checked, and what, 80-90% of the AAA titles that have come out this year and in 2014-2015 have been supported by Nvidia in some way or another, and most of them now feature GameWorks stuff. Whether you like it or not, that's A FACT...and this is all a result of AMD being broke as fuck and offering re-brews of their old technologies, hoping it will someday pay off...saving money and hoping it turns out for the best...that's the AMD philosophy...and it doesn't work in tech, never has, never will...you have to be a front runner who invests in and develops new technologies.

EDIT: ^^ I'm quite proud of that one, actually; I think it should get a few likes and a few thumbs up and maybe...well, you know :P That's how it is, that's how it goes...that's life...it sucks, and it might not be the best for consumers, but that's how it is...you can rally with the weak all you want, but I won't.

In the last 4 years, here's the number of generations in which Nvidia has actually been significantly more power efficient than similarly priced AMD offerings:

 

Spoiler

1

 

-------

Current Rig

-------


12 minutes ago, i_build_nanosuits said:

2011 is 5 YEARS AGO

You think it's the same architecture because they adopted a different naming scheme than Nvidia? Kepler vs GCN 1.0, high-end Kepler vs GCN 1.1, and Maxwell vs GCN 1.2 have all had the exact same improvements to their architectures. I'm not sure why you think Nvidia has been ahead.

CPU i7 6700 Cooling Cryorig H7 Motherboard MSI H110i Pro AC RAM Kingston HyperX Fury 16GB DDR4 2133 GPU Pulse RX 5700 XT Case Fractal Design Define Mini C Storage Transcend SSD370S 256GB + WD Black 320GB + Sandisk Ultra II 480GB + WD Blue 1TB PSU EVGA GS 550 Display Nixeus Vue24B FreeSync 144 Hz Monitor (VESA mounted) Keyboard Aorus K3 Mechanical Keyboard Mouse Logitech G402 OS Windows 10 Home 64 bit


23 minutes ago, i_build_nanosuits said:

How relevant is a GPU when it's NOT directly supported by 90% of the game developers out there? That's the question YOU should ask yourself...and provide me with an answer.

...and don't come at me with the 970; it's a weak-ass card without enough compute cores, which is why it's not as fast as a 300W Radeon beast.

All games on the market directly support AMD hardware. Claiming otherwise is just ridiculous.


15 minutes ago, ivan134 said:

You think it's the same architecture because they adopted a different naming scheme than Nvidia? Kepler vs GCN 1.0, high-end Kepler vs GCN 1.1, and Maxwell vs GCN 1.2 have all had the exact same improvements to their architectures. I'm not sure why you think Nvidia has been ahead.

No, you're wrong...AMD has only increased the number of stream processors from generation to generation, and when they bump the number after the GCN nomenclature, it's because they added some features...support for HDMI 2.0 (yay!), etc. Meanwhile, Nvidia IMPROVED the throughput of their CUDA cores noticeably, to the point where a 1664-CUDA-core Maxwell GPU (GTX 970) processes faster than a 2880-CUDA-core Kepler card (GTX 780 Ti)...that's an IMPROVEMENT...they outperform while consuming less energy...kind of like what Intel is doing with CPUs by INCREASING IPC...AMD has done squadoosh except make bigger, beefier, more power-hungry GPUs generation after generation...that's all.

It saves them the cost of having to invest in research and development to IMPROVE on what they already have on the market...and there are still people buying them regardless...not many, but still.

A stream processor from 2011 has the same compute performance as a 2016 Fury stream processor...meanwhile, Nvidia improved performance noticeably.

Why do you think my 2816-CUDA-core Maxwell GTX 980 Ti is like 40% faster than last gen's 2880-CUDA-core Kepler GTX 780 Ti? Gimping?! Yeah, sure... o.O
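As a sanity check on the "CUDA core for CUDA core" claim, here's a quick back-of-envelope sketch using public reference boost clocks; theoretical FP32 throughput is 2 FMA ops per core per cycle times cores times clock. The numbers are spec-sheet values, not measurements:

// Back-of-envelope check of the per-core throughput claim above.
// Clocks are reference boost values from public spec sheets.
#include <cstdio>

int main() {
    struct Gpu { const char* name; int cudaCores; double boostGHz; };
    const Gpu gpus[] = {
        { "GTX 780 Ti (Kepler)", 2880, 0.928 },
        { "GTX 970 (Maxwell)",   1664, 1.178 },
    };
    for (const Gpu& g : gpus) {
        // FP32 FLOPS = 2 (FMA) * cores * clock; cores * GHz yields GFLOPS
        double tflops = 2.0 * g.cudaCores * g.boostGHz / 1000.0;
        std::printf("%-22s %4d cores @ %.3f GHz -> %.2f TFLOPS FP32\n",
                    g.name, g.cudaCores, g.boostGHz, tflops);
    }
    return 0;
}

That works out to roughly 5.3 TFLOPS for the 780 Ti against roughly 3.9 for the 970, so Kepler actually has more raw throughput on paper; the 970 trading blows with it in games says more about clocks, scheduling, and achievable utilization than about any fixed per-core speedup.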

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


4 minutes ago, i_build_nanosuits said:

-snip-

Could you provide something backing up Maxwell performing better in compute scenarios while having less CUDA cores than Kepler.

Pixelbook Go i5 Pixel 4 XL


34 minutes ago, Zodiark1593 said:

I can't nod in agreement with anyone wishing for the failure of either AMD or Nvidia. I want both companies to bring forth excellent products. Competition drives down prices and brings superior products to market more quickly, and high profits on both sides will aid in the above goals.

That said, I don't think the lack of async will be an issue for Nvidia, as Maxwell seems to be pretty fully utilized to begin with. Unless there's a fair amount of headroom, async won't be able to add much even if the hardware were there.

Nope, AMD needs to fall by the wayside entirely and give RTG to Intel. There will be exactly one good year for AMD, and then it will all go up in smoke, exactly as it did with Kepler vs. Hawaii, and with Nehalem/Sandy Bridge vs. K10/Turion/Fusion and then Bulldozer.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


1 minute ago, i_build_nanosuits said:

No, you're wrong...AMD has only increased the number of stream processors from generation to generation, and when they bump the number after the GCN nomenclature, it's because they added some features...support for HDMI 2.0 (yay!), etc. Meanwhile, Nvidia IMPROVED the throughput of their CUDA cores noticeably, to the point where a 1664-CUDA-core Maxwell GPU (GTX 970) processes faster than a 2880-CUDA-core Kepler card (GTX 780 Ti)...that's an IMPROVEMENT...they outperform while consuming less energy...kind of like what Intel is doing with CPUs by INCREASING IPC...AMD has done squadoosh except make bigger, beefier, more power-hungry GPUs generation after generation...that's all.

A stream processor from 2011 has the same compute performance as a 2016 Fury stream processor...meanwhile, Nvidia improved performance noticeably.

The only place Maxwell is faster than Kepler is tessellation, so no, the 970 is not faster than a 780 Ti. This is exactly what AMD has done with GCN 1.2, where a 380 with 1792 SPs is faster than a 280X with 2048 SPs in games with heavy tessellation. Let's look at games where Nvidia didn't get their dirty GameWorks hands on them and see how a 970 performs vs a 780 Ti when obnoxious levels of tessellation aren't used:

Spoiler


[Benchmark charts: Battlefield 3, Battlefield 4, Mad Max, and Shadow of Mordor at 2560×1440]

 

So again, what improvements are you talking about that AMD hasn't made? The only place Nvidia leads AMD is tessellation performance, even with the improvements in GCN 1.2.

CPU i7 6700 Cooling Cryorig H7 Motherboard MSI H110i Pro AC RAM Kingston HyperX Fury 16GB DDR4 2133 GPU Pulse RX 5700 XT Case Fractal Design Define Mini C Storage Transcend SSD370S 256GB + WD Black 320GB + Sandisk Ultra II 480GB + WD Blue 1TB PSU EVGA GS 550 Display Nixeus Vue24B FreeSync 144 Hz Monitor (VESA mounted) Keyboard Aorus K3 Mechanical Keyboard Mouse Logitech G402 OS Windows 10 Home 64 bit


10 minutes ago, Citadelen said:

Could you provide something backing up Maxwell performing better in compute scenarios while having less CUDA cores than Kepler.

Did I say COMPUTE? But even then, YES: in single-precision compute (which is what 99% of the people out there need and care about), Maxwell is roughly 40% faster, CUDA core for CUDA core, than Kepler, and even more so versus older-gen cards. Are you doubting that my 980 Ti is worlds faster than a GTX 780 Ti or a Titan, which feature more CUDA cores?! Because if that's the problem, just ask Google and you'll have your answer real darn quick.

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


7 minutes ago, ivan134 said:

The only place Maxwell is faster than Kepler is tessellation, so no, the 970 is not faster than a 780 Ti. This is exactly what AMD has done with GCN 1.2, where a 380 with 1792 SPs is faster than a 280X with 2048 SPs in games with heavy tessellation. Let's look at games where Nvidia didn't get their dirty GameWorks hands on them and see how a 970 performs vs a 780 Ti when obnoxious levels of tessellation aren't used:

Spoiler

-snip-

So again, what improvements are you talking about that AMD hasn't made? The only place Nvidia leads AMD is tessellation performance, even with the improvements in GCN 1.2.

2 FPS? 1216 FEWER CUDA cores on the 970 (-42%)...are you KIDDING ME, son?! o.O

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


7 minutes ago, i_build_nanosuits said:

No, you're wrong...AMD has only increased the number of stream processors from generation to generation, and when they bump the number after the GCN nomenclature, it's because they added some features...support for HDMI 2.0 (yay!), etc. Meanwhile, Nvidia IMPROVED the throughput of their CUDA cores noticeably, to the point where a 1664-CUDA-core Maxwell GPU (GTX 970) processes faster than a 2880-CUDA-core Kepler card (GTX 780 Ti)...that's an IMPROVEMENT...they outperform while consuming less energy...kind of like what Intel is doing with CPUs by INCREASING IPC...AMD has done squadoosh except make bigger, beefier, more power-hungry GPUs generation after generation...that's all.

A stream processor from 2011 has the same compute performance as a 2016 Fury stream processor...meanwhile, Nvidia improved performance noticeably.

And Nvidia did the same.

Big Kepler has fewer CUDA cores than big Maxwell.

Nvidia improved their CUDA core design to V8 in Kepler, and since then nothing has changed in terms of core design. The overall architecture did change, but not the core layout.

AMD has gone up and down in terms of SP count, and they only increased the ACE count with Hawaii.

Fury's SP count is mostly because it was designed for TSMC 20nm, which flopped, which meant AMD had to find a way to improve raw performance. Since AMD's shaders are faster than Nvidia's, they opted for a brute-force shader array rather than more geometry engines and rasterizers, where Nvidia is more power efficient and effective; AMD wouldn't beat Nvidia at that game at 28nm.

 


13 minutes ago, i_build_nanosuits said:

No, you're wrong...AMD has only increased the number of stream processors from generation to generation, and when they bump the number after the GCN nomenclature, it's because they added some features...support for HDMI 2.0 (yay!), etc. Meanwhile, Nvidia IMPROVED the throughput of their CUDA cores noticeably, to the point where a 1664-CUDA-core Maxwell GPU (GTX 970) processes faster than a 2880-CUDA-core Kepler card (GTX 780 Ti)...that's an IMPROVEMENT...they outperform while consuming less energy...kind of like what Intel is doing with CPUs by INCREASING IPC...AMD has done squadoosh except make bigger, beefier, more power-hungry GPUs generation after generation...that's all.

It saves them the cost of having to invest in research and development to IMPROVE on what they already have on the market...and there are still people buying them regardless...not many, but still.

A stream processor from 2011 has the same compute performance as a 2016 Fury stream processor...meanwhile, Nvidia improved performance noticeably.

Why do you think my 2816-CUDA-core Maxwell GTX 980 Ti is like 40% faster than last gen's 2880-CUDA-core Kepler GTX 780 Ti? Gimping?! Yeah, sure... o.O


Could you please spend just five minutes googling Nvidia and AMD architectures? You're spamming this thread with factually incorrect nonsense, and you obviously have no clue about the differences in architecture, whether it's Kepler/Maxwell or GCN, of which there are four official generations (including Polaris) with vast differences.

Maxwell is effective because it can do tessellation very effectively, and not much more. Even Kepler is better at compute than Maxwell, meaning DX12 games using async compute might perform better on Kepler than on Maxwell, comparatively.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


1 hour ago, i_build_nanosuits said:

 

lol ok...

By that I meant that AMD is broke as fuck, and their hot-running, power-hungry, outdated Radeon cards with GCN do NOT sell...and they can't afford to sponsor many game developers anymore...therefore not many games can take advantage of all those raw, useless teraflops of compute horsepower, and it will very likely still be the same with async compute and all that fuzzy shit you youngsters love to beat the drum about all over the place. It's irrelevant. If it doesn't work on Nvidia GPUs, it doesn't work at all...end of story.

So you'd better pray that Nvidia gets their shit together and offers full support for async compute; otherwise your Radeon cards are screwed in 9 games out of 10.

Nvidia still holds over 80% of the GPU market share last time I checked, and what, 80-90% of the AAA titles that have come out this year and in 2014-2015 have been supported by Nvidia in some way or another, and most of them now feature GameWorks stuff. Whether you like it or not, that's A FACT...and this is all a result of AMD being broke as fuck and offering re-brews of their old technologies, hoping it will someday pay off...saving money and hoping it turns out for the best...that's the AMD philosophy...and it doesn't work in tech, never has, never will...you have to be a front runner who invests in and develops new technologies.

Nice to see that you're fucking blind and didn't learn anything from my GTX 970 vs. R9 390 thread.

You know that GCN's aim is not gaming but computation-heavy scenarios? That's fucking why a 390 uses more power than a 980 Ti (albeit not by much).

I could go on and on about it, but I'll just leave you with this: Maxwell's aim was to be good at gaming and only gaming. GCN's aim was a whole architecture that could be utilized in multiple ways. End of conversation.

Check out my guide on how to scan cover art here!

Local asshole and 6th generation console enthusiast.


29 minutes ago, i_build_nanosuits said:

No, you're wrong...AMD has only increased the number of stream processors from generation to generation, and when they bump the number after the GCN nomenclature, it's because they added some features...support for HDMI 2.0 (yay!), etc. Meanwhile, Nvidia IMPROVED the throughput of their CUDA cores noticeably, to the point where a 1664-CUDA-core Maxwell GPU (GTX 970) processes faster than a 2880-CUDA-core Kepler card (GTX 780 Ti)...that's an IMPROVEMENT...they outperform while consuming less energy...kind of like what Intel is doing with CPUs by INCREASING IPC...AMD has done squadoosh except make bigger, beefier, more power-hungry GPUs generation after generation...that's all.

A stream processor from 2011 has the same compute performance as a 2016 Fury stream processor...meanwhile, Nvidia improved performance noticeably.

So what you're saying is that Fermi must be a better architecture than Kepler, right? A GTX 580 (512 cores) is about equivalent to a GTX 660 Ti (1344 cores), and it's at a lower frequency too. Kepler needs more than twice as many cores, and needs to run them at a higher frequency, just to get the same performance. It's much worse, right? If you're saying Kepler to Maxwell was an improvement because a Maxwell core has higher performance than a Kepler core, then you must also say that Kepler is a huge step backwards from Fermi, because a Kepler core gets much worse performance than a Fermi core. Nonsense. Maxwell is better than Kepler, but not because it needs fewer cores for the same performance. It's better because it has better performance for the same amount of transistors/silicon used and the same amount of power consumed.

 

The only things that matter are what the chip produces (the performance) and what the chip needs to produce that (silicon and power consumption). The internal workings of the chip, how a chip is doing whatever it's doing, is completely irrelevant. If a GPU is producing more performance with less power and less silicon (read: cheaper price) than another, it's a better chip. If it turns out it has more cores or fewer cores or whatever, that doesn't matter; it's just a difference in design. If that design yields better results in terms of performance and power, then it's better.

 

If you want to argue NVIDIA architectures are better than AMD then go ahead, but find a different argument than performance-per-core, it's a meaningless nonsense argument.
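To make that concrete, here's a toy calculation in the spirit of the GTX 580 vs GTX 660 Ti comparison above. The FPS figure is a deliberately equal placeholder, and the core counts, TDPs, and die sizes are approximate spec-sheet values; the point is only that which chip looks "better" flips depending on the denominator:

// Toy illustration of the point above: identical performance divided
// by different denominators. FPS is a placeholder; TDP and die sizes
// are approximate spec-sheet values, not measurements.
#include <cstdio>

int main() {
    struct Card { const char* name; double fps; int cores; double watts; double mm2; };
    const Card cards[] = {
        { "GTX 580 (Fermi)",     60.0,  512, 244.0, 520.0 },  // few fat cores
        { "GTX 660 Ti (Kepler)", 60.0, 1344, 150.0, 294.0 },  // many small cores
    };
    for (const Card& c : cards) {
        std::printf("%-20s %.3f fps/core  %.2f fps/W  %.2f fps/mm^2\n",
                    c.name, c.fps / c.cores, c.fps / c.watts, c.fps / c.mm2);
    }
    // Per-core strongly favors Fermi; per-watt and per-mm^2 favor Kepler.
    // Only the latter two correspond to anything a buyer actually pays for.
    return 0;
}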


8 minutes ago, Prysin said:

 

Big Kepler has fewer CUDA cores than big Maxwell.

 

Yes...2880 vs 3072...192 fewer CUDA cores...a mere 6%...I wonder, then, what makes the Titan X/980 Ti the fastest graphics processor on the planet by a long shot.

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


2 minutes ago, Glenwing said:

So what you're saying is that Fermi must be a better architecture than Kepler, right? A GTX 580 (512 cores) is about equivalent to a GTX 660 Ti (1344 cores), and it's at a lower frequency too. Kepler needs more than twice as many cores, and needs to run them at a higher frequency, just to get the same performance. It's much worse, right? If you're saying Kepler to Maxwell was an improvement because a Maxwell core has higher performance than a Kepler core, then you must also say that Kepler is a huge step backwards from Fermi, because a Kepler core gets much worse performance than a Fermi core. Nonsense. Maxwell is better than Kepler, but not because it needs fewer cores for the same performance. It's better because it has better performance for the same amount of transistors (and silicon) used and the same amount of power consumed.

 

The only things that matter are what the chip produces (the performance) and what the chip needs to produce that (silicon and power consumption). The internal workings of the chip, how a chip is doing whatever it's doing, is completely irrelevant. If a GPU is producing more performance with less power and less silicon (read: cheaper price) than another, it's a better chip. If it turns out it has more cores or fewer cores or whatever, that doesn't matter; it's just a difference in design. If that design yields better results in terms of performance and power, then it's better.

 

If you want to argue NVIDIA architectures are better than AMD then go ahead, but find a different argument than performance-per-core, it's a meaningless nonsense argument.

Agreed.

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


52 minutes ago, i_build_nanosuits said:

How relevant is a GPU when it's NOT directly supported by 90% of the game developers out there? That's the question YOU should ask yourself...and provide me with an answer.

...and don't come at me with the 970; it's a weak-ass card without enough compute cores, which is why it's not as fast as a 300W Radeon beast.

Sorry, I can't take you seriously anymore. 

My Systems:

Main - Work + Gaming:

Spoiler

Woodland Raven: Ryzen 2700X // AMD Wraith RGB // Asus Prime X570-P // G.Skill 2x 8GB 3600MHz DDR4 // Radeon RX Vega 56 // Crucial P1 NVMe 1TB M.2 SSD // Deepcool DQ650-M // chassis build in progress // Windows 10 // Thrustmaster TMX + G27 pedals & shifter

F@H Rig:

Spoiler

FX-8350 // Deepcool Neptwin // MSI 970 Gaming // AData 2x 4GB 1600 DDR3 // 2x Gigabyte RX-570 4G's // Samsung 840 120GB SSD // Cooler Master V650 // Windows 10

 

HTPC:

Spoiler

SNES PC (HTPC): i3-4150 @3.5 // Gigabyte GA-H87N-Wifi // G.Skill 2x 4GB DDR3 1600 // Asus Dual GTX 1050Ti 4GB OC // AData SP600 128GB SSD // Pico 160XT PSU // Custom SNES Enclosure // 55" LG LED 1080p TV  // Logitech wireless touchpad-keyboard // Windows 10 // Build Log

Laptops:

Spoiler

MY DAILY: Lenovo ThinkPad T410 // 14" 1440x900 // i5-540M 2.5GHz Dual-Core HT // Intel HD iGPU + Quadro NVS 3100M 512MB dGPU // 2x4GB DDR3L 1066 // Mushkin Triactor 480GB SSD // Windows 10

 

WIFE'S: Dell Latitude E5450 // 14" 1366x768 // i5-5300U 2.3GHz Dual-Core HT // Intel HD5500 // 2x4GB RAM DDR3L 1600 // 500GB 7200 HDD // Linux Mint 19.3 Cinnamon

 

EXPERIMENTAL: Pinebook // 11.6" 1080p // Manjaro KDE (ARM)

NAS:

Spoiler

Home NAS: Pentium G4400 @3.3 // Gigabyte GA-Z170-HD3 // 2x 4GB DDR4 2400 // Intel HD Graphics // Kingston A400 120GB SSD // 3x Seagate Barracuda 2TB 7200 HDDs in RAID-Z // Cooler Master Silent Pro M 1000w PSU // Antec Performance Plus 1080AMG // FreeNAS OS

 


7 minutes ago, Dan Castellaneta said:

I could go on and on about it, but I'll just leave you with this: Maxwell's aim was to be good at gaming and only gaming.

Which is perfectly fine by me, since most of these cards are bought with the simple aim of rendering modern AAA games at high resolutions and epic framerates.

 

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR

