Rumor: Nvidia’s Pascal Architecture Is In Trouble With Asynchronous Compute

Mr_Troll
Just now, i_build_nanosuits said:

which is perfectly fine by me since most of these cards are bought with the simple idea of rendering modern AAA games at high resolution.

Yes, but that isn't exactly what everyone wants. On the flip side, Maxwell sucks fucking ass for compute work, which is pretty much the biggest reason why I'd go Kepler/GCN over Maxwell. Sure, it's a bit dusty at gaming, but it will smash a comparable Maxwell card in anything that can take advantage of it.


Just now, Dan Castellaneta said:

Maxwell sucks fucking ass for compute work, which is pretty much the biggest reason why I'd go Kepler/GCN over Maxwell. Sure, it's a bit dusty at gaming, but it will smash a comparable Maxwell card in anything that can take advantage of it.

but WHO CARES?!

i want all the eyecandy i can get and also smooth animation when i pwn noobs, and you should TOO.

and when it comes to compute, well, there you go:

[compute benchmark charts]


20 minutes ago, patrickjp93 said:

Nope, AMD needs to fall by the wayside entirely and give RTG to Intel. There will be exactly one good year for AMD, and then it will all go up in smoke, exactly as it did with Kepler vs. Hawaii, and Nehalem+Sandy Bridge vs. K10/Turion/Fusion and then Bulldozer.

It's always thanks to you that I somehow gain faith in purchasing AMD GPUs, because if they succeed, they get support, and if they fail, Intel will support them. Either way you just can't go wrong. I would really love to see that, though. It'd be so interesting to watch Intel with AMD patents and RTG scare the crap out of Jen-Hsun.


3 minutes ago, i_build_nanosuits said:

which is perfectly fine by me since most of these cards are bought with the simple idea of rendering modern AAA games at high resolutions and epic framerates.

So what happens to Maxwell cards when DX12 games start using GPU compute for effects and AI?

 

[chart: Ashes of the Singularity 1080P Performance Benchmarks]

 

Yeah Maxwell crashes and burns.
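
For context on what "GPU compute for effects and AI" means at the API level: D3D12 exposes a dedicated COMPUTE queue type alongside the usual graphics (DIRECT) queue, and "async compute" is just the hardware executing work from both at once. A minimal sketch of creating such a queue, assuming the usual device-creation boilerplate (illustrative only, not code from any shipping game):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a dedicated compute queue next to the graphics (DIRECT) queue.
// Whether work on the two queues actually overlaps on the GPU is up to
// the hardware and driver -- which is the whole Maxwell vs GCN issue.
ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
```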


Just now, Notional said:

So what happens to Maxwell cards when DX12 games start using GPU compute for effects and AI?

Yeah Maxwell crashes and burns.

how about we wait for GAMES and DRIVERS to get released before we start beating that old DX12 drum all over again, shall we?!

i'm sure we all secretly know, deep down, that nvidia will ONCE AGAIN take the cake when the time comes.


1 minute ago, Glenwing said:

You agree that the argument you're using is nonsense, or you agree that Fermi is better than Kepler?

i agree that there is more to it than cuda core/stream processor count.

but i wanted to point out that AMD's stream processors have kept the same per-unit throughput across generations, and that AMD only increased the stream processor count and power consumption (i.e. made bigger, more badass cards without real technological improvements, HBM aside)
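
That per-shader-throughput claim is easy to sanity-check: peak FP32 throughput for these chips is shader count × 2 ops (one FMA per cycle) × clock, so the per-shader figure only moves with clock speed. A quick back-of-the-envelope sketch, assuming reference clock speeds:

```cpp
#include <cstdio>

int main() {
    struct Gpu { const char* name; int shaders; double ghz; };
    const Gpu gpus[] = {
        { "HD 7970 (GCN 1)", 2048, 0.925 },  // ~3.8 TFLOPS
        { "R9 290X (GCN 2)", 2816, 1.000 },  // ~5.6 TFLOPS
        { "Fury X  (GCN 3)", 4096, 1.050 },  // ~8.6 TFLOPS
    };
    // Peak FP32 = shaders x 2 ops (FMA) x clock. The per-shader column
    // barely moves: the scaling came from adding units, not faster ones.
    for (const Gpu& g : gpus)
        std::printf("%-17s %.1f TFLOPS (%.2f GFLOPS per shader)\n",
                    g.name, g.shaders * 2 * g.ghz / 1000.0, 2 * g.ghz);
    return 0;
}
```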


1 hour ago, MEC-777 said:

Lol, no they don't.

 

1 hour ago, Citadelen said:

Umm, what?

I think he meant market share. nVidia sells so many GPUs compared to AMD, even though Radeon cards (FOR GAMING ONLY) have more raw performance across the board. But...

 

nVidia has a beast marketing team. Kinda like Apple, marketing alone sells to people. nVidia sponsors events, teams, everything. Radeon just doesn't have as much coverage anywhere. I'm sure you can find 200 videos on the GTX 960 for every R9 380 video. nVidia already has so much of the market that they don't even need to perform better; people will just buy their GPUs because "it's from nVidia, it must be better."


5 minutes ago, i_build_nanosuits said:

how about we wait for GAMES and DRIVERS to get released before we start beating that old DX12 drum all over again, shall we?!

The game is out on Thursday the 31st, and that benchmark is from this month too. AMD, NVidia, Microsoft and Intel have had full access to the game's source code for over a year, and could even recommend optimizations to the game code for their hardware. NVidia did this and it was implemented. NVidia has released several drivers optimized for the newest beta version.

No, what you see is the power of async compute, which Maxwell cannot handle.
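
Concretely, a DX12 engine built for async compute splits the frame across the two queues and synchronizes with a fence only where the graphics work actually consumes the compute output. A hypothetical submission sketch (the command lists and fence plumbing are invented for illustration):

```cpp
#include <d3d12.h>

// gfxListA is independent of the compute pass; gfxListB consumes the
// compute results, so only it waits. On GCN the ACEs let computeList
// genuinely overlap gfxListA; if the driver serializes the queues
// instead (the Maxwell concern), this scheme gains nothing.
void SubmitFrame(ID3D12CommandQueue* gfxQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12CommandList* gfxListA,
                 ID3D12CommandList* computeList,
                 ID3D12CommandList* gfxListB,
                 ID3D12Fence* fence, UINT64& fenceValue)
{
    ID3D12CommandList* a[] = { gfxListA };
    gfxQueue->ExecuteCommandLists(1, a);        // starts immediately

    ID3D12CommandList* c[] = { computeList };
    computeQueue->ExecuteCommandLists(1, c);    // may overlap gfxListA
    computeQueue->Signal(fence, ++fenceValue);  // mark compute complete

    gfxQueue->Wait(fence, fenceValue);          // GPU-side wait, CPU not blocked
    ID3D12CommandList* b[] = { gfxListB };
    gfxQueue->ExecuteCommandLists(1, b);        // dependent work runs after
}
```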


3 minutes ago, Notional said:

The game is out on Thursday the 31st, and that benchmark is from this month too. AMD, NVidia, Microsoft and Intel have had full access to the game's source code for over a year, and could even recommend optimizations to the game code for their hardware. NVidia did this and it was implemented. NVidia has released several drivers optimized for the newest beta version.

No, what you see is the power of async compute, which Maxwell cannot handle.

that is ONE game.

how about Gears of War? that is DX12 too, and it runs like crap on AMD and a lot better on nvidia... what do you have to say about that?

 


good thing there are no games worth getting, i can just skip upgrading till Q2/Q3 next year 


4 minutes ago, Eroda said:

good thing there are no games worth getting, i can just skip upgrading till Q2/Q3 next year 

Rise of the Tomb Raider, if you haven't played that it's a must-have IMHO. beautiful game, fun and entertaining, really well put together... will run well on a GTX 680 too.

http://www.metacritic.com/game/pc/rise-of-the-tomb-raider


1 minute ago, i_build_nanosuits said:

that is ONE game.

how about Gears of War? that is DX12 too, and it runs like crap on AMD and a lot better on nvidia... what do you have to say about that?

also NO i have NOT seen the AOTS GAMEREADY DRIVER released from nvidia YET.

You mean the game that uses its original 2006 source code, slaps on some horrible GameWorks HBAO+, and adds a useless DX12 spec to market Windows 10 and the Windows Store? Yeah, that is as useful as Halo 2 was for Vista, when that game for the original Xbox was made DX10 exclusive. It means nothing and you know it (actually I can't tell with you and all the nonsense you spout).

Just because an API is used by a game does not mean it's taken proper advantage of. Just look at The Talos Principle using Vulkan. The devs themselves say they need to change the entire graphics engine to actually take advantage of it. That did not happen in GOW.

 

Here's one:

http://www.geforce.com/whats-new/articles/geforce-355-60-whql-driver-released "Ashes Of The Singularity GeForce Game Ready Driver Released" back in 2015.
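
Notional's point above about an API being used without being taken advantage of can be sketched in code: D3D12 moves resource-state tracking from the driver into the engine, so a mechanical port of an old renderer ends up emitting one barrier per draw and hand-recreates the serialization DX11 already did. A hypothetical example of such a naive port (not actual Gears of War code):

```cpp
#include <d3d12.h>

// A mechanically ported 2006-era renderer: one state transition per draw,
// exactly what the DX11 driver used to infer on its own. Technically it
// "uses DX12", but every new-API win (parallel command recording, async
// compute) is left on the table.
void NaivePortDraw(ID3D12GraphicsCommandList* cl, ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_COMMON;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cl->ResourceBarrier(1, &barrier);

    cl->DrawInstanced(3, 1, 0, 0);  // next draw transitions all over again
}
```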


4 minutes ago, Eroda said:

good thing there are no games worth getting, i can just skip upgrading till Q2/Q3 next year 

*points at No Man's Sky*


1 minute ago, Notional said:

You mean the game that uses its original 2006 source code, slaps on some horrible GameWorks HBAO+, and adds a useless DX12 spec to market Windows 10 and the Windows Store? Yeah, that is as useful as Halo 2 was for Vista, when that game for the original Xbox was made DX10 exclusive. It means nothing and you know it (actually I can't tell with you and all the nonsense you spout).

Just because an API is used by a game does not mean it's taken proper advantage of. Just look at The Talos Principle using Vulkan. The devs themselves say they need to change the entire graphics engine to actually take advantage of it. That did not happen in GOW.

 

Here's one:

http://www.geforce.com/whats-new/articles/geforce-355-60-whql-driver-released "Ashes Of The Singularity GeForce Game Ready Driver Released" back in 2015.

yeah i saw that after the fact and i edited my post... but like i said, AOTS is ONE GAME, it's not the end-all be-all of DX12 gaming, is it?!

how many games do you think will properly use async compute if nvidia, which holds 80%+ of the market share, can't do it?! who will develop with that in mind?!


2 minutes ago, i_build_nanosuits said:

but WHO CARES?!

i want all the eyecandy i can get and also smooth animation when i pwn noobs, and you should TOO.

and when it comes to compute, well, there you go:

[compute benchmark charts]

Use different software than Adobe Premiere and it's not as black and white.

 

Look at the Sony Vegas rendering times.

Also take good note of the TessMark and LuxMark scores.

Titan vs Titan X

http://www.anandtech.com/bench/product/1446?vs=1447

 

HD 7970 vs Titan X

http://www.anandtech.com/bench/product/1495?vs=1447

 

R9 390 vs GTX 970

http://www.anandtech.com/bench/product/1594?vs=1595

 

Now look closely at TessMark:

R9 390 vs R9 380X

http://www.anandtech.com/bench/product/1594?vs=1592

 

Now let's compare the fastest cards from both manufacturers:

http://www.anandtech.com/bench/product/1513?vs=1447

 

Even though the Fury X has 1,000 more stream processors, it still can't beat the Titan X in tessellation.

 

As for the Sony Vegas rendering times: I think 21 seconds means the AMD cards are hitting a storage I/O or software bottleneck, seeing as a smaller GCN 3 GPU is just as fast as the largest GCN 3 GPU.
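
That inference is Amdahl-style reasoning: if a small and a large GCN 3 part both finish in roughly 21 seconds, the GPU-bound fraction of that time must be near zero. A toy check with illustrative numbers (the 2x throughput gap between the chips is an assumption):

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    const double total   = 21.0;  // observed render time on both GPUs (s)
    const double speedup = 2.0;   // assumed raw-throughput gap, big vs small

    // If a fraction f of the 21 s were GPU-bound, the bigger chip would
    // finish in total*(1-f) + total*f/speedup. Observing ~21 s on both
    // forces f toward zero: the bottleneck is I/O or software.
    for (double f : {0.5, 0.1, 0.01}) {
        double t = total * (1.0 - f) + total * f / speedup;
        std::printf("if %4.1f%% GPU-bound, the big GPU would take %.1f s\n",
                    f * 100.0, t);
    }
    return 0;
}
```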


Just now, Prysin said:

snip!

i knew all that already; a GPU is a very specific and complex piece of engineering, and each will be better at some tasks than others, obviously.

that's why some games favor AMD, and most games favor nvidia hardware.


1 minute ago, i_build_nanosuits said:

yeah i saw that after the fact and i edited my post... but like i said, AOTS is ONE GAME, it's not the end-all be-all of DX12 gaming, is it?!

how many games do you think will properly use async compute if nvidia, which holds 80%+ of the market share, can't do it?! who will develop with that in mind?!

These games will use it:

  • Ashes of the Singularity by Stardock and Oxide Games
  • Total War: WARHAMMER by Creative Assembly
  • Battlezone VR by Rebellion
  • Deus Ex: Mankind Divided by Eidos-Montréal

But then you have all EA games on top of that.

 

However, just because a game can utilize async compute doesn't mean it won't be optimized for hardware that sucks at it. It just means AMD will get a performance boost that NVidia will not.

 

As for your GCN nonsense, all four generations are very different, both in their ACEs and in introducing colour delta compression and tessellation optimizations.

 

No one knows everything, but maybe you should listen/read more instead of spamming nonsense. Someone might believe the factually incorrect nonsense you spout.


Just now, Notional said:

These games will use it:

  • Ashes of the Singularity by Stardock and Oxide Games
  • Total War: WARHAMMER by Creative Assembly
  • Battlezone VR by Rebellion
  • Deus Ex: Mankind Divided by Eidos-Montréal

But then you have all EA games on top of that.

my 980ti will blast through any of these games with great framerates and detail levels, so i have absolutely no worries about upcoming DX12 titles, and i'm sure most of these games will perform better on nvidia's more expensive cards.


21 minutes ago, i_build_nanosuits said:

i agree that there is more to it than cuda core/stream processor count.

but i wanted to point out that AMD's stream processors have kept the same per-unit throughput across generations, and that AMD only increased the stream processor count and power consumption (i.e. made bigger, more badass cards without real technological improvements)

That's true to some extent, but let's not forget that the 290X was no different from what NVIDIA did with the Titan and 780, so if you want to accuse AMD of "just adding more cores every time", we can't pretend NVIDIA has done none of that. GCN 1.0 was about the same power efficiency as Kepler; AMD was not really behind NVIDIA in that generation. The only thing to point to is the Fury X, which has plenty of power efficiency improvements and a huge number of other improvements over the original GCN 1.0. It isn't "just" adding more cores. I'd agree the architecture isn't as good as Maxwell, but it's not as far behind as you make it out to be.


1 minute ago, Glenwing said:

That's true to some extent, but let's not forget that the 290X was no different from what NVIDIA did with the Titan and 780, so if you want to accuse AMD of "just adding more cores every time", we can't pretend NVIDIA has done none of that.

maybe yes, but nvidia never released a GTX 880ti which would have been 100% the same as the GTX 780ti but with 6GB of video memory? have i missed something here?

segmenting within the same family of GPUs is one thing, you need faster and slower products to compete... but re-branding them generation after generation without ever ''REALLY'' improving anything performance-wise is another story.


2 minutes ago, i_build_nanosuits said:

my 980ti will blast through any of these games with great framerates and detail levels, so i have absolutely no worries about upcoming DX12 titles, and i'm sure most of these games will perform better on nvidia's more expensive cards.

I'm sure it will. The point is that my 2013-gen 290, bought two years ago at half the cost, will too.


3 minutes ago, Notional said:

-snip-

I didn't know Total War: Warhammer was using async compute, that's great, maybe I'll be able to play it at more than 20 FPS. :D


2 minutes ago, i_build_nanosuits said:

maybe yes, but nvidia never released a GTX 880ti which would have been 100% the same as the GTX 780ti but with 6GB of video memory? have i missed something here?

segmenting within the same family of GPUs is one thing, you need faster and slower products to compete... but re-branding them generation after generation without ever ''REALLY'' improving anything performance-wise is another story.

Of course, I'd forgotten a 7970 was as good as the Fury X, damn, I could have saved LOADS of money.

/s


1 minute ago, Notional said:

I'm sure it will. The point is that my 2013-gen 290, bought two years ago at half the cost, will too.

well then there you go... my WHOLE point in this thread, i guess, was this:

AMD peasants, don't celebrate too hard when you see that nvidia cards do not support async compute well... because in the end NVIDIA HAS THE MONEY and the MARKET SHARE, and game developers develop games to appeal to the MASSES... so you should be VERY sad if pascal is not good at async compute, because that would mean you won't see that many games taking full advantage of it. end of story.

