
[Updated] Oxide responds to AotS Conspiracies, Maxwell Has No Native Support For DX12 Asynchronous Compute

Planned obsolescence is yet another reason not to buy NVidia, as it is deeply anti-consumer. Maxwell is not even one year old and it's already becoming obsolete. I can't imagine people being so extremely fanboyish as to accept this in the end.

So far only AMD has shipped a functional GPU with HBM. We have no idea how far NVidia has come. Also, define "better HBM 2 configs"? Because that is just guesswork on your part. If Pascal is "just Maxwell", Pascal will suck at async shaders too, based on what we've seen so far.

As for DX 12.1, the .1 never seems to be utilized in games anyway. 10.1 and 11.1 were not very popular. And if NVidia cannot even make async shaders work in basic DX12, how would they ever be better than AMD in DX12?

Your entire post is a mix of guesses and speculation, and has nothing to do with facts or the empirics we've seen so far.

You don't do well as a business by playing footsie and being overly generous to your buyer base. You do well by selling as much as possible at as high a price as possible with production costs as low as possible. Businesses are not beholden to consumers. Consumers have needs and wants, and that's all that matters. AMD has been making the same naive mistakes as a business in both divisions and has allowed itself to lose way too many battles to be able to win the war. It's not about fanboyism. It's about buying the best item for oneself at a point in time. Nvidia knows that, and so Nvidia has 80% of the market. The enterprise and professional markets plan on reliability and long-term performance. Again AMD loses on that front. AMD has only its dedicated fanbase and the 5% of GPU consumers who want to stand on principle against businesses being good at business.

Pascal production began 2 months ago, and Samsung is offering HBM 2.0 4-stack configs with 48GB capacities at 1.5TB/s. This was confirmed in the last two weeks, meaning AMD is screwed since it's in an exclusive provision contract with Hynix for the Arctic Islands series. Nvidia will bury it on specs for memory.

Pascal is Maxwell plus the bells and whistles. It's not hard to implement Asynchronous Compute. You just add an intercommunication fabric/bus to the CUs and let them be autonomous. Beyond that Nvidia has its far larger cache system which will mitigate any latency-bound scenarios which, yes, do exist even in game rendering.
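For context on what "async compute" means from the software side: in D3D12 a game simply creates a second command queue of type COMPUTE next to its normal graphics (DIRECT) queue and submits compute work there; whether the GPU actually overlaps the two streams is up to the hardware and driver. A minimal sketch, assuming Windows with the D3D12 SDK headers, a DX12-capable adapter, and linking against d3d12.lib (variable names are illustrative, error handling omitted):

```cpp
// Minimal sketch: how a D3D12 application requests "async compute" from its side.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Graphics (DIRECT) queue: accepts draw, compute, and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Separate COMPUTE queue: the app submits compute work here in the hope that
    // the hardware scheduler overlaps it with graphics. On GPUs without independent
    // compute engines the driver may simply serialize the two queues.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));

    // A fence synchronizes the two queues wherever they share resources.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    return 0;
}
```

The point is that the API only expresses the queues; the concurrency itself has to come from the hardware scheduler, which is exactly what this thread is arguing about.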

Both Nvidia and Intel are DX 12.1 compliant and are 92% of the graphics market. It'll become popular.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Just ignore him. He always spouts stuff without even a single link to some unknown website to back him up... BUT on the other hand, he can look at these numbers and try to explain his theory, though.

The Glorious Sight Of Factual Evidence

In AMD's case we just see a steady improvement, with the Fury X topping out at nearly double the computational power of the Tahiti-based 7970 GHz edition.

You have facts my friend. That, I like. A lot.

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.


CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


You don't do well as a business by playing footsie and being overly generous to your buyer base. You do well by selling as much as possible at as high a price as possible with production costs as low as possible. Businesses are not beholden to consumers. Consumers have needs and wants, and that's all that matters. AMD has been making the same naive mistakes as a business in both divisions and has allowed itself to lose way too many battles to be able to win the war. It's not about fanboyism. It's about buying the best item for oneself at a point in time. Nvidia knows that, and so Nvidia has 80% of the market. The enterprise and professional markets plan on reliability and long-term performance. Again AMD loses on that front. AMD has only its dedicated fanbase and the 5% of GPU consumers who want to stand on principle against businesses being good at business.

Pascal production began 2 months ago, and Samsung is offering HBM 2.0 4-stack configs with 48GB capacities at 1.5TB/s. This was confirmed in the last two weeks, meaning AMD is screwed since it's in an exclusive provision contract with Hynix for the Arctic Islands series. Nvidia will bury it on specs for memory.

Pascal is Maxwell plus the bells and whistles. It's not hard to implement Asynchronous Compute. You just add an intercommunication fabric/bus to the CUs and let them be autonomous. Beyond that Nvidia has its far larger cache system which will mitigate any latency-bound scenarios which, yes, do exist even in game rendering.

Both Nvidia and Intel are DX 12.1 compliant and are 92% of the graphics market. It'll become popular.

Let me ask you a simple question...

 

If Nvidia started taping out at TSMC 2 months ago... knowing that it takes what, 90 days for a full production run....

How do you redesign the internal bus layout without causing large delays, and thus VERY unhappy shareholders and customers?

 

Or better yet... if you were to do this in a driver, you would just add another layer of complexity and failure to an already extremely complex driver....

 

So..... if async shaders are supposed to be awesome in Pascal, this should have been taken care of months ago, maybe even a year ago.... And if Nvidia HASN'T done anything about it by now... well... it doesn't look like a very "bright" future then...


Both Nvidia and Intel are DX 12.1 compliant and are 92% of the graphics market. It'll become popular.

But less than 5% of the cards out there support 12.1/12_1, whatever they call it.
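For anyone wanting to check their own card: the "12.1" here is the Direct3D feature level 12_1 (which mainly adds conservative rasterization and rasterizer-ordered views on top of 12_0), and it can be queried at runtime. A rough sketch, assuming Windows with the D3D12 SDK and d3d12.lib; error handling trimmed:

```cpp
// Sketch: query whether the default GPU reports feature level 12_1.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    // Any DX12-capable GPU (feature level 11_0 and up) can create a device...
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    // ...but the maximum supported feature level has to be queried separately.
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS info = {};
    info.NumFeatureLevels = (UINT)(sizeof(levels) / sizeof(levels[0]));
    info.pFeatureLevelsRequested = levels;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &info, sizeof(info));

    printf("Feature level 12_1 supported: %s\n",
           info.MaxSupportedFeatureLevel >= D3D_FEATURE_LEVEL_12_1 ? "yes" : "no");
    return 0;
}
```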

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |


An educated guess doesn't mean it's accurate. You are basing your 'facts' on the past. There's no evidence so far that shows Pascal is superior in performance to Arctic Islands, mainly because it hasn't been fully developed yet.

What we do know is that AMD's 2-year-old card is contesting, and in some cases beating, NVIDIA's flagship model in a DX12 game as of now. That, my friend, is a fact.

Just letting you know the 980 was being compared to the 390X.

Not the Ti.

The 980 Ti was compared to the Fury X and they basically tied across the board. Now that is a huge improvement in a game where AMD gets obliterated at DX11, but again that isn't really relevant.

So yes, Hawaii seems to beat GM204 at DX12. At least right now, and at least in that game.

But if Hawaii were beating GM200 then it would also be beating Fiji, which I don't believe.

I assume we all already agreed that if you wanted to buy a 330-400 dollar GPU then, assuming you could cool it, of course buy the Hawaii GPU.

(The 390X makes less sense for 100 dollars over the 390 than a 980 does for 150 dollars over a 970.)

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


 

Your entire post is a mix of guesses and speculation, and has nothing to do with facts or the empirics we've seen so far.

 

An educated guess doesn't mean it's accurate. You are basing your 'facts' on the past.

 

Déjà vu. It's almost like I said the exact same thing to @patrickjp93 several times.

 

http://linustechtips.com/main/topic/436531-update-3-sapphires-reference-costs-649-usd-amd-reveals-r9-nano-benchmarks-ahead-of-launch/?p=5875824

http://linustechtips.com/main/topic/433764-intel-plans-to-support-vesas-adaptive-sync/?p=5837723

 

Oh wait, it's because I have. Patrick still hasn't replied to my demands for evidence because he simply has none. He gets destroyed every single time he opens his mouth in my direction, because it's easy to see through pseudo-intellectuals.

 

He has officially become my favorite game to play these days.

 

Why people even care that AMD is leading in DX12 is beyond me. AMD has amazing hardware. Nvidia has amazing software. DX12 removed a lot of the driver overhead, and AMD's amazing hardware finally gets put to use. It makes Nvidia look bad in comparison, but that is because the drivers Nvidia used were already highly refined. Very little extra performance came of it. That being said, the results are still rather close to each other as far as performance goes. 

 

DX12-High.png

source: http://www.extremetech.com/gaming/212314-directx-12-arrives-at-last-with-ashes-of-the-singularity-amd-and-nvidia-go-head-to-head/2

 

I personally love that DX12 is having a great impact on AMD cards. It means Nvidia has to step up their hardware game. Win-Win for consumers once again.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


It's not double precision. You can see more in this video:

 

 

Interesting... so Maxwell's "async compute" may actually just be preemptive scheduling, hence the degradation in performance when the Oxide devs tried to implement DX12 async compute for Maxwell.

 

As always, the truth will come out in the coming weeks and months as more DX12 games become available, especially games that use async compute.

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


Just letting you know the 980 was being compared to the 390X.

Not the Ti.

The 980 Ti was compared to the Fury X and they basically tied across the board. Now that is a huge improvement in a game where AMD gets obliterated at DX11, but again that isn't really relevant.

So yes, Hawaii seems to beat GM204 at DX12. At least right now, and at least in that game.

But if Hawaii were beating GM200 then it would also be beating Fiji, which I don't believe.

I assume we all already agreed that if you wanted to buy a 330-400 dollar GPU then, assuming you could cool it, of course buy the Hawaii GPU.

(The 390X makes less sense for 100 dollars over the 390 than a 980 does for 150 dollars over a 970.)

No, in these benchmarks the 980 Ti is being pitted against the 290X. And in some cases, it lost to it.

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.


CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


@MageTank

 

Funny thing is, I checked Nvidia's OWN sites (their news and media section).

The only thing mentioned is Nvidia slides saying "4x compute"... At no point does any article on any website that Google can present me, EVER, mention async shaders in Pascal..... So..... Unless Nvidia can overcome the monstrous performance of the Fury X with an even more monstrous performing GPU, then no, they won't win in games using async shaders... they will more likely get rekt, hard...

Also, this post here explains AMD's async shaders in more depth

http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading


You don't do well as a business by playing footsie and being overly generous to your buyer base. You do well by selling as much as possible at as high a price as possible with production costs as low as possible. Businesses are not beholden to consumers. Consumers have needs and wants, and that's all that matters. AMD has been making the same naive mistakes as a business in both divisions and has allowed itself to lose way too many battles to be able to win the war. It's not about fanboyism. It's about buying the best item for oneself at a point in time. Nvidia knows that, and so Nvidia has 80% of the market. The enterprise and professional markets plan on reliability and long-term performance. Again AMD loses on that front. AMD has only its dedicated fanbase and the 5% of GPU consumers who want to stand on principle against businesses being good at business.

Pascal production began 2 months ago, and Samsung is offering HBM 2.0 4-stack configs with 48GB capacities at 1.5TB/s. This was confirmed in the last two weeks, meaning AMD is screwed since it's in an exclusive provision contract with Hynix for the Arctic Islands series. Nvidia will bury it on specs for memory.

Pascal is Maxwell plus the bells and whistles. It's not hard to implement Asynchronous Compute. You just add an intercommunication fabric/bus to the CUs and let them be autonomous. Beyond that Nvidia has its far larger cache system which will mitigate any latency-bound scenarios which, yes, do exist even in game rendering.

Both Nvidia and Intel are DX 12.1 compliant and are 92% of the graphics market. It'll become popular.

 

Actually, you do in the long term. NVidia is playing a dangerous game by ripping off its customers with vendor lock-in in conjunction with planned obsolescence. At some point these people will wake up, see NVidia is ripping them off, and start to react. Maybe NVidia will see the writing on the wall and adapt to such a market situation before that happens, but that is only speculation of course. Consumers are far more emotional than business customers, so mistreating them will have long-lasting effects. I know your stance on 'the consumer is king', but it depends on the market.

 

When people buy graphics cards, they usually buy them for many years (3-4 perhaps). Only hardware nerds like us in here want to upgrade more often. Aggressive planned obsolescence of just one year or so will have consequences for consumer trust. Such trust is difficult to attain and very difficult to regain once lost.

 

Arctic Islands began either earlier or at about the same time. AMD has priority access to SK Hynix HBM, but that doesn't mean AMD cannot or won't use Samsung HBM if SKH cannot meet the demand. SKH can produce HBM2 in 8-Hi stacks as well, and considering HBM1 can be overclocked by 100%, I doubt there will be any speed differences between the two manufacturers once products launch. It's not like Samsung makes faster RAM than SKH in any other market.
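To put rough numbers on that point: HBM bandwidth is just bus width times per-pin data rate, so doubling the pin rate (or a 100% memory overclock) doubles bandwidth for the same stack layout. A back-of-the-envelope sketch using the commonly quoted HBM1/HBM2 figures (illustrative only, not vendor-confirmed specs):

```cpp
// Back-of-the-envelope HBM bandwidth math (publicly quoted figures, not vendor specs).
#include <cstdio>

double hbm_bandwidth_gbs(int stacks, int bits_per_stack, double gbps_per_pin) {
    // bandwidth (GB/s) = total bus width (bits) * per-pin rate (Gbit/s) / 8
    return stacks * bits_per_stack * gbps_per_pin / 8.0;
}

int main() {
    // HBM1 as shipped on Fury X: 4 stacks, 1024-bit each, 1 Gbps/pin -> 512 GB/s
    printf("HBM1, 4 stacks @ 1 Gbps/pin: %.0f GB/s\n", hbm_bandwidth_gbs(4, 1024, 1.0));
    // The HBM2 rate of 2 Gbps/pin doubles that to ~1 TB/s for the same 4-stack layout,
    // which is roughly what a 100% overclock of HBM1 would also give.
    printf("HBM2, 4 stacks @ 2 Gbps/pin: %.0f GB/s\n", hbm_bandwidth_gbs(4, 1024, 2.0));
    return 0;
}
```

On that math, any 4-stack HBM2 card lands around the 1 TB/s mark regardless of who fabs the stacks.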

 

It was hard enough for NVidia, it seems. I assume Pascal will have proper DX12 support, just as Arctic Islands GPUs will have 12.1. So far we know nothing of the Pascal or Arctic Islands (Greenland) architectures, so cache systems, etc., are pure speculation.

 

Again, speculation. So far Maxwell isn't even fully DX 12.0 compliant, lacking async shader support. That is very surprising. But in standard NVidia fashion, it's all about blaming everyone else.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


@MageTank

 

Funny thing is, I checked Nvidia's OWN sites (their news and media section).

The only thing mentioned is Nvidia slides saying "4x compute"... At no point does any article on any website that Google can present me, EVER, mention async shaders in Pascal..... So..... Unless Nvidia can overcome the monstrous performance of the Fury X with an even more monstrous performing GPU, then no, they won't win in games using async shaders... they will more likely get rekt, hard...

Also, this post here explains AMD's async shaders in more depth

http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading

 

I checked my information for Maxwell, and it turns out I was assuming Maxwell 2.0 could do 32x async compute based on my misinterpretation of this thread post. I updated the OP of this thread accordingly because it may be incorrect information.

 

What is weird is that Josh Walrath has said multiple times (on PCPer podcasts) that Maxwell 2.0 had async compute engines similar to GCN's, but it turns out he was probably going off the same misinformation.

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


@MageTank

 

Funny thing is, I checked Nvidia's OWN sites (their news and media section).

The only thing mentioned is Nvidia slides saying "4x compute"... At no point does any article on any website that Google can present me, EVER, mention async shaders in Pascal..... So..... Unless Nvidia can overcome the monstrous performance of the Fury X with an even more monstrous performing GPU, then no, they won't win in games using async shaders... they will more likely get rekt, hard...

Also, this post here explains AMD's async shaders in more depth

http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading

Yeah. Normally Nvidia brags about everything its cards can do, even if it is something minuscule. For them to leave out an advertising boon just does not seem like them. They still advertise ShadowPlay and SHIELD streaming on cards, and those features are not exactly important in the grand scheme of things.

 

Oh well. We just have to wait and see how things play out. The good news is, when things get too heated, Patrick normally goes into hiding. That means there should be less misinformation being spread in the meantime. So take that for what you will.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


Yeah. Normally Nvidia brags about everything its cards can do, even if it is something minuscule. For them to leave out an advertising boon just does not seem like them. They still advertise ShadowPlay and SHIELD streaming on cards, and those features are not exactly important in the grand scheme of things.

 

Oh well. We just have to wait and see how things play out. The good news is, when things get too heated, Patrick normally goes into hiding. That means there should be less misinformation being spread in the meantime. So take that for what you will.

It's called Reality Distortion Field - coined by Apple, copied by NVIDIA.

 

People buy into that shit xD


No, in these benchmarks the 980 Ti is being pitted against the 290X. And in some cases, it lost to it.

1. No one will run the shittier (lower-performing) of the two DX implementations just because they can.

2. You missed the point, because if those benchmarks are true then Hawaii also outperforms Fiji (or ties it, which would indicate some other bottleneck here, or it just doesn't make sense), because Fury X to 980 Ti was a straight draw.

3. This is the first one I have ever seen for this game where, straight up and without any exception, DX12 makes Nvidia worse.

Even people bitching about the 980's scaling showed, most of the time, 10-20% positive gains.

Just pointing these things out.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


@MageTank

Funny thing is, I checked Nvidia's OWN sites (their news and media section).

The only thing mentioned is Nvidia slides saying "4x compute"... At no point does any article on any website that Google can present me, EVER, mention async shaders in Pascal..... So..... Unless Nvidia can overcome the monstrous performance of the Fury X with an even more monstrous performing GPU, then no, they won't win in games using async shaders... they will more likely get rekt, hard...

Also, this post here explains AMD's async shaders in more depth

http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading

http://www.extremetech.com/gaming/212314-directx-12-arrives-at-last-with-ashes-of-the-singularity-amd-and-nvidia-go-head-to-head/2

Just saying. They apparently don't have much overcoming to do when they are basically tied across the board.

Look, clearly AMD's cards will get a bigger boost in DX12 than Nvidia's. Clearly.

Clearly, if two GPUs are very comparable now, the lead should shift notably in AMD's favor.

But when the two real flagship cards are still basically tied in DX12 in this ONE game (an alpha....), there really isn't too much to complain about either way.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


1. No one will run the shittier (lower-performing) of the two DX implementations just because they can.

2. You missed the point, because if those benchmarks are true then Hawaii also outperforms Fiji (or ties it, which would indicate some other bottleneck here, or it just doesn't make sense), because Fury X to 980 Ti was a straight draw.

3. This is the first one I have ever seen for this game where, straight up and without any exception, DX12 makes Nvidia worse.

Even people bitching about the 980's scaling showed, most of the time, 10-20% positive gains.

Just pointing these things out.

1. I never said they will, but both NVIDIA and AMD have said that DX12 will improve performance on their cards. It's just that AMD has benefited more from it at the moment.

2. I'm not sure what you mean by 'missing the point'. A 2-year-old card has virtually equal performance to NVIDIA's current flagship; it's pretty straightforward. The Fury X still has performance gains, just not as big as the 290X's.

3. Welcome to reality. The performance gains in DX12 for NVIDIA cards (at the moment) are very minimal. Why else do you think they attempted to call these numbers 'bugs'? AMD has an advantage here due to their GPU architecture (async compute engines). It's still too early to say, but so far they're winning.

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.


CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


1. I never said they will, but both NVIDIA and AMD have said that DX12 will improve performance on their cards. It's just that AMD has benefited more from it at the moment.

2. I'm not sure what you mean by 'missing the point'. A 2-year-old card has virtually equal performance to NVIDIA's current flagship; it's pretty straightforward. The Fury X still has performance gains, just not as big as the 290X's.

3. Welcome to reality. The performance gains in DX12 for NVIDIA cards (at the moment) are very minimal. Why else do you think they attempted to call these numbers 'bugs'? AMD has an advantage here due to their GPU architecture.

You really aren't getting that, of the 20 other sites that benchmarked these cards, no one else showed straight-up worse scores... in basically every workload.

And if the 290X > 980 Ti but 980 Ti = Fury X, then all that tells you is yay Hawaii (well, it also reeks of bullshit / some other game bottleneck or limitation, but I mean, throw doubt out and it's basically a straight shaft to both flagships).

As an aside, I've been kinda annoyed no one has shown utilization numbers throughout the test, as the initial 980/390X numbers were indicating some really serious CPU single-core requirements (with scaling topping out below 6 cores), since the 5960X was just being trashed by the 6700K in the CPU-demanding sections.

That isn't me saying it would make any difference on the GPU number side, but it has irked me for a while that the AMD vs Nvidia shit totally overshadowed what could otherwise have been some very interesting insights on CPU utilization under the same constraints. (That being said, it's one game. And it's the alpha of that game.... I mean, lol.)

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


You really aren't getting that, of the 20 other sites that benchmarked these cards, no one else showed straight-up worse scores... in basically every workload.

And if the 290X > 980 Ti but 980 Ti = Fury X, then all that tells you is yay Hawaii (well, it also reeks of bullshit / some other game bottleneck or limitation, but I mean, throw doubt out and it's basically a straight shaft to both flagships).

As an aside, I've been kinda annoyed no one has shown utilization numbers throughout the test, as the initial 980/390X numbers were indicating some really serious CPU single-core requirements (with scaling topping out below 6 cores), since the 5960X was just being trashed by the 6700K in the CPU-demanding sections.

That isn't me saying it would make any difference on the GPU number side, but it has irked me for a while that the AMD vs Nvidia shit totally overshadowed what could otherwise have been some very interesting insights on CPU utilization under the same constraints. (That being said, it's one game. And it's the alpha of that game.... I mean, lol.)

If you could find me a source other than Ars Technica that compares the 290X and the 980 Ti, I'd be very interested in it.

The point I'm trying to get across is that AMD GPUs just generally see a bigger performance gain than NVIDIA cards in DX12. That's what the fuss is about: NVIDIA's previously more advanced and better cards are struggling to compete against AMD's older cards.

post-237505-0-40269000-1440960301.jpg

This benchmark here shows a massive improvement for the 390X (basically a rebranded 290X), in which it marginally beats a card that was previously the superior and more expensive flagship. Yes, the gains on some cards will not be as drastic (the Fury X, for example), but this could very well bring AMD back into the game.

Only time will tell. And that time will come next week when Ark: Survival Evolved finally implements DX12 in the game.

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.


CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


If you could find me a source other than Ars Technica that compares the 290X and the 980 Ti, I'd be very interested in it.

The point I'm trying to get across is that AMD GPUs just generally see a bigger performance gain than NVIDIA cards in DX12. That's what the fuss is about: NVIDIA's previously more advanced and better cards are struggling to compete against AMD's older cards.

image.jpg

This benchmark here shows a massive improvement for the 390X (basically a rebranded 290X), in which it marginally beats a card that was previously the superior and more expensive flagship. Yes, the gains on some cards will not be as drastic (the Fury X, for example), but this could very well bring AMD back into the game.

Your base point is fine. I just take exception to all the people who were flipping out with "RIP 980 Ti" without taking into account that, again, the 980 Ti and Fury X were just straight-up tying.

So really, RIP high-end GPUs.

Which I guess was part of the point, right?

Also, apparently Nvidia did some straight CPU testing with a 5930K, and while it's interesting and all, I'd still like to see the performance scaling between lots of slow (3.3-3.5 GHz) cores and the same or fewer higher-clocked cores. (Yes, there's a bit of self-centeredness there with my 4.5 GHz 5820K.)

http://www.legitreviews.com/wp-content/uploads/2015/08/nvidia-test-results.jpg

Scroll to the green graphs.

It's interesting and all. But they didn't even bother looking at 1440p or 4K (which I understand should show less difference, but still), or looking at 4 cores vs 6 cores and 4.5-5.0 GHz vs 3.5 (since the stock 5960X is down there).

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


Your base point is fine. I just take exception to all the people who were flipping out with "RIP 980 Ti" without taking into account that, again, the 980 Ti and Fury X were just straight-up tying.

So really, RIP high-end GPUs.

Which I guess was part of the point, right?

Also, apparently Nvidia did some straight CPU testing with a 5930K, and while it's interesting and all, I'd still like to see the performance scaling between lots of slow (3.3-3.5 GHz) cores and the same or fewer higher-clocked cores.

Good to know.

I wouldn't say RIP to high-end GPUs yet; we can't base all our data on one game. Ark: Survival Evolved will give us another crucial data point once DX12 is implemented next week.

As for the NVIDIA testing, I'd be pretty skeptical about the benchmarks if they come directly from NVIDIA. While it's true that 1440p or 4K would show minimal difference from the current pattern, it may just be that the numbers would look a bit too low for PR. I don't know. Don't take my word for it.

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.


CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


An educated guess doesn't mean it's accurate. You are basing your 'facts' on the past. There's no evidence so far that shows Pascal is superior in performance to Arctic Islands, mainly because it hasn't been fully developed yet.

What we do know is that AMD's 2-year-old card is contesting, and in some cases beating, NVIDIA's flagship model in a DX12 game as of now. That, my friend, is a fact.

Pascal's already past risk production and is in volume production at TSMC right now. This was announced 2 months ago.

 

The "beating" is within margin of error, and no one should be surprised given both the 290/390X and 980TI both have 2816 SPs. When you get all the overhead out of the way, DUH! Then AMD runs into its ROP imbalance issues and Nvidia runs into its lack of Async Shaders for Maxwell. The problem is Koduri hasn't gotten his ROP count right in 5 generations, and I have 0 faith in him doing correctly for Arctic Islands. The fact is Koduri hasn't shown he can learn from his mistakes, as made clear by Fiji's terrible, imbalanced design that doesn't outperform the 390X in AOTS or any other DX 12 title right now. Nvidia consistently proves it will deliver the absolute best of what is desired for a market at a given time. Maxwell was designed with DX 12 in mind, but then 20nm never happened, so Nvidia scaled back and delivered the best for DX 11. Now all it has to do is put everything back and add a couple bells and whistles and deploy it for DX 12. AMD won't win. It's not a guess.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


When you see a 290X almost reaching a 980 Ti, of course peeps will be butthurt ;)

The computational power is there with the R9 series, however it's rarely utilised correctly and you end up with graphics cards that perform far below what they are actually capable of.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Let me ask you a simple question...

 

If Nvidia started taping out at TSMC 2 months ago... knowing that it takes what, 90 days for a full production run....

How do you redesign the internal bus layout without causing large delays, and thus VERY unhappy shareholders and customers?

 

Or better yet... if you were to do this in a driver, you would just add another layer of complexity and failure to an already extremely complex driver....

 

So..... if async shaders are supposed to be awesome in Pascal, this should have been taken care of months ago, maybe even a year ago.... And if Nvidia HASN'T done anything about it by now... well... it doesn't look like a very "bright" future then...

You're operating under the delusion that the bus wasn't redesigned long before that. DX 12 had been fully specced and ratified for 6 months before risk production of Pascal began. The revision already happened.

 

DX 12 removes the need for complex drivers. AMD, Nvidia, and Intel have all said this.

 

Read my first point. Development and deployment of products happen in staggered parallel, much like a CPU's superscalar operations.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


The computational power is there with the R9 series, however it's rarely utilised correctly and you end up with graphics cards that perform far below what they are actually capable of.

 

mmmmhmmm? And?

i5 2400 | ASUS RTX 4090 TUF OC | Seasonic 1200W Prime Gold | WD Green 120gb | WD Blue 1tb | some ram | a random case

 


But less than 5% of the cards out there support 12.1/12_1, whatever they call it.

Doesn't matter. They have the sales base that AMD doesn't, and with Skylake that will shift to 35% very quickly in the coming months. With Pascal it'll be back up to 65-75% by the end of Q2/Q3 2016, as per the deployment schedule.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

