[Updated] Oxide responds to AotS Conspiracies, Maxwell Has No Native Support For DX12 Asynchronous Compute

Plot twist: TSMC can't get 16nm FF+ yields up due to large die sizes. 14/16nm delayed until 2017.

2nd Plot Twist: Intel reaches 10nm ahead of schedule and tries to make a dGPU based on the Cannonlake graphics engine in late 2016.

3rd Plot Twist: Samsung decides to try producing dGPUs for AMD and Nvidia.

4th Plot Twist: North Korea goes on the offensive, and Samsung has to shift its attention and delay its attempt at making dGPUs.


4th Plot Twist: North Korea goes on the offensive, and Samsung has to shift its attention and delay its attempt at making dGPUs.

North Korea going on the offensive would end six days later with NK being wiped out, or with KJU stepping down and the U.S. and China jointly running a transitional government. China only tolerates NK as long as it's a thorn in the side of the U.S. but doesn't do anything stupid enough to cause a war on China's border.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Plot twist: TSMC can't get 16nm FF+ yields up due to large die sizes. 14/16nm delayed until 2017.

2nd Plot Twist: Intel reaches 10nm ahead of schedule and tries to make a dGPU based on the Cannonlake graphics engine in late 2016.

3rd Plot Twist: Samsung decides to try producing dGPUs for AMD and Nvidia.

That would actually be good; however, Nvidia would kick and scream if Intel tried to enter the dGPU market for the third time.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


That would actually be good; however, Nvidia would kick and scream if Intel tried to enter the dGPU market for the third time.

Nvidia could do nothing to stop it, because Intel is already at Tier 1 DX12 support. By the time the Cannonlake graphics engine is finalized (it may even be released with Kaby Lake), Intel will almost certainly be at Tier 3 support, or even Tier 3 plus full DX12.1 support, and if it has a two-node lead at that point, Intel could win for a brief period on brute force, even if it doesn't have the most refined architecture by any stretch. I would laugh so hard in that instance. Not to mention Intel doesn't actually use Maxwell designs: it uses those patents to learn from and find ways around, because Intel doesn't want to be beholden to Nvidia and keep paying for GPU IP licenses into the future.
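For what it's worth, those binding tiers aren't just marketing labels; an application can query them straight from the API. A minimal sketch of the check, assuming a Windows 10 box with the D3D12 headers available and with error handling kept to the bare minimum:

```cpp
// query_tier.cpp - sketch: query the D3D12 resource binding tier of
// the default adapter. Build with MSVC; links against d3d12.lib.
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    ID3D12Device* device = nullptr;
    // nullptr = default adapter; FEATURE_LEVEL_11_0 is the D3D12 minimum.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12-capable device found.\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &opts, sizeof(opts));

    // ResourceBindingTier comes back as 1, 2 or 3 -- the "Tier"
    // everyone in this thread is arguing about.
    std::printf("Resource binding tier: %d\n",
                static_cast<int>(opts.ResourceBindingTier));

    device->Release();
    return 0;
}
```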

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Nvidia could do nothing to stop it, because Intel is already at Tier 1 DX12 support. By the time the Cannonlake graphics engine is finalized (it may even be released with Kaby Lake), Intel will almost certainly be at Tier 3 support, or even Tier 3 plus full DX12.1 support, and if it has a two-node lead at that point, Intel could win for a brief period on brute force, even if it doesn't have the most refined architecture by any stretch. I would laugh so hard in that instance.

Even so, that wouldn't stop Nvidia's tantrum. Remember Nvidia moaning about SLI before it acquired 3dfx? (Of course, Nvidia wouldn't be able to buy Intel.)

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


-snip-
-something about Kepler-
-snip-

Dunno what to say about bad performance in GameWorks games on older GPUs, but my 560 ran The Witcher 3 at ~50 fps on low-ish settings (some features turned up a little bit higher, just a little). I still don't have any reason to upgrade from my 560; it feels so weird D:

i5 4670k @ 4.2GHz (Coolermaster Hyper 212 Evo); ASrock Z87 EXTREME4; 8GB Kingston HyperX Beast DDR3 RAM @ 2133MHz; Asus DirectCU GTX 560; Super Flower Golden King 550 Platinum PSU;1TB Seagate Barracuda;Corsair 200r case. 


Dunno what to say about bad performance in GameWorks games on older GPUs, but my 560 ran The Witcher 3 at ~50 fps on low-ish settings (some features turned up a little bit higher, just a little). I still don't have any reason to upgrade from my 560; it feels so weird D:

 

what resolution were you playing at?

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


what resolution were you playing at?

1050p

1680x1050

 

Almost 1080p.

i5 4670k @ 4.2GHz (Coolermaster Hyper 212 Evo); ASrock Z87 EXTREME4; 8GB Kingston HyperX Beast DDR3 RAM @ 2133MHz; Asus DirectCU GTX 560; Super Flower Golden King 550 Platinum PSU;1TB Seagate Barracuda;Corsair 200r case. 


So let's say I'm picking between the 390 and the 970 (the 390 should be arriving soon enough; this is just my take on this), and I won't do badly no matter which one I buy.

BUT, if I go AMD, I'll possibly get a very nice performance increase in the future, while the 970 would stay pretty much the same?

Also, what kind of gain could the Fury X achieve? The 290X got around a 58% improvement... (if I'm reading the Ars Technica benchmarks correctly)

The Subwoofer 

Ryzen 7 1700  /// Noctua NH-L9X65 /// Noctua NF-P14s Redux 1200PWM

ASRock Fatal1ty X370 Gaming-ITX/ac /// 16GB DDR4 G.Skill TridentZ 3066Mhz

Zotac GTX1080 Mini 

EVGA Supernova G3 650W 

Samsung 960EVO 250GB + WD Blue 2TB


So let's say I'm picking between the 390 and the 970 (the 390 should be arriving soon enough; this is just my take on this), and I won't do badly no matter which one I buy.

BUT, if I go AMD, I'll possibly get a very nice performance increase in the future, while the 970 would stay pretty much the same?

Also, what kind of gain could the Fury X achieve? The 290X got around a 58% improvement... (if I'm reading the Ars Technica benchmarks correctly)

 

The first iteration of the Nitrous engine, for example, has only 20% of its graphics pipeline in compute. The next iteration will be over 50%, according to the dev. That's the trend in DX12 titles: utilizing more compute resources and fewer graphics resources. The Fury X is yet another card that looks like it will enjoy a long life; around 8.6 TFLOPS isn't bad in terms of compute performance.
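For anyone wondering what "async compute" actually looks like on the API side: the whole feature boils down to a game creating a second, COMPUTE-type command queue next to its normal graphics (DIRECT) queue. A bare-bones sketch, assuming a D3D12-capable Windows machine; whether work on the two queues truly overlaps is up to the hardware and driver, which is the entire argument in this thread:

```cpp
// async_queues.cpp - sketch: the setup behind "async compute" in D3D12.
// Two queues; on GPUs with hardware schedulers (e.g. GCN's ACEs),
// compute work can run concurrently with rendering.
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // The usual graphics queue: accepts draw, compute, and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ID3D12CommandQueue* gfxQueue = nullptr;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // The "async" queue: compute/copy only. Submitting here expresses
    // independent work; concurrent execution is not guaranteed by the API.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ID3D12CommandQueue* computeQueue = nullptr;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    std::printf("Created DIRECT and COMPUTE queues.\n");
    computeQueue->Release();
    gfxQueue->Release();
    device->Release();
    return 0;
}
```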

 

Myself... I'm waiting for Greenland and Pascal before I make my choice.

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Arthur Conan Doyle (Sherlock Holmes)


If there's a "side" I'm defending... it's the truth. I don't fly a green or red banner.

You are doing God's work. Gotham didn't deserve Batman, and we don't deserve you. Sadly, the fan war is a never-ending war, regardless of "truth". That being said, I wanted to take time out of this dirt-slinging show to thank you for your insight on this situation.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


You are doing God's work. Gotham didn't deserve Batman, and we don't deserve you. Sadly, the fan war is a never-ending war, regardless of "truth". That being said, I wanted to take time out of this dirt-slinging show to thank you for your insight on this situation.

 

LOL  :lol:

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Arthur Conan Doyle (Sherlock Holmes)


You are doing God's work. Gotham didn't deserve Batman, and we don't deserve you. Sadly, the fan war is a never-ending war, regardless of "truth". That being said, I wanted to take time out of this dirt-slinging show to thank you for your insight on this situation.

That said, he is neither the hero we need nor the one we want, but he is helpful nonetheless.

Everything you need to know about AMD CPUs in one simple post.  Christian Member 

Wii U, PS3 (2 USB, fat), PS4

iPhone 6 64GB and Surface RT

HP DL380 G5 with one E5345 and a bunch of hot-swappable HDDs in RAID 5 from when I got it; I intend to run XenServer on it

Apple Power Macintosh G5 2.0 DP (PCI-X) with a notebook HDD I had lying around and 4GB of RAM

Toshiba Satellite P850 with a Core i7-3610QM, 8GB of RAM, and the default 750GB HDD; runs dual screens (external display as main, laptop display as second) on Windows 10

MacBookPro11,3: i7-4870HQ, 512GB SSD, 16GB of memory


It's now clear that AMD handing Mantle over to the Khronos Group to jump-start Vulkan development was very much a strategic move; Vulkan has come along a lot faster than anybody expected thanks to that leg up. I guess AMD knew that speeding up the move to next-gen APIs is good business for them.


The first iteration of the Nitrous engine, for example, has only 20% of its graphics pipeline in compute. The next iteration will be over 50%, according to the dev. That's the trend in DX12 titles: utilizing more compute resources and fewer graphics resources. The Fury X is yet another card that looks like it will enjoy a long life; around 8.6 TFLOPS isn't bad in terms of compute performance.

 

Myself... I'm waiting for Greenland and Pascal before I make my choice.

The Fury X is fucked in the long run... it's completely choked by its low ROP count... its compute is massive, but the ROPs aren't strong enough, so it gets choked. This is why the Fury X barely has a better pixel fill rate than a 290X: they have an identical number of ROPs...

If they had added more ROPs and cut the compute units down to, like... Fury levels (3584), then it would fare better in the long run.

I would say the Fury X has a good 3-4 years in it... but it all depends on how quickly resolution and refresh-rate increases happen...
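Quick napkin math on that fill-rate claim, using the published specs (64 ROPs on both Hawaii and Fiji). Treat this as an upper bound; real throughput also depends on blending and memory bandwidth:

```cpp
// fillrate.cpp - back-of-the-envelope pixel fill rate: ROPs x core clock.
// Published specs: both Hawaii (290X) and Fiji (Fury X) have 64 ROPs.
#include <cstdio>

int main() {
    const int    rops290x  = 64;
    const double clk290x   = 1.000;  // GHz ("up to" boost clock)
    const int    ropsFuryX = 64;
    const double clkFuryX  = 1.050;  // GHz

    // Gpixels/s = ROPs * clock (GHz)
    std::printf("290X   : %.1f Gpix/s\n", rops290x  * clk290x);   // 64.0
    std::printf("Fury X : %.1f Gpix/s\n", ropsFuryX * clkFuryX);  // 67.2
    // ~5% apart: why the Fury X "barely has a better pixel fill rate".
    return 0;
}
```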


I have honestly been trying to work this out, and my build has been pretty much put on hold until I actually understand what's going on. So... should I avoid buying a current Nvidia graphics card and buy AMD instead? I have no clue what is going on, haha.

I have no idea what I am doing


I have honestly been trying to work this out, and my build has been pretty much put on hold until I actually understand what's going on. So... should I avoid buying a current Nvidia graphics card and buy AMD instead? I have no clue what is going on, haha.

Based on the information we have so far, AMD is the safer buy; i.e., if you are considering two GPUs that perform the same in current games, go with the AMD option. Also, let's see what comes of Nvidia's driver update, which Oxide confirmed.

The Fury X is fucked in the long run... it's completely choked by its low ROP count... its compute is massive, but the ROPs aren't strong enough, so it gets choked. This is why the Fury X barely has a better pixel fill rate than a 290X: they have an identical number of ROPs...

If they had added more ROPs and cut the compute units down to, like... Fury levels (3584), then it would fare better in the long run.

I would say the Fury X has a good 3-4 years in it... but it all depends on how quickly resolution and refresh-rate increases happen...

 

I very much doubt that. With the focus on async compute, and compute in general (which completely bypasses the ROPs), we are looking more at bandwidth limitations in the future. That should already have been solved with the introduction of HBM, especially with stacks that are 1024-bit at an effective 1 GHz.
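For reference, the napkin math on first-gen HBM, using the published Fury X configuration: four 1024-bit stacks at 500 MHz DDR, i.e. an effective 1 Gbps per pin, which is the "1 GHz" figure above:

```cpp
// hbm_bandwidth.cpp - first-gen HBM bandwidth from published Fury X specs.
#include <cstdio>

int main() {
    const int    stacks           = 4;
    const int    busBitsPerStack  = 1024;
    const double clockGHz         = 0.5;  // 500 MHz base clock
    const int    transfersPerClk  = 2;    // DDR: two transfers per clock

    // GB/s per stack = (bits / 8 bytes) * effective transfer rate (GT/s)
    double perStack = (busBitsPerStack / 8.0) * clockGHz * transfersPerClk;
    std::printf("Per stack: %.0f GB/s\n", perStack);          // 128 GB/s
    std::printf("Total    : %.0f GB/s\n", perStack * stacks); // 512 GB/s
    return 0;
}
```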

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


From my perspective,

The inclusion of HBM, coupled with the color compression found in Fiji, helps it achieve decent unfiltered ROP performance, above Hawaii's. A good example of this is the 3DMark Vantage pixel-fill benchmark.

What really hurts Fiji, in my opinion, is its triangle throughput (Gtris/s) coupled with its geometry and shading performance. A good example of this is the Ashes of the Singularity benchmark. All three together lead to the small-triangle problem on GCN hardware.

What the console guys are doing to get around this issue is beginning to move triangle operations into compute. Kollock, the Oxide developer, mentioned this to us over at overclock.net.

Since the trend going forward is a move toward compute rather than graphics, we're likely to see Fiji remain relevant for some time to come. Compute also saves on memory bandwidth and memory usage, both important since we're talking about a graphics card with 4GB of VRAM.
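To illustrate the small-triangle problem: rasterizers shade pixels in 2x2 quads, so a triangle that covers only a pixel or two still pays for whole quads of pixel-shader invocations. A toy model with made-up triangle sizes; the quad granularity is the standard simplification, not anything Oxide published:

```cpp
// quads.cpp - toy model of quad overshading: rasterizers shade in 2x2
// quads, so tiny triangles waste a large share of pixel-shader lanes.
#include <cstdio>

int main() {
    // (covered pixels, quads touched) for a few hypothetical triangles.
    struct Tri { const char* desc; int pixels; int quads; };
    const Tri tris[] = {
        {"large  (10000 px, ~2600 quads)", 10000, 2600},
        {"medium (  100 px,   ~40 quads)",   100,   40},
        {"tiny   (    2 px,     2 quads)",     2,    2},
    };
    for (const Tri& t : tris) {
        // Each quad launches 4 shader invocations regardless of coverage.
        double efficiency = 100.0 * t.pixels / (t.quads * 4.0);
        std::printf("%s -> %5.1f%% of shader work useful\n",
                    t.desc, efficiency);
    }
    return 0;  // large ~96%, medium ~63%, tiny 25%
}
```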

My 2 cents.

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Arthur Conan Doyle (Sherlock Holmes)


 

Awesome, thanks!

 

I've updated the OP of this thread with that info. Those graphs are especially remarkable. 

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


Well, on the topic of AMD needing to "have more pressure applied to stretch their legs"... signs of that were already there in the 290X when it first came out.

AMD specifically stated that the 290X wouldn't REALLY stretch its legs before 4K, but since nobody bothered with 4K at that point, it just got hammered by the 780 Ti and was kind of ignored for a while as people waited for Maxwell to come out...


Well, on the topic of AMD needing to "have more pressure applied to stretch their legs"... signs of that were already there in the 290X when it first came out.

AMD specifically stated that the 290X wouldn't REALLY stretch its legs before 4K, but since nobody bothered with 4K at that point, it just got hammered by the 780 Ti and was kind of ignored for a while as people waited for Maxwell to come out...

AMD really took a big gamble with that, and TBH mainstream 4K is still a long way off, so they might have aimed for 4K a bit too soon.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


AMD really took a big gamble with that, and TBH mainstream 4K is still a long way off, so they might have aimed for 4K a bit too soon.

Well, looking at what we know today... they are apparently in it for the long game.

And yes, the 290X can still do 4K at 30 FPS... which for a card that will soon be three years old is pretty darn impressive.


Well, looking at what we know today... they are apparently in it for the long game.

And yes, the 290X can still do 4K at 30 FPS... which for a card that will soon be three years old is pretty darn impressive.

And since they scale well in Crossfire thanks to the truly modern bridgeless XDMA implementation, that's just another thing that points to them thinking far ahead (again, probably a bit too far ahead given their financial state). Bridged SLI actually looks bad in comparison to Crossfire when you look at the differences.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

