
AMD's 3xx series tessellation "not" improved - performance boost comes from the drivers

zMeul

You won't get the new drivers unless you flash to a 390 or keep grabbing them from here. AMD only keeps these drivers separate to distinguish the two generations, so why would it ever gift you the performance bonus when it wants to sell you a new 390 instead?

The release drivers for the 390 (Catalyst 15.15), which contain these new optimizations, are not yet compatible with older GPUs.

 

AMD will come out with a common driver package for all GPUs; that is what they always do. And since it is Hawaii, they are not going to purposely cripple the 290 series; if they found new optimizations, they will implement them. Remember, they are not interested in trying to make the 390x look good next to the 290x. They only want to make it look good vs the 980. If we buy AMD GPUs they are happy, and soon the 290 series will be out of stock everywhere.

 

Also they would get a lot of bad rep from the community if they don't incorporate the software improvements they discover. It literally does not make business sense...


The 15.5 beta driver you mention here was released almost a month ago...

This is a modified 15.15 driver for the 300 series that was ported back down to 200.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Well, asynchronous shaders work on all GCN GPUs from what I've read, so I don't know why this would only work on the 3xx series.

  ﷲ   Muslim Member  ﷲ

KennyS and ScreaM are my role models in CSGO.

CPU: i3-4130 Motherboard: Gigabyte H81M-S2PH RAM: 8GB Kingston hyperx fury HDD: WD caviar black 1TB GPU: MSI 750TI twin frozr II Case: Aerocool Xpredator X3 PSU: Corsair RM650


With the unsold inventory problems AMD was having, I'm not the least bit surprised. How many times do I have to say consumers do not matter to these companies before people will listen and understand that? Humans have needs and wants and will pay to have them satisfied. You will never organize a global boycott or do anything of large enough scale to make them budge. The only thing that budges them is competition.

 

Seriously AMD, just sell ATI to Intel, merge with Nvidia, and move on.

Yep, because we need less competition. By the way, if it were not for AMD, HBM would not be a thing.


The release drivers for the 390 (Catalyst 15.15), which contain these new optimizations, are not yet compatible with older GPUs.

AMD will come out with a common driver package for all GPUs; that is what they always do. And since it is Hawaii, they are not going to purposely cripple the 290 series; if they found new optimizations, they will implement them. Remember, they are not interested in trying to make the 390x look good next to the 290x. They only want to make it look good vs the 980. If we buy AMD GPUs they are happy, and soon the 290 series will be out of stock everywhere.

Also they would get a lot of bad rep from the community if they don't incorporate the software improvements they discover. It literally does not make business sense...

Not really. The 200 series is discontinued; they have no financial interest in supporting it anymore. Also, there would be absolutely zero reason for anyone to buy the 300 series over the 200-series variants, which have the same memory and cost up to 150 USD less, if they gave the driver updates to everyone.

They have to show generational improvements to justify the rebranding (and to justify the upgrade to current AMD users, because let's be honest, no one is going to swap to a 300 series card from a 980 or higher; hell, all the same arguments that exist between the 290x and the 970 exist between the 390x and the 970, so I doubt many of those owners would switch either), and these driver gains are literally the only improvement they have to show for it.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


The differences are clearly visible right up until the 32x vs 64x comparison, at least for me, and I'm halfway to blind as it is.

maybe that's the problem

 

look at the below. The bottom shot yields way higher framerates; the top one only serves to make the game run slower for everyone. Fortunately for AMD users, we have a way to force the bottom preset (a rough sketch of the tessellation workload math follows the screenshots). If CD Projekt Red had done their job properly it would not be necessary, and everybody would be enjoying HairWorks with high framerates. If you lower it further to x8, I guess the difference is visible if you are really looking for it.

 

witcher3-2015-05-21-03-24-40-03.bmp

witcher3-2015-05-21-03-26-09-28.bmp
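For a rough sense of why the higher preset hurts framerates so much more than it helps visuals, here's a back-of-the-envelope sketch. The linear-scaling assumption and the segment count are made up for illustration, so treat it as a ballpark only:

```python
# Back-of-the-envelope sketch (not AMD's or CDPR's actual math): how the
# HairWorks tessellation preset inflates geometry work. Assumes generated hair
# geometry scales roughly linearly with the tessellation factor, which is a
# simplification -- real cost also depends on shading, AA on the hair, etc.

BASE_STRAND_SEGMENTS = 10_000  # hypothetical guide-segment count for one scene

def relative_workload(tess_factor: int, baseline: int = 8) -> float:
    """Geometry workload relative to the x8 preset, under the linear assumption."""
    return tess_factor / baseline

for factor in (8, 16, 32, 64):
    segments = BASE_STRAND_SEGMENTS * factor
    print(f"x{factor:<2} -> ~{segments:>7,} tessellated segments, "
          f"{relative_workload(factor):.0f}x the x8 workload")
```

Under that assumption, x64 is roughly four times the geometry work of x16 for a difference most people only spot when zoomed in.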


Yep, because we need less competition. By the way, if it were not for AMD, HBM would not be a thing.

Not true. HBM was in development before AMD jumped onboard; AMD just bankrolled it and got early access. Besides, Nvidia was part of the Hybrid Memory Cube effort before AMD jumped on HBM. Without AMD we'd have had JEDEC jump on HMC instead of HBM, because Hynix couldn't have bought the vote with AMD's money like it did to keep HMC out of the JEDEC family, despite HMC's ability to replace system memory and accelerator memory at lower power and higher speeds with lower latency.

 

AMD dying would bring about more competition if ATI went to Intel and the CPU division went to Nvidia. Nvidia has the Denver IP it can use, and on x86 that will make a very large difference (Intel would have to sell Nvidia the license since it needs x86_64 in the medium term (could potentially respin Itanium or a new 64-bit standard with more GP registers without needing all the legacy crap)). Intel has also been after Nvidia's head since the Larrabee betrayal. Competition would run rampant for 4-6 years before team green would just get buried.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


They have no financial interest in supporting them anymore.

Yes they do: PR and goodwill. The Internet knows that the 390x is Hawaii. They can't artificially segregate it from the 290 series without facing a backlash, which loses them money. Do you realize the 7970 GHz Edition to this day performs like the 280x in the newest games? Why? Same architecture, and AMD applied the same driver optimizations. If they are supporting the 390 series, it does not take any extra effort to support the 290 series; they would have to purposely deny the optimizations to the 290 series.

 

Nobody with a 290 is going to buy a 390x anyway. And soon all the 2xx series will be out of stock everywhere. AMD's chief interest is to look good vs competing NVIDIA cards.


maybe that's the problem

 

look at the below. The bottom shot yields way higher framerates; the top one only serves to make the game run slower for everyone. Fortunately for AMD users, we have a way to force the bottom preset.

 

 

 

Do you seriously not see the differences here? I can actually count the strands on his head in x64, whereas they are more amorphous in x16. Above 32x is not worth it for detail, but there is a distinct difference even I can see. The beard's also very different in quality.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Not true. HBM was in development before AMD jumped onboard; AMD just bankrolled it and got early access. Besides, Nvidia was part of the Hybrid Memory Cube effort before AMD jumped on HBM. Without AMD we'd have had JEDEC jump on HMC instead of HBM, because Hynix couldn't have bought the vote with AMD's money like it did to keep HMC out of the JEDEC family, despite HMC's ability to replace system memory and accelerator memory at lower power and higher speeds with lower latency.

 

AMD dying would bring about more competition if ATI went to Intel and the CPU division went to Nvidia. Nvidia has the Denver IP it can use, and on x86 that will make a very large difference (Intel would have to sell Nvidia the license since it needs x86_64 in the medium term (could potentially respin Itanium or a new 64-bit standard with more GP registers without needing all the legacy crap)). Intel has also been after Nvidia's head since the Larrabee betrayal. Competition would run rampant for 4-6 years before team green would just get buried.

AMD at the very least helped fund it, and Nvidia backed another technology which didn't work out. Also, AMD may not make good CPUs, but at least they know how to make one, unlike Nvidia, who has only made mobile CPUs. And Zen in 2016 could be very competitive; why do you want to see AMD dissolved before we see what they are making?


Do you seriously not see the differences here? I can actually count the strands on his head in x64, whereas they are more amorphous in x16. Above 32x is not worth it for detail, but there is a distinct difference even I can see. The beard's also very different in quality.

It obviously makes a difference what size screen you are looking at the image on, lol.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


Yeah, except that 390X is being compared to a bottom of the barrel 980 that isn't overclocked. Overclock both, and Hell throw a real 980 at it, and let's see what we get.

 

Unfortunately it's near impossible to find non-reference GTX 980 1440p benchmarks with HairWorks on, but we can see a Gigabyte GTX 980 Windforce score 48 fps on Ultra at 1440p without HairWorks (Digital Foundry has it at 47.5 fps average, 36 minimum). If we take the general assumption that turning HairWorks on costs a GTX 980 15-20 fps, we'll have an average of 33 fps at best, assuming we only drop the 15 fps by turning HairWorks on.
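For anyone who wants to sanity-check that arithmetic, here's a quick sketch; the 48 fps baseline is the Windforce number cited above, and the 15-20 fps HairWorks penalty is just the assumption, not a measurement:

```python
# Quick sanity check of the estimate above (my arithmetic, not a benchmark run):
# take the measured no-HairWorks average and subtract an assumed HairWorks cost
# of 15-20 fps for a GTX 980 at 1440p Ultra.

no_hairworks_avg = 48.0             # Gigabyte GTX 980 Windforce, 1440p Ultra (cited above)
hairworks_cost_fps = (15.0, 20.0)   # assumed penalty range for enabling HairWorks

best_case = no_hairworks_avg - hairworks_cost_fps[0]   # lose only 15 fps
worst_case = no_hairworks_avg - hairworks_cost_fps[1]  # lose the full 20 fps
print(f"Estimated average with HairWorks: {worst_case:.0f}-{best_case:.0f} fps")
# -> roughly 28-33 fps, i.e. about 33 fps at best
```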

 

These comparisons are obviously sketchy as balls, but I think my point still stands (and goddamn, somebody get all of these GPUs and just make a giant, very specific benchmark, please): I don't think the 290x, and by extension the 390x, is as far behind as a lot of people might think.

CPU: Intel i5 4690k W/Noctua nh-d15 GPU: Gigabyte G1 980 TI MOBO: MSI Z97 Gaming 5 RAM: 16Gig Corsair Vengance Boot-Drive: 500gb Samsung Evo Storage: 2x 500g WD Blue, 1x 2tb WD Black 1x4tb WD Red

 

 

 

 

"Whatever AMD is losing in suddenly becomes the most important thing ever." - Glenwing, 1/13/2015

 


So if the AMD drivers are forcing a lower tessellation... then the original comparison from HardOCP is also moot since it is really no longer apples to apples. 

 

CDPR and Nvidia really should let users set the amount of tessellation in HairWorks themselves.

Turnip OC'd to 3Hz on air


15.15 is the new one.

AMD's official website says that the 15.5 beta driver is the latest one.

AMD Rig - (Upgraded): FX 8320 @ 4.8 Ghz, Corsair H100i GTX, ROG Crosshair V Formula, Ghz, 16 GB 1866 Mhz Ram, Msi R9 280x Gaming 3G @ 1150 Mhz, Samsung 850 Evo 250 GB, Win 10 Home

(My first Intel + Nvidia experience  - recently bought ) : MSI GT72S Dominator Pro G ( i7 6820HK, 16 GB RAM, 980M SLI, GSync, 1080p , 2x128 GB SSD + 1TB HDD... FeelsGoodMan


Not true. HBM was in development before AMD jumped onboard; AMD just bankrolled it and got early access. Besides, Nvidia was part of the Hybrid Memory Cube effort before AMD jumped on HBM. Without AMD we'd have had JEDEC jump on HMC instead of HBM, because Hynix couldn't have bought the vote with AMD's money like it did to keep HMC out of the JEDEC family, despite HMC's ability to replace system memory and accelerator memory at lower power and higher speeds with lower latency.

AMD dying would bring about more competition if ATI went to Intel and the CPU division went to Nvidia. Nvidia has the Denver IP it can use, and on x86 that will make a very large difference (Intel would have to sell Nvidia the license since it needs x86_64 in the medium term (could potentially respin Itanium or a new 64-bit standard with more GP registers without needing all the legacy crap)). Intel has also been after Nvidia's head since the Larrabee betrayal. Competition would run rampant for 4-6 years before team green would just get buried.

What was the Larrabee betrayal? I know Intel developed a GPU named Larrabee around 2008-2009, but I didn't know Nvidia had anything to do with that.

CPU: Intel Core i7 7820X Cooling: Corsair Hydro Series H110i GTX Mobo: MSI X299 Gaming Pro Carbon AC RAM: Corsair Vengeance LPX DDR4 (3000MHz/16GB 2x8) SSD: 2x Samsung 850 Evo (250/250GB) + Samsung 850 Pro (512GB) GPU: NVidia GeForce GTX 1080 Ti FE (W/ EVGA Hybrid Kit) Case: Corsair Graphite Series 760T (Black) PSU: SeaSonic Platinum Series (860W) Monitor: Acer Predator XB241YU (165Hz / G-Sync) Fan Controller: NZXT Sentry Mix 2 Case Fans: Intake - 2x Noctua NF-A14 iPPC-3000 PWM / Radiator - 2x Noctua NF-A14 iPPC-3000 PWM / Rear Exhaust - 1x Noctua NF-F12 iPPC-3000 PWM


AMD's official website says that the 15.5 beta driver is the latest one.

you're not including the driver set shipped with the cards, in the box


So if the AMD drivers are forcing a lower tessellation... then the original comparison from HardOCP is also moot since it is really no longer apples to apples.

No, AMD's drivers are not doing that.

 

Witcher 3 is a separate discussion; AMD users were able to optimize HairWorks performance by capping the tessellation level manually in Catalyst Control Center.

 

This has nothing to do with what's going on in the 3xx series driver which is probably some optimization regarding how tessellation workloads are processed in general.


the Larrabee betrayal

this is interesting; I know of the project, but not about the "betrayal" part


AMD at the very least helped fund it, and Nvidia backed another technology which didn't work out. Also, AMD may not make good CPUs, but at least they know how to make one, unlike Nvidia, who has only made mobile CPUs. And Zen in 2016 could be very competitive; why do you want to see AMD dissolved before we see what they are making?

HMC has been deployed in high-performance computing for two years now, in Oracle SPARC systems for database hosting. HMC does work; Nvidia is just at the back of the line, and Intel has a huge order for Knights Landing which Micron and Intel are working around the clock to fill.

 

Nvidia made a dual-core CPU so powerful it immediately blew past Apple's best and required Apple to use a third core to beat it. Clearly Nvidia can do it. It's a modular, extensible design too.

 

I already know the theoretical limits of Zen. We have a promised 40% IPC improvement over Excavator, we know the cycle counts for Excavator, and we know AMD is sticking with the dual-FMAC design, just with an independent pair per core. It's not hard to construct an integer linear program to optimize based on aggregated open-source benchmark scores. AMD would have to have the 8-core Zen come in at a stunning 3.8 GHz just to beat the 5960X at stock, and most 5960X chips make it to 4.4 GHz. With Dennard scaling dead, there's no way the drop to 14nm will make this possible in the promised 95-135W TDP frame they've given, and if they use HDL the clocks will never get that high anyway, at all.
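To show where a figure like 3.8 GHz can come from, here's a minimal sketch of the arithmetic; the Excavator-vs-Haswell IPC ratio is an assumed placeholder, so the result is only as good as that guess:

```python
# Rough sketch of where a "Zen needs ~3.8 GHz to match a stock 5960X" figure can
# come from. All inputs are assumptions for illustration, not published numbers:
#   - Zen IPC = Excavator IPC * 1.40 (AMD's promised +40%)
#   - Excavator IPC relative to Haswell-E is a guess (~0.65)
#   - 8 cores vs 8 cores, so per-core perf ~ IPC * clock

EXCAVATOR_IPC_VS_HASWELL = 0.65   # assumed relative IPC, not a measured value
ZEN_IPC_UPLIFT = 1.40             # AMD's promised gain over Excavator
HASWELL_E_TURBO_GHZ = 3.5         # i7-5960X stock max turbo

zen_ipc_vs_haswell = EXCAVATOR_IPC_VS_HASWELL * ZEN_IPC_UPLIFT
required_clock_ghz = HASWELL_E_TURBO_GHZ / zen_ipc_vs_haswell
print(f"Zen clock needed to match a stock 5960X: ~{required_clock_ghz:.1f} GHz")
# -> about 3.8 GHz under these assumptions; higher if Excavator's IPC deficit is larger
```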

 

But more importantly, AMD needs to die now so Nvidia doesn't get swallowed up by Intel first. Intel is shoving Nvidia out of the HPC market slowly but surely, and Knights Landing is going to deal a vicious blow. If Nvidia is knocked out of that market, it's no longer in competition with Intel anywhere else in the semiconductor industry. At that point, with its low-end sales eaten up by Intel and AMD iGPUs, the shareholders will be very open to a buyout or merger. If Intel gets its hands on Nvidia before AMD dies, AMD will be crushed outright and no one will be capable of stepping forward to compete in its place. IBM will be dead (a prerequisite to knocking out Nvidia, and a very real possibility at the rate it's going), and Apple won't be interested in picking up that smoldering heap. Qualcomm won't either, with the PC market shrinking anyway.

 

If AMD doesn't go down before Nvidia does, you can say goodbye to competition forever, regardless of your desires for the short term.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


It obviously makes a difference what size screen you are looking at the image on, lol.

23" Viewsonic 1080p. You people are crazy or not looking if you can't see the differences.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


HMC has been deployed in high-performance computing systems for two years now, in Oracle SPARC systems for database hosting. HMC does work; Nvidia is just at the back of the line, and Intel has a huge order for Knights Landing which Micron and Intel are working around the clock to fill.

 

Nvidia made a dual-core CPU so powerful it immediately blew past Apple's best and required Apple to use a third core to beat it. Clearly Nvidia can do it. It's a modular, extensible design too.

 

I already know the theoretical limits of Zen. We have a promised 40% IPC improvement over Excavator, we know the cycle counts for Excavator, and we know AMD is sticking with the dual-FMAC design, just with an independent pair per core. It's not hard to construct an integer linear program to optimize based on aggregated open-source benchmark scores. AMD would have to have the 8-core Zen come in at a stunning 3.8 GHz just to beat the 5960X at stock, and most 5960X chips make it to 4.4 GHz. With Dennard scaling dead, there's no way the drop to 14nm will make this possible, and if they use HDL the clocks will never get that high anyway, at all.

 

But more importantly, AMD needs to die now so Nvidia doesn't get swallowed up by Intel first. Intel is shoving Nvidia out of the HPC market slowly but surely, and Knights Landing is going to deal a vicious blow. If Nvidia is knocked out of that market, it's no longer in competition with Intel anywhere else in the semiconductor industry. At that point, with its low-end sales eaten up by Intel and AMD iGPUs, the shareholders will be very open to a buyout or merger. If Intel gets its hands on Nvidia before AMD dies, AMD will be crushed outright and no one will be capable of stepping forward to compete in its place. IBM will be dead (a prerequisite to knocking out Nvidia, and a very real possibility at the rate it's going), and Apple won't be interested in picking up that smoldering heap. Qualcomm won't either, with the PC market shrinking anyway.

 

If AMD doesn't go down before Nvidia does, you can say goodbye to competition forever, regardless of your desires for the short term.

You know how weak mobile processors are compared to desktop processors, right? And you know Nvidia's Tegra processors can't compete against a Pentium.


maybe that's the problem

 

look at the below. The bottom shot yields way higher framerates; the top one only serves to make the game run slower for everyone. Fortunately for AMD users, we have a way to force the bottom preset. If CD Projekt Red had done their job properly it would not be necessary, and everybody would be enjoying HairWorks with high framerates. If you lower it further to x8, I guess the difference is visible if you are really looking for it.

 

 

 

 

Do you seriously not see the differences here? I can actually count the strands on his head in x64, whereas they are more amorphous in x16. Above 32x is not worth it for detail, but there is a distinct difference even I can see. The beard's also very different in quality.

 

gallery_6624_1762_547642.jpg

 

Is there a difference? Yes, I'd definitely say so.

Is it extremely small? I'd also say yes.

Is it worth the massive performance hit, and would you even be able to notice it without zooming in a bunch?

I'd say no, but that part is up to the user, I suppose.

 

#photoshipisforrichkids

#paintmasterrace

CPU: Intel i5 4690k W/Noctua nh-d15 GPU: Gigabyte G1 980 TI MOBO: MSI Z97 Gaming 5 RAM: 16Gig Corsair Vengance Boot-Drive: 500gb Samsung Evo Storage: 2x 500g WD Blue, 1x 2tb WD Black 1x4tb WD Red

 

 

 

 

"Whatever AMD is losing in suddenly becomes the most important thing ever." - Glenwing, 1/13/2015

 


you're not including the driver set shipped with the cards, in the box

Will this driver be available for download later on?...

AMD Rig - (Upgraded): FX 8320 @ 4.8 Ghz, Corsair H100i GTX, ROG Crosshair V Formula, Ghz, 16 GB 1866 Mhz Ram, Msi R9 280x Gaming 3G @ 1150 Mhz, Samsung 850 Evo 250 GB, Win 10 Home

(My first Intel + Nvidia experience  - recently bought ) : MSI GT72S Dominator Pro G ( i7 6820HK, 16 GB RAM, 980M SLI, GSync, 1080p , 2x128 GB SSD + 1TB HDD... FeelsGoodMan


snip

Well, not many switch from 780 -> 980, so I would expect that not many would buy the upgrade from 290x -> 390x. Seems like everyone is under the impression that everyone has loads of money... Well, if we had loads of money, none of this matters since the real card is the Fury(X). I don't think anyone will do a side grade if they are satisfied with their own card, because then they're just wasting money. If we don't have loads of money, then we're going to go price/performance or will pick a side. Also, AMD doesn't have money/resources to try to appease everyone for the same 10% performance benefit from creating a completely different architecture. I doubt they even want to spend money/resources greatly improving this architecture when the new technologies are coming that will make the architecture they would have made no longer usable anyway...

 

Similarly, a lot of developers simply won't waste resources upgrading the engines used in previous games (they may optimize it to bring the experience to what is expected). Instead they'll make a new game with the minor upgrades. I don't see people complaining that Witcher 3's core gameplay mechanics didn't change entirely (which is what we saw going from Witcher 1 -> 2).

 

Also, all this proves is that if I have a 200 series card and am lucky, I can jump to a 300 series card with a driver... If I couldn't upgrade by driver, I'm definitely not upgrading by buying, because the differences from generation to generation are too insignificant anyway.

