AMD adds FSR driver support, discontinues support for older GPUs/OSes

porina
41 minutes ago, Kid.Lazer said:

I think the biggest problem here is the apparent hypocrisy surrounding AMD's drivers. On one hand, they get tons of flak for having poor drivers and the classic "AMD fine wine" driver optimizations. Then everyone turns around and wants them to support every card, no matter how different the architecture, for umpteen years. Compared to Nvidia, their budget is a fraction of a fraction. In light of that, you can't have it both ways. So pick one: do you want new, forward-seeking features, or long-term-yet-stagnant "support"?

Why do you feel it is hypocritical to ask for both long driver support and good drivers? Especially when you don't have to choose if you go with the competitor.

 

Imagine if people had the same attitude towards Intel.

"It's hypocritical to compare Intel to AMD and want Intel to match them in both performance and power consumption! You can't have it both ways. Pick one. High performance or low power consumption?"

The obvious response to that is "then I'll just not buy an Intel CPU".

 

The same thing applies here. If AMD has to choose between supporting their old products and developing new features, then that gives me less incentive to buy from them when Nvidia are capable of doing both.

 

 

39 minutes ago, leadeater said:

Part of AMD's problem is they use all sorts of architectures within product generations, so they can't just say, for example, that all GCN 2.0 and older cards are now legacy: even the RX 500 series has GCN 1.0 cards in its lineup, while the main architecture for the RX 500 series was GCN 4.0.

 

Nvidia on the other hand don't do that: the GTX 10 series is Pascal, and every single product within that generation uses the Pascal architecture. Nvidia may however choose to give certain cards an extended life span, but those are typically very low end models for OEM or similar needs. The last mixed-architecture generation on desktop was the GTX 700 series; the GTX 900 series had 2 very low end mobile models that were Kepler rather than Maxwell.

I totally get that, but I don't think consumers should be burdened with knowing that GCN 3.0 will probably get far shorter support than GCN 4.0 because GCN 3.0 is very similar to GCN 1.0, and we can't expect AMD to keep supporting that forever.

It becomes messy really fast if we are going to start blaming consumers for not knowing how similar or dissimilar GCN 1.0 is compared to GCN 3.0 or 4.0.

 

The technical reason for why GCN 3.0 is dropped at the same time as GCN 1.0 is pretty obvious (at least to those up to date with GPU architectures), but that doesn't make the situation any better.

 

The device tree in the Galaxy S21 being similar to the device tree in the S10 would not exactly make everyone forgive Samsung if they decided to not release any more updates for the S10, S20 or S21 starting tomorrow. People would still be pissed that their S21 that they recently bought was no longer supported. Pointing to the S10 and going "but the S10 was supported for quite a long time!" does not matter to S21 owners.

Likewise, GCN 1.0 owners getting long support does not matter to GCN 3.0 owners.


@LAwLz

Updates for Android phones are already shit, so giving that as an example is not the best choice lol. Granted, Samsung is not the prime offender, but it's there with the rest of the crowd. Part of the blame lies with them spamming the market with a billion devices every month. No one can support that many or release software updates in any meaningful timeframe.


1 hour ago, LAwLz said:

I totally get that, but I don't think consumers should be burdened with knowing that GCN 3.0 will probably get far shorter support than GCN 4.0 because GCN 3.0 is very similar to GCN 1.0, and we can't expect AMD to keep supporting that forever.

It becomes messy really fast if we are going to start blaming consumers for not knowing how similar or dissimilar GCN 1.0 is compared to GCN 3.0 or 4.0.

Yep, that is the problem: you get those yo-yo lengths of support and optimization, since when current cards are using a two-generation-old architecture, you, as in this example an owner of a two-generation-old high(er) end GPU, get those benefits. However, if AMD then needs to make a hard cut, they are left with the tough choice of cutting by either product generation or architecture generation, the second being highly complicated to actually communicate to consumers, who generally don't understand that area well.

 

I think RX 500 only got chosen as the demarcation point because it was the last product generation like this to use multiple GCN revisions, and AMD is essentially cutting ties with GCN soon. This is it for GCN; soon it'll be entirely legacy, just like TeraScale is.

 

Not saying we should or are blaming consumers for this, but with the way AMD does/did things they dug themselves into quite a deep hole with no good way of getting out.

 

At the very least, everything after RX 500 used a single architecture generation per product line, like Nvidia: Vega only used GCN 5, RX 5000 only used RDNA 1, RX 6000 only used RDNA 2. Just please, AMD, keep doing it this way, for everyone's sake.

 

Edit:

It's a little crazy that the RX 500 generation actually used GCN 1.0, an architecture that is now 9 years old (5 years old at the time), in a product: the RX 520. Like come on, really? Nobody saw that being a problem?


5 hours ago, LAwLz said:

long driver support and good drivers? Especially when you don't have to choose if you go with the competitor.

I've read your statements about support and while I agree that AMD dropping Fiji support is ridiculous, you seem to be forgetting about what Nvidia does too.

How quickly they stop optimizing for older generations: Kepler was destroyed by a lack of optimizations compared to AMD cards of the same age for years, and I didn't see you complain. Maxwell is dead right now as well, so what if you get a driver update if no game optimizations are added and no features are added? Most people seem not to update Nvidia's drivers for such cards because it seems pointless anyway. Pascal stopped getting any meaningful game optimizations a long time ago too, which is easily seen when a 2070 Super surpasses a 1080 Ti, something that did not happen when it launched. But in any newer game, there's a large gap between them.

I'm not defending AMD here; I'm pointing out that the alternative isn't somehow magically better, as you make it out to be. It's arguably been worse with support in recent years.

CPU: AMD Ryzen 7 5800X3D GPU: AMD Radeon RX 6900 XT 16GB GDDR6 Motherboard: MSI PRESTIGE X570 CREATION
AIO: Corsair H150i Pro RAM: Corsair Dominator Platinum RGB 32GB 3600MHz DDR4 Case: Lian Li PC-O11 Dynamic PSU: Corsair RM850x White


1 hour ago, LAwLz said:

"It's hypocritical to compare Intel to AMD and want Intel to match them in both performance and power consumption! You can't have it both ways. Pick one. High performance or low power consumption?"

The obvious response to that is "then I'll just not buy an Intel CPU".

That analogy doesn't hold, as the objective performance of a particular product is not the same thing as perpetual software support.

 

And I get what you're saying about getting both longevity and upgrades, but as someone who builds products, I can assure you it isn't always possible (just look at RTX). And it's not just a matter of "they decided not to because they're bad." Strong demarcation points are often inevitable, either because a product is not profitable to maintain, or because you found a long-standing physical issue that needs addressing sooner rather than later. Yes, it sucks for the consumer, but such is life. Security is a non-issue in this case, and the performance of older cards has already reached its maximum potential.

 

It's also worth pointing out to the many people fretting about their old cards: loss of support =/= non-functional. I have an old HD4870 that I can install in Windows 10 just fine.

Primary Gaming Rig:

Ryzen 5 5600 CPU, Gigabyte B450 I AORUS PRO WIFI mITX motherboard, PNY XLR8 16GB (2x8GB) DDR4-3200 CL16 RAM, Mushkin PILOT 500GB SSD (boot), Corsair Force 3 480GB SSD (games), XFX RX 5700 8GB GPU, Fractal Design Node 202 HTPC Case, Corsair SF 450 W 80+ Gold SFX PSU, Windows 11 Pro, Dell S2719DGF 27.0" 2560x1440 155 Hz Monitor, Corsair K68 RGB Wired Gaming Keyboard (MX Brown), Logitech G900 CHAOS SPECTRUM Wireless Mouse, Logitech G533 Headset

 

HTPC/Gaming Rig:

Ryzen 7 3700X CPU, ASRock B450M Pro4 mATX Motherboard, ADATA XPG GAMMIX D20 16GB (2x8GB) DDR4-3200 CL16 RAM, Mushkin PILOT 1TB SSD (boot), 2x Seagate BarraCuda 1 TB 3.5" HDD (data), Seagate BarraCuda 4 TB 3.5" HDD (DVR), PowerColor RX VEGA 56 8GB GPU, Fractal Design Node 804 mATX Case, Cooler Master MasterWatt 550 W 80+ Bronze Semi-modular ATX PSU, Silverstone SST-SOB02 Blu-Ray Writer, Windows 11 Pro, Logitech K400 Plus Keyboard, Corsair K63 Lapboard Combo (MX Red w/Blue LED), Logitech G603 Wireless Mouse, Kingston HyperX Cloud Stinger Headset, HAUPPAUGE WinTV-quadHD TV Tuner, Samsung 65RU9000 TV


2 hours ago, Morgan MLGman said:

I've read your statements about support and while I agree that AMD dropping Fiji support is ridiculous, you seem to be forgetting about what Nvidia does too.

How quickly they stop optimizing for older generations: Kepler was destroyed by a lack of optimizations compared to AMD cards of the same age for years, and I didn't see you complain. Maxwell is dead right now as well, so what if you get a driver update if no game optimizations are added and no features are added? Most people seem not to update Nvidia's drivers for such cards because it seems pointless anyway. Pascal stopped getting any meaningful game optimizations a long time ago too, which is easily seen when a 2070 Super surpasses a 1080 Ti, something that did not happen when it launched. But in any newer game, there's a large gap between them.

I'm not defending AMD here; I'm pointing out that the alternative isn't somehow magically better, as you make it out to be. It's actually been worse with support in recent years.

This seems to be a really common argument lately all over the place.  “Sure I did an awful thing but it’s done by others as well”. It’s not a good thing to do because it creates a slippery slope of degradation.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


On 6/22/2021 at 7:01 PM, RejZoR said:

While ReShade could do it, it doesn't have control over framebuffer sizing. ReShade can only operate at the resolution the game is running at, whereas FSR changes (lowers) the render resolution and outputs it at the monitor's native resolution, something ReShade can't do. It would be cool if it could do such things; people already do all sorts of funky things with ReShade's shaders (the best example is Marty's RTGI shader, which adds screen space real time ray tracing to almost any game). I've used it for a while and it's pretty sick. Having RTGI and FSR and then CAS on top would be pretty sick 😄

Also, ReShade would apply FSR over the top of any overlay elements like the HUD, unlike when FSR is implemented in a game (as mentioned in the LTT video), so even if you were to get it working you would lose sharpness on any text there.

CPU: i7 4790k, RAM: 16GB DDR3, GPU: GTX 1060 6GB


2 hours ago, Bombastinator said:

This seems to be a really common argument lately all over the place.  “Sure I did an awful thing but it’s done by others as well”. It’s not a good thing to do because it creates a slippery slope of degradation.

Of course, but facts are facts and need to be taken into consideration. I've bashed Nvidia for what they did, especially to Kepler, in terms of drivers, and I'm honestly stunned that AMD dropped support for the R9 Fury series this soon - I understand the technical reasoning, but I don't think that's gonna cut it.
I mentioned Nvidia because the original author of the comment made it sound like they were any different, which they aren't.

CPU: AMD Ryzen 7 5800X3D GPU: AMD Radeon RX 6900 XT 16GB GDDR6 Motherboard: MSI PRESTIGE X570 CREATION
AIO: Corsair H150i Pro RAM: Corsair Dominator Platinum RGB 32GB 3600MHz DDR4 Case: Lian Li PC-O11 Dynamic PSU: Corsair RM850x White


4 hours ago, Morgan MLGman said:

How quickly they stop optimizing for older generations: Kepler was destroyed by a lack of optimizations compared to AMD cards of the same age for years, and I didn't see you complain. Maxwell is dead right now as well, so what if you get a driver update if no game optimizations are added and no features are added? Most people seem not to update Nvidia's drivers for such cards because it seems pointless anyway. Pascal stopped getting any meaningful game optimizations a long time ago too, which is easily seen when a 2070 Super surpasses a 1080 Ti, something that did not happen when it launched. But in any newer game, there's a large gap between them.

Do we know if any past vs recent relative performance changes are due to driver optimisation, or are modern games using GPUs differently than older games? Put aside the likes of RTX and DLSS which didn't exist in the older cards. Are newer games running better on more modern hardware because of hardware differences that weren't or even can't be utilised by older games?

 

When Pascal was launched, I recall the 1070 was near enough the same performance as a 980 Ti. Is the suggestion that, for more modern games, the 1070 would pull ahead? I actually have both GPUs in FE forms so this could be tested if I can be bothered to spend the time on it. I also have a 1080 Ti and a 2070 (not Super), but that would be a less balanced comparison.

 

What may or may not apply to the above are observations I have on compute use cases. These may be either heavily INT or FP based, but if you pick cards of equivalent gaming performance, newer generations seem to do much better in compute than gaming performance alone would suggest. The 2070 I have easily beats a 1080 Ti in both absolute performance and efficiency. I don't recall exactly, but I think Turing did have some extra execution resources added, and it would seem that compute use cases are able to make use of them even without specific coding targeting them. The extra cores of Ampere seem to deliver a similar boost, but I've not tested it in detail as running those loads is more of a winter pastime.

 

One final thought: the top end Maxwell (excluding Titans) was the 980 Ti. Roughly equivalent to a 1070, a 2060?, maybe a 3050??? The top card of its time is a low end card today. Lower Maxwell cards would be scraping entry level for modern games.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


4 hours ago, Morgan MLGman said:

How quickly they stop optimizing for older generations: Kepler was destroyed by a lack of optimizations compared to AMD cards of the same age for years, and I didn't see you complain. Maxwell is dead right now as well, so what if you get a driver update if no game optimizations are added and no features are added? Most people seem not to update Nvidia's drivers for such cards because it seems pointless anyway. Pascal stopped getting any meaningful game optimizations a long time ago too, which is easily seen when a 2070 Super surpasses a 1080 Ti, something that did not happen when it launched. But in any newer game, there's a large gap between them.

[Citation Needed]


4 hours ago, porina said:

Do we know if any past vs recent relative performance changes are due to driver optimisation, or are modern games using GPUs differently than older games? Put aside the likes of RTX and DLSS which didn't exist in the older cards. Are newer games running better on more modern hardware because of hardware differences that weren't or even can't be utilised by older games?

 

When Pascal was launched, I recall the 1070 was near enough the same performance as a 980 Ti. Is the suggestion that, for more modern games, the 1070 would pull ahead? I actually have both GPUs in FE forms so this could be tested if I can be bothered to spend the time on it. I also have a 1080 Ti and a 2070 (not Super), but that would be a less balanced comparison.

One possible consideration would be more modern games being more GPU memory intensive than older titles. In older games or at 1080p I think the performance may be similar, but once you move to higher resolutions or games with larger textures that put more demand on the memory, that might be where you start to see the newer generation cards edging ahead. If you do go back and test it, that would be worth testing as well.

CPU: Intel i7 6700k  | Motherboard: Gigabyte Z170x Gaming 5 | RAM: 2x16GB 3000MHz Corsair Vengeance LPX | GPU: Gigabyte Aorus GTX 1080ti | PSU: Corsair RM750x (2018) | Case: BeQuiet SilentBase 800 | Cooler: Arctic Freezer 34 eSports | SSD: Samsung 970 Evo 500GB + Samsung 840 500GB + Crucial MX500 2TB | Monitor: Acer Predator XB271HU + Samsung BX2450


6 hours ago, Spotty said:

One possible consideration would be more modern games being more GPU memory intensive than older titles. In older games or at 1080p I think the performance may be similar, but once you move to higher resolutions or games with larger textures that put more demand on the memory, that might be where you start to see the newer generation cards edging ahead. If you do go back and test it, that would be worth testing as well.

VRAM configuration certainly is a possible variable. Some stats for selected cards below:

 

Model, VRAM capacity, VRAM bandwidth, FP32 rate

980 Ti, 6 GB, 336.6 GB/s, 6.1 TFLOPS
1070, 8 GB, 256.3 GB/s, 6.5 TFLOPS

 

1080 Ti, 11 GB, 484.4 GB/s, 11.3 TFLOPS
2070 Super, 8 GB, 448.0 GB/s, 9.1 TFLOPS
 

Since the claim was "Maxwell is dead", the 980 Ti to 1070 difference is more interesting to me. We actually see quite a drop in VRAM bandwidth on the newer card. Rated FP32 compute performance is about the same, with the caution that it doesn't necessarily translate to gaming performance.

 

I also included 1080 Ti to 2070S since that pairing was mentioned in the earlier post. On those limited stats, the 2070S should be lower than a 1080 Ti. Again, I can't directly test this one since I don't have a 2070S, and I really don't want to mess around with the system my 1080 Ti is in.

 

There's one other variable I have in mind: CPUs. During the Maxwell era quad cores still ruled. We didn't take our first serious steps beyond 4 cores in mainstream until Ryzen in March 2017. Pascal was already current at that time, launched nearly a year earlier in May 2016. Only with Turing released in September 2018 could we argue that we had moved beyond quad cores in non-HEDT higher end gaming systems. We have Zen+ released April 2018, and Coffee Lake in October 2017. Older games would be less likely to be optimised for beyond 4 cores and newer games may take better advantage of that. But, does that also affect the GPU performance in some way? Wouldn't a newer CPU help in newer games similarly for both?

 

I'm still undecided if I'll do this testing, since it has to be clear what should be tested and what is meaningful to test. My thinking is, if I do this it'll be a 10600k at stock with some fast ram (at least 3200, not sure what is actually in that system). Latest Nvidia driver only. I can swap between the 980 Ti and 1070. The game list will be the hard part, since it needs to represent games through the years, and it has to be games I either own or for which there is a free demo/standalone benchmark. I won't do manual gameplay benchmarks either; the benchmark has to be built into the game. If the claim is true, then we might expect to see closer performance between the two GPUs for older games, with a gap opening for more modern games. I do not intend to test older driver versions as that is a pain to do. If the claim that driver optimisations for older cards are dropped is true, it shouldn't be needed.
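To make the expected pattern concrete, here's a rough sketch of how I'd tabulate the results (Python; the game names and FPS figures below are placeholders I made up, not measurements):

```python
# Hypothetical summary of a 980 Ti vs 1070 run: if driver optimisation for
# Maxwell really stopped, the 1070/980 Ti ratio should creep up with game age.
# All numbers below are placeholders, not measurements.
results = [
    {"game": "Older title A", "year": 2015, "fps_980ti": 90.0, "fps_1070": 91.0},
    {"game": "Mid-era title B", "year": 2018, "fps_980ti": 70.0, "fps_1070": 74.0},
    {"game": "Recent title C", "year": 2021, "fps_980ti": 50.0, "fps_1070": 58.0},
]

for r in sorted(results, key=lambda r: r["year"]):
    ratio = r["fps_1070"] / r["fps_980ti"]
    print(f'{r["year"]} {r["game"]:<16} 1070/980Ti = {ratio:.2f}')
```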

 

I haven't looked myself, but is there a demo/benchmark available for any of the FSR supported titles? That could be thrown in also. [Edit: I have Anno 1800, which does have a built-in benchmark, and I see the patch has been released]

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


1 hour ago, porina said:

Since the claim was "Maxwell is dead", the 980 Ti to 1070 difference is more interesting to me. We actually see quite a drop in VRAM bandwidth on the newer card. Rated FP32 compute performance is about the same, with the caution that it doesn't necessarily translate to gaming performance.

You have to remember that Nvidia has improved their memory compression from generation to generation as well. 

Turing offers roughly 25% higher memory compression than Pascal, and Pascal had roughly 20% better compression than Maxwell.

 

So while the hardware's memory bandwidth might have gone down, the effective memory bandwidth should be up.

 

 

 

There are a lot of variables at play here, which is the reason why I don't really believe @Morgan MLGman when he says Nvidia are "not optimizing drivers for their older cards". Is that just one of those myths that people parrot without actually verifying it because they want it to be true, or is there any evidence that supports these theories?

 

 

I've been trying to find any evidence for this for quite some time now and the only benchmark I have been able to find that supports this theory was a Project Cars benchmark, but that appears to only have been for some specific driver versions and later versions fixed it, so it seems to have been a bug. 

 

Oh, and some other "evidence" that I've found is that some generations of AMD cards seem to have improved more than some generations of Nvidia cards over the years. So if AMD and Nvidia both released comparable cards in 2015, the AMD card might have had a performance lead in 2017. But that's not "proof" of Nvidia deliberately gimping their old cards. It might just as well mean AMD had more room for improvement at the driver level at launch.


3 minutes ago, LAwLz said:

You have to remember that Nvidia has improved their memory compression from generation to generation as well. 

Turing offers roughly 25% higher memory compression than Pascal, and Pascal had roughly 20% better compression than Maxwell.

 

So while the hardware's memory bandwidth might have gone down, the effective memory bandwidth should be up.

Ok, I wasn't aware of that. Well, I was aware there was some kind of memory compression going on, but those numbers are bigger than I expected. For example, if you look at lossy image compression methods, while newer ones generally claim better performance than older ones, the gain isn't that big at similar perceptual quality.

 

If I apply 20% to the 1070 value in my earlier post, it still falls a little short of the 980 Ti, but it would put them closer to parity on that metric.
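As a quick sanity check on that, using the numbers from my earlier post and taking the quoted compression gain at face value as a simple multiplier:

```python
# Effective bandwidth estimate: raw VRAM bandwidth scaled by the quoted
# generational compression gain (treated here as a simple multiplier).
bw_980ti = 336.6           # GB/s, Maxwell
bw_1070 = 256.3            # GB/s, Pascal
pascal_vs_maxwell = 1.20   # ~20% better compression than Maxwell (quoted above)

effective_1070 = bw_1070 * pascal_vs_maxwell
print(f"1070 effective vs 980 Ti raw: {effective_1070:.1f} GB/s vs {bw_980ti:.1f} GB/s")
# -> ~307.6 GB/s, still a little short of the 980 Ti's 336.6 GB/s
```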

 

It raises a new question (to me): is there some benchmark for effective GPU VRAM bandwidth? Not hardware bandwidth, because we can just look up those numbers. It would have to be applied to some workload optimised for such.
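As a crude starting point, something like this might do: a sketch assuming an Nvidia card with CuPy installed, timing a large device-to-device copy. It reports an achievable streaming figure rather than the spec-sheet peak, though it won't exercise the colour compression being discussed:

```python
# Crude "effective bandwidth" probe: time a large device-to-device copy.
# Each element is read once and written once, so bytes moved = 2 * buffer size.
# This reflects achievable throughput for a streaming access pattern, not the
# spec-sheet peak, and it does not exercise framebuffer/delta colour compression.
import cupy as cp

n = 256 * 1024 * 1024                  # 256M floats = 1 GiB per buffer
src = cp.ones(n, dtype=cp.float32)
dst = cp.empty_like(src)

cp.copyto(dst, src)                    # warm-up
cp.cuda.Device().synchronize()

start, end = cp.cuda.Event(), cp.cuda.Event()
start.record()
cp.copyto(dst, src)
end.record()
end.synchronize()

ms = cp.cuda.get_elapsed_time(start, end)
gb_moved = 2 * src.nbytes / 1e9        # read + write
print(f"Effective copy bandwidth: {gb_moved / (ms / 1e3):.1f} GB/s")
```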

 

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


13 minutes ago, porina said:

It raises a new question (to me): is there some benchmark for effective GPU VRAM bandwidth? Not hardware bandwidth, because we can just look up those numbers.

Mining 😉

 

The 1070 is a lot faster: DaggerHash 19 vs 30 MH/s. Well, not really a benchmark, but mining is mainly RAM bandwidth bound.


33 minutes ago, leadeater said:

Mining 😉

 

The 1070 is a lot faster: DaggerHash 19 vs 30 MH/s. Well, not really a benchmark, but mining is mainly RAM bandwidth bound.

I don't think mining can take advantage of Nvidia's memory compression though. 


Just now, LAwLz said:

I don't think mining can take advantage of Nvidia's memory compression though. 

🤷‍♂️ Never bothered to check tbh. I think it should still work, but probably way less efficiently due to data randomness. Nvidia quotes general best/better cases when it comes to things like compression anyway, though I think newer compression implementations meet their targets better than older ones.


40 minutes ago, leadeater said:

Mining 😉

 

The 1070 is a lot faster: DaggerHash 19 vs 30 MH/s. Well, not really a benchmark, but mining is mainly RAM bandwidth bound.

Interesting point there. I wonder if VRAM is a bit like an SSD in that case. The headline figure is peak bandwidth, but the effective bandwidth may be (much) lower depending on the workload pattern.

 

Anyone remember the Ethlargement tool for top end Pascal during the first mining boom? It was a software alternative to the custom mining BIOSes of the time. It gave a significant uplift in mining performance at the same RAM speed settings (and implicitly the same max bandwidth) by optimising access latency.

 

6 minutes ago, LAwLz said:

I don't think mining can take advantage of Nvidia's memory compression though. 

My understanding, though I never checked, was that it came from some form of texture compression. Like so many things, the more you look at it, the more complicated it gets.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


19 hours ago, leadeater said:

 

I think RX 500 only got chosen as the demarcation point because it was the last product generation like this to use multiple GCN revisions, and AMD is essentially cutting ties with GCN soon. This is it for GCN; soon it'll be entirely legacy, just like TeraScale is.

 

 

I would hope that if they are going to stop supporting it, they wait at least another 5+ years, as you can still buy an RX 570 new. I think it is very poor form to stop supporting something like a GPU within a few years of offering it for sale.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


Framebuffer compression uses algorithms focused on image compression, meaning it'll be tossing around pixel and color information. Not really useful for the compute tasks used in GPU mining if you ask me, meaning raw bus width might be more beneficial than a narrower bus with more advanced compression.


3 hours ago, leadeater said:

🤷‍♂️ Never bothered to check tbh. I think it should still work, but probably way less efficiently due to data randomness. Nvidia quotes general best/better cases when it comes to things like compression anyway, though I think newer compression implementations meet their targets better than older ones.

Don't quote me on this, but I believe Nvidia's compression, or at the very least the compression I'm referring to, is only for textures. It shouldn't work on GPGPU loads.

Nvidia might have some other compression that works on things like mining though, but like you said, the data might be too random to compress.


After doing funny combos of DSR and DLSS, I wonder how it would work stacking DLSS and then FSR on top. If you think about it, this isn't like compression, where stacking just doesn't work.

 

Here, it would go like this:

NATIVE -> DLSS (performance gain from rendering at a lower resolution and upscaling to the native resolution) -> FSR (performance gain as FSR works on the native resolution output from DLSS, which it can further decrease in resolution and use its own magic on).

 

I just wonder if you can stick FSR between the DLSS output and the output to the monitor, as I have no idea if the rendering pipeline allows such a thing or if the DLSS output is always absolutely final.
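If FSR could sit between the DLSS output and the monitor like that, the resolutions would compose roughly as below. Simple arithmetic, assuming the commonly quoted per-axis scale factors (DLSS Quality and FSR Quality both around 2/3 per axis), which I haven't verified for this combo:

```python
# Rough arithmetic for stacking DLSS then FSR on a 4K panel, assuming the
# commonly quoted per-axis scale factors (DLSS Quality ~= 2/3, FSR Quality
# ~= 2/3). Actual modes/factors may differ; this just shows how they compose.
native = (3840, 2160)

def scale(res, factor):
    return (round(res[0] * factor), round(res[1] * factor))

fsr_input = scale(native, 2 / 3)        # what FSR would upscale from
dlss_input = scale(fsr_input, 2 / 3)    # what the game would actually render

print("GPU renders at:  ", dlss_input)  # ~1707 x 960
print("DLSS upscales to:", fsr_input)   # 2560 x 1440
print("FSR upscales to: ", native)      # 3840 x 2160
```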


24 minutes ago, Forbidden Wafer said:

Dota2 now supports FSR.

And so far it looks like the best implementation of FSR yet since you have a slider for render resolution from 40% to 100% which you can move in 1% increments.

 

At 1440p there is barely any image difference when going down to 80% render resolution; I really have to look hard to find any noticeable difference. At 70% there is a noticeable difference, but it's minor. At 40% the difference is quite apparent.
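For reference, at 1440p those slider positions work out to roughly these internal render resolutions, assuming the percentage applies to each axis (which is how FSR's scale factor is usually expressed):

```python
# Internal render resolution at 2560x1440 for a few slider values, assuming
# the percentage is applied to each axis.
native_w, native_h = 2560, 1440

for pct in (100, 80, 70, 40):
    w, h = round(native_w * pct / 100), round(native_h * pct / 100)
    print(f"{pct:3d}%: {w} x {h}")
# 100%: 2560 x 1440, 80%: 2048 x 1152, 70%: 1792 x 1008, 40%: 1024 x 576
```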


2 hours ago, WereCat said:

And so far it looks like the best implementation of FSR yet since you have a slider for render resolution from 40% to 100% which you can move in 1% increments.

I'm happy with anything that lets me play with my RX 480 on my 4K display. My damned RTX 2070 died within 6 months of use and I spent over a year fighting Zotac to get my money back... Decided to buy a 3060 Ti, and then Nvidia screwed up with Hardware Unboxed and I cancelled my order... Then came the crazy price hikes...

