
Gaming Performance Tested On 'Worn Out' RTX 2080 Ti Mining Card

K E N N Y
20 minutes ago, Kisai said:

RTX 30 series do not yet support AV1 encoding, so I'm not personally missing the 30 series, but that's around the corner. https://www.nvidia.com/en-us/geforce/news/rtx-30-series-av1-decoding/

-snip-

You definitely are not doing that on any existing iGPU. 

 

https://techcommunity.microsoft.com/t5/media-at-microsoft/av1-hardware-accelerated-video-on-windows-10/ba-p/1765451

The RTX 30 series will not support AV1 encoding either, and encoding with NVENC, QuickSync, etc. has nothing to do with how powerful the GPU is. It's all handled by dedicated ASICs on the chips.

As far as decoding is concerned, iGPUs are just as capable as discrete graphics cards (it depends, of course, on which generations you compare).
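To illustrate that the encode block is separate from the rest of the GPU: with ffmpeg, using the hardware encoder is just a matter of naming the fixed-function block, and the card's 3D horsepower never enters into it. A minimal sketch, assuming an ffmpeg build with the NVENC and QuickSync encoders compiled in (filenames and bitrate are placeholders):

import subprocess

# Pick whichever fixed-function encoder your hardware exposes; the GPU's
# shader/3D performance is irrelevant to these paths.
ENCODERS = {
    "nvidia": "h264_nvenc",  # NVENC ASIC on GeForce/Quadro cards
    "intel": "h264_qsv",     # QuickSync block on Intel iGPUs
}

def hw_encode(src: str, dst: str, vendor: str = "intel") -> None:
    """Transcode src to H.264 on the vendor's dedicated encode block."""
    subprocess.run([
        "ffmpeg", "-y",
        "-i", src,
        "-c:v", ENCODERS[vendor],
        "-b:v", "6M",    # placeholder target bitrate
        "-c:a", "copy",
        dst,
    ], check=True)

if __name__ == "__main__":
    hw_encode("input.mp4", "output_qsv.mp4", vendor="intel")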

 

 

20 minutes ago, Kisai said:

So if you're streaming content, and it now has to involve AV1 ingests, SOL. No iGPU is going to support that.

AV1 is not quite production-ready yet, and even if you argue that it is, it is supported on iGPUs as well, at least on Intel's iGPUs and discrete GPUs.

 

20 minutes ago, Kisai said:

Quicksync has largely been a joke for encoding. 

QuickSync was revolutionary when it came out. It arrived quite a while before AMD and Nvidia started adding dedicated encode engines. The quality was already pretty damn good, and it still is.

 

20 minutes ago, Kisai said:

Under Windows 7 you couldn't use the iGPU for encoding unless you were also using it as your main GPU, and under Windows 10 you can only use it if the iGPU is turned on and plugged into an active monitor.

This is wrong.

1) You could use it for encoding on Windows 7 even if it wasn't your main GPU. I would know, because I used it that way. You could work around the need to have a monitor plugged in by creating a virtual display adapter; I think that was because if no monitor was detected, the GPU would be turned off.

2) You do not need a monitor plugged in to make it work on Windows 10, not even a virtual one. As long as you've got the iGPU turned on, it should work. You might need the QuickSync SDK as well, but I'm not sure; I don't have an Intel CPU anymore.

 

 

Also, all of this still isn't a NEED. It's a WANT. You don't have to upgrade your graphics card to a 30 series if you want to stream. Even if Twitch supported AV1 (which it doesn't yet, outside of some very limited testing), it's not like other formats suddenly become unusable as soon as a new one comes out. You might get slightly better quality with a new format, but that's about it. The quality we already have is perfectly acceptable; if it were unusable, Twitch wouldn't have become popular to begin with.

AV1 vs AVC (or VP9 or whatever Twitch uses) is like debating medium vs high settings in a game. If a game is fun to play on high settings (or a stream is worth watching at high quality) then it is most likely fine to play (or watch) at medium settings too. It's a "nice to have", not "need to have". And as I said earlier, "I want my game to look slightly better" is not a more noble or important goal than "I want to mine on it and make a bit of money". At least not in my eyes.


1 hour ago, Kisai said:

Please go actually watch a game streamer.

You know that was a joke right....

 

Also oh god no, AV1 support isn't good, what can we do. Not use it, problem solved.


1 hour ago, leadeater said:

You know that was a joke right....

 

Also oh god no, AV1 support isn't good, what can we do. Not use it, problem solved.

I am a massive fan of AV1, but yeah... AV1 still has quite a few years to go before it will be production-ready.

Twitch doesn't even support AV1, and even if it did, it would be a long time before hardware encoding and decoding are widely available. And by widely available I mean something like 20% of viewers or streamers having it supported in hardware.

 

"Yeah I need a new graphics card today because in 5 years one of the features might make streamed video look slightly better".

And again, even if that is the case, that doesn't make your use case more important than let's say mining in my eyes. 


1 hour ago, leadeater said:

You know that was a joke right....

I know your posting style and knew it was a joke. That said, that's how a lot of threads get derailed if no effort is made to bring it back on topic.

 

Quote

Also oh god no, AV1 support isn't good, what can we do. Not use it, problem solved.

Think of the poor ISPs and datacenters who have to pay for expensive bandwidth! Oh no! /s

 

My thought on it really is more along the lines of: the 30xx parts having decode is fine, but since nothing is doing encode, that puts Twitch and YouTube in the position of having to do the encoding in software, and that might result in them having to invest in far more CPU-powered ingest stations. According to some benchmarks, libaom takes 4 days per minute of 4K video.

 

It kinda amazes me that people can't wrap their brains around the fact that "less bandwidth = less costs" for both the ISP and the data center (though, sadly, not you, the consumer, though it does allow you to get better video on weaker connections.) It's not a question of IF, it's a question of "how soon can we push this on consumers", leading to another round of hardware upgrades, with smart TVs and STBs/PVRs likely being the first victims, followed by laptops and other mobile devices, since you can't even play AV1 in software on anything with a battery without draining it quickly.

 

https://www.ncta.com/whats-new/report-where-does-the-majority-of-internet-traffic-come

October 17, 2019

[Chart from the NCTA report: share of global internet traffic by application category]

 

I imagine the numbers for streaming are only higher now.

And of course we had stuff like this happen:

https://www.ctvnews.ca/health/coronavirus/netflix-reduces-video-quality-in-canada-to-lower-internet-bandwidth-use-1.4869993

Quote

"We believe that this will provide significant relief to congested networks and will be deploying it in Canada for the next 30 days," said Ken Florance, vice president of content delivery in a statement on Thursday.

AKA, Canada's ISPs still charge too much and provide insufficient bandwidth to their customers.

 

All these old GPUs from mining rigs can still be used for video encoding; the video encoders don't typically improve every generation of GPU. Heck, it's more than likely that a mining rig could ALSO do video encoding to make better use of the GPU, I just don't see that happening. But since none of them, not even the 30xx cards, have AV1 encoding, their window of reusability is shrinking quickly.


1 hour ago, Kisai said:

I know your posting style and knew it was a joke. That said, that's how a lot of threads get derailed if no effort is made to bring it back on topic.

Yes, well, is telling me to go watch a game stream of any use? Do you really think I have not seen any game streamers or streams, or any Twitch stream of any kind at all?

 

It warrants a joke, because bringing up game streamers as needing a current-gen or high-end GPU right now is so off base it's funny. To be a game streamer they already have a GPU and a functioning way to stream. If they feel they need to upgrade how they do it, there are options, right down to buying an ex-lease business desktop for next to nothing to use as a streaming computer and doing it all in software on the CPU.

 

As to the topic, that's about performance degradation over time, which simply is not a thing with microprocessors. There's basically nothing to discuss unless overclocking is introduced, where it's actually possible to see some degradation in achievable performance, but by its nature the data is so unclean that it's very hard to put any validity towards it. That degradation is also primarily caused by electrical damage, which will not happen under standard usage.

 

 

1 hour ago, Kisai said:

My thought on it really is more along the lines of: the 30xx parts having decode is fine, but since nothing is doing encode, that puts Twitch and YouTube in the position of having to do the encoding in software, and that might result in them having to invest in far more CPU-powered ingest stations. According to some benchmarks, libaom takes 4 days per minute of 4K video.

These platforms will always do this task on the CPU; it's the only solution that scales and is flexible enough in workload management and protocol support. None of them will be switching over to GPU-accelerated workflows. If there is going to be a change in hardware, it will be a switch to ARM CPUs with custom built-in encoders, kept as cheap as possible so the hardware can be swapped out at minimal cost when required.

 

Encoding and decoding are entirely different discussions; trying to bring them into a single conversation is like treating mini-golf/crazy-golf and golf as the same thing. Other than a putter and a golf ball, the similarities end there.

 

1 hour ago, Kisai said:

It kinda amazes me that people can't wrap their brains around the fact that "less bandwidth = less costs" for both the ISP and the data center (though, sadly, not you, the consumer, though it does allow you to get better video on weaker connections.)

Bandwidth is cheap and easily expandable; it's actually more abundant, advancing faster than CPU and GPU power, and more power efficient as well. The main benefit of a better encoder is higher picture quality at the same bandwidth; there's not much more to it outside of home PCs and gamers, i.e. end users.

 

Where bandwidth is limited is the last mile, and everything involved with that is directly and only a result of ISPs not investing in upgrading those parts of the network. Backhauls and datacenters are not short of bandwidth, nor of hosting locations to expand into as usage demand grows. Much of these networks is unlit, waiting for capacity demand to grow, or is running older optical standards that can be upgraded when required.


On 2/24/2021 at 9:46 AM, Blademaster91 said:

Sure replacing the thermal paste and thermal pads fixes the performance issue, but the average gamer buying a used GPU isn't going to do that, also the only GPU brand I can think of that doesn't void your warranty for taking apart a GPU is EVGA.

XFX doesn't either. 

 

Quote

Can I service my graphics card myself without VOIDING Warranty?

For USA and Canada, you can service the graphics card yourself, piercing warranty stickers will not void the warranty. This includes replacing thermal paste, and cleaning out the fan/heatsink assembly of dust and debris. IMPORTANT: Any accidental physical damage to components on the card however will not be covered by warranty, servicing the card is only recommended for PC Technicians and other IT professionals. Please take care when servicing a card, components can be sensitive and delicate.

https://www.xfxforce.com/support/xfx-warranty

So as long as you don't fuck anything up afterwards, you're good. I think the warranty stickers are more so that if you knock a RAM chip or a capacitor or some shit off, they can tell that you were servicing the card yourself and that it wasn't caused by it melting off or some shit, idk.



15 minutes ago, Kisai said:

the video encoders don't typically improve every generation of GPU.

That might be a bit of a stretch? AFAIK NVENC improved every generation except Volta and Ampere. And QuickSync has changed most generations (except the later 14nm ones), although it mostly seems to be improvements like newer codecs, better chroma subsampling, and higher color depth.


1 hour ago, Kisai said:

According to some benchmarks, libaom takes 4 days per minute of 4K video.

Sure, if you are doing offline encodes and wanting to maximize the results in every aspect possible, otherwise no.

 

[Figures 1 and 2 from the Streaming Media AV1 encoder comparison article]

 

Use real time settings for real time usage applications
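To make that concrete, here's a rough sketch of how differently the same codec family gets driven for an offline, quality-first encode versus a real-time-style one, using ffmpeg's libaom-av1 and SVT-AV1 wrappers. This assumes both encoders are compiled into your ffmpeg build; the filenames, bitrates, and preset numbers are just placeholders:

import subprocess

SRC = "master_4k.mov"  # placeholder source file

# Offline, quality-first: libaom-av1 at a slow cpu-used level. Settings in
# this neighbourhood are where the "days per minute of video" figures come from.
offline = [
    "ffmpeg", "-y", "-i", SRC,
    "-c:v", "libaom-av1", "-cpu-used", "2", "-crf", "30", "-b:v", "0",
    "archive_av1.mkv",
]

# Speed-first, real-time-ish: SVT-AV1 at a fast preset with a capped bitrate,
# the kind of trade-off a live pipeline has to make.
realtime = [
    "ffmpeg", "-y", "-i", SRC,
    "-c:v", "libsvtav1", "-preset", "10",
    "-b:v", "4M", "-maxrate", "5M", "-bufsize", "8M",
    "live_av1.mkv",
]

for job in (offline, realtime):
    subprocess.run(job, check=True)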


10 hours ago, Craftyawesome said:

That might be a bit of a stretch? AFAIK NVENC improved every generation except Volta and Ampere. And QuickSync has changed most generations (except the later 14nm ones), although it mostly seems to be improvements like newer codecs, better chroma subsampling, and higher color depth.

I guess it depends on how you define "improve".

Better image quality at the same bitrate with the same format rarely improves, but like you said, things like new format support, support for higher resolutions, etc. happen quite frequently. At least for Nvidia and Intel hardware.
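If you want to see that kind of generational difference for yourself, one rough way is to dump the pixel formats an encoder wrapper advertises, which reflects the chroma subsampling and bit depths the hardware block accepts. A sketch assuming an ffmpeg build with the NVENC and QuickSync encoders present (the encoder names are just examples):

import subprocess

def encoder_pixel_formats(encoder="hevc_nvenc"):
    """List the pixel formats ffmpeg's wrapper for `encoder` advertises.

    Seeing 10-bit formats (e.g. p010le) or 4:4:4 formats (e.g. yuv444p)
    show up here is how newer chroma/bit-depth support surfaces in practice.
    """
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", "-h", f"encoder={encoder}"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "Supported pixel formats" in line:
            return line.split(":", 1)[1].split()
    return []

if __name__ == "__main__":
    print(encoder_pixel_formats("hevc_nvenc"))
    print(encoder_pixel_formats("hevc_qsv"))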

 

 

10 hours ago, leadeater said:

Sure, if you are doing offline encodes and wanting to maximize the results in every aspect possible, otherwise no.

 

Use real time settings for real time usage applications

libaom is notoriously slow.

SVT-AV1, which you linked, is way faster. I haven't been following the development of libaom that closely lately (and things change really fast, so take my words with a shovel of salt), but last time I checked, libaom was still single-threaded for the first pass, and it didn't utilize multiple cores that well in the second pass either.

Av1an works around this issue by splitting the source up into multiple segments, and each segment then gets its own instance of, for example, libaom.
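For anyone curious, here's a very reduced sketch of that chunked approach. It is not Av1an itself (Av1an splits on detected scene changes and handles audio and concatenation properly); it just cuts fixed-length segments and runs one libaom instance per chunk, assuming ffmpeg with libaom-av1 is on your PATH:

import glob
import subprocess
from concurrent.futures import ProcessPoolExecutor

SRC = "input.mkv"       # placeholder source
SEGMENT_SECONDS = 60    # fixed-length chunks; Av1an uses scene detection instead

def split_into_segments(src):
    # Cut the source into chunks without re-encoding (splits land on keyframes).
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-c", "copy", "-an",
        "-f", "segment", "-segment_time", str(SEGMENT_SECONDS),
        "-reset_timestamps", "1",
        "chunk_%03d.mkv",
    ], check=True)

def encode_chunk(chunk):
    # One libaom instance per chunk; the parallelism comes from running many
    # encoder processes side by side, not from one encoder using many cores.
    out = chunk.replace("chunk_", "av1_")
    subprocess.run([
        "ffmpeg", "-y", "-i", chunk,
        "-c:v", "libaom-av1", "-cpu-used", "4", "-crf", "30", "-b:v", "0",
        out,
    ], check=True)
    return out

if __name__ == "__main__":
    split_into_segments(SRC)
    chunks = sorted(glob.glob("chunk_*.mkv"))
    with ProcessPoolExecutor(max_workers=4) as pool:
        encoded = list(pool.map(encode_chunk, chunks))
    # The pieces would then be concatenated (ffmpeg's concat demuxer) and the
    # audio muxed back in from the original source; omitted for brevity.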

 

SVT-AV1 is the official encoder endorsed by AOMedia, and libaom is now mostly for research.


16 minutes ago, LAwLz said:

libaom is notoriously slow.

SVT-AV1, which you linked, is way faster. I haven't been following the development of libaom that closely lately (and things change really fast, so take my words with a shovel of salt), but last time I checked, libaom was still single-threaded for the first pass, and it didn't utilize multiple cores that well in the second pass either.

Av1an works around this issue by splitting the source up into multiple segments, and each segment then gets its own instance of, for example, libaom.

I would have to go back to the site I was reading, but I think that isn't so bad anymore; from memory, the basic TL;DR on the advised settings for libaom was that it takes twice the run time of x265.


19 minutes ago, LAwLz said:

libaom is notoriously slow.

Found the site, see below.

 

Quote

So that’s how I chose the encoding settings. How did performance compare? As promised, you see this in Table 4, which shows the average time, bitrate, and quality results from two separate 10-second encodes. It has three major sections. The top section, AV1 Codecs as Tested, shows the performance achieved as tested in the quality comparisons via the encoding strings shown previously. 

https://www.streamingmedia.com/Articles/ReadArticle.aspx?ArticleID=142941

 

[Table 4 from the article: average encoding time, bitrate, and quality for each AV1 encoder]


3 minutes ago, leadeater said:

That sounds good but I don't trust Jan Ozer and I would recommend you be skeptical of his claims as well.

 

Take this quote as an example:

Quote

Kudos to the AOM engineering team for accelerating results to this degree. For perspective, when I first tested AV1, AV1 took about 780 times longer to encode than x265; now it's down to 2x.

The reason his first AV1 test was so slow was that he never enabled multithreading for AV1 but did for HEVC.

 

The guy also has a habit of using the incorrect presets. Like setting libaom to cpu-used=0 but veryslow for x265.

He has also tested things with broken builds because he just uses whatever someone sends him without looking into things for himself. His scores are all over the place.

I mean, in the article you linked he says he thinks a 200% increase in encode time is worth a 4% increase in VMAF. The reason he chose a target score of ~88 was that Visionular told him to, and wouldn't you know it, Visionular performs best in his tests.

 

The article you linked seems far better than his previous articles though. According to this article, Google and Intel reached out to him and helped "improving the accuracy of my recent AV1-related research". But he still seems fairly clueless, his scores are all over the place from article to article, and he still seems to mostly just blindly follow what someone else tells him to do.


On 2/26/2021 at 5:43 PM, Mark Kaine said:

oh god please no, not the nihilistic take again. 'just use your imagination lul' 

 

you also don't need a car, a house or a toilet, you want these things (well at least I hope so in the latter case!) 

Unless the definition of nihilism changed when I wasn't looking, I am not certain my post (or my views on this situation) qualifies as nihilism. If anything, I'd argue that it represents the antithesis of nihilism given the context. Being able to discern between one's wants and needs serves to highlight what is important in one's life, not make it devoid of meaning.

 

Also, when did I ever imply that gamers should use their imagination? Plenty of retro games can be played on integrated graphics and are still quite fun to play. Playing them would still make someone a gamer by definition. If they refuse to play those games because they do not want to, then they should fork over the money for hardware capable of playing what they do wish to play.

 

I do not have a natural bias towards gamers, nor am I blindly defending miners. I myself have never mined or traded a single cryptocurrency, nor do I ever care to, as it's not my cup of tea. Unlike mining, gaming is something I've done all my life (albeit not as much lately as I would like), and one would think I would empathize more with gamers on this subject, but that's simply not true. I take the side of all consumers equally in these situations, and like it or not, miners are just as much consumers as traditional gamers. That much is black and white to me.

 

Again, if you want to argue the moral aspects of mining vs gaming, have at it. I'll gladly sit that one out and watch from the sidelines like I normally do. But if we are discussing who is "more entitled" to a graphics card, the simple answer is whoever is willing to spend the most time camping outside, or the most money buying it from those camping outside, as they simply wanted it more.



Something that no one has touched on in this thread, which I know for a fact to be true:

 

Comparing a card that is a year and a half old to a new card is already flawed from the standpoint of how the components were built. Tom's is making the assumption that the components are the same. WRONG!!! When you make chips in a fab, you are constantly tweaking the process to get better yield and better performance. Why? Money. Yield makes sense, but why performance? Because it gives you a bigger window when the parts are binned. That 10% of performance might translate into large profits because of how the parts are sorted. If your binning curve isn't a normal distribution, that 10% could equate to 30% more parts that are now passers. And that's not taking into account that the very best parts can now be sold at a premium.
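A toy illustration of that binning effect, with completely made-up numbers and a plain normal distribution (real binning curves, as noted above, often aren't normal): a small shift in the process mean pushes a disproportionately large share of dies over a fixed bin cutoff.

from statistics import NormalDist

# Hypothetical numbers: parts must hit 1.80 GHz to land in the top bin.
CUTOFF_GHZ = 1.80

before = NormalDist(mu=1.75, sigma=0.05)  # original process
after = NormalDist(mu=1.78, sigma=0.05)   # small tweak to the process mean

pass_before = 1 - before.cdf(CUTOFF_GHZ)  # fraction of dies above the cutoff
pass_after = 1 - after.cdf(CUTOFF_GHZ)

print(f"passers before tweak: {pass_before:.1%}")           # ~15.9%
print(f"passers after tweak:  {pass_after:.1%}")            # ~34.5%
print(f"relative increase:    {pass_after / pass_before:.2f}x")  # ~2.17x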

 

In a fab, because of the cost of everything, every second is important and every single percent is important. Because of the volume that goes through a fab, even amounts that seem inconsequential are huge when looking at the big picture. It all translates into X amount of savings, which allows Y more wafers to run and gives us Z more profit.

 

Source:  Been in the business 25 years. 10 of those years in a bunny suit in a fab.

 

 


Nah bro, the mined 2080 Ti runs 20 to 30 degrees hotter than the non-mined one, which runs at 50+°C.

 

Source: 

Blow for blow, the mined GPU runs considerably hotter, to the point that it will fail and die MUCH faster than a non-mined one.

 

 

This is basic logic to me, but people don't really like logic... allow me to explain the situation clearly, understandably, and logically.

 

 

Your average second hand GPU buyer will not have the knowledge, capacity or will to break open a GPU's cooler and replace the thermal paste and thermal pads inside the GPU to fix the temperatures. This is the responsibility of the miner who is selling the card. The buyer does NOT know how to operate the GPU outside of plug and play situations and will not understand why their purchased GPU runs so hot.

 

Furthermore, the higher temperature costs on average 10% in performance. But the real problem is the temperature itself, which of course brings more noise with it as well.

 

 


5 minutes ago, Rym said:

This is the responsibility of the miner who is selling the card.

This must be the largest amount of bs i've read in a long time



13 minutes ago, Rym said:

Blow for blow, the mined GPU runs considerably hotter, to the point that it will fail and die MUCH faster than a non-mined one.

Three things: clean the cooler and there's no difference, unless the paste needs replacing (rare); the thermal controls on the GPU will not let it die prematurely, as that's the whole point of them; and the same thing happens to a graphics card used exclusively for gaming.

Things get old and dirty; that's not a mining-only problem.


8 minutes ago, suicidalfranco said:

This must be the largest amount of bs i've read in a long time

Thermal paste and pads do not need replacing after 2-3 years; it's nobody's responsibility because it's not necessary. Just blow the damn crap out from between the heatsink fins.


27 minutes ago, Rym said:

This is the responsibility of the miner who is selling the card.

So what if it's a second hand gaming card, with no miner involved?

Do you blame it on a miner too? 😂

 

Also, from my experience with so many GPUs, NONE of them needed repasting after four years; their temps never changed much compared to when new. Just dust the damn thing.

Don't tell me your average gamer won't dust a GPU when they buy one?

Even if they don't, that's THEIR responsibility, since it's their crap now.

The seller sells the item as-is; if you think dusting isn't worth it, then don't buy it.

It takes a lot of dust to cause a 20°C delta, don't tell me you can't see it.



1 hour ago, Moonzy said:

So what if it's a second hand gaming card, with no miner involved?

Do you blame it on a miner too? 😂

 

 

You do realize my post is exclusively about mined GPUs and in no way, shape, or form does it refer to non-mined second-hand GPUs?


1 minute ago, Rym said:

You do realize my post is exclusively about mined GPUs and in no way, shape, or form does it refer to non-mined second-hand GPUs?

You do realize dust is not exclusive to mining GPUs, right?



34 minutes ago, Moonzy said:

You do realize dust is not exclusive to mining GPUs, right?

I guess there is a perception that graphics cards wear in a similar manner to, say, car tyres. They don't, but it seems like people think this. It's also extremely, extremely hard to make a thermal pad dry out and lose any amount of thermal conductivity; that takes like decades on thicker ones. Thermal grease will get there a bit quicker, but I've not repasted my GPU water block since I put it on back in early 2014, I don't turn my computer off, and I largely run the loop passively, which means actually hot water temps at idle and load (very hot at load lol). There's been no change in the thermal control/management of my GPUs under what is probably even more thermally stressful conditions than a mining GPU sees.


21 hours ago, leadeater said:

I guess there is a perception that graphics cards wear in a similar manner to, say, car tyres. They don't, but it seems like people think this. It's also extremely, extremely hard to make a thermal pad dry out and lose any amount of thermal conductivity; that takes like decades on thicker ones. Thermal grease will get there a bit quicker, but I've not repasted my GPU water block since I put it on back in early 2014, I don't turn my computer off, and I largely run the loop passively, which means actually hot water temps at idle and load (very hot at load lol). There's been no change in the thermal control/management of my GPUs under what is probably even more thermally stressful conditions than a mining GPU sees.

Perhaps nVidia should keep a WOR database like is kept for vehicles. If a card has been sold to someone who used it for mining, then users have the right to know if it's been used as such to make a purchase decision on the secondary market.

 


3 hours ago, Kisai said:

Perhaps nVidia should keep a WOR database like is kept for vehicles. If a card has been sold to someone who used it for mining, then users have the right to know if it's been used as such to make a purchase decision on the secondary market.

I just don't see how this is important information at all. So long as the card doesn't come with a custom vBIOS purposed for mining, its usage doesn't matter; it really does make no difference at all. I'd rather buy a GPU used for mining in an open rig than a GPU used for gaming that has spent all its life in a crappy, negative-pressure case full of dust and heat. Even then the difference doesn't actually matter, but strictly speaking the mining GPU was run in a better environment than the gaming card was.

 

I really do think most people aren't thinking these things through properly. What exactly do people think mining does that degrades, or could degrade, the card so much, especially compared to any other workload type?

 

Keeping track of what a card was used for is completely pointless and doesn't actually provide meaningful information to a second-hand buyer. All that's going to happen is personal uneducated bias coming into it, with buyers passing up a perfectly good card for one that is potentially worse simply because it was only used for "gaming", as if that makes any difference.

 

You know the worst cards out of all of them? Server GPUs used in 2U servers. They have far less board design effort put into them (do a VRM analysis and you'll see just how bad), and the coolers are below average at best. We have Tesla M10s installed in 2015 still running today in 1U-height nodes. If for whatever reason you want to be selective about which GPUs not to buy, it's server GPUs used in 1U and 2U servers; they have by far the hottest and hardest life of anything, and they are still fine.


2 hours ago, leadeater said:

personal uneducated bias

At this point I just wanna cry tbh

Most of the hate on crypto is really unfounded and unreasonable; it probably stems from jealousy.

 

Somehow miners who monitor temps and whatnot (most mining applications that I know of report card temps) are damaging the GPU more than gamers who game on a dusty GPU that overheats, and who wouldn't know until the game's fps drops beyond playable.

 

The application I use:

[Screenshot of the mining application's temperature readout]

 

The temp starts turning yellow if it's above 70.

Friggin' 70, most gamers run their GPU above that.

Some of my GPUs run cooler under load than your GPU does when idling.
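For reference, the same numbers the mining software shows are trivial to log yourself. A rough sketch for Nvidia cards, assuming nvidia-smi is installed; the 70-degree threshold just mirrors the app's yellow warning and is otherwise arbitrary:

import subprocess
import time

def read_gpu_stats():
    """Return a list of (temperature_c, power_w) strings, one per Nvidia GPU."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=temperature.gpu,power.draw",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split(", ") for line in out.strip().splitlines()]

if __name__ == "__main__":
    while True:
        for idx, (temp, power) in enumerate(read_gpu_stats()):
            warn = "  <-- warm" if float(temp) > 70 else ""
            print(f"GPU{idx}: {temp} C, {power} W{warn}")
        time.sleep(10)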

 

Somehow feeding the card lower voltage is more damaging, somehow fewer thermal cycles are more damaging.

 

Saying miners don't care about their hardware is flat-out false; we're in it to earn money, not to kill hardware, and hardware isn't free.


