
New vs Used GPU Benchmarks

Thought you guys were testing older nvidia drivers vs newer nvidia drivers to see if older cards lose performance

I edit my posts a lot, Twitter is @LordStreetguru just don't ask PC questions there mostly...
 

Spoiler

 

What is your budget/country for your new PC?

 

what monitor resolution/refresh rate?

 

What games or other software do you need to run?

 

 


So, how does overclocked chewing gum compare to stock chewing gum?

Do not feed.


Talks about how AMD helped make this video possible, uses Nvidia GTX 480.


There are a few things the video probably missed.

Your old card was relatively well cared for: the heatsink was probably removed a few times, you probably tested water cooling on the card at some point, the thermal paste on it is probably not the original from six years ago, and the heatsink was likely cleaned of dust along the way.

 

I've seen (and owned) cards like the Radeon 4850, for example, whose heatsinks have small-diameter channels that basically clog with dust after months of use, forcing the fan to spin faster constantly just to keep the temperature below 90°C. I remember having to use a straw to blow air through those channels and clear the dust out of the heatsink. There are better-designed heatsinks out there nowadays; the one on the Radeon 4850 was particularly bad.

 

Then there's also the issue of heat, but not in the sense discussed in the video.

All cards have polymer or electrolytic capacitors with a finite lifetime. For the low-ESR, high-quality ones used in VRMs, the rating is typically 2,000 hours at 105°C, and the general consensus is that the lifetime doubles for every 10°C lower for electrolytic capacitors (and increases roughly 3-4x per 10°C for polymers). So you get 4,000 hours at 95°C, 8,000 hours at 85°C, 16,000 hours at 75°C, and so on.

This lifetime rating means that if the capacitors are run at their maximum ratings and that temperature for that period of time, at the end of it they will still be within a few percent of their specifications.
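
That rule of thumb is easy to turn into a quick calculation. A minimal sketch, using the 2,000-hour/105°C rating and the doubling-per-10°C rule quoted above (a rough heuristic, not any specific capacitor datasheet):

```python
def expected_lifetime_hours(temp_c, rated_hours=2000, rated_temp_c=105, factor=2.0):
    """Rule-of-thumb capacitor lifetime: the rated lifetime is
    multiplied by `factor` (2 for electrolytics, ~3-4 for polymers)
    for every 10 degrees C below the rated temperature."""
    return rated_hours * factor ** ((rated_temp_c - temp_c) / 10)

# Electrolytic capacitor rated 2,000 h @ 105 C:
print(expected_lifetime_hours(95))   # 4000.0
print(expected_lifetime_hours(85))   # 8000.0
print(expected_lifetime_hours(55))   # 64000.0 -- roughly 7 years of 24/7 use
```

At the 50-60°C a video card capacitor typically sits at, the same rule predicts lifetimes measured in years even of continuous use, which matches the point below.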

In a video card, the capacitors usually hover around 50-60°C; they heat up through their leads on the PCB and from the components around them, but at the same time they're cooled by airflow from the cooler. So they will last a relatively long time, years, even if they're kept very warm.

But they still degrade slowly over time, and faster if the card is used for bitcoin mining or in other cases where it runs hot 24/7.

 

Basically, the point is that the capacitors degrade over time but will still be fine, within the specifications the engineers who designed the card's VRM expected, just closer to the minimum permitted specs. The VRM may, however, have to work harder to deliver the voltages and currents the GPU and memory chips expect, so as the card gets older the power supply may be more "abused": you may see higher input current spikes, which may eventually cause problems on the motherboard.

 

You could only determine this by tapping into the 12V pins on the PCI Express slot and on the PCI-E power connector and measuring the current and voltage, the way Tom's Hardware measures power consumption on video cards these days: sampling the instantaneous power consumption hundreds of times a second to catch current bursts, spikes, etc.

See how they do the measurements here: http://www.tomshardware.com/reviews/amd-radeon-rx-480-polaris-10,4616-9.html
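
The idea behind that kind of measurement can be sketched in a few lines. The sample values here are made up for illustration (a real rig samples voltage and current at very high rates with a scope or DAQ hardware):

```python
# Toy illustration of high-rate power sampling: multiply each
# voltage/current sample pair to get instantaneous power, then
# report the average and the peak. Spiky current samples are
# included to show how the peak can far exceed the average.
volts = [12.02, 11.98, 11.95, 12.01, 11.90, 12.00]   # V, sampled on the 12 V rail
amps  = [6.1,   6.3,   14.8,  6.2,   15.5,  6.0]     # A, sampled at the same instants

power  = [v * i for v, i in zip(volts, amps)]        # instantaneous watts
avg_w  = sum(power) / len(power)
peak_w = max(power)

print(f"average: {avg_w:.1f} W, peak: {peak_w:.1f} W")
```

An average-only meter at the wall would report something near `avg_w` and completely miss the transient spikes near `peak_w`, which is exactly what the high-rate method is for.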

 


That roof analogy was weird.

Are Canadian roofs built in a way that they never develop small local leaks (which would still be better than having no roof at all)? Do they only fail catastrophically?


I had a used GTX670 die on me 6 months ago.

Whereas the one I owned from new is still going fine now.

 

Details:

I had the 'Primary' rig with an EVGA reference GTX670, which I bought from new for about £350-£400 not long after launch.

And 12 months ago, I bought a second EVGA reference GTX670 for my 'Secondary' rig from eBay, listed as "Not used for mining" (so I figured it obviously was used for mining, but thought it worth the risk). Bought it for £100 shipped.

 

I sold my 'Primary' rig to a friend, and kept my 'Secondary' machine.

 

The used 670 died within 6 months of buying it.

 

The one that I sold to my friend is still going strong today.

 

Also worth noting: the used 670 I bought was ~10 FPS slower in every scenario than the one I had owned from new. But that could just be down to the silicon lottery.

 

 

------

One new card and one used card with an unknown history is a very small sample size.

 

I wonder if anyone else here has had a similar experience?

 

:)


You mentioned the ASUS X99 Deluxe II in this video.

Can we get a review of it please?

I talk in brackets (a bit like this (sometimes multilayered) (I know it's hard to read but I sort of think like this(maybe it has something to do with being a programmer)))


Nice to see the video on a rig with an overclocked GTX 480.


6 hours ago, nicklmg said:

Amazon: http://geni.us/t4QAAc
NCIX: http://bit.ly/29uOYi3

 

Yes we're testing some 480s today - no, they are not of the "RX" variety. Let's see if hard use really causes performance degradation!

 

 

Now... do the same testing with CPUs, RAM, and motherboards. Just because these graphics cards don't degrade doesn't mean that other components don't.


With video cards, there is pretty much no reason to implement planned obsolescence, as these companies are confident that people will upgrade if real improvement is made.

 

With smartphones, the companies tend to take measures such as using non-user-replaceable batteries, so that over time the user unconsciously changes their usage habits (checking the device less, not playing that game during the lunch break, not taking 20 pictures in a row because the squirrel is incredibly cute).

 

What dissatisfies people most is when their phone no longer feels like it allows them to do everything they want in a comfortable fashion. With a lithium polymer battery easily losing 20% or more of its capacity over a year or so, it is easy to see why companies are focused on making it difficult to replace the battery.

Compare that to design choices such as slapping a crappy CapXon-branded capacitor right above the heatsink of a linear regulator so that the monitor you bought does not last much beyond 2 years (cheap monitors do this frequently), or using crap parts in general with poor reliability. With smartphones, they design them to not need the really crap components, which can cause unpredictable failures and increase RMA costs; instead, they rely on the battery, which has a datasheet containing a full characterization of how long it will last given charge rates, temperatures, and many other factors. This means the company can safely tune a device to ensure that, with regular use, you will be at 60% battery capacity after 2 years, and to make it through the day you will have to cut your use down by almost 50%. Since the loss is gradual, you do not notice it as much, unless you run a controlled battery life test (100% CPU and 100% brightness until the battery dies).
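
That gradual fade can be sketched with a toy model. The ~20%-per-year figure is the one quoted above, not from any datasheet; real degradation depends on charge rate, temperature, and cycle depth:

```python
def remaining_capacity(years, fade_per_year=0.20):
    """Fraction of original battery capacity left after `years`,
    assuming a constant fractional loss per year (a crude model;
    real battery fade is characterized per-cell in the datasheet)."""
    return (1 - fade_per_year) ** years

print(round(remaining_capacity(1), 2))  # 0.8
print(round(remaining_capacity(2), 2))  # 0.64 -- close to the "60% after 2 years" above
```

Because each year's loss is a fraction of an already-shrunken capacity, the decline compounds smoothly, which is precisely why users adjust their habits instead of noticing a sudden failure.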

 

Another way to gauge the long-term functionality and reliability of a product is to look at the warranty period. A warranty is a selling point, so companies are encouraged to make warranties as long as possible. The length limit is determined by the failure rate over time: if the component selection is known to become significantly more likely to fail after 2 years (e.g., the failure rate jumps from 5% to around 20% at around the 2-year mark), then 2 years will be the limit.

With smartphones, the limit is typically based on the battery life. For other devices, which do not have components with as predictable a failure rate, manufacturers work from historical data on how long the parts last.

 

For example, the Backblaze data:

 

  • WD red (3 year warranty)
  • Seagate Barracuda (1 year warranty)
  • Hitachi GST Deskstar (3 year warranty)

[Backblaze hard drive failure-rate chart]

 

 


7 hours ago, Streetguru said:

Thought you guys were testing older nvidia drivers vs newer nvidia drivers to see if older cards lose performance

They don't, end of story.

That's that. If you need to get in touch, chances are you can find someone who knows me who can get in touch.


In Europe the minimum warranty allowed is 2 years.

 

Yeah, Seagate drives are often worse than other brands. Just as a side note, Backblaze's figures don't really apply to everyone, as they use the drives in very particular scenarios: 30-40 hard drives in a single case, with more-than-normal vibration, potentially higher temperatures if they're not careful, and 24/7 usage.

 

Google's hard drive study was much better at showing what to expect from hard drives. Too lazy to find the link now; just google "google hard drive pdf" and you'll find the document.


8 hours ago, Streetguru said:

Thought you guys were testing older nvidia drivers vs newer nvidia drivers to see if older cards lose performance

Yeah... me too....

but I guess we all somewhat knew the answer to that already.

:ph34r:

If it is not broken, let's fix it till it is.


2 hours ago, thekeemo said:

They dont end of story.

It actually happened with The Witcher 3, though it was fixed later on.

 

and a similar thing happened with Fallout 4 when the 1.3 patch was released: Kepler cards lost performance. I don't know if that was ever fixed. And yeah, those are random Russian site benchmarks; someone more mainstream has needed to validate them for a while now, I think.
 

 


5 minutes ago, Streetguru said:

someone more mainstream has been needing to validate them for a while now I think

someone mainstream won't validate it because nVidia has them by the balls

CPU: Intel i7 5820K @ 4.20 GHz | Motherboard: MSI X99S SLI PLUS | RAM: Corsair LPX 16GB DDR4 @ 2666MHz | GPU: Sapphire R9 Fury (x2 CrossFire)
Storage: Samsung 950Pro 512GB // OCZ Vector150 240GB // Seagate 1TB | PSU: Seasonic 1050 Snow Silent | Case: NZXT H440 | Cooling: Nepton 240M
FireStrike // Extreme // Ultra // 8K // 16K

 


Lol, that ad at the end. You spend 40 seconds listing all the ways you don't use paper, and then try to sell us a printer!

 

"We made our office more efficient by ditching old-fashioned written media like whiteboards and memos, and switching to electronic databases.  But if for some reason you want to do the opposite of that, here's a printer to drown your office in useless paper!"  

 

Do you guys not use paper at all, @LinusTech?  Because if you do, you should mention it.  


The video should have a thumbnail saying 'A 480 from EVGA?'

 

Such clickbait, much wow.

Hardcore Hardware Whore | Check out my Blog for the Latest Tech News and more | Cricket Fan | Weapons Nut |


Wow HP sponsorship, congrats. Really fantastic video guys.

 

On 7/5/2016 at 11:17 AM, mariushm said:

Backblaze's figures don't really apply to everyone

Their testing shows us which drives will survive better in any circumstances; it's good data.


6 minutes ago, BenjaminC said:

 

Their testing shows us what drives will survive better in any circumstances, it's good data.

But the problem is they don't reflect real-world figures for some hard drive models.

 

For example, let's say a user buys a WD Green drive and uses it as recommended: 6-10 hours a day, with normal average home use (a typical percentage of time spent reading or writing data to the platters versus just idling), and with the drive parking its heads from time to time as designed. A regular home user who buys a WD Green drive will also most likely buy only one hard drive, or two at most, and maybe an SSD.

 

That behavior is completely different from how Backblaze operates its drives: lots of drives in close proximity to each other, heating up from the heat radiated by neighboring drives, and subjected to vibrations from the other drives and from the fans trying to keep the whole server cool.

 

You just can't put a WD Green drive in the same league as NAS or enterprise drives and then bitch about WD Green failing more often, so for WD Green it's not "good data".

 

Statistically, the chances of a WD Green drive failing in a Backblaze server are much higher than in home use, so if you check the Backblaze statistics and see that 5% of WD Green drives fail within 2 years, that 5% figure is irrelevant for home users and WD Green drives.

 

The statistics from the Google study would be more relevant: those are servers with anywhere from one to a few hard drives each, not 40-60 drives, so aside from the 24/7 operation, Google's servers may be much closer to a normal home user's computer.

Google also experimented with the temperatures inside its servers, so from that study we can learn at which temperatures hard drives statistically fail more often, and so on. Backblaze's servers all run at the same temperature.

 

Backblaze also bought a huge amount of Seagate desktop drives over time, and when they failed, they sent a whole lot of them back to Seagate, which returned refurbished drives, most of which died within months. These premature failures skewed some of the statistics. They do mention this in their blog and in the stats, but it's still worth pointing out. Now I think they no longer install refurbished drives in their servers; they only use new drives, or drives replaced with new ones, not repaired or refurbished units.

 


@nicklmg that open air chassis looks really great. Can you tell me what it is called and where I can buy it?


On 7/5/2016 at 8:24 AM, TFMRealm said:

Now... do the same testing with cpus, ram, and motherboards. Just because these graphics cards don't degrade, doesn't mean that other components don't 

 

Unless I'm missing something here... a video card has a processor of some kind, a PCB with many power-delivery components, and memory. I'd imagine the video card lesson applies to the separate components too, and the answer is more than likely no, they don't degrade either. Capacitors were brought up by mariushm, though, so I imagine capacitors would follow the same general idea.

