Memory clock OC - does higher offset degrade faster?

I have my rig set up mining Ethereum and I put a +500MHz memory clock offset on all of my mining cards: a 2070, 2060 Super, 2060, 1080, and two 1070s. Everything but my 1080 Ti (which needs it reduced, because 1080 Tis are the spaz of the GPU world). All of the cards are fine, and +500MHz seemed like a reasonable offset. But I have seen people discuss pushing the memory clock as high as it will go while staying stable, sometimes putting a +1000MHz or higher offset on their cards.

 

As I would like to reuse the cards for other purposes when they are done mining, I have been leery about going too high with the memory clock, so I keep them cool and undervolted. But does it matter? Is there any appreciable difference in memory degradation between +500MHz and +1000MHz if the card is stable and heavily undervolted (my 20-series cards are all locked in at 700mV and my 10-series at 900mV... 711W draw for all of them)?

 

Am I just leaving hashing power on the table or is it safer not to push things too far?


As far as I know, as long as you keep them cool you'll be fine.

 

And yes, you will definitely want to OC the sht out of your GPU memory for faster mining, since Ethereum mining is bound by VRAM bandwidth.


IIRC most of the degradation comes from running hot or a lot of thermal cycling, so as long as you keep memory temperatures in check it shouldn't be much different from running an OC for gaming.

 

Regarding the +1000 on memory, that can be easy to reach, but you need to check whether performance actually increases. Pascal (and up) does error detection and retry on the memory bus (often loosely called ECC), so while the clock might be higher, if the card spends more time retransmitting corrupted data than the extra clock saves, the net result is a drop in performance.
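To illustrate that trade-off, here's a toy Python model. All numbers are made up for illustration; this is not real driver behavior, just the basic arithmetic of why a higher clock with retries can lose to a lower clock running clean.

```python
# Toy model: usable memory throughput once error correction starts forcing
# retransmissions at higher clocks. Numbers are illustrative only.

def effective_throughput(clock_mhz: float, retry_fraction: float) -> float:
    """Usable bandwidth after a fraction of transfers is spent on retries."""
    return clock_mhz * (1.0 - retry_fraction)

# A hypothetical +1000 offset that triggers 15% retries ends up slower than
# a +500 offset that runs error-free (7000 MHz stands in for the base clock):
clean = effective_throughput(7000 + 500, 0.0)
retrying = effective_throughput(7000 + 1000, 0.15)
print(clean, retrying, retrying < clean)
```

That's why you judge the offset by the reported hash rate, not by the clock number: the clock reading alone won't show you the retries.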

Crystal: CPU: i7 7700K | Motherboard: Asus ROG Strix Z270F | RAM: GSkill 16 GB@3200MHz | GPU: Nvidia GTX 1080 Ti FE | Case: Corsair Crystal 570X (black) | PSU: EVGA Supernova G2 1000W | Monitor: Asus VG248QE 24"

Laptop: Dell XPS 13 9370 | CPU: i5 10510U | RAM: 16 GB

Server: CPU: i5 4690k | RAM: 16 GB | Case: Corsair Graphite 760T White | Storage: 19 TB


44 minutes ago, tikker said:

IIRC most of the degradation comes from running hot or a lot of thermal cycling, so as long as you keep memory temperatures in check it shouldn't be much different from running an OC for gaming.

 

Regarding the +1000 on memory, that can be easy to reach, but you need to check whether performance actually increases. Pascal (and up) does error detection and retry on the memory bus (often loosely called ECC), so while the clock might be higher, if the card spends more time retransmitting corrupted data than the extra clock saves, the net result is a drop in performance.

Thanks for the info. The bit about Pascal and ECC makes sense. My 1080 Ti gives me the best hash rate with a -400MHz memory clock offset. The 1080 and 1070s both showed improvement bumping it up to +500MHz. The Pascal cards also do not like the decrease in power. The 20-series cards didn't drop a blip locking in at 700mV.

 

I guess I just need to go through card by card and increase until it crashes. Any advice on the core clocks? My understanding is that the Ethereum algorithm is memory-bound, so I have not even bothered touching the core clocks. Since the cards are all heavily undervolted and running in the upper 40s (°C), I might as well squeeze every bit of hash rate out of them that I can.
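That trial-and-error process can be sketched as a simple loop. `runs_stable` here is a hypothetical stand-in for whatever the real test is (apply the offset in Precision X1, mine for a while, see if it crashes or throws errors):

```python
# Sketch of the "increase until it crashes" tuning loop. runs_stable is a
# placeholder for a real stability test on the card.

def find_max_stable(start: int, step: int, limit: int, runs_stable) -> int:
    """Walk the memory offset up in steps; return the last offset that passed."""
    best = start
    offset = start + step
    while offset <= limit and runs_stable(offset):
        best = offset
        offset += step
    return best

# Example with a fake stability test that fails above +1100:
print(find_max_stable(500, 100, 2000, lambda o: o <= 1100))  # 1100
```

In practice you'd also back the result off a step or two for margin, since a card that is marginally stable today can start erroring as it ages or the room warms up.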


11 minutes ago, skaughtz said:

Thanks for the info. The bit about Pascal and ECC memory makes sense. My 1080 Ti gives me the best hash rate with a -400MHz memory clock. The 1080 and 1070's both showed improvement bumping it up to +500MHz.  The Pascal cards also do not like the decrease in power.

The oddities around Pascal are due to a particular hardware "issue" that Nvidia has acknowledged and that cannot be fixed. Those cards definitely benefit from a core overclock (whereas you'd normally underclock for mining), and at times I also see better hashrates from my 1080 Ti by underclocking the memory. It seems related to the Memory Controller Load reading in GPU-Z. It often benefits me to get that as close to 100% as I can, though not always, I've found lately. Honestly, I've now just set it at 80% PT, +225 core, +750 mem and I'm happy at around 35-38 MH/s. Switching my display to the iGPU I can get it up to 42, but that's about where mine tops out.



5 hours ago, tikker said:

The oddities around Pascal are due to a particular hardware "issue" that Nvidia has acknowledged and that cannot be fixed. Those cards definitely benefit from a core overclock (whereas you'd normally underclock for mining), and at times I also see better hashrates from my 1080 Ti by underclocking the memory. It seems related to the Memory Controller Load reading in GPU-Z. It often benefits me to get that as close to 100% as I can, though not always, I've found lately. Honestly, I've now just set it at 80% PT, +225 core, +750 mem and I'm happy at around 35-38 MH/s. Switching my display to the iGPU I can get it up to 42, but that's about where mine tops out.

I would really love some more insight into the whole memory degradation thing.

 

I decided to just mess with my EVGA RTX 2070 Black Gaming to see what I could get out of it. As of writing this, I stopped at +1350MHz on the memory clock and found that -150MHz was the sweet spot on the core clock: -200MHz dropped the hash rate, while anything above -150MHz didn't make a difference. It is still locked in at 719mV, running at 47C, and I increased the hash rate by more than 5MH/s.

 

It still has not crashed, which worries me. Am I quietly burning something up? +1350MHz seems way too good to be just silicon luck. Or is that all it is?

 

Forgive me that I am not in the habit of massively overclocking hardware. I like to protect my investments.


12 minutes ago, skaughtz said:

It still has not crashed, which worries me. Am I burning up something unknown?  +1350MHz seems way too good to be just silicon luck.  Or is that all that it is?

That could be the error correction kicking in. Do you see a hashrate increase between, say, +1000 and +1350?

 

Most important would be the memory temperatures. Since there are no memory temperature sensors on pre-30-series cards, I believe, your best bet would be aiming an IR thermometer at the memory chips or something. There's no shame/harm in tuning it back down if you're happy sacrificing that 5 MH/s for safety.



12 minutes ago, tikker said:

That could be the error correction kicking in. Do you see a hashrate increase between, say, +1000 and +1350?

 

Most important would be the memory temperatures. Since there are no memory temperature sensors on pre-30-series cards, I believe, your best bet would be aiming an IR thermometer at the memory chips or something. There's no shame/harm in tuning it back down if you're happy sacrificing that 5 MH/s for safety.

That's exactly my concern. There is a notable hash rate increase with each step from +1000 on, but I have no way of telling what the memory temps are outside of the main GPU temperature reported through Precision X1, which is 47C (all of the cards are EVGA, and I use that program for everything but the voltage, which is locked at 700mV in the mining command line).

 

I figured that by locking in the mV through T-Rex it wouldn't feed anything on the card more than it could handle, since that is quite a bit undervolted to begin with. My original goal was to drop the total power draw. Now that I have that down I'd like to eke out what hash rate I can, but not at the expense of burning the cards up in a month.

 

Edit: With all of that said, it did increase the reported power draw from 93W to 99W.  That worries me since the GPU temp didn't increase.
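One way to put that 6W in perspective is MH/s per watt before and after. The 35 MH/s baseline below is an assumption for illustration; the thread only gives the roughly 5 MH/s gain and the 93W to 99W draw.

```python
# Efficiency check: did the extra power draw buy a proportionate hash rate
# gain? Baseline of 35 MH/s is assumed; only the ~+5 MH/s delta and the
# 93 W -> 99 W figures come from the posts above.

def mh_per_watt(hashrate_mh: float, power_w: float) -> float:
    return hashrate_mh / power_w

before = mh_per_watt(35.0, 93.0)
after = mh_per_watt(40.0, 99.0)
print(f"{before:.3f} -> {after:.3f} MH/s per W")
```

Under those assumed numbers the efficiency actually improves, so the extra draw is coming with a more-than-proportionate hash rate gain.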
