
Overclocking a GeForce GTX 760

Toxickun

Okay, so I've come to a realization: despite doing some research, from watching videos on what a graphics card is to reading my 2008 computer textbook, I have very little in-depth knowledge about graphics cards and overclocking, because I can't seem to figure out:

Why is this GeForce GTX 760's GPU usage maxed out while everything else (the core clock, memory clock) is so low on the graph?

This has caught my attention ever since I overclocked an RTX 2080 [which I returned because RTX 3080...]. On that card I saw full memory usage when running FurMark. That was impressive! Yet for this card, no matter how much I overclock its core clock and memory clock (to the point where it gets unstable), they never reach their maximum.
Only the GPU usage does.
So, what gives?
Is there a way to overclock the GPU usage or something?
Or at least get the memory usage and core clock to hit their max? At least halfway...

[screenshot of MSI Afterburner monitoring graphs]
I'd love it if someone could tell me in detail how changing the sliders in Afterburner affects what I'm seeing.
Recently I found out that pushing the memory clock to 1000 causes artifacting and then a crash later, so that's something new. I wonder what's actually happening when I increase the memory clock.
Is it storing memory for later use by the core clock, or something?
Does anything change if the GPU temperature rises? Is it also responsible for the crash/artifacting?
And what does the core clock actually do?! All I can do is raise it to the point where it crashes, then lower it a little until it's stable, but I don't see any noticeable difference from the increase/decrease.


Do you mean why it doesn't pin to the top of the graph?

If so, that's normal.

 

As for memory usage, it depends on how much the game needs; it only uses as much as it needs.

Using the maximum memory capacity would actually be a big problem for most games, as you'd start accessing system memory, and that can cause stuttering.

 

Did you limit your GPU power limit to 70%? Also, try using Unigine Heaven instead of FurMark.

-sigh- feeling like I'm being too negative lately


Oh, it is? It's just that I've seen the RTX card hit the top of the graph and turn yellow, so I assumed this one would do the same.

Also, wow, it uses system memory? But the GPU has its own memory, doesn't it? Hence the core memory thing.

And I'm not sure, but I maxed out both core voltage and power limit.

Do you mean the Heaven benchmark thing?


Is that screenshot from running FurMark? If so, I think that's why you're only seeing 71% power usage. At some point Nvidia implemented hardware and software measures in their drivers to detect FurMark and automatically power throttle, to protect the card from damaging itself. Try Heaven like @Moonzy said, or some other benchmark or game instead. That's a much more realistic load.

 

Your core and memory clocks seem fine; they align with Nvidia's advertised boost clock (1033 MHz) and memory clock (6 GHz; you see 3 GHz because DDR = Double Data Rate).
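To put rough numbers on that (a quick sketch; the 2x DDR factor and the advertised clocks are from this thread, the rest is just arithmetic):

```python
# GDDR5 is double data rate: the effective (advertised) transfer rate is
# roughly twice the memory clock that tools like Afterburner report.
def effective_memory_clock_mhz(reported_mhz, rate_multiplier=2):
    """Effective transfer rate given the clock a monitoring tool reports."""
    return reported_mhz * rate_multiplier

# A reading of ~3005 MHz lines up with the advertised "6 GHz":
print(effective_memory_clock_mhz(3005))  # 6010
```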

1 hour ago, Toxickun said:

I wonder what's actually happening when I increase the memory clock.

During use, assets like textures need to be loaded into and out of memory. The memory clock determines how fast this can happen: a higher memory clock means textures, for example, can be loaded and unloaded faster.
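As a rough sketch of what that clock buys you (back-of-the-envelope arithmetic assuming the GTX 760's 256-bit memory bus; this is a theoretical peak, not a benchmark):

```python
def memory_bandwidth_gb_s(effective_clock_mhz, bus_width_bits):
    """Theoretical peak bandwidth: transfers per second times bytes per transfer."""
    bytes_per_transfer = bus_width_bits / 8
    return effective_clock_mhz * 1e6 * bytes_per_transfer / 1e9

# GTX 760: ~6008 MT/s effective on a 256-bit bus
print(round(memory_bandwidth_gb_s(6008, 256), 1))  # ~192.3 GB/s
```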

 

1 hour ago, Toxickun said:

Recently I found out that pushing the memory clock to 1000 causes artifacting and then a crash later, so that's something new.

Factory settings are all about guaranteed stability. Sometimes the chips can run faster (i.e. overclocking), but you can only push them so far. Once you start seeing artifacts like you describe, you know you've reached the limit and need to clock back down. That's all :)

 

1 hour ago, Toxickun said:

What does the core clock actually do?!

The GPU core does the heavy lifting in terms of calculations for rendering the scene (how things should look), e.g. the lighting (with ray tracing being the latest eye candy), maybe some physics, and slapping textures on things. A higher core clock means it can do these calculations faster. It takes time to render a frame, so the faster the GPU can do these calculations, the more frames per second it can push out.
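A toy model of that relationship (this assumes a perfectly core-bound game, so it's an optimistic upper bound, not a prediction; the clocks are just illustrative numbers):

```python
def fps_from_frame_time(frame_time_ms):
    """Frames per second is the inverse of the time it takes to render one frame."""
    return 1000.0 / frame_time_ms

def best_case_fps(base_fps, base_clock_mhz, new_clock_mhz):
    """If rendering were purely core-clock-limited, FPS would scale linearly with it."""
    return base_fps * new_clock_mhz / base_clock_mhz

print(fps_from_frame_time(16.0))               # 62.5 FPS at a 16 ms frame time
print(round(best_case_fps(60, 1033, 1100), 1)) # ~6.5% clock bump -> ~63.9 FPS at best
```

In practice the gain is usually smaller than this, since the game is rarely limited by the core clock alone.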

1 hour ago, Toxickun said:

All I can do is raise it to the point where it crashes, then lower it a little until it's stable, but I don't see any noticeable difference from the increase/decrease.

Congratulations, you've discovered overclocking! Depending on how many MHz you can add, the gains can range from hardly noticeable to a nice little boost in your FPS or benchmark score.
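That raise-until-crash, then back-off loop can be sketched as a binary search. The `is_stable` callback here is hypothetical; in practice it's you running Heaven at each offset and watching for artifacts or crashes:

```python
def find_stable_offset(is_stable, lo=0, hi=300, margin=15):
    """Binary-search the highest clock offset (in MHz) that passes a stress
    test, then back off by a safety margin for everyday stability.

    is_stable: callback that stress-tests the card at a given offset and
    returns True if no artifacts or crashes were observed (hypothetical).
    """
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if is_stable(mid):
            lo = mid          # passed: the limit is at or above mid
        else:
            hi = mid - 1      # artifacts/crash: the limit is below mid
    return max(0, lo - margin)

# Example: a card that (hypothetically) artifacts above +120 MHz
print(find_stable_offset(lambda mhz: mhz <= 120))  # 105
```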

Crystal: CPU: i7 7700K | Motherboard: Asus ROG Strix Z270F | RAM: GSkill 16 GB@3200MHz | GPU: Nvidia GTX 1080 Ti FE | Case: Corsair Crystal 570X (black) | PSU: EVGA Supernova G2 1000W | Monitor: Asus VG248QE 24"

Laptop: Dell XPS 13 9370 | CPU: i5 10510U | RAM: 16 GB

Server: CPU: i5 4690k | RAM: 16 GB | Case: Corsair Graphite 760T White | Storage: 19 TB


29 minutes ago, Toxickun said:

It's just that I've seen the RTX card hit the top of the graph and turn yellow, so I assumed this one would do the same.

it's because they exceed the scale of the graph lol

 

29 minutes ago, Toxickun said:

It uses system memory? But the GPU has its own memory, doesn't it? Hence the core memory thing.

It only falls back to system memory when the VRAM runs out, which is a bad thing; you never want that to happen.
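Rough numbers on why spilling into system RAM hurts (theoretical peak figures; the VRAM number assumes the GTX 760's 256-bit bus at ~6 GHz effective, and the card sits on a PCIe 3.0 x16 link):

```python
vram_bandwidth_gb_s = 192.3   # GTX 760 VRAM, theoretical peak
pcie3_x16_gb_s = 15.75        # PCIe 3.0 x16 link to system RAM, theoretical peak

# Anything that overflows VRAM gets fetched over PCIe, roughly an order
# of magnitude slower -- hence the stuttering.
print(round(vram_bandwidth_gb_s / pcie3_x16_gb_s, 1))  # ~12.2x slower
```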

 

30 minutes ago, Toxickun said:

I'm not sure, but I maxed out both core voltage and power limit.

I recommend against doing this for long-term use, unless you just want to have some fun with it and don't mind it dying prematurely.

 

31 minutes ago, Toxickun said:

Do you mean the Heaven benchmark thing?

Yes, it's a much more realistic load.

FurMark has killed GPUs in the past, so Nvidia may have limited the power when it detects FurMark running.

 

also, remember to quote or mention the person you're replying to so they'll see it ;) 


