
When Should You Actually Upgrade Your GPU?

TechMasterMind

I know this sounds like a really simple question, but I'm borderline at 120fps in GTA V at minimum 'CPU settings'.

I'm at 100% on both CPU and GPU usage.

Will upgrading my graphics card actually improve framerate?

Basically, will a graphics card that renders frames faster actually output more frames, letting me get higher fps and/or crank up any settings that eat into CPU time?

My specs:

i5-2400

2x4GB DDR3-1333

GTX 660

i7 8700K 5.0GHz core / 4.0GHz cache (stock cooler)

2x8GB 3400MHz RAM 19-19-19-38

GTX 1060 3GB, 2050MHz core / 9500MHz memory


2 minutes ago, TechMasterMind said:

I know this sounds like a really simple question, but I'm borderline at 120fps in GTA V at minimum 'CPU settings'.

I'm at 100% on both CPU and GPU usage.

Will upgrading my graphics card actually improve framerate?

Basically, will a graphics card that renders frames faster actually output more frames, letting me get higher fps and/or crank up any settings that eat into CPU time?

My specs:

i5-2400

2x4GB DDR3-1333

GTX 660

Yeah. Overclock your RAM to 1600 if you can, and grab yourself a nice GTX 960, or a 690.

CPU: Intel Core i9 9900K | Ram: 16GB Corsair LPX 3000 DDR4 | Asus Maximus XI Hero Z390 | GPU: EVGA RTX2080 XC | 960 EVO Samsung 500GB M.2 | 850 EVO Samsung 250GB M.2 | Samsung 1TB QVO SSD | 1TB HDD WD Blue 

Laptop: Dell XPS 13 2 in 1 9370 | I7 1065G7 | 32GB DDR4 | 1TB SSD |


2 minutes ago, TechMasterMind said:

I know this sounds like a really simple question, but I'm borderline at 120fps in GTA V at minimum 'CPU settings'.

I'm at 100% on both CPU and GPU usage.

Will upgrading my graphics card actually improve framerate?

Basically, will a graphics card that renders frames faster actually output more frames, letting me get higher fps and/or crank up any settings that eat into CPU time?

My specs:

i5-2400

2x4GB DDR3-1333

GTX 660


I think an i7 for LGA 1155, something like a 1050 Ti, and another 8GB of RAM would be a good upgrade, and yes, it is time.


You're missing the most important spec: your monitor. Can it even output 120Hz or more, or is it a 60Hz monitor?

 

The point being, if it's a 60Hz monitor, all of this is moot. You should turn up the quality settings until you get a solid 60fps, then ask this same question.

 

If it's a 120Hz monitor, then no, pushing more frames won't do you any good. But a new GPU will let you turn up settings and keep 120fps.

 

If it's a >120Hz monitor, then yes, a new GPU will push more frames and be better.

 

**edit**

And yes, you would basically need to build a new PC. This platform is too old to be worth upgrading much further.
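
If it helps, here's the rough mental model as a quick sketch. A frame isn't done until both the CPU's share of the work (game logic, physics, draw calls) and the GPU's share (rendering) are finished, and vsync then caps what actually reaches the monitor. All the numbers below are made up, just to show the logic:

```python
# Rough frame-pacing model: whichever of the CPU or GPU takes longer per
# frame sets your framerate; vsync then caps output at the refresh rate.
# All numbers are hypothetical.

def effective_fps(cpu_ms: float, gpu_ms: float, refresh_hz: float, vsync: bool = True) -> float:
    """Approximate displayed fps given per-frame CPU and GPU time in milliseconds."""
    render_fps = 1000.0 / max(cpu_ms, gpu_ms)        # the slower side limits you
    return min(render_fps, refresh_hz) if vsync else render_fps

# Today: CPU and GPU both need ~8.3 ms per frame -> ~120fps, both at 100% usage.
print(effective_fps(cpu_ms=8.3, gpu_ms=8.3, refresh_hz=144))

# After a GPU upgrade the GPU half drops to ~4 ms, but the CPU still needs
# ~8.3 ms -> fps barely moves and the new GPU sits near 50% usage.
print(effective_fps(cpu_ms=8.3, gpu_ms=4.0, refresh_hz=144))
```

In other words, if the CPU is genuinely the longer half of every frame, a faster GPU mostly buys you headroom for higher settings rather than more frames.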


Well, there's no "should"; it's always based on what you want.

But to give you an answer: personally, I'd save for a new system, since everything's pretty outdated now. To get a good improvement you should upgrade not only the GPU but also the CPU, as GTA taxes both.

But I would not complain about 120fps...

GUITAR BUILD LOG FROM SCRATCH OUT OF APPLEWOOD

 

- Ryzen Build -

R5 3600 | MSI X470 Gaming Plus MAX | 16GB CL16 3200MHz Corsair LPX | Dark Rock 4

MSI 2060 Super Gaming X

1TB Intel 660p | 250GB Kingston A2000 | 1TB Seagate Barracuda | 2TB WD Blue

be quiet! Silent Base 601 | be quiet! Straight Power 550W CM

2x Dell UP2516D

 

- First System (Retired) -

Intel Xeon 1231v3 | 16GB Crucial Ballistix Sport Dual Channel | Gigabyte H97 D3H | Gigabyte GTX 970 Gaming G1 | 525 GB Crucial MX 300 | 1 TB + 2 TB Seagate HDD
be quiet! 500W Straight Power E10 CM | be quiet! Silent Base 800 with stock fans | be quiet! Dark Rock Advanced C1 | 2x Dell UP2516D

Reviews: be quiet! Silent Base 800 | MSI GTX 950 OC

 


You can upgrade your GPU, but if both are at 100% usage you have the GPU and CPU matched up right. So it looks like a whole-system upgrade, because the CPU will bottleneck a new GPU. But I would hold off until you drop below your preferred fps, given the high DDR4 and GPU prices.


Thanks for the response, but yes, my monitor can do 120Hz+. You've skipped around my actual question: would upgrading the GPU flat out increase framerate if the CPU is already pinned?


1 minute ago, firecrafter711 said:

You can upgrade your GPU, but if both are at 100% usage you have the GPU and CPU matched up right. So it looks like a whole-system upgrade, because the CPU will bottleneck a new GPU. But I would hold off until you drop below your preferred fps, given the high DDR4 and GPU prices.

That's not what I'm asking, I'm afraid. I'm asking: will upgrading the GPU increase my framerate, full stop, despite the CPU being pinned? I can always dial GPU settings up or down if I want to use spare GPU power.


6 minutes ago, 19_blackie_73 said:

Well, there's no "should"; it's always based on what you want.

But to give you an answer: personally, I'd save for a new system, since everything's pretty outdated now. To get a good improvement you should upgrade not only the GPU but also the CPU, as GTA taxes both.

But I would not complain about 120fps...

I'm not asking what I should upgrade, I'm afraid, since I can't upgrade the system to much more than a 50% performance increase; IPC and clock speeds haven't come all that far in the last six years. What I'm asking is: will upgrading the GPU increase my framerate, full stop, despite the CPU being pinned? I can always dial GPU settings up or down if I want to use spare GPU power.


4 minutes ago, TechMasterMind said:

That's not what I'm asking, I'm afraid. I'm asking: will upgrading the GPU increase my framerate, full stop, despite the CPU being pinned? I can always dial GPU settings up or down if I want to use spare GPU power.


It may increase it some, but I don't believe the increase would be enough to be worth the price of a new (or new-to-you) GPU. It could let you raise quality settings, though, due to the extra memory on the GPU.

Edited by firecrafter711
more info

It may? I know I'm being a bit of an annoyance here, but I'm basically asking if anyone has tested this, i.e. CPU at 100% and GPU at 100% --> upgrade the GPU so it's at 50% --> does fps increase?

The overall cost of an upgrade to a GTX 780 is like £40, so it would be worth it if I could both raise fps to where frames are produced consistently in step with the monitor (non-FreeSync display), PLUS get higher settings, more fps, and multi-monitor headroom in other games.


When you max out your video card, your system leans on the CPU to pick up the remaining work.

 

I think a GPU upgrade will help you, since that is a 2GB VRAM card and GTA V is a very resource-hungry game.

 

However, since you can't OC that i5, I would suggest looking at a GTX 1060 6GB.

 

Sandy Bridge is still a capable CPU for gaming, and you can always migrate that 1060 into a new build.

Desktop:

AMD Ryzen 7 @ 3.9ghz 1.35v w/ Noctua NH-D15 SE AM4 Edition

ASUS STRIX X370-F GAMING Motherboard

ASUS STRIX Radeon RX 5700XT

Corsair Vengeance LPX 16GB (2x 8GB) DDR4 3200

Samsung 960 EVO 500GB NVME

2x4TB Seagate Barracuda HDDs

Corsair RM850X

Be Quiet Silent Base 800

Elgato HD60 Pro

Sceptre C305B-200UN Ultra Wide 2560x1080 200hz Monitor

Logitech G910 Orion Spectrum Keyboard

Logitech G903 Mouse

Oculus Rift CV1 w/ 3 Sensors + Earphones

 

Laptop:

Acer Nitro 5:

Intel Core I5-8300H

Crucial Ballistix Sport LT 16GB (2x 8GB) DDR4 2666

Geforce GTX 1050ti 4GB

Intel 600p 256GB NVME

Seagate Firecuda 2TB SSHD

Logitech G502 Proteus Spectrum

 

 


12 minutes ago, TechMasterMind said:

Thanks for the response, but yes, my monitor can do 120Hz+. You've skipped around my actual question: would upgrading the GPU flat out increase framerate if the CPU is already pinned?

 

Honestly, it's not something with a universal answer; it depends on a lot of things. For some games yes, for others no. A single-player game like a Tomb Raider, absolutely yes. Something more physics-based, like Project Cars with a grid of 32 opponents, no.

 

The point is that you're not really asking the right questions. You need a goal. Is your goal to get more frames at a certain quality setting? If so, what is your max framerate? Is your monitor G-Sync or FreeSync? Or is your goal to play games at the best possible settings at 120fps?

 

The bottom line is that your system is very old and ready for an update. You can buy a newer video card, but I wouldn't expect much improvement until you upgrade the rest of the PC to match.


12 minutes ago, TechMasterMind said:

I'm not asking what I should upgrade, I'm afraid, since I can't upgrade the system to much more than a 50% performance increase; IPC and clock speeds haven't come all that far in the last six years. What I'm asking is: will upgrading the GPU increase my framerate, full stop, despite the CPU being pinned? I can always dial GPU settings up or down if I want to use spare GPU power.

 

This is the most absurd thing I've read all day. Clock speeds are not what make modern processors RIDICULOUSLY better than what you're using. You know nothing, Jon Snow.


20 hours ago, Vantage9 said:

 

This is the most absurd thing I've read all day. Clock speeds are not what make modern processors RIDICULOUSLY better than what you're using. You know nothing, Jon Snow.

Find me a benchmark that shows ANY processor outperforming my CPU by 'RIDICULOUS' proportions in single-threaded performance (what matters in gaming), then.


20 hours ago, Jon Jon said:

When you max out your video card, your system leans on the CPU to pick up the remaining work.

I think a GPU upgrade will help you, since that is a 2GB VRAM card and GTA V is a very resource-hungry game.

However, since you can't OC that i5, I would suggest looking at a GTX 1060 6GB.

Sandy Bridge is still a capable CPU for gaming, and you can always migrate that 1060 into a new build.

Cheers Mr. Goku Poster


52 minutes ago, TechMasterMind said:

Find me a benchmark that shows ANY processor outperforming my CPU by 'RIDICULOUS' proportions in single-threaded performance (what matters in gaming), then.

I guess it all depends on your definition of ridiculous.  

 

The average single-thread i7 8700K benchmark beats out the i5-2400 by over 1,000 points.

 

https://www.cpubenchmark.net/singleThread.html

 

 

 

 


13 minutes ago, tcari394 said:

I guess it all depends on your definition of ridiculous.  

 

The average single-thread i7 8700K benchmark beats out the i5-2400 by over 1,000 points.

 

https://www.cpubenchmark.net/singleThread.html

 

 

 

 

I wouldn't trust that, if only because, if you look, the i3-8100 at 3.6GHz is somehow 500+ points lower than the i7-8700K at 3.7GHz.

 

That literally makes no sense.

 

If we wanted to compare synthetics, the OP should just post his Cinebench scores, but I don't see how relevant this is when he already knows the performance he wants to get.

 

Keep in mind, he is running a GTX 660, which is five years old and doesn't support a lot of the newer rendering techniques afforded by more recent cards.

 

 


2 minutes ago, Jon Jon said:

I wouldn't trust that, if only because, if you look, the i3-8100 at 3.6GHz is somehow 500+ points lower than the i7-8700K at 3.7GHz.

 

That literally makes no sense.

 

The 8700K has a higher turbo boost.

 

 

OP: simulate it yourself. Turn down a setting that taxes the GPU but is neutral to the CPU and see if you get more frames, or overclock your GPU and see if you get more.
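
If you want to put numbers on that test, here's a small sketch for comparing two runs. It assumes each log is just a plain text file with one frame time in milliseconds per line (adjust the parsing to whatever your monitoring tool actually exports; the file names are only placeholders):

```python
# Compare average fps and 1% lows between a "before" and "after" capture.
# Assumes hypothetical logs with one frame time (ms) per line.

def load_frametimes(path: str) -> list[float]:
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def summarize(frametimes_ms: list[float]) -> tuple[float, float]:
    """Return (average fps, 1% low fps) for one run."""
    avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
    slowest = sorted(frametimes_ms, reverse=True)         # worst frames first
    worst_1pct = slowest[: max(1, len(slowest) // 100)]   # worst 1% of frames
    low_fps = 1000.0 * len(worst_1pct) / sum(worst_1pct)
    return avg_fps, low_fps

for label, path in [("before", "run_before.txt"), ("after", "run_after.txt")]:
    avg, low = summarize(load_frametimes(path))
    print(f"{label}: {avg:.1f} fps average, {low:.1f} fps 1% low")
```

If the averages barely move but GPU usage falls, the CPU was the limit; if both numbers climb, the old GPU was holding you back.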

Primary: CPU Core i7-4790K  |  MOBO Gigabyte GA-B85M-D3H   |  RAM 24GB Crucial DDR3-1600 CL9  |  GPU XFX Radeon RX 580 GTS Black Edition  |  CPU Cooler Thermaltake Frio Silent 14  |  Case Cooler Master N400  |  PSU Corsair CXM 750 Watt |  Boot Drive 500GB Samsung 850 Evo  |  Storage 500GB WD Laptop HDD + 2TB Toshiba HDD + 250GB WD Laptop HDD + 250GB WD Laptop HDD + 4TB WD Blue HDD  |  Monitor Acer XG270HU  |  Secondary Monitor Nixeus VUE-24  |  Tertiary Monitor Sony SDM-HS53  |  OS Windows 10

Secondary: (down for maintenance) CPU Core 2 Quad Q9300  |  MOBO (Asus P5N-E arriving soon)  |  RAM 8GB DDR2-800  |  GPU Visiontek Radeon R9 270  | CPU Cooler Cooler Master Hyper T2  |  Case Rajintek Arcadia  |  PSU EVGA 500 BV  |  Boot Drive 240GB PNY SSD  |  Storage 120GB Seagate PATA HDD  |  Removable Drives Sony PATA DVD RW Drive + 3.5 inch Floppy Drive  |  Monitor HP S2031  |  OS Windows 10


What? Even though the i3-8100 and i7-8700K are absolutely the same architecture, their single-thread performance is NOT identical by any means; there's plenty more to consider, such as L2/L3 cache, silicon quality, instruction refinements, and so on.

Personal Desktop":

CPU: Intel Core i7 10700K @5ghz |~| Cooling: bq! Dark Rock Pro 4 |~| MOBO: Gigabyte Z490UD ATX|~| RAM: 16gb DDR4 3333mhzCL16 G.Skill Trident Z |~| GPU: RX 6900XT Sapphire Nitro+ |~| PSU: Corsair TX650M 80Plus Gold |~| Boot:  SSD WD Green M.2 2280 240GB |~| Storage: 1x3TB HDD 7200rpm Seagate Barracuda + SanDisk Ultra 3D 1TB |~| Case: Fractal Design Meshify C Mini |~| Display: Toshiba UL7A 4K/60hz |~| OS: Windows 10 Pro.

Luna, the temporary Desktop:

CPU: AMD R9 7950XT  |~| Cooling: bq! Dark Rock 4 Pro |~| MOBO: Gigabyte Aorus Master |~| RAM: 32G Kingston HyperX |~| GPU: AMD Radeon RX 7900XTX (Reference) |~| PSU: Corsair HX1000 80+ Platinum |~| Windows Boot Drive: 2x 512GB (1TB total) Plextor SATA SSD (RAID0 volume) |~| Linux Boot Drive: 500GB Kingston A2000 |~| Storage: 4TB WD Black HDD |~| Case: Cooler Master Silencio S600 |~| Display 1 (leftmost): Eizo (unknown model) 1920x1080 IPS @ 60Hz|~| Display 2 (center): BenQ ZOWIE XL2540 1920x1080 TN @ 240Hz |~| Display 3 (rightmost): Wacom Cintiq Pro 24 3840x2160 IPS @ 60Hz 10-bit |~| OS: Windows 10 Pro (games / art) + Linux (distro: NixOS; programming and daily driver)

24 minutes ago, Princess Cadence said:

What? Even though the i3-8100 and i7-8700K are absolutely the same architecture, their single-thread performance is NOT identical by any means; there's plenty more to consider, such as L2/L3 cache, silicon quality, instruction refinements, and so on.

Would that honestly drastically change the real-world performance in single-threaded tasks?

 

Maybe that's true in PassMark, but I think the other guy hit the nail on the head with the 8700K boosting during the test; I forget how high the i3 boosts, though?


1 minute ago, Jon Jon said:

?

Real-world performance, not too much to be honest, but it still is noticeable, especially in synthetics, if you go for a same-frequency comparison.


2 hours ago, TechMasterMind said:

Find me a benchmark that shows ANY processor outperforming my CPU by 'RIDICULOUS' proportions in single-threaded performance (what matters in gaming), then.

 

Um, gaming in the real world is about a HELL of a lot more than single-threaded benchmarks. Let me enlighten you with a bit of personal experience. Up until recently I was gaming on an i5-4690K (newer and faster than your processor). I had upgraded to a 1080 Ti to boost my VR performance, but quickly found that everything wasn't quite as GPU-dependent as I thought.

 

In traditional games like Rise of the Tomb Raider, it would murder everything with settings maxed out and the frames never dropping. CPU usage wasn't maxed or really even strained, except in certain situations. In particular jungle settings, when the tree-limb count and amount of debris went through the roof, suddenly the CPU would peg at 100% and the frames would drop for the first time. You see, it's more than just rendering 3D pixels; it's the PHYSICS of some of these things. Any time the number of physics calculations goes up (which is a lot in modern games), you need more cores/threads to push it along.

 

This effect was staggeringly more obvious in VR. In simpler solo VR experiences like Duck Season, I could supersample like crazy, almost 2.0, while maintaining perfect frames. Even in my game of choice, Assetto Corsa, I could race solo on a track with settings on max and supersample at 1.5+, but as soon as you add a grid of 10+ opponents, things change. Suddenly there are a lot of physics calculations happening, the CPU pegs out, and the Rift immediately starts dropping below 90fps, rarely maintaining it for more than a few seconds at a time.

 

Swapped in a 7700K, a much more modern processor with more threads, and voila: a solid 90fps with a grid of 32 other cars and settings on max. The difference is so tangibly obvious it's laughable. Coupled with the fact that my 3DMark and VRMark scores almost tripled the percentile I'm in (same video card, better processor), there's no question. Saying "single-threaded performance is what matters for gaming" might be true for a narrow band of games and situations, but I think that's an increasingly antiquated way to see things.
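
To tie that back to the frame-time picture, here's a toy model of why a bigger opponent grid hurts the CPU side so much (all numbers invented; the split between serial and parallelisable physics work is just an assumption for illustration):

```python
# Toy model: per-frame CPU cost grows with the number of AI opponents
# (physics, AI), and only part of that extra work spreads across threads.
# Feeds back into fps ~ 1000 / max(cpu_ms, gpu_ms) from earlier in the thread.

def cpu_frame_ms(opponents: int, threads: int,
                 base_ms: float = 4.0, per_car_ms: float = 0.6,
                 parallel_fraction: float = 0.7) -> float:
    physics_ms = opponents * per_car_ms
    serial_part = (1.0 - parallel_fraction) * physics_ms      # can't be spread out
    parallel_part = parallel_fraction * physics_ms / threads  # scales with threads
    return base_ms + serial_part + parallel_part

for threads in (4, 8):
    for opponents in (0, 10, 32):
        ms = cpu_frame_ms(opponents, threads)
        print(f"{threads} threads, {opponents:>2} opponents: "
              f"{ms:4.1f} ms CPU -> ~{1000 / ms:3.0f} fps CPU-side cap")
```

The exact numbers don't matter; the shape does: the CPU half of the frame balloons with the grid size, and extra threads only claw back the part of that work the engine can actually parallelise.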

