
CHEAP does NOT mean GOOD VALUE - Low End GPUs Explained

On 12/25/2022 at 12:54 PM, cmndr said:

Can you provide a benchmark showing a 5x increase from a mid range CPU to a high end CPU? I showed a 5x uplift for GPUs. 

You provided a generic TechPowerUp relative performance graph, which doesn't show real-world performance in these scenarios. Those graphs show generic relative performance derived from specifications and synthetic test results.

On 12/25/2022 at 12:54 PM, cmndr said:

This has been done since the 1990s. 

Note I said: "this hasn't really changed in a decade or two"

You were bringing that up as if some dramatic leap in hardware acceleration happened between the 2000 series and the 3000 series.


On 12/26/2022 at 2:35 AM, DarkSwordsman said:

You provided a generic TechPowerUp relative performance graph, which doesn't show real-world performance in these scenarios. Those graphs show generic relative performance derived from specifications and synthetic test results.

Note I said: "this hasn't really changed in a decade or two"

You were bringing that up as if some dramatic leap in hardware acceleration happened between the 2000 series and the 3000 series.

It's the average of multiple benchmarks. 
THG and anandtech have similar charts. 

The charts have followed this pattern for 10+ years. GPUs usually age worse than CPUs. You can still use a top-end CPU from 2009 (barely) and have an OK experience. Upgrading to a CPU that has 5x the horsepower doesn't get you 5x the frame rate over an overclocked X5660. Upgrading from a GTX 280 to a 3080 is night and day. 

Can you provide data to back up your claim? I haven't seen a single shred of evidence on your end. 


10 hours ago, cmndr said:

Can you provide data to back up your claim? I haven't seen a single shred of evidence on your end. 

https://youtu.be/pGy3A5-F8_8?t=591

https://www.youtube.com/watch?v=53OmzV45C7o

https://www.youtube.com/watch?v=k7hfGJ7TC5A

https://www.youtube.com/watch?v=qDu4is1WUh4


I would like to re-note:
 

My original point was to counter your weird claim of 1080p low, 1440p medium, 4k high, or whatever.

 

There are many cards, even something like a 2060 Super, that can push 1440p max settings in many modern titles with good FPS, and possibly still be CPU limited. Games these days lack optimization, mainly in draw calls and parallel processing, so faster CPU cores are often beneficial even if the GPU is at max utilization. This is, again, why I bring up Doom: its CPU-side processing has been optimized so well that it can keep GPUs fed with plenty of data, and you see significant performance boosts across GPUs even at 1080p.
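To put a rough model on the "feeding the GPU" point (just a back-of-the-envelope sketch; the frame times below are made-up illustrative numbers, not benchmarks): a frame can't complete faster than the slower of the CPU's work and the GPU's work, so a faster GPU does nothing while the CPU side is the slow one, and vice versa.

# Rough bottleneck model: each frame needs CPU work (game logic, draw call
# submission) and GPU work (rendering); FPS is capped by whichever is slower.
# The millisecond values are illustrative assumptions, not benchmark data.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Approximate FPS when CPU and GPU work on successive frames in parallel."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# CPU-limited case: 12 ms of CPU work per frame.
print(fps(cpu_ms=12.0, gpu_ms=10.0))  # ~83 FPS
print(fps(cpu_ms=12.0, gpu_ms=5.0))   # still ~83 FPS - doubling GPU speed changes nothing

# GPU-limited case: 20 ms of GPU work per frame.
print(fps(cpu_ms=12.0, gpu_ms=20.0))  # 50 FPS
print(fps(cpu_ms=6.0, gpu_ms=20.0))   # still 50 FPS - doubling CPU speed changes nothing

Doom is the counterexample in the good direction: its CPU time per frame is so low that almost any GPU becomes the limiting term.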

 

Good GPUs do help, especially ones with plenty of VRAM for demanding titles and support for Resizable BAR or other features where usable. But in the grand scheme of things, it's incorrect to generalize that the GPU is the main thing that will let people gain FPS, and your assumptions about game settings/resolutions per tier are also incorrect. Modern GPUs are way ahead of the performance curve and have plenty of advanced features. The software has not caught up, and only now are we seeing engines like Unreal Engine 5.1 that can actually take advantage of this hardware properly. Though again, that still comes at the cost of draw calls and other CPU-based processing.

At the end of the day, the gains someone will see depend on their usage scenario. For example, you wouldn't think VRChat is a demanding game. It is, but only because of seriously unoptimized avatars and worlds. Even then, getting a better GPU won't necessarily solve your problems: the bottleneck is often the number of draw calls and the amount of VRAM the textures and meshes take up. In other words, more VRAM and a better CPU would improve VRChat performance as a general rule, and raw graphics horsepower won't really do anything unless you're pushing a very high resolution HMD and have a lot of people around using complex shaders.

Most of my performance woes have come from running a 3950X. Despite running the same resolution and settings as someone with a 9700K, and me having a 2080 Super to their 2070 Super, I often get 10-30% less performance than they do, even when the GPU is close to being the limit.


8 hours ago, DarkSwordsman said:

Can you provide charts or at least time stamps?

I clicked on one and it's showing that CS:GO can get 800 FPS on a 5700 vs like 400 with a 1700... at which point I'd claim the CPU doesn't matter, since you're monitor limited at the 400 FPS mark (you should be running at a few FPS under your monitor's refresh rate with VRR enabled).

In that same video, overall it looks like a 4-year-old CPU vs a new CPU gains up to 50%... which isn't too crazy. It also shows that 3060 vs 3080 has near-2x scaling (which is expected; there's a ~2.5x difference in theoretical performance), which implies that the GPU is the main limit. Note that the gap between a 3060 and a 3080 is far smaller than the gap between something like a 4090 and a 1050 Ti (83 TFLOPS vs 2.1), and yes, I'm aware TFLOPS is a flawed metric... but there IS a reason why a system with a mid-range CPU and a high-end GPU usually runs circles around a high-end CPU with a mid-range GPU in games. 


I know it takes time to look up videos but all the CPU charts I see look like this:


https://www.tomshardware.com/reviews/intel-core-i9-13900k-i5-13600k-cpu-review/5

https://www.techpowerup.com/review/amd-ryzen-9-7950x/19.html

https://www.anandtech.com/show/17585/amd-zen-4-ryzen-9-7950x-and-ryzen-5-7600x-review-retaking-the-high-end/18


Where for nearly every title, the BEST CASE scenario for showing CPU differences (low graphics settings with a $1000+ card) doesn't show big CPU differences. 

Do you have critiques of THG, TPU or Anandtech? LTT reviews generally mirror the results. 

 

8 hours ago, DarkSwordsman said:

 


I would like to re-note:
 

My original point was to counter your weird claim of 1080p low, 1440p medium, 4k high, or whatever.

 

There are many cards, even something like a 2060 Super, that can push 1440p max settings in many modern titles with good FPS, and possibly still be CPU limited. Games these days lack optimization, mainly in draw calls and parallel processing, so faster CPU cores are often beneficial even if the GPU is at max utilization. This is, again, why I bring up Doom: its CPU-side processing has been optimized so well that it can keep GPUs fed with plenty of data, and you see significant performance boosts across GPUs even at 1080p.

 

Good GPUs do help, especially ones with plenty of VRAM for demanding titles and support for Resizable BAR or other features where usable. But in the grand scheme of things, it's incorrect to generalize that the GPU is the main thing that will let people gain FPS, and your assumptions about game settings/resolutions per tier are also incorrect. Modern GPUs are way ahead of the performance curve and have plenty of advanced features. The software has not caught up, and only now are we seeing engines like Unreal Engine 5.1 that can actually take advantage of this hardware properly. Though again, that still comes at the cost of draw calls and other CPU-based processing.

At the end of the day, the gains someone will see depend on their usage scenario. For example, you wouldn't think VRChat is a demanding game. It is, but only because of seriously unoptimized avatars and worlds. Even then, getting a better GPU won't necessarily solve your problems: the bottleneck is often the number of draw calls and the amount of VRAM the textures and meshes take up. In other words, more VRAM and a better CPU would improve VRChat performance as a general rule, and raw graphics horsepower won't really do anything unless you're pushing a very high resolution HMD and have a lot of people around using complex shaders.

Most of my performance woes have come from running a 3950X. Despite running the same resolution and settings as someone with a 9700K, and me having a 2080 Super to their 2070 Super, I often get 10-30% less performance than they do, even when the GPU is close to being the limit.

I looked at relatively low video settings (1080p) with high-end video cards. This creates a "best case scenario" for showing differences across CPUs, since it reduces the odds of the GPU bottlenecking performance.
The difference wasn't big across CPUs. At most you're seeing something like a 2x difference between a highly overclocked CPU paired with a top-end GPU at settings that don't stress the GPU, and a lower-end CPU. 

I then looked at higher settings to emulate a slow card like a 2060 (4K shifts the bottleneck to the video card). The difference between CPUs shrunk to almost 0 across a bunch of titles. 

If you look at video cards at high settings (e.g. 4K), the difference between a high end card and a lower end one is 3-10x. 

Most video games are MUCH MUCH MUCH more sensitive to the video card than the CPU. 
Which kind of makes sense; they're video games first, not simulations. 

 


@cmndr All I can say is that you are significantly oversimplifying everything. There are many cases where higher graphics settings can actually be aided by a better CPU, due to higher quality textures or other features that need the CPU to deliver data to the GPU. This includes draw calls, mind you, which increase drastically with higher quality settings, plus other "graphics" features like LOD, which is actually a CPU task most of the time (deciding which objects to create and submit for the GPU to render).

Your own links show significant improvements in FPS (often 20-50%) between just a single CPU generation and between CPUs of the same generation.

But this doesn't even matter. The original person I was replying to was using a 1700X. That CPU is at least 4 AMD generations and 5 Intel generations behind current processors, never mind that the Infinity Fabric on Zen/Zen+ was god awful. Even just upgrading from my 1700X to my 3950X (which is worse for games than even an older CPU, the 9700K), I saw significant gains, even at higher quality settings.

Again: you're oversimplifying things and making assumptions, plus reading the data as saying something it doesn't in order to bolster your point.


7 hours ago, DarkSwordsman said:

@cmndr All I can say is that you are significantly oversimplifying everything. There are many cases where higher graphics settings can actually be aided by a better CPU, due to higher quality textures or other features that need the CPU to deliver data to the GPU. This includes draw calls, mind you, which increase drastically with higher quality settings, plus other "graphics" features like LOD, which is actually a CPU task most of the time (deciding which objects to create and submit for the GPU to render).

Your own links show significant improvements in FPS (often 20-50%) between just a single CPU generation and between CPUs of the same generation.

But this doesn't even matter. The original person I was replying to was using a 1700X. That CPU is at least 4 AMD generations and 5 Intel generations behind current processors, never mind that the Infinity Fabric on Zen/Zen+ was god awful. Even just upgrading from my 1700X to my 3950X (which is worse for games than even an older CPU, the 9700K), I saw significant gains, even at higher quality settings.

Again: you're oversimplifying things and making assumptions, plus reading the data as saying something it doesn't in order to bolster your point.

I want to emphasize that I've said probably 10x in this thread that use case matters and I highlighted Factorio as an edge case. If you have a very narrow use case you should be looking up that specific use case. The general discussion is about aggregate performance. 


For most non-simulators though... yeah, a 25% shift between CPUs. And I haven't debated that it can be of value... Heck, I'll go further and say there's an average uplift of around 100% going from a G5400 to a 13900K. If you're just looking at CPU costs, it's something like $600 to get a 100% uplift. If you "split the difference" and get something like a 12100, it's more like $60 extra to get halfway there and then an extra $540ish to get the other half. GPU bottlenecking is usually the big limiter if you have a $100ish CPU. 



The reason I say the GPU matters more is... the difference between a top-end GPU and a bottom-end GPU is about 20x, i.e. roughly 2,000% of the performance. Even a "mid-range" card from not that long ago, like a 570, is still about 10x slower. 

A 25% uplift matters 80x less than a 2000% uplift. 

Heck, even if we restrict it to a 2080 vs a 4090, it's still a 200% uplift (i.e. the 4090 is 3x the speed) vs a 25% uplift. If you're looking at just GPU costs, it's something like $1000 to get a 200% uplift, which is more cost-effective than the CPU solution.
[Chart: Relative Performance, 3840x2160]

Saying that the CPU matters a bunch in this context is like dropping a $20 bill to pick up a quarter you see on the street, and then saying the $20 bill isn't worth the energy to pick up. 
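To put rough numbers on that cost-effectiveness comparison, here's a quick sketch using the ballpark prices and uplift figures I quoted above (all of them assumptions for illustration, not measurements):

# Ballpark $-per-percent-of-uplift for the upgrade paths discussed above.
# Prices and uplift percentages are rough assumptions, not measured results.

upgrades = {
    "CPU: G5400 -> 13900K": {"cost": 600, "uplift_pct": 100},
    "CPU: G5400 -> 12100": {"cost": 60, "uplift_pct": 50},   # "halfway there"
    "GPU: 2080 -> 4090": {"cost": 1000, "uplift_pct": 200},
}

for name, u in upgrades.items():
    print(f"{name}: ~${u['cost'] / u['uplift_pct']:.2f} per 1% of uplift")

# Roughly $6/% for the big CPU jump, ~$1.20/% for the budget CPU jump,
# and $5/% for the GPU jump - and the GPU jump also buys several times
# more total performance when the game is GPU-bound.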


On 12/29/2022 at 9:58 AM, cmndr said:

Heck, even if we restrict it to a 2080 vs a 4090, it's still a 200% uplift (i.e. the 4090 is 3x the speed) vs a 25% uplift. If you're looking at just GPU costs, it's something like $1000 to get a 200% uplift, which is more cost-effective than the CPU solution.

Saying that the CPU matters a bunch in this context is like dropping a $20 bill to pick up a quarter you see on the street, and then saying the $20 bill isn't worth the energy to pick up. 

I never said the CPU matters more. I said, in their use case/scenario, that upgrading the CPU should do a lot for them.

Also, you are still generalizing. Just because a GPU is "20x" more performant than another doesn't mean you will get 20x more FPS, which is basically what you are implying.

At the end of the day, it depends on the use case. If you are playing games at higher settings and you notice your GPU is near 100% utilization most of the time, then yeah, a GPU upgrade should help you pretty significantly. However, if one or a few CPU cores are nearly maxed out and your GPU is sitting at 50% utilization, the best-case scenario from a GPU upgrade is that you will see frames slightly earlier (and we are talking fractions of a millisecond), and maybe in some cases a little more FPS depending on what graphics features the game uses.
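If anyone wants to check which situation they're in, here's a rough sketch (it assumes an NVIDIA card with nvidia-smi on the PATH and the psutil package installed; it's only a coarse check, not a profiler) that compares per-core CPU load against GPU utilization while the game runs:

# Rough bottleneck check while a game is running.
# Assumes an NVIDIA GPU with nvidia-smi available and psutil installed (pip install psutil).
import subprocess
import psutil

def gpu_utilization_pct() -> int:
    """Query current GPU utilization (first GPU) via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip().splitlines()[0])

per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # % load per logical core
gpu = gpu_utilization_pct()

print(f"Busiest CPU core: {max(per_core):.0f}%  |  GPU: {gpu}%")
# A core pinned near 100% with the GPU well below 100% points at a CPU limit;
# the GPU near 100% with cores to spare points at a GPU limit.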

I also want to mention that you showed off yet another generic "relative performance" graph. Those are useful if you are looking into buying a new GPU and know you are GPU limited. They are not useful if you are trying to show the actual real-world performance difference between GPUs for games that aren't GPU heavy or if someone often uses lower settings/resolution anyways (CS:GO, DayZ, Tarkov, etc. are good examples, where FPS is valued over graphics).

"200% uplift" doesn't matter when you aren't even GPU limited anyways. The only place that these relative graphs actually are realistically scaling are rendering workloads like Cycles in Blender. The only real exception here is VRChat or games like it, where the game/models are so unoptimized that they flood VRAM. Though it still likely won't improve FPS overall, just stability and FPS when the VRAM fills up on other cards.


On 12/30/2022 at 11:07 PM, DarkSwordsman said:

Just because a GPU is "20x" more performant than another doesn't mean you will get 20x more FPS, which is basically what you are implying.

Give me a specific title that you care about if you want to go into specifics. Your critique is akin to "you're just looking at the overall outcome of a dozen benchmarks" (though the pattern repeats on a bunch of different sites). 

Your critique was that this chart showed relative percent. 

Rounding errors aside, the difference between the slowest card and the fastest card is 9 FPS vs 152 FPS. This doesn't even factor in the faster card having access to DLSS (potentially 2x more frames) while the slower one does NOT. 
 

[Chart: Average Gaming FPS, 3840x2160]

 

 

I want to emphasize that the differences between CPUs aren't that high. A $70ish 1600AF (not shown, but similar to a 2600) isn't 20x slower than a 13900K.
 


  

6 hours ago, cmndr said:

[Chart: Average Gaming FPS, 3840x2160]

What program was this in?
 

6 hours ago, cmndr said:

Your critique was that this chart showed relative percent. 

No. My critique was that this relative chart was not tied to a specific program or game. It was just an arbitrary chart for relative performance, just like userbenchmark.

If you look at per-game benchmarks, yes, of course many of the games will show large performance differences between GPUs. But that's because they are testing with the latest CPUs, the latest games, and pushing for the highest settings, often 1440p Ultra.

The majority of the games *I* play are often not even pushing this 1070 to 100% utilization, so the actual performance difference of getting a new GPU would be negligible for me. This is why I suggested a CPU upgrade for that user, because there is a good chance that many of the games they play are not maxing out their GPU, and even if they are (like Cyberpunk), that CPU is so old and terrible that the frame timings are probably not even good.

Based on my experience, there is no point in upgrading the GPU with a processor like the 1700X. It isn't only about FPS; it's also about stability and smoothness, which the 1700X cannot provide due to its terrible Infinity Fabric.

The same thing happened with me and Battlefield 1, a GPU-intensive game that actually did nearly max out my 770 and 970, yet I only noticed a real performance boost by going from a 2600K to a 1700X. The GPU can't do anything if it isn't fed consistent data by the CPU.

This video from Jay perfectly shows off what I am talking about:
 

 


1 hour ago, DarkSwordsman said:

  

What program was this in?
 

No. My critique was that this relative chart was not tied to a specific program or game. It was just an arbitrary chart for relative performance, just like userbenchmark.

Average of around 20 games. https://www.techpowerup.com/review/nvidia-geforce-rtx-4090-founders-edition/

They include individual benchmarks. Let's say you're interested in BF5 - https://www.techpowerup.com/review/nvidia-geforce-rtx-4090-founders-edition/7.html

 

Or Civ 6 - https://www.techpowerup.com/review/nvidia-geforce-rtx-4090-founders-edition/9.html

Note that the benchmark charts are abbreviated, but you ARE seeing ~2x the performance going from a "low end" GPU like a 2080 Ti to a 4090.

Getting to a 2x difference in CPUs requires going down to something like an Intel G5400 (2 cores at 4 GHz, barely any cache, etc.) and going up to a 13900K (24 cores, each of which is faster, 8 of which are ~50-70% faster in CPU-intensive tasks). The gap between a budget 13100 and a 13900K isn't very big. I wouldn't say an i3 is "good enough" for all use cases, but diminishing returns kick in after a $100ish 5600G, while GPU scaling keeps on going and going and going. 

You spent more time complaining than clicking the source I provided. 
There are cases where CPU matters. There's a reason why a CPU from 2010ish was usable in 2020 BUT GPUs from 2010 weren't. 

In most applications, the GPU matters MUCH MUCH more. 


3 hours ago, cmndr said:

Battlefield V is still a relatively modern game, probably being pushed at max settings. But even then, you can see there is some sort of performance issue since the AMD cards are doing better than the 4090, especially at 1080p.

I also question the legitimacy of the Civ 6 result, since Civ and games like it are commonly CPU-bound due to the complexity of running all that simulation and math. A profound GPU difference probably only shows up at max settings, which many people aren't going to play at anyway since they want to maximize their turn speed.
 

3 hours ago, cmndr said:

You spent more time complaining than clicking the source I provided. 
There are cases where CPU matters. There's a reason why a CPU from 2010ish was usable in 2020 BUT GPUs from 2010 weren't. 

In most applications, the GPU matters MUCH MUCH more. 

I don't disagree that in many applications the GPU matters more. I mean heck, in Blender, it matters way more than any CPU! But that's not the topic here, is it?


I was making the argument that this user (the one with a 1700X and 1070 Ti playing Cyberpunk) would probably see a significant improvement by upgrading their CPU. This isn't to get the max FPS or anything, but to give them a more reliable and smooth user experience.

You're throwing around peak FPS numbers in games where graphics matter, when all I was saying is that the user would see a better experience by upgrading their CPU to something that isn't terrible; the current one definitely contributes to poor 1% and 0.1% lows, plus lower FPS overall.

The main factor you are missing from your GPU comparisons is overall smoothness. You are arguing a strawman that you created, generalizing everything about gaming down to just the GPU, and I was trying to point out that your reply to me was irrelevant to the recommendation I made. Yes, the GPU matters a lot in games, but by your logic that would be true across the board. It may well be true that a very graphically intensive game will see a performance benefit going from a 2080 to a 4080, but what does that matter if the CPU they're using can't deliver a smooth gaming experience?

This is why my nephew, who owns a 9700K + 2070 Super, manages to get not only 10-40% more FPS than me with a 3950X + 2080 Super, but also gets much better 1% and 0.1% lows, despite having a WORSE GPU than me, and running their games at the same settings and resolution.

The GPU isn't everything. It's a combination of both, and the CPU cannot be overlooked. Game engines and software in general are complex systems with a lot of dependencies and various bottlenecks. This is why the CPU is still important and why it's often a case-by-case matter, so stop trying to argue otherwise.


2 hours ago, DarkSwordsman said:

I was making the argument that this user (the one with a 1700X and 1070 Ti playing Cyberpunk) would probably see a significant improvement by upgrading their CPU. This isn't to get the max FPS or anything, but to give them a more reliable and smooth user experience.


Let's do a sanity check on that... RTX 3080 at 1080p... 
Using the 2700X as a stand-in for the 1700, we can see that the 3080 can pump out around 80 FPS at 1080p. 
The main issue with the 1700 isn't throughput but latency, which has its own set of implications. 
 

[Chart: Cyberpunk 2077, 1920x1080]

 

 

The slowest card in the lineup is a 2080 Ti, which is about 70-90% more powerful than a 1080. It's able to push the game at 90 FPS with a 3800, and with faster cards the 3800 can push the game all the way up to 140 FPS in this chart. 
 

[Chart: Cyberpunk 2077, 1920x1080]

 


I have a hard time finding exact figures for the 1070, since it's hard to find benchmarks for cards that slow, but... CP2077's requirements show a 4790 or 3600 being matched with a 2080. Not a great proxy though. 

[Image: Cyberpunk 2077 system requirements]


19 hours ago, cmndr said:


Let's do a sanity check on that... RTX 3080 at 1080p... 
Using the 2700X as a stand-in for the 1700, we can see that the 3080 can pump out around 80 FPS at 1080p. 
The main issue with the 1700 isn't throughput but latency, which has its own set of implications. 
 

[Chart: Cyberpunk 2077, 1920x1080]

 

 

The slowest card in the lineup is a 2080 Ti, which is about 70-90% more powerful than a 1080. It's able to push the game at 90 FPS with a 3800, and with faster cards the 3800 can push the game all the way up to 140 FPS in this chart. 
 

[Chart: Cyberpunk 2077, 1920x1080]

 


I have a hard time finding exact figures for the 1070, since it's hard to find benchmarks for cards that slow, but... CP2077's requirements show a 4790 or 3600 being matched with a 2080. Not a great proxy though. 

[Image: Cyberpunk 2077 system requirements]

 

I think I'm failing to see your point? You literally just proved my point with that 2700X comparison, and the 1700X is certainly a bit worse than that. In other words, the CPU would be a pretty significant upgrade and would support whatever GPU they get in the future.


2 hours ago, DarkSwordsman said:

I think I'm failing to see your point? You literally just proved my point with that 2700X comparison, and the 1700X is certainly a bit worse than that. In other words, the CPU would be a pretty significant upgrade and would support whatever GPU they get in the future.

It's only 25% worse. 
The gain from a faster GPU is much larger than 25%. 
The bottleneck is primarily the GPU. 

As a thought experiment, if upgrading part A improved performance by 0.000000000001% and upgrading part B improved performance by 10000000%, would it even be worth discussing part A?


5 hours ago, cmndr said:

It's only 25% worse. 
The gain from a faster GPU is much larger than 25%. 
The bottleneck is primarily the GPU.

You only provided:
- a specific GPU with a variety of CPUs (the worst of which was a generation ahead of the one I was talking about)
- a variety of GPUs with a specific CPU

You did not provide: two CPUs from significantly different generations, each tested with at least two different GPUs.

This video precisely shows the problem. It compares the 1700X, 3700X, 5700X, and 7700X with a 3080 and a 3060. You can very clearly see that the GPU does not matter where the 1700X is concerned. Sure, maybe you can make the argument "Well, the 3060 is 23% faster than the 1070! 🤓", but really? The difference is negligible here, especially when the 3080 is 150% faster than the 1070 (so the 3060 and 1070 are much closer than you think).

Feel free to find actual sources that directly counter this one. But as far as I can see, this one exactly proves my point that the 1700X is dogwater and that the user would likely see an okay uplift in FPS, but mainly an uplift in smoothness, by upgrading the CPU first.

 


An addendum: yeah, a lot of you were correct about the 1630 being bad value, and it's because these cards aren't even cheap enough to justify their relative performance.

 

GPU prices have been fluctuating these days, but sadly there still isn't a genuinely good cheap option until these damn 6400s and 1630s get a price cut.

 

Otherwise, the iGPU you've got is going to be the best option.

