
Why Does GPU Technology Advance Faster Than CPU Technology?

Pratigious

If you look at the performance increases in GPUs over the years versus CPUs, there's a major difference. The GTX 1060, despite being a mid-range card, is equal to or possibly even faster than the GTX 980, and it also costs less. If we look at the i7 4790K vs the i5 7600K, the i7 4790K is still faster. Why isn't the i5 7600K faster than the i7 4790K? The GTX 1060 was released less than two years after the GTX 980 and it's faster. If I'm not wrong, the same thing applies to the RX 580 and RX 480: those are faster than the R9 390X after just a year. I also consider the RX 580 and 480 mid-range GPUs; they're more comparable to the GTX 1060, and you can't go wrong buying either one.


I also consider a CPU that costs $250 mid-range, and any CPU that costs $300 or more high-end.


Because for years Intel have had no competition and have had no need to improve performance.

Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600Mhz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144hz HDR FreeSync 2 | Ubuntu 20.04.2 LTS |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)


Different technologies. Just because we get massive improvements in GPUs, it does not mean we get the same in the car industry, or in energy generation, or in robotics.

 

GPUs and CPUs are very different and not comparable like that.

Ultra is stupid. ALWAYS.


Probably competition. AMD hasn't been competitive in the CPU market for a long time, causing Intel to halt a lot of innovation. Only after Ryzen came out did Intel start putting more than 4 cores in its consumer CPUs. Meanwhile, in the GPU department, AMD has been close to Nvidia in terms of performance over the last couple of years, pushing both parties to innovate as fast as possible to beat the competition.


1 minute ago, CUDA_Cores said:

CPUs are serial in nature while GPUs are parallel in nature. If I want to make a GPU faster, I add more stream processors or CUDA cores. If I want to make a CPU faster, I have to invest tons of money into R&D about how to increase instructions per clock. 

Is that why Coffee Lake has had 2 extra cores added and the 9000 series are rumoured to be getting 2 more?



2 minutes ago, Master Disaster said:

Because for years Intel have had no competition and have had no need to improve performance.

That's probably the main reason.

They are holding back on innovations ... so once they get decent competition, they just release something way ahead.

Intel i7 12700K | Gigabyte Z690 Gaming X DDR4 | Pure Loop 240mm | G.Skill 3200MHz 32GB CL14 | CM V850 G2 | RTX 3070 Phoenix | Lian Li O11 Air mini

Samsung EVO 960 M.2 250GB | Samsung EVO 860 PRO 512GB | 4x Be Quiet! Silent Wings 140mm fans

WD My Cloud 4TB


All of the above. Plus, the serial nature of CPUs makes them highly dependent on clock speeds, and we're already scraping the barrel of what silicon can do in that department.
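To put that in one back-of-the-envelope relation (standard formula; IPC = instructions per clock, f = clock frequency):

$$ t_{\text{single thread}} \approx \frac{\text{instruction count}}{\text{IPC} \times f} $$

Once f stops climbing, the only lever left for serial code is IPC, and squeezing out more IPC is exactly the expensive part.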

 

Fun fact: Even back in 1980 Michael Crichton wrote a book about the world reaching the limits of silicon and how boron-doped diamonds were the dream replacement (Edit: I may have misremembered that a bit). Nearly 40 years later and no such luck :(

 

Of course, the death of silicon improvement was greatly exaggerated :P They would have barely even been getting started back then.

If you want good hardware recommendations, please tell us how you intend to use the hardware. There's rarely a single correct answer.


Just now, Master Disaster said:

Is that why Coffee Lake has had 2 extra cores added and the 9000 series are rumoured to be getting 2 more?

That's happening for 2 reasons - partly that software is finally starting to actually make use of more than 4 cores, and partly that AMD is competitive again, pushing Intel to give users more cores.

 

Software that uses GPUs, on the other hand, doesn't need to be changed to take advantage of GPUs with more shader cores.
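To sketch that difference in (purely illustrative) C++ — the brighten() function, the image size, and the core counts in the comments are made up for the example:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Per-pixel work: each output depends only on its own input, so pixels can be
// processed in any order, on any number of cores, with no coordination.
std::uint8_t brighten(std::uint8_t p) {
    return static_cast<std::uint8_t>(std::min(255, p + 30));
}

int main() {
    std::vector<std::uint8_t> image(1920 * 1080, 100);

    // CPU side: the program itself decides how to split the work. Adding cores
    // does nothing unless code like this is written (or rewritten) to spawn
    // more threads.
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = image.size() / n;
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t == n - 1) ? image.size() : begin + chunk;
        pool.emplace_back([&image, begin, end] {
            for (std::size_t i = begin; i < end; ++i) image[i] = brighten(image[i]);
        });
    }
    for (auto& th : pool) th.join();

    // On a GPU the equivalent is "run brighten() once per pixel" and the
    // hardware scheduler spreads those ~2 million independent launches over
    // however many shader cores the card happens to have -- the same code
    // just finishes sooner on a bigger card.
    return 0;
}
```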


Just now, CUDA_Cores said:

Exactly. Intel is trying to make their CPUs competitive again with AMD (also a big reason why CPUs have been improving so fast today), but it's really hard to make each core itself faster. To compensate, they add more cores. 

Something they could have done a looooooong time ago but chose not to because they had no need to.



Intel "tweaks" x86 since 1978, and first "GPU" was from 1999 (at least in name ;)).
There is only so much you can improve at this point in x86.

The other way to look at it is this :
GPU's are "easy", because they operate on bilions of pixels (you usually don't need to know color of one pixel to get to the next one).
CPU's in most cases need to calculate something first, before moving on to the next thing.
"Moar cores" strategy is good for now, but it can't help when program simply doesn't scale with more cores.

I would like to know what Intel does beyond the 7nm era...

CPU : Core i7 6950X @ 4.26 GHz + Hydronaut + TRVX + 2x Delta 38mm PWM
MB : Gigabyte X99 SOC (BIOS F23c)
RAM : 4x Patriot Viper Steel 4000MHz CL16 @ 3042MHz CL12.12.12.24 CR2T @1.48V.
GPU : Titan Xp Collector's Edition (Empire)
M.2/HDD : Samsung SM961 256GB (NVMe/OS) + 3x HGST Ultrastar 7K6000 6TB
DAC : Motu M4 + Audio Technica ATH-A900Z
PSU: Seasonic X-760 || CASE : Fractal Meshify 2 XL || OS : Win 10 Pro x64

36 minutes ago, agent_x007 said:

Intel "tweaks" x86 since 1978, and first "GPU" was from 1999 (at least in name ;)).
There is only so much you can improve at this point in x86.

The other way to look at it is this :
GPU's are "easy", because they operate on bilions of pixels (you usually don't need to know color of one pixel to get to the next one).
CPU's in most cases need to calculate something first, before moving on to the next thing.
"Moar cores" strategy is good for now, but it can't help when program simply doesn't scale with more cores.

I would like to know what Intel does beyond 7nm era...

Along with CPUs and GPUs having different instruction sets.

 

To kind of give an example: in BOINC and F@H, there are certain WUs (work units) that the CPU can do but the GPU can't because of instruction sets. Then there are even WUs that certain GPUs can do but other GPUs have a hard time with; these tend to be WUs that require DP (double precision).
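A quick, generic illustration (plain C++, not tied to any particular project) of the kind of thing that separates single and double precision:

```cpp
#include <cstdio>

int main() {
    // Single precision can no longer represent every integer past 2^24,
    // so adding 1.0f to 16,777,216.0f rounds straight back down.
    float  f = 16777216.0f;
    double d = 16777216.0;
    f += 1.0f;
    d += 1.0;
    std::printf("float:  %.1f\n", f);   // 16777216.0 -- the +1 was lost
    std::printf("double: %.1f\n", d);   // 16777217.0
    return 0;
}
```

WUs whose numbers live in that territory need DP, and consumer GPUs typically run DP at a small fraction of their single-precision rate, which is why some cards have a hard time with them.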

2023 BOINC Pentathlon Event

F@H & BOINC Installation on Linux Guide

My CPU Army: 5800X, E5-2670V3, 1950X, 5960X J Batch, 10750H *lappy

My GPU Army:3080Ti, 960 FTW @ 1551MHz, RTX 2070 Max-Q *lappy

My Console Brigade: Gamecube, Wii, Wii U, Switch, PS2 Fatty, Xbox One S, Xbox One X

My Tablet Squad: iPad Air 5th Gen, Samsung Tab S, Nexus 7 (1st gen)

3D Printer Unit: Prusa MK3S, Prusa Mini, EPAX E10

VR Headset: Quest 2

 

Hardware lost to Kevdog's Law of Folding

OG Titan, 5960X, ThermalTake BlackWidow 850 Watt PSU


There really is no need for more CPU power for the average buyer, only for enthusiasts, who are a niche market, and they could always have bought a Xeon. CPU power is also very limited technologically; making it faster is really difficult, and more cores for what?

They could always make CPUs bigger, but then there is Threadripper, and it's a niche product.

There is always room for more GPU power, always, and that's the main difference.



1 minute ago, asus killer said:

There really is no need for more CPU power for the average buyer, only for enthusiasts, who are a niche market, and they could always have bought a Xeon. CPU power is also very limited technologically; making it faster is really difficult, and more cores for what?

They could always make CPUs bigger, but then there is Threadripper, and it's a niche product.

There is always room for more GPU power, always, and that's the main difference.

Yeah, but it's because most software doesn't take advantage of a lot of cores that mainstream users don't see a need for more of them; their PC doesn't go any faster if they buy a higher core count SKU.


Just now, mikat said:

Yeah, but it's because most software doesn't take advantage of a lot of cores that mainstream users don't see a need for more of them; their PC doesn't go any faster if they buy a higher core count SKU.

It's difficult to code for multiple cores, that's what we hear.

And there are also limitations: in games, even if you had 100 cores, most of them would sit idle, because there is a limit to the physics calculations you need. That limit doesn't exist in GPU processing like 3D rendering, where the more the better.



Just now, asus killer said:

It's difficult to code for multiple cores, that's what we hear.

And there are also limitations: in games, even if you had 100 cores, most of them would sit idle, because there is a limit to the physics calculations you need. That limit doesn't exist in GPU processing like 3D rendering, where the more the better.

it's difficult to split your tasks into little pieces that don't need each other to execute*
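A tiny sketch of the two cases (arbitrary numbers, just to show the shape of the problem):

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> data(1000000, 1.0);

    // Easy to split: every element is handled independently, so this divides
    // cleanly across 4, 40, or 4,000 cores with no coordination.
    for (double& x : data) x = x * 0.5 + 1.0;

    // Hard to split: each step needs the previous result (think a physics
    // timestep or a running total). Extra cores can't shorten this chain;
    // only a faster single core (higher clocks / higher IPC) can.
    double state = 0.0;
    for (double x : data) state = state * 0.99 + x;

    std::printf("final state: %f\n", state);
    return 0;
}
```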


Try telling streamers, content creators and editors that they don't need high core count, high clock speed CPUs; they'll laugh at you.



There are multiple reasons here, but the main ones are that there has been no competition, that it's hard to parallelize many workloads, and that GPUs can be made bigger and bigger if nothing else, whereas increasing the CPU core count won't do much for many workloads because of how the two are utilized and structured.

 

*edit*

Oh, and I'm pretty sure Nvidia has a bigger R&D budget than Intel.

I spent $2500 on building my PC and all i do with it is play no games atm & watch anime at 1080p(finally) watch YT and write essays...  nothing, it just sits there collecting dust...

Builds:

The Toaster Project! Northern Bee!

 

The original LAN PC build log! (Old, dead and replaced by The Toaster Project & 5.0)

Spoiler

"Here is some advice that might have gotten lost somewhere along the way in your life. 

 

#1. Treat others as you would like to be treated.

#2. It's best to keep your mouth shut; and appear to be stupid, rather than open it and remove all doubt.

#3. There is nothing "wrong" with being wrong. Learning from a mistake can be more valuable than not making one in the first place.

 

Follow these simple rules in life, and I promise you, things magically get easier. " - MageTank 31-10-2016

 

 


1 hour ago, mikat said:

it's difficult to split your tasks into little pieces that don't need each other to execute*

Some of the tasks, anyway.

 

The tasks that run on GPUs are very easy to split into little independent pieces, and that's why you can just throw more shader cores at the problem if your GPU is too slow. So we have GPUs with four thousand shader cores, but mainstream CPUs are only at 8 cores.

 

1 hour ago, Bananasplit_00 said:

*edit*

Oh, and I'm pretty sure Nvidia has a bigger R&D budget than Intel.

Hah, no. Definitely not. Intel is a huge juggernaut compared to Nvidia, and that includes R&D spending. Intel is spending over a billion a month, Nvidia is spending half a billion per quarter.
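Annualizing those figures to compare like with like:

$$ \$1\text{B/month} \times 12 \approx \$12\text{B/year} \qquad \text{vs.} \qquad \$0.5\text{B/quarter} \times 4 \approx \$2\text{B/year} $$

That's roughly a six-to-one gap.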


Just now, Sakkura said:

Hah, no. Definitely not. Intel is a huge juggernaut compared to Nvidia, and that includes R&D spending. Intel is spending over a billion a month, Nvidia is spending half a billion per quarter.

Really? My bad then :)


4 minutes ago, Bananasplit_00 said:

There are multiple reasons here, but the main ones are that there has been no competition, that it's hard to parallelize many workloads, and that GPUs can be made bigger and bigger if nothing else, whereas increasing the CPU core count won't do much for many workloads because of how the two are utilized and structured.

 

*edit*

Oh, and I'm pretty sure Nvidia has a bigger R&D budget than Intel.

You realise Intel owns like 1/3 of the world's silicon fab facilities, right?

 

Nvidia is like the annoying speck of dust that lands on the TV compared to Intel.



17 minutes ago, Master Disaster said:

Try telling streamers, content creators and editors that they don't need high core count, high clock speed CPUs; they'll laugh at you.

You do realise that rendering/editing video basically makes the CPU do what a GPU does (i.e. calculating values for pixels)?
No wonder, Sherlock, that it likes the "moar cores" approach, is what I want to say :P

Streamers are a special case.
When you run multiple CPU-intensive tasks at the same time (rendering + gaming is another one*), you get more stable performance with more CPU cores.
*because who likes sitting idle or watching a progress bar until rendering ends?

The main question is: how many PC users actually do that kind of stuff?
I say not many, and that's why "moar cores" isn't the best way to improve x86 performance in general.


13 minutes ago, Master Disaster said:

Try telling streamers, content creators and editors that they don't need high core count, high clock speed CPUs; they'll laugh at you.

One of the most annoying things on the internet is people who reply as if they've only read parts of what we write. The partial-readers plague. :D

Are those average consumers? They could always buy Xeons, so there were advances there.



9 minutes ago, asus killer said:

One of the most annoying things on the internet is people who reply as if they've only read parts of what we write. The partial-readers plague. :D

Are those average consumers? They could always buy Xeons, so there were advances there.

Almost as annoying as people who make blanket statements speaking for every person on the planet, insisting that their own needs are the only thing anybody ever needs.

 

I read your entire post and I disagree.

 

Just because you don't need something doesn't mean nobody else does, and suggesting that streaming and video editing aren't mainstream is as bad as saying no one outside of enthusiasts needs faster CPUs; it's demonstrably wrong.

 

Hint: if there wasn't a need for it, Coffee Lake wouldn't be getting it. That's how consumerism works.



1 minute ago, Master Disaster said:

Almost as annoying as people who make blanket statements speaking for every person on the planet, insisting that their own needs are the only thing anybody ever needs.

 

Just because you don't need something doesn't mean nobody else does, and suggesting that streaming and video editing aren't mainstream is as bad as saying no one outside of enthusiasts needs faster CPUs; it's demonstrably wrong.

Sweet lord, I wasn't speaking for everybody, it's just my opinion. What I said was that the average consumer didn't need it, so there was no point in AMD/Intel going that way. For those who needed niche CPUs, there were always Xeons.

It's not the minority that needs 20 cores that drives what AMD/Intel do.


