
CPU performance doesn't seem to follow Moore's Law... why not?

Basically what the title says.

 

Moore's Law is the observation that the number of transistors packed into a given area on an integrated circuit doubles roughly every two years (about a 40% increase per year). However, if you look at the performance increase from generation to generation of CPUs over the last few years, that increase is far from 40% per year. Take Intel's unlocked i7 series, for example. Each generation only seems to add 10% or so over the previous one (if we're lucky), and it looks like that trend is going to continue with Skylake, if the supposedly leaked benchmarks are to be believed. Of course, the performance gain depends on the particular application, but I think 10% is a reasonable average.
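(As a quick sanity check on the numbers above, here's a back-of-the-envelope sketch in Python; the assumption of roughly one CPU generation per year is mine, purely for illustration:)

```python
# Doubling the transistor count every 2 years is the same as ~41% growth per year,
# while ~10% per generation compounds much more slowly.
yearly_from_doubling = 2 ** (1 / 2) - 1  # ~0.414, i.e. roughly the "40% per year" above

years = 6  # assume about one CPU generation per year, for illustration only
transistors = (1 + yearly_from_doubling) ** years  # ~8x transistors
performance = 1.10 ** years                        # ~1.77x at 10% per generation
print(f"Doubling every 2 years is about {yearly_from_doubling:.0%} per year")
print(f"After {years} years: ~{transistors:.1f}x transistors vs ~{performance:.2f}x CPU performance")
```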

 

On the other hand, if you look at GPUs, the trend is quite different. We can take the GTX 680, 780, and 980 as examples. Here's AnandTech's review of the 980 from when it was first released; it includes a bunch of benchmarks with numbers for all three of the aforementioned cards:

 

http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review

 

Just checking out a few different games and graphical settings, it looks like the typical increase in performance going from a 680 to a 780 is 20-35%, and then about the same again going from a 780 to a 980. It's not the theoretical 40% per year, but it's way closer. So why is this the case?
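(And annualizing those per-step GPU gains, assuming the 680→780 and 780→980 steps were each roughly 14-16 months apart; the launch-interval figure is my approximation, not something from the review:)

```python
# Convert a per-generation GPU gain into an approximate yearly rate.
per_step_gain = 0.275    # midpoint of the 20-35% range above
months_between = 15      # assumed average gap between launches (approximate)
annualized = (1 + per_step_gain) ** (12 / months_between) - 1
print(f"~{per_step_gain:.0%} per step every ~{months_between} months "
      f"is about {annualized:.0%} per year")  # roughly 21% per year
```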

 

I know that the internal architecture is necessarily completely different for GPUs vs CPUs, since CPUs have to be able to deal with much more complicated operations, but I don't really see why that should be an explanation. If we consider the fact that a computer processor (GPU or CPU) is mainly just a large collection of logic units, which are in turn mainly built up from transistors, it should follow that doubling the transistors should give roughly double the performance (assuming operating frequency stays the same).

 

The only other thing that I can think of is the growing strength of the on-board graphics processors in modern CPUs. Could it be that the bulk of the extra transistors from generation to generation are going there, instead of into the main CPU cores? If that's the case, it seems strange that the manufacturers wouldn't offer more CPU models which skip the on-board graphics and put as much power as possible into the main cores.

 

Is there anyone out there who can better explain this situation?

 

 


There's competition in the GPU market.

That's that. If you need to get in touch, chances are you can find someone who knows me that can get in touch.


Diminishing returns.

My rig:
CPU: i5 4690k 24/7 @4.4ghz (1.165v) Max 4.7ghz (1.325v) COOLER: NZXT Kraken X61 MOBO: Asus Z97-A   RAM: 16GB Crucial Ballistix Tactical   GPU: EVGA GTX 970 SSC   PSU: EVGA GS 650W   CASE: NZXT Phantom 530 HDD: WD Caviar Blue 1TB + WD Black 2TB


I'm not an expert yet (currently studying computer engineering), but to my understanding, the problem is size. We are talking about nanometer-scale transistors when we talk about CPUs, and the fact is that we are getting closer and closer to the physical limit for the construction of those transistors (something to do with the properties of materials at such small sizes; I know, for example, that aluminium becomes incredibly explosive when it's that small).

Intel had lots of trouble achieving 14 nm transistors. Years ago, we were talking about sizes far bigger than these, so Moore's Law can't just keep being followed.

 

Look at SSDs; the same thing is happening there. We've had to start using a 3D architecture for them, as the limits of planar space/capacity were being reached. Before this, they also followed Moore's Law.

Planning on trying Star Citizen (highly recommended)? STAR-NR5P-CJFR is my referral link 


-snip-

I think that people are incorrect when they call Gordon Moore's hypothesis on the advancement of CPU transistors a "law". It's not really much of a law, truth be told. A law needs to have strong scientific foundations and be backed by substantial evidence. Personally, I think there's going to be a point in technological advancement where progress almost comes to a standstill. Although I'm still waiting for the Singularity.

Did my post help you? Then make sure to rate it!

Check out my post on symbolic links! || PSU ranking and tiers || Pokemon Thread

 


snip

Moore's Law is just an observation. Just some guy who made a projection that ended up being kind of true for a while. Even Gordon Moore himself said there would be a "slowdown" in 2013. And, well, there kind of has been. The dude just knows the business well.


There is no competition. Intel is just sitting on their ass twiddling their thumbs making shit, and AMD is staring with their mouth open doing nothing. Since there is no competition, there is no growth. Intel could definitely come out with a CPU with a 60% performance gain tomorrow, but they won't, because there is no competition.

“I don't like country music, but I don't mean to denigrate those who do. And for the people who like country music, denigrate means 'put down'.” - Bob Newhart

Remember kids: just because Linus has a video on it doesn't mean that it's the best choice. Abide by the CoC | Looking for build help? Read this before posting |


There is no competition. Intel is just sitting on their ass twiddling their thumbs making shit, and AMD is staring with their mouth open doing nothing. Since there is no competition, there is no growth. Intel could definitely come out with a CPU with a 60% performance gain tomorrow, but they won't, because there is no competition.

Right, cuz you know, things don't get harder the smaller the parts get. And we can just double the size of the die, because everyone wants more power draw, right?

I'm not saying that Intel isn't stagnating, but you are dead wrong if you think they can improve performance by that much.

My rig:
CPU: i5 4690k 24/7 @4.4ghz (1.165v) Max 4.7ghz (1.325v) COOLER: NZXT Kraken X61 MOBO: Asus Z97-A   RAM: 16GB Crucial Ballistix Tactical   GPU: EVGA GTX 970 SSC   PSU: EVGA GS 650W   CASE: NZXT Phantom 530 HDD: WD Caviar Blue 1TB + WD Black 2TB


I think that people are incorrect when they call Gordon Moore's hypothesis on the advancement of CPU transistors a "law". It's not really much of a law, truth be told. A law needs to have strong scientific foundations and be backed by substantial evidence. Personally, I think there's going to be a point in technological advancement where progress almost comes to a standstill. Although I'm still waiting for the Singularity.

 

I agree with you; that's why I said "Moore's Law is the observation that...". I also agree that we'll get to a point where currently-used technology won't be able to offer any more improvements, because we will have reached a physical limit. We're getting fairly close to the point where individual (traditional) transistors can't be shrunk any further. That being said, we're still not at that point. Intel themselves have stated that they expect Moore's Law to continue until at least 2018 with the 7 nm process:

 

http://www.pcworld.com/article/2887275/intel-moores-law-will-continue-through-7nm-chips.html

 

Plus, the question I posed was about the past, where Moore's prediction HAS pretty much held strong. The fact is that transistor counts HAVE been roughly doubling every two years, but CPU performance growth has been far from that.

 

There is no competition. Intel is just sitting on their ass twiddling their thumbs making shit, and AMD is staring with their mouth open doing nothing. Since there is no competition, there is no growth. Intel could definitely come out with a CPU with a 60% performance gain tomorrow, but they won't, because there is no competition.

 

This seems like a more reasonable explanation to me, especially since it also fits the performance trends we see in the GPU market. And if THAT'S the true explanation, then it's even more frustrating than the idea that on-board graphics are stealing away our sweet, sweet CPU core power...


Because of a lack of competition. AMD is too far behind; the best they have won't even compete with a three-generations-old Intel chip... so Intel doesn't need to push performance forward anymore. They're cashing in ATM.

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


-snip-

Sorry, I didn't completely understand the original post :P. I think physical limitations won't be a problem though. With nanotechnology becoming more and more feasible, with all the nano-medicine and whatnot, I think instead of microprocessors we'll probably have nano-processors in the future.

Did my post help you? Then make sure to rate it!

Check out my post on symbolic links! || PSU ranking and tiers || Pokemon Thread

 


It's dead on the consumer side for the most part; Moore's Law continues on the high end (ish).

 

A 10-core was as high as it got a few years ago, with ~2.6bn transistors; now we have things like the 18-core with 5.7bn transistors.
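(For what it's worth, those two transistor counts are roughly consistent with the classic cadence. A small sketch, assuming the two parts launched about three years apart, which is my approximation:)

```python
import math

# Implied doubling time going from the ~2.6bn-transistor 10-core
# to the ~5.7bn-transistor 18-core, assuming a ~3-year gap between them.
t_years, old_count, new_count = 3.0, 2.6e9, 5.7e9
doubling_time = t_years * math.log(2) / math.log(new_count / old_count)
print(f"~{new_count / old_count:.1f}x transistors in ~{t_years:.0f} years "
      f"means doubling roughly every {doubling_time:.1f} years")
```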


Sorry, I didn't completely understand the original post :P. I think physical limitations won't be a problem though. With nanotechnology becoming more and more feasible, with all the nano-medicine and whatnot, I think instead of microprocessors we'll probably have nano-processors in the future.

 

Haha, I think "microprocessor" is a bit of an archaic term nowadays. The fact that we're now seeing products built on a 14 nm process seems like a pretty clear indication that we're well into the realm of nanoprocessors.

 

And sadly, we are getting close to the physical limitations of traditional technology, at least in terms of shrinking down individual components. I mean, we have individual transistors with features around 14 nm wide... that's only about 50 atoms across. When you get to that scale, the properties of the materials you're using start to change, sometimes quite drastically.
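(That "about 50 atoms" figure checks out as an order-of-magnitude estimate; a quick sketch, using an effective atom-to-atom spacing of roughly 0.27 nm for silicon, which is my approximation:)

```python
# Rough count of silicon atoms across a 14 nm feature.
feature_nm = 14.0
atom_spacing_nm = 0.27  # approximate effective atomic spacing in crystalline silicon
print(f"{feature_nm:.0f} nm / {atom_spacing_nm} nm is roughly "
      f"{feature_nm / atom_spacing_nm:.0f} atoms across")  # ~52 atoms
```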

 

So sometime in the next decade, processor manufacturers are going to have to change to something else. First they'll probably start building "up" and having multiple layers of transistors, like what we're starting to see in SSDs (thanks Xaring for pointing that out), but eventually they might have to transition to a more fundamentally different technology.

 

My bets are on light-based technologies (photonics and plasmonics) as opposed to the current electricity-based technologies.


Moore's Law is an expected rate of transistor density growth, one that Moore originally said would cycle every 12 months and that was later backed off to 18-24 months.

 

 

To summarize this article: http://www.extremetech.com/computing/178529-this-is-what-the-death-of-moores-law-looks-like-euv-paused-indefinitely-450mm-wafers-halted-and-no-path-beyond-14nm

 

Moore's Law died when silicon suddenly got a whole lot more expensive to shrink down. Intel and AMD believed that 14 nm would be the threshold, with anything smaller being extremely difficult. It turned out that 28 nm was that threshold, and Intel had to start using FinFET (tri-gate) transistor technology at 22 nm lithography to make chips larger than 100 mm^2. A more cost-effective method of making silicon wafers (the move to 450 mm wafers) was delayed for many years around this time, leaving Samsung, AMD and others with no choice but to skip 20/22 nm, go straight for 14/16 nm, and make do with current wafer sizes.

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


-snip-

Interesting point about plasmonic nanoparticles. Plasmonic nanoparticles are particles whose electrons can couple with electromagnetic wavelengths that are bigger than the particle itself, right? (Correct me if I'm wrong; my knowledge in this field is very limited.) I've heard about them being used in photovoltaic cells and cancer treatment, but not so much in computers. Would using them in a CPU allow a smaller chip to have faster clock speeds, since clock speeds are measured in hertz?

Did my post help you? Then make sure to rate it!

Check out my post on symbolic links! || PSU ranking and tiers || Pokemon Thread

 


The technology is there, but it's not being released, since there is no competition for Intel. Why release something when you can slow down releases and get more for less R&D?


snip

 

Yes, you are right that a plasmon (in this context, anyway) is a coupling between the electrons in a metal and an electromagnetic wave (i.e. light) at the surface of the metal. It means that the plasmon can only exist at the surface between a metal and an insulator (like air). If it tries to move off into the insulating material, it dies out, because the electrons can't follow (they're trapped in the metal), and if it tries to move deeper into the metal, it again dies because the electromagnetic wave very quickly dissipates all of its energy. So the plasmon is a wave that's trapped at the surface, much like a wave trapped on the surface of a pool of water. It doesn't have to be the surface of a nanoparticle either. It could be a tiny wire, or a metal film. 

 

The interesting thing is that plasmons offer a unique way of manipulating light. But let's stop and look at fiber optic cables for a second. This is a light-based data application which has already existed for some time. Why? Because with light, it's possible to move around MASSIVE amounts of data very quickly, especially when compared with normal electronics. Think about the massive fiber optic cables that span across the ocean, connecting North America and Europe. But there is a catch. We can't shrink fiber optic cables down, which means we can't really build a processor using light and optical fibers. 

 

This is where plasmons come in. When you use light (which is a wave) to excite a plasmon (another wave) at the surface of a metal, both the light and the plasmon have the same frequency, which means that they can both potentially be used to transfer the same amount of data. However, a plasmon can be squeezed into a MUCH smaller space than the light by itself. 
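(For anyone curious, the standard textbook relation behind that "squeezing" claim, written for a flat metal-dielectric interface rather than a nanoparticle, is the surface plasmon dispersion relation, where $\varepsilon_m$ is the metal's permittivity and $\varepsilon_d$ the dielectric's:)

$$ k_{\mathrm{SPP}} = \frac{\omega}{c}\sqrt{\frac{\varepsilon_m\,\varepsilon_d}{\varepsilon_m + \varepsilon_d}} $$

Because $\varepsilon_m$ is negative (with $|\varepsilon_m| > \varepsilon_d$ for a bound mode), the factor under the square root comes out larger than $\varepsilon_d$, so $k_{\mathrm{SPP}}$ is bigger than the wavevector light would have in the dielectric alone: same frequency, shorter wavelength, tighter confinement.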

 

The conclusion here is that plasmons have the potential to combine the high data density of light with the very compact circuitry we're accustomed to with modern CPUs. 


Even if the transistors are smaller, that doesn't mean they're packing in twice as many of them. They could pack in 60% more transistors to get slightly better performance than last gen, but then use less power, or put in a bigger iGPU, or what have you.

 

The goal for a long time hasn't been increasing performance; it has been reducing power usage as more and more devices become battery-reliant.

4K // R5 3600 // RTX2080Ti


snip

 

That's a good point. Still, you'd think they might make an exception to that for their Extreme Edition processors, since the people buying those won't care one bit how much power is being used, or how good the integrated GPU is. Let's see:

 

https://www.pugetsystems.com/labs/articles/Core-i7-5960X-vs-4960X-Performance-Comparison-588/

 

It looks like the 5960X, 5930K, and 4960X are all trading blows with each other, depending on the benchmark. For the most part, it doesn't really seem like the per-core performance has changed much at all between the two generations. The only big increase in performance that stands out to me is the Linpack benchmark. As the author of the article points out, Linpack is able to use the AVX2 instruction set, which Ivy Bridge E didn't have. I guess a new instruction set IS a form of performance improvement, but not quite in the sense that I was thinking in the original post. Also, according to the LINPACK Benchmarks Wikipedia page:

 

 

 

However, a computer's performance when running actual applications is likely to be far behind the maximal performance it achieves running the appropriate LINPACK benchmark.
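(To put a rough number on the AVX2 point above: a sketch of theoretical peak double-precision throughput, using the commonly quoted figures of 8 FLOPs/cycle/core for AVX on Ivy Bridge-E and 16 for AVX2+FMA on Haswell-E, and approximate base clocks; real applications rarely get anywhere near these peaks, which is exactly the Wikipedia quote's point:)

```python
# Theoretical peak double-precision GFLOPS = cores * GHz * FLOPs per cycle per core.
def peak_gflops(cores, ghz, flops_per_cycle):
    return cores * ghz * flops_per_cycle

ivy_4960x = peak_gflops(cores=6, ghz=3.6, flops_per_cycle=8)       # AVX: 256-bit add + mul
haswell_5960x = peak_gflops(cores=8, ghz=3.0, flops_per_cycle=16)  # AVX2: two 256-bit FMAs
print(f"i7-4960X peak is roughly {ivy_4960x:.0f} GFLOPS")      # ~173 GFLOPS
print(f"i7-5960X peak is roughly {haswell_5960x:.0f} GFLOPS")  # ~384 GFLOPS
```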

 

I guess what this all boils down to is ultimately money. As plenty of people have pointed out, without any real competition, there's no reason for Intel to push the technology as fast as they are able to. They just need to be able to stay a little bit ahead of the next best thing.

 

And that's why I cheer for AMD, but still buy from Intel  :P .


-snip-

 

Moore's Law is only about transistor density. Nowhere in Moore's Law is performance mentioned.

 

While there may be an expectation that doubling the number of components will double the "performance", this is rarely the case. As an example, you only have to look as far as SLI/Crossfire. If you are getting 30 FPS with one graphics card, you should get 60 with two cards in SLI/Crossfire, 90 with three, and 120 with four. How often do you get 1:1 scaling like that? Never.
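(One common way to put numbers on that scaling argument is Amdahl's law, which the post doesn't mention by name; the 10% non-scaling fraction below is just an illustrative assumption:)

```python
# Amdahl's law: if a fraction s of the work can't be split across units,
# speedup(n) = 1 / (s + (1 - s) / n).
def amdahl_speedup(n_units, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)

for n in (1, 2, 3, 4):
    print(f"{n} card(s): {amdahl_speedup(n, serial_fraction=0.10):.2f}x")
# Even if only 10% of the frame time can't scale, two cards give ~1.8x
# and four cards only ~3.1x -- never the ideal 1:1 scaling described above.
```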

 

Your comparison between the "performance" increase of GPUs vs. CPUs is somewhat flawed. "Performance" has a different meaning for CPUs than for GPUs. It is an apples-and-oranges comparison, and it always will be until people start using the same metric for CPUs as they do for GPUs. Basically, completely different things are being measured when you benchmark a CPU and a GPU. If people started measuring CPU performance using the same metric they use for graphics cards, that 10% yearly improvement statistic would change. Although I'm not sure how I would use a CPU's frame rate and resolution figures to decide which CPU I wanted for my next build.

Sgt. Murphy says, "Never forget that your weapons and equipment were made by the lowest bidder."

 

