
What's holding back CPUs?

Jovidah

As the topic says.

 

While I understand that:

-Adding more cores offers diminishing returns (Amdahl's law; see the quick sketch after this list)

-Increasing IPC gets rather difficult once you hit the ceiling where most instructions are already executed in 1 cycle
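Amdahl's law pins down those diminishing returns. Here's a quick Python sketch; the 90% parallel fraction is just an assumed illustration value, not a measured workload:

# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
# of the work that can be parallelized and n is the core count.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Assume a hypothetical workload that is 90% parallelizable:
for cores in (2, 4, 8, 16, 64):
    print(f"{cores:>2} cores: {speedup(0.9, cores):.1f}x")

# ->  2 cores: 1.8x | 4: 3.1x | 8: 4.7x | 16: 6.4x | 64: 8.8x
# No matter how many cores you add, the speedup can never exceed 10x here.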

 

I'm still somewhat puzzled as to why we've been at a bit of a standstill for the last... 8 years or so. The natural next step would seem to be increased clock speeds, yet CPUs all stay stuck at around 3 to 4 GHz, even though everything has been scaled down a lot. Why is this? Is there some law of physics preventing clocks from going much higher than that? Why aren't we seeing 8 GHz, for example? If the problem were just heat, you'd expect more innovation in cooling solutions.

 

I know part of it is 'no demand means a trickle-to-market strategy by Intel', but I'm just trying to understand why we've seen CPUs stuck in the 3-4 GHz range for the last 10 years.


Competition

CPU:R7 5800X    Motherboard: asrock x470 taichi ultimate   RAM: 32GB G.Skill Ripjaws-V 2X16GB    GPU: Gigabyte GTX1080TI gaming oc 11g   Case: Corsair 600Q Storage: 1TB Samsung 870(boot), samsung 850evo 500GB, 2TB Corsair MX500, samsung 2TB 970 evo plus, WD 5TB black    PSU: Corsair AX860    CPU cooling: Corsair H105


 

1 minute ago, huilun02 said:

When there's a will, there's a way.

The whole 'limitation' argument is BS.

Progress has been slow because Intel has no incentive to make big improvements, due to lack of competition from AMD.

They will haul ass once again if AMD delivers with Zen.

Eh, I don't know. Zen could be a disappointment; I'm not convinced it will necessarily be a powerhouse.


The thing about CPUs is, you get to a point where cores are so powerful that few people will need more power. For every enthusiast system builder, there are thousands of casual users.


12 minutes ago, doomsriker said:

GHz is just a number. It seldom means much.

 

 

I know. But with IPC gains becoming rather stagnant, it seems logical to go back to clock speed for improvements. In fact, Intel is doing exactly that with Kaby Lake. I'm just wondering why we haven't seen further increases, and whether there's some kind of physical limit.


2 minutes ago, Jovidah said:

I know. But with IPC gains becoming rather stagnant, it seems logical to go back to clock speed for improvements. In fact, Intel is doing exactly that with Kaby Lake. I'm just wondering why we haven't seen further increases, and whether there's some kind of physical limit.

Heat is a natural byproduct of energy use. Overclocking will always be limited by cooling. But many fail to realize that heat is not the only constraint. At the end of the day, we are talking about microscopic transistors switching electrical impulses; it is an imperfect process that tends to break down when overstrained.


41 minutes ago, Jovidah said:

As the topic says.

 

While I understand that:

-Adding more cores offers diminishing returns (Amdahl's law; see the quick sketch after this list)

-Increasing IPC gets rather difficult once you hit the ceiling where most instructions are already executed in 1 cycle

 

I'm still somewhat puzzled as to why we've been at a bit of a standstill for the last... 8 years or so. The natural next step would seem to be increased clock speeds, yet CPUs all stay stuck at around 3 to 4 GHz, even though everything has been scaled down a lot. Why is this? Is there some law of physics preventing clocks from going much higher than that? Why aren't we seeing 8 GHz, for example? If the problem were just heat, you'd expect more innovation in cooling solutions.

 

I know part of it is 'no demand means a trickle-to-market strategy by Intel', but I'm just trying to understand why we've seen CPUs stuck in the 3-4 GHz range for the last 10 years.

I forget which video it was, but I liked how Linus put it: "Intel has decided that consumer CPUs are fine the way they are and don't need any more major improvements" (or something like that). I mean, modest clock increases, improvements to IPC, and USB 3.0 support are about all that has happened. But honestly, what else do we need?

CPU: I7 5820K@4.0Ghz | Mobo: ASRock X99 Extreme4 | Ram: 4 x 4Gb G.Skill Tridentz@3200Mhz | GPU: XFX 390x | Cooling: Corsair H115i | PSU: Corsair RMX 850x | Storage: Samsung 250Gb 850 EVO + 2tb Seagate HDD | Case: Inwin 805 | Keyboard: Tesoro Gram Spectrum | Mouse: Tesoro Gram H3L | Mousepad: Corsair MM800 RGB  | OS: Windows 10


I'm not going to buy into the whole "Intel has no competition so they won't do anything" argument. It's compelling, but I don't think it's the whole story. I look at other factors, like:

  • Frequency is a huge power hog. That's why 4.0 GHz has been an effective roadblock for the past decade. Dynamic power dissipation per transistor is commonly simplified to P = CV^2f, and when f is a very large number and you have billions of transistors, that power adds up (rough numbers in the sketch after this list).
    • On that note, Intel's design philosophy is supposedly to accept no more than a 1% increase in power dissipation for every 2% of IPC gained.
  • Integration of components (like the FPU, L2 cache, and memory controller) helped a lot during the '90s and 2000s, and there's nothing really left to integrate into the CPU except the chipset and RAM. At that point you'd just have an SoC. And it's unlikely RAM will be integrated into processors ("Introducing the new Core i7-8700K with 8GB or 16GB of non-upgradeable RAM!"), or the chipset for that matter (it would add a lot of pins).
  • There hasn't been a recent breakthrough in front-end management. Most of the processor die isn't taken up by execution units anymore; it's taken up by the front end, sorting instructions and such. There's only so much you can do there before any improvement becomes a colossal effort that's not worth it. The "final frontier", if you will, of processor design is to simplify the front end. NVIDIA and Elbrus are trying to get there, but we'll see how far they get.
  • The programs most people use aren't even taxing anymore. Think about it: if I can watch YouTube on a phone whose power barely touches that of a processor made in, say, 2008, what makes anyone think a processor today will handle it appreciably faster?
  • There is multithreading, but for the daily programs most people use (web browser, email client, some office productivity program), spreading the workload across cores doesn't work as well as you'd think, because almost all of the time they're just waiting on something to happen.
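To put rough numbers on the P = CV^2f point, here's a minimal sketch. The lumped capacitance and the voltage each clock needs are made-up illustration values, not real chip specs; the takeaway is that voltage has to climb with frequency, so power grows much faster than the clock does.

# Toy model of dynamic CPU power, P = C * V^2 * f.
# All constants are hypothetical illustration values, not real chip specs.
def dynamic_power(c_farads, v_volts, f_hz):
    """Dynamic switching power in watts for one lumped capacitance."""
    return c_farads * v_volts**2 * f_hz

C = 30e-9  # assume ~30 nF of total switched capacitance for the whole chip

# Assume higher clocks need higher core voltage to switch reliably:
for f_ghz, volts in [(3.0, 1.0), (4.0, 1.2), (5.0, 1.4), (8.0, 2.0)]:
    watts = dynamic_power(C, volts, f_ghz * 1e9)
    print(f"{f_ghz:.1f} GHz @ {volts:.1f} V -> {watts:.0f} W")

# -> 90 W, 173 W, 294 W, 960 W: doubling the 4 GHz clock costs ~5.5x the power.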

 


2 minutes ago, M.Yurizaki said:

I'm not going to buy into the whole "Intel has no competition so they won't do anything" argument. It's compelling, but I don't think it's the whole story. I look at other factors, like:

  • Frequency is a huge power hog. That's why 4.0 GHz has been an effective roadblock for the past decade. Dynamic power dissipation per transistor is commonly simplified to P = CV^2f, and when f is a very large number and you have billions of transistors, that power adds up (rough numbers in the sketch after this list).
    • On that note, Intel's design philosophy is supposedly to accept no more than a 1% increase in power dissipation for every 2% of IPC gained.
  • Integration of components (like the FPU, L2 cache, and memory controller) helped a lot during the '90s and 2000s, and there's nothing really left to integrate into the CPU except the chipset and RAM. At that point you'd just have an SoC. And it's unlikely RAM will be integrated into processors ("Introducing the new Core i7-8700K with 8GB or 16GB of non-upgradeable RAM!"), or the chipset for that matter (it would add a lot of pins).
  • There hasn't been a recent breakthrough in front-end management. Most of the processor die isn't taken up by execution units anymore; it's taken up by the front end, sorting instructions and such. There's only so much you can do there before any improvement becomes a colossal effort that's not worth it. The "final frontier", if you will, of processor design is to simplify the front end. NVIDIA and Elbrus are trying to get there, but we'll see how far they get.
  • The programs most people use aren't even taxing anymore. Think about it: if I can watch YouTube on a phone whose power barely touches that of a processor made in, say, 2008, what makes anyone think a processor today will handle it appreciably faster?
  • There is multithreading, but for the daily programs most people use (web browser, email client, some office productivity program), spreading the workload across cores doesn't work as well as you'd think, because almost all of the time they're just waiting on something to happen.

 

Agreed. My daily driver sports a Core 2 Duo, and I couldn't imagine needing more for school tasks.


Quote

 

What's holding back CPUs?

Physics.

 

You need current to turn transistors on and off.

The faster you want to switch transistors, the more current you need.

The more current you use, the more heat you get.

At a certain point you hit a thing called thermal runaway; past that point, no amount of cooling will help you.
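A toy feedback model makes that concrete. Everything here is invented for illustration (the leakage curve, the thermal constants are not real silicon values); the point is only that below a certain cooling capacity the loop diverges:

# Toy model of thermal runaway: hotter silicon leaks more current, which
# makes more heat, which makes it hotter still. Constants are invented.
def simulate(cooling_w_per_deg, base_power_w=100.0, ambient_c=25.0, steps=2000):
    temp = ambient_c
    for _ in range(steps):
        leakage_w = 10.0 * 1.05 ** (temp - ambient_c)  # grows with temperature
        heat_in = base_power_w + leakage_w
        heat_out = cooling_w_per_deg * (temp - ambient_c)
        temp += 0.01 * (heat_in - heat_out)            # crude thermal inertia
        if temp > 300.0:
            return None                                # diverged: runaway
    return temp

for cooling in (2.0, 3.0, 5.0):
    result = simulate(cooling)
    print(cooling, "W/degC ->",
          "thermal runaway" if result is None else f"settles near {result:.0f} degC")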

 

There are also 8 GHz+ world-record overclocks under LN2.

CPU: Intel i7 5820K @ 4.20 GHz | Motherboard: MSI X99S SLI PLUS | RAM: Corsair LPX 16GB DDR4 @ 2666MHz | GPU: Sapphire R9 Fury (x2 CrossFire)
Storage: Samsung 950Pro 512GB // OCZ Vector150 240GB // Seagate 1TB | PSU: Seasonic 1050 Snow Silent | Case: NZXT H440 | Cooling: Nepton 240M
FireStrike // Extreme // Ultra // 8K // 16K

 


Oh, well, since the OP asked about processor frequency specifically, I guess I should touch on that too.

 

I mentioned before that frequency is a huge power hog. Stack Exchange has a good thread on this subject, but it's EE- and math-heavy if you want to understand the why.

 

There's also another thing that nobody really touches on when dealing with high-frequency processors: synchronization. To illustrate the point, let's examine why nearly every peripheral interface today uses a serial transfer method (one bit on the data line at a time) rather than a parallel one (multiple bits on the data lines at a time).

 

When two devices talk to each other, they obviously agree on a data rate. The receiver, though, uses this data rate not as a data rate per se, but as a sampling rate. That is, the receiver samples the data line x times per second. So a 1 Mbps receiver will sample its data line at 1 MHz, or once every 1 us. And whatever state the line is in at that moment, the receiver takes as the data, no matter what the sender intended: if the sender sent a 1 but the receiver sees a 0, the receiver is going to treat it as a 0. In theory, the sender could use a 0.5 Mbps transmission rate while the receiver uses a 1 Mbps receive rate, and the receiver would see the correct data, just doubled (so if the sender sent 1010, the receiver would read 1100 1100).
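A minimal sketch of that sampling behavior (the helper functions are idealized and invented for illustration): a receiver sampling at twice the sender's bit rate reads every bit twice, exactly the 1010 -> 1100 1100 case above.

# A receiver sampling an idealized data line. The line holds each bit's
# level until the sender drives the next bit.
def send(bits, bit_period_ns):
    """Return (time_ns, level) edges for an idealized data line."""
    return [(i * bit_period_ns, b) for i, b in enumerate(bits)]

def line_level(edges, t_ns):
    """Level of the line at time t: the most recent edge at or before t."""
    level = 0
    for edge_t, bit in edges:
        if edge_t <= t_ns:
            level = bit
    return level

def receive(edges, sample_period_ns, n_samples):
    return [line_level(edges, i * sample_period_ns) for i in range(n_samples)]

# Sender at 0.5 Mbps (2000 ns/bit), receiver sampling at 1 MHz (every 1000 ns):
edges = send([1, 0, 1, 0], bit_period_ns=2000)
print(receive(edges, sample_period_ns=1000, n_samples=8))
# -> [1, 1, 0, 0, 1, 1, 0, 0], i.e. 1010 received as 1100 1100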

 

The problem with parallel signaling ultimately came down to a thing called propagation delay. Not only do signals get delayed traveling through components, they also travel at significantly less than the speed of light in a vacuum, about two-thirds of it or so. This is enough that signals that were meant to arrive together can arrive late enough that, when the receiver checks the lines, one of the bits is offset. So if the sender sent 0011 and then 1100, and the rightmost bit has enough propagation delay, the receiver will see 0010 and then 1101. This is why on PCBs you see wavy lines like this:

 

[Image: serpentine (wavy) traces on a PCB]

 

Those are there to add artificial propagation delay to a signal, so that signals which belong together arrive at almost exactly the same time.

 

And once you get into data rates in the Gbps range, your sampling period is 1 ns or less. So if a bit's propagation delays add up to 1 ns or more, the bit misses its chance to meet up with its buddies. Propagation delays for transistors can be anywhere from 10 ns down to picoseconds. Going faster means the bits lose synchronization, because all of these little delays start causing issues.
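The back-of-the-envelope arithmetic looks like this (the trace lengths and the two-thirds-of-c figure are rough assumptions): once the bit period shrinks toward the skew between two traces, a bit starts landing in the wrong sample window.

# Compare the bit period at a given data rate to the extra propagation
# delay caused by a length mismatch between two traces. Rough numbers.
SIGNAL_SPEED_M_S = 3.0e8 * (2.0 / 3.0)  # signals travel at roughly 2/3 of c

def bit_period_ns(rate_gbps):
    return 1.0 / rate_gbps               # e.g. 1 Gbps -> 1 ns per bit

def trace_delay_ns(length_mm):
    return (length_mm / 1000.0) / SIGNAL_SPEED_M_S * 1e9

skew = trace_delay_ns(60) - trace_delay_ns(40)  # 20 mm mismatch -> 0.1 ns
for rate in (0.1, 1.0, 5.0, 10.0):
    period = bit_period_ns(rate)
    verdict = "fine" if skew < period / 2 else "bit misses its sample window"
    print(f"{rate:>4} Gbps: bit period {period:.2f} ns, skew {skew:.2f} ns -> {verdict}")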

 

While this isn't exactly the explanation for why processors haven't been running faster, processors consist of many little components running in parallel that must talk to each other and remain in sync, lest they do something like work on the same thing twice, or clobber someone's result with either an incorrect one (based on stale data) or an outdated one. When you start running faster, those parts can fall out of sync with each other more easily.


1 hour ago, Jovidah said:

I'm still somewhat puzzled as to why we've been at a bit of a standstill for the last... 8 years or so.

Physics. It turns out that silicon can only switch so fast.

A sieve may not hold water, but it will hold another sieve.

i5-6600, 16Gigs, ITX Corsair 250D, R9 390, 120Gig M.2 boot, 500Gig SATA SSD, no HDD


The lack of high-end competition has certainly limited innovation by Intel. We are at a point in CPU history where, unless Intel and/or AMD do something amazing to pique the interest of the CPU-buying public, their revenues will start to drop, as people hang on to their older CPUs longer because the new model doesn't offer much over what is still good enough. The market is saturated, so without the regular upgrade cycle it becomes self-defeating for Intel: their income will continue to diminish.

 

I am still running a 5-year-old i7-2600 (non-K) at 4.4 GHz that, for the most part, keeps up with a current i5-6600. Is it the latest and greatest? No, it isn't. While a nice new X99 8-core toy would be fun, the 2600 is still good enough to do everything I need it to do, except top the benchmark leaderboards.


People claiming that Intel has just been coasting while AMD struggles are plainly wrong. There is always a reason to innovate outside the gaming industry.

 

There is a rather hard wall that both companies have hit. The next step isn't an engineering problem; it's one of physics, materials, cost, and implementation.


1 hour ago, huilun02 said:

When there's a will, there's a way.

The whole 'limitation' argument is BS.

Progress has been slow because Intel has no incentive to make big improvements, due to lack of competition from AMD.

They will haul ass once again if AMD delivers with Zen.

No, this is not how it works.

 

You think Intel employs the best minds in the world to sit around and twiddle their thumbs until something lights a fire under their asses?

 

They don't hire or lay off employees on a whim depending on whether there's competition. Those employees are working all the same, just with softer deadlines.

 

 


It's probably a combination of several factors, but I think a big one is just that the desktop market is getting smaller and oriented more and more toward professionals/prosumers as the casual users are finding other types of devices. The biggest area for growth today for semiconductor manufacturers is in low-power/mobile applications, and indeed that's probably the area in which Intel has made the biggest strides over the past few years. It's not what gets enthusiasts excited, but you can't look at desktop products in a vacuum.


Makes sense... the relationship is steeply superlinear (voltage has to rise with clock speed, so power grows roughly with the cube of frequency), which is why going from 4.4 to 4.6 GHz can bring massive heat gains. Do we need stuff like 8 GHz stock speeds? Or is it more likely that games and other software just get much better at multithreaded workloads, so you end up with something like 40 cores?
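For instance, plugging hypothetical overclocking voltages into the P = CV^2f relation from earlier in the thread:

# Quick arithmetic on the 4.4 -> 4.6 GHz point, using P proportional to
# V^2 * f. Both voltages are hypothetical overclocking values.
def relative_power(v_volts, f_ghz):
    return v_volts**2 * f_ghz

before = relative_power(1.25, 4.4)  # assume 1.25 V holds 4.4 GHz stable
after = relative_power(1.40, 4.6)   # assume the last 200 MHz needs 1.40 V
print(f"~{after / before - 1:.0%} more heat for ~5% more clock")  # ~31% more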

