Intel: Chips To Become Slower But More Energy Efficient

HKZeroFive

I am not OK with this trend. For laptops, smartphones and other devices that run on batteries, fine: keep the performance at a reasonable level and push for energy efficiency. But for desktops? Come on. Some of us do more than just browse Facebook and play Farmville. It still takes me several hours to do certain things because my 4.4 GHz i5-2500K is such a massive bottleneck.

At least give us 6 core processors if you are going to make performance per core worse. They really should just make all i7 chips 6 cores or higher.


2 hours ago, Coaxialgamer said:

They could just make bigger dies while staying on 14 nm a bit longer, instead of making smaller and smaller dies by going to 10 nm or less.

 

Sure, cooling would become an issue sooner or later, and they would have to move on eventually.

Making dies bigger increases cost; why do you think the 18-core Xeon Linus has was so expensive?


1 hour ago, Watermelon Guy said:

Making dies bigger increases cost; why do you think the 18-core Xeon Linus has was so expensive?

Not as much as you would think, actually. If you compare the 2600K and the FX-8350, the die of the FX chip is actually much larger (216 mm² vs 319 mm², and that's not counting the iGPU, which takes up a large portion of the Intel chip).

However, it didn't cost more than the 2600K at launch.

 

A large part of the cost is Intel asking the consumer to pay more because of a lack of competition on AMD's part.

AMD Ryzen R7 1700 (3.8ghz) w/ NH-D14, EVGA RTX 2080 XC (stock), 4*4GB DDR4 3000MT/s RAM, Gigabyte AB350-Gaming-3 MB, CX750M PSU, 1.5TB SDD + 7TB HDD, Phanteks enthoo pro case


On laptops, fine, but on desktops I don't really care about power consumption; I care about performance for the price I am willing to pay.

 

Right now I am just fine with my 3570k :)

“Remember to look up at the stars and not down at your feet. Try to make sense of what you see and wonder about what makes the universe exist. Be curious. And however difficult life may seem, there is always something you can do and succeed at. 
It matters that you don't just give up.”

-Stephen Hawking


7 hours ago, LAwLz said:

I am not OK with this trend. For laptops, smartphones and other devices that run on batteries, fine: keep the performance at a reasonable level and push for energy efficiency. But for desktops? Come on. Some of us do more than just browse Facebook and play Farmville. It still takes me several hours to do certain things because my 4.4 GHz i5-2500K is such a massive bottleneck.

At least give us 6 core processors if you are going to make performance per core worse. They really should just make all i7 chips 6 cores or higher.

It's for servers and IoT too. There's no getting around this. Even IBM is admitting it, and it's the clock-speed king. That said, architectural improvements will still increase IPC, and newer instructions will be able to do more in a single command. The problem will always be software not keeping up; that's the main problem now. There is double the compute power in a Haswell quad vs. a Sandy Bridge quad at the same clock speed, and yet benchmarks don't update for newer instructions, so you don't get to see the effects.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


6 hours ago, Watermelon Guy said:

Making dies bigger increases cost; why do you think the 18-core Xeon Linus has was so expensive?

It's bigger than most dGPU dies at 662 mm².

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


6 hours ago, Paranoid Kami said:

Guess I'll be staying on my 2600k then.

There will still be architectural improvements, and eventually software will have to catch up to newer instructions. It's about 5 years behind. There is double the compute power in a Haswell quad compared to a Sandy Bridge quad, but software doesn't keep up well enough to expose that outside the HPC arena. There is performance sitting on the table, going untapped.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


1 hour ago, patrickjp93 said:

It's for servers and IoT too. There's no getting around this. Even IBM is admitting it, and it's the clock-speed king. That said, architectural improvements will still increase IPC, and newer instructions will be able to do more in a single command. The problem will always be software not keeping up; that's the main problem now. There is double the compute power in a Haswell quad vs. a Sandy Bridge quad at the same clock speed, and yet benchmarks don't update for newer instructions, so you don't get to see the effects.

No, what he is saying is that future architectures will have lower IPC. It would not be "reduced speed" if they kept increasing speed. Unless by "speed" he meant just the frequency, but I don't think that's what he meant. The article further strengthens my belief that he means newer architectures will have worse IPC, or the same IPC but lower clocks (in either scenario the performance goes down), by saying "Holt has stated not just that Moore’s Law is coming to an end in practical terms, in that chip speeds can be expected to stall, but is actually likely to roll back in terms of performance". So we will end up with a generation or two where the new chip performs worse than the previous generation.

 

When you say "double the compute power in Haswell vs Sandy", are you talking about the beefier GPU or just AVX2? Because x264 can use AVX2 and it did not get anywhere near a doubling in performance from it. It was maybe like 15%, which is nice but nothing amazing. If you are talking about the GPU then far from all tasks are suitable for GPGPU.

In both scenarios we run into the problem that it might be a 100% performance increase for a very specific instruction, but most programs have to do more than one type of work, so a 100% gain in one area does not translate to a 100% gain overall.
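That's basically Amdahl's law. A quick sanity check (the one-third figure here is just an assumed fraction for illustration, not a measured x264 profile): if a third of the runtime is in AVX2-accelerated code and that part doubles in speed, the overall speedup is

\[ \text{speedup} = \frac{1}{(1-p) + p/s} = \frac{1}{\tfrac{2}{3} + \tfrac{1}{3}\cdot\tfrac{1}{2}} = 1.2 \]

which is in the same ballpark as the ~15% actually observed, nowhere near 2x.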

 

I still think Intel should move to 6 cores even on their mainstream platform.


Forgive my ignorance, but can't they just like... Make a chip with the same TDP as the old ones but marginally more powerful?

(i.e. bigger chips?)

Just remember: Random people on the internet ALWAYS know more than professionals, when someone's lying, AND can predict the future.

i7 9700K (5.2Ghz @1.2V); MSI Z390 Gaming Edge AC; Corsair Vengeance RGB Pro 16GB 3200 CAS 16; H100i RGB Platinum; Samsung 970 Evo 1TB; Samsung 850 Evo 500GB; WD Black 3 TB; Phanteks 350x; Corsair RM19750w.

 

Laptop: Dell XPS 15 4K 9750H GTX 1650 16GB Ram 256GB SSD



2 hours ago, LAwLz said:

No, what he is saying is that future architectures will have lower IPC. It would not be "reduced speed" if they kept increasing speed. Unless by "speed" he meant just the frequency, but I don't think that's what he meant. The article further strengthens my belief that he means newer architectures will have worse IPC, or the same IPC but lower clocks (in either scenario the performance goes down), by saying "Holt has stated not just that Moore’s Law is coming to an end in practical terms, in that chip speeds can be expected to stall, but is actually likely to roll back in terms of performance". So we will end up with a generation or two where the new chip performs worse than the previous generation.

 

When you say "double the compute power in Haswell vs Sandy", are you talking about the beefier GPU or just AVX2? Because x264 can use AVX2 and it did not get anywhere near a doubling in performance from it. It was maybe like 15%, which is nice but nothing amazing. If you are talking about the GPU then far from all tasks are suitable for GPGPU.

In both scenarios we run into the problem that it might be a 100% performance increase for a very specific instruction, but most programs have to do more than one type of work, so a 100% gain in one area does not translate to a 100% gain overall.

 

I still think Intel should move to 6 cores even on their mainstream platform.

He said nothing of IPC, and IPC is fully decoupled from clock rate. Performance may slip just a hair, but IPC will continue going up. All this is doing is slowing the clock back down.

 

Just AVX 256. x264 is also very bandwidth-bound: you're spending more time streaming data from RAM into cache than you are encoding or decoding it.

 

Most programs can be rewritten for better data and instruction-level parallelism. It's not hard to do. It just takes some discipline.
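As a minimal sketch of the instruction-level-parallelism part (a made-up dot-product hot loop in C++, not code from any program discussed here): splitting one long dependency chain into a few independent accumulators lets the core overlap the multiply-adds instead of waiting on each one in turn.

#include <cstddef>

// Naive version: every addition depends on the previous one, so the loop
// runs at the latency of a single add chain.
double dot_naive(const double* a, const double* b, std::size_t n) {
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
}

// ILP version: four independent accumulators the CPU can advance in parallel.
// (Results can differ in the last bits because the additions are reordered.)
double dot_ilp(const double* a, const double* b, std::size_t n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    for (; i < n; ++i)   // leftover elements
        s0 += a[i] * b[i];
    return (s0 + s1) + (s2 + s3);
}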

 

No, most mainstream consumers have zero need for more than 2 cores, let alone 4. For enthusiasts, more makes sense. High-end gaming (which is still so badly coded that I'm not yet convinced more cores are necessary) is an enthusiast pursuit. It's not like Intel is even gouging you for the 2 extra cores. It's a different platform, but the overall cost difference is microscopic.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


58 minutes ago, niofalpha said:

Can't they just like... Make a chip with the same TDP as the old ones but marginally more powerful?

(i.e. bigger chips?)

You do know the E5 2699 V3 is already 662 mm², right? It's bigger than most dGPU dies. There isn't much room for bigger dies; yields start falling drastically, which will increase Intel's prices.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


3 minutes ago, patrickjp93 said:

You do know the E5 2699 V3 is already 662 mm², right? It's bigger than most dGPU dies. There isn't much room for bigger dies; yields start falling drastically, which will increase Intel's prices.

I meant larger cores, i.e. four more powerful cores in the current die size of the LGA 2011 CPUs.

Just remember: Random people on the internet ALWAYS know more than professionals, when someone's lying, AND can predict the future.

i7 9700K (5.2Ghz @1.2V); MSI Z390 Gaming Edge AC; Corsair Vengeance RGB Pro 16GB 3200 CAS 16; H100i RGB Platinum; Samsung 970 Evo 1TB; Samsung 850 Evo 500GB; WD Black 3 TB; Phanteks 350x; Corsair RM19750w.

 

Laptop: Dell XPS 15 4K 9750H GTX 1650 16GB Ram 256GB SSD



I take it they're doing this because we're getting close to the limit of what can actually be done with silicon in terms of die shrinks and making things work at that level. I always knew there would be two options when we got to this bridge: actually do some R&D and come up with some massively different new technology that will let us keep getting faster, or give up. I never seriously expected anyone - especially Intel of all people - to take the latter, but I guess computers are fast enough. 640K - er, sorry, a 6700K - ought to be enough for anybody, right?

Solve your own audio issues  |  First Steps with RPi 3  |  Humidity & Condensation  |  Sleep & Hibernation  |  Overclocking RAM  |  Making Backups  |  Displays  |  4K / 8K / 16K / etc.  |  Do I need 80+ Platinum?

If you can read this you're using the wrong theme.  You can change it at the bottom.


3 minutes ago, niofalpha said:

I meant larger cores, i.e. four more powerful cores in the current die size of the LGA 2011 CPUs.

If it's for scale-up workloads like analytics, Intel could do that, but overall the server world wants more cores, plain and simple.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


5 minutes ago, Ryan_Vickers said:

I take it they're doing this because we're getting close to the limit of what can actually be done with silicon in terms of die shrinks and making things work at that level. I always knew there would be two options when we got to this bridge: actually do some R&D and come up with some massively different new technology that will let us keep getting faster, or give up. I never seriously expected anyone - especially Intel of all people - to take the latter, but I guess computers are fast enough. 640K - er, sorry, a 6700K - ought to be enough for anybody, right?

No one is giving up, but CNT and graphene aren't ready. Silicon-Germanium is coming, but transistor designs necessarily have to change now. If that means switching speeds take a hit for a while, so be it, but no one's giving up.

 

For most people, a 6700K is overkill. For enthusiast gamers, it should be more than adequate, but games are coded like garbage so w/e.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


2 minutes ago, patrickjp93 said:

No one is giving up, but CNT and graphene aren't ready. Silicon-Germanium is coming, but transistor designs necessarily have to change now. If that means switching speeds take a hit for a while, so be it, but no one's giving up.

 

For most people, a 6700K is overkill. For enthusiast gamers, it should be more than adequate, but games are coded like garbage so w/e.

For sure I would expect a temporary blip or lapse in Moore’s Law while they transition to those new technologies (since apparently no one saw this coming and started working on them soon enough), but the article just makes it sound like they're simply going to "accept this fate" and start making chips that are slower than what we have now... indefinitely. I hope you're right, though, that they will get back on the performance train sooner rather than later :)

Solve your own audio issues  |  First Steps with RPi 3  |  Humidity & Condensation  |  Sleep & Hibernation  |  Overclocking RAM  |  Making Backups  |  Displays  |  4K / 8K / 16K / etc.  |  Do I need 80+ Platinum?

If you can read this you're using the wrong theme.  You can change it at the bottom.


1 minute ago, Ryan_Vickers said:

For sure I would expect a temporary blip or lapse in Moore’s Law while they transition to those new technologies (since apparently no one saw this coming and started working on them soon enough), but the article just makes it sound like they're simply going to "accept this fate" and start making chips that are slower than what we have now... indefinitely. I hope you're right, though, that they will get back on the performance train sooner rather than later :)

People started on Spintronics 15 years ago. That doesn't mean it was an easy thing to master, much less get it ready for CMOS processes.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Just now, patrickjp93 said:

People started on Spintronics 15 years ago. That doesn't mean it was an easy thing to master, much less get it ready for CMOS processes.

Oh yeah, what do you think it'll do to the CPU market if everyone's performance totally stalls for, say, 5 years?  It would be unprecedented, I think.  And weird... I mean, we've been reliably getting faster stuff every year for decades, and for that to suddenly change?   Hm...

Solve your own audio issues  |  First Steps with RPi 3  |  Humidity & Condensation  |  Sleep & Hibernation  |  Overclocking RAM  |  Making Backups  |  Displays  |  4K / 8K / 16K / etc.  |  Do I need 80+ Platinum?

If you can read this you're using the wrong theme.  You can change it at the bottom.


1 minute ago, Ryan_Vickers said:

Oh yeah, what do you think it'll do to the CPU market if everyone's performance totally stalls for, say, 5 years?  It would be unprecedented, I think.  And weird... I mean, we've been reliably getting faster stuff every year for decades, and for that to suddenly change?   Hm...

Performance will keep going up for compute workloads, but consumer software, which isn't built with nearly the same expertise, will stall out without a major shift in the programming paradigms used to make it.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Just now, patrickjp93 said:

Performance will keep going up for compute workloads, but consumer software, which isn't built with nearly the same expertise, will stall out without a major shift in the programming paradigms used to make it.

So, we could potentially see things like games not continuing to ask for better CPUs each year, and in so doing, see the CPU-intensive parts of games not improving during that time...?

Solve your own audio issues  |  First Steps with RPi 3  |  Humidity & Condensation  |  Sleep & Hibernation  |  Overclocking RAM  |  Making Backups  |  Displays  |  4K / 8K / 16K / etc.  |  Do I need 80+ Platinum?

If you can read this you're using the wrong theme.  You can change it at the bottom.


1 minute ago, Ryan_Vickers said:

So, we could potentially see things like games not continuing to ask for better CPUs each year, and in so doing, see the CPU-intensive parts of games not improving during that time...?

Potentially. Microsoft will have to improve the Visual C++ compiler to extract more instruction-level parallelism from software compiled with it, and coders will have to get better about programming with implicit parallelism. Heck, if they can add more data parallelism and make use of AVX 256 without hitting an I/O bottleneck in memory, we could see performance increase regardless. But, simply put, it's up to the devs to decide.
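As a rough sketch of the data-parallelism side (the function and compiler flags are illustrative assumptions, not anything from the article): a loop over contiguous arrays with no aliasing and no branches is the shape a vectorizing compiler, e.g. GCC/Clang with -O3 -mavx2 or Visual C++ with /O2 /arch:AVX2, can map onto 256-bit AVX registers on its own.

#include <cstddef>

// A SAXPY-style loop: contiguous arrays, no aliasing (__restrict), no branches,
// and a trip count that doesn't depend on the data. This lets the compiler emit
// 8-wide AVX multiply-adds instead of one scalar operation per iteration.
void saxpy(float a, const float* __restrict x, float* __restrict y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}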

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


3 hours ago, patrickjp93 said:

There will still be architectural improvements, and eventually software will have to catch up to newer instructions. It's about 5 years behind. There is double the compute power in a Haswell quad compared to a Sandy Bridge quad, but software doesn't keep up well enough to expose that outside the HPC arena. There is performance sitting on the table, going untapped.

Then they should just do what Nvidia does and send people over to companies in order to help them get it working properly. There is currently no incentive for me and many others to upgrade if there is no noticeable improvement in the programs that we use even if the hardware is getting better.


1 minute ago, Paranoid Kami said:

Then they should just do what Nvidia does and send people over to companies in order to help them get it working properly. There is currently no incentive for me and many others to upgrade if there is no noticeable improvement in the programs that we use even if the hardware is getting better.

Nvidia can do that for a small batch of game studios with relative ease, but even Intel's programming army is already split up among many supercomputers around the world, and then there's the Linux kernel developers, their graphics driver team, their compiler team, RealSense, the Intel Math Kernel Library, WiDi and WiGig, ethernet drivers, wireless card drivers, now Omni-Path as well, and a slew of other projects too. The few remaining programmers could be sent out to how many companies to help them build software better? There's a reason Intel developed OpenMP, CilkPlus, and Thread Building Blocks: they're all OPEN programming standards which are very easy to learn and unlock quite a lot of performance from your CPU with hardly any effort. OpenMP and CilkPlus have open-source implementations under GCC and Clang, and Thread Building Blocks is due to come out on both soon.

 

For reference, OpenMP turns development of multithreaded synchronous workloads and data-parallel applications into a walk in the park. You literally have to be a moron not to get 99% scaling with more cores with OpenMP. With CilkPlus you've got to be a bit more nuanced and careful to squeeze as many operations into a for-loop as you can (eliminating much of the loop overhead), but most of it is still very easy and it pretty much automatically makes your code use SSE 4.1/4.2 and AVX instructions to do a lot in parallel even on a single core. Combine the two and the result is extremely powerful (assuming you don't have a memory or I/O bottleneck). Thread Building Blocks is more for asynchronous workloads, but it's still way easier than trying to build with native C++ threads and results in better performance most of the time. For now, if you want to try it, you'll have to use Intel's proprietary compiler until the open-source compilers support it.
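As a rough illustration of how little code that takes with OpenMP (a generic array sum, assuming a compiler with OpenMP enabled via -fopenmp or /openmp; nothing here comes from any real program mentioned above):

#include <cstdio>
#include <vector>
#include <omp.h>

int main() {
    std::vector<double> v(50000000, 1.0);
    double sum = 0.0;

    // One pragma: OpenMP splits the iterations across the cores and gives each
    // thread a private partial sum, combining them at the end (the reduction).
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < (long)v.size(); ++i)
        sum += v[i];

    std::printf("sum = %.0f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}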

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


15 minutes ago, patrickjp93 said:

Nvidia can do that for a small batch of game studios with relative ease, but even Intel's programming army is already split up among many supercomputers around the world, and then there's the Linux kernel developers, their graphics driver team, their compiler team, RealSense, the Intel Math Kernel Library, WiDi and WiGig, ethernet drivers, wireless card drivers, now Omni-Path as well, and a slew of other projects too. The few remaining programmers could be sent out to how many companies to help them build software better? There's a reason Intel developed OpenMP, CilkPlus, and Thread Building Blocks: they're all OPEN programming standards which are very easy to learn and unlock quite a lot of performance from your CPU with hardly any effort. OpenMP and CilkPlus have open-source implementations under GCC and Clang, and Thread Building Blocks is due to come out on both soon.

 

For reference, OpenMP turns development of multithreaded synchronous workloads and data-parallel applications into a walk in the park. You literally have to be a moron not to get 99% scaling with more cores with OpenMP. With CilkPlus you've got to be a bit more nuanced and careful to squeeze as many operations into a for-loop as you can (eliminating much of the loop overhead), but most of it is still very easy and it pretty much automatically makes your code use SSE 4.1/4.2 and AVX instructions to do a lot in parallel even on a single core. Combine the two and the result is extremely powerful (assuming you don't have a memory or I/O bottleneck). Thread Building Blocks is more for asynchronous workloads, but it's still way easier than trying to build with native C++ threads and results in better performance most of the time. For now, if you want to try it, you'll have to use Intel's proprietary compiler until the open-source compilers support it.

Decided to check it out and you weren't kidding when you said it was easy. Is this just something not taught in schools or do companies not think it's worth the time?

 

https://software.intel.com/videos/1-minute-intro-intel-tbb-parallel-for
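For anyone else curious, the parallel_for from that video boils down to something like this (a generic sketch assuming TBB is installed and linked, e.g. with -ltbb; the data and loop body are made up for illustration):

#include <cstddef>
#include <vector>
#include <tbb/parallel_for.h>

int main() {
    std::vector<int> data(1000000, 1);

    // TBB chops the index range [0, size) into chunks and runs the lambda on
    // worker threads from its internal pool; no manual thread management needed.
    tbb::parallel_for(std::size_t(0), data.size(), [&](std::size_t i) {
        data[i] *= 2;
    });

    return 0;
}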

