
Intel officially announcing 10nm manufacturing

WMGroomAK
9 hours ago, WMGroomAK said:

I honestly think the most interesting part of this is that one of the reasons they haven't switched to the 10nm process was that their 3rd-gen 14nm process was seeing performance very similar to 1st-gen 10nm. It makes me wonder whether the 10nm process will be a long-lived one, with multiple refinements matching what the future 7nm will deliver, or whether 7nm will make the 10nm process short-lived. Also, what kind of unexpected hurdles will the 7nm process bring beyond those of going to 10nm?

 

Off-topic (slightly): please, no fanboyism about why Intel or AMD is the better company...

Billions in R&D poured into a platform normally means the successor platform takes a long while to find real improvements. That happens in any industry where development of the technology is robust and mature.

 

It also raises the possibility that we'll be seeing the end of the conventional transistor approach in the not-too-distant future. Performance might not get insanely better from here, but at least your cellphone will be able to play Crysis.


11 hours ago, IceCold008 said:

Wish Intel would do something about gaming performance in each generation of their CPUs. It's not exactly awe-inspiring when a new CPU is released.

That is up to game devs, not Intel. Threads don't solve everything, and they're the least efficient way to solve embarrassingly parallel problems, which most of the game engine pipeline is.


12 hours ago, AnonymousGuy said:

Packing more performance in the same TDP / thermal envelope.

More like same performance into lower TDP.


2 minutes ago, WereCat said:

More like same performance into lower TDP.

Because when was the last time Intel lowered the TDP for its top segment? The 4770K? For the performance at the top to find a home in lower TDPs, something has to take its place at the top. Come on, simple inductive reasoning here...


These chips are gonna run on 1151 sockets, right?

3600X @ stock | 5600XT TUF OC @ 1850 | 2x16 + 2x8 RAM 3200 HD | 1TB Samsung 970 EVO Plus | Lian Li 205M | TT Toughpower Grand RGB 850 | throwaway ASUS B450 mobo | BQ cooler


6 minutes ago, Crossbred said:

These chips are gonna run on 1151 sockets, right?

Intel confirmed a long time ago that Cannon and Sky would be socket-compatible.


2 hours ago, MandelFrac said:

Because when was the last time Intel lowered the TDP for its top segment? The 4770K? For the performance at the top to find a home in lower TDPs, something has to take its place at the top. Come on, simple inductive reasoning here...

I was being sarcastic. I know, it's hard to tell from text...


4 hours ago, MandelFrac said:

That is up to game devs, not Intel. Threads don't solve everything, and they're the least efficient way to solve embarrassingly parallel problems, which most of the game engine pipeline is.

True, but it can go a long way for performance when a game is CPU-bound rather than GPU-bound. Better CPU performance equals better game performance all around, especially if the CPU is the weakest link in your system.


9 hours ago, IceCold008 said:

True, but it can go a long way for performance when a game is CPU-bound rather than GPU-bound. Better CPU performance equals better game performance all around, especially if the CPU is the weakest link in your system.

Not really. See the extended Amdahl's Law: the moment you add threads, you add communication, and that communication overhead grows combinatorially, whereas the speedup only ever grows linearly. For the problems in a game engine, it's better to get an 8x speedup out of your one thread by using AVX than it is to use 8 threads.
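
To put a rough model behind that claim, here's a minimal sketch. The formula is an illustrative assumption on my part, not a standard result: plain Amdahl's Law with a pairwise communication term bolted on.

```cpp
#include <cstdio>

// Toy model: Amdahl's Law plus a communication term that grows with the
// number of pairwise links between threads, n*(n-1)/2 -- the overhead
// grows combinatorially while the parallel gain only grows linearly.
// The 0.001 per-link cost below is an arbitrary assumption.
double modeled_speedup(double parallel_frac, int n, double per_link_cost)
{
    double serial   = 1.0 - parallel_frac;
    double parallel = parallel_frac / n;
    double comms    = per_link_cost * n * (n - 1) / 2.0;
    return 1.0 / (serial + parallel + comms);
}

int main()
{
    // Speedup peaks at a modest thread count, then falls off a cliff.
    for (int n : {1, 2, 4, 8, 16, 64, 1024})
        std::printf("%4d threads -> %5.2fx\n", n, modeled_speedup(0.95, n, 0.001));
}
```

With these toy numbers the curve peaks around ten threads and then declines; past that point, adding threads actually makes the program slower, which is the point.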


Love seeing this kind of progress in the field. It permanently changes the game and platform specifications, and we get to see a lot of the changes we imagined come to portable devices.

Details separate people.


On 29 March 2017 at 4:20 PM, MandelFrac said:

Intel confirmed a long time ago that Cannon and Sky would be socket-compatible.

So if I get a Z170 motherboard now, will it work with Cannon Lake?

Or will I have to change the motherboard even though it's an 1151 socket?


"better shrink that 4core die for more $$€€$$"

GPU drivers giving you a hard time? Try this! (DDU)


Not sure how accurate this is, but AMD seems to be researching 7nm architecture: http://segmentnext.com/2017/02/03/amd-zen-2/

"Hyper Demon" Build: 

Case: NZXT H440 Hyper Beast.  CPU: AMD Ryzen 9 3900X (cooled by a Kraken X62).  GPU: AMD XFX RX 6900XT Merc 319 Black.  RAM: G.Skill Trident Z RGB DDR4 32GB @ 3600MHz.  Mobo: Asus Crosshair VI Hero.  PSU: Corsair RM850x.  Boot drive: Samsung 960 EVO 500GB NVMe SSD.  Game storage: Samsung 860 EVO 1TB SATA SSD.  Bulk storage: WD Black 2TB.


1 hour ago, Java said:

Not sure how accurate this is, but AMD seems to be researching 7nm architecture: http://segmentnext.com/2017/02/03/amd-zen-2/

Yep, GlobalFoundries plans 7nm production for the second half of 2018.

Pixelbook Go i5 | Pixel 4 XL


20 hours ago, Darth Revan said:

So if I get a Z170 motherboard now, will it work with Cannon Lake?

Or will I have to change the motherboard even though it's an 1151 socket?

I would buy a Z270 to be safe on BIOS updates, but for the more extreme boards, yes.


15 hours ago, Java said:

Not sure how accurate this is, but AMD seems to be researching 7nm architecture: http://segmentnext.com/2017/02/03/amd-zen-2/

What the other foundries call 10nm isn't even close to being as dense as Intel's. Their 7nm won't be much denser, if at all, and if they expect not to hit the same issues Intel did, they're kidding themselves.


On 29/03/2017 at 0:55 AM, IceCold008 said:

Wish Intel would do something about gaming performance in each generation of their CPUs. It's not exactly awe-inspiring when a new CPU is released.

We are still GPU- and software-optimisation bound. CPUs are plenty powerful enough to handle any game now and for the next 10+ years.

 (\__/)

 (='.'=)

(")_(")  GTX 1070 5820K 500GB Samsung EVO SSD 1TB WD Green 16GB of RAM Corsair 540 Air Black EVGA Supernova 750W Gold  Logitech G502 Fiio E10 Wharfedale Diamond 220 Yamaha A-S501 Lian Li Fan Controller NHD-15 KBTalking Keyboard


13 minutes ago, kuddlesworth9419 said:

We are still GPU- and software-optimisation bound. CPUs are plenty powerful enough to handle any game now and for the next 10+ years.

That's not what I said.


31 minutes ago, IceCold008 said:

That's not what I said.

Well, you expect Intel to throw the baby out with the bath water when there's not much competition AND games and other apps could actually grow into the instructions that have been around since Sandy Bridge and get a real second wind...

 

Here's the thing. Amdahl's Law and Gustafson's Law as written don't take into account the communication overheads caused by using threads, or the added cycles needed to launch and kill them. Intel said a very long time ago that scalar instructions can only get so fast (in the perfect scenario, one cycle apiece, but with integer division you're still up in the 90-cycle range), and even if you throw 4 independent ALUs at the problem, your executable grows and the instruction cache overflows trying to hold all of those scalar instructions. So even if you can have 4 scalar ALUs doing the same work as 1 big vector ALU, you still pay the price of all of those extra instructions, not to mention the register pressure, which shoots hyperthreading performance scaling clear in the foot.

 

Intel came up with MMX, and you got a doubling of throughput. You could check multiple values at once and shrink branch-checking overhead. You could do math almost twice as fast. You could apply one of the fundamental operations to twice as much data at once. MMX was not super effective because it had no floating-point support, but then we got SSE. Throughput doubled again, and quadrupled in the case of floating-point math.
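
To make the "check multiple values at once" idea concrete, here's a minimal sketch. It uses SSE2 rather than the original MMX (MMX intrinsics are effectively retired), and the function is a hypothetical example, not something from the post:

```cpp
#include <emmintrin.h>

// Instead of eight scalar compares and eight branches, compare eight
// 16-bit values in one instruction and branch once on the combined mask.
bool any_equal(const short* v, short needle)
{
    __m128i x = _mm_loadu_si128(reinterpret_cast<const __m128i*>(v));
    __m128i m = _mm_cmpeq_epi16(x, _mm_set1_epi16(needle));
    return _mm_movemask_epi8(m) != 0;  // one branch covers all 8 lanes
}
```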

 

SSE is still what compilers produce a lot of. In my experience, getting generic code to give you proper AVX is exceedingly difficult unless it's something trivially easy like a horizontal reduction of data. Plus, SSE can get you as much as 300fps in BioShock Infinite, so where's the motivation to move up to the next level, despite the fact it's been available since 2011? Intel has given us processors which are incredibly good at standing the test of time, and they have plenty more to give. If consumer software won't budge, Intel can throw you more cores, but that rapidly breaks down given the way consumer apps are built and how they work.
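
For a concrete picture of the width difference, here's a minimal sketch, assuming n is a multiple of 8 and AVX enabled at compile time (e.g. -mavx): the same scale loop in SSE (4 floats per instruction) and in AVX (8 floats per instruction), so the AVX loop retires half the multiplies for the same data.

```cpp
#include <immintrin.h>

// SSE: 4 floats per multiply.
void scale_sse(float* x, float k, int n)
{
    __m128 kk = _mm_set1_ps(k);
    for (int i = 0; i < n; i += 4)
        _mm_storeu_ps(x + i, _mm_mul_ps(kk, _mm_loadu_ps(x + i)));
}

// AVX: 8 floats per multiply -- half the instructions for the same data.
void scale_avx(float* x, float k, int n)
{
    __m256 kk = _mm256_set1_ps(k);
    for (int i = 0; i < n; i += 8)
        _mm256_storeu_ps(x + i, _mm256_mul_ps(kk, _mm256_loadu_ps(x + i)));
}
```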

 

If the code is not embarrassingly parallel to the point where you've got 95%+ concurrency, you have a hard upper limit of 20x speedup; it takes 16 cores just to get to 10x, and it takes 1024 to put you close to that 20x. Consumer apps are not built to be that concurrent, and what's worse is that every new thread adds communication overhead. Even if it's just fork-join parallelism and doesn't need inter-thread communication, the overhead of launching and killing threads is significant, close to 100ms on Windows to launch or kill. The moment they have to communicate, your overhead grows combinatorially, while your theoretical speedup only grows linearly. So the real ceiling at that point is far lower than 20x, and the maximum number of cores you'd want to throw at the problem is probably closer to 8 than 1024.
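
For what it's worth, those figures check out against plain Amdahl's Law (no communication term), with parallel fraction p and core count n:

```latex
S(n) = \frac{1}{(1 - p) + p/n}, \qquad p = 0.95:
\quad S(16) \approx 9.1\times, \quad S(1024) \approx 19.6\times, \quad
\lim_{n \to \infty} S(n) = \frac{1}{1 - p} = 20\times
```

The communication overhead described above only pushes the real ceiling lower still.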

 

Intel delivers world-class performance and tech. We down here don't use it nearly well enough to have grounds to complain, imho. We can't even saturate RAM properly, and we want Intel to deliver whole new worlds of performance with the same old programming techniques? It's impossible. There is a big benefit to be had if they can extend their L2 size to match AMD's, but IPC truthfully has very little room left to grow, and we haven't even caught up to the 2011/2012 instruction sets that do twice as much per clock as the ones we use right now.

 

So does Intel deliver you more cores and throw away its future sales on tech that is just too good, or does it push developers down here, give them tools, and make them push the limits of what we have to make the next upgrade totally worth it?


1 minute ago, MandelFrac said:

Well, you expect Intel to throw the baby out with the bath water when there's not much competition AND games and other apps could actually grow into the instructions that have been around since Sandy Bridge and get a real second wind...

Bloody heck, I bet your fingers hurt :). Didn't need a whole book of a reply, but hey, it's something to read. As for Intel: I like their CPUs, it's just that every generation there's only a 1% to 10% increase in gaming performance. Which is fine, I suppose, if you're trying to save money by not needing to upgrade your CPU just yet.


9 minutes ago, IceCold008 said:

Bloody heck, I bet your fingers hurt :). Didn't need a whole book of a reply, but hey, it's something to read. As for Intel: I like their CPUs, it's just that every generation there's only a 1% to 10% increase in gaming performance. Which is fine, I suppose, if you're trying to save money by not needing to upgrade your CPU just yet.

The thing is, there's a 100% increase sitting there in unused silicon if you have Sandy Bridge or later. So not only do you want that performance increase, but you want more cores, and thus a platform that very well might last you to your dying day? Intel won't give you that on the cheap, and I think AMD will find they've made a grave mistake in doing exactly that. What the hell will they sell to the layman in 2, 3, 4, 5 years if games suddenly start using AVX (Bulldozer, Jaguar, Sandy Bridge) and AVX2 (Haswell, Excavator)? And now with AVX-512, all of a sudden the 4x4 matrix multiplication at the crux of game engines can be reduced from a 1040-cycle operation using scalar instructions, from the 260-ish needed using SSE horizontal broadcast-multiply-add techniques, and from the 136 I shrunk it to with AVX, down to a 10-cycle operation.
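
For reference, the SSE broadcast-multiply-add technique mentioned there looks roughly like the sketch below (row-major 4x4 float matrices). This is an illustrative reconstruction, not the poster's code, and actual cycle counts depend on the microarchitecture:

```cpp
#include <xmmintrin.h>

// out = a * b for row-major 4x4 float matrices. Each output row is a
// linear combination of b's rows: broadcast one element of a's row,
// multiply it against a whole row of b, and accumulate.
void mat4_mul_sse(float out[16], const float a[16], const float b[16])
{
    __m128 b0 = _mm_loadu_ps(b +  0);
    __m128 b1 = _mm_loadu_ps(b +  4);
    __m128 b2 = _mm_loadu_ps(b +  8);
    __m128 b3 = _mm_loadu_ps(b + 12);
    for (int i = 0; i < 4; ++i) {
        __m128 r =        _mm_mul_ps(_mm_set1_ps(a[4*i + 0]), b0);
        r = _mm_add_ps(r, _mm_mul_ps(_mm_set1_ps(a[4*i + 1]), b1));
        r = _mm_add_ps(r, _mm_mul_ps(_mm_set1_ps(a[4*i + 2]), b2));
        r = _mm_add_ps(r, _mm_mul_ps(_mm_set1_ps(a[4*i + 3]), b3));
        _mm_storeu_ps(out + 4*i, r);
    }
}
```

An AVX version would pack two output rows per 256-bit register and roughly halve the instruction count again, which is the kind of shrink being described.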

 

So you want not just the moon (which you have) and the sun (which is being delivered by Coffee Lake), but the galaxy while you're at it? Intel won't give away its own future, and it's unreasonable to ask them to. When games do make that leap, threads will be in the way, and no one's gonna care that fmul still takes 3-4 cycles, because now you can do 16 of them at once, and you probably don't care so much about the one.


No need for me to upgrade, m8. Like I said, the gaming performance gain between each generation isn't worth the cost atm. And anyway, I have a general rule when it comes to CPU upgrades: only upgrade every 3 or 4 generations. I'll take a look at Intel's lineup next year and see what's on offer; if it's any good I'll buy, if not I'll stay with my current CPU.


1 minute ago, IceCold008 said:

No need for me to upgrade, m8. Like I said, the gaming performance gain between each generation isn't worth the cost atm. And anyway, I have a general rule when it comes to CPU upgrades: only upgrade every 3 or 4 generations. I'll take a look at Intel's lineup next year and see what's on offer; if it's any good I'll buy, if not I'll stay with my current CPU.

My desktop back home will still have a 2600K in it for a couple years I reckon. I might buy something newer just to flex my muscles with new instruction sets and intrinsics, but I don't need the performance beyond the 2600K tbh.


2 hours ago, MandelFrac said:

I would buy a Z270 to be safe on BIOS updates, but for the more extreme boards, yes.

Thanks. Just one more question: which will allow me to OC the processor, the Z series or the H series?


What does this mean for other architectures? Intel is resilient against ARM because they have their own x86. I understand that ARM is mostly used because of its low power consumption, but this 10nm node takes very aggressive steps toward competing in that area, no?

Personal Rig:

[UPGRADE]

CPU: AMD Ryzen 5900X    Mb: Gigabyte X570 Gaming X    RAM: 2x16GB DDR4 Corsair Vengeance Pro    GPU: Gigabyte NVIDIA RTX 3070    Case: Corsair 400D    Storage: INTEL SSDSCKJW120H6 M.2 120GB    PSU: Antec 850W 80+ Gold    Display(s): GAOO, 现代e窗, Samsung 4K TV

Cooling: Noctua NH-D15    Operating System(s): Windows 10 / Arch Linux / Garuda

 

[OLD]

CPU: Intel(R) Core(TM) i5-6500 @ 3.2 GHz    Mb: Gigabyte Z170X-Gaming 3    RAM: 2x4GB DDR4 GSKILL RIPJAWS 4    GPU: NVIDIA GeForce GTX 960    Case: Aerocool PSG V2X Advance    Storage: INTEL SSDSCKJW120H6 M.2 120GB    PSU: EVGA 500W 80+ Bronze    Display(s): Samsung LS19B150

Cooling: Aerocool Shark White    Operating System(s): Windows 10 / Arch Linux / OpenSUSE

