Intel Cannonlake 10nm Preliminary Production Has Begun, at the Cost of Money and Production Time

patrickjp93

http://www.kitguru.net/components/cpu/anton-shilov/intel-we-will-not-need-euv-for-10nm-process-technology/

 

It's a short article so here's the gist:

 

Intel will not be using Extreme Ultraviolet (EUV) lithography for 10nm production. Whether this affects Cannonlake's successor is unknown at this time.

 

They will instead use a tried-and-true method (though never before at scales this small) called multi-patterning, which is known to be slower in production than full EUV lithography and was originally set aside due to its raw expense. Supposedly they will choose quad-patterning specifically.

 

In short, unless yields are extremely high, the cost of upcoming Cannonlake chips is expected to rise faster than inflation.
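To see why multi-patterning raises costs, note that it replaces one exposure per critical layer with several, multiplying lithography time and processing steps. A back-of-envelope sketch (all numbers here are illustrative assumptions, not real fab figures):

```python
# Rough cost model: multi-patterning multiplies exposures on critical layers.
# Layer count and per-exposure cost are made-up illustrative values.

def wafer_litho_cost(critical_layers, exposures_per_layer, cost_per_exposure):
    """Lithography cost contribution from the critical (tightly patterned) layers."""
    return critical_layers * exposures_per_layer * cost_per_exposure

single = wafer_litho_cost(critical_layers=10, exposures_per_layer=1, cost_per_exposure=100)
quad   = wafer_litho_cost(critical_layers=10, exposures_per_layer=4, cost_per_exposure=100)

print(quad / single)  # -> 4.0: quad patterning roughly quadruples litho cost on those layers
```

The same multiplier applies to scanner time, which is why throughput (and therefore supply) suffers along with cost.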

 

Speculation:

While I'm sure Intel will put EUV on its production lines in late 2016, that may be too late even for the next architectural change, which may or may not give AMD's deal with Samsung the time it needs to put 14nm in AMD's hands at the same time.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Intel is not worried about AMD a single bit, lol

NEW PC build: Blank Heaven   minimalist white and black PC     Old S340 build log "White Heaven"        The "LIGHTCANON" flashlight build log        Project AntiRoll (prototype)        Custom speaker project

Ryzen 3950X | AMD Vega Frontier Edition | ASUS X570 Pro WS | Corsair Vengeance LPX 64GB | NZXT H500 | Seasonic Prime Fanless TX-700 | Custom loop | Coolermaster SK630 White | Logitech MX Master 2S | Samsung 980 Pro 1TB + 970 Pro 512GB | Samsung 58" 4k TV | Scarlett 2i4 | 2x AT2020

 


Intel is not worried about AMD a single bit, lol

If the wccf article also floating around right now is true, and if Keller knocks Zen out of the park, then it's possible AMD could threaten Intel once more, if only briefly.

 

One of the big disadvantages of ARM has been the process nodes it is produced on (Samsung aside). With AMD, it's possible we could see a really powerful multi-core ARM processor that could threaten Intel's Xeons in a number of tasks where cache misses are numerous and expensive, as well as provide low-power options for a lot of back-end work that would compete with Atom.


Who or what is Cannonlake?

Ketchup is better than mustard.

GUI is better than Command Line Interface.

Dubs are better than subs


Who or what is Cannonlake?

 

 

I also ask this.

 

And Intel is miles ahead of AMD in high-end, server, and business-class products. In the low to mid tiers, though, they're at least on par.


Who or what is Cannonlake?

If you follow Intel's production roadmap, it's the code name for their 10nm chips, based on a revised version of the 14nm Skylake architecture, due out Q2/Q3 2015 as the successor to 14nm Broadwell, which is launching mobile parts right now.


If you follow Intel's production roadmap, it's the code name for their 10nm chips, based on a revised version of the 14nm Skylake architecture, due out Q2/Q3 2015 as the successor to 14nm Broadwell, which is launching mobile parts right now.

Wow, why don't they slow down for a bit? Maybe take a vacation? Come back and make me a damned Matrix thing to go in the back of my head.


I also ask this.

 

And Intel is miles ahead of AMD in high-end, server, and business-class products. In the low to mid tiers, though, they're at least on par.

If that were true, Intel wouldn't have put FPGAs on its high-end Xeons. ARM is great as a low-power chip that can handle tasks where cache misses are numerous and expensive, while x86 chips spend their cycles on heavy computing. Right now the trend is for Xeons to do the heavy calculation and offload various types of tasks to an ARM chip on the same board. AMD and others intend to push Intel out of the server market with heterogeneous designs and much lower-power, lower-heat options, both of which are the prime expenses in the life of a server.

 

If Intel didn't feel threatened by HSA, it wouldn't be implementing its own version. Skylake comes with unified memory, a move AMD just made with its Kaveri and Berlin (server-grade) APUs, and Intel is packing on more than double the GPU core count between Haswell and Broadwell.


Why would they be? AMD hasn't made an interesting chip in two years.

Again, Intel sees ARM as a threat to its iron grip on the server market, and AMD does have champion heavyweight chip designers on its team, poached from Apple, not to mention HSA (which Intel is implementing its own version of, hence Skylake's unified memory), and finally its overall stronger GPU architecture, which tons of people know how to program on.

 

Then there's the Samsung deal bringing 14nm tech to every Global Foundries facility by the end of 2016.

 

AMD has some life left, which is why Intel has been reacting so swiftly and drastically with power efficiency and ever more compute power in its iGPU offerings (2 teraflops in Broadwell's top SKU, or half of Nvidia's best Tesla compute card).
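Peak-compute figures like that can be sanity-checked with the usual formula: execution units × FLOPs per cycle per unit × clock. A quick sketch with hypothetical iGPU numbers (the EU count, per-EU throughput, and clock below are assumptions for illustration, not confirmed Broadwell specs):

```python
# Peak single-precision throughput = units * FLOPs/cycle/unit * clock (GHz) -> GFLOPS.
# All parameter values here are hypothetical, chosen only to show the arithmetic.

def peak_gflops(execution_units, flops_per_cycle_per_eu, clock_ghz):
    """Theoretical peak GFLOPS, assuming every unit issues its maximum FLOPs each cycle."""
    return execution_units * flops_per_cycle_per_eu * clock_ghz

# e.g. a hypothetical 48-EU iGPU at 1.15 GHz, 16 FLOPs per EU per cycle (FMA counted as 2):
print(peak_gflops(48, 16, 1.15))  # roughly 883 GFLOPS under these assumptions
```

Real chips rarely sustain this peak; it is an upper bound set by the datapath, not a benchmark result.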


I was more referring to the consumer market, but ARM is definitely a possible threat to Intel in the server markets.

 

Although AMD is developing some interesting technology, as you mentioned, it feels like it's just to bridge the gaps in performance between their hardware and Intel's. If they do roll out the new FX chips in 2015, I might be willing to give them another look, but they've been boring as shit lately.

 

Maybe we'll see the two directly compete in the consumer market again, but the performance differences are quite sad.


AMD had better make something amazing to fight Intel before Intel starts bumping up the cost of their CPUs.


AMD had better make something amazing to fight Intel before Intel starts bumping up the cost of their CPUs.

 

I don't think Intel is going to go that way again.


If we're discussing the future of AMD, I see them either moving into mid-level work with their APUs, or dropping their own architectures entirely and using ARM cores. They simply don't have the resources to compete with Intel.

 

However, I do see them competing with Nvidia for quite a while. They are neck and neck, and have been for a while.

Daily Driver:

Case: Red Prodigy CPU: i5 3570K @ 4.3 GHZ GPU: Powercolor PCS+ 290x @1100 mhz MOBO: Asus P8Z77-I CPU Cooler: NZXT x40 RAM: 8GB 2133mhz AMD Gamer series Storage: A 1TB WD Blue, a 500GB WD Blue, a Samsung 840 EVO 250GB


Doesn't decreasing size increase heat per unit area? I could have sworn I heard that on a Fast As Possible.

And so GabeN has told us to march forthwith unto the Land of Holy, wielding our swords of mice, shields of keyboards, and helmets of Oculus Rifts where we shall reclaim it-which is rightfully ours-from the PUNY Console Peasants from whom armed only with mere controllers we shall decimate in all forms of battle and we shall dominate even in their most ridiculous tradition and infatuation of CoD. Yes, my brothers- sisters and trans sexuals too- we shall destroy the inferior of races with our might and majesty. And if any Peasants wish to join us they must speak now or forever perish. -Ancient Speech from a Leader of Old, Book of Murratri section 2


Doesn't decreasing size increase heat per unit area? I could have sworn I heard that on a Fast As Possible.

While maintaining the same architecture, yes, but Broadwell is the same overall design, rearranged for scaling, plus a process shrink that reduces power per transistor. Since the production area decreases and the conduction area increases (due to greater density), overall cooling actually stays the same or gets better.
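The intuition can be put in numbers: a shrink lowers power per transistor while also shrinking die area, so heat density only improves if power falls faster than area. A toy calculation with made-up scaling factors (none of these are real Intel figures):

```python
# Heat density (W/mm^2) before and after a hypothetical process shrink.
# Power-per-transistor, transistor count, and die areas are illustrative assumptions.

def heat_density(power_per_transistor_w, transistors, area_mm2):
    """Average dissipated power per unit of die area."""
    return power_per_transistor_w * transistors / area_mm2

# Same transistor count; assume the shrink cuts power/transistor 30% and area 25%.
old = heat_density(1.0e-7, 1.4e9, 177)  # predecessor node
new = heat_density(0.7e-7, 1.4e9, 133)  # shrunk part

print(new < old)  # -> True: power fell faster than area, so density dropped
```

If the power savings were smaller than the area savings, the inequality would flip and the chip would be harder to cool, which is the scenario the question above is worried about.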


While maintaining the same architecture, yes, but Broadwell is the same overall design, rearranged for scaling, plus a process shrink that reduces power per transistor. Since the production area decreases and the conduction area increases (due to greater density), overall cooling actually stays the same or gets better.

Wouldn't putting it on a bigger die or giving it a bigger heat spreader mean that it could run cooler? And if so, why doesn't Haswell use an LGA 2011-size heat spreader?


Intel

just save yourself money and start stacking chips already

we all know you can do it

I've seen the prototypes

If your grave doesn't say "rest in peace" on it You are automatically drafted into the skeleton war.


Wouldn't putting it on a bigger die or giving it a bigger heat spreader mean that it could run cooler? And if so, why doesn't Haswell use an LGA 2011-size heat spreader?

You're forgetting the Z dimension. Broadwell is also much thinner than Haswell, meaning it has lower heat density at the same transistor count. Heat spreaders are designed to cover all the thermally expensive parts of the chip. A socket 2011-size one wouldn't actually be much more effective at dissipating the heat of mainstream processors.


Intel

just save ur self money and start stacking chips already

we all know u can do it

ive seen the prototypes

You've yet to try cooling them.


You're forgetting the Z dimension. Broadwell is also much thinner than Haswell, meaning it has lower heat density at the same transistor count. Heat spreaders are designed to cover all the thermally expensive parts of the chip. A socket 2011-size one wouldn't actually be much more effective at dissipating the heat of mainstream processors.

I'll go along with you, because all this transistor stuff hurts my head.


If that were true, Intel wouldn't have put FPGAs on its high-end Xeons.

Again, this is more Intel responding to Microsoft.

Intel slapping FPGAs onto some of its chips is because of Microsoft.

 

Right now the trend is for Xeons to do heavy calculation and offload various types of tasks to an ARM chip on the same board.

That is really not a trend. Some do it, but it remains a minority practice; it is not the norm.

 

AMD and others intend to push Intel out of the server market with heterogeneous design and much lower power and lower heat options, both of which are the prime expenses in the life of a server.

No. What are you talking about?

AMD and others intend to take a piece of the lower-end server market, specifically platforms with lower performance but high I/O.

Why would any big corporation even consider heterogeneous computing when they still have tons of servers, each featuring multiple coprocessors?

 

If Intel didn't feel threatened by HSA, it wouldn't be implementing its own version. Skylake comes with unified memory, a move AMD just did in its Kaveri and Berlin(server-grade) APUs and packing on more than double the GPU core count between Haswell and Broadwell.

So you have to feel threatened to adapt?

How much Intel allocates to the iGP has nothing to do with heterogeneous computing. It is more about Intel improving its graphics performance.

Integrated GPUs will be important at some point. Most of the future (5-10 years?) consumer market will rely on them.


Again, this is more Intel responding to Microsoft.

Intel slapping FPGAs onto some of its chips is because of Microsoft.

 

That is really not a trend. Some do it, but it remains a minority practice; it is not the norm.

 

No. What are you talking about?

AMD and others intend to take a piece of the lower-end server market, specifically platforms with lower performance but high I/O.

Why would any big corporation even consider heterogeneous computing when they still have tons of servers, each featuring multiple coprocessors?

 

So you have to feel threatened to adapt?

How much Intel allocates to the iGP has nothing to do with heterogeneous computing. It is more about Intel improving its graphics performance.

Integrated GPUs will be important at some point. Most of the future (5-10 years?) consumer market will rely on them.

 

Microsoft is not the only one using the FPGAs. Most of the top 100 supercomputers are built on those particular Xeons.

 

The ARM trend has been strong among newly built servers since 2011.

 

No, no business wants just a small piece of the pie and Qualcomm is certainly not an exception. AMD's shareholders won't settle for second best either.

 

Using coprocessors is still heterogeneous computing, but there's always a demand for more power that can be sold to people without the resources to own a supercomputer.

 

Companies don't adapt unless they feel threatened (read: if they think their customers will consider leaving them). Otherwise they would never innovate because that costs money.

 

No, the iGP for Intel is all about computing power right now. They have the basic graphics they need for office computers and laptops, and the gamer market is too small for them to care about. They're after stealing Nvidia's and AMD's server clients. To think otherwise is to be a naive business analyst.


Microsoft is not the only one using the FPGAs. Most of the top 100 supercomputers are built on those particular Xeons.

I never said they were. Intel made the announcement only days after Microsoft provided some information regarding 'Catapult'.

 

The ARM trend has been strong among newly built servers since 2011.

You mean having both an x86 and an ARM processor within the same server? No, it really hasn't.

 

No, no business wants just a small piece of the pie and Qualcomm is certainly not an exception. AMD's shareholders won't settle for second best either.

We are talking about incredibly big companies. They will settle as long as there is a profit.

They are also aware of Intel's grip on the server market.

Everything points to them going after certain smaller markets within the big server market.

Nothing points to them going for the entire market, as you are suggesting.

These companies know what is possible and what is not.

 

Using coprocessors is still heterogeneous computing, but there's always a demand for more power that can be sold to people without the resources to own a supercomputer.

By all definitions it is. I should have been more precise: I meant using heterogeneous computing on a single chip.

Those who have the software to utilize heterogeneous hardware will most likely have the money. The software alone can be quite expensive.

 

Companies don't adapt unless they feel threatened (read: if they think their customers will consider leaving them). Otherwise they would never innovate because that costs money.

By this definition every company is under constant threat.

Every living person is under constant threat.

 

No, the iGP for Intel is all about computing power right now. They have the basic graphics they need for office computers and laptops, and the gamer market is too small for them to care about. They're after stealing Nvidia's and AMD's server clients. To think otherwise is to be a naive business analyst.

What are you talking about? Sometimes I wonder where you get your ideas from.

Intel's iGP is about improving the graphical performance of their processors. All their consumer products feature an iGP, but most of their server processors don't.

Intel really doesn't care about office space when considering graphical performance. It's mobile. Mobile gaming is a big one.

As I predicted earlier, most of the consumer market will rely on a single SoC instead of a dedicated CPU and GPU.

The gamer market is only going to grow. Intel's iGP has nothing to compete against AMD's and Nvidia's server environments.

