[confirmed] Intel reportedly delays 10nm mass production - poor yields

zMeul

They can't be having yield issues on a process that isn't implemented yet. The equipment isn't even in the building. The only place making 10nm chips right now is HQ, for testing and tuning yields before wide deployment, and that still has validation runs to complete before mass production can begin.

yes, and if those tests show problems in the process, why finish the factory? what if the yields are so bad that Intel's 10nm is not viable?

aren't they using the same equipment the factory would have?! I'd say yes - it would be lunacy to buy the equipment, build the factory, and then find that the process is a dud


The Skylake equivalent of the Devil's Canyon refresh (except this time with a new graphics core deployed) confirms a 10nm delay? That's not remotely credible evidence.

And I haven't said that it was confirmed; I said that now we have a confirmation. You're twisting my words. I said we basically knew this which, as you can see, was correct.

The ability to google properly is a skill of its own. 


As usual, Intel is just chillin'. No competition means they don't have to push themselves hard at all. I was kind of expecting this. They say the 14nm series will be just another standard issue of i3/i5/i7 with lower consumption and the same number of cores and threads. Same song and dance... :P

There is plenty of competition from ARM. Intel needs to keep shrinking their process to get to lower power envelopes, so they can compete in mobile. They are currently taking huge losses in that area.

There is plenty of competition from ARM. Intel needs to keep shrinking their process to get to lower power envelopes, so they can compete in mobile. They are currently taking huge losses in that area.

Sure, focus on a single area. Abandon desktop development because screw that area, we dominate it and there is no competition there. They need to push all areas, like all companies do! :)


Sure, focus on a single area. Abandon desktop development because screw that area, we dominate it and there is no competition there. They need to push all areas, like all companies do! :)

A new process node comes to mobile and desktop alike, it makes no difference which area they are focusing on.


They need to push all areas, like all companies do! :)

Because that is what all companies do!

I don't remember the last time?

So by forcing out a more expensive solution, they will have better profits?

They need the profit from 14nm to reinvest into 10nm.

They already have the profit from 14nm they need, based on contract orders for Knights Landing. I'm saying Intel already has the money to reinvest. Second, Prescott to Conroe. Huge one-year turnaround on that one.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


And I haven't said that it was confirmed; I said that now we have a confirmation. You're twisting my words. I said we basically knew this which, as you can see, was correct.

This is still a rumor mill article and not a confirmation.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Because that is what all companies do!

-Nvidia pushes the desktop, laptop/notebook, and mobile GPU markets. Even their own branded tablet and the Shield.

-AMD pushes the desktop and laptop/notebook GPU markets and is making some really good progress with their APUs in both the desktop and laptop markets. While being weaker in the CPU area, they still manage to Crossfire their own APUs and GPUs in laptops, gaining impressive performance for a really small price tag.

-Samsung... Yeah, this company does literally EVERYTHING. They even run hospitals in Korea. Military, etc.

These are just off the top of my head; there are plenty more! :)


yes, and if those tests show problems in the process, why finish the factory? what if the yields are so bad that Intel's 10nm is not viable?

aren't they using the same equipment the factory would have?! I'd say yes - it would be lunacy to buy the equipment, build the factory, and then find that the process is a dud

Wrong way to think about it. Intel and ASML already know the equipment will work for 10nm. It's only a matter of tuning light sources, masks, and lenses. The rest is already proven to work. Also, the fab has to be built either way. How they outfit it is up to Intel.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


A new process node comes to mobile and desktop alike, it makes no difference which area they are focusing on.

It does... the desktop gains lower consumption but not much else really; it barely gets 3-5% over the last architecture... How much better is the i5 4xxx than the 2xxx at the same clocks? Is it even 10% performance-wise? I don't think anyone really gives a damn about a 20-30W power difference. :)


It does... the desktop gains lower consumption but not much else really; it barely gets 3-5% over the last architecture... How much better is the i5 4xxx than the 2xxx at the same clocks? Is it even 10% performance-wise? I don't think anyone really gives a damn about a 20-30W power difference. :)

It's up to 85% better depending on the workload. Blame software devs for not keeping up.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Wrong way to think about it. Intel and ASML already know the equipment will work for 10nm. It's only a matter of tuning light sources, masks, and lenses. The rest is already proven to work. Also, the fab has to be built either way. How they outfit it is up to Intel.

you keep going in circles about the fab when the problem clearly seems to be the process node

yes, the equipment works, and yet you can still have bad yields

was the 14nm process delayed a year because of yields? yes it was! can the same thing happen to the 10nm process? yes, it can!


2 things:

1. 10nm! Yay

2. Intel, stop being lazy and get your head out of that money pile...

... Life is a game and the checkpoints are your birthday , you will face challenges where you may not get rewarded afterwords but those are the challenges that help you improve yourself . Always live for tomorrow because you may never know when your game will be over ... I'm totally not going insane in anyway , shape or form ... I just have broken English and an open mind ... 


It's up to 85% better depending on the workload. Blame software devs for not keeping up.

But in reality it's more like 10%. We can't really treat synthetic benchmark results as real use-case scenarios. We can't take advantage of it in normal use, professional use, gaming... so what can we take advantage of? It reminds me of the "FX 8xxx is bad" argument. Well, in reality, it's not bad at all. Where the software can take advantage of all 8 cores, it's great. But most software uses 2-4 cores and single-core performance is better on Intel's side, so the FX series fails against an i5 in terms of everyday use and gaming. :)


But in reality it's more like 10%. We can't really treat synthetic benchmark results as real use-case scenarios. We can't take advantage of it in normal use, professional use, gaming... so what can we take advantage of? It reminds me of the "FX 8xxx is bad" argument. Well, in reality, it's not bad at all. Where the software can take advantage of all 8 cores, it's great. But most software uses 2-4 cores and single-core performance is better on Intel's side, so the FX series fails against an i5 in terms of everyday use and gaming. :)

You could take advantage of it in games if they were coded adaptively instead of as monoliths. The same is true of instruction sets as well as available core counts. This isn't Intel's fault. Intel provided the tools to make multithreading easy long ago with OpenMP. It's the fault of developers in consumer computing for not keeping up.
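
For what it's worth, here's a minimal sketch (my own toy example, not anything Intel ships) of what "coded adaptively" looks like with OpenMP in C: one pragma and the same loop spreads across however many cores the host exposes. The array size and the work per element are made up purely for illustration; compile with gcc -fopenmp.

/* Hypothetical example: an OpenMP parallel reduction that scales with core count. */
#include <stdio.h>
#include <omp.h>

#define N 1000000          /* arbitrary workload size, purely for illustration */

int main(void)
{
    static double data[N];
    double sum = 0.0;

    /* OpenMP splits the iterations across all available hardware threads,
       so the same binary uses 2 cores on a dual-core and 8 on an FX 8xxx. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        data[i] = (double)i * 0.5;
        sum += data[i];
    }

    printf("max threads: %d, sum: %f\n", omp_get_max_threads(), sum);
    return 0;
}

The point isn't this particular loop; it's that the thread count is decided at run time by the runtime rather than baked into the code, which is the opposite of the "monolith" approach described above.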

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


You could take advantage of it in games if they were coded adaptively instead of as monoliths. The same is true of instruction sets as well as available core counts. This isn't Intel's fault. Intel provided the tools to make multithreading easy long ago with OpenMP. It's the fault of developers in consumer computing for not keeping up.

Oh I agree, most of the time it's the developers' fault, no matter if it's games or applications. Things aren't getting optimized because there is enough processing power to go around. We notice that more and more in the mobile segment. Most apps are shit code and a giant clusterfuck. Developers just want a bunch of money with the least amount of work invested, and we're to blame for that: we keep demanding more, a new game in the series every year, and then we get recycled games with old engines or new ones with a TON of bugs and poor optimization (Unity and Batman, for example). :)


They already have the profit from 14nm they need, based on contract orders for Knights Landing. I'm saying Intel already has the money to reinvest. Second, Prescott to Conroe. Huge one-year turnaround on that one.

You must be joking  :lol:

Having money, and having money to spend, are two entirely different things.

Intel needs to fill up their 14nm capacity. They haven't done that. One of the major reasons is how expensive 14nm has continually been compared to their other nodes.

 

Wasn't Prescott released in early 2004? Two years ahead of Conroe?

 

-Nvidia pushes the desktop, laptop/notebook, and mobile GPU markets. Even their own branded tablet and the Shield.

-AMD pushes the desktop and laptop/notebook GPU markets and is making some really good progress with their APUs in both the desktop and laptop markets. While being weaker in the CPU area, they still manage to Crossfire their own APUs and GPUs in laptops, gaining impressive performance for a really small price tag.

-Samsung... Yeah, this company does literally EVERYTHING. They even run hospitals in Korea. Military, etc.

These are just off the top of my head; there are plenty more! :)

I do hope you realise that there are just as many fronts that they don't push. You might not hear about it, but that is something entirely different.

 

It's up to 85% better depending on the workload. Blame software devs for not keeping up.

It is generally up to 15% better. 85% is a special-case scenario and cannot be achieved by most software.

  • 3 weeks later...

Intel confirms 10nm delays as it enters 3rd generation of 14nm products

source: http://arstechnica.com/gadgets/2015/07/intel-confirms-tick-tock-shattering-kaby-lake-processor-as-moores-law-falters/

 

Intel has confirmed today that it will build a third generation of processors on its 14nm process, and that the switch to 10nm manufacturing has been delayed until the second half of 2017, showing the challenges that Moore's Law is under, and bringing an end to the company's "tick-tock" strategy.

---

AnandTech: http://www.anandtech.com/show/9446/intel-announces-fiscal-year-2015-quarter-two-results

[Image: Process.png]


I think the analysts are reading this incorrectly. I don't think this has anything to do with yields. I think Intel is doubling its rhythm to account for iGPU architecture changes in between the CPU architecture changes. It minimizes production risk and provides more validation time for each individual component practically for free.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

