
Intel's 10nm only coming to servers in 2020 with Ice Lake

7 minutes ago, cj09beira said:

Dr. Jordan B. Peterson, you're welcome 

That's because right now it's Intel's 14nm vs a 20nm-class node using FinFETs optimized for low power, so of course it will clock lower and of course it will consume more power. Right now the only problem that is actually AMD's fault is the higher-than-normal latency, which should improve over time as AMD improves IF.

Actually, the Ryzen 1000 series was using a 14nm process. The 2000 series is using a 12nm process.


2 minutes ago, AngryBeaver said:

Actually, the Ryzen 1000 series was using a 14nm process. The 2000 series is using a 12nm process.

The thing is, that 14nm is equivalent to an Intel 20nm with FinFETs; they said themselves it's their 20nm planar process with FinFETs added. And 12nm isn't much of a change, they just optimized the process more towards max frequency.


Meanwhile at AMD, 7nm server parts are launching in Q4.



57 minutes ago, cj09beira said:

Increasing IPC is extremely difficult, and clock speeds haven't improved much since the 32nm days, so the only way to improve performance significantly is to add more cores (one could add an L4 cache, but those are pretty expensive and people don't want to pay for them).

IPC is hard to increase on mature architectures. I would imagine it is much easier for AMD to find IPC gains compared to Intel. 


11 hours ago, Master Disaster said:

Anybody ever thought Intel are playing the smart game?

 

Everybody knows 7nm is just about the limit of silicon transistors, and without some new breakthrough, be it a compound or a new material, once we reach 7nm we're kind of stuck there.

 

It's possible Intel are sitting back and waiting for AMD to show all their playing cards to the group before they push forward with more node shrinks. At the end of the day all they have to do is beat AMD at 7nm and they've effectively won the silicon race forever.

Nah m8, you're wrong. Look how fast they got 7nm to production; not too long ago you could have sworn we would be stuck on 14/16nm given all the rumours of how hard going smaller is, and we already have 7nm in production (not on shelves yet). CPU/GPU, mobile and console parts are all sampling on 7nm tech.

I think we've hit a hard wall in RAM/flash memory scaling for quite some time, and with demand rising, RAM prices sadly keep increasing.

Also, the 7nm of TSMC/GF/Samsung is not the best 7nm, or what others call "fake nm", I don't know, but they have 7nm+ and 7nm LP planned ahead and 5nm looks feasible. I think we will start hitting diminishing returns before we hit the silicon limit; it's estimated that designing CPUs at 5/3nm can double or triple in cost.

Read about it here: https://semiengineering.com/big-trouble-at-3nm/

I don't know how accurate that is, but there might be special parts for servers/high-end workstations at 5/3nm and below if there is demand at very high cost. I don't think we will see desktop chips at 3nm and beyond, though, because they will be way too expensive.

 

I'm still hoping for those ridiculous rumoured graphene chips running at 100GHz, now that's a fucking breakthrough, and graphene has better thermals too, so that's crazy. Assuming it also uses less power due to lower electrical resistance, I'd love those to be real; even 10GHz vs what we have would still be something.

Dreams https://www.itproportal.com/2010/02/08/ibm-debuts-100ghz-graphene-processor/

 

Make GPUs with graphene at 100GHz please; at that kind of performance you could have real-time photo-realistic ray-traced games in VR at 200 FPS, 4K per eye.


I think it's finally time for Intel to drop iGPUs, it's the only obvious move. If they don't do it and the rumours are true, I hope they sink completely; with such stubbornness about selling people useless silicon instead of extra cores for the same price, they deserve to sink. Go AyyMD, show them how it's done. Imagine a Ryzen 8c/16t CPU with an integrated Vega iGPU taking up space and adding $100 to the price of the chip while you are running a GTX 1080/Vega 64, etc. That's just mega stupid.


18 minutes ago, cj09beira said:

The thing is, that 14nm is equivalent to an Intel 20nm with FinFETs; they said themselves it's their 20nm planar process with FinFETs added. And 12nm isn't much of a change, they just optimized the process more towards max frequency.

Which is why I said they need a standard for these nm claims. As it stands, each manufacturer measures their process differently, which means you can't go by the stated size of the manufacturing process alone.
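For what it's worth, Intel has actually proposed one standard along these lines: a weighted transistor-density metric in MTr/mm² based on a NAND2 cell and a scan flip-flop cell. A rough sketch of that calculation as I recall it (the 0.6/0.4 weighting is the commonly cited one; the cell sizes and the flip-flop transistor count below are made-up placeholders, not real foundry data):

```python
# Sketch of Intel's proposed logic-density metric (MTr/mm^2), as I recall it:
#   density = 0.6 * (NAND2 transistor count / NAND2 cell area)
#           + 0.4 * (scan flip-flop transistor count / scan flip-flop cell area)
# Cell areas and the flip-flop transistor count are placeholders, NOT real process data.

def logic_density_mtr_per_mm2(nand2_area_um2, sff_area_um2,
                              nand2_transistors=4, sff_transistors=22):
    """Weighted transistor density in millions of transistors per mm^2."""
    per_um2 = 0.6 * nand2_transistors / nand2_area_um2 + 0.4 * sff_transistors / sff_area_um2
    tr_per_mm2 = per_um2 * 1_000_000   # 1 mm^2 = 1e6 um^2
    return tr_per_mm2 / 1_000_000      # report in millions of transistors (MTr)

# Hypothetical cell sizes, purely to show the calculation:
print(round(logic_density_mtr_per_mm2(nand2_area_um2=0.04, sff_area_um2=0.30), 1), "MTr/mm^2")
```

Something like that would at least let you compare the marketing "nm" numbers on one scale.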


3 minutes ago, yian88 said:

I think it's finally time for Intel to drop iGPUs, it's the only obvious move. If they don't do it and the rumours are true, I hope they sink completely; with such stubbornness about selling people useless silicon instead of extra cores for the same price, they deserve to sink. Go AyyMD, show them how it's done. Imagine a Ryzen 8c/16t CPU with an integrated Vega iGPU taking up space and adding $100 to the price of the chip while you are running a GTX 1080/Vega 64, etc. That's just mega stupid.

If Intel dropped iGPUs it would hurt their biggest-selling markets; what company would do that?



1 minute ago, AngryBeaver said:

Which is why I said they need a standard for these nm claims. As it stands, each manufacturer measures their process differently, which means you can't go by the stated size of the manufacturing process alone.

Even that wouldn't be enough, because routing and different kinds of memory cells can each be more or less dense independently of the others. The only true way of doing this would be a "hello world", silicon edition, where each foundry would make the same SoC and then die size, power consumption and frequency range would be measured.


1 minute ago, The Benjamins said:

If Intel dropped iGPUs it would hurt their biggest-selling markets; what company would do that?

It wouldn't hurt anything if they removed the iGPU from desktop i5/i7/i9 and kept it on i3s, Pentiums and laptop chips.

Like seriously, an i7/i9 8c/16t at $500 with an iGPU? Really? That's an insult.

 


1 minute ago, yian88 said:

It wouldn't hurt anything if they removed the iGPU from desktop i5/i7/i9 and kept it on i3s, Pentiums and laptop chips.

Like seriously, an i7/i9 8c/16t at $500 with an iGPU? Really? That's an insult.

 

So no ultrabook can use an i5 or i7, no office PC can use an i5 or i7, no more i5/i7 NUCs, and MacBooks and Surfaces can't use an i5 or i7. Yes, let's just gut 80% of Intel's sales of those lines.



3 minutes ago, The Benjamins said:

So no ultrabook can use an i5 or i7, no office PC can use an i5 or i7, no more i5/i7 NUCs, and MacBooks and Surfaces can't use an i5 or i7. Yes, let's just gut 80% of Intel's sales of those lines.

Ultrabooks should stick to 4-core i3s, period.

Office PCs only need a Pentium or a 4-core i3, period.

Who the hell uses NUCs? No one; problem solved.

MacBooks and Surfaces that cost $1000+ and use an iGPU without a dGPU should be banned from the market because they are scams.

If they had a decent iGPU like an AMD APU for the same price I could understand it, but currently the iGPU thing is a cheap scam, and Intel is making tons of money from it.


1 minute ago, yian88 said:

Ultrabooks should stick to 4-core i3s, period.

Office PCs only need a Pentium or a 4-core i3, period.

Who the hell uses NUCs? No one; problem solved.

MacBooks and Surfaces that cost $1000+ and use an iGPU without a dGPU should be banned from the market because they are scams.

If they had a decent iGPU like an AMD APU for the same price I could understand it, but currently the iGPU thing is a cheap scam, and Intel is making tons of money from it.

Holy smokes, who died and made you judge, jury & executioner?



Just now, Master Disaster said:

Holy smokes, who died and made you judge, jury & executioner?

Intel's many years of scam practices are finally approaching their end days. Rejoice. Or Intel will bury itself with its iGPU-equipped high-end CPUs by 2020.


Intel does not make any mobile 4-core i3s. And why does a Surface or a MacBook need a dGPU? They are not gaming or CAD PCs.

 

I can see Intel offering some top-end chips without an iGPU, but their market share DEPENDS on iGPUs in the i5 and i7.



34 minutes ago, AngryBeaver said:

Yes, Intel doesn't have much wiggle room in core clocks atm, but you noticed they are producing an 8-core consumer CPU now, right?

With a 150W+ TDP??
Like AMD Bulldozer...

Why don't you complain about the power consumption of Intel products right now?

 

34 minutes ago, AngryBeaver said:

When it comes to clocks, though, the gap between Intel and Ryzen is still pretty big.

No, not really. We are talking about 10-15% (4.3 -> 5).

 

But you also ignore the thing I posted in the last post, which you seem to have ignored:
Intel optimized their process for higher clock rates at the cost of power consumption!

AMD uses low(er)-power processes that are optimized for lower-power processors, not for clock rate like Intel did with their 14nm++.

So you can argue that with a high-performance process there wouldn't be any clock gap.

 

34 minutes ago, AngryBeaver said:

You are now seeing more and more 5.2+ GHz Intel chips, while Ryzen is pretty much soft-capped at 4.2-4.3 GHz.

Why do you compare out-of-the-box Ryzen values to hardcore OC Intel values??

34 minutes ago, AngryBeaver said:

Also, IPC has been steadily increasing, even if slowly. Ryzen is due for a decent increase itself, but we will see how that goes.

WE know what to do to increase Ryzen's performance.

So does AMD.

Fixing it is comparatively easy because you know what to do; you just have to work out how to do it.

One possibility would be a separate clock domain for the L3 and the CCX-to-CCX communication, for example with its own multiplier...

 

34 minutes ago, AngryBeaver said:

The thing is, just throwing more cores out there doesn't do much to help in ST-demanding titles like games.

...which are ancient or just garbage or both.

And even Snow-Storms should have known that increasing the core count is the only way to go, because there is no other way!

Every other way is exhausted and either costs billions in R&D - which they don't have - or a couple of percent more power - which they don't want.

 

Ever heard of diminishing returns?

We are at exponential investment for rather tiny gains right now, so it is highly likely they won't go down that rabbit hole because it's not worth it (right now).

Also, AMD needed Zen to be done yesterday and couldn't worry about that in the original design...

34 minutes ago, AngryBeaver said:

Do you know what is hurting Ryzen so much right now when it comes to games and fps? Draw calls. Draw calls are handled in most cases by a single core, so when you have lower IPC and clocks per core, it becomes a problem.

That's what Turbo/XFR is for. My 1700X goes up to 3.9GHz by default.

And again, all the more reason to go for DX12!

Fewer draw calls, and it's multithreaded!
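To make the multithreading point concrete: in DX12/Vulkan-style APIs the expensive part, recording draw commands into command lists, can be spread across worker threads, and only the final submission stays serialized. A rough, API-agnostic sketch of that pattern (plain Python standing in for what a real engine would do in C++ against D3D12/Vulkan; every name here is invented for illustration):

```python
# Conceptual sketch of multithreaded command recording (DX12/Vulkan style).
# A real renderer records into command lists / command buffers; plain Python
# lists stand in here so only the structure is visible.
from concurrent.futures import ThreadPoolExecutor

def record_commands(draw_batch):
    """Each worker records its own command list independently (no shared state)."""
    cmd_list = []
    for obj in draw_batch:
        cmd_list.append(("set_pipeline", obj["material"]))
        cmd_list.append(("draw", obj["mesh"]))
    return cmd_list

def submit_to_gpu(cmd_list):
    pass  # stand-in for the single, cheap, ordered submission step

def render_frame(scene_objects, num_workers=4):
    # Split the scene into batches and record in parallel; under DX11 all of
    # this recording would funnel through one immediate context on one core.
    batches = [scene_objects[i::num_workers] for i in range(num_workers)]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        command_lists = list(pool.map(record_commands, batches))
    for cmd_list in command_lists:   # submission itself stays serialized
        submit_to_gpu(cmd_list)

scene = [{"material": "wood", "mesh": f"crate_{i}"} for i in range(1000)]  # hypothetical scene
render_frame(scene)
```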

 

34 minutes ago, AngryBeaver said:

So instead of focusing on squeezing as much power out of each core like Intel does, AMD is just throwing more cores at it. Quantity over quality.

You have no idea what you are talking about, it seems...

 

a) AMD will also improve per-clock performance.

b) There is still much room to optimize. Just compare the original Bulldozer vs. the last iteration:

https://www.planet3dnow.de/cms/22697-erste-benchmarks-des-athlon-x4-845/

https://www.planet3dnow.de/cms/18564-amd-piledriver-vs-steamroller-vs-excavator-leistungsvergleich-der-architekturen/subpage-rendering-cinebench/

 

Just some examples of what could be possible.

And now add 20-30% on top of a "normal Bulldozer".

With Ryzen+ and minimal changes they already got a couple of percent out of the design, and now imagine what could be possible when they have to redesign it for a new node anyway...

34 minutes ago, AngryBeaver said:

I mean, look at the current situation. The Intel 8700K performs on par with the 8-core chips with only 6 cores. Also, the 8700K is a 95W chip vs. the 105W 2700X, so the 8700K is less power hungry.

...still, the 105W TDP chip consumes less than the Intel 95W TDP one.

Did you already forget the THG links I posted, which showed the power consumption of the Ryzen 1700 (or was it the X?) on par with an i3-8350 or so?

 

And that is still the first chip, which only got a new stepping on an improved node.

Now with 7nm, you can assume that with +15% higher clock rates AMD will reach up to 5GHz.

And another 10-15% on top of that at the same clock.
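Spelled out, that compounding looks like this (a quick sketch using only the numbers claimed in this post, not a forecast):

```python
# Rough single-thread scaling if both claims held (numbers from the post above).
base_clock_ghz = 4.3                       # roughly where Ryzen tops out today
clock_gain = 1.15                          # the claimed +15% clocks on 7nm
ipc_gain_low, ipc_gain_high = 1.10, 1.15   # the claimed +10-15% IPC at the same clock

new_clock = base_clock_ghz * clock_gain    # about 4.9 GHz
uplift_low = clock_gain * ipc_gain_low     # about 1.27x single-thread
uplift_high = clock_gain * ipc_gain_high   # about 1.32x single-thread
print(f"~{new_clock:.2f} GHz, {uplift_low:.2f}x to {uplift_high:.2f}x single-thread")
```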

 

And then we are talking about a 12- or possibly even 16-core processor.

 

Does THAT still look good for Intel? 

I don't think so.

 

And look at the differences between the Bulldozer iterations!

Now imagine AMD had had a good node AND had done a full AM3+ version of Excavator.

It looks to me like Zen is maybe 10-15% faster than such an imagined, fixed Bulldozer.

And again, BD had the problem of the shitty caches that crippled its performance, especially the 2GHz L3 cache...

 

 

26 minutes ago, cj09beira said:

Dr. Jordan B. Peterson, you're welcome 

Exactly.

I assume you watched the thing I mentioned? :D

 

26 minutes ago, cj09beira said:

That's because right now it's Intel's 14nm vs a 20nm-class node using FinFETs optimized for low power, so of course it will clock lower and of course it will consume more power. Right now the only problem that is actually AMD's fault is the higher-than-normal latency, which should improve over time as AMD improves IF.

The thing is, the AMD CPUs are already MORE efficient than Intel's...

And the clock rate advantage is only about 15%:

4.3 * 1.15 ≈ 4.945GHz

 

It looks huge, but it's not.

Especially when you think of the olden days, when you could overclock a CPU by 50%!

And that is no joke!

But most people here aren't old enough to remember the first Mendocino - a low-cost CPU that messed with the "good" parts thanks to its integrated full-speed L2, even though it was tiny (IIRC 64KB or 128KB vs 512KB).

 

 

13 minutes ago, cj09beira said:

The thing is, that 14nm is equivalent to an Intel 20nm with FinFETs; they said themselves it's their 20nm planar process with FinFETs added. And 12nm isn't much of a change, they just optimized the process more towards max frequency.

Not max frequency.

That was just one of the things that happened. They just optimized the process, and AMD chose to go for frequency rather than efficiency because that was what they needed most...

"Hell is full of good meanings, but Heaven is full of good works"


49 minutes ago, cj09beira said:

the only true way of doing this would be a "hello world", silicon edition, where each foundry would make the same SoC and then die size, power consumption and frequency range would be measured

And even that doesn't work, because you have to use different libraries, so effectively it is not the same chip, only a very similar one.

A couple of years ago we had kind of such a thing with an Apple mobile processor, of which there were two versions: one Samsung, one TSMC...

"Hell is full of good meanings, but Heaven is full of good works"


58 minutes ago, yian88 said:

I think it's finally time for Intel to drop iGPUs, it's the only obvious move. If they don't do it and the rumours are true, I hope they sink completely; with such stubbornness about selling people useless silicon instead of extra cores for the same price, they deserve to sink. Go AyyMD, show them how it's done. Imagine a Ryzen 8c/16t CPU with an integrated Vega iGPU taking up space and adding $100 to the price of the chip while you are running a GTX 1080/Vega 64, etc. That's just mega stupid.

I don't want an AMD monopoly any more than an Intel monopoly, thank you very much. It's "mega stupid" to want a monopoly.

 

I think they could get by with shrinking the iGPU rather than removing it entirely. Having a usable display output in lieu of a dGPU can be very useful when troubleshooting, or when you just need a render server.



6 minutes ago, Stefan Payne said:


I am a fan of his work.

About the frequency of the new Zen 2 CPUs, I don't know yet; on one hand it is a high-power node, but AMD is supposedly going to use the lower-performing library, and I don't know how that will affect things.

BTW, one thing for us to pay attention to: AMD seems to have more IPC under LN2 than Intel, which might point to AMD having designed Zen with the goal of increasing the core's performance quite a bit (you can see this a lot on HWBOT, where under LN2 Ryzen CPUs score better even though they are at lower clocks).
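A quick way to sanity-check that kind of HWBOT observation is to normalise the scores by clock; a tiny sketch with obviously made-up numbers (the point is the method, not the values):

```python
# Crude per-clock comparison (an IPC proxy): benchmark score divided by clock.
# The scores and clocks below are placeholders, NOT real HWBOT results.
def per_clock_score(score, clock_ghz):
    return score / clock_ghz

ryzen_ln2 = per_clock_score(score=2450, clock_ghz=5.6)   # hypothetical Ryzen LN2 run
intel_ln2 = per_clock_score(score=2500, clock_ghz=6.8)   # hypothetical Intel LN2 run
print(f"Ryzen: {ryzen_ln2:.0f} pts/GHz vs Intel: {intel_ln2:.0f} pts/GHz")
```

If the per-clock number is consistently higher for Ryzen under LN2, that would support the idea that the core has headroom beyond what the stock clocks show.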

1 minute ago, Stefan Payne said:

And even that doesn't work, because you have to use different libraries, so effectively it is not the same chip, only a very similar one.

A couple of years ago we had kind of such a thing with an Apple mobile processor, of which there were two versions: one Samsung, one TSMC...

The thing is that if you made the exact same SoC, you would lose part of the difference between the processes, which is the libraries and their effect on the final routing.


45 minutes ago, The Benjamins said:

Intel does not make any mobile 4-core i3s. And why does a Surface or a MacBook need a dGPU? They are not gaming or CAD PCs.

 

I can see Intel offering some top-end chips without an iGPU, but their market share DEPENDS on iGPUs in the i5 and i7.

Surface, being an obvious answer: GPGPU/CUDA acceleration for art/animation programs that can leverage it. Top end Surface devices also aren't horrible for light CAD work.



8 minutes ago, Drak3 said:

Surface, being an obvious answer: GPGPU/CUDA acceleration for art/animation programs that can leverage it. Top end Surface devices also aren't horrible for light CAD work.

But does every model need it?



The ONLY ones that care about the scale are the chip makers, because they can fit more dies on a wafer, and that makes them more money. As a consumer, idgaf. Stop charging me more for something that costs you significantly less, and stop dicking around at 4GHz. The cure for cancer literally depends on the speed of these chips. Speed them up. We have plenty of capability to move heat and power the chips.

 

 


7 minutes ago, The Benjamins said:

But does every model need it?

If it means freeing up system RAM, why not?



17 minutes ago, Drak3 said:

If it means freeing up system RAM, why not?

How does taking up more PCB space help free up RAM?


