Intel's 10nm only coming to servers in 2020 with Ice Lake

cj09beira
51 minutes ago, asus killer said:

I'm not an expert, but when people say "10nm Intel is equivalent to 7nm AMD", aren't we ignoring things like thermals and power consumption, which can make a lot of difference, as we've seen on the MBP 2018?

No, because a lot of it is also architecture.

And that is the point.

Intel wasted a lot on 256-bit AVX and other stuff that is coming back to bite them right now.

Also, they haven't done a new architecture in 10 years, only improved on what they already had. That is also coming back to bite them.

It's not the process (I think)...


4 minutes ago, cj09beira said:

Though if I remember right, Intel's cache had more transistors, which would lead to better clocks, so GloFo's 7nm is denser if using the density-optimized cell but slightly less dense if using the performance cell.

BTW, I would love AMD to release a CPU using 7nm HPC (IBM's 7nm); even just making a few and showing us what it could do would be cool.

I would love for there to be a reason to use the HPC libraries, but that'd take an entire design cycle to produce. And then, what is it supposed to do? ~$250 million dropped on a not-very-viable set of products would be fun to see, but we won't.

 

As for CPU density, we're still talking about node potential. CPU designs are never at full density, but Intel has mostly kept their faster & smaller node advantage, and that kept them ahead for years. The main thing is that Intel hasn't been able to roll out any IPC improvements in 3 years at this point. It might be 5 years between Core updates by the time Ice Lake hits the desktop.

 

If we see Ice Lake-S at all, that is. We could still end up seeing Tiger Lake-S first. (10nm++ is going to be the "fixed" 10nm node from Intel; I'm still surprised they haven't just rushed to move to that.)


2 minutes ago, Stefan Payne said:

No, because a lot of it is also architecture.

And that is the point.

Intel wasted a lot on 256-bit AVX and other stuff that is coming back to bite them right now.

Also, they haven't done a new architecture in 10 years, only improved on what they already had. That is also coming back to bite them.

It's not the process (I think)...

They have been trying to increase clocks for a while, so that has some effect on power consumption.

2 minutes ago, VegetableStu said:

Are GloFo's 7nm definitions in the same ballpark as Intel's 10nm definitions? o_o

Very much so, yes.


6 minutes ago, Stefan Payne said:

No, because a lot of it is also architecture.

And that is the point.

Intel wasted a lot on 256-bit AVX and other stuff that is coming back to bite them right now.

Also, they haven't done a new architecture in 10 years, only improved on what they already had. That is also coming back to bite them.

It's not the process (I think)...

AVX2 has its uses. AVX-512 won't have its uses for probably another 2-3 generations, if not more, mostly because it takes up too much die space, is only on server & HEDT platforms, and is far too memory-intensive for DDR4. The DDR6 era is probably when AVX-512 becomes worthwhile.
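To put a rough number on the "far too memory-intensive for DDR4" point, here's a back-of-envelope sketch in C. Every figure in it (core count, sustained AVX-512 clock, DDR4 bandwidth, bytes per flop) is an illustrative assumption, not a measurement; the point is only the scale of the gap between peak AVX-512 throughput and what main memory can feed.

```c
/* Back-of-envelope only: every number here is an assumption chosen to
 * illustrate the scale of the gap, not a measured figure for any SKU. */
#include <stdio.h>

int main(void) {
    const double avx_clock_ghz = 3.0;      /* assumed sustained AVX-512 clock */
    const int    cores         = 18;       /* assumed large HEDT/server die   */
    const int    fma_units     = 2;        /* 512-bit FMA pipes per core      */
    const int    fp32_lanes    = 512 / 32; /* 16 single-precision lanes       */
    const double flops_per_fma = 2.0;      /* multiply + add                  */

    /* Peak single-precision throughput in GFLOP/s. */
    double peak_gflops = avx_clock_ghz * cores * fma_units * fp32_lanes * flops_per_fma;

    /* A streaming triad a[i] = a[i] + b[i]*c[i] touches 4 floats per FMA
     * (3 reads + 1 write) = 16 bytes per 2 flops = 8 bytes per flop. */
    const double bytes_per_flop = 8.0;
    double needed_gbps = peak_gflops * bytes_per_flop;  /* GB/s from DRAM */

    const double ddr4_gbps = 85.0;         /* ~quad-channel DDR4-2666 (assumed) */

    printf("peak:  %.0f GFLOP/s\n", peak_gflops);
    printf("needs: %.0f GB/s to stream operands from DRAM\n", needed_gbps);
    printf("DDR4:  %.0f GB/s available -> ~%.0fx short\n",
           ddr4_gbps, needed_gbps / ddr4_gbps);
    return 0;
}
```

In other words, unless the working set stays in cache, the vector units sit idle waiting for memory, which is the "too memory-intensive" problem in a nutshell.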


4 minutes ago, cj09beira said:

They have been trying to increase clocks for a while, so that has some effect on power consumption.

Yes, that too, and with a superior process that might have worked.

 

But they lost that because it's expensive even for them, and from what I've heard, Intel also has problems with the capacity of their fabs. They aren't able to utilize some of it, which led them to try to act as a foundry like TSMC. But nobody really trusted them, for good reasons...

 

Anyway, the problem they have right now is that AMD made a design that was optimized for efficiency...

 

And Intel tried to go the performance route, which led to the same issues as in the olden days with the Pentium 4...


1 minute ago, Taf the Ghost said:

AVX2 has its uses.

Yes, but there is no such thing as a free lunch.

 

There are reasons why AMD decided to go with only 128-bit FPUs (and combine them for AVX) instead of fully 256-bit ones.

One of those is power consumption. Another is complexity.

Because you need wider datapaths, and that costs transistors.

And those are mostly useless if you don't use AVX2...
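To make the 128-bit vs. 256-bit point concrete, here's a minimal sketch of a single AVX/FMA operation (toy code, compile with something like gcc -mavx2 -mfma). Zen 1 cracks this one instruction into two 128-bit micro-ops on its narrower FP units, while Intel cores since Haswell execute it as one native 256-bit operation, which is exactly where the extra datapath width, transistors and power go.

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m256 a = _mm256_set1_ps(2.0f);  /* 8 packed floats = 256 bits */
    __m256 b = _mm256_set1_ps(3.0f);
    __m256 c = _mm256_set1_ps(1.0f);

    /* One instruction: 8 multiplies + 8 adds (a*b + c per lane).
     * Zen 1 splits it into two 128-bit micro-ops internally. */
    __m256 r = _mm256_fmadd_ps(a, b, c);

    float out[8];
    _mm256_storeu_ps(out, r);
    printf("%f\n", out[0]);           /* 7.0 in every lane */
    return 0;
}
```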

 

1 minute ago, Taf the Ghost said:

AVX-512 won't have its uses for probably another 2-3 generations, if not more, mostly because it takes up too much die space, is only on server & HEDT platforms, and is far too memory-intensive for DDR4. The DDR6 era is probably when AVX-512 becomes worthwhile.

That is not the only problem.

You have to waste transistors that consume power.

You have to widen the datapaths for the caches, which consumes power.

And you don't need most of that for normal operation anyway. 

So you waste die size and efficiency.

 

And that is one of the reasons why Intel has problems right now with power consumption...

 

 

Another thing:
AMD uses 2 dies right now, the APU and the CPU; Intel usually has a couple more. For the HEDT platform alone they use two or three dies, and they usually use two or three dies for the desktop as well.


58 minutes ago, Stefan Payne said:

Intel wasted a lot on 256-bit AVX and other stuff that is coming back to bite them right now.

How is it biting them, and what exactly was wasted? AVX has been the most important performance improvement in CPUs since MMX. Also, AVX is just a generational improvement on SSE, so was that also a waste?

 

You say it's a waste a lot but you've never explained how or why, and also ignore the multitude of applications that use it, even basic things like zip programs (there is a reason those are much faster nowadays).

 

AVX is an extremely small amount of the CPU's die area; here is a zoomed shot of just a single core of an Intel Skylake die.

[Image: annotated die shot of a single Skylake core]

  • ~3.95 mm x ~2.21 mm
  • ~8.73 mm²

 

AVX lives inside that small section in the top left; it's a small area of that small area.

 

The thing is, not everything an application does is math code, so not everything needs AVX all the time, or at all, but that doesn't make it a waste of resources. A waste of resources would be not implementing AVX in CPUs and therefore requiring 10 to 100 times the cores to do the same math workload, with all the thread-parallelization issues that come with it (if it's even possible to parallelize at all), meaning totally massive dies.
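To illustrate that, here's a toy dot product in C, scalar vs. 8-wide AVX (hypothetical example code, not from any real application; compile with e.g. gcc -O2 -mavx). A math kernel like this does eight lanes of work per loop trip, while a branchy, pointer-chasing loop gets nothing out of the same hardware, which is why "not everything needs AVX" and "AVX is not a waste" are both true.

```c
#include <immintrin.h>
#include <stdio.h>

/* One element per iteration. */
static float dot_scalar(const float *a, const float *b, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

/* Eight elements per iteration (n assumed to be a multiple of 8). */
static float dot_avx(const float *a, const float *b, int n) {
    __m256 acc = _mm256_setzero_ps();
    for (int i = 0; i < n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_mul_ps(_mm256_loadu_ps(a + i),
                                               _mm256_loadu_ps(b + i)));
    float t[8];
    _mm256_storeu_ps(t, acc);
    return t[0] + t[1] + t[2] + t[3] + t[4] + t[5] + t[6] + t[7];
}

int main(void) {
    float a[16], b[16];
    for (int i = 0; i < 16; i++) { a[i] = 1.0f; b[i] = 2.0f; }
    printf("%f %f\n", dot_scalar(a, b, 16), dot_avx(a, b, 16)); /* both 32.0 */
    return 0;
}
```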


1 minute ago, leadeater said:

You say it's a waste a lot but you've never explained how or why,

I did -> you need wider execution units and datapaths, and that consumes power and costs transistors.

And you also need wider datapaths, which also cost transistors and thus power.

 

1 minute ago, leadeater said:

and also ignore the multitude of applications that use it, even basic things like zip programs (there is a reason those are much faster nowadays).

Yes, and Ryzen performs pretty well in those too.

And you seem to ignore the power consumption increase from such changes, or don't want to see it.

 

 

So are you saying that you want AMD to have higher power consumption than Intel, or what are you implying?


24 minutes ago, Stefan Payne said:

I did -> you need wider execution units and datapaths, and that consumes power and costs transistors.

And you also need wider datapaths, which also cost transistors and thus power.

Those are all useful things for everything; they're not exclusive to AVX. Larger caches mean more cache hits, which means less data movement, so less power is wasted moving data between caches and performance efficiency is higher.

 

24 minutes ago, Stefan Payne said:

Yes, and Ryzen performs pretty well in those too.

And you seem to ignore the power consumption increase from such changes, or don't want to see it.

No, I don't ignore it. As I've explained in the past, the performance increase far outweighs the power increase; just put in AVX power tables like Intel does and drop the frequency to control power, and you still get higher performance (a lot higher).
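A rough sketch of why that trade works, with made-up clocks rather than any real SKU's turbo tables: even after an AVX frequency offset kicks in, an 8-wide vector loop still moves far more data per second than scalar code at the full clock.

```c
/* All clocks below are hypothetical illustrations of the trade-off. */
#include <stdio.h>

int main(void) {
    const double base_ghz   = 4.3;  /* non-AVX turbo (assumed)                   */
    const double avx_offset = 0.3;  /* clock drop under heavy AVX load (assumed) */
    const double avx_ghz    = base_ghz - avx_offset;

    double scalar_elems = base_ghz * 1.0;  /* 1 float per cycle  */
    double avx2_elems   = avx_ghz  * 8.0;  /* 8 floats per cycle */

    printf("scalar: %.1f Gelem/s\n", scalar_elems);
    printf("AVX2:   %.1f Gelem/s (~%.1fx despite the lower clock)\n",
           avx2_elems, avx2_elems / scalar_elems);
    return 0;
}
```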

 

Plus, Zen is not good at AVX2; it performs well enough, but requires twice the resources that Intel needs.

[Chart: FPU Julia benchmark]

[Chart: FPU Mandelbrot benchmark]

[Chart: Threadripper scientific-analysis benchmark]

 

24 minutes ago, Stefan Payne said:

So are you saying that you want AMD to have higher power consumption than Intel, or what are you implying?

That AVX is not, and has never been, a waste. There's nothing wrong with AMD de-prioritizing AVX2 performance, and it's justified, but that doesn't then make Intel's past R&D work on AVX wasteful or unnecessary. Intel is not AMD, Intel is not in the same situation as AMD; Intel can follow its own path and has limits on where it can gain performance.

 

Edit:

Plus, you haven't outlined how it's biting them. The reality is it's still a selling point and a reason why customers buy certain Intel CPUs, or Intel at all.


And now show the power consumption in those applications as well.

You do know that right now nobody is mentioning the power consumption of the CPU, right?


13 minutes ago, Stefan Payne said:

And now show the power consumption in those applications as well.

You do know that right now nobody is mentioning the power consumption of the CPU, right?

[Chart: CPU package power consumption comparison]

 

When the 7900X is outperforming both high-end TR SKUs, in some cases significantly, you'd need an equally higher amount of power draw for that to even matter, but since you asked, here you go. Skylake-X total package power at stock is just as efficient as AMD's, but it can perform up to twice as fast, so it can have up to double the performance per watt.

 

This is why even asking what the power draw is isn't the correct question; the correct question is what the performance per watt is, and that is something AVX excels at.
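For anyone who wants that spelled out, performance per watt is just score divided by package power. The numbers below are placeholders, not taken from the charts above; they only show that the chip drawing more power can still be the more efficient one.

```c
/* Hypothetical numbers purely to illustrate the perf-per-watt arithmetic. */
#include <stdio.h>

int main(void) {
    const double score_a = 100.0, watts_a = 140.0;  /* lower power, lower throughput   */
    const double score_b = 190.0, watts_b = 165.0;  /* higher power, higher throughput */

    printf("A: %.2f points/W\n", score_a / watts_a); /* ~0.71 */
    printf("B: %.2f points/W\n", score_b / watts_b); /* ~1.15, despite drawing more    */
    return 0;
}
```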

 

Edit:

[Image: AVX performance figure from the whitepaper linked below]

https://computing.llnl.gov/tutorials/linux_clusters/intelAVXperformanceWhitePaper.pdf


5 minutes ago, Stefan Payne said:

You do know that right now nobody is mentioning the power consumption of the CPU, right?

Because nobody cares how much their CPU draws. All they care about is performance and heat. 


3 minutes ago, mynameisjuan said:

Because nobody cares how much their CPU draws. All they care about is performance and heat. 

Don't know about you, but I've never decided against buying a CPU because of high power draw; I just buy a better cooler. If I'm after performance, then that's what I'm looking for. I might check power draw to size the cooler, and maybe reconsider the purchase only if it's totally insane, but that has never happened to date.


7 minutes ago, leadeater said:

Don't know about you, but I've never decided against buying a CPU because of high power draw; I just buy a better cooler. If I'm after performance, then that's what I'm looking for. I might check power draw to size the cooler, and maybe reconsider the purchase only if it's totally insane, but that has never happened to date.

I couldn't care less about power draw. I mean, hell, people are still buying the 7980XE after the whole "holy shit, it can draw 800W" thing; we all had a laugh, then no one batted an eye.

 

These CPUs are built for performance, and even then, with AVX the 7980XE can get pretty damn close to double the performance of TR in specific loads with only, what, 8% more power? That makes it more power efficient.

 

If I buy a machine and find out it needs more power, well, I buy a bigger PSU and heatsink and call it a day.


Essentially, these process nodes are a bit of both: marketing naming schemes and "real" process node sizes. I can't remember if it was TSMC or GF that said 14nm --> 10nm --> 7nm are just generational improvements. Anyway, it was something along the lines of 10nm being just an improved 14nm process without a whole lot of shrinkage, similar to Intel's 14nm, plus, plus-plus, plus-plus-plus, which they could possibly have just called 12nm.


7 hours ago, Deli said:

The downfall of Intel. No one but itself to blame.

People need to actually research why nm claims are not worth a damn atm. There is no standard for how one measures their manufacturing process... which means what might be considered a 7nm process by AMD could be a 12nm one for Intel. This is why Intel has been pushing to have a standard put in place for how all manufacturing processes are gauged.

 

[Table: transistor density and pitch comparison across process nodes]

 

As an example, this is even before 14++: Intel has much better transistor density with their design, and they also have better pitch lengths across the board, which is no different from how the 10nm process will work out for them. They are already claiming 2.4x more density with 10nm than what they currently have with 14++, which would put their 10nm process ahead of the 7nm process AMD is leveraging. So until the system for determining the nm of a process is standardized, you need to take things with a grain of salt.
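As a rough illustration of why the single nm number misleads, one common yardstick is pitch-based scaling, roughly proportional to 1 / (contacted gate pitch x minimum metal pitch). The sketch below plugs in the commonly reported public pitch figures for Intel's 14nm and 10nm; treat them as approximate, since real density also depends on cell height, fin counts and design rules, which is where the larger marketing multipliers come from.

```c
/* Pitch-only density scaling: a crude approximation, using widely reported
 * public figures that should be treated as approximate. */
#include <stdio.h>

int main(void) {
    const double cpp_14 = 70.0, mmp_14 = 52.0;  /* Intel 14nm: gate / metal pitch (nm) */
    const double cpp_10 = 54.0, mmp_10 = 36.0;  /* Intel 10nm: gate / metal pitch (nm) */

    double scale = (cpp_14 * mmp_14) / (cpp_10 * mmp_10);
    printf("pitch-only density scaling, 14nm -> 10nm: ~%.2fx\n", scale); /* ~1.9x */
    /* The gap up to the quoted 2x+ figures comes from cell-level changes
     * (shorter cells, contact-over-active-gate, etc.), not pitches alone. */
    return 0;
}
```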

 

It is like me saying that since a V8 has more cylinders than a V6, it is better.

But that V8 could be making only 300 hp while the V6 is making 450. Then again, the V8 could be only 250 ci and the V6 290 ci. So going off just one number when there is no standard is very deceptive and does not reflect all the information needed to gauge which one is superior.


18 minutes ago, AngryBeaver said:

They are already claiming 2.4x more density with 10nm than what they currently have with 14++.

That was what they were aiming for, but they have retracted that statement, and I believe they are aiming for a more conservative goal.

 

Industry 7nm and Intel 10nm will be competing on a similar level to each other. It's just a matter of who gets going first.


8 hours ago, Master Disaster said:

Anybody ever thought Intel are playing the smart game?

Intel is moving away from depending on their fabs having sufficient yields to meet product demand. You'll see in future roadmaps that there's a lot more re-use of existing nodes vs. driving everything to the newest one.

 

Intel is crushing revenue right now because they're on version 3 of 14nm, with version 4 coming soon. That shit is cheap and reliable to fab on, and customers clearly don't care that it isn't 10nm.



9 hours ago, asus killer said:

I'm not an expert, but when people say "10nm Intel is equivalent to 7nm AMD", aren't we ignoring things like thermals and power consumption, which can make a lot of difference, as we've seen on the MBP 2018?

It's a matter of what you consider to be the actual transistor; the whole device is bigger than 7nm. AMD's 7nm part should perform about the same as Intel's 10nm part, both in terms of speed and in terms of power consumption/heat output.



9 hours ago, Master Disaster said:

Anybody ever thought Intel are playing the smart game?

 

Everybody knows 7nm is just about the limit of silicon transistors, and without some new breakthrough, be it a new compound or a new material, once we reach 7nm we're kind of stuck there.

It's possible Intel is sitting back and waiting for AMD to show all their playing cards to the group before they push forward with more node shrinks. At the end of the day, all they have to do is beat AMD at 7nm and they've effectively won the silicon race forever.

I wonder this myself, considering graphene chips or graphene-coated copper, and the fact that EMIB hasn't even been used by Intel yet.

Plus this: https://www.techspot.com/news/75020-intel-now-capable-producing-full-silicon-wafers-quantum.html



2 minutes ago, M.Yurizaki said:

I feel like people are having a pissing contest over a number without understanding what that number actually means.

 

EDIT: When I say "people", I mean people who are not directly involved in the manufacturing process.

Isn't AMD's 7nm competing directly against Intel's 10nm?

 

They shouldn't be much different aside from the nm number



3 minutes ago, D13H4RD2L1V3 said:

Isn't AMD's 7nm competing directly against Intel's 10nm?

 

They shouldn't be much different aside from the nm number

There are so many metrics for transistor performance and density that a single number doesn't come close to capturing it. Depending on your application, denser isn't even what you want.

 

TSMC does pull a lot of bullshit though where you can't trust their claims.


