
The Empire Strikes Back - Alder Lake details revealed at Intel Architecture Day 2021

porina

Summary

While much information has been leaked, rumoured and otherwise extracted about Intel's upcoming Alder Lake CPUs for desktop and mobile, we now have the first wave of official information about what will be a major catch-up product, especially for desktop.

 

Some quick info:

  • DDR5 at 4800 confirmed. 128-bit wide support same as DDR4 consumer platforms. Note each DDR5 module has two 32-bit channels instead of 1 64-bit channel in DDR4.
  • PCIe 5.0 confirmed, with up to 16 lanes from CPU. Also 4 PCIe 4.0 lanes e.g. for storage.
  • Top chipset will support up to 12 lanes of PCIe 4.0 and 16 lanes of 3.0.
  • TDP of top part will be 125W.
  • P-cores 19% IPC improvement over Rocket Lake.
  • E-cores better than Skylake at same thread count.
  • Built on the Intel 7 process (formerly known as 10nm Enhanced SuperFin, or 10ESF).
  • AVX-512 support removed.

 

Quotes


Quote

Each of the P-cores has the potential to offer multithreading, whereas the E-cores are one thread per core. This means there will be three physical designs based on Alder Lake:

8 P-core + 8 E-core (8C8c/24T) for desktop on a new LGA1700 socket

6 P-core + 8 E-core (6C8c/20T) for mobile UP3 designs

2 P-core + 8 E-core (2C8c/12T) for mobile UP4 designs

Top configuration will be 8 performance cores with 8 Efficiency cores. More on the performance expectations of both later.

 


Quote

In contrast to previous iterations of Intel’s processors, the desktop processor will support all modern standards: DDR5 at 4800 MT/s, DDR4-3200, LPDDR5-5200, and LPDDR4X-4266. Alongside this the processor will enable dynamic voltage-frequency scaling (aka turbo) and offer enhanced overclocking support. What exactly that last element means we’re unclear of at this point.

Support for DDR5 at 4800 is a peak bandwidth increase of 50% over the highest standard DDR4 speed of 3200, although of course overclocked DDR4 modules exist at much higher speeds. With 4800 as the entry point it should at least be widely compatible, unlike the extreme DDR4 XMP kits. LPDDR5 is also supported.
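As a back-of-the-envelope check on that 50% figure, here's the peak bandwidth maths for a single 64-bit-wide module using just the transfer rates above (nothing Intel-specific, decimal units):

```python
# Peak bandwidth for one 64-bit-wide DIMM slot: MT/s x 8 bytes per transfer.
def peak_gbps(mt_per_s, bus_width_bits=64):
    return mt_per_s * 1e6 * (bus_width_bits // 8) / 1e9

ddr4_3200 = peak_gbps(3200)   # ~25.6 GB/s
ddr5_4800 = peak_gbps(4800)   # ~38.4 GB/s, split across two 32-bit sub-channels per module
print(f"DDR4-3200: {ddr4_3200:.1f} GB/s")
print(f"DDR5-4800: {ddr5_4800:.1f} GB/s ({ddr5_4800 / ddr4_3200 - 1:.0%} more)")
```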

 


Quote

The aggregated changes of the new Golden Cove microarchitecture amount to a median IPC increase of 19% compared to Cypress Cove (Rocket Lake) - measured over a set of workloads including SPEC CPU 2017, SYSmark 25, Crossmark, PCMark 10, WebXPRT3, and Geekbench 5.4.1. We can see in the graph that there are outlier workloads with up to +60% IPC, but also low outliers where the new design doesn't improve things much or even sees regressions, which is odd.

A claimed average increase in IPC of 19% over Rocket Lake. Nothing to complain about, but it was noted they didn't compare against Tiger Lake, which is a partial generation beyond Rocket Lake.

 

 

My thoughts

Based on what's presented we can have a fair idea of what performance could be like. For workloads scaling to 8 cores it will give Zen 3 a good fight. For workloads at 16 cores and beyond, Zen 3 will probably hold on. P-cores continue to move up in performance, and even the E-cores are claimed to beat Skylake.

 

AMD will probably retain the crown in heavily threaded workloads, especially once they release the high-cache versions of Zen 3, but that serves the top end of the market. Zen 4 is far enough out that there is little more than rumours to go on, and it is also expected to arrive much later. Mainstream users not needing huge numbers of cores could see a good improvement with Alder Lake.

 

Outside of the CPU itself, the connectivity brings the expected improvements: more RAM bandwidth to feed more cores, and more PCIe bandwidth for... something. It will be interesting to see what Intel's gaming GPU, Arc, will support when it comes out, and likewise the next-gen GPUs from AMD and Nvidia.

 

 

Note: there is a LOT more information than I put above. I'm at risk of copy pasting the whole article if I keep going. Visit the source for more information.

 

Sources

https://www.anandtech.com/show/16881/a-deep-dive-into-intels-alder-lake-microarchitectures

 


I’m confused what the benefit of big+little cores will be in a desktop; I can see it being good in laptops though… I wonder if there will be some firmware that offloads, for example, background processes like email etc. to the littles while games stay on the big ones…


Just now, Mel0nMan said:

I can see it being good in laptops though

It wasn't even good on laptops.

 


Just now, Mel0nMan said:

I’m confused what the benefit of big+little cores will be In a desktop, I can see it being good in laptops though…

Any power saving can only be a good thing I guess?


Just now, Mel0nMan said:

I’m confused what the benefit of big+little cores will be In a desktop, I can see it being good in laptops though…

I haven't looked into Alder Lake specifically yet, but power savings can be a good thing in desktops too. 


3 minutes ago, SorryClaire said:

It wasn't even good on laptops.

 

Lakefield was honestly a crappy beta version of this. Not gonna say Alder Lake is a beast without real numbers, but on paper it looks rather good, especially if the software optimizations they are talking about do deliver.


Just now, BondiBlue said:

I haven't looked into Alder Lake specifically yet, but power savings can be a good thing in desktops too. 

Yeah, but my 8-core CPUs only use about 10 watts each at idle and about 15-30 W with a web browser, music, email and other daily stuff open. And that's on 32-nanometer lithography. I would think that if there were enough cores that could clock low enough, the big ones wouldn't be needed.


Just now, Mel0nMan said:

I’m confused what the benefit of big+little cores will be In a desktop, I can see it being good in laptops though…

Someone else on this forum came up with a good reason in the past, but I forget what it was or where to find it. Thinking about it now: the small cores are really small compared to the big cores, yet the performance they offer doesn't scale down with their size. So it could be seen as getting closer to 16-core performance for a silicon cost closer to 10 cores. Of course, it is way more complicated than that, but there can be a cost/performance benefit in going that way.
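To illustrate that trade-off with some made-up but plausible ratios (the area and per-core performance figures below are placeholder assumptions, not Intel numbers):

```python
# Hypothetical ratios purely to illustrate the area-vs-throughput argument:
# assume one E-core takes ~1/4 the die area of a P-core and delivers ~0.5x
# its multithreaded performance. Neither figure is an official Intel number.
P_AREA, E_AREA = 1.0, 0.25
P_PERF, E_PERF = 1.0, 0.5

def config(p_cores, e_cores):
    return {"area": p_cores * P_AREA + e_cores * E_AREA,
            "perf": p_cores * P_PERF + e_cores * E_PERF}

print("8P+8E:", config(8, 8))   # ~12 P-core-equivalents of throughput for ~10 P-cores of area
print("16P  :", config(16, 0))  # more throughput, but ~60% more area than 8P+8E
```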

 

1 minute ago, SorryClaire said:

It wasn't even good on laptops.

Lakefield I see as more like a proof-of-concept product. It's 1st gen, and arguably everyone was still learning. The OS. The software. The hardware.


I think we'll have to wait for Windows 11, or better support in Linux, for big.LITTLE designs to actually work well. I've seen how these new Intel CPUs have a dedicated "director" that coordinates which cores do what tasks, and you need the OS to cooperate with it well, otherwise you'll have a crap experience like Riley had in the video above. For desktops I don't think it really matters, but for laptops it could, especially if adjusting power profiles also affects how load is distributed across these cores. It certainly has potential.


This is gonna be interesting.


I'm really confused by the efficiency cores, because they are claiming up to 80% more performance at 4c/4t vs Skylake 2c/4t. Isn't their current generation core only like 20% faster than Skylake? So a 4c/4t efficiency-core cluster would be ~50% more performance than their current cores at 2c/4t? Which would mean the efficiency core on a 1:1 basis is basically comparable to the current cores? Doesn't add up to me...


2 hours ago, Mel0nMan said:

I’m confused what the benefit of big+little cores will be In a desktop, I can see it being good in laptops though… I wonder if there will be some firmware that offloads for example background processes like email etc to the littles and games stay on the big ones…

California energy efficiency standards.

 

Let's say you had a design with 6 big cores and 2 little cores. The power floor for the little cores is lower than for the big cores, so you could have your PC "sleeping" rather than having to spin up to full power mode. This means the PSU can be designed to get a better efficiency rating.

 


2 minutes ago, schwellmo92 said:

I'm really confused by the efficiency cores because they are claiming up to 80% more performance at 4c/4t vs Skylake 2c/4t.

One thing I didn't like is they compared 4T against 4T, but Skylake was using HT. I see it as 4c vs 2c4t. The "extra" threads from HT or SMT do help a bit, but not nearly as much as many think they do. A good case like Cinebench R15/R20/R23 is only around a 30% benefit. So assuming a Cinebench-like workload, it is really 4c vs 2.6c effective.
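Putting rough numbers on that (the ~30% SMT uplift is the Cinebench-style figure above, and Intel's "up to 80%" is a best-case claim, so treat this as the optimistic end):

```python
# Rough effective-core comparison, assuming HT/SMT adds ~30% throughput in a
# Cinebench-like workload (figure from the post above; not a measured result).
SMT_UPLIFT = 0.30

skylake_2c4t = 2 * (1 + SMT_UPLIFT)   # ~2.6 Skylake-core equivalents
claimed_gain = 1.80                   # Intel's "up to 80% more", 4 E-cores vs Skylake 2c/4t
e_cluster    = skylake_2c4t * claimed_gain

print(f"Skylake 2c/4t ≈ {skylake_2c4t:.1f} core-equivalents")
print(f"4 E-cores ≈ {e_cluster:.2f}, i.e. ~{e_cluster / 4:.0%} of a Skylake core each")
```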

 

I've long speculated, more for AMD than Intel, whether SMT might go away. It is much more predictable in performance to run code on a core without SMT, and it also takes out one of the many potential security weaknesses.


3 hours ago, porina said:

8 P-core + 8 E-core (8C8c/24T) for desktop on a new LGA1700 socket

That's not progress, going from 10 cores to 8 cores.

Intel lives in a bubble.

3 hours ago, porina said:

Built on Intel 7 process (formerly known as 10 SFE).

Oh no, please make it stop, those bad naming schemes somehow get worse and worse.


3 minutes ago, Vishera said:

That's not progress,from 10 cores to 8 cores.

You don't care what those cores do? 10 Skylake cores vs 8 Rocket Lake cores (+18% IPC) vs 16 cores (8 big cores +19% IPC over Rocket + 8 small cores ??? IPC).
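Rough big-core throughput math behind that comparison, ignoring clock speeds and leaving the E-core contribution out entirely since its IPC is the unknown here:

```python
# Per-core IPC relative to Skylake (clock speeds ignored, E-cores not counted).
SKYLAKE = 1.00
ROCKET  = 1.18            # Cypress Cove claim vs Skylake (figure from the post above)
GOLDEN  = 1.18 * 1.19     # Golden Cove claim vs Cypress Cove, ~1.40

print("10 Skylake cores:     ", 10 * SKYLAKE)           # 10.0
print("8 Rocket Lake cores:  ", round(8 * ROCKET, 1))   # ~9.4
print("8 Golden Cove P-cores:", round(8 * GOLDEN, 1))   # ~11.2, before any E-core contribution
```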

 

I think Zen 3 remains a solid contender at 16+ cores, but it will be interesting to see how this plays out separately in what was the up to 8 core space (most mainstream) and the >8-16 core space (power users?).

 

3 minutes ago, Vishera said:

Oh no,please make it stop,those bad naming schemes somehow get worse and worse.

We've done that one to death in another thread.


Sounds very promising. If Intel delivers on all these claims then Alder Lake might be the big comeback I've been hoping for. 

But even if the claims are true, Intel will still have to hit an appealing price point. I am worried that the large number of PCIe lanes, DDR5 and the other new stuff might make these processors (or at the very least the motherboards and memory) very expensive.


3 hours ago, Mel0nMan said:

I’m confused what the benefit of big+little cores will be In a desktop, I can see it being good in laptops though… I wonder if there will be some firmware that offloads for example background processes like email etc to the littles and games stay on the big ones…

Background tasks tend to be brief and lightweight, but they switch around between threads quite frequently, which can hurt overall throughput.
 

Using small cores to handle these allows better utilization of the large cores, which should improve both efficiency and performance. The use of small cores works because their die footprint tends to be very small, making them inexpensive to implement vs throwing more large cores at the problem. Finally, an errant program pegging out a small core isn't going to be quite so disastrous for battery life (for laptop users) and heat as it would be if it pegged a large core.
 

These are my guesses, so grain of salt and all. 
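As an aside, the "keep background work off the important cores" idea can already be approximated by hand on Linux; a minimal sketch (the core numbering is purely hypothetical, and on Alder Lake the whole point is that the Thread Director plus the OS scheduler should handle this automatically):

```python
import os

# Hypothetical split: pretend logical CPUs 0-15 are the "big" cores and 16-23
# the "little" ones. Pin this (background) process to the little set so it
# never competes with latency-sensitive work on the big cores. Linux-only.
LITTLE_CORES = set(range(16, 24))

def confine_to_little_cores():
    os.sched_setaffinity(0, LITTLE_CORES)   # 0 = the calling process

if __name__ == "__main__":
    confine_to_little_cores()
    print("Now restricted to CPUs:", sorted(os.sched_getaffinity(0)))
```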


23 minutes ago, LAwLz said:

But even if the claims are true, Intel will still have to hit an appealing price point. I am worried that the big amount of PCIe lanes, DDR5 and the other new stuff might make these processors (or at the very least motherboards and memory) very expensive.

Let's take those one at a time.

 

I've not been following whether there have been leaks, but if we assume the images today are representative of the product, we can estimate a die size. By eyeball, let's say it is comparable to existing higher-end consumer CPUs, so manufacturing cost should also be in a similar ballpark.

 

PCIe lanes from the CPU are no different than what we've seen since Zen 2 (maybe earlier? I can't recall) and Rocket Lake, with 16+4 user accessible lanes. The only thing new is Intel's support of gen 5 for the x16 link. Maybe mobo tolerances will need to improve to make that reliable.

 

Stated PCIe lanes from the chipset were the maximum configuration. It is expected they'll offer a range as usual, so those not needing all features can seek more cost effective options for the desired connectivity.

 

DDR5 will almost certainly not be cheap, but at the same time I'd not expect it to be that expensive. Will it be more expensive than DDR4 3200? 3600? Probably. Will it be more expensive than DDR4 4800? I doubt it, since those are binned and heavily overclocked. DDR5 4800 will be a basic speed and in volume manufacturing. There will be an early adopter premium, but longer term I'd hope to see it cost no more than JEDEC timing 3200.

 

Mobo pricing has been going up since around Skylake or Kaby Lake era. But arguably, they're also getting better build quality. People care a lot more about power phases and cooling now. I'd hope that pricing for like for like build quality will not be significantly different from today.


3 hours ago, porina said:

I've not been following if there have been leaks, but if we assume the images today are representative of the product we can estimate a die size. By eyeball, let's say it is comparable to existing higher end consumer CPUs. So manufacturing cost should also be in a similar ball park.

I dunno man. It seems like it will be pretty complex, and they will most likely require quite a bit more die area for the 8+8 CPU than for their current 8-core CPUs. The smaller node will hopefully help with keeping pricing down, but I am not that confident.

If Intel wants to match AMD in terms of performance across the board then their 8+8 core CPU will probably have to cost about the same as AMD's 8 core CPU, and I don't think that will be the case.

I'm worried the price to performance in single core (or applications that don't scale well above 8 threads) will suffer, because you end up with 8 small cores that don't contribute much in terms of performance, but do factor into the price.

 

If I'm running a program that only uses 8 cores then an 8-core part will perform the same as an 8+8 part, but the latter will probably require far more die area to manufacture (and thus have a higher price). Let's be honest, most people do not need more than 8 cores today. Most people (even enthusiasts) probably don't need more than 6 cores either, but 8 seems to be a pretty good sweet spot today.

We might end up in a situation where AMD wins the price to performance crown in lightly threaded applications but gets beat in highly threaded applications, but most people probably want the opposite (good performance in lightly threaded programs).

 

 

3 hours ago, porina said:

PCIe lanes from the CPU are no different than what we've seen since Zen 2 (maybe earlier? I can't recall) and Rocket Lake, with 16+4 user accessible lanes. The only thing new is Intel's support of gen 5 for the x16 link. Maybe mobo tolerances will need to improve to make that reliable.

That last sentence is what I am worried about. Higher tolerances for PCIe 5.0 = higher motherboard cost.

 

 

3 hours ago, porina said:

DDR5 will almost certainly not be cheap, but at the same time I'd not expect it to be that expensive. Will it be more expensive than DDR4 3200? 3600? Probably. Will it be more expensive than DDR4 4800? I doubt it, since those are binned and heavily overclocked. DDR5 4800 will be a basic speed and in volume manufacturing. There will be an early adopter premium, but longer term I'd hope to see it cost no more than JEDEC timing 3200.

Yeah, that'll probably be the case. But I think the price premium will be there through Alder Lake's shelf life. DDR5 pricing will be solved over time, but I am someone who might be interested in upgrading to Alder Lake and DDR5 will probably make it a far more expensive upgrade than I'd like.

 

 

Don't get me wrong. I like everything that has been presented so far in regards to Alder Lake. I am excited for it. I am just worried that we're hearing about all this cool new stuff that's coming, and the price for an Alder Lake system will end up way higher than most people expect.

All info so far seems good, but I won't jump on the hype train until the price is revealed.


8 hours ago, Mel0nMan said:

I’m confused what the benefit of big+little cores will be In a desktop, I can see it being good in laptops though… I wonder if there will be some firmware that offloads for example background processes like email etc to the littles and games stay on the big ones…

So assuming Windows gets a proper update to its scheduler, there are good use cases for little cores on desktop. Typically the per-transistor and per-watt performance of these little cores is better than that of big cores (you can normally fit multiple little cores in the same die area needed for just one big core). So as long as your tasks are easy to distribute across multiple cores (and you don't need them to finish as soon as possible), it is more efficient (both in transistors and power) to use little cores.

And yes, on the high-end desktop you care more about perf/W than you might think. After all, the limiting factor on your CPU's boost speed is its power draw and heat output; if you can move most or all of the OS background work to these little cores (this is work that would be happening anyway), the total power draw for your task will go down, providing more thermal/power headroom to increase the power draw of the performance cores.

In some ways, the higher-end your desktop is, the more you're interested in big.LITTLE, as you start to do tasks where it is really important that the L1/L2 caches do not get cleared due to a context switch. By having a few small cores that can handle random other background tasks, the L1/L2 of the performance core does not get affected when those background tasks need to run. For gaming (where any added latency can result in uneven frame delivery) this is even more important, ensuring the L1/L2 caches of the CPU cores used by the game engine stay untouched by background noise from your live stream chat, the Twitch video you're streaming on the other screen, etc.


7 hours ago, RejZoR said:

I think we'll have to wait for Windows 11 or better support in Linux for big.LITTLE designs to actually work well. I've seen how these Intel's new CPU's have dedicated "director" that coordinates what cores do what tasks and you need OS to cooperate with it well, otherwise you'll have crap experience like Riley had in the video above. For desktops, I don't think it really matters, but for laptops, especially if adjusting power profiles will also affect how load is distributed through these cores. It certainly has potential.

Good big.LITTLE optimisation needs more than OS-level support; it needs developers to tag threads/tasks with priority. At the moment the only platform other than Apple that has been pushing this is Android.

 

Apple has been pushing this for many years, ensuring developers tag all tasks/threads with multiple levels of priority. macOS on Intel already uses this to schedule lower-priority tasks on cores that are currently running at lower boost speeds, and on their big.LITTLE design (M1) it is extremely helpful, with almost all apps on the platform already having these annotations embedded within the binary.

Without this, the OS can only really do scheduling at a process level, since it would not be able to figure out which threads have a given priority. Even then it can be hard for the OS to figure out whether some background task is user-initiated (and thus important, like rendering a video) or a real background task. I think MS does have some flags devs can use, but in my experience in this space very, very few apps apply this info to their code base.
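For a sense of how coarse the generic alternative is: about the closest portable tool today is a process-level niceness hint, which is far blunter than the per-thread QoS tagging described above. A minimal POSIX-only sketch (the "background indexer" is just a placeholder name):

```python
import os
import time
from multiprocessing import Process

def background_indexer():
    # Mark this whole worker process as low priority. This is a crude,
    # process-level hint; it cannot express per-thread QoS the way real
    # priority tagging can. POSIX-only (os.nice is unavailable on Windows).
    os.nice(19)
    while True:
        time.sleep(60)   # placeholder for actual background work

if __name__ == "__main__":
    Process(target=background_indexer, daemon=True).start()
    time.sleep(1)        # latency-sensitive work would continue here in the main process
```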


3 hours ago, hishnash said:

Good Big.LITTLE optimisation needs more than os level support, it needs developer to tag threads/tasks with priority. At the moment the only platform other than Apple that have been pushing this is android.

 

It's also a 1st-gen implementation of this new paradigm from Intel: big.LITTLE, DDR5, PCIe 5.0. I suspect lots of bugs. That's not to say it will be a bad chip, but I'm not convinced it will be good value until DDR5 drops in price and storage and GPUs can actually leverage PCIe 4.0, let alone 5.0.

So in the market for a new laptop because it's needed? Sure, go for it. Looking to upgrade your PC? I dunno, depends on how much you're hurting for an upgrade, otherwise IMHO I would pass on 1st gen.


Can't wait for GN's take on this.  "Back to you Steve"


12 hours ago, Mel0nMan said:

I’m confused what the benefit of big+little cores will be In a desktop, I can see it being good in laptops though… I wonder if there will be some firmware that offloads for example background processes like email etc to the littles and games stay on the big ones…

Just gonna copy this from where I wrote it originally a while ago:

 

Smaller cores are more efficient at a given power load. We know that Zen cores (for example) can be power throttled at times - this is clear when you measure power consumption vs cores loaded - a 5950X reaches peak overall power consumption at an 8-10-core load, despite being a 16-core processor:

 

https://images.anandtech.com/doci/16214/PerCore-1-5950X.png

 

It's clear here that the individual cores have under half the power available to them under a full-core load vs an 8-core load. There is potential performance left on the table here, but it can't be extracted because of power budgets and heat - this is one of the reasons why multicore scaling always kinda sucks, even in perfectly multithreaded workloads.

 

And so this brings up an interesting question: is there much point having 16 high-powered cores if you can't make full use of all of them at once anyway? Given that the whole point of 'little' cores is to be more power-efficient than their 'big' partners, could it be that - by making better use of that limited power budget - they can offer significant benefits during a multithreaded workload?

 

The answer to that will depend on the answer to a different question: how close can a 'little' core get to a power-limited 'big' core? According to Intel, the answer to that is "pretty close". Their current Lakefield hybrid processors apparently show a performance profile that looks a bit like this:

 

https://images.anandtech.com/doci/15009/Tremont - Stephen Robinson - Linley - Final-page-013.jpg

 

which predicts that at 50% power budget, a Sunny Cove 'big' core consumes the same amount of power as a Tremont 'little' core running flat-out, while also providing ~15% more performance. Which doesn't sound too great for the little Tremont cores... until you realise that they are a third of the size of the big Sunny Cove core:

 

https://images.anandtech.com/doci/15877/LKF TOP.jpg

(there are four cores in that Tremont Atom section)

 

If you're chasing multicore performance - which you are when you're talking 8+ core processors - replacing a couple of big cores with six little cores in the same space seems to be a bit of a no-brainer. At 33% of their power budget, three Tremont cores should handily outperform the single 50%-power-budget Sunny Cove core that would consume the same amount of power (3x ~50% relative perf > 1x ~80%) - provided of course that the workload is sufficiently multithreaded. But if it isn't multithreaded, then it shouldn't matter - you've still got big cores to take care of those tasks for you. This lines up with the leaked configurations, which suggest that the low core count desktop CPUs would only have big cores - i.e. the little cores are for those who need the multicore performance.
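That comparison in numbers (the 0.8 and 0.5 relative-performance figures are just read off the slide linked above, so treat them as approximate rather than measured):

```python
# Relative single-core performance figures eyeballed from the Lakefield slide:
# a Sunny Cove core at ~50% of its power budget delivers ~0.8, and a Tremont
# core limited to ~a third of its budget delivers ~0.5. Approximate values.
sunny_cove_at_half_power = 0.80
tremont_at_third_power   = 0.50

same_power_big    = 1 * sunny_cove_at_half_power   # one power-limited big core
same_power_little = 3 * tremont_at_third_power     # three power-limited little cores, ~same total area and power

print(f"big:    {same_power_big:.2f}")
print(f"little: {same_power_little:.2f}   # wins on throughput in the same power envelope")
```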

 

As such, I think these new Alder Lake CPUs absolutely could beat out a 5950X in multithreaded workloads (provided the Windows scheduler is up to the task). We've already seen leaked benchmarks suggesting this is the case. Whether this lead will last very long is yet to be seen - I'm just glad that real competition finally seems to be back on the menu.


13 hours ago, porina said:

AVX-512 support removed

RIP AVX-512

 

Note for those that care: Golden Cove on the server platform will still support AVX-512.

Quote

But the silicon will still be physically present in the core, only because Intel uses the same core in its next generation server processors called Sapphire Rapids.

Technically it's still there in Alder Lake, but disabled so hybrid can work without being an even bigger headache.
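If you want to check what your own chip exposes, the feature flags the kernel reports will tell you; a quick Linux-only sketch (per the above, the expectation on Alder Lake with the E-cores enabled is that no avx512* flags show up):

```python
# Minimal Linux-only check of the AVX-512 feature flags reported in /proc/cpuinfo.
def avx512_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return sorted({tok for tok in line.split() if tok.startswith("avx512")})
    return []

if __name__ == "__main__":
    flags = avx512_flags()
    print("AVX-512 flags:", ", ".join(flags) if flags else "none reported")
```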

