
MSI has just confirmed Intel Alder Lake CPU launch date (November 4th)

Lightwreather
4 minutes ago, J-from-Nucleon said:

I doubt anyone would use a CPU over a GPU for ML

My point.

 

I think Intel wants some ML, like OpenCV (for face tracking), ASR, and TTS, to be available to end-users without needing a (*checks prices*) $400 Nvidia GPU (GTX 1050 Ti) to process it. Yet most of this stuff is not compiled to use Intel's AVX instructions, because doing so excludes too many CPUs that are still in use.

 

If Intel were smart and wanted in on the ML stuff to keep ahead of AMD's CPUs, they should do more than just contribute to PyTorch/TensorFlow. Whenever Nvidia comes out with some ML thing, they should come out with a version that runs on their own CPUs/GPUs. Like right now there's not even a way to benchmark ML performance across platforms, only between Nvidia GPUs.

 


1 hour ago, Kisai said:

Unless you're doing ML stuff on the CPU, there's no performance gain from having the 11th-gen Rocket Lake CPU. And Alder Lake will end up being a step backwards if the ML load actually uses the AVX-512 instructions. I don't see how games can use AVX-512, since they're not typically on the training end of machine learning, even if they do find a use for it (e.g. some games have taken up primitive voice recognition, such as Phasmophobia).

AVX-512 isn't a single thing, but a collection of various extensions. The ML stuff is a later, optional addition to it. The basic feature is like the other AVX sets: applying the same instruction to multiple data at once. With the consumer AVX-512 implementations we don't actually get an increase in FP64 execution resources, like we saw with the move to AVX in the first place (extended a bit by AVX2). AVX-512 in SKX and similar did double the execution units, for up to a 2x uplift in perf. Consumer parts have the same execution potential as AVX2, but I'm still seeing about a 40% per-clock increase from AVX-512 even in that case. It may be due to the expanded register availability, or other optimisations it enables.
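If you want to see which of these extensions your own CPU actually reports, here's a minimal Python sketch (assuming the third-party py-cpuinfo package, installed with pip install py-cpuinfo; flag names follow /proc/cpuinfo conventions and can vary by OS):

# Minimal sketch: list which AVX-family feature flags the CPU reports.
# Assumes the third-party py-cpuinfo package (pip install py-cpuinfo).
import cpuinfo

flags = set(cpuinfo.get_cpu_info().get("flags", []))
for feature in ("avx", "avx2", "avx512f", "avx512vl", "avx512_vnni"):
    print(feature, "yes" if feature in flags else "no")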

 

As an outsider, it seems implementing AVX-512 could offer a performance uplift if the software can already use AVX2. Cinebench's developers have stated they don't support AVX-512, even though they have started using AVX, since '512 didn't provide a benefit. In my analysis of R23 scores, Cinebench doesn't seem to use that much AVX, as the scores correlate poorly with a CPU's peak AVX performance. So it seems there must be some level of AVX usage needed for '512 to be worth it, and Cinebench may not be at that level.
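That kind of correlation check is easy to reproduce; a minimal sketch with NumPy, using made-up placeholder numbers rather than real benchmark data:

# Minimal sketch: correlate benchmark scores against peak AVX throughput.
# All numbers below are placeholders for illustration, not measurements.
import numpy as np

r23_multi = np.array([10500.0, 14800.0, 15200.0, 22500.0, 28000.0])  # hypothetical R23 scores
peak_avx = np.array([800.0, 1500.0, 2400.0, 1900.0, 2600.0])         # hypothetical peak AVX GFLOPS

r = np.corrcoef(r23_multi, peak_avx)[0, 1]
print(f"correlation: {r:.2f}")  # a weak value would suggest the benchmark isn't AVX-bound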

 

40 minutes ago, J-from-Nucleon said:

I doubt anyone would use a CPU over a GPU for ML

I think I've asked something similar in the past, and the answer came back along the lines that a GPU's limited VRAM can be a limiting factor for bigger workloads, whereas CPUs are relatively unlimited in that area.



4 hours ago, Kisai said:

I don't see how games can use AVX-512, since they're not typically on the training end of machine learning, even if they do find a use for it

AVX-512 can be seen as a doubly-wide AVX2, so any SIMD workload can be thrown at it, not only ML stuff.

The problem is, the number of CPUs that have it is so small, and the power draw/downclocking required to use it so high, that in the end it doesn't make sense for most commonly available software to support it. If you really want to do a SIMD-heavy program, you're better off doing it on a GPU anyway.

 

3 hours ago, Kisai said:

Like right now there's not even a way to benchmark ML performance across platforms, only between Nvidia GPUs.

There's no ML platform outside of Nvidia anyway (sadly).



55 minutes ago, igormp said:

AVX-512 can be seen as a doubly-wide AVX2, so any SIMD workload can be thrown at it, not only ML stuff.

True, but again, that's what the GPU is for, normally.

55 minutes ago, igormp said:

The problem is, the number of CPUs that have it is so small, and the power draw/downclocking required to use it so high, that in the end it doesn't make sense for most commonly available software to support it. If you really want to do a SIMD-heavy program, you're better off doing it on a GPU anyway.

What I'm thinking of here is running certain processes on the CPU, in particular Face ID-like logic for Windows Hello and ASR for Siri-like experiences. Microsoft (and Google, and Amazon) all want you to use their cloud services for this stuff, and that adds too much latency and makes the features completely unusable if you're not on the internet, or if the internet cost is prohibitive.

 

55 minutes ago, igormp said:

There's no ML platform outside of Nvidia anyway (sadly).

Yeah, and every time I switch PyTorch/TensorFlow between CPU and GPU, it's like "this process takes 10x longer on the CPU". After I re-set up a bunch of things on the 11th-gen CPU, I had to double-check the GPU features again and kept seeing the "Intel oneDNN" message, but I have no idea if it was actually being used anywhere, and none of the computers will run TensorFlow >2.3, while oneDNN only shows up with 2.5.
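The device-switching part, at least, can be kept to one line; a minimal PyTorch sketch (oneDNN is exposed through torch.backends.mkldnn, which is about the only visible trace of it as far as I can tell):

# Minimal sketch: device-agnostic PyTorch, falling back to CPU when no GPU is present.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("running on:", device)
# oneDNN (formerly MKL-DNN) accelerates CPU ops; PyTorch exposes it as mkldnn.
print("oneDNN available:", torch.backends.mkldnn.is_available())

model = torch.nn.Linear(512, 512).to(device)
x = torch.randn(64, 512, device=device)
with torch.no_grad():
    y = model(x)  # identical code path on CPU or GPU
print(y.shape)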

 

At any rate, I'm not an ML dev; most of the stuff I've been experimenting with has been existing models, and if they don't work on newer frameworks, SOL.

 


I can't wait to see how hot and power-hungry this runs, between DDR5 and Intel likely pushing 10nm SuperFin (or whatever they've renamed it to) hard. I doubt 14nm++++++ hard, but still pretty hard.

I know people testing DDR5 in servers are having a fun time actively cooling it.



16 minutes ago, GDRRiley said:

I can't wait to see how hot and power-hungry this runs, between DDR5 and Intel likely pushing 10nm SuperFin (or whatever they've renamed it to) hard. I doubt 14nm++++++ hard, but still pretty hard.

I know people testing DDR5 in servers are having a fun time actively cooling it.

Perhaps the days of RAM coolers are coming back???


 


Just now, CommanderAlex said:

Perhaps the days of RAM coolers are coming back???

If not, it's going to be some massive passive stacks on top.



46 minutes ago, Kisai said:

What I'm thinking of here is running certain processes on the CPU, in particular Face ID-like logic for Windows Hello and ASR for Siri-like experiences. Microsoft (and Google, and Amazon) all want you to use their cloud services for this stuff, and that adds too much latency and makes the features completely unusable if you're not on the internet, or if the internet cost is prohibitive.

Well, usually inference is done on the CPU or on special NPUs (like in phones), and those workloads are lightweight enough that you won't need a GPU. GPUs are way more important when it comes to training, during development; requiring a GPU for inference is shooting yourself in the foot due to the high power consumption, costs, and low availability.

Stuff like Face ID and some voice recognition (even Siri) is already being done on-device.
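That's also why CPU-side inference usually leans on tricks like quantization to stay lightweight. A minimal sketch with PyTorch's dynamic quantization (just an illustration of the technique, not how any particular assistant does it):

# Minimal sketch: dynamic int8 quantization for lightweight CPU-only inference.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(256, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

# Swap the Linear layers for int8 dynamically-quantized versions (CPU backend).
qmodel = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 256)
with torch.no_grad():
    print(qmodel(x).shape)  # torch.Size([1, 10])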



41 minutes ago, GDRRiley said:

I can't wait to see how hot and power-hungry this runs, between DDR5 and Intel likely pushing 10nm SuperFin (or whatever they've renamed it to) hard. I doubt 14nm++++++ hard, but still pretty hard.

I know people testing DDR5 in servers are having a fun time actively cooling it.

Server stuff, being registered ECC, usually takes a fair bit more power than the consumer-grade stuff enthusiasts get. DDR5 should work out lower power than DDR4 for the same bandwidth/latency: DDR5 is nominally 1.1 V compared to DDR4's 1.2 V, and that's before considering that DDR4 running anywhere close to DDR5 speeds will be heavily overclocked at 1.35 V or more.
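As a rough back-of-the-envelope check, switching power scales with roughly V² at a given frequency (an approximation that ignores I/O and controller overheads):

# Rough sketch: relative DRAM power from nominal voltages, same frequency assumed.
# Assumes power ~ V^2, which ignores I/O, PMIC and refresh overheads.
v_ddr4, v_ddr5, v_ddr4_oc = 1.2, 1.1, 1.35
print(f"DDR5 vs DDR4 at stock: {(v_ddr5 / v_ddr4) ** 2:.2f}x")    # ~0.84x, about 16% lower
print(f"OC DDR4 vs stock DDR4: {(v_ddr4_oc / v_ddr4) ** 2:.2f}x")  # ~1.27x, about 27% higher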

 

As for the Intel 7 process it uses (formerly known as 10nm Enhanced SuperFin), we'll have to wait and see. This could be the one where they finally catch up to TSMC 7nm. Yes, TSMC is effectively years ahead there; that isn't something you can recover from quickly, and this will be an indication of where they're going. Its predecessor had the performance, just not the efficiency, to match TSMC 7nm.



2 hours ago, igormp said:

There's no ML platform outside of Nvidia anyway (sadly).

Well there is... sort of... but like... lol


30 minutes ago, porina said:

Server stuff, being registered ECC, usually takes a fair bit more power than the consumer-grade stuff enthusiasts get. DDR5 should work out lower power than DDR4 for the same bandwidth/latency: DDR5 is nominally 1.1 V compared to DDR4's 1.2 V, and that's before considering that DDR4 running anywhere close to DDR5 speeds will be heavily overclocked at 1.35 V or more.

As for the Intel 7 process it uses (formerly known as 10nm Enhanced SuperFin), we'll have to wait and see. This could be the one where they finally catch up to TSMC 7nm. Yes, TSMC is effectively years ahead there; that isn't something you can recover from quickly, and this will be an indication of where they're going. Its predecessor had the performance, just not the efficiency, to match TSMC 7nm.

Remember though, voltage regulation can be done by the sticks, not the board. So I won't be surprised when we have consumer DDR5 running at 1.4 V+ to try to get over 9000 MT/s. Yes, it's nominally 1.1 V, but that will likely only get you to just above 6400 MT/s.



6 hours ago, porina said:

I think I've asked something similar in the past, and the answer came back along the lines that a GPU's limited VRAM can be a limiting factor for bigger workloads, whereas CPUs are relatively unlimited in that area.

So one of the professors of computational biology gave this exact reasoning when talking about GPUs. He's internationally well known in his field, so I'm not going to say his name, because then you'd know where I work. Anyway, all his HPC workloads require more than 100GB of RAM, so he currently has no interest in GPUs at all.

 

Edit: Removed. Google caching and search is just too good lol /Edit

 

Of course this is not ML; however, there is no such thing as "generic ML" either, and much of the above computation is also done on GPUs, in different ways and in different areas of the field. This is why Intel has VNNI and BF16 on their server processors: the more you have to pass data around system buses and between accelerators etc., the slower the overall end-to-end throughput/performance is. There can be a lot of benefit to keeping everything on the CPU and in system memory, but these are all heavily "it depends".

https://www.intel.com/content/dam/www/public/us/en/documents/product-overviews/dl-boost-product-overview.pdf

 

I personally only know how to interpret workload requirements into server builds, by having that conversation and asking questions; beyond that I know very, very little about what our researchers do or the code, data, etc. they work with. I just don't have the time or the current skillset to, because I don't just provide research/HPC support; it sits alongside my more primary role of corporate IT support for the core systems of the university.

 

We have people asking for GPUs and people not asking for GPUs, so as yet GPUs cannot replace everything CPUs can do in these computational areas.


25 minutes ago, leadeater said:

Well there is... sort of... but like... lol

Well, even AMD says they do have it, but like... lol

 

9 minutes ago, leadeater said:

Anyway, all his HPC workloads require more than 100GB of RAM, so he currently has no interest in GPUs at all.

100GB is still doable for a single server with 3~4 A100s. When you get to the 500GB~1TB range you start to think about interconnect bottlenecks and whatnot, and it may end up being easier/cheaper to just use the CPU.



11 minutes ago, igormp said:

100GB is still doable for a single server with 3~4 A100s. When you get to the 500GB~1TB range you start to think about interconnect bottlenecks and whatnot, and it may end up being easier/cheaper to just use the CPU.

He's on quad-socket servers with 512GB of RAM currently; dual-socket 7713 systems with 1TB are arriving this week for him. Still, I believe for him a lot of it is also what he is doing, not just GPU memory alone, that stops him utilizing GPUs.

 

He's actually rather anti-GPU lol. Probably why he hasn't found a way to use them.


2 hours ago, GDRRiley said:

I can't wait to see how hot and power-hungry this runs, between DDR5 and Intel likely pushing 10nm SuperFin (or whatever they've renamed it to) hard. I doubt 14nm++++++ hard, but still pretty hard.

I know people testing DDR5 in servers are having a fun time actively cooling it.

I've only seen rumors that Alder Lake's PL2 is 228W, but hopefully it runs cooler than Rocket Lake.

2 hours ago, GDRRiley said:

If not, it's going to be some massive passive stacks on top.

I'd like to see some tall RAM sticks that don't have RGB on them making the cooling less effective.


48 minutes ago, GDRRiley said:

Remember though, voltage regulation can be done by the sticks, not the board. So I won't be surprised when we have consumer DDR5 running at 1.4 V+ to try to get over 9000 MT/s. Yes, it's nominally 1.1 V, but that will likely only get you to just above 6400 MT/s.

If you mean you can use more than the standard voltage, that's a given. Still, if we're talking about operating the CPU at stock then strictly speaking we should exclude RAM OC from that and only consider standard RAM; once you OC anything, power is unconstrained. With DDR4 you can get standard 3200 modules at 1.2 V, and that's as far as the defined speeds go. That the DDR5 standard goes beyond 6400 will be interesting as we move up the range. OC modules will probably get there earlier, at an increased power cost.

 

Not sure if you meant that one of the changes with DDR5 is that voltage conversion is done on the module rather than left to the mobo. That will be an area overclockers will probably watch with interest.



2 minutes ago, Blademaster91 said:

I've only seen rumors that Alder Lake's PL2 is 228W, but hopefully it runs cooler than Rocket Lake.

Efficiency should be improved; how much remains to be seen.

 

Not sure if I should call the Rocket Lake power consumption poorly presented, or poorly consumed. In my own testing, under typical operating conditions the perf/W of Rocket Lake is near enough the same as Comet Lake, probably since they're both on the same process and operating quite far up the voltage/frequency curve. RKL, while using more power, is also doing more work. Still nowhere near as efficient as Zen 3. I think it would be a good milestone for Intel if Alder Lake can match the power efficiency of Zen 3.



59 minutes ago, porina said:

If you mean you can use more than the standard voltage, that's a given. Still, if we're talking about operating the CPU at stock then strictly speaking we should exclude RAM OC from that and only consider standard RAM; once you OC anything, power is unconstrained. With DDR4 you can get standard 3200 modules at 1.2 V, and that's as far as the defined speeds go. That the DDR5 standard goes beyond 6400 will be interesting as we move up the range. OC modules will probably get there earlier, at an increased power cost.

Thing is, RAM is almost always pushed out of spec.

59 minutes ago, porina said:

Not sure if you meant that one of the changes with DDR5 is that voltage conversion is done on the module rather than left to the mobo. That will be an area overclockers will probably watch with interest.

Voltage control can be done on-module. I'm almost certain there will be some crazy 1.5 V+ OC-focused 16 or 32GB DIMMs.



1 hour ago, porina said:

I think it would be a good milestone for Intel if Alder Lake can match the power efficiency of Zen 3.

I'll go out on a limb and say it'll be more power efficient than Zen 3.


2 hours ago, GDRRiley said:

Thing is, RAM is almost always pushed out of spec.

The vast majority of systems run standards-based RAM. Only in the enthusiast niche, where XMP is common, is it pushed.

 

1 hour ago, leadeater said:

I'll go out on a limb and say it'll be more power efficient than Zen 3.

Let me refine my previous statement to P cores specifically. That would be the true test of what they can currently do. Efficiency at the tradeoff of performance is always an option. 



2 hours ago, porina said:

Let me refine my previous statement to P cores specifically. That would be the true test of what they can currently do. Efficiency at the tradeoff of performance is always an option. 

Oh, I still mean the P cores too. Mobile Tiger Lake is already rather power efficient; the 11800H is currently competitive with Zen 3 and the 5800H/HS.

 

[chart: 24.png]

[chart: 27.png]

https://www.techspot.com/review/2262-intel-core-i7-11800h/

 

Gaming performance is equally close.

 

So I'm not really going out on a limb; it's near impossible for Golden Cove to be less power efficient than Zen 3. Not with an enhanced silicon process and many architectural improvements; it would be a mighty dropped ball on Intel's part if so.


8 hours ago, leadeater said:

Oh, I still mean the P cores too. Mobile Tiger Lake is already rather power efficient; the 11800H is currently competitive with Zen 3 and the 5800H/HS.

Your 2nd chart is the most telling. Tiger Lake is a significant step over Comet Lake (wish they had Ice Lake there also), but it is clearly short of Zen 3 efficiency. It's interesting that the scaling seems more linear at the top end shown, implying there is much more headroom potential. I really wish Intel had offered Tiger Lake on desktop instead of Rocket Lake; with desktop power levels and cooling, it could be really potent. Maybe even good enough to make ADL look not so good.

 

8 hours ago, leadeater said:

So I'm not really going out on a limb; it's near impossible for Golden Cove to be less power efficient than Zen 3. Not with an enhanced silicon process and many architectural improvements; it would be a mighty dropped ball on Intel's part if so.

Intel have been playing catch-up for a while. I'm going to be a bit more restrained and will set my hopes on Golden Cove being efficiency-competitive with Zen 3, with peak performance as a separate measure where I think it could do well. Process-wise they're only just reaching parity with TSMC 7nm, so any significant gains beyond that will have to come from architecture.



8 hours ago, porina said:

Your 2nd chart is the most telling. Tiger Lake is a significant step over Comet Lake (wish they had Ice Lake there also), but it is clearly short of Zen 3 efficiency. It's interesting that the scaling seems more linear at the top end shown, implying there is much more headroom potential. I really wish Intel had offered Tiger Lake on desktop instead of Rocket Lake; with desktop power levels and cooling, it could be really potent. Maybe even good enough to make ADL look not so good.

Intel have been playing catch-up for a while. I'm going to be a bit more restrained and will set my hopes on Golden Cove being efficiency-competitive with Zen 3, with peak performance as a separate measure where I think it could do well. Process-wise they're only just reaching parity with TSMC 7nm, so any significant gains beyond that will have to come from architecture.

When it comes to desktop, as you noticed the power scaling is very good, so I really can't see how a 12900K or whatever won't be more power efficient than Zen 3 at those power levels. At 35W it'll be much closer, possibly still not ahead. At 45W I think we'll see the biggest battle, as it could go either way.

 

Intel still designs their processes around more aggressive clocks and higher power targets, whereas TSMC almost always targets lower power, or at the very least lower frequencies.

 

Shift that black line up by as little as 15% and it's quite a telling story.

