Intel 10th Gen CPUs revealed w/ ASUS z490 Motherboards

Dellenn

Courtesy of Tech PowerUp

 

https://www.techpowerup.com/266411/intel-10th-generation-comet-lake-desktop-processors-and-400-series-chipsets-announced-heres-whats-new

 

Quote

Intel today launched its 10th generation Core desktop processor family and its companion Intel 400-series chipsets. Based on the 14 nm++ silicon fabrication process and built in the new LGA1200 package, the processors are based on the "Comet Lake" microarchitecture. The core design of "Comet Lake" and its IPC are identical to those of "Skylake"; however, Intel brought significant enhancements to the processor's clock-speed boosting algorithm, increased core and thread counts across the board, and introduced new features that could interest enthusiasts and overclockers. The uncore component remains largely unchanged from the previous generation, with support for DDR4 memory and PCI-Express gen 3.0. Use of these processors requires a new socket LGA1200 motherboard; they won't work on older LGA1151 motherboards. You can install any LGA115x-compatible cooler on LGA1200, provided it meets the thermal requirements of the processor you're using.

At the heart of the 10th generation Core processor family is a new 10-core monolithic processor die, which retains the same basic structure as the previous-generation 8-core "Coffee Lake Refresh" and 4-core "Skylake" dies. The cores are arranged in two rows, sandwiched by the processor's uncore and iGPU blocks. A ring-bus interconnect binds the various components. The cache hierarchy is also unchanged from previous generations, with 32 KB each of L1I and L1D cache and 256 KB of dedicated L2 cache per core, plus 20 MB of shared L3 cache. The iGPU is the same Gen 9.5-based UHD 630 graphics. As we mentioned earlier, much of Intel's innovation for the 10th generation is in the processor's microcode (boosting algorithms).

 

 

I can't wait to see how these fare against the 3rd gen Ryzen chips - especially at these price points.



A TDP of 125W scares me. We all know just how much power a "95W" 9900K pulled.

¯\_(ツ)_/¯

 

 

Desktop:

Intel Core i7-11700K | Noctua NH-D15S chromax.black | ASUS ROG Strix Z590-E Gaming WiFi | 32 GB G.SKILL TridentZ 3200 MHz | ASUS TUF Gaming RTX 3080 | 1TB Samsung 980 Pro M.2 PCIe 4.0 SSD | 2TB WD Blue M.2 SATA SSD | Seasonic Focus GX-850 | Fractal Design Meshify C | Windows 10 Pro

 

Laptop:

HP Omen 15 | AMD Ryzen 7 5800H | 16 GB 3200 MHz | Nvidia RTX 3060 | 1 TB WD Black PCIe 3.0 SSD | 512 GB Micron PCIe 3.0 SSD | Windows 10


1 minute ago, BobVonBob said:

A TDP of 125W scares me. We all know just how much power a "95W" 9900K pulled.

I'm starting to think the 220W i9 PL2 limit is going to be true


14 minutes ago, Dellenn said:

The core design of "Comet Lake" and its IPC are identical to those of "Skylake,"

-snip-

The uncore component remains largely unchanged from the previous-generation, with support for DDR4 memory and PCI-Express gen 3.0. Use of these processors requires a new socket LGA1200 motherboard, they won't work on older LGA1151 motherboards.

Skylake came out in.... looks it up... August 2015.

 


 

Still no PCIe 4.0 to justify a new socket, 6 cores behind Ryzen, and four different variants of the i9, all called the 10900.

I mean, at least it boosts even higher, so the per core performance should be ahead of current gen Ryzen, which came out in July last year...

Edited by seon123
Something something

:)


On a MASSIVE side note - why does the base clock say "up to" - this REALLY worries me.


14 minutes ago, 5x5 said:

I'm starting to think the 220W i9 PL2 limit is going to be true

That would also significantly boost performance. If it is true, Intel will overtake AMD again. Yes, at the expense of power consumption, but if you are already buying a high-end machine you probably don't care as much about that. A good move from them IMO.


The whole base clock and FOUR potential boost clocks is going to really confuse folks in the market. AMD is at least more concise in this area.
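For anyone trying to keep the tiers straight, the stacked boost technologies can be sketched as a simple priority ladder. Everything below is an assumption for illustration: the tier names match Intel's marketing (Turbo Boost 2.0, Turbo Boost Max 3.0, Thermal Velocity Boost, all-core turbo), but the clocks, the 70°C TVB gate, and the selection logic are made-up demonstration values, not Intel's actual microcode.

```python
# Hypothetical sketch of how four advertised boost tiers could resolve to one
# effective clock. All frequencies and thresholds are illustrative assumptions.

TB2 = 5.1       # Turbo Boost 2.0: light-thread boost on any core
TBM3 = 5.2      # Turbo Boost Max 3.0: only on the binned "favored" cores
TVB = 5.3       # Thermal Velocity Boost: needs thermal headroom
ALL_CORE = 4.8  # all-core turbo; the base clock is the guaranteed floor below this

def effective_boost_ghz(active_cores: int, temp_c: float,
                        on_favored_core: bool = True) -> float:
    """Pick the highest boost tier whose conditions are met."""
    if active_cores <= 2:
        if temp_c <= 70:            # assumed TVB temperature gate
            return TVB
        return TBM3 if on_favored_core else TB2
    return ALL_CORE                 # heavier loads fall back to all-core turbo

print(effective_boost_ghz(2, 65))   # cool package, light load -> TVB tier
print(effective_boost_ghz(10, 80))  # all-core load -> all-core turbo tier
```

The ladder shape is the point: each advertised number only applies under its own conditions, which is why no single spec-sheet figure tells you the clock you'll actually see.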


Just now, zheega said:

That would also significantly boost performance. If it is true, Intel will overtake AMD again. Yes, at the expense of power consumption, but if you are already buying a high-end machine you probably don't care as much about that. A good move from them IMO.

Not really - when a 10-core Intel CPU is using more power than an AMD 16-core, all in the goal of reaching the performance of their 12-core, we have a massive issue. Do remember, Intel can't push frequency much more - the power increase is due to lower yields on the 10-core models and the +20% power draw of the 2 additional cores. After all, the uarch and node are the same as the 9900K's (literally, a 99% match)


2 minutes ago, zheega said:

That would also significantly boost performance. If it is true, Intel will overtake AMD again. Yes, at the expense of power consumption, but if you are already buying a high-end machine you probably don't care as much about that. A good move from them IMO.

FX 9590 is good by that logic then?

Main Rig :

Ryzen 7 2700X | Powercolor Red Devil RX 580 8 GB | Gigabyte AB350M Gaming 3 | 16 GB TeamGroup Elite 2400MHz | Samsung 750 EVO 240 GB | HGST 7200 RPM 1 TB | Seasonic M12II EVO | CoolerMaster Q300L | Dell U2518D | Dell P2217H | 

 

Laptop :

Thinkpad X230 | i5 3320M | 8 GB DDR3 | V-Gen 128 GB SSD |


7 minutes ago, Fatih19 said:

FX 9590 is good by that logic then?

FX9590 didn't overtake Intel in performance. This might. Or at least pull further away in games. Sure, I hate huge power draw as much as the next person, but holding higher boost clocks for longer will improve performance.

9 minutes ago, 5x5 said:

Not really - when a 10-core Intel CPU is using more power than an AMD 16-core all in the goal of reaching the performance of their 12-core, we have a massive issue. Do remember, Intel can't push frequency much more - the power increase is due to lower yields on the 10-core models and the +20% power draw of the 2 additional cores. After all, the uarch and node are same as the 9900K (literally, there is a 99% match)

Yes, I would guess that this is intended to keep boost for longer, not to increase max frequency, no? It should still help, especially since PL2 isn't just higher, but also lasts longer. 


 

Awareness is key. Never enough, even in the face of futility. Speak the truth as if you may never get to say it again. This world is full of ugly. Change it they say. The only way is to reveal the ugly. To change the truth you must first acknowledge it. Never pretend it isn't there. Never bend the knee.

 

Please quote my post in your reply, so that I will be notified and can respond to it. Thanks.


1 minute ago, zheega said:

FX9590 didn't overtake Intel in performance. This might. Or at least pull further away in games. Sure, I hate huge power draw as much as the next person, but holding higher boost clocks for longer will improve performance.

Yes, I would guess that this is intended to keep boost for longer, not to increase max frequency, no? It should still help, especially since PL2 isn't just higher, but also lasts longer. 

PL2 lasts a whopping 100 seconds IIRC. Sustaining 220W for longer would mean the cheapest boards cost $300
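For anyone unfamiliar with how PL1, PL2, and tau interact, here's a rough simulation of the budget mechanism. The real firmware tracks an exponentially weighted moving average of package power; the 125W / 250W / 100s figures below are just the numbers being thrown around in this thread, not confirmed spec, and the model is deliberately simplified.

```python
# Simplified model of Intel's turbo power budget: the CPU may draw up to PL2
# while a moving average of recent power stays under PL1; once that average
# reaches PL1, it throttles to the sustained limit. Figures are thread hearsay.

PL1, PL2, TAU = 125.0, 250.0, 100.0   # sustained W, short-term W, seconds
DT = 1.0                              # simulation step in seconds

def simulate(seconds: float, demand_w: float) -> list:
    """Per-step package power for a constant-demand workload."""
    ewma = 0.0                        # moving average of recent power draw
    trace = []
    for _ in range(int(seconds / DT)):
        allowed = PL2 if ewma < PL1 else PL1
        power = min(demand_w, allowed)
        ewma += (DT / TAU) * (power - ewma)   # EWMA with time constant tau
        trace.append(power)
    return trace

trace = simulate(300, demand_w=240)   # a hypothetical 240 W all-core load
burst_steps = trace.index(PL1)        # steps spent at full boost before throttling
```

With these numbers the simulated chip holds 240W for roughly 70-75 seconds before settling at 125W, which is in the same ballpark as the ~100-second PL2 window being discussed: tau sets how long the burst lasts, PL1 sets what it decays to.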


5 minutes ago, huilun02 said:

 

I love Steve. 60% of all games made since Spacewar! are optimized for single core. 😁



17 minutes ago, zheega said:

That would also significantly boost performance. If it is true, Intel will overtake AMD again. Yes, at the expense of power consumption, but if you are already buying a high-end machine you probably don't care as much about that. A good move from them IMO.

Intel will at least be competitive. People forget we need both Intel and AMD, or else the consumer loses to increased pricing. I don't like the higher power consumption either, but in a high-end gaming system the GPU still consumes a lot more power, and most people will be using a large air or water cooler.


31 minutes ago, 5x5 said:

Intel TDP at 125W? My god, I hope the power plants around the world are prepared :D :D

Can everyone please stop misunderstanding TDP. It doesn't stand for power consumption. 

 

28 minutes ago, 5x5 said:

I'm starting to think the 220W i9 PL2 limit is going to be true

Up to the system/mobo maker to decide where to set it. The Intel values are more like serving suggestions.

 

25 minutes ago, seon123 said:

Still no PCIe 4.0 to justify a new socket, 6 cores behind Ryzen, and four different variants of the i9, all called the 10900.

Not that difficult is it? With/without OC, with/without iGPU. AMD just don't give you the GPU in the first place.

 

12 minutes ago, Dellenn said:

The whole base clock and FOUR potential boost clocks is going to really confuse folks in the market. AMD is at least more concise in this area.

I'd argue Intel is better here, because people can have a better understanding of what the CPU might do in a given scenario. With Zen 2, it is pretty much a lucky dip what clock you might get depending on code and cooling.

 

17 minutes ago, 5x5 said:

On a MASSIVE side note - why does the base clock say "up to" - this REALLY worries me.

Because all clocks other than the base are "up to". That includes AMD. Remember all the perceived boost issues at Zen 2 launch?

 

12 minutes ago, 5x5 said:

Not really - when a 10-core Intel CPU is using more power than an AMD 16-core all in the goal of reaching the performance of their 12-core, we have a massive issue.

Putting aside the process and architecture differences, spreading a lightweight, highly scaling multi-threaded workload like Cinebench across more cores is more power efficient than running it on fewer cores. So the question is, does it slot in between the 8- and 12-core models? Even then, it will almost certainly be worse from the process alone, and Intel are operating at the upper limits of the voltage/frequency curve, following AMD's lead with Zen 2. The main difference is AMD only pushes the crap out of their CPUs for single thread, whereas Intel looks to be doing it at multicore as well.
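That efficiency point can be shown with a back-of-envelope dynamic power model, P ≈ k · cores · V² · f, where voltage has to rise with frequency. The V(f) line and the constant k below are made-up illustrative numbers, not measured silicon.

```python
# Why "wide and slow" beats "narrow and fast" at the same aggregate throughput:
# dynamic power scales with V^2 * f, and V climbs as f does. All numbers here
# are hypothetical, chosen only to make the shape of the trade-off visible.

def voltage(f_ghz: float) -> float:
    """Assumed linear V/f curve: ~0.9 V at 3 GHz up to ~1.35 V at 5 GHz."""
    return 0.9 + 0.225 * (f_ghz - 3.0)

def package_power(cores: int, f_ghz: float, k: float = 1.8) -> float:
    """Dynamic power model P ~ k * cores * V^2 * f (k folds in capacitance)."""
    v = voltage(f_ghz)
    return k * cores * v * v * f_ghz

# Same aggregate throughput (cores x GHz = 40), two different shapes:
wide_slow = package_power(10, 4.0)    # ten cores at 4.0 GHz
narrow_fast = package_power(8, 5.0)   # eight cores at 5.0 GHz
```

Even with generous assumptions, the ten-cores-at-lower-clocks configuration comes out well under the eight-cores-at-5-GHz one, which is exactly the curve being described: pushing the last few hundred MHz costs disproportionate power.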

TV Gaming system: Asus B560M-A, i7-11700k, Scythe Fuma 2, Corsair Vengeance Pro RGB 3200@2133 4x16GB, MSI 3070 Gaming Trio X, EVGA Supernova G2L 850W, Anidees Ai Crystal, Samsung 980 Pro 2TB, LG OLED55B9PLA 4k120 G-Sync Compatible
Streaming system: Asus X299 TUF mark 2, i9-7920X, Noctua D15, Corsair Vengeance LPX RGB 3000 8x8GB, Gigabyte 2070, Corsair HX1000i, GameMax Abyss, Samsung 970 Evo 500GB, Crucial BX500 1TB, BenQ XL2411 1080p144 + HP LP2475w 1200p60
Desktop Gaming system (to be retired): Asrock Z370 Pro4, i7-8086k, Noctua D15, G.Skill Ripjaws V 3200 2x8GB, Asus Strix 1080 Ti, NZXT E850 PSU, Cooler Master MasterBox 5, Optane 900p 280GB, Crucial MX200 1TB, Sandisk 960GB, Acer Predator XB241YU 1440p144 G-sync

Former Main system (to be retired): Asus Maximus VIII Hero, i7-6700k, Noctua D14, G.Skill Ripjaws 4 3333@2133 4x4GB, GTX 1650, Corsair HX750i, In Win 303 NVIDIA, Samsung SM951 512GB, WD Blue 1TB, Acer RT280k 4k60 FreeSync [link]
Gaming laptop: Lenovo Legion, 5800H, DDR4 3200C22 2x8GB, RTX 3070, 512 GB SSD, 165 Hz IPS panel


 


1 minute ago, porina said:

Can everyone please stop misunderstanding TDP. It doesn't stand for power consumption. 

-snip-

Because all clocks other than the base are "up to". That includes AMD. Remember all the perceived boost issues at Zen 2 launch?

I agree, but I was talking about base - Intel, for the first time, are listing base as "up to" as well. That worries me that base clock might not mean anything anymore.

Also, watching Steve's video, PL2 is set at 250W; 220W is reportedly the result people got in early testing.


2 minutes ago, porina said:

Can everyone please stop misunderstanding TDP. It doesn't stand for power consumption. 

 

 

For the love of god yes!

From Wikipedia:

Quote

The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component (often a CPU, GPU or system on a chip) that the cooling system in a computer is designed to dissipate under any workload.

 


1 minute ago, 5x5 said:

I agree but - I was talking about base - Intel, for the first time, are listing base as "up to" as well. That worries me that base clock might not meaning anything anymore.

I missed that detail. Maybe whoever created the slide got a bit copy paste happy and put it everywhere a clock was mentioned. Basically if you provide at least the TDP amount of cooling you should get at least the base clock. At least that's the way it has been. 



I'm more concerned with the fact they're pricing the 2c/2t Celeron at around $50 and the 2c/4t Pentium Gold is now in the $60-80 range. I think they're going to be kneecapped by AMD here.



Just now, porina said:

I missed that detail. Maybe whoever created the slide got a bit copy paste happy and put it everywhere a clock was mentioned. Basically if you provide at least the TDP amount of cooling you should get at least the base clock. At least that's the way it has been. 

Yeah, but I'm still slightly worried - you see, "up to" is legal-loophole wording that I hate to see.

Anyhow, TDP values aside, based on the 9900K and its predecessors, I sadly expect the ~200W power draw under an all-core load to be accurate.


I actually like one thing about these chips and that's the 40 PCIe lanes!

 

Edit:

nvm, I can't read "Platform Lanes" properly :P


