
Intel Core i9-11900(k) + i7-11700(k/kf) 8c16t Rocket Lake Desktop CPU Benchmarks and Pricing Leaked: (Update #8)

3 hours ago, BiG StroOnZ said:

Got another noice Rocket Lake update (will add to OP) ~

 

Intel Rocket Lake Core i9-11900K outperforms Ryzen 9 5950X in leaked Ashes benchmark. Intel could steal back its gaming performance crown from AMD, according to the leaked benchmark.

 

 

Source 1: https://www.notebookcheck.net/The-Intel-Core-i9-11900K-beats-the-Ryzen-9-5950X-in-new-Ashes-of-the-Singularity-benchmarks.509533.0.html

Source 2: https://www.techradar.com/news/intel-could-steal-back-its-gaming-performance-crown-from-amd-according-to-leaked-benchmark

Curious, were these two results produced by the same person? If so, it raises the question of why they would stick with the same GPU but not the same memory, unless they were using memory that simply wouldn't POST on Ryzen. Still, using different memory speeds would skew an even comparison and would be pretty weak testing methodology. If these are two different results from two different testers, I would say it's impossible to compare the two. The fact that we only have a single 5950X score submitted on the Crazy_1080p preset for AotS with a 2080 Ti means we have extremely limited data to go on as well.

 

I am hoping Intel somehow pulled off an IPC boost despite rehashing the same Skylake architecture, but having owned every iteration of Skylake all the way up to my 8700K, I am not going to hold my breath. It will be interesting to see the results once we get a proper apples-to-apples test, rather than having to compare scores from two different testers with different methodologies.

 

In case anyone wants to question why this is important, here is a result of the same Crazy_1080p preset with a GTX 1070 where the CPU framerate matches this alleged Core i9-11900K: https://www.ashesofthesingularity.com/benchmark#/benchmark-result/3ef43926-c850-486c-b027-87a3e6325f3a


My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


5 hours ago, porina said:

Doubt it'll ever go mainstream and take over silicon. It has its niche use cases but for it to be economic over silicon... just can't see it happening unless something really radical happens.

I don't think we'll start truly investing money and effort into other materials unless there's some major breakthrough or we get really desperate.

We were piling on cores; now we're working on IPC again. Eventually IPC will probably slow down again, and then we'll probably move on to more cores again, or to dedicating parts of the silicon to specific purposes.

Silicon will probably sustain us till 2030? Hard to tell. A lot can change. Also, who knows what architectures will look like by then, since ARM and RISC-V are making moves.

"If a Lobster is a fish because it moves by jumping, then a kangaroo is a bird" - Admiral Paulo de Castro Moreira da Silva

"There is nothing more difficult than fixing something that isn't all the way broken yet." - Author Unknown


Intel Core i7-3960X @ 4.6 GHz - Asus P9X79WS/IPMI - 12GB DDR3-1600 quad-channel - EVGA GTX 1080ti SC - Fractal Design Define R5 - 500GB Crucial MX200 - NH-D15 - Logitech G710+ - Mionix Naos 7000 - Sennheiser PC350 w/Topping VX-1


Oh, this is getting good! I don't care what brand is in my machine; I'm buying whatever the heck gives me the best price to performance. If Intel starts churning out better chips than AMD, then I'll switch back without hesitation. Finally starting to see that Intel engineering money being put to work.


I'm not sure what to make of the Ashes of the Singularity benchmarks, considering that the 9900K and 5600X each land both slightly below and slightly above the 11700KF, while the 10900K is below all of them.


8 hours ago, porina said:

Doubt it'll ever go mainstream and take over silicon. It has its niche use cases but for it to be economic over silicon... just can't see it happening unless something really radical happens.

The raw material is something like 1000x as expensive as silicon. The thing is, though, silicon is an abundant element; it's the processing of the silicon that makes it expensive. I don't know how much the actual produced cost of a microchip would rise because of it. Someone would have to think of something clever, but that has happened before. There are a couple of other materials that can be used to make computer chips. Quantum computing was being looked at as a possible replacement, but it appears to be niche as well. My point, I guess, is that there are still directions to grow in when we exhaust SOI.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


7 hours ago, BiG StroOnZ said:

 

According to current pricing information the i9-11900k would actually be $369 cheaper than Ryzen 9 5950X...

One would hope so; it's got half the threads. If there's no Intel tax, though, the math changes completely. Ryzen 2 was never actually better than Coffee Lake; it was just cheaper while being gud'nuff. There are vanishingly few workloads that need more than 8c/16t and don't require enterprise-level hardware. Intel could have crushed AMD much earlier if they'd merely competed on price.
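To put the "half the threads" point in rough numbers, here's a quick back-of-the-envelope sketch. The $799 is the 5950X launch MSRP; the 11900K figure is simply that minus the leaked $369 gap quoted above, so treat both as assumptions rather than confirmed retail pricing:

```python
# Back-of-the-envelope price-per-thread comparison.
# Prices are assumptions: $799 is the 5950X launch MSRP, and the
# 11900K figure is just MSRP minus the leaked $369 gap - neither is
# a confirmed retail price.
cpus = [
    {"name": "Ryzen 9 5950X",  "price": 799,       "threads": 32},
    {"name": "Core i9-11900K", "price": 799 - 369, "threads": 16},
]

for cpu in cpus:
    per_thread = cpu["price"] / cpu["threads"]
    print(f"{cpu['name']}: ${cpu['price']} / {cpu['threads']} threads "
          f"= ${per_thread:.2f} per thread")
```

On that naive metric the two land within a couple of dollars per thread of each other, so whether the leaked price looks attractive depends entirely on whether your workloads can actually use the extra 16 threads.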

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


13 hours ago, porina said:

Doubt it'll ever go mainstream and take over silicon. It has its niche use cases but for it to be economic over silicon... just can't see it happening unless something really radical happens.

You can use 90%+ carbon materials such as graphite and diamond; they are more conductive than silicon, so you should get better performance.

Diamonds are pretty expensive, but it would be such a cool idea :D

 

Here is a carbon wafer (featuring the RV16X-NANO microprocessor):


A PC Enthusiast since 2011
AMD Ryzen 7 5700X@4.65GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2085MHz Memory 5000MHz
Cinebench R23: 15669cb | Unigine Superposition 1080p Extreme: 3566

2 hours ago, Vishera said:

You can use 90%+ carbon materials such as graphite and diamond; they are more conductive than silicon, so you should get better performance.

Just because a material is electrically conductive does not mean that it's suitable for use in electronics. Pure graphene (a single layer of graphite) for example, is not suitable for electronics as you can't make a transistor out of it - it doesn't have a band gap. If you tried to build a transistor out of it, it would be impossible to switch off. In many ways it's more akin to a metal than a semiconductor (hence its status as a "semimetal").

 

Current research is trying to create an artificial band gap through doping, but nobody has managed to create one suitable for room-temperature electronics yet (there is a candidate material - Me-graphene - that's been getting some attention in the last few months, but afaik all research done on it to date has been through molecular dynamics (MD) simulations rather than physical experiments. I'm not sure if anyone has ever actually synthesised the material.)

 

As such, most uses of graphene in transistors today (in labs mostly) are as an enhancement material - make the silicon transistor better - rather than as an outright silicon replacement. In particular, the graphene is generally used as the channel material - the bit of the transistor where current flows when the transistor is "on" - but the silicon is still required to create the gate itself. By making the channel 1-atom thick, the resistance of the transistor is essentially eliminated. (The transistor never completely turns off still, but it has an on/off ratio of ~5, meaning it's possible - albeit challenging - to use them in electronics.) This enhancement technique is what's generally considered to be the first step towards a silicon replacement.

 


 

 

Graphene FET (GFET) configurations - the graphene enhancement material is shown in red.

a) top gate GFET, b) back gate GFET, and c) dual gate GFET. Source

 

Quote

Here is a carbon wafer (featuring the RV16X-NANO microprocessor):

*a silicon wafer covered in carbon nanotubes. As described above, the nanotubes here are used as the channel material, rather than to fully construct the transistor. The paper on the RV16X-NANO hence refers to them as "complementary carbon nanotube transistors". The processor is not entirely constructed from carbon nanotubes.

 

*cue Riley going crazy*

CPU: i7 4790k, RAM: 16GB DDR3, GPU: GTX 1060 6GB


On 12/25/2020 at 1:20 AM, tim0901 said:

Just because a material is electrically conductive does not mean that it's suitable for use in electronics. Pure graphene (a single layer of graphite) for example, is not suitable for electronics as you can't make a transistor out of it - it doesn't have a band gap. If you tried to build a transistor out of it, it would be impossible to switch off. In many ways it's more akin to a metal than a semiconductor (hence it's status as a "semimetal").

 

Current research is trying to create an artificial band gap through doping, but nobody has managed to create one suitable enough for room-temperature electronics yet (there is a candidate material - Me-graphene - that's been getting some attention in the last few months, but afaik all research done on it to date has been through MD simulations rather than physical experiments. I'm not sure if anyone has ever actually synthesised the material.)

 

As such, most uses of graphene in transistors today (in labs mostly) are as an enhancement material - make the silicon transistor better - rather than as an outright silicon replacement. In particular, the graphene is generally used as the channel material - the bit of the transistor where current flows when the transistor is "on" - but the silicon is still required to create the gate itself. By making the channel 1-atom thick, the resistance of the transistor is essentially eliminated. (The transistor never completely turns off still, but it has an on/off ratio of ~5, meaning it's possible - albeit challenging - to use them in electronics.) This enhancement technique is what's generally considered to be the first step towards a silicon replacement.

Thank you for your reply, it's very informative!

What do you think about a wafer made of diamond? It seems like it's possible.

On 12/25/2020 at 1:20 AM, tim0901 said:

*a silicon wafer covered in carbon nanotubes. As described above, the nanotubes here are used as the channel material, rather than to fully construct the transistor. The paper on the RV16X-NANO hence refers to them as "complementary carbon nanotube transistors". The processor is not entirely constructed from carbon nanotubes.

I see. I read some interviews with people working on the R&D side of this project. They mentioned that IBM is involved, and that it's possible we will see a wafer fully made of carbon nanotubes in a few years; it's not a question of if, but of when. They also talked about some of the obstacles they encountered and how they solved them.

On 12/25/2020 at 1:20 AM, tim0901 said:

*cue Riley going crazy*

That's the first thing I thought about; Riley gets excited every time carbon nanotubes are mentioned.

 

Also, I found out that silicon carbide wafers are a thing:


It makes sense: the superior performance that carbon offers, combined with the familiarity of silicon, seems like a good in-between solution until full carbon wafers are a thing.

A PC Enthusiast since 2011
AMD Ryzen 7 5700X@4.65GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2085MHz Memory 5000MHz
Cinebench R23: 15669cb | Unigine Superposition 1080p Extreme: 3566

On 12/25/2020 at 11:42 AM, Vishera said:

Diamonds are pretty expensive but it would be such a cool idea

Industrial diamonds are actually pretty much worthless, and easy to spot if you tried to pass them off as gemstones for jewelry. I think the bigger problem would be coming up with a viable way to use them in a fabrication process, as the most common uses for them currently are rather crude things like diamond-tipped drill bits (diamond powder embedded in tool bits).


On 12/24/2020 at 11:44 AM, jagdtigger said:

Knowing intel its a monolithic design with pretty horrific yields so highly doubt it.

Then again, the 5800X is pretty "monolithic". It's made of a single CCX (which needs to be perfect, since the latest CCXs consist of 8 cores and the 5800X comes with just one) and a controller. Compare that to the Ryzen 1800X with the same total core count, which had two quad-core CCXs and one controller. If we take out the controller, the 5800X is effectively monolithic. The off-die controller is really there just for the sake of flexibility; sticking it into the CPU die wouldn't really change much or add special complexity. That's reserved for the actual cores and the massive caches that take up most of the die space.


31 minutes ago, RejZoR said:

Then again, 5800X is pretty "monolithic". It's made of single CCX

It's focused on gaming so no surprise there, and I wouldn't call it monolithic, but whatever. AMD is still in a better position because of the modular design: even if there are a few imperfect dies on the wafer it's not a total waste, because they can reuse them in other products. AFAIK Intel can't really do that.


46 minutes ago, RejZoR said:

Then again, 5800X is pretty "monolithic". It's made of single CCX (which needs to be perfect as latest CCX's consist of 8 cores and 5800X comes with just one CCX) and a controller. Compared to Ryzen 1800X with same total core count, that one had 2 quad core CCXs and 1 controller. If we take out the controller, 5800X is actually monolithic. Out of die controller is really there just for the sake of flexibility, sticking it into the CPU die wouldn't really change much or add special complexity. That's reserved to actual cores and massive caches that take up most of the die space.

Defects have more to do with die area though; the I/O die is larger (due to it being 12nm) and it contains the memory controller, PCIe controller and IF links. Dual CCX per CCD benefited mostly the lower core count SKUs, as you could recover a CCD with defects into a usable product if only 1 CCX was required and the defects were contained to a single CCX. Otherwise, for a single-CCD 8-core SKU there's no benefit to 1 or 2 CCXs in relation to defects for that specific product.

 

Zen 2 CCD: 74mm2

Zen 2 IOD: 125mm2

Zen 2 Mobile: 156mm2

 

Zen 2 Mobile has significantly higher defect rates compared to the Zen 2 CCD and IOD.

 

Also the EPYC IOD is 416mm2, it's really damn big lol.
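To illustrate why die area dominates defect rates, here's a minimal sketch using the simple Poisson yield model. The defect density is an arbitrary placeholder (and it ignores that these dies are on different process nodes), so only the relative ordering between the die sizes above is meaningful:

```python
import math

# Toy Poisson yield model: yield = exp(-die_area * defect_density).
# DEFECT_DENSITY is a made-up placeholder (defects per cm^2), used only
# to show how yield falls off with die area - it is NOT a real foundry
# figure, and a single value glosses over the 7nm vs 12nm difference.
DEFECT_DENSITY = 0.2  # defects per cm^2 (assumed)

dies_mm2 = {
    "Zen 2 CCD":    74,
    "Zen 2 IOD":    125,
    "Zen 2 Mobile": 156,
    "EPYC IOD":     416,
}

for name, area_mm2 in dies_mm2.items():
    yield_estimate = math.exp(-(area_mm2 / 100.0) * DEFECT_DENSITY)
    print(f"{name:<13} {area_mm2:>3} mm^2 -> ~{yield_estimate:.0%} defect-free dies")
```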


11 hours ago, RejZoR said:

Then again, 5800X is pretty "monolithic". It's made of single CCX (which needs to be perfect as latest CCX's consist of 8 cores and 5800X comes with just one CCX) and a controller. Compared to Ryzen 1800X with same total core count, that one had 2 quad core CCXs and 1 controller. If we take out the controller, 5800X is actually monolithic. Out of die controller is really there just for the sake of flexibility, sticking it into the CPU die wouldn't really change much or add special complexity. That's reserved to actual cores and massive caches that take up most of the die space.

I'm sure I've mixed up usage in the past too in some cases, but in the context of a CPU, monolithic does strictly mean one die. You can have two CCXs and still be monolithic, with an APU being a better example of that. The logical structure doesn't really apply.

 

10 hours ago, jagdtigger said:

Its focused on gaming so no surprise there, and i wouldnt call it monolithic but whatever. AMD still in a better position because of the modular design, even if there are a few not perfect dies on tha wafer its not a total waste because they can re-use them in other products. AFAIK intel cant really do that.

Both AMD and Intel can disable parts if enough other parts are functional for a product level. So for example, Intel can and does disable cores, cache, and iGPU to make alternative products. AMD's advantage there is in the separation of the CCD and IOD, but within the CCD itself the considerations are not much different.

 

10 hours ago, leadeater said:

Dual CCX per CCD benefited mostly the lower core count SKUs as you could recover a CCD with defects in to a usable product if only 1 CCX was required and the defect were contained to a single CCX. Otherwise for a single CCD 8 core SKU there's no benefit to 1 or 2 CCX in relation to defects for that specific product.

When it comes to yield I don't think the old 4-core CCX really gave a benefit over the new 8-core CCX. With Zen 2, you either run a single CCX, or you run two with a balanced number of cores, and we saw both variations offered with 4-core products.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


1 hour ago, porina said:

When it comes to yield I don't think the old 4-core CCX really gave a benefit over the new 8-core CCX. With Zen 2, you either run a single CCX, or you run two with a balanced number of cores, and we saw both variations offered with 4-core products.

True, but this was more a comment about 8-core products. There's still a non-zero chance of defects making the new 8-core CCX unusable where the old dual 4-core CCX layout might still have been salvageable, although I think that is extremely unlikely. If I were to guess, AMD improved the built-in redundancy in the dies to deal with defects, which made them more comfortable changing to the 8-core CCX. The previous 4-core CCX was stated to be that way for modularity and to benefit things like defects, so I would think there was a good reason to do it that way when they did. It's not like they couldn't have done 8 from the start.

 

When you look at the die details you can clearly see, even at this wider viewpoint, that AMD has thought about ways to deal with as many problems and configurations as possible. In each core the L2 cache is split into two 256KB slices and the L3 cache is split into four 1MB slices, and these can be disabled as required. Forgive me if I am remembering incorrectly, but is this how the 3500X came to be? That China-only CPU with half the active L3 cache.


 

So I don't really know what changed between Zen 2 and Zen 3 to make the 8-core CCX the better choice now. It could simply be that nothing was actually changed to allow it, and that yields are now good enough that the benefits of doing it make sense. I just don't have insight into AMD's thinking; all I know for sure is that it was a 4-core CCX before for a reason and now it's an 8-core CCX, so either something changed or a deciding factor from the past isn't as applicable as it was thought to be back then.


Can't wait to see how these actually perform, it's been a long time since Intel has had any serious IPC gains.

 

I'm also intrigued to see how Alder Lake turns out; on paper it sounds pretty good, supposedly a massive IPC leap (up to 50%?) over Skylake. However, I'm expecting it to be plagued with software issues due to its big.LITTLE design; this will be a first on Windows, I think? Plus it'll likely be expensive as hell, as it's apparently using both DDR5 and PCIe 5.0. That insane 1700-pin socket won't help motherboard prices either.

Dell S2721DGF - RTX 3070 XC3 - i5 12600K


10 hours ago, leadeater said:

If I were to guess AMD improved the inbuilt redundancy in to the dies to deal with defects which made them more comfortable changing to the 8 core CCX, the previous 4 core CCX was stated to be like that for modularity and to benefit things like defects, so I would think there was a good reason to do it that way when they did it. It's not like they couldn't have done 8 from the start.

In interactions elsewhere, I'm reminded that bigger cache tends to mean slower cache. An L3 cache shared between 4 cores is easier and more predictable than a bigger L3 cache shared between 8 cores. AMD are improving their architecture design each generation; they don't have to make big jumps every time. I think that was more of a factor than yield considerations. It may be that they didn't want to put the effort into optimising a bigger cache at the time Zen 2 was being designed.

 

10 hours ago, leadeater said:

Forgive me if I am remembering incorrectly but this is how the 3500X came to be? That China only CPU with half the active L3 cache than normal.

I had to look it up, as I didn't think the 3500X was different in cache compared to its 6-core siblings. That seems to be the case: 32MB of L3. The headline difference was the lack of SMT support. It even has the same base clock as the 3600, but gives up 100 MHz of rated boost clock. I don't think the SMT-related logic is a significant part of the design; I have a figure in my head of mid-single-digit % from when HT was introduced by Intel.

 

2 hours ago, illegalwater said:

However I'm expecting it to be plagued with software issues due to it using a big.LITTLE design, this will be a first on Windows I think? Plus it'll likely be expensive as hell as it's apparently using both DDR5 and PCIe 5.0. That insane 1700 pin socket won't help motherboard prices either.

It will be Intel's 2nd, the first being Lakefield. Hopefully Intel and Microsoft are working together on it so it will be ready at OS level by the time it goes on sale, although it remains to be seen how general software will cope with it. Worst case, just get the version without the little cores, as IMO for most performance tasks the big cores will dominate anyway.

 

The socket shouldn't really be any problem. We've had even higher pin-count sockets for a long time, even if not on the mainstream platform. While it is a logical time to go DDR5, I'm still awaiting official confirmation of that. I can't get excited over PCIe 5.0 for the same reason I don't really care about 4.0 today, if 5.0 is even a thing; I've not been keeping up on the rumours in that area.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


6 hours ago, porina said:

In interactions elsewhere, I'm reminded that bigger cache tends to mean slower cache. An L3 cache shared between 4 cores is easier and more predictable than a bigger L3 cache shared between 8 cores.

I do wonder if the four 1MB slices per core also help with that, rather than it being a single 4MB block; I don't know.

 

6 hours ago, porina said:

I had to look it up as I didn't think 3500X was different in cache compared to its 6 core siblings. That seems to be the case with 32MB of L3. The headline difference was the lack of SMT support.

It's the 3500, not the 3500X; not surprised I got it wrong lol. The 3500 only has 16MB of L3 cache in total rather than 32MB like the 3500X, which means two of the four 1MB slices in each core are disabled. Both are single-CCD, so that is the only possible way to have half the cache.


 

https://en.wikichip.org/wiki/amd/ryzen_5/3500

https://en.wikichip.org/wiki/amd/ryzen_5/3500x
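A trivial sketch of the slice arithmetic, assuming the four 1MB L3 slices per core described above apply across all eight core positions in the CCD (that the L3 of disabled cores stays active is my assumption here, but it's the only way the 3600's 32MB total works out):

```python
# Sanity-checking the L3 arithmetic from the post above: a Zen 2 CCD
# has eight core positions, each with four 1 MB L3 slices. Disabling
# two of the four slices per position halves the total, matching the
# 16 MB (3500) vs 32 MB (3500X / 3600) figures on WikiChip.
CORE_POSITIONS_PER_CCD = 8
SLICES_PER_CORE = 4
SLICE_SIZE_MB = 1

full_l3 = CORE_POSITIONS_PER_CCD * SLICES_PER_CORE * SLICE_SIZE_MB
half_l3 = CORE_POSITIONS_PER_CCD * (SLICES_PER_CORE - 2) * SLICE_SIZE_MB

print(f"All slices enabled:            {full_l3} MB L3 per CCD")  # 32 MB
print(f"2 of 4 slices disabled / core: {half_l3} MB L3 per CCD")  # 16 MB
```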


Smol updoot (will update OP):

 

Geekbench i7-11700k Benchmark Result on Z490 AORUS MASTER 

 

Quote

11th Gen Intel Core i7-11700K
Processor: 8 Cores, 16 Threads
Genuine Intel Family 6 Model 167 Stepping 1
Base Frequency: 3.60 GHz

Geekbench 5 Score:
Single-Core Score: 1807
Multi-Core Score: 10673

Source: https://browser.geekbench.com/v5/cpu/5567437

Source 2: https://www.notebookcheck.net/Short-lived-happiness-for-Zen-3-Intel-Rocket-Lake-S-Core-i9-11900-QS-up-to-7-lead-over-the-Ryzen-7-5800X-and-up-to-33-over-the-Core-i9-10900K-Core-i9-11900-ES2-similar-to-9900K-10700K.512255.0.html

 

For reference here are some recent Ryzen 7 5800X results in the same benchmark. After looking through a decent amount of them, I would say at worst it is a tie between the 5800X and 11700k, at best the 11700k is slightly ahead on average. 
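For anyone who wants to redo that eyeball comparison themselves, here's a minimal sketch of the averaging involved. The 5800X numbers in it are placeholders I made up purely for illustration, not actual submissions, so swap in real entries from the Geekbench browser before reading anything into the output:

```python
from statistics import mean

# The leaked i7-11700K result from the Geekbench link above.
i7_11700k = {"single": 1807, "multi": 10673}

# Placeholder Ryzen 7 5800X submissions - illustrative values only,
# NOT real browser results. Replace with actual entries from
# browser.geekbench.com.
r7_5800x_samples = [
    {"single": 1650, "multi": 10400},
    {"single": 1680, "multi": 10550},
    {"single": 1620, "multi": 10250},
]

avg_single = mean(s["single"] for s in r7_5800x_samples)
avg_multi = mean(s["multi"] for s in r7_5800x_samples)

print(f"5800X averages: {avg_single:.0f} single / {avg_multi:.0f} multi")
print(f"11700K vs 5800X (single): {100 * (i7_11700k['single'] / avg_single - 1):+.1f}%")
print(f"11700K vs 5800X (multi):  {100 * (i7_11700k['multi'] / avg_multi - 1):+.1f}%")
```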


16 hours ago, porina said:

In interactions elsewhere, I'm reminded that bigger cache tends to mean slower cache. An L3 cache shared between 4 cores is easier and more predictable than a bigger L3 cache shared between 8 cores. AMD are improving their architecture design each generation, they don't have to make big jumps every time. I think that was more a factor than yield considerations. It may be that they didn't want to put the effort into optimising a bigger cache at the time Zen 2 was being designed. 

 

I had to look it up as I didn't think 3500X was different in cache compared to its 6 core siblings. That seems to be the case with 32MB of L3. The headline difference was the lack of SMT support. It is even same base clock as 3600, but lacks 100 MHz rated boost clock. I don't think the "SMT" related logic is a significant part of the design, I have a figure in my head of mid single digit % when HT was introduced by Intel. 

 

It will be Intel's 2nd, the first being Lakefield. Hopefully Intel and Microsoft are working together on it so it will be ready at OS level by the time it goes on sale, although it remains to be seen how general software will cope with it. Worst case, just get the version without the little cores, as IMO for most performance tasks the big cores will dominate anyway.

 

The socket shouldn't really be any problem. We've had even higher pin count sockets for a long time, even if not in mainstream platform. While it is a logical time to go DDR5 I'm still awaiting official confirmation of that. Can't get excited over PCIe 5.0 for the same reason I don't really care about 4.0 today, if 5.0 is a thing as I've not been keeping up on the rumours in that area.

I didn't use to really see the advantage of PCIe 4.0. It's starting to become really useful on ATX and micro-ATX boards though, simply because it lets the GPU's lane count be halved for a second card while still allowing a high-end GPU to run at full speed. The 2080 Ti could saturate 8 PCIe 3.0 lanes plus a little when running flat out; one assumes a 3080 could saturate even more. It still wouldn't matter for ITX, or for computers where only one PCIe slot is utilized, though.
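The lane math behind that is easy to sanity-check. A minimal sketch using the standard per-lane rates after 128b/130b encoding (these are theoretical spec numbers, not measured throughput):

```python
# Theoretical PCIe link bandwidth by generation and lane count.
# Per-lane rates are the usual spec-sheet figures in GB/s after
# 128b/130b encoding; real-world throughput lands somewhat below.
GB_PER_SEC_PER_LANE = {
    "PCIe 3.0": 0.985,  # 8 GT/s per lane
    "PCIe 4.0": 1.969,  # 16 GT/s per lane
    "PCIe 5.0": 3.938,  # 32 GT/s per lane
}

for gen, per_lane in GB_PER_SEC_PER_LANE.items():
    for lanes in (4, 8, 16):
        print(f"{gen} x{lanes:<2}: ~{per_lane * lanes:5.1f} GB/s")
```

The relevant takeaway for the point above is that a PCIe 4.0 x8 link offers about the same theoretical bandwidth as PCIe 3.0 x16, so halving the GPU's lanes to feed a second card costs nothing on paper on a 4.0 platform.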

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


39 minutes ago, Bombastinator said:

I didn’t used to really see the advantage of pcie4.  It’s starting to become really useful on atx and mini atx boards though simply because it allows the pcie count to be halved by using a second board while still allowing a high end GPU to run at full speed. The 2080ti could saturate 8 pcie3 lanes plus a little running at full speed.  One assumes a 3080 could saturate even more.  Still wouldn’t matter for ITX or computers where only one pcie slot was utilized though.

Looking back at the techpowerup testing, they claimed a 1% difference on a 3080 between 3.0 and 4.0, at 1080p and 4k. It's irrelevant unless you're a competitive benchmarker. Many other things in the system could make as much difference.

 

https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-pci-express-scaling/27.html

 

Maybe if RTX IO (or the generic equivalent for AMD) gets used in more future games, I might change my mind then, but I'll likely have a newer system by the time that is in any way relevant.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


On 12/23/2020 at 6:43 AM, Kisai said:

Just based on what is seen here, the K parts likely will need liquid cooling just like the Ryzen 9 x950 parts. That much TDP is just going to result in a lot of thermal throttle without a good cooler.

 

The i7 parts do not require liquid cooling.

Since when does any Ryzen CPU "require" liquid cooling?

A decent air cooler like a D15, Dark Rock Pro 4 or Freezer 50 will keep even a 5950X cool with ease.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


38 minutes ago, porina said:

Looking back at the techpowerup testing, they claimed a 1% difference on a 3080 between 3.0 and 4.0, at 1080p and 4k. It's irrelevant unless you're a competitive benchmarker. Many other things in the system could make as much difference.

 

https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-pci-express-scaling/27.html

 

Maybe if RTX IO (or the generic equivalent for AMD) gets used in more future games, I might change my mind then, but I'll likely have a newer system by the time that is in any way relevant.

Is that a 3080 running at x8 PCIe or x16 PCIe though? A 3080 still can't saturate PCIe 3.0 x16, so the 1% difference is odd; if the 3080 is running an actual x16 link there shouldn't be any difference at all. That there is even 1% is interesting. The issue only really comes up when there is more than one expansion card and the lanes get cut in half. This is why I think an X570 could beat a B450 in some but not all configurations, but has a LOT more trouble beating B550. With B550, everything but the first PCIe slot and the #1 NVMe runs through the chipset at PCIe 3.0, which is still way more than fine for most expansion card uses. (It also means the #2 NVMe is tons slower; two NVMe drives is one of the few configurations where X570 can beat B550, along with some high-bandwidth cards that actually saturate the chipset link.)

Edited by Bombastinator

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


6 minutes ago, Bombastinator said:

Is that a 3080 running at 8x pcie or 16x Pcie though?

I provided the link so you can look up any test details as you desire.

 

6 minutes ago, Bombastinator said:

 A 3080 still can’t saturate x16 Pcie3.0, so the 1% difference is odd.

You don't have to saturate something for it to make a difference if you change its speed. Also that 1% will be a rounded value, not that increased precision would be meaningful in this context.

 

6 minutes ago, Bombastinator said:

The issue only even comes up when there is more than one expansion card and the lanes get cut in half.

So basically you're thinking if you're running two PCIe devices off the CPU (excluding the NVMe lanes), the previous link was showing a further 3% drop. To me, that's still well in the doesn't make any practical difference territory.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


6 minutes ago, porina said:

I provided the link so you can look up any test details as you desire.

 

You don't have to saturate something for it to make a difference if you change its speed. Also that 1% will be a rounded value, not that increased precision would be meaningful in this context.

 

So basically you're thinking if you're running two PCIe devices off the CPU (excluding the NVMe lanes), the previous link was showing a further 3% drop. To me, that's still well in the doesn't make any practical difference territory.

It should be a lot more than 3%, more like 33%. If it's not, it implies something is up.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.

