
Intel Core i9-13900K 'Raptor Lake' CPU breaks 40,000 points in Cinebench R23 with unlimited power and 5.8GHz clock

Summary

Leaker OneRaichu has shared new benchmark results featuring the flagship Core i9-13900K 'Raptor Lake' CPU. The 24-core, 32-thread processor scores 2,290 points in the Cinebench R23 single-core test and 35,693 points in the multi-core test at default power and clock settings. With the power cap removed, the same CPU scores 40,616 points in multi-core, an improvement of roughly 14% over default settings.
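For anyone who wants to sanity-check that ~14% figure, here's the arithmetic as a minimal Python sketch, using only the two scores from the leak:

# Uplift from removing the power cap, using the scores quoted above.
default_mt = 35_693    # Cinebench R23 multi-core, default power settings
unlimited_mt = 40_616  # Cinebench R23 multi-core, power cap removed
print(f"uplift: +{(unlimited_mt / default_mt - 1) * 100:.1f}%")  # -> +13.8%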

 

[Screenshot: Cinebench R23 multi-core result, i9-13900K at default power]

[Screenshot: Cinebench R23 multi-core result, i9-13900K at unlimited power]

[Screenshot: i9-13900K at unlimited power]

 

Quotes


If we were to compare multi-core performance at the same power settings, the i9-13900K beats any current flagship model, Intel or AMD:

  • Intel Core i9-13900K vs Core i9-12900K at “Unlimited Power” 350W – 48% Faster
  • Intel Core i9-13900K vs Core i9-12900K at “Limited Power” 250W – 30% Faster
  • Intel Core i9-13900K vs Ryzen 9 5950X at “Unlimited Power” 350W – 67% Faster
  • Intel Core i9-13900K vs Ryzen 9 5950X at “Limited Power” 250W – 48% Faster

This sample was tested with the AIDA64 Stability Test while an HWiNFO window recorded the behavior of each core. It shows that 2 of the 8 Raptor Cove cores reached 5.8GHz at some point during the test. The remaining Performance cores ran at 5.5GHz, while all 16 Efficient cores held a stable 4.3GHz clock.

 

Intel is set to announce its Raptor Lake CPUs next month at the Innovation event on September 28th. However, availability is not expected until October 17th.

 

My thoughts

This is some seriously good multi-core performance from the 13900K. It's only 4% slower than a Threadripper 3975WX and 16% slower than a Threadripper 5975WX, both 32c/64t CPUs that usually score around 42,300 and 48,400 points in Cinebench R23, respectively. It even ends up about 1% faster than the Threadripper 5965WX, a 24c/48t CPU that scores about 40,200 points; here the 13900K scores 40,616. The only downside I see in these results is power consumption: with the power cap removed it drew 314W and hit 90°C on a 360mm AIO during the AIDA64 stability test.
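If anyone wants to rerun the comparison with their own reference numbers, here's the math as a small Python sketch; the Threadripper figures are the ballpark scores quoted above, not verified results:

# Relative-performance math behind the Threadripper comparisons above.
scores = {
    "i9-13900K (unlimited)": 40_616,
    "TR 3975WX (32c/64t)":   42_300,  # ballpark figure from the post
    "TR 5975WX (32c/64t)":   48_400,  # ballpark figure from the post
    "TR 5965WX (24c/48t)":   40_200,  # ballpark figure from the post
}
base = scores["i9-13900K (unlimited)"]
for name, score in scores.items():
    delta = (base / score - 1) * 100
    print(f"{name:24s} {score:6d}  13900K is {delta:+.1f}%")
# -> -4.0% vs 3975WX, -16.1% vs 5975WX, +1.0% vs 5965WX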

 

Sources

https://videocardz.com/newz/core-i9-13900k-breaks-40k-points-in-cinebench-r23-with-unlimited-power-and-5-8-ghz-clock

 


Cooler manufacturers are going to have to step up their game



PPT of 350W?? Who is going to use that?? I can't even imagine the cooling you'd need...

Intel should seriously work on performance per watt.



Did someone say unlimited power??

21 minutes ago, Arika S said:

Cooler manufacturers are going to have to step up their game

That, and desktop PC designs: so much wasted space and wasted cooling for the sake of looks, and then having to focus this much on cooling.


That 14% difference between 250W and 350W is weird; the 12900K loses about the same when going from 250W down to 125W, so I would expect the gap to be smaller. I'm kind of curious how the efficiency curve of the 13900K compares to the 12900K and 12700K.


3 hours ago, Arika S said:

Cooler manufacturers are going to have to step up their game

Nah, what you need is a dedicated air conditioner just for your gaming desktop!

That way your ancient Thermaltake V1 can cool your 13900K!



A 315W power draw for 5.5GHz? Overclocking just seems kind of pointless except for benchmark numbers; most people wouldn't be able to reasonably cool that.


They really do be throwing more watts at their chips to get more performance instead of just working on efficiency, eh.



A 300W CPU paired with a 600-800W GPU: a guaranteed 1kW space heater. I somewhat expected this, though I thought it wouldn't happen since Intel stepped up their game with E-cores and whatnot. They've quite literally reached Bulldozer levels of power consumption, and that's when power limited. When not power limited, they beat even the Extreme Edition CPUs of the day that chugged power like there was no tomorrow. I'm curious what kind of cooling solutions this CPU will require.



On 8/7/2022 at 7:05 PM, captain_to_fire said:

Bulldozer Lake manifesting

The only difference is at least the 13900K is competent. The entire Bulldozer lineup was a waste of sand.  



2 hours ago, captain_to_fire said:

Bulldozer Lake manifesting

Bulldozer was shit, at least this has something to show for all that heat.



There is a discrepancy here...

Cinebench says it's a 3GHz CPU,

and Cinebench checks the frequency of your CPU when you launch it, so that's weird...

 

[Screenshot: Cinebench window reporting the CPU at 3GHz]


26 minutes ago, Arika S said:

Bulldozer was shit, at least this has something to show for all that heat.

I should've done my research when I swapped my 1100T for the FX-6300 from my friend who upgraded to Ryzen. I didn't know it was this bad of a regression...



2 hours ago, Vishera said:

There is a discrepancy here...

Cinebench says it's a 3GHz CPU,

and Cinebench checks the frequency of your CPU when you launch it, so that's weird...

No discrepancy here, that's just how Cinebench works: it shows the base clock. My 5800X3D also shows as 3.4GHz, because that's the official speed, even though it actually runs 4.2GHz all-core during the benchmark.
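If you want to see what the chip actually runs at while a benchmark is going, rather than the base clock Cinebench prints, something like this works. A minimal sketch using the psutil library (my choice, not anything from the thread); HWiNFO gives you the same information per core with far more detail:

# Poll the CPU frequency while a benchmark runs; requires psutil
# (pip install psutil). How accurate 'current' is depends on the OS.
import time
import psutil

for _ in range(5):
    freq = psutil.cpu_freq()  # MHz: current, min, max
    print(f"current: {freq.current:.0f} MHz / max: {freq.max:.0f} MHz")
    time.sleep(1)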



Looking at the scores and power reported, unlimited power is +13.8% performance for +36% power. Not entirely surprising, as it'll be well into the diminishing-returns part of the efficiency curve. On a points-per-watt basis, the power-limited setting is about 19% more efficient.
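Spelling that points-per-watt math out (a sketch: 314W is the unlimited draw reported in the thread, while ~230W for the limited run is my assumption, chosen to match the +36% power figure):

# Perf-per-watt comparison of the two runs.
limited = {"score": 35_693, "watts": 230}    # wattage assumed, see above
unlimited = {"score": 40_616, "watts": 314}  # 314W reported in the thread

perf_gain = unlimited["score"] / limited["score"] - 1   # ~ +13.8%
power_gain = unlimited["watts"] / limited["watts"] - 1  # ~ +36.5%
eff_limited = limited["score"] / limited["watts"]       # points per watt
eff_unlimited = unlimited["score"] / unlimited["watts"]

print(f"perf: +{perf_gain:.1%}, power: +{power_gain:.1%}")
print(f"points/W: {eff_limited:.0f} vs {eff_unlimited:.0f} "
      f"({eff_limited / eff_unlimited - 1:+.1%} for the limited run)")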

 

Have we had a definitive 170W TDP vs 170W PPT answer from AMD on Zen 4? If they start with a 170W TDP, that's 212W of socket power already, so not far off. Also keep in mind the difference that AMD enforces a power limit, whereas Intel leaves the choice of power limit (even none at all) to the system builder without voiding the warranty.



I found a few Cinebench R23 results for Zen 4 (7000 series). Sadly they are not verified, but they appeared in the database nonetheless; if they are accurate, they make things very interesting:

 

[Screenshots: cpu-monkey Cinebench R23 multi-core listings for the Ryzen 9 7950X, Ryzen 9 7900X, Ryzen 7 7700X, and Ryzen 5 7600X]

 

We have the 7950X, 7900X, 7700X, and 7600X; there are other Raptor Lake entries as well. These listings stand out by being "Not verified".

 

https://www.cpu-monkey.com/en/cpu_benchmark-cinebench_r23_multi_core-16


On 8/8/2022 at 12:17 AM, BiG StroOnZ said:

it drew 314W

Isn't it getting a bit... unsustainable, perhaps? Add a future 4090 Ti and we're approaching a kilowatt. Apparently the most expensive household appliance to run is the A/C; maybe future PCs will challenge it 😆


The power draw isn't that crazy, imo. Yeah, it's higher than anyone wants, but it's getting close to a 3970X while using 16 baby cores and half the threads. The 13700K will have almost the same gaming performance at around half the wattage, just like the 12900K vs the 12700K. If Raptor Lake doesn't re-add AVX-512 support, I'm definitely getting Zen 4, unless AMD tries something dumb like price-matching their 8-core against Intel's 6+8-core. The 12600K stomps the 5800X in literally everything, but I guess there are enough AMD fanboys around now to keep the price of the 5800X up lol


53 minutes ago, SeriousDad69 said:

If Raptor Lake doesn't re-add AVX-512 support, I'm definitely getting Zen 4, unless AMD tries something dumb like price-matching their 8-core against Intel's 6+8-core.

Why are you going Zen 4 if Raptor Lake doesn't enable AVX-512?

 

Zen4 doesn't have AVX-512 either. 


Why do people consider CB a good benchmark?

1) If you're going to do CPU-based ray-traced rendering in Cinema 4D, you're not going to do it with a scene this small and simple. The only reasons to do CPU-based RT these days are:

a) your asset size is too large to fit in VRAM, or
b) you are using custom shaders that can't be recreated on the GPU (such as Open Shading Language).

How a CPU performs in either of those cases is very different from how it performs in the very simple (small) test case that is the CB benchmark. Not to mention that the Cinema 4D engine has had quite a few improvements since the current version of CB forked its code, so real-world performance is very different.

2) If you're not planning on using your machine for Cinema 4D CPU rendering, then CB is a really bad proxy for performance, as it uses only a small fraction of your CPU's feature set. You should benchmark based on the tasks you intend to use your system for (the exact software you plan on using). The ordering of CPUs and the relative performance percentages that CB returns cannot be used to predict how a system will perform for your task.

3) To stress your system? As above in point (2), CB only uses a small subset of your system's features, so it is a very poor stress test if you're hoping to detect things like overclocking instabilities. CB can run perfectly, yet even a simple task like opening a Word document might use some CPU feature completely unused by CB and thus fail.

 

 


On 8/8/2022 at 10:17 AM, BiG StroOnZ said:

The only downside I see in these results is power consumption: with the power cap removed it drew 314W

Reported socket power, or power at the wall? The only real metric for this is power at the wall, and you should not subtract background power draw when comparing, as that punishes systems with good power management.

The only real use of socket power is sizing cooling for the socket, and even then (as in the Level1Techs review of the new Threadripper Pros) some motherboards may understate the power draw so as to provide more power to the compute chiplets. (One of the workstation boards somehow does not count power drawn by the IO die, so it can feed the compute dies an extra 20W without it showing up, even as an OC.)


45 minutes ago, hishnash said:

Why do people consider CB a good benchmark? 

There is a difference between being a popular benchmark and being a good benchmark. It is a popular benchmark, and I think many are aware of the limitations to some extent.

 

Its popularity probably comes down to a few things: it's easy to get, easy to run, and gives a single number to compare (usually MT). It's also interesting to watch while it runs.

 

How good it is depends on the user's understanding. For example, I'd use R15 as an indicator of non-AVX performance, with generally good HT/SMT scaling (~30%) and no significant RAM scaling, which takes that variable out. The downside is that the score is highly predictable; you hardly need to run it at all. Before Alder Lake, you only needed an architecture-specific scaling factor (related to IPC in this task), a factor for HT/SMT scaling, the number of cores, and the clock they run at, and you could predict the score within a few percent. Any error is usually the software environment not being optimised for benchmarking, causing lower scores. I think this can still work with R20 and newer, but I never bothered beyond checking that the overall behaviour hadn't significantly changed; it's just a matter of collecting a new set of scaling coefficients.
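Something like this is all the "model" amounts to (a sketch; the coefficient here is made up for illustration, not a measured value):

# Predict a Cinebench R15 MT score as per-core-per-GHz throughput
# x cores x clock, plus a flat HT/SMT uplift (~30%).
def predict_r15_mt(cores: int, clock_ghz: float,
                   ipc_factor: float, smt_gain: float = 0.30) -> float:
    return ipc_factor * cores * clock_ghz * (1 + smt_gain)

# Hypothetical 8-core CPU at 4.5 GHz, made-up arch factor of 42 pts/(core*GHz)
print(f"predicted: {predict_r15_mt(8, 4.5, 42):.0f} cb")  # -> ~1966 cb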

 

45 minutes ago, hishnash said:

3) To stress your system? As above in point (2), CB only uses a small subset of your system's features, so it is a very poor stress test if you're hoping to detect things like overclocking instabilities. CB can run perfectly, yet even a simple task like opening a Word document might use some CPU feature completely unused by CB and thus fail.

When overclockers stress test, they use multiple loads for that very reason; CB is one of them. R15 was a pretty good indicator of stability for non-AVX workloads, for example. R20 and newer I'm less sure about, since they use AVX but not in a strong way, so other AVX loads like Prime95 are more useful.

 

47 minutes ago, hishnash said:

Reported socket power, or power at the wall? The only real metric for this is power at the wall, and you should not subtract background power draw when comparing, as that punishes systems with good power management.

Socket power is the most useful for architectural comparisons like the one we're doing here. For AMD CPUs, ideally we'd even break it down further between the cores and the IOD. Wall power confuses the comparison by pulling in a bunch of unrelated factors, because it applies to the whole system. Understand the basics, then build from them: system level is the end result, but without the fundamentals you won't understand what's going on.



1 hour ago, hishnash said:

Why do people consider CB a good benchmark? 

Because it's:

Free (as in free beer).

Easy to run (just download, install, and click run).

Scales really well with cores (which a lot of benchmarks don't).

Can be run in either single-core or multi-core mode.

Quick to run.

 

 

I would not necessarily call it a "good" benchmark so much as an "accessible" one.

Being easy to run doesn't make it good. Its scaling well with cores might actually give an incorrect perception of how a CPU will perform, because a lot of consumer software is not highly threaded (see the sketch below).
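The underlying reason is Amdahl's law: if only part of a workload is parallel, piling on cores stops helping quickly. A quick sketch with illustrative parallel fractions (0.95 and 0.50 are made-up splits, not measurements of any real application):

# Amdahl's law: speedup from n cores when only a fraction p of the
# work is parallelizable.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

for p in (0.95, 0.50):
    print(f"p={p}: 8 cores -> {amdahl_speedup(p, 8):.2f}x, "
          f"24 cores -> {amdahl_speedup(p, 24):.2f}x")
# p=0.95: 8 cores -> 5.93x, 24 cores -> 11.16x
# p=0.50: 8 cores -> 1.78x, 24 cores -> 1.92x (3x the cores, barely faster)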

 

But it's easier to get someone to run Cinebench for five minutes than to ask them to pay 1,000 dollars for SPEC, compile it, run it for several hours, and post the 20+ different scores it spits out.

