
New Details of Intel Rocket Lake Officially Revealed

Random_Person1234
37 minutes ago, Bombastinator said:

Heh. Zilog vs Motorola sounds like the title of a bad black-and-white 1960s B-grade Japanese monster movie to me. Maybe that's because Motorola sounds a little bit like Mothra.

Odd, because to me Zilog could sound like a monster,

but the "ro" in Motorola makes it a very generic name.


1 hour ago, CTR640 said:

Intel has been stagnating its CPU division for many years, and as a result we've been stuck on the fucking 4-core bullshit.
They did something similar back in the Pentium 4 days. From 2000 to 2008 they just refreshed that CPU architecture with minimal improvements. Much like Intel's current claims about AMD's CPU offerings, they claimed dual core was useless and that no games or applications would use it (it turned out that even if your software didn't use the second core, you saw a substantial performance boost just from having background processes and antivirus running on a separate core from your game or other demanding application). And of course, dual-core CPUs shined once Vista arrived, since it was designed to leverage them (which of course caused a big performance hit for single-core CPU owners). They also claimed the best gaming experience was on their Pentium 4... same jazz as now.

 

Intel is a company that is quick to get complacent and stop innovating if it is not pushed to the edge.

 

Quote

It's inevitable that 4-core CPUs will become problematic some day, especially for games. Games would have to be nerfed because of that, and if you want 6 cores? Pay up 600 bucks or more! 6 cores and above were HEDT territory, like the i7-5930K and i7-5960X. And let's not forget the yearly 3-5% performance gain for the same prices.

 

Don't forget about the security issues where the fix would bring performance down by 3-5%. Granted, only on certain tasks, but still.

 


21 hours ago, LAwLz said:

You've got to be kidding. AMD has been ahead of Intel for like 2 years, and all of a sudden people question whether Intel has ever been ahead of AMD at anything?

Intel was consistently ahead of AMD on a ton of things if we go back more than 2 years. If that sounds vague, here's an example: Intel did PCIe 3.0 before AMD.

 

If you had asked "is this the first time Intel will do something next-gen before AMD, in the last 2 years, limited to consumer products that aren't too complicated?" then the answer would probably be yes. But as the question is phrased right now, the answer is an obvious no. This is not the first time.

People like to conveniently forget that Bulldozer existed. Personally, I'd be okay with that, but objectively the Bulldozer era was far worse for AMD than what Intel is going through right now.


29 minutes ago, GoodBytes said:

They did the same thing back with the Pentium 4. From 2000 to 2008 it was pushed in consumers' faces, with claims like dual core being useless and no games or applications ever using it, and that the best gaming experience was not on a dual-core CPU from AMD but on a Pentium 4. The P4 architecture was shit. It was an oven with an oversized pipeline mess, and they even changed the way they marketed CPU frequency to show fancy numbers while in reality the chips were much slower.

 

Intel is a company that is quick to get complacent. If they are not pushed to the edge, they don't innovate at all.

I still remember the first-gen Core i7 days... that CPU and its chipset were supposed to bring USB 3.0 and SATA-III. What did Intel do? They left both out in the end. Surprise!

Now motherboard manufacturers, especially those who had promised those features, had to find an affordable USB 3.0 controller and SATA-III controller at the last minute (both of which were shit... well, especially the SATA-III one, which on all the early boards had such high latency that SATA-II was actually faster in day-to-day usage). Consumers got shit in the face thanks to Intel.

 

Don't forget about the security issues where the fix would bring performance down by 3-5%. Granted, only on certain tasks, but still.

 

Rocket Lake is just part of Intel's annual subscription and a placeholder for Alder Lake.

Not to mention that the current Comet Lake is suspiciously similar to 8th- and 9th-gen Coffee Lake.

In short, Intel is spending a lot of money on small improvements. They hit major limitations with their architecture, just like AMD did with GCN 3.0 in 2015, yet continued to pour R&D money into it (GCN 4 was weaker than the GCN 3 offerings and GCN 5 was not competitive).

A PC Enthusiast since 2011
AMD Ryzen 7 5700X@4.65GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2085MHz Memory 5000MHz
Cinebench R23: 15669cb | Unigine Superposition 1080p Extreme: 3566

1 hour ago, Lord Szechenyi said:

Odd, because to me Zilog could sound like a monster,

but the "ro" in Motorola makes it a very generic name.

I'll admit "Motorola" does not normally conjure up images of a guy in a rubber suit stomping on model buildings. With "Zilog" and the "vs", though, it managed it.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


Well, based on the leaked info about Ryzen 5000, Intel is going to need that double-digit IPC improvement just to match Ryzen 5000 in single-thread performance, never mind marginally scraping back their crown.


On 10/30/2020 at 8:53 PM, Bombastinator said:

They also said it about two cores. There's more stuff like that too. First there was 4-bit, then 8-bit, then 16-bit, then 32-bit, then 64-bit. I was a kid for the 8-to-16-bit switch, but I remember the 16-to-32-bit one pretty well. Funny thing: people are apparently saying the same things now as they were then. "It's wasteful", "no one will need addresses that big", "we do fine with 16-bit". Yet now 32-bit is dead and we're on 64. This "we'll never need more than X" has never held up in computing that I've seen. Not once. It happens every time, though.

1982 called. The Commodore 64 wants its 64K of RAM back. 38911 BASIC BYTES FREE


5 hours ago, GoodBytes said:

They did something similar back in the Pentium 4 days. From 2000 to 2008 they just refreshed that CPU architecture with minimal improvements. Much like Intel's current claims about AMD's CPU offerings, they claimed dual core was useless and that no games or applications would use it (it turned out that even if your software didn't use the second core, you saw a substantial performance boost just from having background processes and antivirus running on a separate core from your game or other demanding application). And of course, dual-core CPUs shined once Vista arrived, since it was designed to leverage them (which of course caused a big performance hit for single-core CPU owners). They also claimed the best gaming experience was on their Pentium 4... same jazz as now.

Intel is a company that is quick to get complacent and stop innovating if it is not pushed to the edge.

Don't forget about the security issues where the fix would bring performance down by 3-5%. Granted, only on certain tasks, but still.

 

Yeah, I also remember AMD had a CPU faster than the Pentium 4, and of course Intel had to pull some filthy, sneaky tricks and manipulate the benchmarks. The internet was not widely used at that time, so Intel could lie and manipulate easily. Isn't the Pentium D just two Pentium 4s packed on one die?

 

Intel indeed is quick to get complacent. Did Intel pay developers to program only for 4 cores? If AMD hadn't kicked back, we'd have an i7-10900K with 4 cores...

 

And they probably knew about the security flaws, but the money's gotta roll, so screw security! That's what Intel thought. The internet is now widely used and the crowd of people interested in hardware keeps growing, so they can no longer hide like a rat. How Linus roasted Intel! :D

DAC/AMPs:

Klipsch Heritage Headphone Amplifier

Headphones: Klipsch Heritage HP-3 Walnut, Meze 109 Pro, Beyerdynamic Amiron Home, Amiron Wireless Copper, Tygr 300R, DT880 600ohm Manufaktur, T90, Fidelio X2HR

CPU: Intel 4770, GPU: Asus RTX3080 TUF Gaming OC, Mobo: MSI Z87-G45, RAM: DDR3 16GB G.Skill, PC Case: Fractal Design R4 Black non-iglass, Monitor: BenQ GW2280


13 minutes ago, CTR640 said:

Intel indeed is quick to get complacent. Did Intel pay developers to program only for 4 cores?

I love how you say that when the only mainstream CPUs with more than 4 cores before Ryzen were FX, and no one sane would develop with FX in mind.

^-^


5 hours ago, Vishera said:

Rocket Lake is just part of Intel's annual subscription and a place holder for Alder Lake.

And not to mention that the current Comet Lake is suspiciously similar to 8th gen and 9th gen Coffee Lake.

In short Intel is spending a lot of money on small improvements,they hit major limitations with their architecture - just like AMD did with GCN 3.0 in 2015,yet continued to invest R&D money on it (GCN 4 was weaker than GCN 3 offerings and GCN 5 was not competitive).

Rocket Lake is the biggest shift since Skylake. It is not just another Lake. How it will compete remains to be seen, but it will be more different than anything you've seen since Skylake.

 

Also, Intel's problem is NOT architecture. They have newer and faster architectures; the problem is process, and the complications that come from coupling architecture to process. Rocket Lake is the first step toward decoupling architecture from process, bringing the first post-Skylake architecture to desktop while "still" on 14nm.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


56 minutes ago, Elisis said:

I love how you say that when the only mainstream CPUs with more than 4 cores before Ryzen were FX, and noone sane would develop with FX in mind.

But were they really true cores? 


6 minutes ago, CTR640 said:

But were they really true cores? 

Nope, they were not.

FX processors had each module "split in two", creating the illusion of 8 cores/8 threads.

The problem is that AMD called each "half core" a core, not to mention the single-core performance penalty, since every two "cores" shared resources with each other.

AMD clustered each "half" in order to compete with Intel's Hyper-Threading technology.

The FX processors were basically 4 modules and 8 threads, with terrible single-core performance but good multi-threaded performance.

I regretted buying one back in the day, when I could have bought a 2600K instead, which is still good to this day and will play modern games once you overclock it to 4.7GHz and above.


7 hours ago, CTR640 said:

But were they really true cores? 

If you argue otherwise, there would have been zero reason for developers to develop for CPUs with more than 4 cores, as they didn't really exist before Ryzen for most people. So if you were to argue that these aren't real cores, then it directly disproves:

8 hours ago, CTR640 said:

Intel indeed is quick to get complacent. Did Intel pay developers to program only for 4 cores?

 

^-^


6 hours ago, Elisis said:

If you argue otherwise, there would have been zero reason for developers to develop for CPUs with more than 4 cores, as they didn't really exist before Ryzen for most people. So if you were to argue that these aren't real cores, then it directly disproves:

 

You are right; both FX and mainstream Intel topped out at 8 threads (4 modules for FX, 4 cores with Hyper-Threading for Intel), and in addition Intel had their enthusiast products with more cores, such as the 8-core/16-thread 5960X in 2014.

The only exception is Crysis, which was hardcoded to use only 4 threads (made for the Core 2 Quad), but that wasn't Intel's fault, it was Crytek's.


CPU - Ryzen 5 5600X | CPU Cooler - EVGA CLC 240mm AIO  Motherboard - ASRock B550 Phantom Gaming 4 | RAM - 16GB (2x8GB) Patriot Viper Steel DDR4 3600MHz CL17 | GPU - MSI RTX 3070 Ventus 3X OC | PSU -  EVGA 600 BQ | Storage - PNY CS3030 1TB NVMe SSD | Case Cooler Master TD500 Mesh

 


8 hours ago, Random_Person1234 said:

 

Obviously there are many caveats to this, but if we do some quick-and-dirty estimation, there is some very interesting stuff to work with here:

 

So: the Rocket Lake ES ran @ 4.2GHz and scored 179 pts, while the Core i9-10900K @ 5.3GHz scores 152 pts. This means Rocket Lake is 18% faster than Comet Lake while running at roughly 20% lower clock speeds (hello, IPC improvements).

 

If we were to be reasonable and say Rocket Lake can clock to at least 4.8GHz, that gives us around 14-15% of wiggle room. Applying that clock speed bump (4.8GHz, up from 4.2GHz) gives a score around 200-203 points. Compared to Comet Lake (10th gen), Rocket Lake would then be 33% faster while still having roughly 10% lower clocks.

 

If we were to give Rocket Lake clock speed parity with Comet Lake @ 5.3GHz, we have approximately another 10% to work with. This brings the Rocket Lake score closer to 218-220, making it roughly 44% faster than the 10th-gen Core i9-10900K @ 5.3GHz (clock for clock).

 

Now, scaling obviously isn't typically this linear; therefore, even if we trim the fat slightly (9-13%), we are still looking at Rocket Lake @ 5.3GHz being about 38-40% faster than an i9-10900K @ 5.3GHz (clock for clock).

 

On a side note: none of this even takes into account that the i9-10900K is a 10c/20t chip while the Rocket Lake part is "only" 8c/16t, meaning the Rocket Lake ES manages all this with 20% fewer cores/threads (if we extrapolate to multi-threaded workloads as well).
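The back-of-the-envelope math above can be sketched in a few lines (scores and clocks taken from the post; this applies pure linear scaling, so the projections land slightly above the post's trimmed 200-203 and 218-220 figures):

```python
# Naive linear clock-scaling estimate (numbers from the post above).
rkl_score, rkl_clock = 179, 4.2   # Rocket Lake ES: score / GHz
cml_score, cml_clock = 152, 5.3   # Core i9-10900K: score / GHz

def scaled(score, from_ghz, to_ghz):
    # Assume the score scales linearly with clock speed (it usually won't).
    return score * to_ghz / from_ghz

print(f"ES vs 10900K as measured: {rkl_score / cml_score - 1:+.0%}")   # +18%
print(f"projected @ 4.8GHz: {scaled(rkl_score, rkl_clock, 4.8):.0f}")  # ~205
print(f"projected @ 5.3GHz: {scaled(rkl_score, rkl_clock, 5.3):.0f}")  # ~226
```

The uncorrected 5.3GHz projection (~226 pts, about 49% over the 10900K) overshoots the post's 218-220 precisely because real workloads don't scale perfectly with frequency, which is why the post then trims 9-13% off.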


4 hours ago, BiG StroOnZ said:

 

Obviously there are many caveats to this, but if we do some quick-and-dirty estimation, there is some very interesting stuff to work with here:

 

So: the Rocket Lake ES ran @ 4.2GHz and scored 179 pts, while the Core i9-10900K @ 5.3GHz scores 152 pts. This means Rocket Lake is 18% faster than Comet Lake while running at roughly 20% lower clock speeds (hello, IPC improvements).

 

If we were to be reasonable and say Rocket Lake can clock to at least 4.8GHz, that gives us around 14-15% of wiggle room. Applying that clock speed bump (4.8GHz, up from 4.2GHz) gives a score around 200-203 points. Compared to Comet Lake (10th gen), Rocket Lake would then be 33% faster while still having roughly 10% lower clocks.

 

If we were to give Rocket Lake clock speed parity with Comet Lake @ 5.3GHz, we have approximately another 10% to work with. This brings the Rocket Lake score closer to 218-220, making it roughly 44% faster than the 10th-gen Core i9-10900K @ 5.3GHz (clock for clock).

 

Now, scaling obviously isn't typically this linear; therefore, even if we trim the fat slightly (9-13%), we are still looking at Rocket Lake @ 5.3GHz being about 38-40% faster than an i9-10900K @ 5.3GHz (clock for clock).

 

On a side note: none of this even takes into account that the i9-10900K is a 10c/20t chip while the Rocket Lake part is "only" 8c/16t, meaning the Rocket Lake ES manages all this with 20% fewer cores/threads (if we extrapolate to multi-threaded workloads as well).

 

 

1. We don't know that the clock speeds are being reported properly.

2. We don't know how frequency-optimised this benchmark is or isn't; not all workloads scale cleanly with frequency.

3. We don't have cache details that I can see; depending on the cache size and the size of the benchmark data, this could be over- or under-stressing the memory and cache subsystems.

Basically, we need more data samples.
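The second point can be illustrated with a toy latency model: if part of each benchmark iteration is memory stalls that don't shrink with clock speed, the score gain falls well short of the clock gain. The split between core work and stall time below is made up purely for illustration:

```python
# Toy model: iteration time = core work (scales with clock) + memory stalls (doesn't).
def score(freq_ghz, core_work=1.0, mem_stall=0.25):
    # Higher is better; only the core-work term benefits from a faster clock.
    return 1.0 / (core_work / freq_ghz + mem_stall)

clock_gain = 5.3 / 4.2 - 1                  # +26% clock speed
score_gain = score(5.3) / score(4.2) - 1    # only ~+11% score in this model
print(f"clock: {clock_gain:+.0%}, score: {score_gain:+.0%}")
```

With a purely compute-bound workload (mem_stall=0) the two gains match; the bigger the fixed stall fraction, the worse frequency scaling looks, which is exactly why a single benchmark run tells you little about IPC.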


5 hours ago, BiG StroOnZ said:

 

Obviously there are many caveats to this, but if we do some quick-and-dirty estimation, there is some very interesting stuff to work with here:

 

So: the Rocket Lake ES ran @ 4.2GHz and scored 179 pts, while the Core i9-10900K @ 5.3GHz scores 152 pts. This means Rocket Lake is 18% faster than Comet Lake while running at roughly 20% lower clock speeds (hello, IPC improvements).

 

If we were to be reasonable and say Rocket Lake can clock to at least 4.8GHz, that gives us around 14-15% of wiggle room. Applying that clock speed bump (4.8GHz, up from 4.2GHz) gives a score around 200-203 points. Compared to Comet Lake (10th gen), Rocket Lake would then be 33% faster while still having roughly 10% lower clocks.

 

If we were to give Rocket Lake clock speed parity with Comet Lake @ 5.3GHz, we have approximately another 10% to work with. This brings the Rocket Lake score closer to 218-220, making it roughly 44% faster than the 10th-gen Core i9-10900K @ 5.3GHz (clock for clock).

 

Now, scaling obviously isn't typically this linear; therefore, even if we trim the fat slightly (9-13%), we are still looking at Rocket Lake @ 5.3GHz being about 38-40% faster than an i9-10900K @ 5.3GHz (clock for clock).

 

On a side note: none of this even takes into account that the i9-10900K is a 10c/20t chip while the Rocket Lake part is "only" 8c/16t, meaning the Rocket Lake ES manages all this with 20% fewer cores/threads (if we extrapolate to multi-threaded workloads as well).

Smells not so good. The problem with accountant math is that it misses things. That's three times as high as even Intel's marketers claimed, and marketers like accountant math a lot, which makes it highly likely that something major is missing, something the marketers didn't think even they could get away with. We'll see when the various chips are tested against each other.

