MSI has just confirmed Intel Alder Lake CPU launch date (November 4th)

Lightwreather
1 minute ago, leadeater said:

Why would you switch to an 11900K when it's more likely the 10900K is actually faster for you, never mind that Ryzen 5000 is faster than both as well? Regardless, if you play games and that's what you really care about, then you don't want an 11900K, since you can do better for less money.

My cooler is LGA1200 compatible and a 5900X costs 650€ now, so... I was going to say cheaper, but I'm seeing it's about the same price for boards and processors regardless of whether it's Intel or AMD now (5900X vs 11900K).

 

Probably because last time I used Intel and dropped to Ryzen, I lost 30 to 40 frames in Cities: Skylines. I went from a 4-core, 4-thread processor to an 8-core, 16-thread one, from an i5-4570 to a Ryzen 1700, and I still lost that much framerate, mostly because of this one game. Intel somehow has the upper hand in the game I've dumped nearly 2000 hours into.

 

Not planning to upgrade anytime soon; I'm waiting to see if AM5 will be more interesting, or if the processor above will be.

 

Hopefully this new launch makes an upgrade worth it; I still can't play Cyberpunk at 60 fps to this day with a newer processor and a 3080 Ti.

Maybe I'm just tired of being let down after installing a newer processor like the R5 3600 and seeing it perform worse in some titles than my R7 1700.

 

I need to mine out a chunk in Minecraft and blow some stuff up in Fallout 4 now; I've typed enough for today haha 😄

Useful threads: PSU Tier List | Motherboard Tier List | Graphics Card Cooling Tier List ❤️

Baby: MPG X570 GAMING PLUS | AMD Ryzen 9 5900x /w PBO | Corsair H150i Pro RGB | ASRock RX 7900 XTX Phantom Gaming OC (3020 MHz core & 2650 MHz memory) | Corsair Vengeance RGB PRO 32GB DDR4 (4x8GB) 3600 MHz | Corsair RM1000x | WD_BLACK SN850 | WD_BLACK SN750 | Samsung EVO 850 | Kingston A400 | PNY CS900 | Lian Li O11 Dynamic White | Display(s): Samsung Odyssey G7, ASUS TUF GAMING VG27AQZ 27" & MSI G274F

 

I also drive a Volvo, as one does being Norwegian haha, a Volvo V70 D3 from 2016.

Reliability was a key thing, and it's my second car; it's working pretty well for its 6 years of age xD


3 minutes ago, MultiGamerClub said:

From an i5-4570 to a Ryzen 1700, and I still lost that much framerate, mostly because of this one game. Intel somehow has the upper hand in the game I've dumped nearly 2000 hours into.

True, but the Ryzen 1000 series isn't the Ryzen 5000 series. Ryzen 1000 and 2000 weren't exactly the pinnacle of gaming performance; Ryzen 5000 is, however, and both AMD and Intel now have great options and performance.

 

Still go with 10th Gen as it's faster than 11th Gen for what you do.


19 minutes ago, leadeater said:

If you want to stick with Intel and are going to choose something that isn't current generation once Alder Lake is out, then your best pick is a 10700K with MCE enabled and the turbo time and power limits removed (all of which are probably motherboard defaults anyway on Z-series).

Generally speaking, MCE is a bad idea. It is an overclock: you might get slightly more performance at the cost of a massive increase in power usage. Setting an unlimited power limit is not an overclock, and gets you the best "stock" performance. With some CPUs now supporting per-core turbo, something like MCE might even cost you single-core performance for a possible benefit to all-core.

 

6 minutes ago, MultiGamerClub said:

Probably because last time I used Intel and dropped to Ryzen, I lost 30 to 40 frames in Cities: Skylines. I went from a 4-core, 4-thread processor to an 8-core, 16-thread one, from an i5-4570 to a Ryzen 1700, and I still lost that much framerate, mostly because of this one game. Intel somehow has the upper hand in the game I've dumped nearly 2000 hours into.

First-gen Ryzen was pretty much sold on Cinebench and had serious shortcomings in many other use cases, in part due to low clocks and other architectural limitations. AMD still hadn't really passed Skylake two generations later with Zen 2. I made a similar mistake around that time: my 1700 system, overclocked, was losing in most of my gaming to a stock 6600K.

 

6 minutes ago, MultiGamerClub said:

Maybe I'm just tired of being let down after installing a newer processor like the R5 3600 and seeing it perform worse in some titles than my R7 1700. 😄

Make sure you compare like with like; otherwise you can end up in a rock-paper-scissors loop over what's better.

 

Just now, leadeater said:

Still go with 10th Gen as it's faster than 11th Gen for what you do.

Personally, unless you had to buy a new system right now, I'd see what Alder Lake brings, and at what price. If that is too spendy, I'd personally still consider Rocket Lake over Comet Lake. Sure, there may be some differences here and there between those and Zen 3, but unless you focus on a specific niche where one excels over the others, the difference isn't that major, and wider platform considerations may come into play. Get a 500-series chipset mobo and slap Rocket Lake into it, and you're at least at parity with AM4 when it comes to PCIe speeds and usable lanes. It might age slightly slower that way than if you go for Comet Lake.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


17 minutes ago, leadeater said:

Still go with 10th Gen as it's faster than 11th Gen for what you do.

Wait, what? The previous generation is faster? I feel like I've been living under a rock, reading this.



Just now, MultiGamerClub said:

Wait, what? The previous generation is faster? I feel like I've been living under a rock, reading this.

Yeah, not always, but for a great majority of things, yes. Gaming almost always favors 10th Gen, and well-threaded applications like Blender/rendering are faster as well, since 10th Gen goes up to 10 cores while 11th Gen only goes up to 8.
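As a rough sanity check on the rendering claim, multithreaded throughput scales approximately with core count times all-core clock. A toy sketch (the clock figures below are illustrative placeholders, not measured 10900K/11900K values):

```python
# Toy throughput model for well-threaded workloads: cores * all-core clock.
# Clocks here are illustrative placeholders, not real measured values.
def mt_throughput(cores, all_core_ghz):
    """Relative multithreaded throughput estimate."""
    return cores * all_core_ghz

gen10 = mt_throughput(10, 4.9)  # hypothetical 10-core part
gen11 = mt_throughput(8, 5.0)   # hypothetical 8-core part, slightly higher clock
assert gen10 > gen11  # the extra cores outweigh a small clock advantage
```

The point: a small per-core clock or IPC edge rarely compensates for two fewer cores in workloads that scale.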

 

Before buying anything check the reviews for it on Anandtech or similar.


4 minutes ago, leadeater said:

Yeah, not always, but for a great majority of things, yes. Gaming almost always favors 10th Gen, and well-threaded applications like Blender/rendering are faster as well, since 10th Gen goes up to 10 cores while 11th Gen only goes up to 8.

 

Before buying anything check the reviews for it on Anandtech or similar.

Huh... that's... refreshing news. I honestly thought, without even checking, that the 11900K would have more cores than the 10900K, but I seem to be way out of the loop now.

 

I will check their reviews and still wait for AM5 to launch, then decide from there.

 

Sorry if I sounded sassy or tired in the reply above 😕



19 minutes ago, porina said:

Generally speaking, MCE is a bad idea. It is an overclock: you might get slightly more performance at the cost of a massive increase in power usage. Setting an unlimited power limit is not an overclock, and gets you the best "stock" performance. With some CPUs now supporting per-core turbo, something like MCE might even cost you single-core performance for a possible benefit to all-core.

MCE has been around for a long time and hasn't caused any wide-scale problems, unless someone like ASRock makes it the default on lower-end, non-Z-series platforms. Odds are most DIY systems running Z-series boards have MCE on, whether anyone realizes it or not.

 

Also, as far as I know, MCE doesn't prevent single-core turbo from working. It's not applying a fixed multiplier to the cores; it's just overriding the boost table, setting the value for workloads of 2 cores and greater to the same value as 2 cores and fewer.
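The boost-table override described above can be sketched as a toy model (the multipliers below are made up for illustration, not real 10700K values):

```python
# Sketch of an MCE-style boost table override, not vendor code.
# Table maps active-core count -> turbo multiplier; values are hypothetical.

def apply_mce(boost_table):
    """Return a copy of the boost table with the 1-2 core turbo multiplier
    applied to every active-core count, leaving light loads unchanged in
    effect (they already got the top multiplier)."""
    best = boost_table[1]  # the 1-2 core turbo multiplier
    return {cores: best for cores in boost_table}

# Hypothetical stock table for an 8-core part.
stock = {1: 51, 2: 51, 3: 50, 4: 50, 5: 49, 6: 49, 7: 48, 8: 48}
mce = apply_mce(stock)
assert mce[8] == 51          # all-core now matches the 1-2 core turbo
assert mce[1] == stock[1]    # single-core turbo is unchanged
```

This is why MCE raises all-core clocks (and power draw) without touching the single-core bin.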

 

[attached image: boost clock comparison chart]

 

https://www.gamersnexus.net/guides/3268-multi-core-enhancement-and-core-performance-boost-testing

 

Something like a 10700K on a Z-series board with a half-decent cooler will in all likelihood be fine with MCE on and no power limits.


28 minutes ago, porina said:

Personally, unless you had to buy a new system right now, I'd see what Alder Lake brings, and at what price. If that is too spendy, I'd personally still consider Rocket Lake over Comet Lake. Sure, there may be some differences here and there between those and Zen 3, but unless you focus on a specific niche where one excels over the others, the difference isn't that major, and wider platform considerations may come into play. Get a 500-series chipset mobo and slap Rocket Lake into it, and you're at least at parity with AM4 when it comes to PCIe speeds and usable lanes. It might age slightly slower that way than if you go for Comet Lake.

I was mainly basing the comment on the desire to go with an Intel platform, and that 10th Gen would be cheaper. I just don't see much point spending more to get the same or slightly worse. When a 10700K is beating an 11900K in almost all game benchmarks I can't find any reason to spend more, and a 10900K is faster due to its higher stock clocks; rarely do the extra 2 cores account for the performance increase in games.

 

A 10700K or 10900K just makes the most sense given those factors. Personally, I'd buy nothing, which I have done and will continue to do.


8 minutes ago, leadeater said:

MCE has been around for a long time and hasn't caused any wide-scale problems, unless someone like ASRock makes it the default on lower-end, non-Z-series platforms. Odds are most DIY systems running Z-series boards have MCE on, whether anyone realizes it or not.

 

Also, as far as I know, MCE doesn't prevent single-core turbo from working. It's not applying a fixed multiplier to the cores; it's just overriding the boost table, setting the value for workloads of 2 cores and greater to the same value as 2 cores and fewer.

I haven't actually used it on systems past the quad-core era, given how bad it was before. Certainly in the quad-core days it overclocked all-core to equal the single-core turbo clock. The old turbo philosophy was that all cores could reach the single-core clock, and MCE used that unofficially. But now some Intel CPUs have moved towards single cores going above that. Still, lifting all-core that way is power-inefficient with slim gains, now that we're operating closer to the limit at stock than ever. Unless I'm missing something, the chart included as an example showed practically no difference between all states.

 

5 minutes ago, leadeater said:

11th Gen is just weird though, not normal at all

What's that supposed to mean? It certainly holds a unique place in Intel's products, in that they redesigned a newer architecture for an older process. It isn't as good as it could be because of that, but there is still a clear and significant uplift in performance at the architecture level. There is also speculation that it gives Intel more current skills in designing products for multiple processes, should they decide to de-risk key products or offer more manufacturing flexibility in the future.

 

8 minutes ago, leadeater said:

I was mainly basing the comment on the desire to go with an Intel platform, and that 10th Gen would be cheaper. I just don't see much point spending more to get the same or slightly worse. When a 10700K is beating an 11900K in almost all benchmarks I can't find any reason to spend more, and a 10900K is faster purely due to its higher stock clocks.

 

We're not looking at the same benchmarks, or not interpreting them the same way, then? I just re-skimmed Anandtech's quickly to make sure I wasn't misremembering. In short, if you compare like-for-like core counts, Rocket Lake does take the lead much of the time even if you exclude tests that use AVX-512, where it runs away even faster. The only time Comet Lake gets significantly ahead is if you compare 10 vs 8 cores for highly scalable workloads, in which case you're probably better off going for a 12-or-more-core Zen 3 anyway. For gaming, there is more give and take at 1080p, but that returns to my earlier argument: the differences there just aren't significant (even vs Zen 3). Yes, if you have to have a 1st, 2nd, 3rd ordering it isn't clear-cut, but in absolute terms there are low single-digit percentage differences between them. If you go to higher resolutions, you're more GPU-bound. If, and only if, you're an insanely-high-fps type of player, then optimising the CPU that way might be worth doing.

 

 



38 minutes ago, porina said:

What's that supposed to mean?

That usually generation-over-generation improvements are brought to the table without regressions alongside them. For example, every single rendering application is significantly slower on 11th Gen for the highest model. These are not insignificant performance regressions.

 

38 minutes ago, porina said:

We're not looking at the same benchmarks, or interpreting the same way then? I just re-skimmed Anandtech's quickly to make sure I wasn't mis-remembering. In short, if you compare like for like core counts Rocket Lake does take the lead much of the time even

Then don't, because that's not real-world to how anyone buys, or how Intel even markets them. An 11900K is slower than both the 10900K and the 10700K in practically all game benchmarks. In rendering, the 10900K is always faster than the 11900K.

 

So when it comes down to it, if you have $500 to spend you'll get more performance from 10th Gen than from 11th Gen; what's more, you could spend as little as $400 and in many cases still get more performance than a $500 11th Gen part.
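The value argument boils down to a simple performance-per-dollar comparison. A back-of-envelope sketch (the prices and relative scores below are hypothetical placeholders, not benchmark data):

```python
# Back-of-envelope perf-per-dollar comparison.
# Scores and prices are hypothetical placeholders, not measured data.
def perf_per_dollar(score, price):
    return score / price

cpus = {
    "10700K": {"score": 100, "price": 400},  # hypothetical baseline
    "11900K": {"score": 98,  "price": 550},  # hypothetical: slightly slower, pricier
}
ranked = sorted(cpus, key=lambda c: perf_per_dollar(**cpus[c]), reverse=True)
assert ranked[0] == "10700K"  # cheaper part wins on value if it's not slower
```

If the cheaper part is also faster, the ratio isn't even close, which is the point being made about 10th vs 11th Gen.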

 

And therein lies the problem, and why 11th Gen is weird: you shouldn't have to check that the model-equivalent product isn't slower than the last one. It's become more complex than comparing performance per dollar between generations, because performance across the board is no longer, at minimum, the same as the previous product. This is very un-Intel.

 

Save the money, buy a better GPU and increase the graphical quality.


1 hour ago, MultiGamerClub said:

Huh... that's... refreshing news. I honestly thought, without even checking, that the 11900K would have more cores than the 10900K, but I seem to be way out of the loop now.

 

I will check their reviews and still wait for AM5 to launch, then decide from there.

 

Sorry if I sounded sassy or tired in the reply above 😕

Puget Systems do really good write-ups, though they're workstation-focused. GamersNexus is great at gaming benchmarks, and then there's these guys:

https://www.pugetsystems.com/labs/articles/11th-Gen-Intel-Core-CPU-Review-Roundup-2106/

I'm not actually trying to be as grumpy as it seems.

I will find your mentions of Ikea or Gnome and I will /s post. 

Project Hot Box

CPU 13900k, Motherboard Gigabyte Aorus Elite AX, RAM CORSAIR Vengeance 4x16gb 5200 MHZ, GPU Zotac RTX 4090 Trinity OC, Case Fractal Pop Air XL, Storage Sabrent Rocket Q4 2tbCORSAIR Force Series MP510 1920GB NVMe, CORSAIR FORCE Series MP510 960GB NVMe, PSU CORSAIR HX1000i, Cooling Corsair XC8 CPU block, Bykski GPU block, 360mm and 280mm radiator, Displays Odyssey G9, LG 34UC98-W 34-Inch,Keyboard Mountain Everest Max, Mouse Mountain Makalu 67, Sound AT2035, Massdrop 6xx headphones, Go XLR 

Oppbevaring

CPU i9-9900k, Motherboard, ASUS Rog Maximus Code XI, RAM, 48GB Corsair Vengeance LPX 32GB 3200 mhz (2x16)+(2x8) GPUs Asus ROG Strix 2070 8gb, PNY 1080, Nvidia 1080, Case Mining Frame, 2x Storage Samsung 860 Evo 500 GB, PSU Corsair RM1000x and RM850x, Cooling Asus Rog Ryuo 240 with Noctua NF-12 fans

 

Why is the 5800x so hot?

 

 


10 minutes ago, leadeater said:

Then don't, because that's not real-world to how anyone buys, or how Intel even markets them.

OK, so that's the difference. While I don't disagree with your position, I can't fully agree with it either; it's way more complicated than that. I like to first look at the architecture in isolation, as that lets you understand the differences in what you're getting. Pricing in particular is dynamic, and perceived value will swing with it, so I tend to only consider it at the end.

 

Also, looking at CPU value tends to be flawed in many ways. Over-focusing on limited performance metrics would be as silly as choosing a car based on only one measure, say fuel economy or acceleration time, to the neglect of all else.

 

As an example of my thinking process, I first choose my target performance level. My latest build, which did go for Rocket Lake, had a vision of a higher-end gaming system for current and near-future gaming, say the next two years. This rules out 6-core CPUs; 8-core is the sweet spot, on the basis that's what current-gen consoles also have. More than that would be unlikely to give a significant return in this timeframe. In the specific case of AMD, there's also the major downside of the CCX splitting caches when going beyond 8 cores. The shortlist was the 5800X, 11700K, and maybe the 10700K. That's where factors beyond performance come in. The 10700K does not support PCIe 4.0; while that doesn't make a significant difference today, there is a not-insignificant chance it will start to become useful within the intended life of the system. The choice was then down to the 11700K and 5800X. Skimming one UK retailer I often use, right now the 11700K is £350, the 5800X is £390, and the 10700K is £310. Price in this case wasn't a factor in my choice to go Rocket Lake; I couldn't resist playing with AVX-512 outside of gaming uses, although it isn't a must-have. TechPowerUp rated the 5800X as 3% faster for gaming at 1080p than the 11700K, and the 10700K as 1% faster, which to me is insignificant, putting aside that I'm running above that resolution.

 

13 minutes ago, leadeater said:

buy a better GPU 

In 2021? 😄 If you're looking at a high-end GPU right now, CPU pricing is not your biggest problem.



1 hour ago, porina said:

In 2021? 😄 If you're looking at a high-end GPU right now, CPU pricing is not your biggest problem.

Oh, I just mean from buying a 10700K rather than an 11900K, because that difference is enough to go up a model of GPU unless you're buying an RTX 3080 or better. Logically speaking, I wouldn't advise buying the very top end of CPUs or GPUs for gaming anyway; I still do, but I know I'm being silly.

 

Buying around the x700K and x70 products more often gives you a slightly better and much more consistent performance trend line than the long-term staircase of buying x900K and x90 parts.

 

Also, until PCIe 4.0 (and NVMe at those speeds) has a practical benefit, these specification advantages really don't matter or improve the user experience. If, for example, you put a PCIe 4.0 NVMe SSD into a PCIe 3.0 slot, or down-spec the slot, and there is no tangible change, then you're increasing platform cost for as yet no benefit. This might change later, but then the issue is how much later, and how widespread that benefit will be.

 

I generally keep my impractical and illogical buying habits to myself and keep my advice to the here and now, maybe 12 months into the future. For instance, there is a distinctly non-zero chance my next gaming system will be a TR 5000; I'd never advise anyone to do that lol.


13 hours ago, porina said:

This puts it around the mid range of what I consider reasonable launch windows. Early Oct could sync with Win11. Late Nov would be interrupted by US thanksgiving. 

 

The tech in me wants to personally bench one as it is the most radical thing in desktop x86 space for a long time. The sensible part says I have too many systems, and they're not selling easily.

 

 

Can we also assume AMD will finally "launch" their high cache Zen 3 desktop CPUs around then too?

I mean, Thanksgiving is right before Black Friday, so it might actually be a pretty good time to do a release, as a lot of people will be shopping for Christmas around that time.


3 minutes ago, leadeater said:

Also, until PCIe 4.0 (and NVMe at those speeds) has a practical benefit, these specification advantages really don't matter or improve the user experience. If, for example, you put a PCIe 4.0 NVMe SSD into a PCIe 3.0 slot, or down-spec the slot, and there is no tangible change, then you're increasing platform cost for as yet no benefit. This might change later, but then the issue is how much later, and how widespread that benefit will be.

I got a 980 Pro; something will use it before it's obsolete, right? 😄 That >7 GB/s must be good for something. I might even figure it out some day. It made no economic sense, but then again I am downgrading from Optane, which also didn't provide any tangible benefit outside of CrystalDiskMark 4K random read results.

 

Again, I'm looking to the current console generation for an indication of where things might go in PC gaming, if they haven't already. DirectStorage in particular might be that use for 4.0. If it gets supported, 4.0 will offer some benefit over 3.0. We can look at RTX, which has been out for a few years now. No, not everything uses RT, but it is appearing in new titles, and the odd older-title revisit. DirectStorage is at or even behind that curve in its relative cycle, and will probably remain so, as I'd speculate there are fewer PCIe 4.0 SSDs in 4.0 systems than RTX GPUs in the wild right now.

 

A bit more speculative, but GPU-to-system-RAM bandwidth would also be helped by faster PCIe. I don't know if DirectStorage relates only to non-volatile storage, but if there is sufficient spare RAM in the system, that would be preferable to NVMe. With PCIe 5.0 the bandwidth is in the ballpark of dual-channel DDR4-4000. Have Intel stated whether Arc will be 5.0? Otherwise there'll be practically nothing running at max speed in those slots on ADL mobos.
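That "ballpark" claim is easy to sanity-check from theoretical peak numbers (PCIe 3.0 and later use 128b/130b encoding; all figures are one-direction theoretical peaks):

```python
# Rough peak-bandwidth comparison: PCIe 5.0 x16 vs dual-channel DDR4-4000.

def pcie_gbps(gt_per_s, lanes):
    """Peak one-direction PCIe bandwidth in GB/s (128b/130b encoding)."""
    return gt_per_s * lanes * (128 / 130) / 8

def ddr_gbps(mt_per_s, channels, channel_bytes=8):
    """Peak DDR bandwidth in GB/s (64-bit = 8-byte channels)."""
    return mt_per_s * channels * channel_bytes / 1000

pcie5_x16 = pcie_gbps(32, 16)       # PCIe 5.0: 32 GT/s per lane -> ~63 GB/s
ddr4_4000_dual = ddr_gbps(4000, 2)  # DDR4-4000 dual channel -> 64 GB/s
assert abs(pcie5_x16 - ddr4_4000_dual) / ddr4_4000_dual < 0.05
```

Both land around 63-64 GB/s, so a PCIe 5.0 x16 link really does approach dual-channel DDR4-4000 throughput on paper.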

 

1 minute ago, Brooksie359 said:

I mean, Thanksgiving is right before Black Friday, so it might actually be a pretty good time to do a release, as a lot of people will be shopping for Christmas around that time.

 

I guess you mean the rest of the system parts could be on sale? You're not going to put a brand-new product straight into a sale. Still, the CPU and mobo will be brand new, as would DDR5 RAM. GPUs, I doubt, are going to be any better in pricing or availability any time soon. The rest of the system is comparatively lower cost. It will be interesting to see if there are any interesting deals, but it may not match up with someone building a cutting-edge system, as I'd guess discounts would apply more to less-desirable or older parts.



49 minutes ago, porina said:

Again, I'm looking to the current console generation for an indication of where things might go in PC gaming, if they haven't already. DirectStorage in particular might be that use for 4.0. If it gets supported, 4.0 will offer some benefit over 3.0.

I know, but that still comes back to the when-and-how-many problem. There are likely to be multiple CPU generations released between now and then.


5 hours ago, Caroline said:

More ground pins to make the chips physically incompatible with current boards... yayy!! 

 

/s

Though, tbh, I doubt they would've been able to fit a whole new µarch and system of computing within the tiny FCLGA1200 socket.

"A high ideal missed by a little is far better than a low ideal that is achievable, yet less effective."

 

If you think I'm wrong, correct me. If I've offended you in some way tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; Being wrong helps you learn what's right.


19 hours ago, Stahlmann said:

It'll be really interesting to see how the first non-Skylake CPU will perform. This should finally give a notable generational improvement.

 

The next few years should be really interesting with 2 entities fighting for the crown.

 

Rocket Lake was the first non-Skylake CPU; that's why there was an IPC increase, as well as fixes to prevent the "Internal Parity Error" encountered in quite a few recent games (and legendary in Minecraft).

 

It just got a bad rap because it used a chipset that was a stopgap between existing compatibility with Skylake cores (10900K) and Cypress Cove (basically, Ice Lake running on 14nm). This caused a latency penalty due to the Gear 2 mode for DDR4 (in Gear 1 mode, latency was almost as low as Comet Lake's).

 

ADL is going to be very interesting. But I wonder what the benchmark results will be if you disable all the Atom cores and compare it to Rocket Lake at the same MHz, with the huge latency penalty of early DDR5? (I'm guessing from the leaks that DDR4 on both systems will still be 19% faster.) I guess we'll find out in just over a month or so...


10 hours ago, leadeater said:

Why would you switch to an 11900K when it's more likely the 10900K is actually faster for you, never mind that Ryzen 5000 is faster than both as well? Regardless, if you play games and that's what you really care about, then you don't want an 11900K, since you can do better for less money.

 

If you want to stick with Intel and are going to choose something that isn't current generation once Alder Lake is out, then your best pick is a 10700K with MCE enabled and the turbo time and power limits removed (all of which are probably motherboard defaults anyway on Z-series).

The fix for the internal parity error is enough reason to switch. Needing 20 threads instead of 16 (10 vs 8 physical cores) for gaming is like someone saying they need an RTX 3090 because the 3080's 10 GB isn't enough VRAM...

 

There's been a whole lot of discussion about that error in the 6 months since RKL launched, and a lot more people have encountered it in newer titles, sometimes having to completely remove their overclocks to stop it.

One person did a test over on OCN and determined that if you disabled 2 cores on the 10900K (turning it into a 9900K, ahem), it greatly reduced or eliminated the parity error (except in Minecraft, which is a worst-case example of cache-based Java garbage collection at work, as it loads one full thread at 100% constantly). That's proof that Intel stretched the Skylake ring architecture to its breaking point. Yes, the 10900K clocks better on average than the 9900K, but that's to be expected from any process refinement as silicon purity increases.

 

The challenge is getting the memory latency low enough to compensate for the memory controller (MC) changes.


22 minutes ago, Falkentyne said:

The fix for the internal parity error is enough reason to switch. Needing 20 threads instead of 16 (10 vs 8 physical cores) for gaming is like someone saying they need an RTX 3090 because the 3080's 10 GB isn't enough VRAM...

Hence the recommendation to go with the 10700K 😉


9 hours ago, leadeater said:

I know, but that still comes back to the when-and-how-many problem. There are likely to be multiple CPU generations released between now and then.

I just looked up DirectStorage, and it looks like it's further off than I thought: it was only enabled in preview versions of Windows in the middle of this year. MS miscommunicated that it was going to be a Win11 feature, then clarified that Win10 back to 1909 will support it. Still, if you bought a system this year, chances are there will be some level of support in some new games within the life of that system. CPU launch cycles tend to be yearly, give or take, and most sensible people don't replace their system every generation. I'm only just transitioning away from my 8086K system, which is over 3 years old; if pennies were tight, I wouldn't even really need to replace it.

 

4 hours ago, Falkentyne said:

with the huge latency penalty of early DDR5? (I'm guessing from the leaks that DDR4 on both systems will still be 19% faster.) I guess we'll find out in just over a month or so...

Interesting... IMO bandwidth > latency in enabling increasing-core-count CPUs to unleash their potential. AMD seem to be going a different route: instead of addressing the bandwidth choke directly, they're trying to mitigate it with ever-bigger caches.

 

Historically latency hasn't changed that much through the generations if you compare like with like; that is, standards-based modules against standards-based modules, and XMP modules against XMP modules. XMP will tend to perform better as its timings are much more aggressive, but stability and compatibility are not a given, which they generally are when running standard modules.

 

JEDEC DDR4 3200 C22 13.75ns CAS latency

JEDEC DDR5 4800 C40 16.67ns CAS latency

XMP DDR4 3600 C16 8.9ns CAS latency

 

OK, I undercut my own argument a bit there. I'm taking the JEDEC mid timing bins, with the expected initial speed at 4800. A faster latency grade usually exists, but it isn't as commonly used. Still, we'll have to see how it really performs in practice, and what XMP DDR5 modules will be out, and when. DDR5 might help slightly on latency given it has double the channels at half the width: it might not increase peak bandwidth, but it might increase effective bandwidth and/or reduce effective latency.
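The first-word latency figures quoted above follow directly from the data rate and the CL number: a DDR module's command clock runs at half its transfer rate, so one clock cycle lasts 2000 / (data rate in MT/s) nanoseconds, and the CAS delay is CL of those cycles. A quick sketch of that arithmetic (function name is just illustrative):

```python
def cas_latency_ns(data_rate_mts: float, cas_cycles: int) -> float:
    """First-word CAS latency in nanoseconds.

    DDR transfers twice per clock, so the clock period in ns is
    2000 / data_rate (MT/s); the CAS delay is CL of those periods.
    """
    return 2000.0 * cas_cycles / data_rate_mts

# The figures from the post above:
print(f"JEDEC DDR4-3200 C22: {cas_latency_ns(3200, 22):.2f} ns")  # 13.75
print(f"JEDEC DDR5-4800 C40: {cas_latency_ns(4800, 40):.2f} ns")  # 16.67
print(f"XMP   DDR4-3600 C16: {cas_latency_ns(3600, 16):.2f} ns")  # 8.89
```

This is why DDR5-4800 C40 looks worse on paper than DDR4-3200 C22 despite the much higher transfer rate: the extra CL cycles more than eat the faster clock.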

 

4 hours ago, Falkentyne said:

sometimes having to completely remove their overclocks to stop it..

Are people blaming the manufacturer for OCs not being stable now?

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


17 hours ago, leadeater said:

 

Still go with 10th Gen as it's faster than 11th Gen for what you do.

Unless you're doing ML work on the CPU, there's no performance gain from the 11th gen Rocket Lake CPU. And Alder Lake will end up being a step backwards if the ML load actually uses AVX-512 instructions. I don't see how games can use AVX-512, since they're not typically on the training end of machine learning, even if they do find a use for it (e.g. some games, such as Phasmophobia, have taken up some primitive voice recognition)

 

It always seems like Intel is shooting itself in the foot whenever it releases a new CPU. Take the GNA feature on the 11th gen CPUs: I had to comb Intel's download site to find the driver for it, and I have yet to find any software that makes use of it. Nothing I can find uses AVX-512 (some software libraries, like OVRLipSync, have been compiled by Oculus to not use ANY AVX, as have various other games that use them).
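For what it's worth, whether software can even see AVX-512 comes down to a CPUID feature flag; on Linux the kernel exposes these in the `flags` line of /proc/cpuinfo, with `avx512f` (Foundation) being the baseline subset every AVX-512 part reports. A minimal, Linux-only sketch of that check (function names are just illustrative):

```python
def cpu_flags(cpuinfo_text: str) -> set:
    """Parse the first 'flags' line of /proc/cpuinfo-style text into a set."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def has_avx512(cpuinfo_text: str) -> bool:
    # avx512f (Foundation) is the baseline every AVX-512 CPU exposes;
    # other subsets (avx512dq, avx512bw, ...) vary by generation.
    return "avx512f" in cpu_flags(cpuinfo_text)

# Trimmed-down example flags lines:
rocket_lake_style = "flags\t: fpu sse sse2 avx avx2 avx512f avx512dq avx512bw"
older_cpu_style = "flags\t: fpu sse sse2 avx avx2"
print(has_avx512(rocket_lake_style))  # True
print(has_avx512(older_cpu_style))    # False
```

In real use you'd pass in the contents of /proc/cpuinfo; on Alder Lake with the E-cores enabled, the flag is absent, which is exactly the step backwards described above.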

 

 


16 hours ago, leadeater said:

So many rocks now days, can be hard to know you're under one lol

Just add some diet coke on top of it, and you have a virgin cuba libre 😄

 

Anywhoozle, I hope that we also see DDR5 sticks before November 5th.


36 minutes ago, Kisai said:

Unless you're doing ML work on the CPU, there's no performance gain from the 11th gen Rocket Lake CPU. And Alder Lake will end up being a step backwards if the ML load actually uses AVX-512 instructions. I don't see how games can use AVX-512, since they're not typically on the training end of machine learning, even if they do find a use for it (e.g. some games, such as Phasmophobia, have taken up some primitive voice recognition)

I doubt anyone would use a CPU over a GPU for ML

"A high ideal missed by a little is far better than a low ideal that is achievable, yet less effective"

 

If you think I'm wrong, correct me. If I've offended you in some way tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; Being wrong helps you learn what's right.

