Intel 12th Gen Core Alder Lake for Desktops: Top SKUs Only, Coming November 4th + Z690 Chipset

Lightwreather
3 hours ago, dizmo said:

I'm really interested to see how these chips do. Especially the 12600k, which sadly I don't think many are going to test off the bat.

LTT usually does the i5 and i9.

15 minutes ago, Kisai said:

Apple doesn't have HT in the M1's.

Is Hyper-Threading even a thing on a RISC architecture? I thought that in Hyper-Threading it's basically the CISC-to-microcode front end that is doubled, while the ALU and FPU are not.
RISC doesn't require such complicated microcode translation, since the individual assembly instructions are already quite close to microcode.
 

I just wonder how the operating system will manage the efficiency cores. I would love to see stuff like OneDrive sync handled by those cores... but I fear Windows will assign the game I'm playing to the efficiency cores.
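On Linux, an operating system (or a user) can steer a process onto specific cores with an affinity mask; Windows has the analogous SetProcessAffinityMask / SetThreadAffinityMask APIs. A minimal sketch, assuming Linux and treating the chosen CPU IDs as purely hypothetical placeholders (nothing here knows which IDs are E-cores on a real chip):

```python
import os

def pin_to_cpus(pid: int, cpu_ids: set) -> set:
    """Restrict `pid` (0 = this process) to the given logical CPU IDs."""
    os.sched_setaffinity(pid, cpu_ids)
    return set(os.sched_getaffinity(pid))

# Example: confine this process to logical CPU 0 (always present).
print(pin_to_cpus(0, {0}))
```

Thread Director on Alder Lake is supposed to make these hints automatic, but manual affinity like this is the blunt fallback if the scheduler guesses wrong.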

40 minutes ago, Kisai said:

blown up PSU's due to OCP

What? If OCP kicks in, the PSU doesn't blow up. That's half the point of the protection.

25 minutes ago, Laborant said:

Is Hyperthreading in a RISC architecture even a thing?

It is, and with even more threads per core; look up Marvell ThunderX and IBM Power, both RISC CPUs with SMT. ThunderX3 is 4-way SMT and Power10 is 8-way SMT; those are the latest versions of each platform, and ThunderX is an ARM-based design.
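You can see the SMT width of a core directly on Linux, where each logical CPU exposes a `thread_siblings_list` file in sysfs (e.g. `/sys/devices/system/cpu/cpu0/topology/thread_siblings_list`) containing a string like "0,8" or "0-1". A small parser sketch, assuming that sysfs format; a count of 2 means 2-way SMT (Hyper-Threading), while 4 or 8 would be the ThunderX3 / Power10 cases above:

```python
def count_siblings(siblings_list: str) -> int:
    """Count hardware threads per core from a sysfs thread_siblings_list string."""
    n = 0
    for part in siblings_list.strip().split(","):
        if "-" in part:                 # a range such as "0-3"
            lo, hi = part.split("-")
            n += int(hi) - int(lo) + 1
        else:                           # a single CPU id
            n += 1
    return n

print(count_siblings("0,8"))   # a typical 2-way SMT sibling pair
```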

this is one of the greatest things that has happened to me recently, and it happened on this forum; those involved have my eternal gratitude http://linustechtips.com/main/topic/198850-update-alex-got-his-moto-g2-lets-get-a-moto-g-for-alexgoeshigh-unofficial/ :')

i used to have the second-best link in the world here, but it died ;_; it's a 404 now, but it will always be here

 

30 minutes ago, Elisis said:

What? If OCP kicks in, the PSU doesn't blow up. That's half the point of the protection.

Because they blow up when the OCP doesn't trip in time.

 

I'm not saying people are going to have their PSUs explode every time here; I'm saying that the "constant" TDP value gave you a proper reference for what size of PSU to have. If a 35 W CPU can pull 250 W for several seconds under a specific load, then suddenly the cheap 300 W power supplies found in Dell and HP systems, as well as budget builds, are going to hit OCP seemingly at random.

 

And OCP is not something you can just keep hammering and expect to keep working. You might get a few shots at it, and then the PSU is just dead.

 

If you can steer the P/E cores to stay below a fixed power draw/TDP, that potential is avoided entirely, at the cost of leaving performance on the table. So if you really, really want that air-cooled ITX system, you need that ceiling to exist.

1 minute ago, Kisai said:

Because they blow up when the OCP doesn't trip in time.

Right, so, that's an issue entirely separate from component power draw.

2 minutes ago, Kisai said:

then suddenly the cheapy 300w power supplies found in Dell and HP systems as well as budget builds are going to hit the OCP seemingly at random.

OCP is set higher than the rated current on the 12 V rail for exactly this reason.
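A back-of-envelope sketch of that point: the numbers below are illustrative assumptions, not measured values for any particular Dell/HP unit, but they show why a brief CPU excursion alone need not reach the trip point of a 300 W supply.

```python
def rail_current(power_w: float, volts: float = 12.0) -> float:
    """Current drawn on a rail, I = P / V."""
    return power_w / volts

rated_a = rail_current(300.0)     # 25 A if the full 300 W came off the 12 V rail
ocp_trip_a = rated_a * 1.3        # assumed trip point, ~130% of rated current
spike_a = rail_current(250.0)     # hypothetical 250 W CPU excursion (~20.8 A)
headroom_a = ocp_trip_a - spike_a # the spike by itself sits below the trip point
```

What actually trips OCP is the total rail load (CPU plus GPU plus everything else), which is why total system budgeting matters more than the CPU spec alone.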

2 minutes ago, Kisai said:

And the OCP is not something you can just keep hammering and it will still work. You might get a few shots at it, and then the PSU is just dead.

Uh, no.

8 hours ago, leadeater said:

Well, that's nothing new. Either you get zero information about the product before it goes on sale and they don't talk about anything at all, or you get this situation, where the company outlines the new architecture and its improvements and gives indications of actual product performance gains.

 

Think I'd rather this than nothing, or leaks only.

If the choice were between these PR marketing slides using Windows 11 without updates, or nothing at all except leaks, I'd rather have leaks, because a lot of leaks nowadays are deliberate: they generate news articles so people decide they want a product well before there is even a confirmed release date.

8 hours ago, poochyena said:

what do you mean by "issues"?

See the power consumption; it's still higher than AMD Ryzen. I know a lot of people won't care, but I don't want my PC to be a space heater; a modern GPU already heats up a room.

4 hours ago, saltycaramel said:

who’s wrong

or what’s the reasoning

(probably the devil is in the details of how these are manufactured and binned)

You're comparing two completely different architectures and operating systems. And I doubt it has anything to do with manufacturing. Apple has a complete vertical monopoly over everything.

5 minutes ago, Blademaster91 said:

You're comparing two completely different architectures and operating systems. And I doubt it has anything to do with manufacturing. Apple has a complete vertical monopoly over everything.


They’re different architectures but if on the performance-oriented chip Intel shoots for 1:1 p/e and Apple shoots for 4:1 p/e, I think the approach feels so different that it is legitimate to wonder why. OK they’re different OSes, does Windows need more e-cores than macOS to check the mail? On a desktop PC at that? I mean you’re right but wondering about the possible reasons of the two approaches is inevitable. 

9 hours ago, Blademaster91 said:

But it looks like Intel still has issues with power consumption, or they're just being more truthful with TDP numbers.

So people have gone from misunderstanding TDP to misunderstanding Max Turbo Power. Can we call that MTP for short? Just because that power may be reached in some circumstances, doesn't mean it will be reached often or always. Historically highest power draws have been with Prime95 and similar loads. Most people don't run software that demanding, and power draws will be lower. Exactly how much power under what situation will have to be seen in real testing, but you can see similar in older Intel CPUs too.

 

That contrasts with AMD's approach of enforcing a lower power limit, where instead of seeing high power draws, you see lower core clocks for those workloads.
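The base/turbo power behaviour described above can be sketched roughly: instantaneous package power is capped at PL2 (Max Turbo Power), while a moving average of power is held at or below PL1 (base power), with tau setting the averaging window. The EWMA form and the example numbers (125 W / 241 W / 56 s) are illustrative assumptions, not Intel's exact algorithm for any SKU.

```python
def simulate(requested_w, pl1=125.0, pl2=241.0, tau_s=56.0, dt=1.0):
    """Deliverable power per step under a PL1/PL2/tau-style limit."""
    a = dt / tau_s               # EWMA smoothing factor
    avg, delivered = 0.0, []
    for want in requested_w:
        p = min(want, pl2)       # hard cap at PL2
        if avg >= pl1:           # turbo budget spent: fall back to PL1
            p = min(p, pl1)
        avg = (1 - a) * avg + a * p
        delivered.append(p)
    return delivered

# A sustained all-core load runs at PL2 at first, then settles at PL1,
# which is why short benchmarks see far higher draw than long renders.
trace = simulate([300.0] * 300)
```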

 

4 hours ago, suicidalfranco said:

yeah, but if it means less performance, i really do not care.

Leave that crap to mobile platform. I'd rather have 2 extra P-cores than waste space on 8 E-cores 

For multi-threaded workloads, the expectation is that those 4 E cores will do more work than an extra P core. We will have to wait and see how that works out in practice.

 

2 hours ago, Kisai said:

In a way, it's kinda disgusting that Intel still markets the CPU with unrealistic TDP.

In other words, you still don't understand what TDP really means.

 

1 hour ago, Kisai said:

I'm not saying people are going to have their PSU's explode every time here, I'm saying that that "constant" TDP value gave you proper reference for what size of PSU to have. If a 35w CPU can pull 250w for several seconds under a specific load, then suddenly the cheapy 300w power supplies found in Dell and HP systems as well as budget builds are going to hit the OCP seemingly at random.

Those cheap OEM systems will run at a lower power limit, with a lower end CPU than the k models that have been announced. You're making up problems that will not happen.

 

The correct way to do this has always been to power limit for the system. No sensible person is going to put the i9 into the cheapest office system and expect it to perform at full potential.

 

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible

6 minutes ago, porina said:

So people have gone from misunderstanding TDP to misunderstanding Max Turbo Power. Can we call that MTP for short? Just because that power may be reached in some circumstances, doesn't mean it will be reached often or always. Historically highest power draws have been with Prime95 and similar loads. Most people don't run software that demanding, and power draws will be lower. Exactly how much power under what situation will have to be seen in real testing, but you can see similar in older Intel CPUs too.

 

That contrasts with AMD's approach of enforcing a lower power limit, where instead of seeing high power draws, you see lower core clocks for those workloads

Well, it really doesn't help that Intel added new acronyms for average power draw and peak power draw.

But yeah, I'd like to see what real-world power consumption is, though Intel's power numbers don't look much better compared to the previous 11th-gen CPUs.

The issue with power draw on Intel is that motherboard manufacturers like to enable core enhancement by default, with unlimited boost and too much voltage. With AMD, on the other hand, I don't see any issue with setting a lower power limit and letting it boost automatically based on temps; the drawback is little room for OCs, though I don't see much point in overclocking a CPU anymore: it's maybe 5% extra performance in actual use for twice the power consumption.

9 minutes ago, Blademaster91 said:

*snip*...I don't see much point in overclocking a CPU anymore it's maybe 5% extra performance in actual use for twice the power consumption.

Isn't there some recent news about people OCing Alder Lake CPUs to 6+ GHz and getting something like 60% more performance in benchmarks?

 

 

Of course, that was a liquid-nitrogen setup with no real-world use; the point is just that there is performance to be had 🙂

 

EDIT:// It was clocked to 6.8 GHz

https://www.notebookcheck.net/Alder-Lake-Overclocked-Intel-Core-i9-12900K-sets-new-benchmark-records-at-6-8-GHz.575802.0.html

31 minutes ago, Blademaster91 said:

Well it really doesn't help that Intel added new acronyms for average power draw and peak power draw.

Well, there is no average power draw. What was TDP is now base power, which is more like a recommended sustained-load power for planning low-end cooling options.

 

31 minutes ago, Blademaster91 said:

The issue with power draw on Intel is motherboard manufacturers like to set core enhancement by default with unlimited boost and too much voltage.

MCE defaulting to on is a thing that happens on some enthusiast boards, but it is still technically overclocking, so it's running outside what Intel recommends. That is not Intel's fault; it is the mobo manufacturer's fault. However, I've only ever owned one mobo that defaulted to it on, and it was a higher-end OC board, so not totally unexpected in that case. On general OC boards you may get "suggested" to enable it when you turn on XMP. Nowadays I always decline.

 

The more usual state is MCE off but with (practically) no power limit. That is the scenario Intel is clarifying with the new metrics.

 

31 minutes ago, Blademaster91 said:

With AMD on the other hand, I don't see any issue on setting a lower power limit and letting it boost automatically based on temps, the drawback is little room for OC's though I don't see much point in overclocking a CPU anymore it's maybe 5% extra performance in actual use for twice the power consumption.

You could argue AMD got to the "max turbo power" concept before Intel. Their PPT value seems to serve the same purpose, with the difference that AMD CPUs tend to hit it often, which I don't think will be the case on Intel since their operating philosophy is different. This will remain an ongoing point of misunderstanding.

 

I do agree that mainstream OC is not worth the time any more.

 

4 hours ago, Kisai said:

Personally, my opinion is that the e-cores should have "replaced" hyperthreading, as that could have solved the spectre/meltdown and similar attacks from even being possible

Just wait; there could be a similar vulnerability that can be exploited when threads are moved between P cores and E cores. There's no reason to assume this is any more secure. There are other mitigations, similar to what AMD/Zen has, that can protect against these attacks, so it's not really a problem with SMT at all.

 

4 hours ago, Kisai said:

We're probably going to see a wave of blown up PSU's due to OCP and failed CPU's due to using air-cooling on these parts until Intel starts admitting that 90w+ parts should have been liquid cooled and have a 250w PSU headroom.

 

None of these CPUs could realistically ever trigger OCP unless you have a 250 W PSU, and the CPUs wouldn't fail either. Bad cooling means throttling, and the throttling point is a temperature the CPU can run at for literally years on end.

 

3 hours ago, Kisai said:

I'm not saying people are going to have their PSU's explode every time here, I'm saying that that "constant" TDP value gave you proper reference for what size of PSU to have. If a 35w CPU can pull 250w for several seconds under a specific load, then suddenly the cheapy 300w power supplies found in Dell and HP systems as well as budget builds are going to hit the OCP seemingly at random.

That would be your fault for using a thermal specification for power when Intel literally offers you the PL1 and PL2 power specifications. Does Intel need to throw a brick through your window with a note on it for you to take notice of this?

 

Basically all Intel has done is put the PL2 value on the standard spec sheet.

 

3 hours ago, Kisai said:

If you can steer the P/E cores to not go above a fixed power draw/TDP, then that potential is avoided entirely, at the cost of performance being left on the table. So if you really really want that air-cooled ITX system, you need that ceiling to exist.

A limit does exist: PL2.

 

3 hours ago, Kisai said:

And the OCP is not something you can just keep hammering and it will still work. You might get a few shots at it, and then the PSU is just dead.

Nope. A working OCP can be triggered as often as you like; a functioning OCP threshold is set below and within component tolerances, so you can trip it repeatedly without harm.

3 hours ago, Laborant said:

Is Hyperthreading in a RISC architecture even a thing? I thought, that's basically the CISC-to-Microcode-Block is doubled in Hyperthreading, while the ALU and FPU are not.
RISC does not require such a complicated microcode-translation, as the individual assembly-functions are already quite close to microcode.
 

I just wonder, how the operating system can manage the efficiency cores. I would love to see some OneDrive Sync stuff to be handled by these cores... but I fear of Windows assigning the game I play to the efficiency cores.

Yes, IBM Power has SMT4 and SMT8.

12 minutes ago, leadeater said:

That would be your fault for using a thermal specification for power when Intel literally offers you the PL1 and PL2 power specifications. Does Intel need to throw a brick through your window with a note on it for you to take notice of this?

Honestly that would be a really good way to get the message across.

"The most important step a man can take. It’s not the first one, is it?
It’s the next one. Always the next step, Dalinar."
–Chapter 118, Oathbringer, Stormlight Archive #3 by Brandon Sanderson

 

 

Older stuff:

Spoiler

"A high ideal missed by a little, is far better than low ideal that is achievable, yet far less effective"

 

If you think I'm wrong, correct me. If I've offended you in some way tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; Being wrong helps you learn what's right.

 

6 hours ago, Mister Woof said:

I disagree. Most people I know use google drive or apple icloud, or even MS onedrive to store documents, photos, videos, etc. I think you grossly underestimate the desktop/mobile integration for your average user.

 

Power users are going to use a NAS or backup their shit on the reg. 

 

But you don't care about that and want to put words in my mouth.

 

If you actually read my post, I never implied it was an excuse. I literally stated:

 

Never said a thing about excusing the operating system.

 

But I digress. I personally don't feel the risk of "losing your data" is a significant enough reason to be wary of adopting Windows 11... especially if you're going to be using a 12th-generation CPU that relies on it for its optimized scheduler, and especially when you should be using cloud or your own local network storage to back your shit up.

 

If you don't want to - obviously you do you. I just gave a very modern and legitimate reason why a normal person might not care.

Right, but you probably live in a more techcentric circle. None of my friends, family, or coworkers use backups or NAS. 

 

I also never said I don't back up my data; I said I don't want to wake up and have it all gone. Restoring it takes time, which is not something I have in the morning as I try to fire things off before work. It's something I shouldn't have to contend with.

 

I wasn't putting words in your mouth at all. We were talking about Windows stability, why wouldn't I assume your reply was related to it? 

6 hours ago, leadeater said:

All of these only exist in marketing PR news announcements, there are zero on the market above 5000 that I know of. When they become real tangible products you can hold in your hand then we can have a different discussion 🙂

Jay has a set of 6000 😉 Pretty much every review kit came with 5200.

CPU: Ryzen 9 5900 Cooler: EVGA CLC280 Motherboard: Gigabyte B550i Pro AX RAM: Kingston Hyper X 32GB 3200mhz

Storage: WD 750 SE 500GB, WD 730 SE 1TB GPU: EVGA RTX 3070 Ti PSU: Corsair SF750 Case: Streacom DA2

Monitor: LG 27GL83B Mouse: Razer Basilisk V2 Keyboard: G.Skill KM780 Cherry MX Red Speakers: Mackie CR5BT

 

MiniPC - Sold for $100 Profit

Spoiler

CPU: Intel i3 4160 Cooler: Integrated Motherboard: Integrated

RAM: G.Skill RipJaws 16GB DDR3 Storage: Transcend MSA370 128GB GPU: Intel 4400 Graphics

PSU: Integrated Case: Shuttle XPC Slim

Monitor: LG 29WK500 Mouse: G.Skill MX780 Keyboard: G.Skill KM780 Cherry MX Red

 

Budget Rig 1 - Sold For $750 Profit

Spoiler

CPU: Intel i5 7600k Cooler: CryOrig H7 Motherboard: MSI Z270 M5

RAM: Crucial LPX 16GB DDR4 Storage: Intel S3510 800GB GPU: Nvidia GTX 980

PSU: Corsair CX650M Case: EVGA DG73

Monitor: LG 29WK500 Mouse: G.Skill MX780 Keyboard: G.Skill KM780 Cherry MX Red

 

OG Gaming Rig - Gone

Spoiler

 

CPU: Intel i5 4690k Cooler: Corsair H100i V2 Motherboard: MSI Z97i AC ITX

RAM: Crucial Ballistix 16GB DDR3 Storage: Kingston Fury 240GB GPU: Asus Strix GTX 970

PSU: Thermaltake TR2 Case: Phanteks Enthoo Evolv ITX

Monitor: Dell P2214H x2 Mouse: Logitech MX Master Keyboard: G.Skill KM780 Cherry MX Red

 

 

9 minutes ago, dizmo said:

Jay has a set of 6000 😉 Pretty much every review kit came with 5200.

Yeah, but have you actually tried buying them? Good luck with that lol

 

Heck, even for G.Skill, try and find the product on their site: not the news announcement page, the actual product page.

4 hours ago, Kisai said:

 

In a way, it's kinda disgusting that Intel still markets the CPU with unrealistic TDP.

 

We're probably going to see a wave of blown up PSU's due to OCP and failed CPU's due to using air-cooling on these parts until Intel starts admitting that 90w+ parts should have been liquid cooled and have a 250w PSU headroom.

 

 

How? 125 W is the guaranteed minimum consumption under load, so to speak. 240 W is its highest consumption under turbo.

21 minutes ago, leadeater said:

Yea but have you actually tried buying them? Good luck with that lol

 

Heck even for G.Skill try and find the product on their site, not the news announcement page the product page.

I think they're just waiting until the CPUs are available, for whatever reason. Just like how Asus doesn't have Z690 boards on their website (as of yesterday), but Gigabyte and ASRock do 🤷🏻‍♂️ That doesn't mean they won't be available at launch. 5200 is widely available though.

5 hours ago, Kisai said:

Personally, my opinion is that the e-cores should have "replaced" hyperthreading, as that could have solved the spectre/meltdown and similar attacks from even being possible.

Those attacks are not solely due to HT, but to branch prediction and cache eviction. A simple example is seeing how they also affect ARM CPUs that have no HT.

 

4 hours ago, Laborant said:

RISC does not require such a complicated microcode-translation, as the individual assembly-functions are already quite close to microcode.

You still have a decoder, since you want to break those instructions into µOPs or perform macro-op fusion beforehand.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga

13 hours ago, igormp said:

Someone managed 6200MT/s with some overclocking:

https://www.tomshardware.com/news/core-i9-12900k-smashes-multiple-world-records-at-68-ghz

 

Still not quite the 6400 you mentioned, and that everyone loves to throw around as if it were going to be standard from day 1.

 

Maybe 3 years from now DDR5 will be reasonable, both in terms of price and speeds/latencies.

You are overthinking the importance of latency/frequency on DDR5. You cannot compare DDR5 against DDR4 at identical clock speeds and expect performance to be identical; that's just far from true. The big piece everyone keeps forgetting to discuss about DDR5 and its independent 32-bit channels per DIMM is the ability to read from one of those channels while still writing to another. The implication this has for latency is significant, and should far exceed the similar benefit we already get from rank interleaving, which is known to be a 10% boon at worst in and of itself when it comes to latency at identical clocks.
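For reference, the "absolute latency" rule of thumb mentioned later converts CAS latency to nanoseconds as CL × 2000 / (data rate in MT/s). The comparison below shows why paper specs make DDR5 look worse; the two kits are common retail examples chosen for illustration, and the whole point of the post is that this first-order estimate ignores the independent 32-bit sub-channels.

```python
def cas_ns(cl: int, mt_per_s: int) -> float:
    """Absolute CAS latency in nanoseconds: CL clock cycles at the I/O clock
    (which ticks at half the MT/s data rate), hence CL * 2000 / MT/s."""
    return cl * 2000.0 / mt_per_s

ddr4 = cas_ns(16, 3200)   # DDR4-3200 CL16 -> 10.0 ns
ddr5 = cas_ns(36, 6000)   # DDR5-6000 CL36 -> 12.0 ns on paper
```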

 

I also want to point out some misinformation that was shared in Linus's video regarding the PMIC voltage regulator on these DIMMs. He mentioned that their being limited to 1.1 V would impact the ability to OC on these DIMMs and that you'd need special DIMMs in order to OC, but this simply isn't true. The PMIC is responsible for:

  • VTT (termination resistor voltage for command, address and control signals)
  • VPP (DRAM row access voltage)
  • VREF (reference voltage for DC power bias, specifically for address/command/control bus)
  • VDDSPD (voltage for the Serial Presence Detect EEPROM)

This is all covered in Micron's DDR5 white papers:

https://media-www.micron.com/-/media/client/global/documents/products/technical-marketing-brief/ddr5_key_module_features_tech_brief.pdf?la=en&rev=f3ca96bed7d9427ba72b4c192dfacb56

VDIMM is still very much supplied from the board, and matters far more than the voltages listed above in terms of DIMM limitations. The PMIC just helps boards that have poor memory VRMs remain competitive, which is a good thing for most (though pricing is likely going to suffer as a result). EDIT: I should clarify this, since it's a little confusing now. The voltage rail isn't going to be called "VDIMM" anymore; it's split between VDD (IO voltage) and VDDQ (DRAM core voltage). You also get access to special IVR and IMC voltages that were not present on previous board generations. No idea what these do to the PMIC yet, but I am working on it.

 

To cut my long rant short: frequency/latency figures are going to be very deceptive on DDR5 when it comes to discerning actual real-world performance. The old rule of "use absolute latency for a rough estimate" is simply insufficient, as tertiary timings are going to matter far more this time around, given the heavy importance the independent 32-bit channels have for performance. Even with that in mind, don't let the deceptively low base frequency of DDR5 serve as a distraction. DDR5 overclocks really well, even the garbage kits.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 

6 minutes ago, MageTank said:

You are overthinking the importance of latency/frequency on DDR5. You cannot compare DDR5 against DDR4 at identical clock speeds and expect performance to be identical, that's just far from true. The big piece everyone keeps forgetting to discuss about DDR5 and their independent 32-bit channels per DIMM is the ability to read from one of those channels while still writing to another. The implication this has on latency is significant, and should far exceed the similar benefits we already get from rank interleaving, which is known to be a 10% boon at worst in and of itself when it comes to latency at identical clocks.

 

I also want to point out some misinformation that was shared in Linus's video regarding the PMIC voltage regulator on these DIMMs. He mentioned that their being limited to 1.1 V would impact the ability to OC on these DIMMs and that you'd need special DIMMs in order to OC, but this simply isn't true. The PMIC is responsible for:

  • VTT (termination resistor voltage for command, address and control signals)
  • VPP (DRAM row access voltage)
  • VREF (reference voltage for DC power bias, specifically for address/command/control bus)
  • VDDSPD (voltage for the Serial Presence Detect EEPROM)

This is all covered in Micron's DDR5 white papers:

https://media-www.micron.com/-/media/client/global/documents/products/technical-marketing-brief/ddr5_key_module_features_tech_brief.pdf?la=en&rev=f3ca96bed7d9427ba72b4c192dfacb56

VDIMM is still very much supplied from the board, and matters far more than the voltages listed above in terms of DIMM limitations. The PMIC just helps boards that have poor memory VRMs remain competitive, which is a good thing for most (though pricing is likely going to suffer as a result).

 

To cut my long rant short: frequency/latency figures are going to be very deceptive on DDR5 when it comes to discerning actual real-world performance. The old rule of "use absolute latency for a rough estimate" is simply insufficient, as tertiary timings are going to matter far more this time around, given the heavy importance the independent 32-bit channels have for performance. Even with that in mind, don't let the deceptively low base frequency of DDR5 serve as a distraction. DDR5 overclocks really well, even the garbage kits.

Thanks for the info. Do you have any idea if higher-density sticks (32–64 GB) will be available soon? I saw many news sites mentioning a max of 128 GB per stick, but so far it seems only 16 GB sticks are available/announced.

1 minute ago, igormp said:

Thanks for the info. Do you have any idea if high-ish density sticks (32~64gb) will be available soon? I saw many news sites mentioning a max of 128gb per stick, but so far it seems that only 16gb are available/announced. 

Sorry, I do not. My samples were just 16 GB DIMMs.

 

As for there being a max limitation on capacity, I wouldn't be too concerned just yet. We used to use LR-DIMMs in our AM3 boards to far exceed the memory capacity officially supported on Bulldozer, so anything is technically possible.

 

I wouldn't mind testing this, but I can only imagine how difficult it's going to be to source LR-DIMMs for DDR5.

 

2 hours ago, dizmo said:

Right, but you probably live in a more techcentric circle. None of my friends, family, or coworkers use backups or NAS. 

 

I also never said I don't back up my data, I said I don't want to wake up and have it all gone. Restoring it takes time, not something I have in the morning as I try to fire off things before work. Something I shouldn't have to contend with. 

 

I wasn't putting words in your mouth at all. We were talking about Windows stability, why wouldn't I assume your reply was related to it? 

Jay has a set of 6000 😉 Pretty much every review kit came with 5200.

Will have to agree to disagree on some points.

 

I'll agree reinstalling is a pain, even if you don't lose data.

 

But I never said it was an excuse. I only said that even if it did require a reinstall, most people won't really lose data. 

 

Then again, my personal experience with this specific operating system has not reflected any such issues. Windows 11 has been critically received as a relatively smooth transition with most issues being nitpicks. My experience mirrors that.

 

I seem to remember the same doom and gloom about Windows 10, and I never had an issue with that, either. I'm no systems admin, but I had about 7 devices upgrade to 10 relatively early on with unremarkable experiences. I've had 2 systems on W11 for months now, again without issue.

 

While I can't vouch for everyone, I believe the amount of apprehension is disproportionate to the actual risk, which is further mitigated by the availability and accessibility of automated cloud saving. For every negative experience you hear about on the internet, there are 100+ people who never had an issue and so never said anything.

 

Of course, if your use case is mission-critical, you won't be taking any chances, despite backups, since downtime = money lost. However, those people aren't in this thread considering Windows 11. My workplace just moved to W10 last year; most businesses seem to live in legacy mode until they're kicked out. They were never part of this conversation.

Before you reply to my post, REFRESH. 99.99% chance I edited my post. 

 

My System: i7-13700KF // Corsair iCUE H150i Elite Capellix // MSI MPG Z690 Edge Wifi // 32GB DDR5 G. SKILL RIPJAWS S5 6000 CL32 // Nvidia RTX 4070 Super FE // Corsair 5000D Airflow // Corsair SP120 RGB Pro x7 // Seasonic Focus Plus Gold 850w //1TB ADATA XPG SX8200 Pro/1TB Teamgroup MP33/2TB Seagate 7200RPM Hard Drive // Displays: LG Ultragear 32GP83B x2 // Royal Kludge RK100 // Logitech G Pro X Superlight // Sennheiser DROP PC38x
