Intel 12th Gen Core Alder Lake for Desktops: Top SKUs Only, Coming November 4th +Z690 Chipset

Lightwreather

Alder Lake seems to be running fine on Linux, but still lacks support for Intel's Thread Director:

embed.php?i=2111030-TJ-ALDERLAKE34&sha=12e70e3e2956&p=2

Source

 

13 minutes ago, RejZoR said:

It's almost as if you don't understand memory management

Hmm, seems like you're the one who doesn't.

 

14 minutes ago, RejZoR said:

Just because system would use 100% of RAM wouldn't mean it's not available to apps. Cached data can be discarded at any time without any issues.

And then that stuff is paged out, and you have the delay between that stuff being flushed out and new data coming in.

Also, most systems don't report cached data as used RAM anyway. If you see 100% RAM usage, that's most likely actual working memory for running applications, not cached data.
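This distinction shows up directly in how Linux reports memory. A minimal sketch (the numbers are made up for illustration) parsing `/proc/meminfo`-style text:

```python
# Sketch of how memory-reporting tools separate cache from real usage.
# MemAvailable accounts for reclaimable page cache; MemFree does not.
# Values below are illustrative, not from a real machine.
SAMPLE_MEMINFO = """MemTotal:       16384000 kB
MemFree:          512000 kB
MemAvailable:   10240000 kB
Cached:          9000000 kB"""

info = {}
for line in SAMPLE_MEMINFO.splitlines():
    key, rest = line.split(":")
    info[key] = int(rest.split()[0])  # value in kB

naive_used = info["MemTotal"] - info["MemFree"]           # counts cache as "used"
effective_used = info["MemTotal"] - info["MemAvailable"]  # cache treated as free

print(f"naive: {naive_used / info['MemTotal']:.0%} used")          # looks nearly full
print(f"effective: {effective_used / info['MemTotal']:.0%} used")  # actually ~38%
```

On a real Linux box you'd read the same fields from the actual `/proc/meminfo`; Windows Task Manager makes a similar distinction by reporting cached memory separately.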

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


Any reviews that have done A/B testing with DDR4 vs DDR5 in benchmarks and games yet? GN hasn't done it yet; they just released their main review with DDR5.

 

 


18 hours ago, suicidalfranco said:

but it's not true and doesn't net better performance

Linus literally made a test on the impact of background tasks in the most extreme scenario possible, and only got 3% more CPU usage. The biggest impact was on RAM usage.

 

 

It still makes more money for Intel, and that's what they truly care about, along with every other company on Earth.

 

Far better efficiency, though.

34 minutes ago, Dracarris said:

[attached screenshot]

 

So. 33% higher productivity performance at 70% more power? No thank you, the 5900X looks pretty appealing to me.

Performance is good to have, but my power bill and room temperature in summer are more important to me.

 

Ignoring cost for better cooler, DDR5 and those ludicrous expensive mainboards.

Yeah, but think about this:

what if it gives 25% better productivity at 30% more power, or 15% better productivity at only 10% more power?

 

Note that this is less "33% higher productivity performance at 70% more power" and more "70% more power for 33% higher productivity performance": power draw rises superlinearly with clock speed, so each extra bit of performance costs a disproportionate amount of power.
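To put a number on that trade-off, using the rough figures from this thread (both relative to the 5900X; purely illustrative):

```python
# Relative perf-per-watt from the rough figures in this thread:
# +33% productivity performance for +70% power vs the 5900X baseline.
baseline_perf, baseline_power = 1.00, 1.00
intel_perf, intel_power = 1.33, 1.70

efficiency_ratio = (intel_perf / intel_power) / (baseline_perf / baseline_power)
print(f"relative perf/watt: {efficiency_ratio:.2f}")  # ~0.78, i.e. ~22% worse
```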

58 minutes ago, MageTank said:

No, we realize that the ideal RAM condition would be "having enough to avoid swapping to disk". 100% utilization is a terrible idea for system memory.

Technically 100% is ideal and 101% is what's completely terrible: 100% means no wasted money while still being sufficient. In practice, though, hitting exactly that is essentially impossible.

░█▀▀█ ▒█░░░ ▒█▀▀▄ ▒█▀▀▀ ▒█▀▀█   ▒█░░░ ░█▀▀█ ▒█░▄▀ ▒█▀▀▀ 
▒█▄▄█ ▒█░░░ ▒█░▒█ ▒█▀▀▀ ▒█▄▄▀   ▒█░░░ ▒█▄▄█ ▒█▀▄░ ▒█▀▀▀ 
▒█░▒█ ▒█▄▄█ ▒█▄▄▀ ▒█▄▄▄ ▒█░▒█   ▒█▄▄█ ▒█░▒█ ▒█░▒█ ▒█▄▄▄


26 minutes ago, igormp said:

Alder lake seems to be running fine on linux, but still misses support for the thread director:

embed.php?i=2111030-TJ-ALDERLAKE34&sha=12e70e3e2956&p=2

Source

 

Hmm, seems like you're the one who doesn't.

 

And then that stuff is paged out, and you have the delay between that stuff being flushed out and new data coming in.

Also, most systems don't report cached data as used RAM anyway. If you see 100% RAM usage, that's most likely actual working memory for running applications, not cached data.

Right, clearing RAM takes half an hour... Oh wait, it doesn't. So instead we keep it 80% empty most of the time. Just like they added Thread Director, the same can be done for memory allocation and use.


2 minutes ago, RejZoR said:

Right, clearing RAM takes half an hour... Oh wait, it doesn't. So we instead keep it 80% empty most of the time.

Paging things in and out does take a long time, and as you yourself mentioned, when RAM isn't in use by active applications the system caches disk contents in it (which isn't shown as usage in Task Manager or whatever else you use), and that cache can simply be thrown away without any paging.
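A toy model of that difference (all names and numbers are mine, purely illustrative): clean page-cache pages cost nothing to reclaim, while anonymous application pages must be written out to swap first:

```python
# Toy model of memory reclaim, for illustration only: clean page-cache
# pages can be dropped for free, while anonymous (application) pages
# must first be written out to swap.

def reclaim_cost_ms(pages_needed, clean_cache_pages, swap_write_ms_per_page=0.1):
    """Return time spent reclaiming `pages_needed` pages.

    Clean cache pages are discarded instantly; anything beyond that
    has to be swapped out at `swap_write_ms_per_page` each.
    """
    from_cache = min(pages_needed, clean_cache_pages)
    from_swap = pages_needed - from_cache
    return from_swap * swap_write_ms_per_page

# Plenty of droppable cache: reclaim is effectively free.
print(reclaim_cost_ms(1000, clean_cache_pages=5000))  # 0.0
# RAM genuinely full of working-set data: every page costs a swap write.
print(reclaim_cost_ms(1000, clean_cache_pages=0))     # 100.0
```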

 

5 minutes ago, RejZoR said:

Just like they added Thread Director, the same can be done for memory allocation and use.

What do you mean? That doesn't make much sense to me.



Just now, igormp said:

Paging things in and out does take a long time, and as you yourself mentioned, when RAM isn't in use by active applications the system caches disk contents in it (which isn't shown as usage in Task Manager or whatever else you use), and that cache can simply be thrown away without any paging.

 

What do you mean? That doesn't make much sense to me.

Caches can literally be dumped into a void which takes nanoseconds. You don't need to page them back to L2 storage aka SSD/HDD.


1 minute ago, RejZoR said:

Caches can literally be dumped into a void which takes nanoseconds. You don't need to page them back to L2 storage aka SSD/HDD.

Yes, but caches don't count towards the RAM usage you see in Task Manager. When you say 100% usage, that implies actual working memory from running applications.



13 minutes ago, yesyes said:

 

Technically 100% is ideal and 101% is what's completely terrible: 100% means no wasted money while still being sufficient. In practice, though, hitting exactly that is essentially impossible.

100% still isn't ideal for memory. There is always a valid reason to have more than you need at a given time because use case varies and you always want a buffer in the event that something unexpected happens. 100% memory utilization at all times implies that all reads/writes are equal and do not deviate, which simply isn't true.

 

It is important to understand that redundancy is not a waste of money, there are entire industries built around this principle and they exist (and are quite profitable) for a reason.

 

50 minutes ago, RejZoR said:

@MageTank

It's almost as if you don't understand memory management. Just because system would use 100% of RAM wouldn't mean it's not available to apps. Cached data can be discarded at any time without any issues. Win10 and Win11 are already doing this, but not to an extent one would want to really make great use of RAM.

Friend, you can ask any soul on this forum if they think I do not understand memory management. Go ahead... I'll wait.

 

While we wait on that, let Professor MageTank (not to be confused with Past MageTank; that guy is a genius in his own right) take you to school for a moment. Cached data can't be "discarded at any time without any issues". This is the entire reason DDR5 moved to dual 32-bit channels per DIMM: so you can still read from one channel while writing to another without waiting for an entire cycle to complete. This was a significant bottleneck, and it's why we always saw significant performance boosts from adjusting tREFI and tRFC as far as latency is concerned. It paid dividends to be able to reduce how often you had to refresh, coupled with refreshing faster in general. Again, you're speaking from ignorance here and I hope nobody takes your ramblings seriously.
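The tREFI/tRFC effect can be sketched with back-of-the-envelope math: while a refresh (lasting tRFC) is in flight the rank can't serve requests, and refreshes come every tREFI on average, so roughly tRFC/tREFI of the time is lost. The timings below are illustrative, not from any particular kit:

```python
# Rough refresh-overhead math behind the tREFI/tRFC point: the fraction
# of time a rank is unavailable is approximately tRFC / tREFI.

def refresh_overhead(trefi_ns, trfc_ns):
    return trfc_ns / trefi_ns

stock = refresh_overhead(trefi_ns=7800, trfc_ns=350)    # ~4.5% of time lost
tuned = refresh_overhead(trefi_ns=65000, trfc_ns=280)   # ~0.4% of time lost
print(f"stock: {stock:.1%}, tuned: {tuned:.1%}")
```

Raising tREFI and tightening tRFC both shrink that lost fraction, which is why those two timings tend to pay off so visibly.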

1 minute ago, RejZoR said:

Caches can literally be dumped into a void which takes nanoseconds. You don't need to page them back to L2 storage aka SSD/HDD.

You still have to refresh those rows/columns/addresses BEFORE they can be used again. No matter how you cut it, this process takes time. Having capacity that isn't already wasted would mitigate this process and save you that time. Do I have to break out the MS Paint charts?

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


3 minutes ago, MageTank said:

Do I have to break out the MS Paint charts?

Not the one you're replying to, but I'd actually love that, if it's not too much of a hassle for you.



6 minutes ago, MageTank said:

100% still isn't ideal for memory. There is always a valid reason to have more than you need at a given time because use case varies and you always want a buffer in the event that something unexpected happens. 100% memory utilization at all times implies that all reads/writes are equal and do not deviate, which simply isn't true.

 

It is important to understand that redundancy is not a waste of money, there are entire industries built around this principle and they exist (and are quite profitable) for a reason.

 

Friend, you can ask any soul on this forum if they think I do not understand memory management. Go ahead... I'll wait.

 

While we wait on that, let Professor MageTank (not to be confused with Past MageTank; that guy is a genius in his own right) take you to school for a moment. Cached data can't be "discarded at any time without any issues". This is the entire reason DDR5 moved to dual 32-bit channels per DIMM: so you can still read from one channel while writing to another without waiting for an entire cycle to complete. This was a significant bottleneck, and it's why we always saw significant performance boosts from adjusting tREFI and tRFC as far as latency is concerned. It paid dividends to be able to reduce how often you had to refresh, coupled with refreshing faster in general. Again, you're speaking from ignorance here and I hope nobody takes your ramblings seriously.

You still have to refresh those rows/columns/addresses BEFORE they can be used again. No matter how you cut it, this process takes time. Having capacity that isn't already wasted would mitigate this process and save you that time. Do I have to break out the MS Paint charts?

The point was: if you have 16GB and the most you ever use is 15.9GB or whatever, then you bought just about the perfect amount of RAM. I also said that's impossible in practice, since different tasks use different amounts of RAM.

 

I mean, redundancy sort of is a waste if you think about it. There's a difference between spending $1000 on a 2000W PSU for something that only requires 700W, and buying two 1000W ones at $500 each (ignore that the pricing is unrealistic, it's just to give an idea).

tldr: overkill≠redundancy

 

I feel like you're not understanding what I'm saying. If you look in Task Manager and somehow see 100% RAM usage, you've run out of RAM, meaning you don't have enough; but if you never go above 95%, then you neither overspent nor underspecced your PC. That's what I'm trying to say, though of course it's not actually achievable.



Oh my god, forget it. Why do I even bother mentioning things when you people fuck them up by extrapolating assumptions about things I never said. Just forget it.

 

As for Alder Lake, I read TPU's review first, and Alder Lake is a bit of a disappointment. Sure, it brings gains to the table, but at the expense of ridiculous temperatures and power consumption, and in many cases it barely beats the 5950X, which has been on the market for a full year now. People talk about hopes of AMD decreasing prices of their CPUs, but I frankly see no reason for it, and I'm sure AMD sees it the same way. Intel didn't really make anything cheaper when Ryzen was denting their sales a bit in the beginning, so why would AMD now, given that a brand-new, just-released CPU is barely denting the 5950X's performance metrics?

It would be nice if prices dropped so I could do something stupid like getting a 5950X to replace my existing 5800X for no reason other than YOLO. Hopefully those V-Cache Ryzens (or whatever the new ones will be) will come to AM4. Nothing wrong with the 5800X, but I like fiddling with new stuff, and if I can still fit those in my system I might try it out.


2 minutes ago, yesyes said:

the point was if you have 16gb, and the most you ever use is 15.9gb or whatever, then you bought just about the perfect amount of RAM, I also stated how that was impossible due to different tasks using different amounts of RAM

 

I mean, it sort of is if you think about it, there's a difference between spending $1000 on a 2000W psu on something that only requires 700, and buying 2 1000W ones that cost $500 (ignore how that's impossible, it's just to give an idea))

tldr: overkill≠redundancy

 

I feel like you're not understanding what I'm saying, if you just look in task manager and see 100% RAM usage somehow, that means you've run out of RAM, meaning you don't have enough, but if you never go above 95%, then you didn't overspend or underkill your PC, that's what I'm trying to say, but ofc that's not possible

You should know that only the first part of my post was to you, not the entirety of it. You might be confusing what I am saying to you vs me talking to someone else.

 

You also seem to be confusing my point in general without reviewing my previous posts for the full context. I am not an advocate for buying obnoxious amounts of memory. In fact, I am quite the opposite, because I fully understand the limitations of memory controllers and the impact DR/2DPC configurations can have. Though I also wouldn't advise operating memory at 15.9GB if your max capacity is only 16GB. Ideally you'll want a buffer of roughly 20-25%, and capacity that reflects what your CPU threads can actually access at a given time. Again, nobody would ever advise pairing a Pentium with 64GB of RAM, not unless you are running a block-level cache or a RAM disk for some reason.

 

As for the PSU analogy, this doesn't work for a few reasons. You can have redundancy in a PSU without completely throwing away your efficiency curve. If you have a system that pulls 500W from the wall (factoring in PSU rating/efficiency in general), there isn't anything wrong with buying a 1000W PSU to mitigate aging capacitors and to sit comfortably around the 50% load point of the efficiency curve (which is often the most efficient). Would I personally do that? Not really; I don't care about efficiency and run a 2080 Ti with a 5950X on a 750W PSU, but there is a purpose behind doing this that isn't nonsensical.

 

Lastly, I'm not saying 95% usage is inherently bad; I just wouldn't operate my system memory with only a 5% capacity buffer, because I can't anticipate how any given application will handle its memory requests. I wouldn't advise others to run with such a small buffer either.
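The headroom rule of thumb above can be sketched as a tiny helper (the function name and capacity list are mine, purely illustrative):

```python
# Hypothetical sizing helper reflecting the ~20-25% headroom suggested
# above: given peak observed usage, pick the smallest common total
# capacity that still leaves the requested buffer.

COMMON_CAPACITIES_GB = [8, 16, 32, 64, 128]

def recommend_capacity(peak_usage_gb, buffer_fraction=0.25):
    needed = peak_usage_gb * (1 + buffer_fraction)
    for cap in COMMON_CAPACITIES_GB:
        if cap >= needed:
            return cap
    return COMMON_CAPACITIES_GB[-1]

print(recommend_capacity(12))  # 16 (12 GB peak + 25% buffer = 15 GB needed)
print(recommend_capacity(14))  # 32 (14 GB peak + 25% buffer overflows 16 GB)
```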

 

19 minutes ago, RejZoR said:

Oh my god. Forget it. Why do I bother and mention things you people fuck up extrapolating with assumptions of things I never said. Just forget it.

 

As for Alder Lake, I read TPU's review first and Alder Lake is a bit of a disappointment. Sure, it brings gains to the table, but at expense of ridiculous temperatures and power consumption and in many cases barely beating 5950X which has been on the market for a full year now. People talk about hopes of AMD decreasing prices of their CPU's, but I frankly see no reason for it. And I'm sure so does AMD. Intel didn't really make anything cheaper when Ryzen was meddling their sales a bit in the beginning, why would AMD now given that brand new right now released CPU is barely denting 5950X performance metrics? It would be nice if prices dropped so I could do something stupid like getting 5950X instead of existing 5800X for no reason other than YOLO. Hopefully those V-Cache or whatever new Ryzens will be will come to AM4. Nothing wrong with 5800X, but I like fiddling with new stuff and if I can still fit those in my system I might try it out.

I never assumed anything. The beauty of text-based mediums is that the context can always be found and is readily available with a simple scroll. I quoted you word for word without altering anything.

 

As for the Alder Lake reviews, I am a bit sketched out by them. Either my testing methodology is broken, or they are getting much lower thermals on their AIO's than I am. I am consistently hitting 100C across the P cores within seconds of Prime95 v30.3B6.


 

 

 


2 minutes ago, MageTank said:

You should know that only the first part of my post was to you, not the entirety of it. You might be confusing what I am saying to you vs me talking to someone else.

 

You also seem to be confusing my point in general without reviewing my previous posts for the full context. I am not an advocate for buying obnoxious amounts of memory. In fact, I am quite the opposite because I fully understand the limitations of memory controllers and the impact DR/2DPC configurations can have. Though I also wouldn't advise operating memory at 15.9GB if your max capacity is only 16GB. Ideally, you'll want a buffer of roughly 20-25% and capacity that reflects what your CPU threads can access at a given time. Again, nobody would ever advise using a pentium with 64GB of ram, not unless you are running a block-level cache or RAM disk for some reason.

 

As for the PSU analogy, this doesn't work for a few reasons. You can have redundancy in a PSU without completely throwing away your efficiency curve. If you have a system that pulls 500W from the wall (factoring in PSU rating/efficiency in general), there isn't anything wrong with buying a 1000W PSU to mitigate aging capacitors and to sit comfortably around the 50% load point of the efficiency curve (which is often the most efficient). Would I personally do that? Not really; I don't care about efficiency and run a 2080 Ti with a 5950X on a 750W PSU, but there is a purpose behind doing this that isn't nonsensical.

 

Lastly, I am not disagreeing that 95% usage isn't bad, I just wouldn't operate my system memory with only a 5% capacity buffer because I can't anticipate how any given application will handle their request for memory. I also wouldn't advise others to operate with such a small buffer either.

 

I never assumed anything. The beauty of text-based mediums is that the context can always be found and is readily available with a simple scroll. I quoted you word for word without altering anything.

 

As for the Alder Lake reviews, I am a bit sketched out by them. Either my testing methodology is broken, or they are getting much lower thermals on their AIO's than I am. I am consistently hitting 100C across the P cores within seconds of Prime95 v30.3B6.

I would most certainly not recommend it either; I was just saying it's technically the "not a dollar wasted" concept.

 

I personally am planning on buying a PC in around 6 months, and for it I'm going to need two Corsair AX1600i PSUs: not only so they don't run at 90% all the time, but so they don't run at 120% capacity all the time.

 


I think we can both agree that sitting at 95% usage all the time is not ideal. I was only describing the concept I mentioned earlier; I'd agree with you entirely that you should have at LEAST 30% more RAM than you'll need, just in case.



@MageTank

What's the point of quoting word for word when you don't seem to understand said words? It's something I've been experiencing on forums for years: people read things without understanding them. Either way, we're done talking about this before some mod gets offended by my words yet again.


20 minutes ago, MageTank said:

As for the Alder Lake reviews, I am a bit sketched out by them. Either my testing methodology is broken, or they are getting much lower thermals on their AIO's than I am. I am consistently hitting 100C across the P cores within seconds of Prime95 v30.3B6.

Are you using AVX? I believe many reviewers do not in their tests as it isn't exactly a realistic benchmark.

 

This is the reason why many reviewers are moving to using real-world tests like Blender for their thermals, as opposed to synthetic loads like Prime95 - they are more representative of what you'll actually see when doing real work.

CPU: i7 4790k, RAM: 16GB DDR3, GPU: GTX 1060 6GB


2 hours ago, Blademaster91 said:

I'm curious if they were using DDR5.

They were.

 

2 hours ago, Blademaster91 said:

Cheaper CPUs are nice, but Z690 boards are expensive and DDR5 has a price premium over DDR4.

From what I've seen on the Swedish market, the boards and DDR5 don't seem to carry that high of a price premium. They seem pretty competitive with AMD motherboards and DDR4 in terms of pricing, at least if you want products of a similar tier.

The problem right now seems to be that even the cheapest Z690 boards are pretty high end, and DDR5 seems to be priced (and specced) similar to high end DDR4.

This will probably fix itself as time goes on. A lower end chipset will be released, and the initial high price of DDR5 will quickly drop. It might be a problem for people who want a budget Alder Lake system right now though.


6 minutes ago, LAwLz said:

They were.

 

From what I've seen on the Swedish market, the boards and DDR5 don't seem to carry that high of a price premium. They seem pretty competitive with AMD motherboards and DDR4 in terms of pricing, at least if you want products of a similar tier.

The problem right now seems to be that even the cheapest Z690 boards are pretty high end, and DDR5 seems to be priced (and specced) similar to high end DDR4.

This will probably fix itself as time goes on. A lower end chipset will be released, and the initial high price of DDR5 will quickly drop. It might be a problem for people who want a budget Alder Lake system right now though.

https://www.tomshardware.com/news/intel-core-i9-12900k-and-core-i5-12600k-review-retaking-the-gaming-crown

 

DDR4 actually nets even better results than DDR5 in some specific uses, so asking "were they using DDR5" is a sign that you're just an Intel hater.

Also, even if DDR5 performed better in these cases, why is that a problem? AMD's 5950X used to cost $800 while the 11900K was $550, yet people didn't care about that; they cared about the performance.

Similarly, DDR5 carries a premium, and for now only Intel supports it. It's an advantage for Intel, and will be for several more months.



34 minutes ago, tim0901 said:

Are you using AVX? I believe many reviewers do not in their tests as it isn't exactly a realistic benchmark.

 

This is the reason why many reviewers are moving to using real-world tests like Blender for their thermals, as opposed to synthetic loads like Prime95 - they are more representative of what you'll actually see when doing real work.

Yeah... now that you mention it, they are not using AVX. I've gone from the cheapest ML240L AIOs all the way up to 360mm Asetek designs from Corsair and still can't stop these chips from hitting 100C within seconds under AVX Prime95. I'll have to run Cinebench or something and see if I can replicate their results.

 

38 minutes ago, RejZoR said:

@MageTank

what's the point of quoting word for word when you don't seem to understand said words. A typical thing I'm experiencing on forums for years. People read shit without understanding it. Either way, we're done talking about this before some mod gets offended by my words yet again.

It's not that I do not understand you. It's that the message you are trying to convey is incorrect and should not be spread, because it is pure fiction. If you believe I misrepresented your point, feel free to clarify what you meant and I'll determine whether what you are saying is incorrect. I've cited most of my sources, but if you need me to get technical and provide some whitepapers, I don't mind.

 

47 minutes ago, yesyes said:

I would most certainly not recommend it either, I was just saying how it's technically the "not a dollar wasted" concept.

 

I personally am planning on buying a PC in around 6 months, and for it, I'm going to need two Corsair AX1600I PSUs, not only so they don't run at 90% all the time, but so they don't run at 120% capacity all the time

 


I think we can both agree that always having 95% usage is not ideal, I was only stating how it is the concept I mentioned earlier, I would agree with you entirely in the idea that you should have at LEAST 30% more RAM than you'll need just in case.

This is an agreeable outlook on the situation. One benefit of utilizing excess RAM to "get your money's worth" is using the extra RAM as a block-level cache. It even prolongs your SSD's lifespan by letting mundane writes hit memory instead of the SSD.

 

8 minutes ago, yesyes said:

https://www.tomshardware.com/news/intel-core-i9-12900k-and-core-i5-12600k-review-retaking-the-gaming-crown

 

DDR4 actually nets even better results than DDR5 in some specific uses, so asking "were they using DDR5" is a sign that you're just an Intel hater.

Also, even if DDR5 performed better in these cases, why is that a problem? AMD's 5950X used to cost $800 while the 11900K was $550, yet people didn't care about that; they cared about the performance.

Similarly, DDR5 carries a premium, and for now only Intel supports it. It's an advantage for Intel, and will be for several more months.

I actually have a couple of Alder Lake setups behind me if anyone wants me to run some specific tests: both a DDR4 and a DDR5 system, on similar ASUS boards too. I can't test latency right now, and bandwidth results seem extremely inaccurate due to how software perceives DDR5's individual 32-bit memory channels, so you end up with a higher peak theoretical bandwidth than should be possible (still trying to figure this out as we speak).

 

In my preliminary tests, my high-speed DDR4 kits seem to outperform the 5200 C38 DDR5 kit I have, and both certainly outperform the 4800 C40 kit, lol. DDR5 will no doubt be better as it matures and we get a second iteration of the IMC to push these limits. Right now I haven't the slightest clue how to properly OC these DDR5 kits, even with a long history of memory OCing. You have an RTL and IO channel per 32-bit channel, per DIMM. I have to assume these are strapped and can't be altered individually, yet these ASUS boards totally let me do exactly that.


 

 

 


2 hours ago, Dracarris said:

[attached screenshot]

 

So. 33% higher productivity performance at 70% more power? No thank you, the 5900X looks pretty appealing to me.

Performance is good to have, but my power bill and room temperature in summer are more important to me.

 

Ignoring cost for better cooler, DDR5 and those ludicrous expensive mainboards.

Oh please...

You're talking about a peak power consumption difference of like 80 watts. 

 

I am not sure where you live, but in the US the average electricity cost is 10.42 cents per kilowatt-hour. That means 80 watts of extra power costs you about 0.83 cents an hour.

You could have your CPU pegged at 100% load 24/7 for an entire year and the difference in power cost would still only be around 73 dollars; at a more realistic few hours of full load a day, it's closer to 10.
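Spelling out that arithmetic (a quick sketch using the quoted 10.42 ¢/kWh average and a constant 80 W delta):

```python
# Yearly cost of an 80 W power-draw delta at the quoted US-average rate.
RATE_USD_PER_KWH = 0.1042
DELTA_W = 80

hourly = DELTA_W / 1000 * RATE_USD_PER_KWH  # ~$0.0083 per hour
yearly_24_7 = hourly * 24 * 365             # CPU pegged 24/7 all year: ~$73
yearly_4h_per_day = hourly * 4 * 365        # 4h of full load a day: ~$12

print(f"${hourly:.4f}/h | ${yearly_24_7:.0f}/yr at 24/7 | "
      f"${yearly_4h_per_day:.0f}/yr at 4h/day")
```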

 

The impact the i9 vs. the R9 will have on your power bill is laughably small. It's such a non-argument that I can't believe you would even bring it up. It feels like you're really scraping the bottom of the barrel to find excuses for AMD.

 

 

There are ways to paint AMD as still having the edge, but "it will cost more to run because of the higher power draw" is a laughably weak argument, especially when the theoretical cost difference amounts to pocket change per year in any realistic scenario.

 

 

A better argument would be to point out that Intel is clearly shipping the i9-12900K outside of its peak efficiency curve, probably so they can be at the top of the benchmarks. It would be interesting to see what the power consumption would look like if you downclocked the i9 to match the R9's performance. Maybe it would use less power than the AMD processor? Who knows...

 

 

By the way, if you care so much about efficiency, then bear in mind that for gaming these new Intel CPUs seem to be getting higher performance than AMD's processors at lower power consumption and temperatures. It's only in very heavy all-core CPU loads that Intel uses more power.


2 hours ago, RejZoR said:

@MageTank

It's almost as if you don't understand memory management. Just because system would use 100% of RAM wouldn't mean it's not available to apps. Cached data can be discarded at any time without any issues. Win10 and Win11 are already doing this, but not to an extent one would want to really make great use of RAM.

100% utilization would still cause excessive swapping, since any page that needs to come into system memory requires another page to be swapped out or released first. This is why it's so important to limit MSSQL Server to a value below the server's maximum capacity minus what you wish to reserve for the OS; otherwise performance will totally tank and be super erratic.

 

If you don't limit MSSQL Server, it will literally swallow all the system memory. 384GB? Sure, no problem, nom nom nom. 512GB? Sure, no problem, nom nom nom. The system then grinds to a halt.

 

As a general rule: 90% of total RAM at most, minus 4GB-8GB for the OS; that's the maximum set point to configure in MSSQL Server.

 

The above applies in general to everything: don't go above a consistent 90% memory utilization or you'll have a bad time. Windows memory management under memory pressure is complete ass.
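That rule of thumb, as a hypothetical helper (the function is mine; MSSQL's `max server memory` setting itself is configured in MB via `sp_configure`):

```python
# Hypothetical helper applying the rule of thumb above: cap MSSQL at
# min(90% of physical RAM, total minus a 4-8 GB OS reservation).

def mssql_max_memory_mb(total_gb, os_reserve_gb=8):
    capped = min(total_gb * 0.9, total_gb - os_reserve_gb)
    return int(capped * 1024)  # 'max server memory' is set in MB

print(mssql_max_memory_mb(384))  # 353894 MB (~345.6 GB; the 90% cap wins)
print(mssql_max_memory_mb(64))   # 57344 MB (56 GB; the OS reservation wins)
```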


1 minute ago, LAwLz said:

There are ways to paint AMD as having the edge still, but "it will cost more to run because of the higher power draw" is a laughably stupid argument.

Especially when we're talking about a theoretical cost difference of 7 dollars a year in a completely unrealistic scenario.

Apart from the fact that electricity costs much more than that where I live, the additional heat output is the much stronger argument for me. ACs aren't a thing here for environmental reasons and the heat output of my computer is a real issue in the summer given that it's in the room where I sleep.

 

Also, not everyone buys a PC primarily for gaming.


58 minutes ago, tim0901 said:

Are you using AVX? I believe many reviewers do not in their tests as it isn't exactly a realistic benchmark.

 

This is the reason why many reviewers are moving to using real-world tests like Blender for their thermals, as opposed to synthetic loads like Prime95 - they are more representative of what you'll actually see when doing real work.

AVX is super widely used, and AVX2 is widely used too. The difference is mainly that when actually running applications you generally aren't pegging the CPU at 100% all the time or going down an AVX/AVX2 code path all the time.
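For anyone curious whether their own chip even offers those code paths: on Linux the advertised AVX generations show up in the CPU feature flags. A minimal checker over a flags string (the sample line below is made up; on a real system you'd read it from /proc/cpuinfo):

```python
def supported_avx_levels(cpuinfo_flags):
    """Return which AVX generations appear in a /proc/cpuinfo-style
    'flags' field (a space-separated list of feature names, Linux)."""
    flags = set(cpuinfo_flags.split())
    return [lvl for lvl in ("avx", "avx2", "avx512f") if lvl in flags]

# Hypothetical flags line for illustration only.
sample = "fpu sse sse2 ssse3 avx fma avx2 bmi2"
print(supported_avx_levels(sample))  # ['avx', 'avx2']
```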


1 minute ago, leadeater said:

AVX is super widely used, and AVX2 is widely used too. The difference is mainly that when actually running applications you generally aren't pegging the CPU at 100% all the time or going down an AVX/AVX2 code path all the time.

I know - I meant that flat out 100% AVX as seen in Prime95 is not realistic. Should've made that clearer.

CPU: i7 4790k, RAM: 16GB DDR3, GPU: GTX 1060 6GB


24 minutes ago, leadeater said:

100% utilization would still cause excessive swapping, since any page coming into system memory first needs another page to be swapped out or released, or both. This is why it is explicitly important to limit MSSQL Server to a value below the server's maximum capacity minus whatever you wish to reserve for the OS; otherwise performance will totally tank and be super erratic.

 

If you don't limit MSSQL Server, it literally will swallow all the system memory: 384GB, sure, no problem, nom nom nom; 512GB, sure, no problem, nom nom nom; etc. The system then grinds to a halt.

 

As a general rule, go up to 90% at most, minus 4GB-8GB for the OS; that is the maximum value to configure in MSSQL Server.

 

The above applies in general to everything: don't go above a consistent 90% memory utilization or you'll have a bad time. Windows memory management under memory pressure is complete ass.

We're not talking about servers here. Also, if the OS can be aware of which tasks it's throwing at which cores, it could very well know what can be discarded at will and what absolutely has to remain in memory. Windows 7 pushed this further, then 8, then 10, and now 11 even more; Windows XP only kept the absolutely needed stuff in memory and literally wasted the rest of the RAM by not utilizing it. This still confuses most people even today, who look at Task Manager and go MY GOD MY MEMORY USAGE!!!!11111 Also yes, most people here talking about it obviously don't seem to understand any of it.
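The cached-vs-used distinction this argument hinges on is directly visible on Linux in /proc/meminfo: MemAvailable counts reclaimable page cache, which is why it's usually far larger than MemFree on a busy but healthy system. A small parser over a made-up sample:

```python
def mem_available_ratio(meminfo_text):
    """Parse /proc/meminfo-style text and return MemAvailable/MemTotal.
    'MemAvailable' includes reclaimable page cache, so it is normally
    much larger than 'MemFree' even when RAM looks 'full'."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        fields[key.strip()] = int(rest.split()[0])  # values are in kB
    return fields["MemAvailable"] / fields["MemTotal"]

# Illustrative numbers, not from a real machine: only ~3% is truly
# free, yet 75% is available because the cache can be dropped.
sample = """MemTotal:       16384000 kB
MemFree:          512000 kB
MemAvailable:   12288000 kB"""
print(mem_available_ratio(sample))  # 0.75
```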


4 minutes ago, RejZoR said:

We're not talking about servers here

It doesn't matter; Windows is Windows and apps are apps. If you allow something to swallow memory until 100% utilization, Windows just does not handle that properly. Anything below 99% is fine, but the closer you get to 100%, the greater the danger of spiking to 100% and then Windows running like ass.

 

Makes no difference if it's active memory or some kind of cached data; once you hit that magic 100%, RIP.

