Intel 12th Gen Core Alder Lake for Desktops: Top SKUs Only, Coming November 4th +Z690 Chipset

Lightwreather

Some interesting graphs
 


Image of performance benchmarks between 12th gen Intel and Ryzen 5000


Anno 1800 - AvCPUWattFPS_DE - 1440p

 

3 hours ago, Kinda Bottlenecked said:

It'll definitely cost more to cool this thing 😐

 

I'll give it a pass.

Depends on what you are doing; for gaming, it is quite efficient.


Just now, WolframaticAlpha said:

Some interesting graphs
 


Image of performance benchmarks between 12th gen Intel and Ryzen 5000


Anno 1800 - AvCPUWattFPS_DE - 1440p

 

Depends on what you are doing; for gaming, it is quite efficient.

 

I am speaking in terms of productivity. A sustained 250 W will not be easy to keep cool.

 

The 5600X seems to be keeping up well too.

[Image: 1080p efficiency chart (15-1080-Efficiency-1.png)]

 


 


  

14 hours ago, RejZoR said:

As for Alder Lake, I read TPU's review first, and Alder Lake is a bit of a disappointment. Sure, it brings gains to the table, but at the expense of ridiculous temperatures and power consumption, and in many cases it barely beats the 5950X, which has been on the market for a full year now. People talk about hopes of AMD decreasing prices of their CPUs, but I frankly see no reason for it, and I'm sure neither does AMD. Intel didn't really make anything cheaper when Ryzen was meddling with their sales a bit in the beginning, so why would AMD now, given that the brand-new, just-released CPU is barely denting the 5950X's performance metrics? It would be nice if prices dropped so I could do something stupid like getting a 5950X instead of my existing 5800X for no reason other than YOLO. Hopefully those V-Cache (or whatever the new Ryzens will be) will come to AM4. Nothing wrong with the 5800X, but I like fiddling with new stuff, and if I can still fit those in my system I might try it out.

>why would AMD now given that brand new right now released CPU is barely denting 5950X performance metrics

 

Might be because the 12900K is actually pulling ahead of a $750 CPU while being a $600 one? PCWorld (Intel Core i9-12900K review: Intel. Is. Back. | PCWorld) had a good take on it:

Quote

Some still think AMD’s Ryzen 9 5950X wasn’t a great value, while others think a $750 CPU that offered performance on par with a $2,000 CPU from the year before was a steal. If you’re in the camp that thinks the Ryzen 9 price was a steal, than Intel’s aggressive $589 pricing of the Core i9-12900K will have you screaming for the company to take your debit card. Yes, the price is the bulk price, but traditionally, the 1,000 unit “tray” pricing is within dollars of the street price once initial demand settles.

 

>it brings gains to the table, but at expense of ridiculous temperatures and power consumption

It really depends on your workload. Gaming is quite fine: Intel Core i9-12900K(F), Core i7-12700K and Core i5-12600K Review - Gaming in really fast and really frugal | Part 1 | Page 7 | igor'sLAB (igorslab.de). It is only in productivity that the 12900K becomes a space heater. The 12900K was a stupid halo product from Intel; the 12600K and the 12700K are much cooler. There is very little incentive for a Zen 3 owner to upgrade, unless you clamor for 7-12% increases in performance.

 

>Intel didn't really make anything cheaper when Ryzen was meddling their sales a bit in the beginning

That might be a PR move. Ryzen 5xxx has been on the market for a year+. I won't be surprised if AMD decides to suddenly give price cuts because of "improvements in its supply chain".


6 hours ago, WolframaticAlpha said:

  

>why would AMD now given that brand new right now released CPU is barely denting 5950X performance metrics

 

Might be because the 12900K is actually pulling ahead of a $750 CPU while being a $600 one? PCWorld (Intel Core i9-12900K review: Intel. Is. Back. | PCWorld) had a good take on it:

 

>it brings gains to the table, but at expense of ridiculous temperatures and power consumption

It really depends on your workload. Gaming is quite fine: Intel Core i9-12900K(F), Core i7-12700K and Core i5-12600K Review - Gaming in really fast and really frugal | Part 1 | Page 7 | igor'sLAB (igorslab.de). It is only in productivity that the 12900K becomes a space heater. The 12900K was a stupid halo product from Intel; the 12600K and the 12700K are much cooler. There is very little incentive for a Zen 3 owner to upgrade, unless you clamor for 7-12% increases in performance.

 

>Intel didn't really make anything cheaper when Ryzen was meddling their sales a bit in the beginning

That might be a PR move. Ryzen 5xxx has been on the market for a year+. I won't be surprised if AMD decides to suddenly give price cuts because of "improvements in its supply chain".

I really don't know what to think of this launch right now, and I don't know what I would recommend to people.

 

It seems like "it depends on what you need" is more true than ever.

 

- If you are a budget gamer, the 5600G or 5600X remain good choices I guess, but they are falling behind the Intel 12600K, albeit at a lower cost once you factor in the motherboards.

- If you are a midrange gamer only, the 12600K seems very compelling, even accounting for motherboard pricing. It beats the 5800X in a similar price range (when factoring in the motherboard).

- For gaming + streaming, you should probably go 12600K over the 5800X. I haven't seen specific benchmarks for the 5800X vs 12600K, but multicore is probably similar, and the 12600K beats it on gaming.

- For midrange productivity with no gaming, you could go either 12600K or 5800X. They are probably similar in performance, and in price when factoring in the mobo.

- For high-end productivity only, you could go either way depending on budget and preferences. The 5900X or 5950X are good choices if you care about power consumption and not gaming. The 12900K if you don't care about power consumption and also want really good gaming performance, or if you simply want the newer platform with DDR5.


 

15 hours ago, Blademaster91 said:

Boards are way overpriced for the US then; not sure if retailers are taking advantage of the consumer as they are with GPUs. An ASUS Z690 Strix F is $399, while the ASUS X570 Strix F is $299. And high-end DDR4 didn't really matter, at least for gaming and most applications.

What exactly makes me an "intel hater" for asking if the system used DDR5? The fact that testing uses DDR5 means Intel has the absolute best performance possible, although DDR5 being much more expensive than DDR4 isn't any advantage unless cost doesn't matter to you.

No, everyone whined when AMD was more expensive, regardless of performance; there seems to be a double standard on pricing. With Intel it's "only $100 more" than the previous i9, although IMO $550 for the flagship desktop CPU is already too expensive.

The 5950X is too expensive for most use cases, and the 5900X is the best value per core on AMD, but at least you can put one on a $200 board, only need to spend $100-150 on RAM, and can use it with a $50 air cooler. Not the case with a 12900K, as Z690 boards start around $300, RAM is $300, and you need a $150 water cooler for it as well.

As of right now, certain tasks literally perform better with DDR4,

so for some of the tests, it may not have exactly been a complete "best case scenario" for Intel.



21 hours ago, MageTank said:

Yeah... now that you mention it, they are not using AVX. I've gone from the cheapest ML240L AIOs all the way up to 360 mm Asetek designs from Corsair and still can't prevent these things from hitting 100°C in seconds under AVX Prime95. I'll have to run through Cinebench or something to see if I can replicate their results.

 

It's not that I do not understand you; it's that the message you are trying to convey is incorrect and should not be spread, due to it being pure fiction. If you believe I misrepresented your point, feel free to clarify what you meant and I'll determine if what you are saying is incorrect. I've cited most of my sources, but if you need me to get technical and provide some whitepapers, I do not mind.

 

This is an agreeable outlook on the situation. One benefit of utilizing excess RAM to "get your money's worth" is to use the extra RAM as a block-level cache. It even prolongs your SSD's lifespan by letting the mundane writes hit your memory, not the SSD.
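A toy sketch of why a RAM write-back cache spares the SSD (illustrative only, not modeling any particular caching product): repeated writes to the same block coalesce in memory, and only the final version is flushed.

```python
# Minimal write-back cache: writes land in RAM; only flush() touches the "SSD".
class WriteBackCache:
    def __init__(self):
        self.cache = {}       # block_id -> latest data, held in RAM
        self.ssd_writes = 0   # writes that actually reached the "SSD"

    def write(self, block, data):
        self.cache[block] = data  # absorb (and overwrite) the write in RAM

    def flush(self):
        self.ssd_writes += len(self.cache)  # one SSD write per dirty block
        self.cache.clear()

c = WriteBackCache()
for i in range(1000):
    c.write(block=7, data=i)  # 1000 rewrites of the same block
c.flush()
print(c.ssd_writes)  # 1 -- the SSD saw a single write instead of 1000
```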

 

I actually have a couple Alder Lake setups behind me if anyone wants me to run through some specific tests. Both a DDR4 and DDR5 system, similar ASUS boards too. I can't test latency right now, and bandwidth results seem extremely inaccurate due to how software is perceiving DDR5's individual 32-bit memory channels, so you end up with higher peak theoretical bandwidth than what should be possible (still trying to figure this out as we speak).
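The double-counting is easy to see with simple arithmetic; a sketch assuming a two-DIMM DDR5-5200 setup (illustrative numbers): each DDR5 DIMM exposes two independent 32-bit sub-channels, so software that treats each sub-channel as a full 64-bit channel doubles the apparent peak.

```python
# Peak theoretical bandwidth: transfers/s * bytes per transfer * channel count.
def peak_gbps(mt_per_s, bus_bits, channels):
    return mt_per_s * (bus_bits / 8) * channels / 1000  # GB/s

# Two DDR5-5200 DIMMs = four 32-bit sub-channels
print(peak_gbps(5200, 32, 4))  # 83.2 GB/s
# Same config mis-read as four "full" 64-bit channels
print(peak_gbps(5200, 64, 4))  # 166.4 GB/s -- inflated by 2x
```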

 

In my preliminary tests, my high-speed DDR4 kits seem to outperform the 5200 C38 DDR5 kit I have, and both certainly outperform the 4800 C40 kit, lol. DDR5 will no doubt be better as it matures and we get a second iteration of the IMC to push these limits. Right now, I haven't the slightest clue how to properly OC these DDR5 kits, even with a long history of memory OCing. You have an RTL and IO channel per 32-bit channel, per DIMM. I have to assume these are strapped and can't be altered individually, yet these ASUS boards totally let me do exactly that.

Hey mate! I’ve actually been looking for some data on the AVX-512 performance, beyond what Anandtech have posted!
 

Especially looking to compare performance of the i9 vs i7 when both have all E-cores disabled. The i9 should retain an extra 5 MB of L3 compared to the i7, but have the same number of cores and all other options.
 

Especially interested in performance in compute workloads like CFD, as well as code paths/performance when using different AVX-512 compiler flags.


Have you done any testing as to what happens when you load up all the cores while also thrashing I/O? Something like ZFS parity compute with dedup on NVMe tends to be a good test for pushing every part of a system to its limits.

 

(The AVX-512 implementation is supposed to be the same as Sapphire Rapids minus some minor cache line differences, as far as I know.)

 

I'm looking at the i7 for a small CFD workstation, with a few accelerator cards for CUDA/SYCL/HIP using the CPU lanes, and high-bandwidth network and storage over the chipset/DMI.

 

 


1 hour ago, Camofelix said:

Hey mate! I've actually been looking for some data on the AVX-512 performance, beyond what Anandtech have posted!

Especially looking to compare performance of the i9 vs i7 when both have all E-cores disabled. The i9 should retain an extra 5 MB of L3 compared to the i7, but have the same number of cores and all other options.

Especially interested in performance in compute workloads like CFD, as well as code paths/performance when using different AVX-512 compiler flags.

Have you done any testing as to what happens when you load up all the cores while also thrashing I/O? Something like ZFS parity compute with dedup on NVMe tends to be a good test for pushing every part of a system to its limits.

(The AVX-512 implementation is supposed to be the same as Sapphire Rapids minus some minor cache line differences, as far as I know.)

I'm looking at the i7 for a small CFD workstation, with a few accelerator cards for CUDA/SYCL/HIP using the CPU lanes, and high-bandwidth network and storage over the chipset/DMI.

 

 

I haven't (truth be told, I wasn't aware that these supported AVX-512 until someone here showed me that review). I'll check if my boards have the setting to enable it and, if so, I'll run some tests. Note that I only have 12900K/Fs and 12700K/Fs, no i5s at the moment.

 

What is interesting to note is that apparently some CPUs will have AVX-512 completely fused off, so I'll be curious to compare my retail vs engineering samples if this is true.



2 hours ago, MageTank said:

I haven't (truth be told, I wasn't aware that these supported AVX-512 until someone here showed me that review). I'll check if my boards have the setting to enable it and, if so, I'll run some tests. Note that I only have 12900K/Fs and 12700K/Fs, no i5s at the moment.

What is interesting to note is that apparently some CPUs will have AVX-512 completely fused off, so I'll be curious to compare my retail vs engineering samples if this is true.

So far it seems like it's only an option on ASUS, Gigabyte, and ASRock boards, with ASUS having the most documented uses so far.

 

AFAIK there haven't been any cases of AVX-512 not working once enabled, but it's also a very small subset of people who would be trying to use it, so we might have to wait and see


Follow up to the above:

In y-cruncher:

 

By turning off the E-cores and enabling AVX-512, Hardwareluxx.de saw a massive increase in performance over having the E-cores enabled.

 

 

*On the same CPU*

Without AVX-512, using 24 threads of AVX2 (16 P AVX2 and 8 E AVX2), the i9-12900K took 40.56% longer than the AVX-512 configuration (16 P AVX-512).

 

Without AVX-512, using 16 threads of AVX2 (12 P AVX2 and 4 E AVX2), the i5-12600K took 39.86% longer than the AVX-512 configuration (12 P AVX-512).

 

[Image: y-cruncher benchmark results]

 

The power draw during computation was close in both cases. For the i9, the AVX-512 compute pulled 10 W less AND finished faster, while for the i5, the AVX2 (E-cores on) run drew slightly less power (10 W) but took significantly longer, meaning the AVX-512 version still comes out ahead in overall power consumption.
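For concreteness, here is the arithmetic behind those percentages as a small sketch; the runtimes are normalized and the wattages are made-up placeholders, not Hardwareluxx's measured numbers.

```python
# Energy-to-solution = power * time, so a run that draws slightly less power
# but takes much longer can still use more total energy.
def energy(watts, seconds):
    return watts * seconds  # joules if inputs are W and s

# i9-12900K: the AVX2 run took 40.56% longer than the AVX-512 run
t512, t2 = 100.0, 140.56  # normalized runtimes
print(round(t2 / t512, 4))  # 1.4056 -- i.e. AVX-512 is 1.4056x faster

# i5-style scenario: AVX2 draws 10 W less but takes 39.86% longer
# (150 W is a placeholder baseline, not a measurement)
e512 = energy(150.0, 100.0)
e2 = energy(140.0, 139.86)
print(e512 < e2)  # True: finishing sooner beats the small power saving
```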

 

I haven't been able to track down an idle power consumption number with good methodology that compares 8P/16T vs 16P+8E/24T idle power draw. (It seems to be the only power number not on AnandTech.)


or I'm blind 

 

 

Hardwareluxx.de:

Direct source link: https://www.hardwareluxx.de/index.php/artikel/hardware/prozessoren/57430-core-i9-12900k-und-core-i5-12600k-hybrid-desktop-cpus-alder-lake-im-test.html?start=16
Internet Archive link: https://web.archive.org/web/20211105171349/https://www.hardwareluxx.de/index.php/artikel/hardware/prozessoren/57430-core-i9-12900k-und-core-i5-12600k-hybrid-desktop-cpus-alder-lake-im-test.html?start=16

 

 


11 hours ago, Camofelix said:

So far it seems like it's only an option on ASUS, Gigabyte, and ASRock boards, with ASUS having the most documented uses so far.

 

AFAIK there haven't been any cases of AVX-512 not working once enabled, but it's also a very small subset of people who would be trying to use it, so we might have to wait and see

I use CAD and finite element analysis (FEA). Some of my simulations do make use of AVX2, and of AVX-512 on rare occasions.

See the link below by Dr. Ian Cutress on Alder Lake and AVX-512.

This is one thing that's holding me back from Threadripper, as many FEA simulation platforms were compiled using the Intel C++ libraries.

Those libraries actively check for Intel CPUs; once they detect a non-Intel CPU, they disable AVX and AVX2 instructions, so simulations would take forever on AMD.

I am also interested in ECC RAM, so that throws the 12900K out the window, as it's on-die ECC only, not full ECC. I have to wait for the Xeon version of Alder Lake, the W-1400 series, or Sapphire Rapids Xeons.

Then there's the story of PCIe lane count... I don't want to throw away my P4000 Quadros, and current new Quadros are super expensive.

I need Sapphire Rapids... now!

 

https://www.anandtech.com/show/17047/the-intel-12th-gen-core-i912900k-review-hybrid-performance-brings-hybrid-complexity/15


1 hour ago, bassam64i said:

This is one thing that's holding me back from Threadripper, as many FEA simulation platforms were compiled using the Intel C++ libraries.

Those libraries actively check for Intel CPUs; once they detect a non-Intel CPU, they disable AVX and AVX2 instructions, so simulations would take forever on AMD.

I don't think that is really the case; you can compile something with the Intel compiler and AVX/AVX2 will work on AMD systems. Up until Zen 3, AMD's FP throughput was just much, much lower than Intel's on a per-core basis, and it still is even for Zen 3. Additionally, Threadripper does not even have any Zen 3 products yet, so even though Zen 3 is quite a lot better in that regard, the best you can use is a 5950X.

 

A 10-core 10900K is in many cases able to keep up with the 5950X in heavy FP workloads that utilize AVX.

 

What you said was only really the case in 2009/2010, and Intel got dragged past the FTC and smacked on the hand for it. But even then, note that it's more complicated than that, because AMD CPUs back then largely didn't even support AVX2, so the compiler was doing a correct job anyway. There's simply a lot of false sentiment and information around about Intel in regards to AMD, some true but most not.

 

Puget Systems is happily recommending and selling AMD Ryzen 5000 systems for Solidworks; AutoCAD is still a little more "Intel friendly", but you can still use Ryzen 5000 just fine.

https://www.pugetsystems.com/recommended/Recommended-Systems-for-SOLIDWORKS-150/Hardware-Recommendations

 

Quote

AMD Ryzen 9 5900X 3.7GHz 12 Core - Simulation workloads are still sensitive to clock speed, but many also scale with additional CPU cores. Because of that, AMD's higher core count Ryzen processors are ideal choices, especially for FEA and Flow simulations, while still offering great performance for general modeling tasks in SOLIDWORKS. The additional cores also improve rendering performance with PhotoView 360, but if that is your main focus then an even higher-end Threadripper will be faster yet. AMD's 16-core model in this family, the Ryzen 9 5950X, will also be slightly faster with rendering and certain simulations.

 

https://www.pugetsystems.com/labs/articles/SOLIDWORKS-2020-SP5-AMD-Ryzen-5000-Series-CPU-Performance-2011/

 

TL;DR: AMD vs Intel simulation performance is very close, so neither is a wrong or limiting choice; for rendering, AMD is significantly faster. Alder Lake is probably now the best, because it's likely the fastest for simulation while now able to match on rendering. However, I have not seen an Alder Lake Solidworks review/benchmark yet, so this is just my speculation based on past results and current reviews.


21 minutes ago, leadeater said:

I don't think that is really the case; you can compile something with the Intel compiler and AVX/AVX2 will work on AMD systems. Up until Zen 3, AMD's FP throughput was just much, much lower than Intel's on a per-core basis, and it still is even for Zen 3. Additionally, Threadripper does not even have any Zen 3 products yet, so even though Zen 3 is quite a lot better in that regard, the best you can use is a 5950X.

A 10-core 10900K is in many cases able to keep up with the 5950X in heavy FP workloads that utilize AVX.

What you said was only really the case in 2009/2010, and Intel got dragged past the FTC and smacked on the hand for it. But even then, note that it's more complicated than that, because AMD CPUs back then largely didn't even support AVX2, so the compiler was doing a correct job anyway. There's simply a lot of false sentiment and information around about Intel in regards to AMD, some true but most not.

Puget Systems is happily recommending and selling AMD Ryzen 5000 systems for Solidworks; AutoCAD is still a little more "Intel friendly", but you can still use Ryzen 5000 just fine.

https://www.pugetsystems.com/recommended/Recommended-Systems-for-SOLIDWORKS-150/Hardware-Recommendations

 

 

https://www.pugetsystems.com/labs/articles/SOLIDWORKS-2020-SP5-AMD-Ryzen-5000-Series-CPU-Performance-2011/

CAD packages benefit from single-thread performance.

FEA solvers (which are sometimes part of a CAD package) are almost always compiled with Intel C++ compilers, and they are a different kettle of fish.

The issue of FEA solvers compiled using Intel C++ compilers running slowly on AMD processors is very well covered all over the internet. Please see the link below from Puget Systems on how they found a loophole to enable AVX with these FEA solvers.

https://www.pugetsystems.com/labs/hpc/How-To-Use-MKL-with-AMD-Ryzen-and-Threadripper-CPU-s-Effectively-for-Python-Numpy-And-Other-Applications-1637/
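For reference, the workaround that Puget article describes boiled down to one environment variable; a minimal sketch (note that MKL_DEBUG_CPU_TYPE was removed in MKL 2020.1 and later, so treat this as historical):

```python
# MKL_DEBUG_CPU_TYPE=5 forced MKL's AVX2 code path on AMD CPUs.
# It must be set BEFORE the MKL-backed library loads, because MKL
# picks its code path at load time.
import os

os.environ["MKL_DEBUG_CPU_TYPE"] = "5"

# import numpy as np  # only now import the MKL-backed library
print(os.environ["MKL_DEBUG_CPU_TYPE"])  # 5
```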

 

and more here on the ANSYS forums; ANSYS is the largest provider of FEA solvers in the industry:

 

https://forum.ansys.com/discussion/25678/amd-vs-intel-processors-for-non-hpc-computing-mkl-issues-still-relevant


28 minutes ago, leadeater said:

I don't think that is really the case; you can compile something with the Intel compiler and AVX/AVX2 will work on AMD systems. Up until Zen 3, AMD's FP throughput was just much, much lower than Intel's on a per-core basis, and it still is even for Zen 3. Additionally, Threadripper does not even have any Zen 3 products yet, so even though Zen 3 is quite a lot better in that regard, the best you can use is a 5950X.

A 10-core 10900K is in many cases able to keep up with the 5950X in heavy FP workloads that utilize AVX.

What you said was only really the case in 2009/2010, and Intel got dragged past the FTC and smacked on the hand for it. But even then, note that it's more complicated than that, because AMD CPUs back then largely didn't even support AVX2, so the compiler was doing a correct job anyway. There's simply a lot of false sentiment and information around about Intel in regards to AMD, some true but most not.

Puget Systems is happily recommending and selling AMD Ryzen 5000 systems for Solidworks; AutoCAD is still a little more "Intel friendly", but you can still use Ryzen 5000 just fine.

https://www.pugetsystems.com/recommended/Recommended-Systems-for-SOLIDWORKS-150/Hardware-Recommendations

 

 

https://www.pugetsystems.com/labs/articles/SOLIDWORKS-2020-SP5-AMD-Ryzen-5000-Series-CPU-Performance-2011/

 

 

Furthermore, the mathematical models fed into these FEA solvers often require huge amounts of RAM. It's typical for a model to take anything from 8 hours to several days to complete and obtain results, hence why dual-channel memory CPUs aren't good enough. Historically, Xeons have always been kings in FEA...

Please see more below;

http://www.ozeninc.com/wp-content/uploads/2020/01/Understanding-Hardware-Selection-for-ANSYS-2019-Presentation-1.pdf

 


4 minutes ago, bassam64i said:

CAD packages benefit from single-thread performance.

FEA solvers (which are sometimes part of a CAD package) are almost always compiled with Intel C++ compilers, and they are a different kettle of fish.

The issue of FEA solvers compiled using Intel C++ compilers running slowly on AMD processors is very well covered all over the internet. Please see the link below from Puget Systems on how they found a loophole to enable AVX with these FEA solvers.

https://www.pugetsystems.com/labs/hpc/How-To-Use-MKL-with-AMD-Ryzen-and-Threadripper-CPU-s-Effectively-for-Python-Numpy-And-Other-Applications-1637/

 

and more here on the ANSYS forums; ANSYS is the largest provider of FEA solvers in the industry:

 

https://forum.ansys.com/discussion/25678/amd-vs-intel-processors-for-non-hpc-computing-mkl-issues-still-relevant

Yes, I am aware that single thread matters a lot; what you're reading about is Intel's math library (MKL), not the Intel compiler, FYI.

 

Solidworks Simulation performance is exactly what you are talking about: by default, with nothing changed, current versions of Solidworks on Ryzen 5000 give comparable and actually faster performance than Intel 10th Gen.

 

It's literally stated as such in my quotes from Puget, specifically about FEA. I was addressing specifically that; however, if you also render, then it's basically a walkover for Ryzen 5000.


5 minutes ago, bassam64i said:

 

 

Furthermore, the mathematical models fed into these FEA solvers often require huge amounts of RAM. It's typical for a model to take anything from 8 hours to several days to complete and obtain results, hence why dual-channel memory CPUs aren't good enough. Historically, Xeons have always been kings in FEA...

Please see more below;

http://www.ozeninc.com/wp-content/uploads/2020/01/Understanding-Hardware-Selection-for-ANSYS-2019-Presentation-1.pdf

 

Then use an EPYC 7003 workstation 😉


I'll wait for AMD to release the new generation in spring, probably March 2022 as they usually do. A 4-ish month difference will be much more comparable than the whole frigging year difference we have now, and then you can decide which option is better. Unless you just absolutely want a new system this moment and you game 99% of the time, in which case the 12900K might be compelling. Kinda like I jumped on the Ryzen 5800X the moment it was launched, because I was essentially waiting for that, and Intel didn't have anything worthwhile at the time anyway.


7 hours ago, bassam64i said:

I use CAD and finite element analysis (FEA). Some of my simulations do make use of AVX2, and of AVX-512 on rare occasions.

See the link below by Dr. Ian Cutress on Alder Lake and AVX-512.

This is one thing that's holding me back from Threadripper, as many FEA simulation platforms were compiled using the Intel C++ libraries.

Those libraries actively check for Intel CPUs; once they detect a non-Intel CPU, they disable AVX and AVX2 instructions, so simulations would take forever on AMD.

I am also interested in ECC RAM, so that throws the 12900K out the window, as it's on-die ECC only, not full ECC. I have to wait for the Xeon version of Alder Lake, the W-1400 series, or Sapphire Rapids Xeons.

Then there's the story of PCIe lane count... I don't want to throw away my P4000 Quadros, and current new Quadros are super expensive.

I need Sapphire Rapids... now!

 

https://www.anandtech.com/show/17047/the-intel-12th-gen-core-i912900k-review-hybrid-performance-brings-hybrid-complexity/15

Yeah, the shenanigans with the Intel MKL library in the last 1-2 years weren't a good look for Intel, though that seems to have gone away now with the Intel oneAPI push and the switch to the LLVM-based ICX, ICPX, and IFX compilers over the proprietary ICC, ICPC, and ifort compilers.

 

Not sure which part of the industry you're in, but typically a bit flip that would be caught by ECC is either large enough to be caught and rerun for that subsection of the mesh on that node, or close enough to the bottom of a float's rounding error / the sparse bounding line to be removed anyway.
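A small illustration of that magnitude argument (a sketch, not FEA-specific): flipping a low mantissa bit of a float64 is lost in solver tolerance, while a high exponent-bit flip is enormous and easy to catch as a residual blow-up.

```python
# Flip a single bit of a float64 by round-tripping through its raw 64-bit form.
import struct

def flip_bit(x, bit):
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

x = 0.5
print(abs(flip_bit(x, 0) - x))   # ~1.1e-16: below typical solver tolerances
print(flip_bit(x, 62))           # ~9e307: an obvious, detectable error
```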

 

In my case ECC is only relevant because it's hard to find high-capacity DIMMs that don't have it (think back to 32 GB LRDIMM DDR3, for example).

 

Not sure what you mean about the PCIe lane count. P4000s are compute-limited these days, no? Unless you're doing something that mandates that they return the data after every single calculation, at which point the code is just bad.

 

And yeah, Sapphire Rapids looks like it should be nice. More interesting to me is that we can use ADL (when properly set up) to simulate performance expectations for AVX-512 workloads.

 

 


7 hours ago, leadeater said:

I don't think that is really the case; you can compile something with the Intel compiler and AVX/AVX2 will work on AMD systems. Up until Zen 3, AMD's FP throughput was just much, much lower than Intel's on a per-core basis, and it still is even for Zen 3. Additionally, Threadripper does not even have any Zen 3 products yet, so even though Zen 3 is quite a lot better in that regard, the best you can use is a 5950X.

A 10-core 10900K is in many cases able to keep up with the 5950X in heavy FP workloads that utilize AVX.

What you said was only really the case in 2009/2010, and Intel got dragged past the FTC and smacked on the hand for it. But even then, note that it's more complicated than that, because AMD CPUs back then largely didn't even support AVX2, so the compiler was doing a correct job anyway. There's simply a lot of false sentiment and information around about Intel in regards to AMD, some true but most not.

Puget Systems is happily recommending and selling AMD Ryzen 5000 systems for Solidworks; AutoCAD is still a little more "Intel friendly", but you can still use Ryzen 5000 just fine.

https://www.pugetsystems.com/recommended/Recommended-Systems-for-SOLIDWORKS-150/Hardware-Recommendations

 

 

https://www.pugetsystems.com/labs/articles/SOLIDWORKS-2020-SP5-AMD-Ryzen-5000-Series-CPU-Performance-2011/

 

TL;DR: AMD vs Intel simulation performance is very close, so neither is a wrong or limiting choice; for rendering, AMD is significantly faster. Alder Lake is probably now the best, because it's likely the fastest for simulation while now able to match on rendering. However, I have not seen an Alder Lake Solidworks review/benchmark yet, so this is just my speculation based on past results and current reviews.

Yup, but where Zen struggles is that, due to the lack of AVX-512, it can't pre-pack items like the initial portions of CFD solvers before offloading them to an accelerator.

 

(Warning: CFD nerd time:)

 

Specifically important is the decrease in the number of clocks required to process AVX-512 in the processor, in combination with both the larger window and the wider decoder. I don't have the paper in front of me ATM, but IIRC just computing the total prepack before sending it off to a GPU was 3-4x slower with AVX2 vs AVX-512, going as far back as Skylake.

This linear stage typically takes up about 30% of the time of each time step per simulation, while things like the actual KSP solvers were taking ~60-65%; those, however, were incredibly parallelizable, to the point where Amdahl's law was starting to kick our butts.
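A quick Amdahl's-law sketch of that split (the ~30%/~65% figures above are used here illustratively, treating the ~65% solver share as the parallelizable fraction):

```python
# Amdahl's law: speedup is limited by the serial fraction of the work.
def amdahl_speedup(parallel_fraction, n_workers):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

for n in (4, 16, 64, 1024):
    print(n, round(amdahl_speedup(0.65, n), 3))

# Even with unlimited workers the speedup is capped at 1/0.35 ~= 2.86x,
# which is why shrinking the ~30% linear/prepack stage (e.g. via AVX-512)
# matters so much.
```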

 

Joys of FOSS, since my base stack is OpenMP, UCX, PETSc, and OpenFOAM.

 

 

All to say, the very specific changes in architecture that SR brings to AVX-512 look very promising for CFD workloads, and being able to have some of that in a home system under my desk will be superb for rapid iteration on code.

 

 

 

 


15 minutes ago, Camofelix said:

All to say, the very specific changes in architecture that SR brings to AVX-512 look very promising for CFD workloads, and being able to have some of that in a home system under my desk will be superb for rapid iteration on code.

Zen 4 is supposed to be getting that so end of 2022 and you might have similar cross platform capabilities, actual performance who knows heh.

 

15 minutes ago, Camofelix said:

Yup, but where Zen struggles is that, due to the lack of 512-bit packing, it can't pre-pack items like the initial portions of CFD solvers before offloading them to an accelerator.

Well, it also struggled before because each core had two 128-bit FP units, so when doing AVX2 256-bit operations it was fusing both FP units to do it, aka half throughput. That meant AVX1 and AVX2 were largely the same in throughput terms. However, as you note, there are other implications to a wider data stream, so there were performance gains, just not as much as on Intel, where both FP units were 256-bit, with an additional 512-bit-only FP unit in some core archs but not others.

 

Golden Cove (Alder Lake P cores) now has 3 fully fledged FP units; the 3rd unit is no longer AVX-512-only, so it can do 3x 256-bit operations or 2x 512-bit operations. Previously this was 2x 256-bit operations and 1x 512-bit operation, or on Xeon (some, not all)/HEDT, 2x 256-bit operations and 2x 512-bit operations (if you had a CPU with the additional AVX-512 unit).


47 minutes ago, leadeater said:

Zen 4 is supposed to be getting that so end of 2022 and you might have similar cross platform capabilities, actual performance who knows heh.

 

Well, it also struggled before because each core had two 128-bit FP units, so when doing AVX2 256-bit operations it was fusing both FP units to do it, aka half throughput. That meant AVX1 and AVX2 were largely the same in throughput terms. However, as you note, there are other implications to a wider data stream, so there were performance gains, just not as much as on Intel, where both FP units were 256-bit, with an additional 512-bit-only FP unit in some core archs but not others.

 

Golden Cove (Alder Lake P cores) now has 3 fully fledged FP units; the 3rd unit is no longer AVX-512-only, so it can do 3x 256-bit operations or 2x 512-bit operations. Previously this was 2x 256-bit operations and 1x 512-bit operation, or on Xeon (some, not all)/HEDT, 2x 256-bit operations and 2x 512-bit operations (if you had a CPU with the additional AVX-512 unit).

Hoping to see AMD finally adopt it properly, especially since the spec was published as far back as 2013 

 

The funny part to me is that Centaur (the part of VIA that actually holds an x86 license [and seems to be being spun off?]) already has AVX-512 on the market. I don't expect Zen 4's AVX-512 performance to be all that exceptional, unfortunately. Compared to AVX1 and AVX2 it will be amazing, but hopefully they give us more than just the foundation set.

 

Good point on the AVX2 implementation, I'd forgotten about that. It amazes me how many HPC devs don't look at the ISA/cluster they're targeting in more detail, but that's probably a topic for a different thread. (I haven't been on the LTT forums often in the last few years; are there any HPC/hardcore turbo nerd areas worth taking a gander at?)

 

Back to Alder Lake: still no word from Intel re: AVX-512 enablement, but MSI have now come out and said that they're going to be enabling support across their entire Z690 lineup (they reached out to Dr. Ian Cutress at AnandTech).

 

To me that's as good a sign as any that, at least for the K SKUs, AVX-512 will be treated like overclocking.

 

 


On 11/4/2021 at 1:45 PM, MageTank said:

That is possible, I just don't know why my numbers aren't matching everyone else's when we use the same monitoring software. I have tried both the latest (and beta) versions of AIDA64 and HWiNFO64, and the results are the same across my test benches. I can't quite determine what variable I am not controlling, unless reviewers were given different board firmware...

I've been pretty pleased with Core Temp over the years. Just make sure you watch during install to avoid getting one of those free-to-play games installed with it (you just have to uncheck a box). Other than that, it doesn't bug you about buying anything or shove ads at you, and it seems reliable enough to me. There's no specific update for Alder Lake at this time (the last update was in April), but it's worth trying.


Follow up on AVX-512 support:

Here's a post from Der8auer showing how to turn it on for ASUS boards:

 

Please note: the part about AnandTech calling it a leak, being wrong, etc. is a flat-out mischaracterization of the facts.

 

See this twitter thread for context:

 

Edited by Camofelix
Clarified remarks made in the video that were of questionable truthfulness

16 hours ago, Camofelix said:

See this twitter thread for context:

Yep, I remember that. It was my understanding that it was going to be completely disabled, to the point of also being disabled in the microcode and impossible to enable. I don't know where things got missed/mixed in the information flow, or whether this is actually a mistake and it's not supposed to be possible; who knows at this point, and we likely never will.

 

Worst case, a new stepping is released that actually does outright disable it, in which case the value of old-stepping CPUs will go up (no, I'm not saying to buy the bloody things as an investment).


On 11/6/2021 at 8:18 AM, RejZoR said:

I'll wait for AMD to release the new generation in spring, probably March 2022 as they usually do. A 4-ish month difference will be much more comparable than the whole frigging year difference we have now, and then you can decide which option is better. Unless you just absolutely want a new system this moment and you game 99% of the time, in which case the 12900K might be compelling. Kinda like how I jumped on the Ryzen 5800X the moment it launched, because I was essentially waiting for that and Intel didn't have anything worthwhile at the time anyway.

Or we can stop paying so much attention to halo products like the i9 and Ryzen 9 and look at the more sane segment, the i5 and i7.

If you ask me, very few people should be looking at the i9 and Ryzen 9, especially people on this forum. It's a complete waste of money. I bet that 9 out of 10 people buying these extremely high-end consumer chips are doing it for bragging rights, not because it's actually a good buy.

