Intel passing off bogus Xeon Server 'metrics'

WillyW
Solved by justpoet:
1 hour ago, jagdtigger said:

You say this like Intel never had issues when releasing a whole new CPU architecture...

Intel is perfect, Itanium is the future.

1 hour ago, Jito463 said:

Wasn't that actually ATI, not AMD?  I seem to recall those e-mails were from before the AMD acquisition.

It's pretty hard to follow the timeline, but it seems like things started before AMD bought ATI and continued after the purchase. The lawsuit itself happened several years after AMD had bought ATI, though.

 

But we don't have to go back that far to find shady benchmarks from AMD.

https://hothardware.com/news/intel-amd-misleading-rome-epyc-zen-2-benchmarks-xeon-9282

 

When AMD was comparing their processor to the Xeon 8280, they ran the benchmarks with Intel's optimizations for NAMD disabled, which reduced performance on the Intel system by about 30%.

 

Then we also had all these things they did during some of the earlier Zen benchmarks:

On 3/2/2017 at 9:16 PM, LAwLz said:

I think it's best if you watch it for yourself. He goes over, in decent detail, a lot of the different ways AMD tried to "cheat" to make their processor look better during the presentation.

But here are the things:

1) Created GPU bottlenecks for their gaming benchmarks.

2) Looked at the skybox more during the Sniper Elite demo. For example, they looked up into the skybox when reloading in the AMD test, but not in the Intel test.

3) During the Battlefield test they zoomed in a lot more often in the AMD run, and zooming in means fewer draw calls for the CPU to generate (because there are fewer things to draw).

4) Their Blender test was really lightweight. As Gamers Nexus says, it was basically "preview quality" and not settings you would actually use.

5) Did not allow the 6900K to use quad channel memory for the Cinebench run, thus halving its memory bandwidth.

 

There might be more as well. AMD also told Gamers Nexus that they should do the benchmarks with a GPU bottleneck (they might not have used those exact words, but that's how Gamers Nexus described the conversation with AMD). AMD probably wanted them to do that to hide the big performance gap between their processors and the 7700K (which performs much better in gaming).

 

 

 

And this is why you shouldn't trust first-party benchmarks. They almost never tell the full story. All hardware manufacturers are guilty of this, and recently too (not just in the old days). They almost always pick the settings that show the largest difference between them and their competitor.

If some optimization setting gives company A a 5% performance boost but company B a 10% boost, then company A will probably have it disabled in their official benchmarks. Sure, it makes their own product seem weaker, but it makes the competing product look weaker still, and that's what matters: how much better you can make your own product look compared to the competitor's.
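To put rough numbers on that (the 5% and 10% figures are just the hypothetical ones from the paragraph above, not real benchmark data), here's a quick sketch of why disabling a shared optimization can pay off for the slower vendor:

```python
# Hypothetical scores: baseline 100 for both vendors, with an optimization
# that helps company A by 5% and company B by 10% (the figures from the post above).
a_base, b_base = 100.0, 100.0
a_opt, b_opt = a_base * 1.05, b_base * 1.10

gap_with_opt = b_opt / a_opt        # ~1.048: B looks ~4.8% faster
gap_without_opt = b_base / a_base   # 1.000: dead even

print(f"Optimization enabled:  B is {(gap_with_opt - 1) * 100:.1f}% ahead of A")
print(f"Optimization disabled: B is {(gap_without_opt - 1) * 100:.1f}% ahead of A")
# So A's marketing benchmark will likely ship with the optimization turned off.
```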

Quote

On the GROMACS results, our testing used the 2019.3 version released in June with best known optimizations applied to both platforms. This included proactive enabling of 256b AVX2 for AMD. Since our original testing, an updated version of GROMACS 2019.4 now automates the AMD build options for their newest core, including autodetecting 256b AVX2 support. We have now tested using GROMACS 2019.4 and found no material difference to the performance geomean of the five GROMACS workloads (difference of 1.08%). The 2019.4 results are in-line with our previous 2019.3 results.

https://medium.com/performance-at-intel/hpc-leadership-where-it-matters-real-world-performance-b16c47b11a01

 

So in short, this turned out to be the world's biggest storm in a teacup. Intel had themselves implemented updated AVX support for AMD in 2019.3, which gave near enough the same results as the official 2019.4 release.
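For anyone wondering what the "geomean of the five GROMACS workloads" actually means, it's just the geometric mean of the individual scores. A minimal sketch with made-up workload numbers (Intel's raw scores aren't in the post):

```python
from math import prod

def geomean(scores):
    """Geometric mean of a list of positive benchmark scores."""
    return prod(scores) ** (1 / len(scores))

# Hypothetical per-workload scores; NOT Intel's actual data.
gromacs_2019_3 = [92.0, 41.5, 130.0, 57.0, 75.5]
gromacs_2019_4 = [93.1, 41.9, 131.0, 57.8, 76.2]

g3, g4 = geomean(gromacs_2019_3), geomean(gromacs_2019_4)
print(f"2019.3 geomean: {g3:.2f}")
print(f"2019.4 geomean: {g4:.2f}")
print(f"Difference: {abs(g4 - g3) / g3 * 100:.2f}%")  # small, in the ~1% range Intel quotes
```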

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible

1 hour ago, porina said:

So in short, this turned out to be the world's biggest storm in a teacup. Intel had themselves implemented updated AVX support for AMD in 2019.3, which gave near enough the same results as the official 2019.4 release.

Looks like it. Intel is still comparing a 4S CPU configuration (seriously, that's really what it is) at a higher per-socket price to a 2S configuration to pull off their 'victory'. It's also a specialty platform versus a generic open platform, so you may not even be able to use the Intel system even if you wanted to. These may seem like small nitpicks, but the requirements you'd need to meet to use the Intel system make them more than simple nitpicks. Plus, like Naples, there's no hiding your tricks from software, so just slapping multiple CPUs on a single package is not going to let you bypass/sidestep NUMA.

 

Edit:

Also worth noting that what the industry socket count (1S/2S/4S) meant has now been invalidated by what both Intel and AMD have done. In the past it was used to show the separation between CPUs and memory banks, essentially the NUMA domain count, so a 2S system meant 2 NUMA domains. Not anymore. We're going to need a new or amended notation, like we had to do with multi-node systems in a single chassis (not blades), e.g. 2U4N.
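For what it's worth, on Linux you can see how far socket count has drifted from NUMA domain count straight from sysfs. Rough sketch, Linux-only, and the count obviously depends on the machine and BIOS settings (e.g. NPS modes on EPYC):

```python
import glob
import re

# Each /sys/devices/system/node/nodeN directory is one NUMA domain.
numa_nodes = sorted(
    int(re.search(r"node(\d+)$", path).group(1))
    for path in glob.glob("/sys/devices/system/node/node[0-9]*")
)

print(f"NUMA domains visible to the OS: {len(numa_nodes)} -> {numa_nodes}")
# A "2S" Xeon 9200 box or a "1S" Naples EPYC box can report far more than 2 or 1
# domains here, which is the point: sockets != NUMA domains anymore.
```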

20 hours ago, GOTSpectrum said:

Thanks for the general overview, it's interesting to say the least.

RAM bandwidth is usually one of the main bottlenecks of AVX workloads, so even Zen EPYC didn't do that badly.

33 minutes ago, leadeater said:

Also worth noting that what the industry socket count (1S/2S/4S) meant has now been invalidated by what both Intel and AMD have done. In the past it was used to show the separation between CPUs and memory banks, essentially the NUMA domain count, so a 2S system meant 2 NUMA domains. Not anymore. We're going to need a new or amended notation, like we had to do with multi-node systems in a single chassis (not blades), e.g. 2U4N.

I now have similar pains with consumer CPUs, especially since Zen 2 came out. Now I have to worry about split L3 cache per CCX and limited IF bandwidth. NUCA? Things were so much simpler when they were monolithic, but I recognise that if we're to scale beyond today, those days are coming to an end. I hope the rumours that Zen 3 will have unified L3 per CCD come true, as that will make things a lot better.

 

32 minutes ago, cj09beira said:

RAM bandwidth is usually one of the main bottlenecks of AVX workloads, so even Zen EPYC didn't do that badly.

That's one problem with chewing through data quickly: feeding it fast enough. It will be interesting to see how both sides address this going forward. AMD's large L3 approach isn't bad, actually, but there's room for improvement. I'd also take an L4, or good old-fashioned extra RAM channels.
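A rough back-of-envelope on the "feeding it fast enough" problem; every number here (core count, AVX clock, bytes per FLOP, DDR4 channel count) is an illustrative assumption rather than a spec for any particular CPU:

```python
# Back-of-envelope: how much memory bandwidth a streaming AVX workload "wants".
# All numbers below are illustrative assumptions, not vendor specs.

cores = 64                 # assumed big server CPU
avx_ghz = 2.5              # assumed sustained clock under heavy AVX load
flops_per_cycle = 16       # one 256-bit FMA: 8 FP32 lanes * 2 ops (assumed)
bytes_per_flop = 0.25      # assumed arithmetic intensity of a fairly streaming kernel

peak_gflops = cores * avx_ghz * flops_per_cycle   # GFLOP/s
wanted_bw = peak_gflops * bytes_per_flop          # GB/s needed to keep the units fed

# 8 channels of DDR4-3200 is roughly 8 * 25.6 GB/s theoretical peak.
ddr4_bw = 8 * 25.6

print(f"Peak compute:        {peak_gflops:,.0f} GFLOP/s")
print(f"Bandwidth 'wanted':  {wanted_bw:,.0f} GB/s")
print(f"8ch DDR4-3200 peak:  {ddr4_bw:,.0f} GB/s")
# The gap between the last two lines is why big L3 (or an HBM-style L4) helps AVX-heavy code.
```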

15 minutes ago, porina said:

I now have similar pains with consumer CPUs, especially since Zen 2 came out. Now I have to worry about split L3 cache per CCX and limited IF bandwidth. NUCA? Things were so much simpler when they were monolithic, but I recognise that if we're to scale beyond today, those days are coming to an end. I hope the rumours that Zen 3 will have unified L3 per CCD come true, as that will make things a lot better.

 

That's one problem with chewing through data quickly: feeding it fast enough. It will be interesting to see how both sides address this going forward. AMD's large L3 approach isn't bad, actually, but there's room for improvement. I'd also take an L4, or good old-fashioned extra RAM channels.

It will be very interesting to see. I have a feeling the unified cache won't be better in all cases, as it will have higher latency. L4 will be the next big performance booster, hopefully it comes really soon. Just a single HBM-like L4 would be great: slightly better than DDR latency-wise but 10x the bandwidth, with very reasonable capacities. I can't wait.

1 minute ago, cj09beira said:

It will be very interesting to see. I have a feeling the unified cache won't be better in all cases, as it will have higher latency. L4 will be the next big performance booster, hopefully it comes really soon. Just a single HBM-like L4 would be great: slightly better than DDR latency-wise but 10x the bandwidth, with very reasonable capacities. I can't wait.

I said in the AMD thread that having a 1-4GB HBM2 stack as L4 would be insane for EPYC especially, even TR. But they could use it as a way to differentiate the two lines.

My Folding Stats - Join the fight against COVID-19 with FOLDING! - If someone has helped you out on the forum don't forget to give them a reaction to say thank you!

 

The only true wisdom is in knowing you know nothing. - Socrates
 

Please put as much effort into your question as you expect me to put into answering it. 

 

  • CPU
    Ryzen 9 5950X
  • Motherboard
    Gigabyte Aorus GA-AX370-GAMING 5
  • RAM
    32GB DDR4 3200
  • GPU
    Inno3D 4070 Ti
  • Case
    Cooler Master - MasterCase H500P
  • Storage
    Western Digital Black 250GB, Seagate BarraCuda 1TB x2
  • PSU
    EVGA Supernova 1000w 
  • Display(s)
    Lenovo L29w-30 29 Inch UltraWide Full HD, BenQ - XL2430(portrait), Dell P2311Hb(portrait)
  • Cooling
    MasterLiquid Lite 240

1 minute ago, GOTSpectrum said:

I said in the AMD thread that having a 1-4GB HBM2 stack as L4 would be insane for EPYC especially, even TR. But they could use it as a way to differentiate the two lines.

They could totally have two lines, one with it and one without; after all, it will probably not be cheap. Seems like history does love to repeat itself (remember when you could buy CPUs with and without cache).

16 hours ago, AnonymousGuy said:

I translate this to mean what I already know: if you want your shit to work off the bat, you buy Intel. If you want to worry about shit like exactly what RAM you're using, having to update everything, and being on a buggy platform to save an amount of money that is trivial to an enterprise, buy AMD.

This argument had a fair amount of validity two years ago, but you're way behind the times if you think that's accurate today. Hell, even Netflix is considering upgrading to Epyc servers.

Make sure to quote or tag me (@JoostinOnline) or I won't see your response!

PSU Tier List | The Real Reason Delidding Improves Temperatures | "2K" does not mean 2560×1440

16 minutes ago, cj09beira said:

They could totally have two lines, one with it and one without; after all, it will probably not be cheap. Seems like history does love to repeat itself (remember when you could buy CPUs with and without cache).

Well, in 2017, according to this GN article, the cost for 8GB of HBM2 was 150USD. Even if the prices are the same now (most likely it is slightly cheaper), you are only looking at 75USD for a stack, which is not really all that much more expensive when you think about it... especially for the EPYC chips that run into multiple thousands of USD.

5 minutes ago, GOTSpectrum said:

Well, in 2017, according to this GN article, the cost for 8GB of HBM2 was 150USD. Even if the prices are the same now (most likely it is slightly cheaper), you are only looking at 75USD for a stack, which is not really all that much more expensive when you think about it... especially for the EPYC chips that run into multiple thousands of USD.

But AMD now expects 40%+ profit margins, and that 75 bucks per stack is a very significant part of the raw cost of the chip for AMD, especially as EPYC will probably have multiple of these stacks, 2-4 is my guess.

1 minute ago, cj09beira said:

But AMD now expects 40%+ profit margins, and that 75 bucks per stack is a very significant part of the raw cost of the chip for AMD, especially as EPYC will probably have multiple of these stacks, 2-4 is my guess.

I think they could get away with a 4GB stack, and the profit margin on EPYC is probably much higher than 40% IMO. All they would have to do is increase the price by 100USD, easily done when they have such great performance/$ anyway. P.S. I know that's only 33%, but I'm working on the assumption HBM2 is a little cheaper now.
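The margin arithmetic being argued here is simple enough to write out. Quick sketch using the numbers from the posts above (the 2017 $150/8GB figure cited from the GN article and a hypothetical $100 price bump; these are assumptions, not AMD's actual costs):

```python
# Cost/margin arithmetic from the discussion above; all inputs are assumptions.
hbm2_cost_per_8gb = 150.0              # 2017 figure cited from the GN article
stack_size_gb = 4
stack_cost = hbm2_cost_per_8gb * stack_size_gb / 8   # ~$75 per 4GB stack

price_bump = 100.0                     # hypothetical extra charged per CPU
margin_on_addon = (price_bump - stack_cost) / stack_cost * 100

print(f"Cost of one 4GB HBM2 stack: ${stack_cost:.0f}")
print(f"Margin on a ${price_bump:.0f} price bump: {margin_on_addon:.0f}%")
# ~33%, the figure conceded above as being below the 40%+ target,
# unless HBM2 has gotten cheaper since 2017.
```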

7 minutes ago, cj09beira said:

But AMD now expects 40%+ profit margins, and that 75 bucks per stack is a very significant part of the raw cost of the chip for AMD, especially as EPYC will probably have multiple of these stacks, 2-4 is my guess.

If it brings a significant performance uplift to workloads then it's not really going to matter how much more it costs; Intel has Xeons with an FPGA on package that cost more. So even if such a part were created and cost $9k instead of $7k, it'd be worth it and still cheaper than Intel's top-spec mainstream Xeon.

 

What will stop AMD from doing it is a lack of the engineering and software backing to make it actually work and to properly implement and execute it. The only way AMD would achieve such a part is by partnering with Cray to jointly develop it, then having it as a specialty platform like the Intel Xeon 9200 series.

7 hours ago, valdyrgramr said:

>Can afford 100s of cores
Can't afford Windows

lol, who licenses POC boxes?  Must be mad

Please quote or tag me if you need a reply

4 minutes ago, valdyrgramr said:

My roommate.

Your roommate is daft ;)

40 minutes ago, leadeater said:

What will stop AMD from doing it is a lack of the engineering and software backing to make it actually work and to properly implement and execute it.

A very real possibility. I feel like a lot of things that could have been developed never really were. There's still almost no support for Thunderbolt on AMD platforms, and FreeSync was so poorly regulated that many FreeSync monitors have been repackaged as "G-Sync Compatible" without so much as an AMD logo on the box.

49 minutes ago, leadeater said:

If it brings a significant performance uplift to workloads then it's not really going to matter how much more it costs; Intel has Xeons with an FPGA on package that cost more. So even if such a part were created and cost $9k instead of $7k, it'd be worth it and still cheaper than Intel's top-spec mainstream Xeon.

 

What will stop AMD from doing it is a lack of the engineering and software backing to make it actually work and to properly implement and execute it. The only way AMD would achieve such a part is by partnering with Cray to jointly develop it, then having it as a specialty platform like the Intel Xeon 9200 series.

It should just be a cache layer, so it shouldn't need recoding on the side of third parties (outside of possible optimizations).

On 11/6/2019 at 3:00 AM, RejZoR said:

They hired a new guy who was supposed to get Intel out of this shit, and he's just doing the same thing Intel was doing before. LOL? I thought the point of hiring a new guy was to have an uncorrupted perspective from an outsider on what needs to be done to gain market leadership again.

LOL, no, you could argue they were actually doing better before. If it's transparency you are after, you try to make things easier for independent testers rather than strengthening in-house tests. What they did was hire a spin doctor to entrench their mind share: someone with know-how in presenting results to the public, the inner workings of media outlets, and a couple of badges to show off in arguments if it comes to that. It may be bullshit and nothing else, but they also didn't hire him so that in-house benchmarks would become the same as third-party benchmarks. I'm just saying they hired him so their in-house, skewed, biased benchmarks don't look like they were made by Principled Technologies :P And I'd say even the case in the OP is a success from that perspective.

 

12 hours ago, S w a t s o n said:

I think people are more mad because Ryan Shrout keeps either making or linking to these articles, and he used to be highly respected independent tech press.

You have to give it to him, he's officially working for Intel now :P

 

 

12 hours ago, Falconevo said:

No complaints here from AMD's current offerings, 2.96GHz on ALL cores for 2 fully loaded 7452 Rome CPUs.

I hate you. Like, in a good way.

 

9 hours ago, valdyrgramr said:

>Can afford 100s of cores
Can't afford Windows

It does hurt a bit to see such a system being handicapped by software, I must say. ^_^

 

2 hours ago, cj09beira said:

But AMD now expects 40%+ profit margins, and that 75 bucks per stack is a very significant part of the raw cost of the chip for AMD, especially as EPYC will probably have multiple of these stacks, 2-4 is my guess.

The thing is, it would only make sense if it improves performance significantly, and better performance leads to higher pricing, to the point where margins could actually be higher depending on where such a product lands. In principle, AMD won't hit Intel's competition-less prices, but comparing Intel's and AMD's current pricing, they have a lot of room to "meet in the middle". To the extent that AMD can reduce the distance to Intel where it lags behind, extend it where it has an advantage, or increase its market share, that's what I expect to happen.

Not having a Windows license makes absolutely no difference to features or performance on a Windows Server OS :)

 

Plus it's only on that temporarily until it goes into production; it's under testing currently anyway.

1 minute ago, Falconevo said:

Not having a Windows license makes absolutely no difference to features or performance on a Windows Server OS :)

 

Plus it's only on that temporarily until it goes into production; it's under testing currently anyway.

I was thinking more about the fact that it's Windows, whether licensed or not ;)

 

(I'd still take it even if locked to windows, though :P)

53 minutes ago, SpaceGhostC2C said:

I was thinking more about the fact that it's Windows, whether licensed or not ;)

 

(I'd still take it even if locked to windows, though :P)

That's only part of the testing suite; that's the MSSQL testing, which can be done on Linux, but I certainly wouldn't want to use Linux for MSSQL at the moment.

19 hours ago, justpoet said:

Intel is perfect, Itanium is the future.

Let's put an Itanic on everyone's desk! With OS/2!

On 11/6/2019 at 2:06 AM, ErrantNyles said:

You should also add an update to the original story: the GROMACS version used by Intel is outdated (2019.3, while 2019.4 is the most recent) and was NOT even optimized for Zen 2 (it didn't use AVX2 instructions), which the newer version is. Patrick J Kennedy tweeted about it.

 

So the results are even more inaccurate than what is shown...

 

Edit: tweet referenced

 

 

That's referenced in both articles. The STH tweet includes a Twitter link along with the article. The forum interface just doesn't display the tweet as clearly as it could.

@WillyW please check out Intel's response to these allegations and consider adding it to the top post.

 

 
