
Passmark - AMD "Decimated" In New Update

Lurick
38 minutes ago, Taf the Ghost said:

AVX introduced 256-bit floating-point vector operations (SSE before it was 128-bit); AVX2 extends integer operations to 256-bit, and AVX-512 is, as you can guess, 512-bit operations.  https://en.wikipedia.org/wiki/Advanced_Vector_Extensions

 

They're also what kills Intel's CPUs when they try to use them, as the units produce a lot of heat doing the calculations. AMD's current response is "just use a GPU, you moron", because that's what GPUs do. AVX-512 is used in very little at the moment because Intel has only just launched normal consumer parts with AVX-512 units (Ice Lake mobile).

So it's pretty much useless for us as consumers? 

DAC/AMPs:

Klipsch Heritage Headphone Amplifier

Headphones: Klipsch Heritage HP-3 Walnut, Meze 109 Pro, Beyerdynamic Amiron Home, Amiron Wireless Copper, Tygr 300R, DT880 600ohm Manufaktur, T90, Fidelio X2HR

CPU: Intel 4770, GPU: Asus RTX3080 TUF Gaming OC, Mobo: MSI Z87-G45, RAM: DDR3 16GB G.Skill, PC Case: Fractal Design R4 Black non-iglass, Monitor: BenQ GW2280


4 minutes ago, CTR640 said:

So it's pretty much useless for us as consumers? 

AVX and AVX2 are already used heavily, but AVX-512 will be of limited utility for a long while. We really need DDR5 out before it makes much sense, and smaller nodes for power usage.


Don't mind if I wait a week to find out what's actually happening before I get my knickers bunched up. Like nearly every other issue with benchmarking and reviews that seems to unfairly favor one company, this will likely turn out either to be simply misunderstood by everyone or an accident that gets fixed.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


39 minutes ago, CTR640 said:

So it's pretty much useless for us as consumers? 

Negative. Most software uses AVX instructions if the CPU supports them.

DX12 games use the AVX and AVX2 instruction sets.

You can stress test with Prime95 and OCCT, which utilize AVX. Good for heat build-up!

Cinebench uses AVX instructions, for example. That's one of the reasons the CPU gets so hot.

Now compare a Cinebench load to a wPrime 1024M load and you'll see a big difference, since wPrime does not use AVX instructions for its calculations.


4 hours ago, ShrimpBrime said:

Negative. Most software uses AVX instructions if the CPU supports them.

DX12 games use the AVX and AVX2 instruction sets.

You can stress test with Prime95 and OCCT, which utilize AVX. Good for heat build-up!

Cinebench uses AVX instructions, for example. That's one of the reasons the CPU gets so hot.

Now compare a Cinebench load to a wPrime 1024M load and you'll see a big difference, since wPrime does not use AVX instructions for its calculations.

More importantly to normal consumers... yes, AVX is power hungry. It's also flipping fast compared to the instructions it replaces, and can sometimes mean a 4x+ speed increase in that part of the code.

So it is 100% worth the heat to use; it's not just for people playing with their toys.
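
To put a rough picture on that, here's a minimal sketch (my own illustration with made-up function names, not code from any benchmark discussed here) of the same array-add loop written scalar and with AVX intrinsics; the AVX version handles eight floats per instruction instead of one:

```c
// Minimal scalar vs AVX illustration: add two float arrays.
// Hypothetical example; compile with something like: gcc -mavx add_example.c
#include <immintrin.h>
#include <stddef.h>

// Plain scalar loop: one float per iteration.
void add_scalar(const float *a, const float *b, float *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

// AVX loop: eight floats per iteration via 256-bit registers.
// Assumes n is a multiple of 8 to keep the sketch short.
void add_avx(const float *a, const float *b, float *out, size_t n) {
    for (size_t i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
}
```

How much of that width turns into real speedup depends on how memory-bound the code is, which is part of why the 4x+ above is a "sometimes" rather than an "always".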

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


12 hours ago, Neftex said:

it's paid off

 

12 hours ago, ARikozuM said:

This is definitely for ad money. 

Here's an idea: How about people stop making wild claims when, in reality, nobody knows. You're just as clueless as the rest of us, and pretending you know a company's reasons for doing stuff just spreads misinformation.

Make sure to quote or tag me (@JoostinOnline) or I won't see your response!

PSU Tier List | The Real Reason Delidding Improves Temperatures | "2K" does not mean 2560×1440


7 hours ago, descendency said:

I hate synthetic benchmarks for this reason. 

According to Intel, Passmark is 100% representative of real-world cases; unlike Cinebench and others.

 

  

7 hours ago, williamcll said:

What if these are all single-threaded gaming benchmarks?


TIL a 4300U with a 3.7 GHz boost is better in single-threaded performance than a 3900X or 3950X, whose boost clocks exceed it by 900 MHz - 1 GHz.

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


9 hours ago, Taf the Ghost said:

AVX introduced 256-bit floating-point vector operations (SSE before it was 128-bit); AVX2 extends integer operations to 256-bit, and AVX-512 is, as you can guess, 512-bit operations.  https://en.wikipedia.org/wiki/Advanced_Vector_Extensions

Note this is only instruction support and tells you nothing about hardware throughput. For example, the Zen 1 architecture has about half the AVX2 throughput of Zen 2, yet both support AVX2. AVX-512 is also complicated in that Intel offers one-unit and two-unit implementations. The two-unit ones offer potentially double the throughput of the one-unit ones, while one-unit implementations seem no better than AVX2 in throughput. Having said that, AVX-512 is a family of instructions, so even a single-unit implementation may still provide performance improvements in other use cases.
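
To put rough numbers on that, here's a back-of-envelope sketch; the unit counts and clock below are illustrative assumptions picked for the comparison, not vendor specifications:

```c
// Peak FP32 throughput sketch: (vector bits / 32) lanes * FMA units * 2 ops (mul+add) * clock.
// All configurations below are illustrative, not measured or official figures.
#include <stdio.h>

static double peak_gflops(int vector_bits, int fma_units, double clock_ghz) {
    return (vector_bits / 32.0) * fma_units * 2.0 * clock_ghz;
}

int main(void) {
    // Same AVX2 instruction support, different hardware width behind it:
    printf("128-bit x2 units @ 4 GHz: %.0f GFLOP/s per core\n", peak_gflops(128, 2, 4.0));
    printf("256-bit x2 units @ 4 GHz: %.0f GFLOP/s per core\n", peak_gflops(256, 2, 4.0));
    // AVX-512 with one unit vs two units:
    printf("512-bit x1 unit  @ 4 GHz: %.0f GFLOP/s per core\n", peak_gflops(512, 1, 4.0));
    printf("512-bit x2 units @ 4 GHz: %.0f GFLOP/s per core\n", peak_gflops(512, 2, 4.0));
    return 0;
}
```

With those made-up inputs, the 128-bit-wide design lands at half the throughput of the 256-bit one despite identical AVX2 support, and a one-unit AVX-512 implementation comes out roughly on par with full-width AVX2, which is the pattern described above.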

 

Quote

They're also what kills Intel's CPUs when they try to use them, as the units produce a lot of heat doing the calculations. AMD's current response is "just use a GPU, you moron", because that's what GPUs do. AVX-512 is used in very little at the moment because Intel has only just launched normal consumer parts with AVX-512 units (Ice Lake mobile).

The problem is that AMD (and Nvidia) killed FP64 performance on consumer models, with the Radeon VII being the only recent exception to the rule. Also, GPUs still remain "dumb" enough that they can't cope with the more complex workloads a CPU can. GPUs won't replace CPUs in all scenarios, even if they have a lot of potential performance.

 

9 hours ago, descendency said:

I hate synthetic benchmarks for this reason. 

The problem isn't whether a benchmark is synthetic or not; even "real" task benchmarks will have problems. The problem is that everyone values different things by different amounts. There is no good way to boil a complex measure down to a single number that can be compared.
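
As a toy illustration of that (all numbers invented for the example), the same sub-scores can rank two CPUs differently depending purely on how the composite is weighted:

```c
// Weighted geometric mean of benchmark sub-scores: changing the weights flips the ranking.
// Every value here is fictional and exists only to illustrate the point.
#include <math.h>
#include <stdio.h>

static double weighted_geomean(const double *scores, const double *weights, int n) {
    double log_sum = 0.0, weight_sum = 0.0;
    for (int i = 0; i < n; i++) {
        log_sum += weights[i] * log(scores[i]);
        weight_sum += weights[i];
    }
    return exp(log_sum / weight_sum);
}

int main(void) {
    // Sub-scores: { integer, floating point, memory }.
    double cpu_a[] = { 100.0, 180.0, 90.0 };   // strong FP, weaker elsewhere
    double cpu_b[] = { 130.0, 110.0, 120.0 };  // more balanced

    double fp_heavy[] = { 1.0, 3.0, 1.0 };     // values FP work highly
    double balanced[] = { 1.0, 1.0, 1.0 };     // treats everything equally

    printf("FP-heavy weighting: A=%.1f  B=%.1f\n",
           weighted_geomean(cpu_a, fp_heavy, 3), weighted_geomean(cpu_b, fp_heavy, 3));
    printf("Balanced weighting: A=%.1f  B=%.1f\n",
           weighted_geomean(cpu_a, balanced, 3), weighted_geomean(cpu_b, balanced, 3));
    return 0;
}
```

Both composites are "correct"; they just answer different questions, which is the trap with any single-number comparison.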

 

8 hours ago, Taf the Ghost said:

AVX and AVX2 are already used heavily, but AVX-512 will be of limited utility for a long while. We really need DDR5 out before it makes much sense, and smaller nodes for power usage.

In theory at least, anything capable of using older AVX can be scaled up to get AVX-512 performance. The bigger limit to software adoption, I feel, is simply that there aren't enough AVX-512-capable CPUs in the wild yet.

 

8 hours ago, ShrimpBrime said:

Cinebench uses AVX instructions for example. That's one of the reasons the cpu gets so hot.

We have to state the version as well. I'm a bit rusty on this, but I think Cinebench R15 did not use AVX, while R20 does.

 

1 hour ago, Valentyn said:

TIL a 4300U with a 3.7 GHz boost is better in single-threaded performance than a 3900X or 3950X, whose boost clocks exceed it by 900 MHz - 1 GHz.

As noted in my previous post, keep in mind that the scores presented appear to be based on user submissions of each CPU. A less common CPU's score could be skewed by a user who might overclock hard.

 

Hmm... if I have time I could try to separate out the Intel K SKUs from the nearest non-OC ones and see if they punch above their weight. As most AMD CPUs can be overclocked, there's no practical way to do the same for them.
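
As a quick sketch of that sampling problem (submission values invented for illustration), one hard-overclocked result moves the average far more for a rarely-submitted CPU than for a popular one:

```c
// Mean vs median of user-submitted scores: one overclocked outlier skews a small sample.
// All submission values below are invented for illustration.
#include <stdio.h>
#include <stdlib.h>

static double mean(const double *v, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++) s += v[i];
    return s / n;
}

static int cmp_double(const void *a, const void *b) {
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

static double median(double *v, int n) {
    qsort(v, n, sizeof(double), cmp_double);
    return (n % 2) ? v[n / 2] : (v[n / 2 - 1] + v[n / 2]) / 2.0;
}

int main(void) {
    // Rare CPU: only five submissions, one from a hard overclocker (3400).
    double rare[] = { 2500, 2520, 2490, 2510, 3400 };
    // Popular CPU: many stock-ish submissions dilute the same outlier.
    double popular[] = { 2500, 2520, 2490, 2510, 2505, 2495, 2515, 2500,
                         2498, 2512, 2503, 2507, 2499, 2501, 2509, 3400 };

    printf("Rare CPU:    mean %.0f, median %.0f\n", mean(rare, 5), median(rare, 5));
    printf("Popular CPU: mean %.0f, median %.0f\n", mean(popular, 16), median(popular, 16));
    return 0;
}
```

With these made-up numbers, the rare CPU's average is pulled up by roughly 7% while the popular one barely moves, and the median hardly budges in either case.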

 

9 hours ago, Taf the Ghost said:

https://www.passmark.com/forum/pc-hardware-and-benchmarks/46757-single-thread-score-rating?p=46873#post46873

 

@porina @Lurick here's the Passmark dev discussing the changes and their effects. Notebookcheck is clickbaiting a bit. It really seems V9 was actually sandbagging Zen 2 a bit, and V10 now makes the single-core test very sensitive to clocks. (And it slams a bunch of the Intel mobile parts.)

I was skimming that thread already. The way I read it, there seems to be a lot of noise from those who might be over-invested in this scoring metric. 

 

Quote

Given that Zen 2 and Skylake have generally similar IPC (on net; per task there are up to 25% differences, see The Stilt's in-depth testing), this actually isn't too surprising. But Passmark should probably clarify what the point of their test is. Not that pure single-core really matters for anything.

I'm having some fun on another forum, and might have to buy a higher-core-count Zen 2 CPU to investigate it. In Prime95-like workloads, I had previously determined that the peak performance of the Zen 2 architecture was greater than Skylake's. But now I've seen multiple reports, including on the 3950X and the Threadrippers above it, that performance on them might be only half that expected. Back down to Zen 1 levels. Ouch. Clocks are in the expected range, with no thermal, power or current limiting. The Prime95 bench itself doesn't show anything unusual, so I suspect it is the other software (which uses the same compute code as Prime95) acting differently.

 

As a side effect of that testing, I did note that my 3700X, currently running at 4.1 GHz and running two 4-core tasks, is completing individual tasks slower than a 6700K at 4.0 GHz. Of course, the 3700X still has nearly double the throughput from the extra cores, but in theory it should not be slower per 4-core task. These tasks are in theory small enough that RAM bandwidth shouldn't be a limiting factor. This is something I could look into as well.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


2 minutes ago, porina said:

We have to state the version as well. I'm a bit rusty on this, but I think Cinebench R15 did not use AVX, while R20 does.

If I refer to anything, it would be the current version rather than a dated one, as most people are buying and running current software and hardware. But yes, the software revision does matter. Thanks.


6 minutes ago, ShrimpBrime said:

If I refer to anything, it would be the current version rather than a dated one, as most people are buying and running current software and hardware. But yes, the software revision does matter. Thanks.

While Cinebench R20 has been out for a while now, I feel that R15 still has a lot of momentum behind it, so I wanted to be clear.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


So what was the purpose behind this Passmark update? I heard the devs said that V9 was sandbagging some Zen 2 CPUs, but it seems the update to V10 simply sandbagged them more. Was there any useful reason for updating the benchmark?


On 3/16/2020 at 9:21 AM, JoostinOnline said:

 

Here's an idea: How about people stop making wild claims when, in reality, nobody knows. You're just as clueless as the rest of us, and pretending you know a company's reasons for doing stuff just spreads misinformation.

Here's another idea: it's just an opinion based on Intel's proven past fuckery with "sponsored" benchmarks and other shady shit they did when their product wasn't exactly great.

MSI GX660 + i7 920XM @ 2.8GHz + GTX 970M + Samsung SSD 830 256GB


4 hours ago, Neftex said:

Here's another idea: it's just an opinion based on Intel's proven past fuckery with "sponsored" benchmarks and other shady shit they did when their product wasn't exactly great.

So you do that for every company then? Every time something doesn't look right to you, you accuse a company of anti-consumer behavior? If not, then remember that this is no different: wait until you know why the score is the way it is before outright claiming Intel paid for it.

 

 

 

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


5 hours ago, Neftex said:

Here's another idea: it's just an opinion based on Intel's proven past fuckery with "sponsored" benchmarks and other shady shit they did when their product wasn't exactly great.

You didn't disclose that it was an opinion.

Make sure to quote or tag me (@JoostinOnline) or I won't see your response!

PSU Tier List  |  The Real Reason Delidding Improves Temperatures"2K" does not mean 2560×1440 


1 hour ago, mr moose said:

So you do that for every company then? Every time something doesn't look right to you, you accuse a company of anti-consumer behavior? If not, then remember that this is no different: wait until you know why the score is the way it is before outright claiming Intel paid for it.

When stuff doesn't look right (and it actually isn't right in this case, btw), the dev doubles down on it knowing that, and it benefits a third party that was proven to do similar stuff before. Yeah, I'd do that for any other company like that.

 

1 hour ago, JoostinOnline said:

You didn't disclose that it was an opinion.

I think it was pretty obvious from the post itself.

MSI GX660 + i7 920XM @ 2.8GHz + GTX 970M + Samsung SSD 830 256GB


22 minutes ago, Neftex said:

I think it was pretty obvious from the post itself.

Say "I think" or "I suspect" so people know you actually don't know anything.  You have absolutely no evidence that this was paid for.

Make sure to quote or tag me (@JoostinOnline) or I won't see your response!

PSU Tier List | The Real Reason Delidding Improves Temperatures | "2K" does not mean 2560×1440


2 minutes ago, JoostinOnline said:

Say "I think" or "I suspect" so people know you actually don't know anything.  You have absolutely no evidence that this was paid for.

The two sentences right before it are enough context to make the connection that it's an opinion... I'm sorry you didn't get it, but that's not my concern. Now you know it's an opinion, so I believe you don't have anything else to bring up?

MSI GX660 + i7 920XM @ 2.8GHz + GTX 970M + Samsung SSD 830 256GB


1 hour ago, Neftex said:

When stuff doesn't look right (and it actually isn't right in this case, btw), the dev doubles down on it knowing that, and it benefits a third party that was proven to do similar stuff before. Yeah, I'd do that for any other company like that.

 

I think it was pretty obvious from the post itself.

 

Again, do you apply that rule to ALL companies when something doesn't look right? Or does the rule only apply to Intel? So often people make absolute statements (as you have) with no regard for critical thinking or the impact they might have on the general internet ignorance that reads them.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


2 minutes ago, mr moose said:

Again, do you apply that rule to ALL companies when something doesn't look right? Or does the rule only apply to Intel? So often people make absolute statements (as you have) with no regard for critical thinking or the impact they might have on the general internet ignorance that reads them.

I don't think I need to repeat myself; you can just read the quote you made.

MSI GX660 + i7 920XM @ 2.8GHz + GTX 970M + Samsung SSD 830 256GB


16 minutes ago, Neftex said:

I don't think I need to repeat myself; you can just read the quote you made.

I'm not asking you to repeat yourself; I am asking whether you apply the same standard of response to all issues of a similar nature.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


On 3/16/2020 at 9:47 AM, porina said:

snip

 

So basically AMD and Intel share similar IPC, but because Intel on the whole can run at higher clock speeds they fare better in a benchmark (specifically a user-submitted one). Who'd have thunk it?

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


Blasphemy, someone is a saboteur. Check that man's bank account

Ryzen 9 3900x & Ryzen 7 3700x 120 AIO / MSI 2070 Super Ventus OC / MSI B450 Tomahawk Max & MSI x570 Gaming Pro / Crucial Ballistix Elite 3600mhz 2x8gb / 970EVOplus m.2 NVME 250gb / Adata sata SSD 250gb/ 2TB Hard drive / 9x Noctua Redux 120mm fans / (spare Corsair Vengeance Pro 3200mhz 2x8gb) / Fanatec CSLE+ F1 / V3 Pedals / Next Level Sim Rig / LG Ultra Gear 34" curved ultrawide 144hz / Oculus Rift S / Asus 27" 144hz / crappy keyboard / crappy mouse

