
Adobe Releases Lightroom for Apple M1 and Windows Arm, Adds Apple ProRAW Support

1 minute ago, LAwLz said:

Yes you are.

Someone made a comment.

You said that comment was false, and are now citing ultra-low-power Zen chips as a reason why that comment is false, even though the comment is true when applied to the high-performance desktop parts.

 

The statement ("4 M1 cores use less power than a single Zen core") is true for some products, and false for some. Case in point: if you compare the M1 to the 5950X, then it is true. Fully load four of the M1's cores and it will still use less power than if you were to load a single core on the 5950X. If you compare the M1 to, for example, the 4800U, then the statement is false. A single core on the 4800U will not use more power than all four cores on the M1.

You can't say a statement is completely false just because it doesn't apply to all products in a product stack. If it applies to some products then it is at the very least partially true.

The 4800U is also configured wildly differently in different products, with different performance levels. It also gets destroyed in performance per watt.

Dirty Windows Peasants :P ?


2 minutes ago, leadeater said:

And they likely will be the same power targets: 15W package power (U series) and 35W/45W package power (HS & H series).

And they're still all destroyed by the M1 in terms of performance per watt, and just performance per core. They're also unreleased, if they ever are released, so you can't use them.

Dirty Windows Peasants :P ?


6 minutes ago, Lord Vile said:

It's more people getting excited for the desktop-class chips that will be coming. If Apple is spanking AMD and Intel in single-thread performance at a quarter or a fifth of the power draw, what will a 16-core that's allowed 100W do?

There's no proof that the architecture will scale well with that many cores in terms of performance.

There's no proof that power efficiency scales the same way with more cores.

 

People are drawing conclusions based on assumptions.

Please, let's all focus on what we know today and not assume things about tomorrow.

 

The M1 is exciting for Apple, I get that, but let's just wait and see what happens. You can't will things to happen; they will happen or not depending on what Apple releases and how well their products perform.

 


50 minutes ago, LAwLz said:

You said that comment was false, and are now citing ultra-low-power Zen chips as a reason why that comment is false, even though the comment is true when applied to the high-performance desktop parts.

Right, so using the evidence that exists to show the statement is false is somehow picking the best case? Please put a little more thought into this. If someone makes a statement that is wrong, and it can be proved wrong with evidence, how is that somehow me picking a best case on purpose?

 

Or how about it's simply using evidence to show that the statement made was wrong.

 

I wasn't even complaining about how silly it is to use the highest-power desktop part to compare against a lower-power part; they are two completely different design ecosystems.

 

Because the statement that was made was "The M1 runs 4 performance cores at less power than a Ryzen 3rd gen chip takes to run one.", not specifically a 5950X or 5900X or 5800X or 5600X, or, if I don't apply benefit of the doubt and actually use the generation of Ryzen named in the statement, a 3950X, 3900X, 3800X, 3700X, 3600X, or 3600.

 

And I can say it was false for that exact reason: he said it as if it applies to ALL products, not specific products. That is the problem.


1 minute ago, tech.guru said:

There's no proof that the architecture will scale well with that many cores in terms of performance.

There's no proof that power efficiency scales the same way with more cores.

From what we know, once you hit a few cores the power efficiency actually improves (look at the 1-4 core rise vs the 4-16).

 

[Image: per-core power scaling chart for the 5950X (PerCore-1-5950X-Total.png)]

1 minute ago, tech.guru said:

 

People are drawing conclusions based on assumptions.

Please, let's all focus on what we know today and not assume things about tomorrow.

 

The M1 is exciting for Apple, I get that, but let's just wait and see what happens. You can't will things to happen; they will happen or not depending on what Apple releases and how well their products perform.

 

Leaks show designs with up to 16 high-performance cores, and from what we've seen it's not a crime to be optimistic. Obviously you don't expect 4x the performance for 4x the cores, but Zen 2 going from 4 cores to 16 cores netted you about 3.5x the performance. That kind of scaling would put the M1 level with the 5950X.
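To make that scaling argument concrete, here is a rough back-of-the-envelope sketch in Python. The 7604 figure is the M1's Cinebench R23 multi-thread score posted later in this thread; the 3.5x factor is the Zen 2 4-core to 16-core scaling mentioned above. It's an illustration under those assumptions, not a prediction.

```python
# Back-of-the-envelope only: apply the Zen 2 4->16 core scaling factor to the
# M1's measured multi-core score. Both numbers come from this thread; assuming
# a 16-core Apple chip would scale the same way is exactly that, an assumption.
m1_mt_score = 7604            # CB R23 MT for the 4+4 M1 (posted later in thread)
zen2_scaling_4_to_16 = 3.5    # Zen 2 4-core -> 16-core scaling, per the post above

estimate = m1_mt_score * zen2_scaling_4_to_16
print(f"Hypothetical 16-core estimate: {estimate:.0f} points")  # ~26614
```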

Dirty Windows Peasants :P ?


2 minutes ago, leadeater said:

Right, so using the evidence that exists to show the statement is false is somehow picking the best case? Please put a little more thought into this. If someone makes a statement that is wrong, and it can be proved wrong with evidence, how is that somehow me picking a best case on purpose?

 

Or how about it's simply using evidence to show that the statement made was wrong.

 

I wasn't even complaining about how silly it is to use the highest-power desktop part to compare against a lower-power part; they are two completely different design ecosystems.

 

Because the statement that was made was "The M1 runs 4 performance cores at less power than a Ryzen 3rd gen chip takes to run one.", not specifically a 5950X or 5900X or 5800X or 5600X, or, if I don't apply benefit of the doubt and actually use the generation of Ryzen named in the statement, a 3950X, 3900X, 3800X, 3700X, 3600X, or 3600.

 

And I can say it was false for that exact reason: he said it as if it applies to ALL products, not specific products. That is the problem.

You're being pedantic and using chips that are blown out of the water by the M1. The M1 per core is competitive with the desktop 5000 parts. The 3000 mobile parts are all severely beaten.

Dirty Windows Peasants :P ?


15 minutes ago, Lord Vile said:

From what we know, once you hit a few cores the power efficiency actually improves (look at the 1-4 core rise vs the 4-16).

 

[Image: per-core power scaling chart for the 5950X (PerCore-1-5950X-Total.png)]

Leaks show designs with up to 16 high-performance cores, and from what we've seen it's not a crime to be optimistic. Obviously you don't expect 4x the performance for 4x the cores, but Zen 2 going from 4 cores to 16 cores netted you about 3.5x the performance. That kind of scaling would put the M1 level with the 5950X.

The M1 uses a different big/little design; that's why its single-core performance is good (using the big cores) but its multi-core performance can't match some of the bigger x86 chips.

 

You should not be looking at traditional x86 architectures to judge how the M1 or Apple chips will scale.

Demands will be around multi-core performance. I said let's see how the ARM design works in these use cases.

 

For what I know, even the best marketing material shows it being very close in similar comparisons for high-core-count ARM architectures. See the link below regarding an 80-core ARM processor:

https://www.nextplatform.com/2020/03/18/stacking-up-arm-server-chips-against-x86/


5 minutes ago, tech.guru said:

The M1 uses a different big/little design; that's why its single-core performance is good (using the big cores) but its multi-core performance can't match some of the bigger x86 chips.

 

You should not be looking at traditional x86 architectures to judge how the M1 or Apple chips will scale.

Demands will be around multi-core performance. I said let's see how the ARM design works in these use cases.

 

For what I know, even the best marketing material shows it being very close.

See the link below regarding an 80-core ARM processor:

https://www.nextplatform.com/2020/03/18/stacking-up-arm-server-chips-against-x86/

But all ARM chips are different. That would be like comparing an Athlon from 2005 to a 5950X. Servers also aren't the same type of hardware.

 

In multi-core, the M1 beats the desktop chips if you pit it against a 4c/8t chip like the 3300X.

Dirty Windows Peasants :P ?


38 minutes ago, Lord Vile said:

You're being pedantic and using chips that are blown out of the water by the M1. The M1 per core is competitive with the desktop 5000 parts. The 3000 mobile parts are all severely beaten.

No, you are just missing the point. I take specific issue with you repeatedly spouting that nonsense and giving special treatment only to the M1 and Apple. Zen 2 and Zen 3 cores can be, and are, power efficient with good performance if you configure them in a part to be that way, as is the M1. Yes, I know the M1 is more power efficient and higher performance; where did I say it's not? The problem is this whole idea that the M1 is also immune to the exponential power increase that comes as you raise clocks, and thus power targets, with it.
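For anyone wondering where that "exponential power increase" comes from: dynamic CPU power roughly follows P = C·V²·f, and voltage has to rise with frequency, so power climbs much faster than clock speed. A minimal sketch with made-up constants, purely to show the shape of the curve:

```python
# Illustrative only: dynamic CPU power is roughly P = C * V^2 * f, and voltage
# must rise with frequency, so power grows much faster than linearly with clock.
# The voltage curve below is invented purely to show the shape of the effect.

def dynamic_power(freq_ghz: float, capacitance: float = 1.0) -> float:
    voltage = 0.7 + 0.15 * freq_ghz   # assumed V(f) curve, not a measurement
    return capacitance * voltage**2 * freq_ghz

for f in (2.0, 3.0, 4.0, 5.0):
    p = dynamic_power(f)
    print(f"{f:.1f} GHz -> relative power {p:.2f}, relative perf/W {f / p:.2f}")
# perf/W falls as clocks rise, which is why wide-and-slow (the M1, or Zen at
# low TDPs) beats narrow-and-fast on efficiency. No chip is immune to this.
```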

 

Ryzen Mobile 5000 U series is likely going to be, at least for CB R20, 560-570 points single-thread and 3850 points multi-thread. Configured TDP-up to 25W, it'll be 570-580 single-thread and 4500 multi-thread. These increases bring the Ryzen 5000 U series parts to similar single-thread performance as the M1 while being not as power efficient; however, it's nowhere near as bad as what you were trying to make out with your comparisons to desktop-class parts. The multi-thread performance is already better, and will be even higher, than the M1's.

 

The points you are making about the M1 are the exact same points reviewers made about the Ryzen Mobile parts: performance shockingly close to the desktop parts, which is something not seen before in the mobile market with Intel-based products.

 

So yes, the M1 is better, at least in single-thread and power efficiency, but it's not as much better as you are thinking or trying to say.


15 minutes ago, leadeater said:

No, you are just missing the point. I take specific issue with you repeatedly spouting that nonsense and giving special treatment only to the M1 and Apple. Zen 2 and Zen 3 cores can be, and are, power efficient with good performance if you configure them in a part to be that way, as is the M1. Yes, I know the M1 is more power efficient and higher performance; where did I say it's not? The problem is this whole idea that the M1 is also immune to the exponential power increase that comes as you raise clocks, and thus power targets, with it.

 

Ryzen Mobile 5000 U series is likely going to be, at least for CB R20, 560-570 points single-thread and 3850 points multi-thread. Configured TDP-up to 25W, it'll be 570-580 single-thread and 4500 multi-thread. These increases bring the Ryzen 5000 U series parts to similar single-thread performance as the M1 while being not as power efficient; however, it's nowhere near as bad as what you were trying to make out with your comparisons to desktop-class parts. The multi-thread performance is already better, and will be even higher, than the M1's.

 

The points you are making about the M1 are the exact same points reviewers made about the Ryzen Mobile parts: performance shockingly close to the desktop parts, which is something not seen before in the mobile market with Intel-based products.

 

So yes, the M1 is better, at least in single-thread and power efficiency, but it's not as much better as you are thinking or trying to say.

What I find interesting here is how the people cheering for the low power consumption of the M1 are in some cases the same people loving the latest generation of very power-hungry GPUs from the green corner. When your GPU needs an 850-1000W PSU, it is a bit of a moot point that your CPU is only drawing 10. In the data centre, power consumption is a huge problem, as you know. We cannot have huge game-streaming centres loaded with 3080s. I am sure this may also push GPU development towards lower power.

 

Personally, I am all for innovation on both sides, ARM and x86. It can only be good for the consumer; competition usually is. I am sure we will see many vendors compiling for ARM. If it becomes a more compelling product to those devs, we will see huge growth in that market.


2 hours ago, LAwLz said:

What's "cringy" is all the AMD and Intel fanboys who can't appreciate what Apple has been able to do and what it might mean for the future.

 

I am sure that the people now trying to dismiss Apple's accomplishments would have been all over the M1 if it had been made by AMD instead.

 

Zen 3 is really good. Apple's Firestorm cores pretty much match Zen 3 in terms of performance, so if we were raving about Zen 3, why shouldn't we also be raving about the M1? Especially since Firestorm gets similar performance to Zen 3 at way lower power consumption.

 

 

Anyway, does anyone know what kind of performance this new release gets compared to the previous gen Intel Macs? 

I thought Firestorm cores were sort of halfway between Intel and Ryzen 3000 in performance. They did it while producing very little heat, though.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


1 hour ago, leadeater said:

Ryzen Mobile 5000 U series is likely going to be, at least for CB R20, 560-570 points single-thread and 3850 points multi-thread. Configured TDP-up to 25W, it'll be 570-580 single-thread and 4500 multi-thread.

 

So yes, the M1 is better, at least in single-thread and power efficiency, but it's not as much better as you are thinking or trying to say.

15W/25W for Zen 3 U is the bare minimum of the power it consumes; a single Zen 3 core at 3+ GHz can already saturate 15W. But 15W for the M1 is the maximum.

 

Just wanna post some statistics measured by `sudo powermetrics` (similar to HWiNFO and Intel Power Gadget).

[Image: Cinebench R23 results chart (CBr23.png)]

M1 idle: less than 100mW (CPU+GPU+DRAM)

M1 single-thread Firestorm@3.2GHz: CPU only 3.5W, CBr23 1514.

M1 multi-thread Firestorm@3.0GHz/Icestorm@2.1GHz: CPU only 14W, CBr23 7604. 

M1 GPU WoW arm64 Metal 1080p Preset 7 AA-off @Ogrimmar: CPU 3W, GPU 8W, 55fps
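For anyone who wants to reproduce these numbers, here is a minimal sketch of pulling the CPU/GPU figures out of powermetrics with Python. Run it with sudo; the "CPU Power: ... mW" line format is an assumption based on Apple Silicon macOS builds and may differ between versions.

```python
# Minimal sketch: sample powermetrics once and extract the CPU/GPU power lines.
# Must run as root (sudo python3 this_script.py). Output format is an
# assumption and may vary across macOS versions.
import re
import subprocess

out = subprocess.run(
    ["powermetrics", "--samplers", "cpu_power", "-i", "1000", "-n", "1"],
    capture_output=True, text=True, check=True,
).stdout

for label in ("CPU Power", "GPU Power"):
    m = re.search(rf"{label}:\s*(\d+)\s*mW", out)
    if m:
        print(f"{label}: {int(m.group(1)) / 1000:.2f} W")
```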

 

Power consumption doesn't scale linearly with core frequency, yes. But performance does scale roughly linearly with core frequency. The reality is that M1 Firestorm at 3.2 GHz scored 1514 (about 472 per GHz) while Zen 3 at 4.5 GHz scored 1600 (about 355 per GHz), with the M1 consuming way less power.
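The points-per-GHz arithmetic in that paragraph is easy to check; here is the same calculation in Python (the post rounds slightly differently):

```python
# Reproducing the points-per-GHz arithmetic from this post, using the CB R23
# single-thread scores quoted above.
chips = {
    "M1 Firestorm": (1514, 3.2),  # (score, GHz)
    "Zen 3":        (1600, 4.5),
}
for name, (score, ghz) in chips.items():
    print(f"{name}: {score / ghz:.0f} points/GHz")
# ~473 points/GHz for Firestorm vs ~356 for Zen 3: more work per clock at a
# far lower frequency, hence the perf/W gap.
```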

 

Also, why does the M1 lag the 4800U (10000 @ 25W) so much in multi-thread? Fewer cores + no HT; 4 Icestorm = 1 Firestorm. But this can easily be addressed by adding more FS cores: 8x FS @ 3.2 GHz would consume ~25W and definitely outperform the 4800U.
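The rough arithmetic behind that "8x FS @ 3.2 GHz ~ 25W" estimate, using only the powermetrics numbers above; treating the 3.5W single-thread figure as the per-core Firestorm cost is an assumption (it ignores shared uncore power):

```python
# Naive estimate: multiply the measured single-Firestorm power by eight.
single_firestorm_w = 3.5              # single-thread CPU power measured above
naive_8_core_w = 8 * single_firestorm_w
print(f"Naive 8x Firestorm estimate: {naive_8_core_w:.0f} W")  # 28 W
# Under all-core load the cores drop to ~3.0 GHz (see the MT figure above), so
# a real 8-core cluster landing near ~25 W is plausible, though not guaranteed.
```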

[Image: single-thread benchmark chart (1260798530_STF32.png)]

[Image: multi-thread benchmark chart (MT.png)]


What I worry about is people picking a single test, such as single-core performance, and determining that it's superior in all cases.

Single-core performance, it should always be stressed, is just one test.

 

There is a variety of use cases that will be much faster with more cores...

 

Let's step back and realize that if there is a shift, it will happen very slowly, if ever. Apple may shift their software to ARM, but they occupy a small percentage of the PC computing space. If you step back and look at the industry as a whole, yes, there has been some movement to ARM, not just in the PC consumer space but also in the business server space.

 

Transition is very slow in enterprises. Development is expensive, and unless a business case can be put forth, companies will not invest significant resources redeveloping applications for a different processor for a marginal performance gain.

 

In short, could there be a shift to ARM? Possibly, but it's too early to tell.

However, for a lot of businesses and consumers, I really don't think this transition will happen on the timeline people here expect.

 


23 minutes ago, bruhsfx2 said:

15W/25W for Zen 3 U is the bare minimum of the power it consumes; a single Zen 3 core at 3+ GHz can already saturate 15W. But 15W for the M1 is the maximum.

 

Just wanna post some statistics measured by `sudo powermetrics` (similar to HWiNFO and Intel Power Gadget).

[Image: Cinebench R23 results chart (CBr23.png)]

M1 idle: less than 100mW (CPU+GPU+DRAM)

M1 single-thread Firestorm@3.2GHz: CPU only 3.5W, CBr23 1514.

M1 multi-thread Firestorm@3.0GHz/Icestorm@2.1GHz: CPU only 14W, CBr23 7604. 

M1 GPU WoW arm64 Metal 1080p Preset 7 AA-off @Ogrimmar: CPU 3W, GPU 8W, 55fps

 

Power consumption doesn't scale linearly with core frequency, yes. But performance does scale roughly linearly with core frequency. The reality is that M1 Firestorm at 3.2 GHz scored 1514 (about 472 per GHz) while Zen 3 at 4.5 GHz scored 1600 (about 355 per GHz), with the M1 consuming way less power.

 

Also, why does the M1 lag the 4800U (10000 @ 25W) so much in multi-thread? Fewer cores + no HT; 4 Icestorm = 1 Firestorm. But this can easily be addressed by adding more FS cores: 8x FS @ 3.2 GHz would consume ~25W and definitely outperform the 4800U.

[Image: single-thread benchmark chart (1260798530_STF32.png)]

[Image: multi-thread benchmark chart (MT.png)]

I don't buy a computer to run Cinebench. However, if you want to look at synthetic test numbers, you can look at the 4800U in multi-core tests; it will outperform the M1 by almost double.

 

However, the reality is that if the computer does all the tasks you want, that's what matters.


21 minutes ago, tech.guru said:

I don't buy a computer to run Cinebench. However, if you want to look at synthetic test numbers, you can look at the 4800U in multi-core tests; it will outperform the M1 by almost double.

 

However, the reality is that if the computer does all the tasks you want, that's what matters.

I've already addressed that in my post. 10000 = 2x 7500, you say? CB R23 is literally the best-case scenario for an MT test.

 

Fun fact: software-decoding 4K AV1 YouTube video with libdav1d, the M1 (in a Windows VM, with a 25% perf loss) is on par with the 4750G (3700X) at 25W. Source

 

Also fun fact: software-decode an HEVC video and software-encode it to H.264 using libx264 on a sample video, and the M1 achieves 1.1x speed while the 4800HS does 1.33x. Let's shave 10% performance off the 4800U for its reduced (but still higher than the M1's) power consumption; that's about 1.17x. Source: https://v2ex.com/t/733413#reply10

 

2x perf?
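For context, "1.1x speed" style figures come from transcode runs like this. Here is a hedged sketch of such a harness in Python; it assumes ffmpeg and ffprobe are on PATH, "sample.mkv" is a placeholder input, and the preset choice is mine, not necessarily the linked source's.

```python
# Sketch of the cited transcode test: software-decode a sample clip, software-
# encode it with libx264, then divide clip duration by wall time to get an
# ffmpeg-style "Nx realtime" speed figure.
import subprocess
import time

SRC = "sample.mkv"  # hypothetical HEVC sample clip

duration = float(subprocess.run(
    ["ffprobe", "-v", "error", "-show_entries", "format=duration",
     "-of", "default=noprint_wrappers=1:nokey=1", SRC],
    capture_output=True, text=True, check=True).stdout)

start = time.monotonic()
subprocess.run(
    ["ffmpeg", "-v", "error", "-i", SRC, "-c:v", "libx264",
     "-preset", "medium", "-f", "null", "-"],   # null muxer: encode, discard output
    check=True)
elapsed = time.monotonic() - start

print(f"Encode speed: {duration / elapsed:.2f}x realtime")
```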

 


1 hour ago, bruhsfx2 said:

15W/25W for Zen 3 U is the bare minimum of the power it consumes; a single Zen 3 core at 3+ GHz can already saturate 15W. But 15W for the M1 is the maximum.

At 3.775 GHz, Zen 3 cores use ~6.5W.

 

The 15W TDP is the maximum long-term sustained package power allowed; it is in no way the minimum. What you're saying here is like saying the minimum power the M1 can use is 15W, which is equally untrue.

 

The minimum power for the 5950X is 18W; the minimum power the 5600X will use is 12W. Neither of these is a mobile part with a monolithic die design; the minimum power I have seen for a 4800U is 0.56W total package power.

 

I don't know why people so gravely misunderstand CPU power usage, or think that the M1 is unique in its ability to be very low power at idle. There is absolutely no way a Ryzen Mobile CPU sits at idle using 15W; even based on idle-on-battery run times, that's a mathematical impossibility given the Wh of the batteries in laptops.
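The battery arithmetic behind that last point, sketched with hypothetical but typical numbers:

```python
# Battery sanity check: if a Ryzen laptop really idled at 15 W, its runtime
# would be capped at battery_wh / 15 hours, far below what reviews observe.
# All three figures below are assumed, not measurements from this thread.
battery_wh = 50.0           # assumed ultrabook battery capacity
claimed_idle_w = 15.0       # the "minimum is 15 W" claim being rebutted
observed_idle_hours = 12.0  # assumed review-style idle runtime

print(f"Max runtime if idle were 15 W: {battery_wh / claimed_idle_w:.1f} h")  # ~3.3 h
print(f"Implied real idle power: {battery_wh / observed_idle_hours:.1f} W")   # ~4.2 W
```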


It sounds to me like what is needed is a set of activities each CPU would perform. Performance would be measured as raw seconds to completion as a measure of speed, and watt-seconds as a measure of total energy used. Something could be faster but use more power, making equal watt-seconds, and vice versa. I suspect this has probably already been done and such a benchmark has already been tweaked with more advanced concepts.
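That metric is easy to sketch: energy is just average power times time (watt-seconds are joules). The figures below are hypothetical placeholders chosen to show the faster-but-equal-energy case described above.

```python
# Fixed-workload metric: seconds to completion measures speed; seconds times
# average watts gives energy in watt-seconds (joules). Placeholder numbers.
from dataclasses import dataclass

@dataclass
class Result:
    name: str
    seconds: float      # time to finish the fixed workload
    avg_power_w: float  # mean package power during the run

    @property
    def energy_ws(self) -> float:   # watt-seconds == joules
        return self.seconds * self.avg_power_w

for r in (Result("Chip A", 100.0, 15.0), Result("Chip B", 60.0, 25.0)):
    print(f"{r.name}: {r.seconds:.0f} s, {r.energy_ws:.0f} W·s")
# Chip B finishes faster (60 s vs 100 s) yet both use 1500 W·s: exactly the
# faster-but-equal-watt-seconds case the post describes.
```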

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


6 hours ago, tech.guru said:

If the release of a single application is a newsworthy event, that proves support was lacking on day one.

What is newsworthy is that it is coming to Windows on ARM too. That would not have happened without Apple.


On 12/10/2020 at 8:45 PM, bruhsfx2 said:

I've already addressed that in my post. 10000 = 2x 7500, you say? CB R23 is literally the best-case scenario for an MT test.

 

Fun fact: software-decoding 4K AV1 YouTube video with libdav1d, the M1 (in a Windows VM, with a 25% perf loss) is on par with the 4750G (3700X) at 25W. Source

 

Also fun fact: software-decode an HEVC video and software-encode it to H.264 using libx264 on a sample video, and the M1 achieves 1.1x speed while the 4800HS does 1.33x. Let's shave 10% performance off the 4800U for its reduced (but still higher than the M1's) power consumption; that's about 1.17x. Source: https://v2ex.com/t/733413#reply10

 

2x perf?

 

 

When would you use software decoding when the M1 has integrated graphics?

In what scenario would someone who owns a 3700X not have dedicated graphics?

 

Why are you posting meaningless benchmarks, not based on any real scenario, pretending it's a "game changer"?

 

In any modern computer you are not using software decoding. This has been the case for most computers that play videos from YouTube for, I don't know, at least 10 years.

 

Using hardware-based GPU decoding improves battery life, reduces CPU usage, and makes the system more responsive for other tasks. It's turned on by default.

 

I'm not saying I trust your source, that he ran the test properly, or that the results are accurate.

But even if I did, it's a meaningless metric in the year 2020.

 

Just be happy with your M1 chip, it's fine.

 


18 hours ago, tech.guru said:

When would you use software decoding when the M1 has integrated graphics?

It can't do AV1 decoding on the GPU.

 

18 hours ago, tech.guru said:

In what scenario would someone who owns a 3700X not have dedicated graphics?

The only desktop or laptop chips that can do AV1 decode on the GPU are the 30 series from Nvidia and the 6000 series from AMD. Everything else (which is like 99% of the market) does it on the CPU.

Anyway, I think video decode is a decent benchmark for performance. Even if you try to justify why it shouldn't matter in the real world, it's still a good test because it is a real-world program (such as dav1d or some future video decoder).

 

You can't just make a claim like "the 4800U is almost twice as fast" and then, when people start posting real-world benchmarks proving you otherwise, go "well, that task should be offloaded to the GPU, so let's ignore that one".

 

What's next, "image upscaling in Photoshop is not a good benchmark when run on the CPU because it should run on the NPU"?

As long as both processors do the same work, why does it matter if it could be offloaded to some other component? It's still a way to measure performance.

 

What benchmark would you consider relevant, and what source do you have that the 4800U has twice the performance of the M1? That is a question for @leadeater too, since he clicked "agree" on your post. We have already established that Cinebench is now a bad benchmark that can't be used to measure performance, and apparently video decoding is as well.


28 minutes ago, LAwLz said:

@leadeater too, since he clicked "agree" on your post. We have already established that Cinebench is now a bad benchmark that can't be used to measure performance, and apparently video decoding is as well.

Well, I clicked agree on it when the post only said "I don't buy a computer to run Cinebench.", which I do agree with.

 

Benefit of being a moderator: I can bring up the edit history of a post, so that is the actual quote of the post at the time I clicked agree.


2 hours ago, LAwLz said:

It can't do AV1 decoding on the GPU.

 

The only desktop or laptop chips that can do AV1 decode on the GPU are the 30 series from Nvidia and the 6000 series from AMD. Everything else (which is like 99% of the market) does it on the CPU.

Anyway, I think video decode is a decent benchmark for performance. Even if you try to justify why it shouldn't matter in the real world, it's still a good test because it is a real-world program (such as dav1d or some future video decoder).

 

You can't just make a claim like "the 4800U is almost twice as fast" and then, when people start posting real-world benchmarks proving you otherwise, go "well, that task should be offloaded to the GPU, so let's ignore that one".

 

What's next, "image upscaling in Photoshop is not a good benchmark when run on the CPU because it should run on the NPU"?

As long as both processors do the same work, why does it matter if it could be offloaded to some other component? It's still a way to measure performance.

 

What benchmark would you consider relevant, and what source do you have that the 4800U has twice the performance of the M1? That is a question for @leadeater too, since he clicked "agree" on your post. We have already established that Cinebench is now a bad benchmark that can't be used to measure performance, and apparently video decoding is as well.

 

First off, AV1 is a poor benchmark because of the lack of AV1 content on YouTube.

I prefer to pay attention to benchmarks that matter. 

 

We know the following things:

lack of 4K content on YouTube (even worse for AV1)

YouTube limiting default video quality during COVID-19

slow adoption of 4K displays in households

     

There are also alternative codecs, such as H.265 and VP9, that have wider hardware support for 4K video. There will be a movement to AV1 in time... once more hardware supports it.

 

Just reading this, it states:

Quote

Because it is new, streaming AV1 in HD requires a powerful computer, and only some videos have AV1 available at this time. Choosing to stream AV1 in SD will use AV1 up to 480p, and VP9 for higher formats.

It's not even enabled by default on YouTube at this time.

Linus's videos don't even support AV1 on YouTube, for the few I tested.

 

In general, software decoding is not preferred for any video content.

It's a bad idea to push AV1 decoding just because the graphics card has yet to fully support the content. I stand by my statement: if AV1 is important to you... get a graphics card, or a modern processor with integrated graphics, that supports hardware-based decoding.

 

Why would you want potentially lower battery life and dropped frames as the processor competes with other tasks and processes? The world has moved away from software decoding; until there is wider hardware support, expect VP9 to be the default codec on YouTube for most content.

 

It's not a good use case to test because it's not a feature people will use. A better test would be VP9 playback on the GPU, using hardware-based decoding, if you're planning on testing YouTube playback at 4K.

 

We will not even mention that the M1 laptop doesn't even have a 4K display, but 2560-by-1600 (ref: https://support.apple.com/kb/SP824?locale=en_CA), yet the test was to decode 4K video. A good test or benchmark is based on real-life scenarios.


Quote

What benchmark would you consider relevant

Which benchmarks are important depends on the user and their use case. These are just some examples.

For example, a person who travels and mostly does light work:

      benchmarks around battery life

benchmarks around CPU single-core performance (light workload)

Wi-Fi connectivity

      size and weight

 

For example, a video editor:

    benchmarks around storage performance

    benchmarks around video encoding time

         * dedicated graphics features such as NVENC

* integrated graphics features such as Quick Sync

    benchmarks around multicore performance

 

For example, a streamer:

    benchmarks around game fps

    benchmarks around storage performance

    benchmarks around video encoding time

         * dedicated graphics features such as NVENC

* integrated graphics features such as Quick Sync

    benchmarks around multicore performance

 

For example, a gamer:

    benchmarks around storage performance

    benchmarks around game fps

benchmarks around multi-core and single-core performance

 

on and on..

 

I would ask you to argue how AV1-decoding a 4K video is somehow relevant, and captures the performance that matters, for all the different user profiles out there.


5 minutes ago, comander said:

M1 and Zen 3 have different strengths and weaknesses. 

 

M1 is expensive to make (in terms of die area), has Zen3-like performance (stronger in some areas, weaker in others) and has awesome perf/watt. 

 

Zen 3 is cheaper to make (on a per-core basis) and that's great if you value MOAR COARS. You can even get closer to M1 perf/watt if you clock low enough. 

I think it's not so much that it's better, so much as that it isn't that much worse, which is what a lot of people were expecting, and it's from such an unexpected direction. A lot of people consider it a first-gen chip, so it could potentially see the kind of performance increases Zen saw. Me, I don't know what generation it actually is, or how such things would scale with that architecture, so it's up in the air.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


21 minutes ago, comander said:

The A14X had core-for-core parity vs Intel laptop parts while having better perf/watt (and much higher transistor counts).

 

I'm not really all that surprised with M1, other than how well it handles x86 programs. 

Nonetheless, a lot of people were. The A14 went into cell phones.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.

