M1 Macs Reviewed

6 minutes ago, like_ooh_ahh said:

I will light a candle and include you in my thoughts and prayers for the salvation of your sanity, for using a phone without proper apps instead of getting an iPhone 11 or Galaxy S20 on sale. 😂

What are apps? No idea what you are talking about?

 

The only things I use my phone for are calls, texts and email (I love the Windows Phone email client, which is why I still use it)


35 minutes ago, leadeater said:

What are apps? No idea what you are talking about?

 

The only things I use my phone for are calls, texts and email (I love the Windows Phone email client, which is why I still use it)

The thing I do least on my phone is to use it for calls :P 


38 minutes ago, leadeater said:

The only things I use my phone for are calls, texts and email

I have unlimited calls, texts and 10 GB of data with my postpaid plan, but most of my communication is through WhatsApp, Telegram and Google Chat, so apps matter to me. The only people I communicate with using standard SMS and calls are my family members and very close friends.



The reviews of the M1 MBA and MBP are pretty incredible :D Not only are the products amazing, but this release will ultimately push more Windows ARM laptops as ARM software support becomes more widespread. Rosetta's performance is insane; it's a whole compatibility layer, not just an emulator :o



3 hours ago, Spindel said:

Or microsoft could have some balls and actually drop all legacy crap in its code. 

This is funny. Legacy code does not make much difference, and to be truthful some of it is better than most of the new code.

 

Windows' problem is not legacy code; if the code works, why change it? The main problem with Windows has been new code (e.g. the GUI): the old GUI was quick and had better performance, while the new one picked up a lot of crap.

 

Did you know that Mac OS X is mostly legacy code going back to 1989? Windows also had a total rewrite that removed all the MS-DOS legacy code: Windows NT, which went on to become Windows XP. That was done to remove the limits that MS-DOS and the MS-DOS-based Windows versions (95, 98, ME) had, not to remove legacy code for its own sake.

 

Windows ME was the last Windows built on the old code base. You may find that there is far more old code in Mac OS X, but if it works, it works.

 

Old != Bad

New != Good

 

Microsoft has also been looking at replacing Windows for years, but it would take years for any new OS to come out, if one ever did. An experimental OS called Singularity was made, written mostly in C#.

 


I think I'm fairly impressed with the GPU performance. Not a lot runs on it, so it'll be interesting to actually compare it to Nvidia/AMD performance in the future. Though I think a lot of it comes down to the memory being more tightly integrated with the package; we'll see.

 

Pretty underwhelmed by the CPU performance, however. It's a stupidly wide core running at fairly low clocks, and if Intel hadn't messed up 10nm so badly, Apple would likely have waited to make the move.

 

[attached benchmark chart]

 

The Mac Mini vs. the Intel 1165G7 is the giveaway. It's a 4c/8t Tiger Lake part running on Windows. That's a pretty hefty per-core performance difference in a task that will highly favor a tile-based renderer like Cinema 4D's. Apple has done good work, especially on the GPU side, but they're a full node ahead and have full packaging control while still being quite far behind.


1 hour ago, LAwLz said:

"beating all the competition"

Yeah... by 1%... Way to make the lead sound bigger than it is.

 

 

The relative performance numbers changed by 5 to 10 percentage points.

Did you see the previous thread about this? People were saying SPEC2006 was a joke because it was 14 years old. People said it was only a good benchmark if you wanted to know what old and outdated software would be like on a CPU.

What I said, and was flamed for, was that I don't think SPEC2006 and SPEC2017 are different enough to actually change any conclusion. I said that the scores would probably differ slightly, but not enough to discard SPEC2006 or call it irrelevant. Someone even said SPEC2006 was only "useful for a few people" because it was so old and the numbers it spits out were not relevant to today's software.

For crying out loud, the people in the other thread I was arguing with didn't even realize that SPEC2006 was recompiled with a new compiler. They said things like "SPEC2006 doesn't use newer x86 extensions like AVX", which it absolutely does.

 

All the previous threads have been shitshows.

  • As soon as one benchmark gets posted, people try their hardest to figure out reasons why the results are not valid.
  • Geekbench? Nahh I heard someone say 8 years ago that it was bad so therefore it doesn't count.
  • SPEC2006? Well it's old so it doesn't count.
  • Encoding tests? Probably uses some hardware acceleration so it doesn't count.
  • Other benchmarks posted? Well they are synthetic so they don't count. Oh they aren't synthetic? Well they aren't "actual work" and since it's a Mac it's not a "real computer" so therefore they don't count.
  • A very popular MacOS app was benchmarked? Well I haven't heard of it so therefore it doesn't count.
  • Cinebench? Ehm... It's running on different OSes so it doesn't count. Also this 12 core CPU beats this quad core so therefore the results aren't impressive.

 

In the "M1 benchmarks continue" thread we now have someone trying to dismiss all benchmarks currently released because apparently you can not trust any review out right now because:  

and:  

 

So apparently benchmarks posted at or near launch don't count now.

I guess that was a misunderstanding of what you meant. I thought you meant that the 2006 benchmark is a good determination of overall performance, which I would disagree with simply because I believe no single benchmark is a good determination of a CPU's overall performance. Granted, I don't believe a 10% difference between the 2006 and the 2017 is insignificant. A 10% difference is more than enough to change the conclusion if that was all you were going by.


24 minutes ago, Brooksie359 said:

I guess that was a misunderstanding of what you meant. I thought you meant that the 2006 benchmark is a good determination of overall performance, which I would disagree with simply because I believe no single benchmark is a good determination of a CPU's overall performance. Granted, I don't believe a 10% difference between the 2006 and the 2017 is insignificant. A 10% difference is more than enough to change the conclusion if that was all you were going by.

No, you're not misunderstanding me.

I do mean that SPEC2006 is a good determination of overall performance. What you have to remember is that SPEC2006 isn't "one benchmark". It's 12.

If you don't think 12 different tests are enough to give an overall picture of performance, then I am not sure what would satisfy you.

 

Also, it's not a 10% difference. It's 5% and 10%. 

The arguments made in the previous threads were essentially "it's a garbage benchmark that does not reflect real-world performance because it is so old", and my argument was "it does give a good overview of what performance is like; newer benchmarks might give slightly different results, but not dramatically enough to change any conclusions about the chip". Plenty of people saw the "2006" in the name and went "well this is useless lol" without even understanding the basics.
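
Since the scoring itself keeps getting glossed over: a SPEC-style overall number is the geometric mean of the per-test ratios across the suite's 12 integer tests, so no single subtest dominates the result. A minimal sketch of that aggregation (the ratios below are made-up illustrative numbers, not real SPEC results):

```python
import math

# SPEC-style scoring: each CINT2006 test yields a ratio
# (reference time / measured time); the overall score is the
# geometric mean of those 12 ratios. Illustrative numbers only.
ratios_chip_a = [42.1, 38.5, 55.0, 29.7, 47.3, 33.9, 61.2, 40.8, 36.4, 52.6, 44.0, 30.5]
ratios_chip_b = [39.8, 40.2, 50.1, 31.0, 45.5, 35.1, 57.9, 42.3, 34.8, 49.7, 46.2, 29.4]

def spec_score(ratios):
    """Geometric mean of per-test ratios, the way SPEC aggregates a suite."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

a, b = spec_score(ratios_chip_a), spec_score(ratios_chip_b)
print(f"chip A: {a:.1f}, chip B: {b:.1f}, delta: {100 * (a / b - 1):+.1f}%")
```

The geometric mean is why one outlier subtest moves the overall score far less than it would in a plain average.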


10 minutes ago, LAwLz said:

"it's a garbage benchmark that do not reflect real world performance because it is so old"

Well, I'd just like to say that is your opinion of what was said, and certainly not what I said, at least; which, as far as I can tell, is what you are doing, looking at your post in this topic you tagged me in. I'm not even sure anyone said this at all, not in the way you are portraying it here. But this explains very well the suspicion I had and mentioned in my reply to that post you tagged me in.

 

@Brooksie359

This is why in my above mentioned reply I said it was time to just drop this, it's a complete waste of time.


25 minutes ago, leadeater said:

Well, I'd just like to say that is your opinion of what was said, and certainly not what I said, at least; which, as far as I can tell, is what you are doing, looking at your post in this topic you tagged me in. I'm not even sure anyone said this at all, not in the way you are portraying it here. But this explains very well the suspicion I had and mentioned in my reply to that post you tagged me in.

 

@Brooksie359

This is why in my above mentioned reply I said it was time to just drop this, it's a complete waste of time.

Yes, people did say those things. You might not have said it as directly as some others did, but you heavily implied it. It's hard to quote exact things you said that were wrong, because you mostly dodged any questions I threw at you. The implications were there though, and that is obvious if we look at what other people took from your comments.

For example when I said:

  

On 11/11/2020 at 9:56 AM, LAwLz said:

I would like some evidence that:

1) SPEC2006 is outdated to the point where it no longer gives accurate depictions of the performance difference between two processors in modern workloads.

2) That the tests in SPEC2006 and SPEC2017 are different enough that they provide a significant difference in overall performance scores, to the point where they might be used to come to different conclusions regarding performance difference between two chips. 

and your responses were along the lines of:  

On 11/11/2020 at 10:15 AM, leadeater said:

Call me crazy, but a newer benchmark suite seems like a better idea to me; SPEC didn't create it for no reason.

On 11/11/2020 at 11:20 AM, leadeater said:

SPEC2006 literally contains tests that have been superseded by more modern options included in SPEC2017. Unless you're saying h264ref is more relevant today than it actually is? As an example.

 

No, I said the same tests in SPEC2006 and SPEC2017 have different performance deltas for the same products. I did point you to the names of two of them.

  

Then you are implying (on purpose or not) that SPEC2006 does not accurately represent real-world performance in modern applications, which it clearly does, because the difference between SPEC2006 and SPEC2017 is like 5-10%. 5-10% is not enough to make someone go "oh, this is completely irrelevant to modern applications! The benchmark scores are completely different! This is only applicable if you want to run a program from 2010!".

 

 

When someone says:

Quote

The overall estimate of performance will not be that different between SPEC2006 and SPEC2017.

and you reply with what boils down to:

Quote

SPEC2006 is old. The same tests in SPEC2006 and SPEC2017 have different performance deltas for the same products.

then the implication is that SPEC2006 does not accurately represent the performance of modern programs. Even though everything you said is technically correct, such as SPEC2017 and SPEC2006 having performance deltas, and SPEC2006 being "old", you're still implying things that are incorrect, such as that the statement you are replying to is wrong (which it wasn't, I might add).

 

 

Here are two quotes from our conversation that sum it up pretty well. One from me, and then your response.

On 11/11/2020 at 11:52 AM, LAwLz said:

I strongly disagree with the idea you are pushing that just because there is a newer benchmark out means the old one is irrelevant and inaccurate.

On 11/11/2020 at 11:59 AM, leadeater said:

These SPEC benchmarks were created for a reason; they are real workloads. What's in SPEC2006 is relevant to that era, and what's in SPEC2017 is relevant to that era. Not sure why this is so hard.

  

 

 


19 hours ago, ColeWorld said:

 This rift between LMG and Apple just puts LMG a bit behind on reviews for Apple products. 

 

I don't know if I'd call it a "rift". LMG goes back and forth between being super thirsty for apple juice and being bitter they can't get it, while Apple couldn't care less about LMG.

 

 

15 hours ago, BuckGup said:

So when are we going to see a Mac Pro with discrete Apple graphics and an equivalent Xeon-class M1 chip?

I'd think the Mac Pro-level stuff is at least a year out... or it could be the rumored Christmas surprise. One or the other.



4 hours ago, like_ooh_ahh said:

But you can see iPads being used by pilots and soldiers. :P

Or in space - I believe several iPads were floating around on the Crew Dragon Demo-2 mission half a year ago.



Can anyone explain to me why the M1 chip is able to compete with a high-end desktop CPU like the 5950X in benchmarks? Is the SoC the same in the Mac Mini and the MacBook Air?


1 minute ago, IAmAndre said:

Can anyone explain to me why the M1 chip is able to compete with a high-end desktop CPU like the 5950X in benchmarks? Is the SoC the same in the Mac Mini and the MacBook Air?

The 5nm process, which is more advanced than what we are seeing in the 5950X, plus the fact that the Apple M1 is an SoC, which allows much faster communication between the CPU, memory and GPU.


The make-or-break for these first-gen Apple Silicon Macs is how well they run x86 apps during the transition period.

 

Rosetta 2 is an absolute masterpiece of software architecture.

 

Almost all the reviewers are basically blown away by the fact that it runs faster than the x86 chips while running benchmarks and apps designed for x86.


32 minutes ago, IAmAndre said:

Can anyone explain to me why the M1 chip is able to compete with a high-end desktop CPU like the 5950X in benchmarks? Is the SoC the same in the Mac Mini and the MacBook Air?

Dumping all the legacy shit x86 has to deal with.

And also a bit of a "think different" mindset when it comes to the design of the CPU (which is possible by not having to care about 45-year-old legacy functionality).

 

But it's important to remember that this is an SoC; it's the entire packaging that helps with this, not just the CPU being the brain of a T800 that was crushed in a hydraulic press in 1985.


33 minutes ago, xtroria said:

The 5nm process, which is more advanced than what we are seeing in the 5950X, plus the fact that the Apple M1 is an SoC, which allows much faster communication between the CPU, memory and GPU.

I believe the unified memory plays a part as well: instead of data traveling across the entire motherboard between the CPU, RAM and GPU, it all stays inside the SoC, which presumably results in reduced latency. Not to mention, the M1 chip also has a Secure Enclave coprocessor, which is responsible for things like encryption acceleration, and an integrated HEVC/H.264 decoder. As far as I know, these new Macs also have an NVMe SSD, so faster sequential reads and writes.
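
For a rough intuition about why keeping everything in one pool of memory helps (a toy host-side sketch only; real PCIe transfers and GPU memory behave very differently):

```python
import time

# Toy model: a discrete GPU needs its own copy of the data, while
# unified memory lets the CPU and GPU share one buffer.
buf = bytearray(256 * 1024 * 1024)  # a 256 MiB buffer of "frame data"

t0 = time.perf_counter()
copied = bytes(buf)           # discrete-memory model: duplicate the data
t1 = time.perf_counter()
shared = memoryview(buf)      # unified-memory model: share it, zero copies
t2 = time.perf_counter()

print(f"copy:  {(t1 - t0) * 1000:.2f} ms")
print(f"share: {(t2 - t1) * 1e6:.2f} µs")
```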



So, for computer-literate people, is the MacBook M1 worth buying now, compared to an equivalent Windows laptop?


Just now, AzzaNezz said:

So, for computer-literate people, is the MacBook M1 worth buying now, compared to an equivalent Windows laptop?

If all the software you use (or an equivalent) is available for macOS, then yes. Otherwise no.


38 minutes ago, xtroria said:

The make-or-break for these first-gen Apple Silicon Macs is how well they run x86 apps during the transition period.

 

Rosetta 2 is an absolute masterpiece of software architecture.

 

Almost all the reviewers are basically blown away by the fact that it runs faster than the x86 chips while running benchmarks and apps designed for x86.

Rosetta 2 is indeed wonderful software, and it covers a noticeable chunk of App Store software (and some non-App Store software). However, there is still major software that has not had time to update. I also doubt that Rosetta 2 can handle translation for Intel-platform hardware acceleration (i.e. QuickSync/NVEnc, virtualisation extensions).

 

It's the chicken-and-egg problem that Apple partially solved with the DTK. In order for developers to work out whether their app works on the platform, the app needs to be tested on the platform. In the end they just needed to launch it and go for the low end, where most people won't notice that their work-critical app is missing or too slow.
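
Incidentally, you can check from the Mach-O header whether a given app binary ships arm64 code or would need translation. Here's a rough Python sketch (the app path is hypothetical, and this only inspects the header; it says nothing about how well Rosetta 2 runs the app):

```python
import struct

# Constants from mach-o/loader.h and mach/machine.h
MH_MAGIC_64 = 0xFEEDFACF      # thin 64-bit Mach-O (stored little-endian)
FAT_MAGIC = 0xCAFEBABE        # universal "fat" binary (big-endian header)
CPU_TYPE_X86_64 = 0x01000007
CPU_TYPE_ARM64 = 0x0100000C

def binary_archs(path):
    """Return the set of CPU types a Mach-O binary was built for."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack(">I", f.read(4))
        if magic == FAT_MAGIC:
            (nfat,) = struct.unpack(">I", f.read(4))
            archs = set()
            for _ in range(nfat):
                # fat_arch: cputype, cpusubtype, offset, size, align
                cputype, *_ = struct.unpack(">5I", f.read(20))
                archs.add(cputype)
            return archs
        f.seek(0)
        magic, cputype = struct.unpack("<2I", f.read(8))
        if magic == MH_MAGIC_64:
            return {cputype}
        raise ValueError("not a Mach-O layout this sketch understands")

# Hypothetical path, just for illustration:
archs = binary_archs("/Applications/SomeApp.app/Contents/MacOS/SomeApp")
if CPU_TYPE_ARM64 in archs:
    print("native on Apple Silicon")
elif CPU_TYPE_X86_64 in archs:
    print("x86_64 only: would run under Rosetta 2 translation")
```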


1 hour ago, xtroria said:

The 5nm process, which is more advanced than what we are seeing in the 5950X, plus the fact that the Apple M1 is an SoC, which allows much faster communication between the CPU, memory and GPU.

All a node shrink does is improve efficiency by 20-30%. What we're seeing is definitely not a 20-30% improvement with less performance. What we're seeing is a massive gain in efficiency and a massive gain in performance compared to equivalent-class CPUs and GPUs.
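
As a back-of-the-envelope check on that claim (all numbers here are illustrative assumptions, not measurements):

```python
# If a full node shrink is worth at most ~30% efficiency, how much of an
# observed perf/W gain is left over for architecture and packaging?
observed_perf_per_watt_gain = 2.5   # assumed "2.5x the perf/W" style claim
node_gain = 1.30                    # generous 30% credited to the 5nm node

residual = observed_perf_per_watt_gain / node_gain
print(f"gain not explained by the node alone: {residual:.2f}x")  # ~1.92x
```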

1 hour ago, IAmAndre said:

Can anyone explain to me why the M1 chip is able to compete with a high-end desktop CPU like the 5950X in benchmarks? Is the SoC the same in the Mac Mini and the MacBook Air?

The architecture, mainly. The Firestorm and Icestorm cores, the same ones used in the A14, have been praised for their performance and efficiency.

Unified memory also plays a huge part, along with the ultra-wide execution units. Plus a lot of Apple's secret-sauce optimization thanks to full vertical integration.

 

Johny Srouji and his team look to be beasts at chip design.


4 hours ago, Soppro said:

Windows ARM laptops as ARM software support becomes more widespread

[image: "Press X to doubt" meme]



3 hours ago, LAwLz said:

No, you're not misunderstanding me.

I do mean that SPEC2006 is a good determination of overall performance. What you have to remember is that SPEC2006 isn't "one benchmark". It's 12.

If you don't think 12 different tests are enough to give an overall picture of performance, then I am not sure what would satisfy you.

 

Also, it's not a 10% difference. It's 5% and 10%. 

The arguments made in the previous threads were essentially "it's a garbage benchmark that does not reflect real-world performance because it is so old", and my argument was "it does give a good overview of what performance is like; newer benchmarks might give slightly different results, but not dramatically enough to change any conclusions about the chip". Plenty of people saw the "2006" in the name and went "well this is useless lol" without even understanding the basics.

I still disagree that a single benchmark suite is sufficient to reach an accurate conclusion, even if it is a compilation of different benchmarks, because oftentimes things are missed. Also, my point was that it was up to a 10% difference, which is important because it shows that results could vary by that amount, and maybe even more in some cases, between the two benchmarks. I guess if you are looking for a ballpark determination of performance it would be OK to use a single benchmark suite, but I for one like to have as much information as possible before making a determination of performance, so that the determination is more accurate.


1 hour ago, RedRound2 said:

All a node shrink does is improve efficiency by 20-30%. What we're seeing is definitely not a 20-30% improvement with less performance. What we're seeing is a massive gain in efficiency and a massive gain in performance compared to equivalent-class CPUs and GPUs.

The architecture, mainly. The Firestorm and Icestorm cores, the same ones used in the A14, have been praised for their performance and efficiency.

Unified memory also plays a huge part, along with the ultra-wide execution units. Plus a lot of Apple's secret-sauce optimization thanks to full vertical integration.

 

Johny Srouji and his team look to be beasts at chip design.

I don't actually buy this. It's been a while, but around the time first-gen Threadripper was coming out there were efforts to measure per-core power consumption, and it was maxing out around 4-6 W. Presuming that stayed similar for Zen 3, that actually puts it very competitive with the A14 at the literal core level, and it still has both a small ST and a massive MT performance lead.

 

The big deal is interconnects. Chiplets are a highly power-inefficient approach to interconnects, and likewise motherboards. But the design goals and targets have been very different. By avoiding SoCs, the efficiency of everything other than the cores themselves becomes a problem. I mean, the X570 chipset by itself draws 7 watts. Additionally, the core designs are made for dramatically higher total interconnect throughput than anything Apple needs, because Apple isn't using the architecture for literally anything other than these fixed-format devices.

 

And then it also becomes a problem of target market. With desktops moving so heavily into the core wars, these sorts of lean, mean, low-core-count (4-core) designs are far away from the standard basis. The goal becomes making things scale up rather than scale down.

 

It isn't like Apple could suddenly turn around a 16-, 32-, or 64-core SoC. It's a rather limited approach, and it's why AMD went away from monolithic designs in the first place.


