
M1 Macs Reviewed

randomhkkid
1 hour ago, RedRound2 said:

All a node reduction does is improve efficiency by 20-30%. What we're seeing is definitely not a mere 20-30% improvement. What we're seeing is a massive gain in efficiency and a massive gain in performance compared to equivalent-class CPUs and GPUs.

The architecture, mainly. The Firestorm and Icestorm cores, the same ones used in the A14, have been praised for their performance and efficiency.

Unified memory also plays a huge part, along with the ultra-wide execution units, plus a lot of Apple's secret-sauce optimization enabled by full vertical integration.

 

Johny Srouji and his team look to be beasts at chip design.

 

1 hour ago, Spindel said:

Dumping all the legacy shit x86 has to deal with.

And also a bit of a "think different" mindset in the design of the CPU (which is possible when you don't have to care about 45-year-old legacy functionality).

 

But it's important to remember that this is an SoC: it's the entire package that helps with this, not just the CPU being the brain of a T800 that was crushed in a hydraulic press in 1985.

 

2 hours ago, xtroria said:

The 5 nm process, which is more advanced than what we're seeing in the 5950X, and the fact that the M1 is an SoC, which allows much faster communication between the CPU, memory, and GPU.

So what you're saying is that Apple is the new CPU king, or will be once all apps get optimized? Can we expect a trend to start here, with AMD/Intel offering SoCs or similar architectures as well?


4 minutes ago, IAmAndre said:

 

 

So what you're saying is that Apple is the new CPU king, or will be once all apps get optimized? Can we expect a trend to start here, with AMD/Intel offering SoCs or similar architectures as well?

That is what they seem to be saying, but it's not clear that's what will actually happen.

 

Certainly for laptops, SoCs are almost always more efficient than the same design with a flexible arrangement and configurable IO. There are, however, fundamental limits to the size of a chip that can be made, and made with reasonable yields, and AMD has just spent years proving out its chiplet design (a move in the exact opposite direction from SoCs) precisely because it lets them dramatically scale up both individual cores and the total core count. Of course, AMD does now make some pretty aggressive SoCs in the Xbox Series X and PS5, so yes, I do expect certain applications to move in that direction.

 

It is somewhat unfortunate, because it means almost no chance of user-upgradeable RAM or many other parts in the future.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


12 minutes ago, Curufinwe_wins said:

That is what they seem to be saying, but it's not clear that is what would actually happen.

 

Certainly for laptops, SoCs are almost always more efficient than the same design with a flexible arrangement and configurable IO. There are, however, fundamental limits to the size of a chip that can be made, and made with reasonable yields, and AMD has just spent years proving out its chiplet design (a move in the exact opposite direction from SoCs) precisely because it lets them dramatically scale up both individual cores and the total core count. Of course, AMD does now make some pretty aggressive SoCs in the Xbox Series X and PS5, so yes, I do expect certain applications to move in that direction.

 

It is somewhat unfortunate because that means almost no chance of user upgradeable ram or many other parts in the future.

That's interesting. Of course it sucks that it's not user-upgradable, but since consoles all use SoCs, it shouldn't be that hard for AMD to let developers optimize their PC games for a (potential) AMD PC SoC lineup.


14 minutes ago, Brooksie359 said:

I still disagree that a single benchmark suite is sufficient to draw an accurate conclusion, even if it is a compilation of different benchmarks, because things often get missed. Also, my point was that there was up to a 10% difference, which matters because it shows that results can vary by that amount, maybe even more in some cases, between the two benchmark suites. I guess if you are looking for a ballpark determination of performance it would be OK to use a single suite, but I for one like to have as much information as possible before making a determination of performance, so that the determination is more accurate.

Ballpark determination of performance is what I was going for, yes.

I mean, more info is clearly better, but I think SPEC and even Geekbench (assuming the memory subsystem has already been determined to be good) are more than good enough to give accurate and reliable estimates of performance. Of course different benchmarks can give different results, but it's not like when someone asks "which processor is better, this Intel or this AMD?" people post 20 different benchmarks to determine, to within a couple of percent, which CPU is best at each and every workload. People make generalizations and give ballpark numbers, and for that SPEC, even SPEC2006, is enough in my eyes.

 

The entire purpose of SPEC is, in fact, to give accurate and reliable generalized performance scores that can be used to compare products against one another. AMD, Intel, Nvidia, and a bunch of other companies literally created SPEC with the intention of it being a single benchmark suite their products could be compared with.

 

Saying that SPEC isn't a good indicator of performance is like saying AMD, Intel, Nvidia, and the others don't know what's important to test when comparing products. They most certainly do.

It doesn't give you an exact performance metric for a specific application someone might be interested in, but it does give you accurate and reliable "ballpark figures" that people can quote in generalized statements like "processor X is better than processor Y" or "processor PURPLE performs roughly the same as processor YELLOW; either one will be good for you".
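For what it's worth, SPEC's overall score is a geometric mean of per-benchmark ratios against a reference machine, which is part of why it works as a single "ballpark" number: no one subtest can dominate the aggregate the way it could in an arithmetic mean. A minimal sketch (the subtest ratios below are made up purely for illustration):

```python
import math

def spec_style_score(ratios):
    """Geometric mean of per-benchmark speedup ratios vs. a reference machine."""
    return math.prod(ratios) ** (1.0 / len(ratios))

# Hypothetical per-subtest ratios for two CPUs, not real SPEC results.
cpu_a = [40.0, 55.0, 38.0, 61.0]
cpu_b = [42.0, 50.0, 41.0, 57.0]

print(spec_style_score(cpu_a))
print(spec_style_score(cpu_b))
```

One outlier subtest moves the geometric mean far less than it would an average, which is what makes the single number usable for "X is roughly as fast as Y" statements.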


21 minutes ago, Curufinwe_wins said:

I don't actually buy this. It's been a while, but back when first-gen Threadripper was coming out there were efforts to measure per-core power consumption, and it maxed out around 4-6 W. Presuming that stayed similar for Zen 3, that actually puts it very competitive with the A14 at the literal core level, and it still has both a small ST and a massive MT performance lead.

AMD cores do not "max out at 4-6W per core".

 

 

When running CineBench in single core mode, the 5950X is 8% faster than the M1.

The 5950X consumes 49 watts during the test.

The M1 consumes ~10 watts during the test.

 

Light up another core on the 5950X and the power consumption goes up to 59 watts. Light up a third core and power consumption goes up to 92 watts.

 

Each Zen3 core uses around 10 watts of power, not 4-6 or whatever numbers I've seen get thrown around.

The only time per-core wattage goes down to the ~5 W figure on Zen 3 is when the cores are downclocking themselves a lot and all the interconnects and other parts of the chip are already powered. For example, if you are already loading 15 cores on a 16-core CPU, the voltage and power are "already there" on the chiplet, and loading up that 16th core can look really "efficient" since all the fixed overhead of the design is already being paid for. I hope I'm being clear, but I'm not sure I'm explaining it that well.

Power consumption of each core stays very similar on Zen 3 chips all the way up until around 12 cores are lit up. It's only after that that per-core power consumption starts dropping significantly, and that's because each 100 MHz you drop affects a lot of cores, but it also affects performance a lot.

 

 

The reason some AMD chips can get really low per-core wattage is that they downclock themselves a lot, but at that point even Zen 3 is getting significantly lower per-core performance than the M1.

On the 5800X, for example, each core consumes around 15 watts even when down at 4.4 GHz, at which point the cores perform worse per core than the M1.

 

 

In order for Zen 3 cores to be competitive with Firestorm cores, they have to be clocked well outside their optimal power-efficiency point.

With Zen 3 you have to pick between high performance and power efficiency. With the M1 you get both.
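Taking the numbers in this post at face value (5950X 8% faster at 49 W package power vs. the M1 at ~10 W in the single-core test), the implied perf-per-watt gap in that one test can be worked out directly. A back-of-the-envelope sketch, not a measurement:

```python
# Single-threaded Cinebench figures as quoted above (package power, relative score).
m1_power_w, m1_score = 10.0, 1.00      # normalize the M1's score to 1.00
zen3_power_w, zen3_score = 49.0, 1.08  # 5950X: 8% faster, at 49 W

m1_perf_per_watt = m1_score / m1_power_w
zen3_perf_per_watt = zen3_score / zen3_power_w

# Roughly a 4.5x perf-per-watt advantage for the M1 in this particular test.
print(m1_perf_per_watt / zen3_perf_per_watt)
```

Note this compares package power, not core power, which is exactly the distinction argued over later in the thread.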


42 minutes ago, IAmAndre said:

So what you're saying is that Apple is the new CPU king, or will be once all apps get optimized? Can we expect a trend to start here, with AMD/Intel offering SoCs or similar architectures as well?

I wouldn't call them CPU king yet. But it looks very likely, especially in the ultrabook class of laptops. I mean, we're literally seeing some amazing things happen on a fanless MacBook Air on battery (improved, even) that just weren't possible before. But we have yet to see whether they can hold up similar numbers on the higher-end chips, the ones destined to be inside the 14" and 16" models, the iMacs, and the Mac Pro.

 

But what I can say for sure is that this is a really exciting time in the PC space. We have a third, very different entrant in a monopoly-turned-duopoly CPU/GPU space, and it's safe to say at this point that they've entered with a slam dunk.


18 minutes ago, LAwLz said:

AMD cores do not "max out at 4-6W per core".

 

 

When running CineBench in single core mode, the 5950X is 8% faster than the M1.

The 5950X consumes 49 watts during the test.

The M1 consumes ~10 watts during the test.

 

Light up another core on the 5950X and the power consumption goes up to 59 watts. Light up a third core and power consumption goes up to 92 watts.

 

Each Zen3 core uses around 10 watts of power, not 4-6 or whatever numbers I've seen get thrown around.

The only time per-core wattage goes down to the ~5 W figure on Zen 3 is when the cores are downclocking themselves a lot and all the interconnects and other parts of the chip are already powered. For example, if you are already loading 15 cores on a 16-core CPU, the voltage and power are "already there" on the chiplet, and loading up that 16th core can look really "efficient" since all the fixed overhead of the design is already being paid for. I hope I'm being clear, but I'm not sure I'm explaining it that well.

Power consumption of each core stays very similar on Zen 3 chips all the way up until around 12 cores are lit up. It's only after that that per-core power consumption starts dropping significantly, and that's because each 100 MHz you drop affects a lot of cores, but it also affects performance a lot.

 

 

The reason some AMD chips can get really low per-core wattage is that they downclock themselves a lot, but at that point even Zen 3 is getting significantly lower per-core performance than the M1.

On the 5800X, for example, each core consumes around 15 watts even when down at 4.4 GHz, at which point the cores perform worse per core than the M1.

 

 

In order for Zen 3 cores to be competitive with Firestorm cores, they have to be clocked well outside their optimal power-efficiency point.

With Zen 3 you have to pick between high performance and power efficiency. With the M1 you get both.

Yes, that's a fair correction now that I've looked back at it. I was looking at the old Zen Threadripper levels, which ran at noticeably lower frequencies.

 

However, my main point was that the majority of that 50 W was not core power but IO/interconnect, which is literally a design 'feature' of the chiplet approach. Even at the 6 W, 10 W, or even 20 W level, it isn't that AMD is more efficient at low thread counts (we haven't seen anything big enough from Apple to need ring buses, meshes, or chiplets); it's just that 50 W vs. 10 W is rather unreflective of relative core strengths. The real issue at the moment is literally everything other than the cores (or rather, those are the biggest contributors).

 

It might be closer to 1.5:1 on the pure core side (efficiency). Which, yes, still isn't better, but it's a lot less bad than the apparent 5:1 difference.
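The "most of the package power isn't core" argument can be sketched with a toy two-point linear model, P(n) = uncore + per_core × n, fitted to the 1-core and 2-core package figures quoted earlier in the thread (49 W and 59 W). This is an illustration of the reasoning, not real telemetry:

```python
def split_package_power(p1_watts, p2_watts):
    """Fit P(n) = uncore + per_core * n to package power at 1 and 2 loaded cores."""
    per_core = p2_watts - p1_watts    # marginal watts of lighting up the second core
    uncore = p1_watts - per_core      # fixed cost: IO die, interconnect, etc.
    return uncore, per_core

uncore_w, core_w = split_package_power(49.0, 59.0)
print(uncore_w, core_w)  # most of the 1T package power is not the core itself
```

Under this crude model, ~39 W of the 49 W single-thread package draw is fixed overhead and only ~10 W is the core, which is the gist of the chiplet-overhead point being made here.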

 

It is interesting that the 5600X is closer to 70% per-core power in 1T at 4650/4700 MHz than the equivalent thread/frequency loads on the other chips (17 W vs 12 W).

 

https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-dive-review-5950x-5900x-5800x-and-5700x-tested/8

 

 

Again, I have never argued that Apple doesn't have by far the most efficient high-performance system here, just that the majority of the distinction currently lies in things besides the cores themselves.


37 minutes ago, IAmAndre said:

So what you're saying now is Apple is the new CPU king or will be when all apps get optimized? Can we expect a trend to start from here and AMD/Intel to offer SoC's or similar architectures as well?

I think it's a bit more complicated than that.

In my opinion, there is no debate. Apple's core design is by far the best. It matches AMD in terms of performance but at a fraction of the power consumption.

It beats the crap out of anything Intel has on the market right now.

But that's for "optimized apps" that have been written for the M1. It seems like a lot of programs are already supported, and support is growing at a very rapid pace, so it won't be long until basically all apps give you optimal performance on the M1. But during this coming year or so, there will be some programs that would give you higher performance on a theoretical "Zen 3 Mac", though that Mac would also consume more power.

 

But the problem right now is that Apple has only released their "low end" SoC.

Both AMD and Intel are still the "CPU kings" simply because they offer 6 and 8 core processors while Apple only offers a quad core right now.

 

Apple shares first place with AMD in terms of single-core performance.

Apple is the new king of CPU efficiency.

Apple is the new king of quad core performance, the only category where they offer their own CPUs right now.

 

 

I don't think either AMD or Intel can really answer this. AMD has been pushing hard and doing its best for several years already and is going as fast as it can. Intel has recently started putting the pedal to the metal as well, just to keep up with AMD.

x86 is already being pushed forward as fast as it can go. Meanwhile, Apple is able to beat AMD and Intel from the get-go and probably has a couple of aces up its sleeve for future versions.


1 minute ago, LAwLz said:

I think it's a bit more complicated than that.

In my opinion, there is no debate. Apple's core design is by far the best. It matches AMD in terms of performance but at a fraction of the power consumption.

It beats the crap out of anything Intel has on the market right now.

But that's for "optimized apps" that have been written for the M1. It seems like a lot of programs are already supported, and support is growing at a very rapid pace, so it won't be long until basically all apps give you optimal performance on the M1. But during this coming year or so, there will be some programs that would give you higher performance on a theoretical "Zen 3 Mac", though that Mac would also consume more power.

 

But the problem right now is that Apple has only released their "low end" SoC.

Both AMD and Intel are still the "CPU kings" simply because they offer 6 and 8 core processors while Apple only offers a quad core right now.

 

Apple shares first place with AMD in terms of single-core performance.

Apple is the new king of CPU efficiency.

Apple is the new king of quad core performance, the only category where they offer their own CPUs right now.

 

 

I don't think either AMD or Intel can really answer this. AMD has been pushing hard and doing its best for several years already and is going as fast as it can. Intel has recently started putting the pedal to the metal as well, just to keep up with AMD.

x86 is already being pushed forward as fast as it can go. Meanwhile, Apple is able to beat AMD and Intel from the get-go and probably has a couple of aces up its sleeve for future versions.

I think I agree with almost all of this. I would only caveat that we haven't yet seen how Apple will try (if they try at all) to target heavily multithreaded workloads with a higher-power chip (the space occupied by ARM servers or Linux workstations now, or that a 5900X actually competes in).

 

I want to see a more in-depth comparison between the MacBook Air and the Mac mini, because it seems like the doubling of power might not be buying much performance, which in fairness makes the 10 W variants even more insanely good.

 

The biggest problem as I see it for AMD is that the strategy to answer and surpass Intel on the desktop is exactly the wrong direction for answering Apple in fixed-format devices. Chiplets are the worst possible choice for a laptop, and while AMD has done some pretty good work with higher-power SoCs, I don't know that they have the bandwidth to move in both directions at once against the pace of the two juggernauts (funding/R&D-wise).


3 minutes ago, LAwLz said:

I think it's a bit more complicated than that.

In my opinion, there is no debate. Apple's core design is by far the best. It matches AMD in terms of performance but at a fraction of the power consumption.

It beats the crap out of anything Intel has on the market right now.

But that's for "optimized apps" that have been written for the M1. It seems like a lot of programs are already supported, and support is growing at a very rapid pace, so it won't be long until basically all apps give you optimal performance on the M1. But during this coming year or so, there will be some programs that would give you higher performance on a theoretical "Zen 3 Mac", though that Mac would also consume more power.

 

But the problem right now is that Apple has only released their "low end" SoC.

Both AMD and Intel are still the "CPU kings" simply because they offer 6 and 8 core processors while Apple only offers a quad core right now.

 

Apple shares first place with AMD in terms of single-core performance.

Apple is the new king of CPU efficiency.

Apple is the new king of quad core performance, the only category where they offer their own CPUs right now.

 

 

I don't think either AMD or Intel can really answer this. AMD has been pushing hard and doing its best for several years already and is going as fast as it can. Intel has recently started putting the pedal to the metal as well, just to keep up with AMD.

x86 is already being pushed forward as fast as it can go. Meanwhile, Apple is able to beat AMD and Intel from the get-go and probably has a couple of aces up its sleeve for future versions.

I would add that Apple seems to have given up a few fairly critical things while achieving those numbers. The big one for me is that there are apparently no PCIe lanes to spare. IMHO they need six. They're Apple, so they don't have to adhere to 4/8/16 like everyone else. Six lanes would likely be enough for two Thunderbolt ports (yes, those theoretically require eight, but a lot of people don't max out the bandwidth on each port) or (not and) one big GPU. So Thunderbolt for the audio people and GPUs for the graphics people.
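The lane arithmetic above can be sketched numerically. This is a rough oversubscription estimate under assumed PCIe 3.0 and Thunderbolt 3 rates (roughly 0.985 GB/s usable per PCIe 3.0 lane and 5 GB/s peak per TB3 port), not anything Apple has published:

```python
# Back-of-the-envelope oversubscription check for the lane counts discussed above.
PCIE3_GBPS_PER_LANE = 0.985  # GB/s usable per PCIe 3.0 lane (assumed)
TB3_PEAK = 5.0               # GB/s (40 Gbps) peak per Thunderbolt 3 port (assumed)

def oversubscription(ports, lanes):
    """Ratio of demand to supply if every port runs flat out; >1 means shared lanes."""
    demand = ports * TB3_PEAK
    supply = lanes * PCIE3_GBPS_PER_LANE
    return demand / supply

print(round(oversubscription(2, 8), 2))  # ~1.27: even 8 lanes can't feed two maxed ports
print(round(oversubscription(2, 6), 2))  # ~1.69: 6 lanes lean on ports rarely being maxed
```

Which is the post's point: six lanes only works because most users never saturate both ports at once.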

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


29 minutes ago, LAwLz said:

AMD cores do not "max out at 4-6W per core".

 

 

When running CineBench in single core mode, the 5950X is 8% faster than the M1.

The 5950X consumes 49 watts during the test.

The M1 consumes ~10 watts during the test.

 

Light up another core on the 5950X and the power consumption goes up to 59 watts. Light up a third core and power consumption goes up to 92 watts.

 

Each Zen3 core uses around 10 watts of power, not 4-6 or whatever numbers I've seen get thrown around.

The only time per-core wattage goes down to the ~5 W figure on Zen 3 is when the cores are downclocking themselves a lot and all the interconnects and other parts of the chip are already powered. For example, if you are already loading 15 cores on a 16-core CPU, the voltage and power are "already there" on the chiplet, and loading up that 16th core can look really "efficient" since all the fixed overhead of the design is already being paid for. I hope I'm being clear, but I'm not sure I'm explaining it that well.

Power consumption of each core stays very similar on Zen 3 chips all the way up until around 12 cores are lit up. It's only after that that per-core power consumption starts dropping significantly, and that's because each 100 MHz you drop affects a lot of cores, but it also affects performance a lot.

 

 

The reason some AMD chips can get really low per-core wattage is that they downclock themselves a lot, but at that point even Zen 3 is getting significantly lower per-core performance than the M1.

On the 5800X, for example, each core consumes around 15 watts even when down at 4.4 GHz, at which point the cores perform worse per core than the M1.

 

 

In order for Zen 3 cores to be competitive with Firestorm cores, they have to be clocked well outside their optimal power-efficiency point.

With Zen 3 you have to pick between high performance and power efficiency. With the M1 you get both.

Where are the Cinebench stats on the M1?

Could you link them?


8 hours ago, Spindel said:

Or Microsoft could have some balls and actually drop all the legacy crap in its code.

Well, that's what Windows 10X (in the works) will be. Legacy apps won't work; developers will have to use modern frameworks/technologies for their apps to run on it. (Office and the new Chromium-based Edge browser will run natively on it.)


3 minutes ago, pas008 said:

Where are the Cinebench stats on the M1?

Could you link them?

[attached image: Cinebench results]

 

Cinebench is probably the most favorable benchmark so far for x86, by the way. I'm sure some actual AVX code will get run soon enough, which might make things look silly, but for now this can mostly be seen as a worst case for Apple.

 

[attached image: Cinebench results]


1 minute ago, Curufinwe_wins said:

[attached image: Cinebench results]

 

Cinebench is probably the most favorable benchmark so far for Intel btw.

 

[attached image: Cinebench results]

Thanks, and I disagree; Cinebench usually lines up with my CAD and gaming needs.

And this M1 is looking to be quite the little performer.

Impressed.


2 hours ago, Portablejim said:

Rosetta 2 is indeed wonderful software, and it covers a noticeable chunk of the in-App-Store software (and some non-App-Store software). However, there is still major software that hasn't had time to update. I also doubt that Rosetta 2 can handle translation of Intel-platform hardware acceleration (i.e. Quick Sync/NVENC, virtualization extensions).

 

It's the chicken-and-egg problem that Apple partially solved with the DTK. For developers to work out whether their app works on the platform, the app needs to be tested on the platform. In the end they just needed to launch it, and go for the low end, where most people won't notice that their work-critical app is missing or too slow.

Hardware acceleration is an entirely different ball game, because it requires proprietary hardware and software in the first place.

 

Apple's execution in releasing the product as the MBA, the MBP 13, and the Mac mini is on point, TBH. The performance gain can be felt the most on these products, and the vast majority of users of these products probably won't notice the "transition period", since all they do is web browsing and word processing.

 

Apple won't run into the problem of developers not developing apps for the Mac, because developers will only need to focus on making one version in the future, and companies like Adobe and Autodesk will be more likely to put effort into Apple Silicon development.


3 minutes ago, pas008 said:

Thanks, and I disagree; Cinebench usually lines up with my CAD and gaming needs.

And this M1 is looking to be quite the little performer.

Impressed.

It does do some pretty interesting things, but not everywhere, for everything. Whether it's of use or not is going to have to line up with your use case. It's not hard to build a desktop that will crush it at a lot of things for the same money, for example. What's really, really interesting is not for absolutely everything. It's got some hefty weaknesses. Its GPU is great for an iGPU, but iGPUs still suck, and there is NO WAY to stick on anything else. There's also a hard memory cap.


5 minutes ago, pas008 said:

Thanks, and I disagree; Cinebench usually lines up with my CAD and gaming needs.

And this M1 is looking to be quite the little performer.

Impressed.

It is impressive. I'm just noting that this is the only benchmark I've seen so far that shows Intel holding its own in single core, and that shows a 4800U as faster than the M1 in multithreaded loads. Everything else we've seen so far looks even better for the M1 than this does.


7 minutes ago, Bombastinator said:

It does do some pretty interesting things, but not everywhere for everything.  whether it’s of use or not is going to have to line up with your use case.   It’s not hard to make a desktop that will crush it for a lot of things for the same money for example.

 

6 minutes ago, Curufinwe_wins said:

It is impressive. I'm just noting that this is the only benchmark I've seen so far that shows Intel holding its own in single core, and that shows a 4800U as faster than the M1 in multithreaded loads. Everything else we've seen so far looks even better for the M1 than this does.

To me this isn't only a win for Apple.

It's a win for ARM.

 


32 minutes ago, LAwLz said:

But the problem right now is that Apple has only released their "low end" SoC.

Both AMD and Intel are still the "CPU kings" simply because they offer 6 and 8 core processors while Apple only offers a quad core right now.

Why does it matter, given that this 4-core CPU is able to compete against a 5950X in benchmarks? I guess that would mean the high-end models going into the iMac should obliterate Threadripper CPUs in optimized apps?


17 minutes ago, pas008 said:

 

To me this isn't only a win for Apple.

It's a win for ARM.

 

Mmmm. Maybe. Apple Silicon is ARM-based, but actual ARM Cortex stuff is a different beast. We'll see what Nvidia does.


19 minutes ago, IAmAndre said:

Why does it matter, given that this 4-core CPU is able to compete against a 5950X in benchmarks? I guess that would mean the high-end models going into the iMac should obliterate Threadripper CPUs in optimized apps?

It matters because it's doing it with a die shrink but, more importantly, with vertical integration, and vertical integration has costs as well as gains. It's got a hard max of 16 GB of memory, and it's got no PCIe out at all. Not even important for a lot of uses, but still crippling for some.


2 minutes ago, Bombastinator said:

It matters because it's doing it with a die shrink but, more importantly, with vertical integration, and vertical integration has costs as well as gains. It's got a hard max of 16 GB of memory, and it's got no PCIe out at all. Not even important for a lot of uses, but still crippling for some.

How would that impact a higher end SoC?


2 minutes ago, IAmAndre said:

How would that impact a higher end SoC?

Well, for example, Threadripper can handle LOTS of memory; it's one reason people buy it. If you're working with large files that need even 20 GB of space, on an M1 you'll be hitting swap, and swap makes things dirt slow. It's all use case.


On 11/17/2020 at 2:41 PM, JoseGuya said:

I'm still very conflicted. Do I switch my aging 15 inch late 2013 MBP to the M1 MBP, or to the 2020 16 inch Intel? Do I wait for the 16 inch with Apple Silicon? WHAT TO DO

Depends what you do I guess.

Dirty Windows Peasants :P ?

