
M1 Macs Reviewed

randomhkkid
8 minutes ago, Bombastinator said:

Well, for example, Threadripper can handle LOTS of memory. It's one reason people buy it. If you're working with large files that need even 20 GB of space, with an M1 you'll be hitting swap, and swap makes things dirt slow. It's all use case.

Why wouldn't they be able to embed, say, 32GB of RAM and 16 cores in an SoC? I mean, they have enough room and cooling in an iMac.


18 minutes ago, IAmAndre said:

Why wouldn't they be able to embed, say, 32GB of RAM and 16 cores in an SoC? I mean, they have enough room and cooling in an iMac.

Apple could, but I don't know that it will in the first generation. My hunch is that Apple might only use embedded memory on high-efficiency mobile parts like the M1, and will use discrete memory for the iMac, Mac Pro and higher-end MacBook Pros.


2 hours ago, Curufinwe_wins said:

I don't actually buy this. It's been a while, but around the time first-gen TR was coming out there were efforts to measure per-core power consumption, and it was maxing out around 4-6W. Presuming that stayed similar for Zen 3, that actually puts it very competitive on the literal core end with the A14, and it still has both a small ST and a massive MT performance lead.

Do we even know if this is true? It seemed like something @leadeater just assumed when he saw the high base power consumption of the 5950X.

I haven't seen any evidence that it's the IO causing the high base power consumption in x86 processors. In my mind, it is just as likely that something else is causing it. For example, maybe the decode stage consumes that much power, and since the instructions for "core #2" are already decoded and stored in the μop cache, it is far less expensive to add additional cores? That seems like an equally likely explanation to me.

Or maybe it's something with the cache that makes the power usage very high at single-core usage? Parts of the cache are shared between multiple cores, so as long as a single core is active, all the shared cache is active. When another core gets activated, the shared cache is already running and does not consume any additional power.

 

I don't know enough about core design to say specifically which parts of the chip use what amount of power, but I don't think you or @leadeater are qualified to do that either.

 

Unless I see some evidence, I will regard that as speculation, not fact. But who knows, maybe we will see power consumption balloon once Apple releases a chip with more IO capability. I find that unlikely though.

 

 

1 hour ago, Curufinwe_wins said:

I want to see a more in-depth comparison between the MB Air and the Mac Mini, because it seems like the doubling of power might not be buying much performance-wise, which in fairness makes the 10W variants even more insanely good.

I assume you're thinking of the Anandtech claim that the M1 has a ~22W TDP?

Things aren't really that simple. Even in the Mac Mini, which has a "TDP" of let's say 23W, that figure is for the entire chip, not just the CPU cores. So it includes the CPU, GPU, all the IO, the NPU and the 4266MHz RAM. The power consumption measurements Anandtech published were also not of the chip alone; they were taken at the wall (which is AC power).
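To illustrate why an at-the-wall reading overstates package power, here's a back-of-the-envelope sketch (every number in it is a made-up placeholder, not a measurement):

```python
# Back-of-the-envelope: wall (AC) power vs. estimated SoC package power.
# All values are illustrative assumptions, not measured figures.
wall_watts = 26.0               # hypothetical at-the-wall reading under load
psu_efficiency = 0.90           # assumed AC-to-DC conversion efficiency
platform_overhead_watts = 4.0   # assumed SSD, fan, WiFi, power delivery, etc.

package_watts = wall_watts * psu_efficiency - platform_overhead_watts
print(f"Estimated SoC package power: {package_watts:.1f} W")  # -> 19.4 W
```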

 

During CineBench R23, the power consumption of the entire M1 chip is:

Single threaded - 3.8W

Multi threaded - 15.3W

 

Let me repeat that: when running a benchmark that fully loads all CPU cores and also uses the memory (I'm not sure how stressful Cinebench is on the memory), the entire package uses 15.3 watts.

 

I suspect the 10-watt variant will fall behind when it comes to sustained loads where the CPU, GPU and memory are all heavily utilized. For most use cases, though, I don't think the difference will be that big. It will be interesting to see.


9 minutes ago, Commodus said:

Apple could, but I don't know that it will in the first generation. My hunch is that Apple might only use embedded memory on high-efficiency mobile parts like the M1, and will use discrete memory for the iMac, Mac Pro and higher-end MacBook Pros.

Time will tell. But even with discrete or soldered RAM modules, I can't see why it couldn't overpower the Threadripper line with the way things are going, at the very least on optimized programs.


1 hour ago, Bombastinator said:

I would add that Apple seems to have given up a few fairly critical things while achieving those numbers. The big one for me is that there are apparently no PCIe lanes to spare. IMHO they need 6. They're Apple, so they don't have to adhere to 4/8/16 like everyone else. 6 lanes would likely be enough for 2 Thunderbolt ports (yeah, they theoretically require 8, but a lot of people don't max out the bandwidth on each port) or (not and) one big GPU. So Thunderbolt for the audio people and GPUs for the graphics people.

I am not worried about that.

I think that when the time comes for bigger and higher end devices, they will increase the connectivity. We have to remember that these are the low end devices, not mid range or high end.

 

 

 

1 hour ago, pas008 said:

To me this isn't only a win for Apple.

It's a win for ARM.

Sadly for us non-Mac users, the ARM cores we have access to are shit.

Apple uses its in-house-designed ARM cores, which blow everything else ARM out of the water. Apple is ahead by something like 60% compared to the best ARM core from Qualcomm.

That might change next year with the release of the Cortex-X1, but until then, Apple is in a league of its own.

 

 

1 hour ago, IAmAndre said:

Why does it matter, since this 4-core CPU is able to compete against a 5950X in benchmarks? I guess that would mean the high-end models going in the iMac should obliterate Threadripper CPUs in optimized apps?

It competes with the 5950X in single-threaded applications, but not multi-threaded ones.

The 5950X crushes the M1 if you can utilize more than 4 cores. The M1 isn't meant to compete with the 5950X, though; they are in completely different categories.


1 minute ago, LAwLz said:

competes with the 5950X in single-threaded applications, but not multi-threaded ones.

The 5950X crushes the M1 if you can utilize more than 4 cores. The M1 isn't meant to compete with the 5950X, though; they are in completely different categories.

That's my point. If the entry-level SoC competes with a 5950X that has 4 times as many cores, why wouldn't the high-end models beat the Threadripper CPUs?


5 minutes ago, IAmAndre said:

That's my point. If the entry-level SoC competes with a 5950X that has 4 times as many cores, why wouldn't the high-end models beat the Threadripper CPUs?

Because that’s what forums like this wish for. 


Not sure if this has been posted yet, but:
[Image: Apple M1 Cinebench R23 benchmark comparison chart]

 

In multicore the M1 gets beaten by all the Ryzen 4000 mobile chips and edges out the i7. Not bad, but let's move on from using Geekbench. When people post articles about the new iPhone CPU being "on par with AMD desktop chips in Geekbench!", you know it's a flawed utility. Not saying we should limit ourselves to just Cinebench, obviously, but it gives a much clearer picture of what the chip is capable of in a real-world scenario.


9 minutes ago, BigDamn said:

Not sure if this has been posted yet, but:
[Image: Apple M1 Cinebench R23 benchmark comparison chart]

 

In multicore the M1 gets beaten by all the Ryzen 4000 mobile chips and edges out the i7. Not bad, but let's move on from using Geekbench. When people post articles about the new iPhone CPU being "on par with AMD desktop chips in Geekbench!", you know it's a flawed utility. Not saying we should limit ourselves to just Cinebench, obviously, but it gives a much clearer picture of what the chip is capable of in a real-world scenario.

Yeah, I don't think this is too surprising though? The M1 is basically a four-core CPU w/o SMT (assuming the low-power cores don't get assigned compute tasks, which I think is reasonable), while even the lowest 4600 mobile chip has 6 cores w/ SMT. The 4600U edges it out by 7% on the multicore benchmark despite having 50% more cores. Similarly, the 4800s have 8c/16t (i.e. 100% more cores) and only win by 35-40%. Obviously, non-linear scaling with the number of cores isn't too surprising (though that's mostly on the workload, not necessarily the CPU), but I think that still paints the M1 in a fairly good light?
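Putting rough per-core numbers on that (a minimal sketch: the scores are placeholders that just encode the percentage gaps quoted above, and the M1 is counted as four cores per the assumption that the efficiency cores contribute little):

```python
# Relative multicore throughput per core, normalizing the M1's score to 1.0.
# Core counts ignore SMT threads; all figures are illustrative placeholders.
chips = {
    "Apple M1 (4 perf cores)": (4, 1.000),
    "Ryzen 5 4600U (6 cores)": (6, 1.070),  # "edges it out by 7%"
    "Ryzen 7 4800 (8 cores)":  (8, 1.375),  # "wins by 35-40%" -> midpoint
}
for name, (cores, score) in chips.items():
    print(f"{name}: {score / cores:.3f} per core")
```

On those numbers the M1 comes out roughly 40% ahead per core, which is the "fairly good light" in question.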


45 minutes ago, Blade of Grass said:

Yeah, I don't think this is too surprising though? The M1 is basically a four-core CPU w/o SMT (assuming the low-power cores don't get assigned compute tasks, which I think is reasonable), while even the lowest 4600 mobile chip has 6 cores w/ SMT. The 4600U edges it out by 7% on the multicore benchmark despite having 50% more cores. Similarly, the 4800s have 8c/16t (i.e. 100% more cores) and only win by 35-40%. Obviously, non-linear scaling with the number of cores isn't too surprising (though that's mostly on the workload, not necessarily the CPU), but I think that still paints the M1 in a fairly good light?

People seem to forget this. Apple is coming close to a six-core AMD chip with an SoC that likely only uses four cores at a time... with its first-generation computer-oriented design. It won't necessarily take gigantic strides with the second gen, but I would be nervous about the next few years if I were running AMD or Intel.


1 hour ago, LAwLz said:

 

Sadly for us non-Mac users, the ARM cores we have access to are shit.

Apple uses their in-house designed ARM cores which blows everything else ARM out of the water. Apple is ahead by like 60% compared to the best ARM core from Qualcomm.

That might change next year with the release of the Cortex-X1, but until then, Apple is in a league of its own.

 

 

But it shows the potential of ARM even more, if you want to spend some R&D funds on it.

 

Along with Nvidia getting its chip-designing hooks into it, which could help push ARM further.

 


3 hours ago, IAmAndre said:

Why wouldn't they be able to embed, say, 32GB of RAM and 16 cores in an SoC? I mean, they have enough room and cooling in an iMac.

I'm just saying they didn't.


1 hour ago, BigDamn said:

Not sure if this has been posted yet, but:
[Image: Apple M1 Cinebench R23 benchmark comparison chart]

 

In multicore the M1 gets beaten by all the Ryzen 4000 mobile chips and edges out the i7. Not bad, but let's move on from using Geekbench. When people post articles about the new iPhone CPU being "on par with AMD desktop chips in Geekbench!", you know it's a flawed utility. Not saying we should limit ourselves to just Cinebench, obviously, but it gives a much clearer picture of what the chip is capable of in a real-world scenario.

This is true, but the Ryzen 4000 chips are usually around 35W parts with 8 full-size cores, while the M1 has 4 powerful cores, 4 efficiency cores and no SMT. So the fact that its cores get close in a 10W or 25W package is pretty solid. Think about it this way: it can get close to the performance of the 4000-series chips while using much less power and, in many tests, with less thermal throttling, which means more consistent performance along with longer battery life, and in an ultrabook that's pretty important.

Also, keep in mind these are the low-end machines Apple started replacing, and in that context it becomes clearer what's going on. Apple's aim is not to outperform every CPU in raw power; it's to strike a balance of performance and efficiency that lets them build Macs with the best battery life and the best performance they can get at that battery life, or the best power efficiency (which means less heat), which lets them build smaller or thinner Mac desktops and enables designs that weren't possible with x86 chips before.


2 hours ago, BigDamn said:

Not sure if this has been posted yet, but:
[Image: Apple M1 Cinebench R23 benchmark comparison chart]

 

In multicore the M1 gets beaten by all the Ryzen 4000 mobile chips and edges out the i7. Not bad, but let's move on from using Geekbench. When people post articles about the new iPhone CPU being "on par with AMD desktop chips in Geekbench!", you know it's a flawed utility. Not saying we should limit ourselves to just Cinebench, obviously, but it gives a much clearer picture of what the chip is capable of in a real-world scenario.

Hey everyone! Look at how bad Apple is! 

Their quad-core running at 10 watts can't even keep up with this 54-watt octa-core! What a fail, am I right?

 

This is totally evidence of Geekbench being a bad benchmark as well, since clearly this other benchmark gives a different result (except it doesn't, I just don't know the difference between single- and multi-core scores)! Everyone knows that if two benchmarks give different results, then only the one that shows the result I want is valid!


3 hours ago, LAwLz said:

Do we even know if this is true? It seemed like something @leadeater just assumed when he saw the high base power consumption of the 5950X.

I haven't seen any evidence that it's the IO causing the high base power consumption in x86 processors. In my mind, it is just as likely that something else is causing it. For example, maybe the decode stage consumes that much power, and since the instructions for "core #2" are already decoded and stored in the μop cache, it is far less expensive to add additional cores? That seems like an equally likely explanation to me.

Or maybe it's something with the cache that makes the power usage very high at single-core usage? Parts of the cache are shared between multiple cores, so as long as a single core is active, all the shared cache is active. When another core gets activated, the shared cache is already running and does not consume any additional power.

 

I don't know enough about core design to say specifically which parts of the chip use what amount of power, but I don't think you or @leadeater are qualified to do that either.

 

Unless I see some evidence, I will regard that as speculation, not fact. But who knows, maybe we will see power consumption balloon once Apple releases a chip with more IO capability. I find that unlikely though.

 

 

I assume you're thinking of the Anandtech claim that the M1 has a ~22W TDP?

Things aren't really that simple. Even in the Mac Mini, which has a "TDP" of let's say 23W, that figure is for the entire chip, not just the CPU cores. So it includes the CPU, GPU, all the IO, the NPU and the 4266MHz RAM. The power consumption measurements Anandtech published were also not of the chip alone; they were taken at the wall (which is AC power).

 

During CineBench R23, the power consumption of the entire M1 chip is:

Single threaded - 3.8W

Multi threaded - 15.3W

 

Let me repeat that: when running a benchmark that fully loads all CPU cores and also uses the memory (I'm not sure how stressful Cinebench is on the memory), the entire package uses 15.3 watts.

 

I suspect the 10-watt variant will fall behind when it comes to sustained loads where the CPU, GPU and memory are all heavily utilized. For most use cases, though, I don't think the difference will be that big. It will be interesting to see.

It would be fair to say that I am not an expert at these things either. 

 

Ask me about nuclear power and/or engineering materials for power systems and I'll straight up claim expert level knowledge. 

 

This? Nope. 

 

I do remember Anandtech trying to break out the IO chiplet and Infinity Fabric consumption on first-gen Zen and TR, and that was what I was basing my assumptions on. I agree with most of the details you give here. According to Andrei's work, Cinebench is relatively light on both Zen and the M1 compared to some other workloads. They saw consumption in SPEC-type loads peaking around 25W for the compute-heavy tasks and around 15W for the more memory-bound ones.

 

 

----

There have been reviews on both HEDT platforms showing what a huge amount of overhead the IO adds (I include Infinity Fabric and the ring/mesh buses in IO, because they're somewhat specific to the extremely large-core systems x86 is moving towards). Obviously that isn't nearly the same excuse for an APU that isn't chiplet-based and doesn't require the complex bus transfers.
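For what it's worth, one way to put a number on that overhead instead of assuming it is to fit package power against the number of loaded cores and read the uncore/IO floor off the intercept. A minimal sketch, with invented measurements:

```python
import numpy as np

# Hypothetical package power at 1..8 loaded cores (watts). These numbers
# are invented purely to illustrate the method, not taken from any review.
cores = np.arange(1, 9)
power = np.array([22.0, 28.5, 34.0, 40.5, 46.0, 52.5, 58.0, 64.5])

# Fit P(n) = per_core * n + uncore. The intercept estimates the IO/fabric/
# uncore power that gets paid regardless of how many cores are loaded.
per_core, uncore = np.polyfit(cores, power, 1)
print(f"~{per_core:.1f} W per core, ~{uncore:.1f} W uncore/IO floor")
```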


34 minutes ago, LAwLz said:

Hey everyone! Look at how bad Apple is! 

Their quad-core running at 10 watts can't even keep up with this 54-watt octa-core! What a fail, am I right?

 

This is totally evidence of Geekbench being a bad benchmark as well, since clearly this other benchmark gives a different result (except it doesn't, I just don't know the difference between single- and multi-core scores)! Everyone knows that if two benchmarks give different results, then only the one that shows the result I want is valid!

I think 34W is the normal TDP of the 4000-series chips, with some going up to 45W.


1 hour ago, vectorhacker00 said:

I think 34W is the normal TDP of the 4000-series chips, with some going up to 45W.

The 4900HS is the 35 watt version.

The 4900H is a 45 to 54 watt part, depending on how the OEM tunes it and the cooling.

 

The 4800H is also a 45 watt part.

 

The 4800U is a 25 watt part though. But it's worth noting that the M1 is a ~22W TDP part including things like the memory, which the Ryzen chips do not include in that TDP calculation.


1 hour ago, Curufinwe_wins said:

It would be fair to say that I am not an expert at these things either. 

 

Ask me about nuclear power and/or engineering materials for power systems and I'll straight up claim expert level knowledge. 

 

This? Nope. 

 

I do remember Anandtech trying to break out the IO chiplet and Infinity Fabric consumption on first-gen Zen and TR, and that was what I was basing my assumptions on. I agree with most of the details you give here. According to Andrei's work, Cinebench is relatively light on both Zen and the M1 compared to some other workloads. They saw consumption in SPEC-type loads peaking around 25W for the compute-heavy tasks and around 15W for the more memory-bound ones.

 

 

----

There have been reviews on both HEDT platforms showing what a huge amount of overhead the IO adds (I include Infinity Fabric and the ring/mesh buses in IO, because they're somewhat specific to the extremely large-core systems x86 is moving towards). Obviously that isn't nearly the same excuse for an APU that isn't chiplet-based and doesn't require the complex bus transfers.

I think we will get a better understanding of things as more tests are done and as Apple releases more chips.

Until then, I think it is very premature to say the M1 is mostly power-efficient because it has less IO.


3 hours ago, Commodus said:

but I would be nervous about the next few years if I were running AMD or Intel.

Why? Unless Apple starts selling its SoCs to third parties, which I don't see happening, they are not in competition with each other. Apple won't take market share in the CPU space because they are in a space of their own.


Now that Apple has shown what ARM processors (with suitable acceleration) are capable of, Qualcomm and Microsoft could step up their game and make Windows on ARM usable. x86(_64) is still present because it is the industry standard.
Is there, apart from being the widespread standard, any compelling reason to stick with CISC? Even on Windows?

Better integration, shared (or "unified", as Apple calls it) memory and different accelerators (NPU, DSP) will lead to better performance in the long run.

Qualcomm's current "powerhouse", the 8cx (or its Microsoft-modified siblings), has a TDP of 7 watts. We need to push these numbers! Looking at NVIDIA's offerings, they have a 20-watt competitor in the form of the Jetson Xavier (basically a tenth of a Titan V with 8 ARM cores next to it).
I work regularly with the Xavier's older sibling (the TX2), and it's quite impressive what these can achieve when the GPU is utilized correctly.

The CPU cores on these ARM chips need to get better, and there needs to be a compelling offering with accelerator cores and a reasonable GPU (the Apple M1 is "reasonable" for integrated graphics).
I guess, with Nvidia wanting to buy ARM and AMD buying Xilinx, there could be something in the making.
 


2 minutes ago, Laborant said:

Now that Apple has shown what ARM processors (with suitable acceleration) are capable of, Qualcomm and Microsoft could step up their game and make Windows on ARM usable. x86(_64) is still present because it is the industry standard.
Is there, apart from being the widespread standard, any compelling reason to stick with CISC? Even on Windows?
 

Compatibility. Microsoft bets on it, Apple does not.


2 hours ago, Laborant said:

Now that Apple has shown what ARM processors (with suitable acceleration) are capable of, Qualcomm and Microsoft could step up their game and make Windows on ARM usable. x86(_64) is still present because it is the industry standard.
Is there, apart from being the widespread standard, any compelling reason to stick with CISC? Even on Windows?

I think a lot of people see this as x86 vs ARM, but IMHO it has more to do with Intel vs Apple design, and backwards compatibility vs only new and optimized things.

 

x86 is here to stay on Windows. There is just too much x86 software people rely on, and Microsoft has too little control over developers to make them port applications. Also, there are no compelling ARM laptops (because there isn't much software for them), and no demand from users for ARM versions of software (nobody buys Windows on ARM devices).

 

Apple has historically deprecated things much more quickly, and devs have adapted to that. Apple is also betting everything on its own designs, so there is no choice but to go along with the transition. And there are many ways for Apple to strong-arm developers by enforcing new rules in the Mac App Store (which is used much more than the Microsoft Store). One thing Apple still needs to demonstrate is how it can scale these designs up to the Mac Pro.

 

Still, I am interested in what kind of side effects Apple's transition will create. For example, maybe Apple users will want to run production code on the same arch their laptops run on, increasing demand for ARM servers. AWS Graviton chips are interesting, but we tried them and still get better perf from the Intel ones for our CPU-bound .NET scenarios, although it is unknown how much of that comes down to optimization. Compilers and JITs have had much more work poured into x86 optimizations.
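For anyone who wants to try that kind of comparison themselves, the shape of it is trivial; a stdlib-only Python sketch (the workload is an arbitrary CPU-bound stand-in, not our actual .NET code):

```python
import platform
import timeit

# Confirm which architecture this actually ran on, e.g. 'x86_64' on the
# Intel instances vs 'aarch64' on Graviton.
print(platform.machine())

# Arbitrary CPU-bound stand-in workload (not our real code): sum of squares.
stmt = "sum(i * i for i in range(100_000))"
best = min(timeit.repeat(stmt, number=100, repeat=5)) / 100
print(f"{best * 1e3:.2f} ms per iteration")
```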


Watched Linus disassembling the Mac Mini..

One weird thing he keeps repeating is something along the lines of: “these are thunderbolt ports but weirdly they don’t support eGPUs, what gives, are they even real thunderbolt ports”.

 

Like he thinks eGPU support is something integral to the Thunderbolt interface itself rather than OS support and drivers. It's like asking "this is a USB port, right, so why doesn't it support peripheral XYZ that the OS has no drivers for?". Thunderbolt-without-eGPUs was the norm for Macs from 2011 to 2018.

 

Also, someone please get them a Thunderbolt to 10GigE external NIC so they can use it in their 10G workflow like the other Minis they own. (Note that the price of the Mini was reduced by $100, enough to get at least a USB-C to 5GigE NIC, or half the price of a 10GigE NIC.)


19 minutes ago, saltycaramel said:

Watched Linus disassembling the Mac Mini..

One weird thing he keeps repeating is something along the lines of: “these are thunderbolt ports but weirdly they don’t support eGPUs, what gives, are they even real thunderbolt ports”.

 

Like he thinks eGPU support is something integral to the Thunderbolt interface itself rather than OS support and drivers. It's like asking "this is a USB port, right, so why doesn't it support peripheral XYZ that the OS has no drivers for?". Thunderbolt-without-eGPUs was the norm for Macs from 2011 to 2018.

It's good that it's a driver issue rather than something explicitly blocked. It means Apple or a GPU manufacturer could conceivably add this feature in the future.

 

"No external GPU support" is simpler to hear than "External GPUs are supported as long as the GPUs are on our supported list. Our supported list is: ."


32 minutes ago, saltycaramel said:

Watched Linus disassembling the Mac Mini..

One weird thing he keeps repeating is something along the lines of: “these are thunderbolt ports but weirdly they don’t support eGPUs, what gives, are they even real thunderbolt ports”.

 

Like he thinks eGPU support is something integral to the Thunderbolt interface itself rather than OS support and drivers. It's like asking "this is a USB port, right, so why doesn't it support peripheral XYZ that the OS has no drivers for?". Thunderbolt-without-eGPUs was the norm for Macs from 2011 to 2018.

 

Also, someone please get them a Thunderbolt to 10GigE external NIC so they can use it in their 10G workflow like the other Minis they own. (Note that the price of the Mini was reduced by $100, enough to get at least a USB-C to 5GigE NIC, or half the price of a 10GigE NIC.)

Welcome to Linus's ad-ridden, ill-informed, clickbait Apple reviews, even when proven wrong 🥴🙃

 

 

 

I can think of some of his videos when he was a cheerleader 

