
M1 Macs Reviewed

randomhkkid
48 minutes ago, saltycaramel said:

Watched Linus disassembling the Mac Mini..

One weird thing he keeps repeating is something along the lines of: “these are thunderbolt ports but weirdly they don’t support eGPUs, what gives, are they even real thunderbolt ports”.

 

Like he thinks eGPU support is something integral to the thunderbolt interface itself and not OS support and drivers. It’s like asking “this is a usb port right, why doesn’t it support the peripheral XYZ that the OS has no drivers for”. Thunderbolt-without-eGPUs was the norm for Macs from 2011 to 2018.

 

Also, someone please get them a Thunderbolt to 10GigE external NIC, so they can use it on their 10G workflow like the other Minis they own. (Note that the price of the Mini was reduced by $100, enough to get at least a USB-C to 5GigE NIC, or half the price of a 10GigE NIC.)

 
 
 
 

Thunderbolt's selling feature over USB Type-C (20Gbps) is eGPU support. If you don't care about it, then all you need is standard USB Type-C at 10 or 20Gbps.

 

The reality is that I expect Apple's next-gen ARM SoC (the M2 chip), next year, to have this limitation rectified, and probably other behind-the-scenes improvements besides performance. This is an obvious move, as USB 4 is coming next year, and Apple will be there showcasing it with its new updated devices.


46 minutes ago, GoodBytes said:

Thunderbolt's selling feature over USB Type-C (20Gbps) is eGPU support. If you don't care about it, then all you need is standard USB Type-C at 10 or 20Gbps.

No. So far only TB docks/screens manage to combine both 4K@60Hz and USB 3.0 hubs over a single cable, same for daisy-chaining screens. Regular Type-C still requires all 4 differential pairs of the cable for said resolution.


12 minutes ago, Dracarris said:

No. So far only TB docks/screens manage to combine both 4K@60Hz and USB 3.0 hubs over a single cable, same for daisy-chaining screens. Regular Type-C still requires all 4 differential pairs of the cable for said resolution.

Type-C doesn't have a display adapter. It is up to the GPU (and DisplayPort). If your GPU's max resolution is 800x600 and it can output to USB Type-C, then that is what you get. And if the monitor doesn't support USB as well (or your GPU doesn't have a USB controller to send data alongside the image), that is all you get. Type-C doesn't have any limitations in that regard, besides its bandwidth. Currently, USB Type-C maxes out at 20Gbps.

 

4K@60Hz 8-bit per channel color (16.7 million colors) caps out at 14.93Gbps. Plenty of room for a single cable with USB connectivity.

USB 4 is expected to bring 40Gbps to the table, via USB Type-C.
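For reference, here's a quick back-of-the-envelope check of that figure (a sketch only; the exact value depends on which blanking timing and link overhead you assume):

```swift
import Foundation

// Rough bandwidth estimate for 4K@60Hz at 8 bits per channel (24 bpp).
// Assumes standard CTA-861 timing (4400 x 2250 total pixels including blanking);
// reduced-blanking modes and DisplayPort link coding shift the number somewhat.
let bitsPerPixel = 24.0
let refreshHz    = 60.0

let activeGbps = 3840.0 * 2160.0 * refreshHz * bitsPerPixel / 1e9   // ~11.9 Gbps of actual pixels
let totalGbps  = 4400.0 * 2250.0 * refreshHz * bitsPerPixel / 1e9   // ~14.3 Gbps including blanking

print(String(format: "4K@60 8bpc: %.1f Gbps active, %.1f Gbps with blanking", activeGbps, totalGbps))
// Both figures sit under a 20 Gbps Type-C link.
```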

 

All the GPU does is take the DisplayPort signal, which is packet based, and instead of passing it into the DisplayPort connector, it passes it to USB Type-C connector (ok well, there is a handshake and mode switching that occurs, of course, but those are just details).

 

Today, most people don't even have a 1440p display, let alone a 4K one.

Having 2x 4K displays is even more rare.

And that is looking at Steam hardware survey, which is only gamers, and not the reality of the whole market.

 

If you have the money for 2x 4K displays, chances are good that you won't be buying anything M1-powered, but rather something more powerful.

Or, you know... the fact that you would most likely have something more powerful already, and can wait for USB 4.


35 minutes ago, GoodBytes said:

4K@60Hz 8-bit per channel color (16.7 million colors) caps out at 14.93Gbps. Plenty of room for a single cable with USB connectivity.

Do you know how Type-C works electrically? I never claimed 4K@60Hz would not work over a single cable. It does work. But it takes up all 4 diff pairs for DP Alt Mode, so besides the USB 2.0 fallback in the center of the cable, you have no connectivity for USB peripherals on your monitor or dock. This is a known limitation of Type-C (not in the spec but in the actual chips that handle the bus) that no manufacturer has found a way around yet.

 

TB3 manages to do all that since it passes native PCIe lanes.
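To make the lane arithmetic concrete, here is a minimal sketch (assuming DisplayPort HBR2 rates of 5.4 Gbps per lane with 8b/10b coding, which is what typical Alt Mode silicon of this generation runs, and the ~14 Gbps 4K@60 stream estimated above) of what a 2-lane versus 4-lane split can carry:

```swift
// Minimal sketch of USB-C DP Alt Mode lane budgeting. Assumed figures:
// HBR2 = 5.4 Gbps raw per lane, 8b/10b coding -> 4.32 Gbps payload per lane,
// and a 4K@60Hz 8bpc stream needing roughly 14 Gbps.
let totalPairs = 4                       // high-speed differential pairs in a Type-C cable
let payloadPerLaneGbps = 4.32            // HBR2 after 8b/10b
let streamGbps = 14.3

func streamFits(onLanes lanes: Int) -> Bool {
    Double(lanes) * payloadPerLaneGbps >= streamGbps
}

for dpLanes in [2, 4] {
    let spareForUSB = totalPairs - dpLanes
    let usb = spareForUSB >= 2 ? "USB 3.x still possible" : "only the USB 2.0 fallback pair left"
    let video = streamFits(onLanes: dpLanes) ? "4K@60 fits" : "4K@60 does NOT fit"
    print("DP on \(dpLanes) lanes: \(video), \(usb)")
}
// Under these assumptions:
//   DP on 2 lanes: 4K@60 does NOT fit, USB 3.x still possible
//   DP on 4 lanes: 4K@60 fits, only the USB 2.0 fallback pair left
```

Which is exactly the trade-off: the resolution only fits when DP takes all four pairs, leaving nothing but the USB 2.0 fallback for data, whereas TB3 tunnels DisplayPort and PCIe packets over one link instead of dedicating pairs to each.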

 

35 minutes ago, GoodBytes said:

USB 4 is expected to bring 40Gbps to the table, via USB Type-C.

That is very nice. Where can I buy a computer and a screen with USB4? TB3 has been available for years already.

 

35 minutes ago, GoodBytes said:

If you have the money for 2x 4K displays, chances are good that you won't be buying anything M1-powered, but rather something more powerful.

Or, you know... the fact that you would most likely have something more powerful already, and can wait for USB 4.

Exactly. So no need for an eGPU. But probably quite some need for 4K@60Hz and some USB3 ports in your monitor. I really don't care about that Steam survey that people always bring up. It is ridiculously biased because it almost only reaches gamers, while Macs are heavily focused on productivity.

 

4K screens have dropped heavily in price and are not expensive anymore, at all. Entry-level models can be had for $250; for $350-400 you get decent ones with FreeSync support. But independent of price, hooking up two 4K displays to an M1 Mac Mini makes a ton of sense for many productivity scenarios. Simply having large screen real estate does not mean you need a super powerful machine.


8 minutes ago, Dracarris said:

Do you know how Type-C works electrically? I never claimed 4K@60Hz would not work over a single cable.

I do, and Type-C fully supports the latest DisplayPort standard. It is up to your hardware and peripheral to support it.

You mention that you need 4 USB Type-C wires to drive 2x 4K displays, with USB. I am going with the assumption that you mean 1x USB Type-C for the 4K resolution and another for the USB hub, for each display. The other assumption is that you think you need 2x USB Type-C connectors per single 4K display to output a 4K 60Hz image on it, using multi-stream. And that is even more inaccurate, so I gave you the benefit of the doubt.

 

 

8 minutes ago, Dracarris said:

It does work. But it takes up all the diff pairs, so besides the USB 2.0 fallback in the center of the cable,

All you, (and anyone, really) should care about is that it works properly, not which pin you think it should be using and isn't.

 

8 minutes ago, Dracarris said:

you have no connectivity for USB peripherals on your monitor or dock.

Nope, it is a limitation of your monitor or dock. 

Assuming Wikipedia is correct, it notes the following:

[attached screenshot of the Wikipedia USB-C Alternate Mode table]

 

It shows that under DisplayPort 1.4 (and lower), USB 3.1 is supported.

So, both are sent out.

Source: https://en.wikipedia.org/wiki/USB-C

 

8 minutes ago, Dracarris said:

This is a known limitation of Type-C (not in the spec but in the actual chips that handle the bus) that no manufacturer has found a way around yet. TB3 manages to do all that since it passes native PCIe lanes.

 
 

Ah! Well, no... this is just the manufacturer(s) of the peripherals saying B.S., or incorrect information you are getting.

It does support it. Dell has monitors, the P2419HC/P2719HC, which support both data and video from one cable (and power too). Those are 2019 monitors.

Another example is the Dell 34-inch 3440x1440 UltraWide P3421W. You also have the LG 4K 32UD89-W, which again has both data and video support over USB Type-C.

 

So, as you can see, there are peripherals, monitors of various resolutions, which support not only video but also data over the USB port.

The specs are there, and the products are there on the consumer market.

 

What we can agree on is that if you have a resolution that requires 20Gbps, then yes, there is no room for USB, and so you'll need to wait for USB 4.

 

 

 

8 minutes ago, Dracarris said:

That is very nice. Where can I buy a computer and a screen with USB4? TB3 has been available for years already.

Next year, as mentioned in my post.

 

8 minutes ago, Dracarris said:

Exactly. So no need for an eGPU. But probably quite some need for 4K@60Hz and some USB3 ports in your monitor.

 

See above.

 

8 minutes ago, Dracarris said:

I really don't care about that Steam survey that people always bring up. It is ridiculously biased because it almost only reaches gamers, while Macs are heavily focused on productivity.

I'll ignore this comment.

 


31 minutes ago, GoodBytes said:

What we can agree on is that if you have a resolution that requires 20Gbps, then yes, there is no room for USB, and so you'll need to wait for USB 4.

That is literally all I was saying. Everything else in your lengthy reply is a mis-quote of what I have said. 4K@60Hz is such a resolution that requires the full 20Gbps bandwidth of a single Type-C cable (more precisely, all 4 lanes, since you cannot do a 3/1 lane division). I never claimed that there are no devices on the market that support simultaneous display and USB. But such resolutions and USB 3.x simply do not work together, for the reason I am now stating for the 3rd time.

 

And since 4K@60Hz is a commonly used resolution in productivity settings (yes, Steam survey may disagree), this limitation is very relevant for Mac buyers, and hence the inclusion of TB3.

 

It does not make a difference if the standard defines it; I have already written that as well. There are simply still no chips on the market that support sending 4K@60Hz over two diff pairs.

 

31 minutes ago, GoodBytes said:

I'll ignore this comment.

That is your choice. It does not, however, change how little relevance a survey amongst gamers has to other fields. The reason most of them are still stuck at 1440p or even below is the somehow widespread myth that as a gamer you either have a high-refresh-rate display (which is simply not affordable, or even available, at 4K) or you stand no chance in games.

 

31 minutes ago, GoodBytes said:

You mention that you need 4 USB Type-C wires to drive 2x 4K displays, with USB

You should really look up the difference between a wire/cable and a differential pair. You bizarrely mis-quoted me here since you mix up the two.

 

30 minutes ago, GoodBytes said:

[attached screenshot of the Wikipedia USB-C Alternate Mode table]

 

It shows that under DisplayPort 1.4 (and lower), USB 3.1 is supported.

So, both are sent out.

Source: https://en.wikipedia.org/wiki/USB-C

 

I invite you to have a look at footnote b. Hint: lane = diff pair.

 

31 minutes ago, GoodBytes said:

All you, (and anyone, really) should care about is that it works properly, not which pin you think it should be using and isn't.

Why? I want to understand how things work, why they work, and why they don't.


13 hours ago, LAwLz said:

Do we even know if this is true? It seemed like something @leadeater just assumed when he saw the high base power consumption of the 5950X.

I haven't assumed it; as I mentioned, this is a known fact about I/O in silicon processes. Now I'm not saying the M1 would have massively higher power usage with more PCIe lanes and an integrated 10Gb controller, not on the level of a Zen 3 desktop chip, but it would be an increase to the degree that package TDP would go up, or clocks would come down to stay within a desired limit if one had been chosen. If you want to stay within, say, 15W as a hard limit, then having more I/O would, compared to the otherwise same architecture, impact core clocks. At ultra low power, using an additional ~1W to provide additional I/O is not insignificant; on a desktop chip or high-power laptop chip, however, it is.

 

You're looking at it from the point of view that I think this because the IOD power usage is high; no, I'm using the IOD to demonstrate a known truth. I don't think this because the IOD uses a fair bit of power; I know this because of already existing knowledge and engineers talking about this very thing. A lot of this was covered when Zen 2 and chiplets came out. You'll have to forgive me for not being able to find all of that again, as it's much later and search engines aren't serving it back as readily anymore, but there is still information you can find about this.

 

Quote

I/O Power

Running high bandwidth interconnects that are common in modern CPU designs can contribute a large percentage of the CPU power. This is particularly true in the emerging low-power microserver space. In some of these products, the percentage of power consumed on I/O devices tends to be a larger percentage of the overall SoC power.

 

There are conceptually two types of I/O devices: those that consume power in a manner that is proportional to the amount of bandwidth that they are transmitting (DDR), and those that consume an (almost) constant power when awake (PCIe/QPI).

 

I/O interfaces also have active and leakage power, but it is useful to separate them out for power management discussions. The switching rate in traditional I/O interfaces is directly proportional to the bandwidth of data flowing through that interconnect.

 

In order to transmit data at very high frequencies, many modern I/O devices have moved to differential signaling. A pair of physical wires is used to communicate a single piece of information. In addition to using multiple wires to transmit a single bit of data, typically the protocols for these lanes are designed to toggle frequently and continuously in order to improve signal integrity. As a result, even at low utilizations, the bits continue to toggle, making the power largely insensitive to bandwidth.

https://link.springer.com/chapter/10.1007/978-1-4302-6638-9_2

 

Note the part about PCIe and QPI; for Zen that would be IF (which the Apple M1 does not have). Apple would have at least solved the PCIe idle power usage issue, and solutions for that exist; however, that does not solve active PCIe power usage. If you are going to add PCIe lanes/I/O, then you also have to assume it will be used, which will use power.

 

Here is the information on how PCIe idle power usage was solved: https://www.synopsys.com/designware-ip/technical-bulletin/power-gating.html

 

As to my other point I made about I/O not benefiting all that well from smaller nodes:

Quote

By keeping the memory interfaces and SerDes in mature 12nm technology, costs are mitigated since those circuits see a very small shrink factor to 7nm and very little performance or power gain from advanced nodes.

https://ieeexplore.ieee.org/document/9063103

 

These are things I do follow and read up on; I'm not just making assumptions or reasoning backwards from observations.

 

13 hours ago, LAwLz said:

In my mind, it is just as likely that it is something else causing the high base power consumption. For example maybe the decode stage consumes that much power, and since the instructions for "core #2" are already decoded and stored in the μops cache it is far less expensive to add additional cores? That seems like an equally likely explanation in my mind.

When the CPU is completely idle, every core is below 0.02W and the entire CCD is less than 1W, yet package power is around that 20W mark. Decode and everything else is contained in the CPU cores in the CCX/CCD, so if the cores are using 0.02W or less, then where is the power? It's in the IOD or in the IF link between the IOD and CCD, but not in the CPU cores or anything to do with x86 specifically (pretty sure of that, but it could be x86 memory controller related, I don't know).
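As a toy sketch of that subtraction (using the rough figures just quoted, not measurements):

```swift
import Foundation

// Toy version of the "where is the power?" subtraction, using the ballpark
// numbers from the post: each core below 0.02 W, each CCD below 1 W,
// package around 20 W at idle (e.g. a 5950X with 2 CCDs x 8 cores).
let packageIdleW = 20.0
let perCoreW     = 0.02
let coreCount    = 16.0
let perCcdW      = 1.0          // upper bound, including caches and intra-CCD fabric
let ccdCount     = 2.0

let allCoresW = perCoreW * coreCount          // ~0.3 W in the cores themselves
let allCcdsW  = perCcdW * ccdCount            // < 2 W for everything on the CCDs
let leftoverW = packageIdleW - allCcdsW       // what's left for the IOD, IF links, etc.

print(String(format: "cores ~%.1f W, CCDs < %.0f W, IOD/IF/misc ~%.0f W", allCoresW, allCcdsW, leftoverW))
// Roughly 0.3 W in the cores versus ~18 W outside them, which is the point being made.
```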

 

The benefit of AMD's chiplet architecture is our ability to see and show this. It is simply a fact, demonstrated directly to you even in the existing AnandTech articles on Zen 2 and Zen 3 cores (they did it last generation too), that core power usage at idle is extremely low. You can't very well claim the power usage has something to do with a component found within a CPU core when we can prove the power usage is not there.


3 hours ago, GoodBytes said:

This is an obvious move, as USB 4 is coming next year, and Apple will be there showcasing it with its new updated devices.

I thought the new devices were all USB 4 already? 



I'm a 71-year-old old fart who has been around this industry since the late 1960s. A few observations:

  • High-performance design requires short electrical paths, and systems as small as possible.  For these reasons attempts to boost performance by using discrete units connected with buses, irrespective of the type of bus, will ultimately be limited by the speed of light. Keep it small, keep it close.  Keep things on chip, and integrate chips in tight 3D packaging (SoC). Only loosely coupled units can be at a distance. 
  • High-performance design is best achieved by power efficiency.  
  • The smaller the system the less power is wasted on communication between units. High speed communication is a power hog. 
  • High-performance design requires a total system view. 
  • Software and hardware co-design is vital for good performance. Aside from low-level drivers and signal-processing algorithms, much software is written to be easy on the developer, not to be efficient. Look at the processor that flew Apollo to the moon. And in the mid-1980s we were 200 engineers sharing a cluster of three VAX-11/785s with a total MIPS rating of 4.5.
  • Systems must be designed with specialized co-workers performing dedicated tasks that would be inefficient to perform on general purpose CPUs. Thus, dedicated encryption, video en-coding/de-coding and machine learning are examples of increasing speed while reducing power. 
  • Administering dedicated co-workers is most efficiently achieved by very long instruction words, both on-chip and off. 
  • Interrupt-driven architectures have expensive context-switch overhead. Game consoles use polling architectures.
  • Cache memory, in principle, slows things down. Caches are primarily built as buffers between systems with different bandwidths. They are a necessary and useful evil, but they waste power and chip real estate, and every cache miss adds to power consumption, latency, and overhead. The closer you can move your memory to the computing cores, the less cache you will need. Well-designed memory pools on chip, or very close to the CPU, reduce latency.
  • Speculative execution and look-ahead branching are wasteful workarounds for mismatches between resource availability and bandwidths. In addition, they are inherently complex and potentially dangerous.
  • Object-oriented programming often hurts performance significantly.  A friend who is the lead programmer at a major game provider told me: “The first thing we teach kids straight out of school is how to write efficient, re-usable code without using object-oriented languages.” 

As I have closely followed the evolution of the system designs leading up to the launch of the M1, I see that many of these principles have wisely been adhered to. The M1 laptop I just bought bore this out by being the most “boring” personal computer I have ever used. Boring in the sense that its responsiveness and smoothness made it so non-intrusive. For me that is the ultimate benchmark.

 

Carsten Thomsen


P.S. A consequence of the tighter system design described above is the gradual fading away of the hobbyist “tinkering” industry. It is sort of sad, because tinkering with loudspeakers, hi-fi systems, cars, and computers has attracted a lot of fantastic talent. This is probably why systems such as the Raspberry Pi are seeing such a surge of interest.

 

Carsten Thomsen 


6 hours ago, GoodBytes said:

Thunderbolt's selling feature over USB Type-C (20Gbps) is eGPU support. If you don't care about it, then all you need is standard USB Type-C at 10 or 20Gbps.


uhm, no?

This is a forced argument.

Thunderbolt's selling point is being a fast “external PCIe over a cable” interface, for anything, and always faster than USB at any point in its history (until USB4 just resolved to becoming Thunderbolt at the physical layer). Again, from 2011 to 2018 Mac users enjoyed it, and back then there was no eGPU support.

eGPUs are just one of the many uses of thunderbolt. 

Linus can and should be critical of the lack of OS SUPPORT for eGPUs, after Apple was adamant for 2 years that eGPUs for Macs were a thing. (Now we know they were more of a “bridge” to another phase, or maybe the last remnants of the pre-2019 Mac Pro philosophy.)

But making it about the Thunderbolt ports being some weird, unexpected flavour of Thunderbolt is dumb. Thunderbolt without eGPU support is a thing, has been a thing, and will always be a thing.


6 minutes ago, saltycaramel said:

uhm, no?

This is a forced argument.

Thunderbolt's selling point is being a fast “external PCIe over a cable” interface, for anything, and always faster than USB at any point in its history (until USB4 just resolved to becoming Thunderbolt at the physical layer). Again, from 2011 to 2018 Mac users enjoyed it, and back then there was no eGPU support.

eGPUs are just one of the many uses of thunderbolt. 

Linus can and should be critical of the lack of OS SUPPORT for eGPUs, after Apple was adamant for 2 years that eGPUs for Macs were a thing. (Now we know they were more of a “bridge” to another phase, or maybe the last remnants of the pre-2019 Mac Pro philosophy.)

But making it about the Thunderbolt ports being some weird, unexpected flavour of Thunderbolt is dumb. Thunderbolt without eGPU support is a thing, has been a thing, and will always be a thing.

0.1% of Mac users will ever use an eGPU. Apple does not give a fuck about them.


1 minute ago, avg123 said:

0.1% of Mac users will ever use an eGPU. Apple does not give a fuck about them.

Fact.

That’s maybe what’s happening here.

And maybe they have internal data to back it up.

 

What I found weird was Linus acting like eGPU support comes bundled with thunderbolt ports inevitably.

 

Like, FreeBSD/pfSense on an Intel NUC with TB3 supports Thunderbolt (for 10G NICs and storage), but no one in his right mind would expect it to support eGPUs because “it's Thunderbolt, duh”.

 


6 minutes ago, avg123 said:

0.1% of Mac users will ever use an eGPU. Apple does not give a fuck about them.

The only reason a Mac user would use an eGPU is if there's no other choice. They actually work a lot better for lag-resistant stuff like graphics work than for actual gaming. If they create a situation where a user cannot use a Mac for work they used to do, though, they will lose that customer to PC, and their customer base is not that thick to begin with.


1 minute ago, Bombastinator said:

The only reason a Mac user would use an eGPU is if there's no other choice. They actually work a lot better for lag-resistant stuff like graphics work than for actual gaming. If they create a situation where a user cannot use a Mac for work they used to do, though, they will lose that customer to PC, and their customer base is not that thick to begin with.

No eGPU support yet does not mean it will not come (at least for compute APIs). It is clear from what Apple has been telling us devs that we should expect Apple Silicon Macs might have multiple Metal GPU targets, and that some might be flagged as `removable`. There is no eGPU support today since there are no ARM64 drivers for AMD GPUs on macOS.
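For what it's worth, here is a minimal sketch of how a Metal app can already inspect those flags today, using the existing macOS device-enumeration APIs (whether an external GPU ever shows up in that list on Apple Silicon is of course the open question):

```swift
import Metal

// List every Metal GPU the system exposes and print the relevant flags.
// On an Intel Mac with an eGPU attached, the external card appears with isRemovable == true;
// on today's M1 machines only the built-in GPU is listed.
for device in MTLCopyAllDevices() {
    print("\(device.name): removable=\(device.isRemovable), lowPower=\(device.isLowPower)")
}

// Apps that want to react to an eGPU being plugged in or safely ejected can register an observer.
let (_, observer) = MTLCopyAllDevicesWithObserver { device, notification in
    print("GPU event for \(device.name): \(notification.rawValue)")
}
_ = observer   // keep a reference alive for as long as notifications are wanted
```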


11 minutes ago, avg123 said:

So is the M1 more powerful than Zen 2 or Zen 3?

More powerful than Zen 2. It trades blows with Zen 3, but at a fraction of the power usage.

The two benefits of Zen 3 are:

1) Higher core count parts being available today. The M1 is essentially a quad core, while the lowest-end Zen 3 product is a 6-core. So for multithreaded workloads Zen 3 wins from the simple fact that it has more cores.

2) The Zen 3 parts sold today have more PCIe lanes.


I suspect eGPU support in 2018, like the need to even introduce the iMac Pro in 2017, was a product of the dark ages of the trash bin Mac and how to damage control that disaster while waiting for the new tower Mac Pro (which is effectively a 2020 product, even if technically released at the end of 2019).


20 minutes ago, saltycaramel said:

Fact.

That’s maybe what’s happening here.

And maybe they have internal data to back it up.

 

What I found weird was Linus acting like eGPU support comes bundled with thunderbolt ports inevitably.

 

Like, FreeBSD/pfSense on an Intel NUC with TB3 supports Thunderbolt (for 10G NICs and storage), but no one in his right mind would expect it to support eGPUs because “it's Thunderbolt, duh”.

 

Well Linus has already made his mind up about the M1 so he has to find some reason to hate it. 

 

Edit: after watching the Short Circuit video I think he sounded a bit less biased than in his previous videos.

He is still very much a pessimist about it though. 


17 minutes ago, Bombastinator said:

The only reason a Mac user would use an eGPU is if there's no other choice. They actually work a lot better for lag-resistant stuff like graphics work than for actual gaming. If they create a situation where a user cannot use a Mac for work they used to do, though, they will lose that customer to PC, and their customer base is not that thick to begin with.

Can eGPU support be added via software, or is it a hardware limitation? I think Apple will make their own eGPU and sell it for 2 kidneys. They don't want to deal with AMD and Nvidia drivers again.


11 minutes ago, LAwLz said:

Well Linus has already made his mind up about the M1 so he has to find some reason to hate it. 

Exactly. And since he is being rebutted by the tech community because of his stupid video, he now has to stick to his opinion because of his big ego.


25 minutes ago, Bombastinator said:

The only reason a Mac user would use an eGPU is if there's no other choice. They actually work a lot better for lag-resistant stuff like graphics work than for actual gaming. If they create a situation where a user cannot use a Mac for work they used to do, though, they will lose that customer to PC, and their customer base is not that thick to begin with.

There are plenty of uses for an eGPU, but people seem to assume a GPU is only for gaming, and because not everyone used one, the generalization is being made that no one needs an eGPU.

The Apple crowd seems to be quite upset over the opinion Linus has of the Mac mini: no upgradeable memory compared to the Intel Mac mini, fewer ports, and no eGPU support; if there were eGPU support, Apple would be putting a ton of marketing into it. IMO, those that don't like his opinion aren't forced to watch his videos.


8 minutes ago, avg123 said:

Can eGPU support be added via software, or is it a hardware limitation? I think Apple will make their own eGPU and sell it for 2 kidneys. They don't want to deal with AMD and Nvidia drivers again.

We don't know, but right now the most probable answer is that eGPU support can be added through software updates. 

One possible showstopper right now might be drivers. We already know that kernel drivers are not compatible with Rosetta 2, so Apple can't just load the old drivers. I can't imagine AMD or Nvidia being that willing to develop ARM-based drivers for Apple either.

"Hey, we know that we just gave you the middle finger and developed our own GPU, but can you please spend time and money developing drivers for our new product that is competing with you?". 


3 minutes ago, Blademaster91 said:

if there were eGPU support, Apple would be putting a ton of marketing into it. IMO, those that don't like his opinion aren't forced to watch his videos.

Apple did have eGPU support before but didn't market it, so I don't know where you're getting this idea from. 

 

As for the "if you don't like it don't watch it" argument:

 


40 minutes ago, LAwLz said:

Well Linus has already made his mind up about the M1 so he has to find some reason to hate it. 

 

Edit: after watching the Short Circuit video I think he sounded a bit less biased than in his previous videos.

He is still very much a pessimist about it though. 

Yeah, he toned down the edginess.


What I lamented about eGPUs and Thunderbolt is more a question of semantics.
