
Apple will announce move to ARM-based Macs later this month, says report

r3d0c
2 minutes ago, hishnash said:

They have been very clear they are moving everything to Apple's own chips, but that will not have that much of an impact.

Remember, Apple already sells more iPads every year than Macs!

 

Apple is more likely to buy some fabs than build from the ground up, though. They recently purchased all of Intel's modem division, for example. If Apple sees a new fab tech that might be the next big thing, they will buy it outright; they might already have one and are just waiting for it to mature... chances are they have a few different bets on the table, hoping at least one pays out. The other thing Apple does a lot of is invest in existing fabs (effectively fronting them cash) so that they build exclusive fabs for Apple / give Apple priority.

Apple already made TSMC all but a subsidiary. Apple is the one that has paid for TSMC to leapfrog Intel. Starting a fab is generally out of the question, even for Apple; they'd never see the return. A lot cheaper to pay TSMC.


10 minutes ago, hishnash said:

They have been very clear they are moving everything to Apple's own chips, but that will not have that much of an impact.

Remember, Apple already sells more iPads every year than Macs!

 

Apple is more likely to buy some fabs than build from the ground up, though. They recently purchased all of Intel's modem division, for example. If Apple sees a new fab tech that might be the next big thing, they will buy it outright; they might already have one and are just waiting for it to mature... chances are they have a few different bets on the table, hoping at least one pays out. The other thing Apple does a lot of is invest in existing fabs (effectively fronting them cash) so that they build exclusive fabs for Apple / give Apple priority.

And at the same presentation they also said that some professional software (namely Blender and Maya, but I'm sure those will not be the only ones) will never be able to run on ARM CPUs. You think they're just gonna ditch the Mac Pros entirely?


8 minutes ago, Master Disaster said:

And at the same presentation they also said that some professional software (namely Blender and Maya, but I'm sure those will not be the only ones) will never be able to run on ARM CPUs. You think they're just gonna ditch the Mac Pros entirely?

Pardon my ignorance, but doesn't Maya work better on Nvidia GPUs anyway? Or is that just a shit internet thing I picked up?

 

 


5 minutes ago, mr moose said:

Pardon my ignorance, but doesn't Maya work better on Nvidia GPUs anyway? Or is that just a shit internet thing I picked up?

I really don't know, but it still seems pretty dumb to cannibalise your highest tier and most expensive product either way. I guess either they realise nobody uses a Mac for that stuff anyway, or they intend to keep selling Mac Pros that aren't capable of doing any professional work (just like the MacBook Pros).


19 minutes ago, Master Disaster said:

You think they're just gonna ditch the Mac Pros entirely?

No, I think they will keep it. It might be quite different, with a lot of `fixed`/`hybrid` function compute units like the Afterburner card, but more of them.

 

 

11 minutes ago, mr moose said:

Pardon my ignorance, but doesn't Maya work better on Nvidia GPUs anyway?

The complex part of Maya is not GPU bound; in fact it is the link between the GPU and the CPU. Unlike a game, in editing software you are constantly changing the data you are seeing. This is why you might get better UX on a unified memory GPU like what Apple was demoing, since the CPU and GPU can read/write the same memory in place without any copy operations needed. (Not what you need for games so much, however.)


38 minutes ago, hishnash said:

The complex part of Maya is not GPU bound; in fact it is the link between the GPU and the CPU. Unlike a game, in editing software you are constantly changing the data you are seeing. This is why you might get better UX on a unified memory GPU like what Apple was demoing, since the CPU and GPU can read/write the same memory in place without any copy operations needed. (Not what you need for games so much, however.)

CUDA had this back in around 2017, and AMD has similar; Apple isn't doing anything new or anything that isn't already being used right now. Note that proper hardware-level coherent unified memory was done first by Intel, AMD and ARM, not Nvidia, but you'll see a ton of articles and information online from them, or about them, doing it first, which isn't the complete truth; they only had virtual unified memory up until Pascal.

https://developer.nvidia.com/blog/unified-memory-cuda-beginners/

http://gpgpu10.athoura.com/ROCM_GPGPU_Keynote.pdf

https://www.nextplatform.com/2019/01/24/unified-memory-the-final-piece-of-the-gpu-programming-puzzle/

 

Every company embellishes truths and situations, and will almost never clarify or mention that others can do the same thing they are up on stage hyping. They don't have to; just keep in mind that almost nothing is unique, and competitors almost always have the same things or will in short order.


5 minutes ago, leadeater said:

CUDA had this back in around 2017, and AMD has similar; Apple isn't doing anything new or anything that isn't already being used right now.

Not very different (it might have the same name, but it's not the same tech). What I'm talking about is being able to use a struct directly in C/C++/Swift on the CPU and directly on the GPU at the same time, without any synchronisation calls.

Even the unified memory on Intel platforms with an iGPU requires locks to work (I have used it); you can modify from both ends, but not at the same time.

The unified memory in CUDA is even worse, since it requires a GPU cache flush. Effectively, if you just change it in system memory, the GPU does not even know it has changed unless a message is sent to inform it of the change, and then it can go get it. All of these things add up to massive latency.
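For reference, the flow being argued about looks roughly like this in CUDA. This is a minimal sketch, assuming a Pascal-or-later GPU; the struct, kernel and sizes are made up for illustration:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A plain struct visible to both CPU and GPU code via one managed pointer.
struct Params {
    float scale;
    float offset;
};

__global__ void apply(const Params *p, float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * p->scale + p->offset;
}

int main() {
    const int n = 1 << 20;
    Params *p = nullptr;
    float *data = nullptr;

    // One allocation, one pointer, usable from both sides.
    cudaMallocManaged(&p, sizeof(Params));
    cudaMallocManaged(&data, n * sizeof(float));

    p->scale = 2.0f;                                 // written by the CPU
    p->offset = 1.0f;
    for (int i = 0; i < n; ++i) data[i] = float(i);

    apply<<<(n + 255) / 256, 256>>>(p, data, n);

    // This is the coordination point under discussion: the CPU must not
    // read `data` again until the GPU has finished with it.
    cudaDeviceSynchronize();

    printf("data[42] = %f\n", data[42]);
    cudaFree(p);
    cudaFree(data);
    return 0;
}
```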


19 minutes ago, hishnash said:

Not very different (it might have the same name, but it's not the same tech). What I'm talking about is being able to use a struct directly in C/C++/Swift on the CPU and directly on the GPU at the same time, without any synchronisation calls.

You can, from Pascal onward.

 

19 minutes ago, hishnash said:

Even the unified memory on Intel platforms with an iGPU requires locks to work (I have used it); you can modify from both ends, but not at the same time.

I wasn't talking about iGPUs; Xeon Phi is where it's used, but I guess maybe you can do it on iGPUs as well. Not all that interested in those.

 

19 minutes ago, hishnash said:

The unified memory in CUDA is even worse, since it requires a GPU cache flush. Effectively, if you just change it in system memory, the GPU does not even know it has changed unless a message is sent to inform it of the change, and then it can go get it. All of these things add up to massive latency.

Not from Pascal onward.

Quote

Demand paging can be particularly beneficial to applications that access data with a sparse pattern. In some applications, it’s not known ahead of time which specific memory addresses a particular processor will access. Without hardware page faulting, applications can only pre-load whole arrays, or suffer the cost of high-latency off-device accesses (also known as “Zero Copy”). But page faulting means that only the pages the kernel accesses need to be migrated.

 

Quote

Keep in mind that your system has multiple processors running parts of your CUDA application concurrently: one or more CPUs and one or more GPUs. Even in our simple example, there is a CPU thread and one GPU execution context. Therefore, we have to be careful when accessing the managed allocations on either processor, to ensure there are no race conditions.

 

Simultaneous access to managed memory from the CPU and GPUs of compute capability lower than 6.0 is not possible. This is because pre-Pascal GPUs lack hardware page faulting, so coherence can’t be guaranteed. On these GPUs, an access from the CPU while a kernel is running will cause a segmentation fault.

 

On Pascal and later GPUs, the CPU and the GPU can simultaneously access managed memory, since they can both handle page faults; however, it is up to the application developer to ensure there are no race conditions caused by simultaneous accesses.

This applies to Apple just the same.
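As a concrete illustration of the demand paging and prefetching behaviour the quoted blog describes, here is a hedged sketch, again assuming a Pascal-or-later GPU (the kernel is invented for illustration). Without the prefetch calls, the first touch of each page on the "wrong" side triggers a page fault and an on-demand migration:

```cuda
#include <cuda_runtime.h>

__global__ void touch(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 0.0f;   // pages now resident CPU-side

    int device = 0;
    cudaGetDevice(&device);

    // Migrate the pages to the GPU up front instead of faulting on demand.
    cudaMemPrefetchAsync(data, n * sizeof(float), device, 0);
    touch<<<(n + 255) / 256, 256>>>(data, n);

    // Migrate them back before the CPU reads, then wait for everything,
    // which is also what rules out the race conditions mentioned above.
    cudaMemPrefetchAsync(data, n * sizeof(float), cudaCpuDeviceId, 0);
    cudaDeviceSynchronize();

    return 0;
}
```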


10 minutes ago, VegetableStu said:

I just realised:

 

What about Thunderbolt? Or will Apple try to beeline to USB4 and ditch the Thunderbolt alt-mode?

 

Because that would mean dropping AMD as well. So far there's no indicator on the state of eGPUs on USB4 outside of requiring the optional Thunderbolt alt-mode.

Didn't Intel talk about opening it up for others to use?


4 hours ago, Master Disaster said:

Let's not forget, Apple are the richest company on the planet with a net value hovering close to $1T. Building a new fab and staffing it, creating the R&D department and staffing it, and setting up the supply chain would still be chump change to them.

Apple is still second to Microsoft by market capitalisation and 11th by revenue.


18 hours ago, rhn94 said:

You were saying? lmao, so many "experts" on this forum

And if you ever read anything I said: the investment people who spread these rumors have been doing so for over a decade. They might be right one year, but since Apple didn't announce any hardware, they're still wrong for now.

 



6 hours ago, leadeater said:

This applies to Apple just the same.

Yes you can, but you are accessing system memory over the PCIe bus... this is very slow. You can't do this to render a mesh or texture; you need to copy to the GPU. For compute that is different, but we are talking about Maya UX, i.e. when you are modeling/animating etc. (not rendering out the final scene; that is going to be better on a dedicated GPU for sure). You can't use shared system memory for meshes on Pascal, since you need to re-shape the memory for the GPU to be able to handle meshes efficiently (unless you massively limit how you use them on the CPU).
 

6 hours ago, leadeater said:

Xeon Phi is where it's used

Yep, the Xeon Phi is much closer to what Apple is doing, but again it's just focused on compute. The key thing that Apple's iGPUs have is the ability to share geometry in shared memory with the CPU in such a way that it is packed/shaped the same for both the CPU and GPU (SIMD3 and SIMD4 arrays). This is very useful if you have an application where the CPU is constantly making small changes to meshes and you want to render those changes in real time on the GPU without the overhead of copying a patch to the GPU and applying it in GPU memory... or copying the entire mesh object.
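To make the "small in-place edits to shared geometry" pattern concrete, here is a rough CUDA equivalent on managed memory (the mesh size, kernel and per-frame edit are invented for illustration). With demand paging only the touched pages move, though, as discussed above, an explicit synchronisation is still needed before the CPU touches the vertices again:

```cuda
#include <cuda_runtime.h>

// Trivial per-vertex transform standing in for whatever the renderer does.
__global__ void transform(float4 *verts, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) verts[i].y += 0.01f;
}

int main() {
    const int n = 100000;
    float4 *verts = nullptr;
    cudaMallocManaged(&verts, n * sizeof(float4));  // one mesh, one pointer

    for (int i = 0; i < n; ++i) verts[i] = {0.0f, 0.0f, 0.0f, 1.0f};

    for (int frame = 0; frame < 60; ++frame) {
        // CPU-side edit of a handful of vertices, as in interactive modelling;
        // only the pages containing these vertices have to migrate.
        for (int i = 0; i < 16; ++i) verts[i].x += 0.1f;

        transform<<<(n + 255) / 256, 256>>>(verts, n);
        cudaDeviceSynchronize();  // required before the CPU edits verts again
    }
    cudaFree(verts);
    return 0;
}
```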


 

 

5 hours ago, VegetableStu said:

What about Thunderbolt? Or will Apple try to beeline to USB4 and ditch the Thunderbolt alt-mode?

 

So Apple are including the PCIe driver kit for the ARM CPUs, so that means there will be a way to connect PCIe devices to these Macs... I expect USB4, including TB3.


Who would buy a new Apple desktop or laptop in the next 2 years? And who would buy one of the first ones?

I remember going through this malarkey when they moved to PPC. And back then they had Steve.

I can't imagine Tim doing a good job here.


12 minutes ago, willies leg said:

Who would buy a new Apple desktop or laptop in the next 2 years? And who would buy one of the first ones?

I remember going through this malarkey when they moved to PPC. And back then they had Steve.

I can't imagine Tim doing a good job here.

Probably nobody unless they need an immediate replacement. I almost bought a PPC Mac Mini just before the Intel switch, and it was like... oh. Oh... and then later the C2D Mac Mini couldn't install later OS X versions either. This is a lesson everyone who buys a first-generation product should only have to learn once.

 

In an ideal situation, the "fat binary" setup should have shipped intermediate code that gets compiled into the final CPU code when it's unpacked the first time; if the CPU suddenly switches, the OS goes back to the intermediate code, checks the code signing and then recompiles it. Or Apple could just do this on the App Store side and send the optimized binary for the target device. Either way, the program binary/libraries are usually the smallest portion of a game, but a fairly disorganized mess in an application, and anything that reduces the user's experience down to just moving one icon around is better than trying to uninstall and reinstall the thing.


53 minutes ago, hishnash said:

Yes you can, but you are accessing system memory over the PCIe bus... this is very slow. You can't do this to render a mesh or texture; you need to copy to the GPU. For compute that is different, but we are talking about Maya UX, i.e. when you are modeling/animating etc. (not rendering out the final scene; that is going to be better on a dedicated GPU for sure). You can't use shared system memory for meshes on Pascal, since you need to re-shape the memory for the GPU to be able to handle meshes efficiently (unless you massively limit how you use them on the CPU).

All of this works perfectly fine right now with GPU acceleration as it is; modeling also isn't actually all that latency sensitive.

 

59 minutes ago, hishnash said:

The key thing that Apple's iGPUs have is the ability to share geometry in shared memory with the CPU in such a way that it is packed/shaped the same for both the CPU and GPU (SIMD3 and SIMD4 arrays).

Which again is completely possible with CUDA and has been for a while. All the rules, all the recommendations, how this works fundamentally is the same for Apple, and Apple advises the same things, because Apple isn't doing anything that anyone else can't do or isn't already doing.

 

Quote

Choose a Resource Storage Mode for Buffers


For information about setting a storage mode, see Setting Resource Storage Modes. Several options are available, depending on your buffer access needs.

Accessed exclusively by the GPU. Choose the MTLStorageMode.private mode if you populate your buffer with the GPU through a compute, render, or blit pass. This case is common for intermediary buffers used between passes.

 

Populated once by the CPU and accessed frequently by the GPU. Choose the MTLStorageMode.managed mode. First, populate the buffer’s data with the CPU and then synchronize the buffer. Finally, access the buffer's data with the GPU.

 

Changes frequently, is relatively small, and is accessed by both the CPU and the GPU. Choose the MTLStorageMode.shared mode.

 

Changes frequently, is relatively large, and is accessed by both the CPU and the GPU. Choose the MTLStorageMode.managed mode. Always synchronize the buffer after modifying its contents with the CPU or the GPU.

 

Quote

Choose a Resource Storage Mode for Textures


For information about setting a storage mode, see Setting Resource Storage Modes. You can create textures with a MTLStorageMode.managed or MTLStorageMode.private mode only, but not with MTLStorageMode.shared. Several options are available, depending on your texture access needs.

 

Accessed exclusively by the GPU. Choose the MTLStorageMode.private mode if you populate your texture with the GPU through a compute, render, or blit pass. This case is common for render targets and drawables.

 

Populated once by the CPU and accessed frequently by the GPU. Use the CPU to create a buffer with MTLStorageMode.shared mode and populate its contents with your texture data. Then, use the GPU to copy the buffer’s contents into a texture with a MTLStorageMode.private mode.

 

Accessed frequently by both the CPU and the GPU. Choose the MTLStorageMode.managed mode. Always synchronize the texture after modifying its contents with the CPU or the GPU.

https://developer.apple.com/documentation/metal/setting_resource_storage_modes/choosing_a_resource_storage_mode_in_macos

 

Quote

Understand the Managed Mode


In a unified memory model, a resource with a MTLStorageMode.managed mode resides in system memory accessible to both the CPU and the GPU.

 

In a discrete memory model, a managed resource exists as a synchronized pair of memory allocations. One copy of the resource resides in system memory accessible only to the CPU; the other resides in video memory accessible only to the GPU. However, you don’t manage the copies separately; Metal creates a single MTLResource object for both copies.

 

In both memory models, Metal optimizes CPU and GPU access to managed resources. However, you must explicitly synchronize a managed resource after modifying its contents with the CPU or the GPU. For information about synchronizing a managed resource, see Synchronizing a Managed Resource.

 


10 minutes ago, willies leg said:

Who would buy a new Apple desktop or laptop in the next 2 years? And who would buy one of the first ones?

I remember going through this malarkey when they moved to PPC. And back then they had Steve.

I can't imagine Tim doing a good job here.

Lots of people.  I bought one of the last PPC iMacs and a 2007 Intel iMac, and compatibility issues were pretty rare in both cases.  Maybe give the very first ones a few weeks or months to identify and resolve any teething troubles if you're nervous.

 

Also, why wouldn't Tim do a good job here? It's not like Apple hasn't been through a major macOS transition before, and it has been developing macOS on ARM for years in advance.  Besides, you make it sound like Tim is personally reviewing the software stack... er, no, it's the engineers that determine how well this works.


5 hours ago, Commodus said:

Lots of people.  I bought one of the last PPC iMacs and a 2007 Intel iMac, and compatibility issues were pretty rare in both cases.  Maybe give the very first ones a few weeks or months to identify and resolve any teething troubles if you're nervous.

 

Also, why wouldn't Tim do a good job here? It's not like Apple hasn't been through a major macOS transition before, and it has been developing macOS on ARM for years in advance.  Besides, you make it sound like Tim is personally reviewing the software stack... er, no, it's the engineers that determine how well this works.

I got a Performa 638CD (still have it!), and a friend got one of the original PPC 6100s. My 638 blew the doors off the 6100, not just because he was emulating, but because of the enormous memory overhead of that whole emulation subsystem. It's probably gonna be the same way with these new Apple chips.

 

One thing Steve learned was the value of commodity hardware, and the innovation that goes with it. I can't imagine going back to that whole custom proprietary stuff. Tim's lost the vision and lost the control. It's too bad. Time will tell. I hope you're right, but I know, based on having Apple computers back to the Apple II days, we've seen this before and it's not good.


Honestly, I would just like a good dock to connect my phone to a display and a trusty mechanical keyboard. I like Pages, and Microsoft Office exists here too, but typing a lot on the touchscreen is kind of a pain.


On 6/24/2020 at 8:16 AM, Zodiark1593 said:

Honestly, I would just like a good dock to connect my phone to a display and a trusty mechanical keyboard. I like Pages, and Microsoft Office exists here too, but typing a lot on the touchscreen is kind of a pain.

I find the iPad's screen is completely serviceable. Phones, not so much.

 

But with everything moving to USB-C docks, it's the first time it's ever been possible to use ANY USB-C dock (even things like the Nintendo Switch's dock) with any USB-C device. You can plug a USB-C Android phone into a USB-C laptop dock and get the Ethernet, mouse and keyboard without having to change anything. (YMMV, but I've seen this work personally.)

 

