
Apple April 20 event - New iMac and iPad Pro with M1 chips, 'AirTags' and a new iPhone color

AndreiArgeanu
12 minutes ago, leadeater said:

Will just have to wait until there is an M1 native AutoCAD version.

Rendering tasks like AutoCAD can really make use of TBDR pipelines and the shared memory pointers that let you avoid memory copies.
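For readers wondering what a TBDR-specific win actually looks like in code, here is a minimal, hypothetical Metal sketch (generic Metal usage, not anything from AutoCAD): an intermediate depth attachment marked memoryless lives only in on-chip tile memory and is never written back to RAM, which is the kind of bandwidth saving a tile-based deferred renderer enables.

```swift
import Metal

// Hypothetical helper: set up a depth attachment that exploits a TBDR GPU.
// On Apple GPUs the render pass is tiled, so an attachment only needed
// during the pass can be marked .memoryless and never touches DRAM.
func makeMemorylessDepthPass(device: MTLDevice, width: Int, height: Int) -> MTLRenderPassDescriptor? {
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float,
                                                        width: width,
                                                        height: height,
                                                        mipmapped: false)
    desc.usage = .renderTarget
    desc.storageMode = .memoryless        // tile memory only; no backing allocation in RAM
    guard let depthTexture = device.makeTexture(descriptor: desc) else { return nil }

    let pass = MTLRenderPassDescriptor()
    pass.depthAttachment.texture = depthTexture
    pass.depthAttachment.loadAction = .clear
    pass.depthAttachment.storeAction = .dontCare  // discard after the pass; nothing to copy out
    return pass
}
```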


5 minutes ago, Bombastinator said:

Well it’s not going to be an Nvidia or intel gpu.  AMD is all that is left.

No, Apple has its own GPU tech. I expect it will be based on Apple's own in-house GPU cores, which have much better perf/W than those from AMD. Apple will likely go with an MCM package for the SoC with GPU dies in there, and the dedicated add-in card will just be a repeat of that package without any CPU dies and with more GPU dies, but otherwise the same as the main SoC.

 


15 minutes ago, hishnash said:

No, Apple has its own GPU tech. I expect it will be based on Apple's own in-house GPU cores, which have much better perf/W than those from AMD. Apple will likely go with an MCM package for the SoC with GPU dies in there, and the dedicated add-in card will just be a repeat of that package without any CPU dies and with more GPU dies, but otherwise the same as the main SoC.

 

If it doesn't run what I need it to run, the perf is totally irrelevant to me. Also, I don't believe it. Or believe IN it, perhaps. Can Apple make a GPU? Apparently they have, so yes. Can they make a decent GPU? Maybe. Depending on who you read here, it's either decent or totally dependent on a faster memory trick. Can they make a GPU, first try, better than companies who have been running flat out at it for 30 years? Um.......

Edited by Bombastinator

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


Ah, I like the smell of a right-to-repair LTT video in the morning, just after a positive news cycle for Apple. (But that's just a coincidence!1! RTR was not a priority in March, amirite?)


Linus managed to shoehorn RTR into the original M1 Macs unveiling video last year.

 

Now he puts up a whole RTR video mere hours after the M1 iMacs rocked the world with their fresh next-decade design, negligible power draw, the M1 speed we know and love, and the only "4.5K" monitor in a baseline desktop in the industry.

 

Good, good. 

No amount of FUD will stop the outflow from PC to Mac and the upcoming revolution. 

LMG is even hedging against this by establishing their new Apple-centered channel. 

 

And now let’s see what Apple comes up with for the format most people use nowadays: laptops.

MacBook Pro 14” and 16” are due this summer.

LMG better start shooting the obligatory RTR video for those too, ‘cause they’re gonna melt faces if they go miniLED + M1X 12-core.


2 minutes ago, Bombastinator said:

Can they make a GPU, first try, better than companies who have been running flat out at it for 30 years? Um.......

Yes, for multiple reasons:
 

1) They have hired a good fraction of the GPU talent from around the world.
2) They did not start building GPU IP from scratch 10 years ago; rather, they purchased PowerVR GPU tech, whose developers were founded back in 1985 (that is longer than Nvidia has existed).
3) Making a GPU is a lot simpler than making a high-performance CPU core. CPUs are way, way more complex than GPUs. The complexity in GPUs is mostly on the driver side of things and not in the hardware itself (compared to the complexity of a modern speculative-execution CPU with out-of-order execution, multiple tiers of cache, etc.). Apple also has many years of driver experience, and they don't need to worry about DX or Vulkan; instead they have developed Metal to match the hardware they are making. They have made this part of the job much simpler for themselves than it is for AMD or Nvidia by not supporting any graphics APIs designed by committee (even Apple's OpenGL implementation is built on top of Metal, much like how MoltenVK works).

But most of all, the GPU cores they already have in the M1 offer almost 2x the perf/W of AMD's. And unlike with a CPU, scaling out a GPU is not hard. In fact, some of the features of Metal mean it is easier for Apple to make a 32-core, 64-core, or even 128-core GPU than it is for AMD or Nvidia to scale out, since Apple has explicitly not added many GPU-core-to-GPU-core communication pathways, so they do not need a high-speed interface between GPU cores. In many ways you can consider the 8-core GPU in the M1 to really be eight 1-core GPUs; this is what helps Apple reduce power draw, at the expense of development time for us software devs.


Just now, saltycaramel said:

Ah, I like the smell of a right-to-repair LTT video in the morning, just after a positive news cycle for Apple. (But that's just a coincidence!1! RTR was not a priority in March, amirite?)


Linus managed to shoehorn RTR into the original M1 Macs unveiling video last year.

 

Now he puts up a whole RTR video mere hours after the M1 iMacs rocked the world with their fresh next-decade design, negligible power draw, the M1 speed we know and love, and the only "4.5K" monitor in a baseline desktop in the industry.

 

Good, good. 

No amount of FUD will stop the outflow from PC to Mac and the upcoming revolution. 

LMG is even hedging against this by establishing their new Apple-centered channel. 

 

And now let’s see what Apple comes up with for the format most people use nowadays: laptops.

MacBook Pro 14” and 16” are due this summer.

LMG better start shooting the obligatory RTR video for those too, ‘cause they’re gonna melt faces if they go miniLED + M1X 12-core.

RTR is one of those things I suspect will be gotten to eventually. We're still more or less at the stage of "trying not to be killed in the very short term by violence and disease." Nations may need a bit of breathing room to get to RTR. It's a bit like recycling myths: it's real, but it doesn't kill vast numbers of people horribly, so it's getting triaged a bit back in line.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


4 minutes ago, saltycaramel said:

they’re gonna melt faces if they go miniLED + M1X 12-core.

What will 'melt faces' is the 120Hz refresh rate they will have and the extremely low input latency that the iPad driver stack will bring to the Mac (sub-9ms from a wireless input device change to the pixels updating on screen!). That is what is going to upset the gaming industry, like it or not.
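For context on that number (my own arithmetic, not from the post above): a 120 Hz display refreshes every

\[
\frac{1000\ \text{ms}}{120} \approx 8.3\ \text{ms},
\]

so a sub-9 ms input-to-photon figure would mean an input change shows up within roughly one refresh interval.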

 


8 minutes ago, hishnash said:

Yes, for multiple reasons:
 

1) They have hired a good fraction of the GPU talent from around the world.
2) They did not start building GPU IP from scratch 10 years ago; rather, they purchased PowerVR GPU tech, whose developers were founded back in 1985 (that is longer than Nvidia has existed).
3) Making a GPU is a lot simpler than making a high-performance CPU core. CPUs are way, way more complex than GPUs. The complexity in GPUs is mostly on the driver side of things and not in the hardware itself (compared to the complexity of a modern speculative-execution CPU with out-of-order execution, multiple tiers of cache, etc.). Apple also has many years of driver experience, and they don't need to worry about DX or Vulkan; instead they have developed Metal to match the hardware they are making. They have made this part of the job much simpler for themselves than it is for AMD or Nvidia by not supporting any graphics APIs designed by committee (even Apple's OpenGL implementation is built on top of Metal, much like how MoltenVK works).

But most of all, the GPU cores they already have in the M1 offer almost 2x the perf/W of AMD's. And unlike with a CPU, scaling out a GPU is not hard. In fact, some of the features of Metal mean it is easier for Apple to make a 32-core, 64-core, or even 128-core GPU than it is for AMD or Nvidia to scale out, since Apple has explicitly not added many GPU-core-to-GPU-core communication pathways, so they do not need a high-speed interface between GPU cores. In many ways you can consider the 8-core GPU in the M1 to really be eight 1-core GPUs; this is what helps Apple reduce power draw, at the expense of development time for us software devs.

Exactly. They can make it run ONLY their effectively private system better than anyone else has bothered to. This is precisely what I worry about: a GPU that will potentially be totally useless for what I need it to do. I dunno. Maybe I'll get lucky. Perhaps virtualization will save me. I'm kinda doubting it at this point though.

Edited by Bombastinator

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


2 minutes ago, Bombastinator said:

 Perhaps virtualization will save me.  I’m kinda doubting it at this point though.

You mean running Vulkan or DX? VM GPU performance (for DX) is OK on the M1 (you are losing about 20% of perf compared to running native Metal); the DX driver is written by the VM provider and maps the DX calls into Metal instructions.


22 minutes ago, saltycaramel said:

but that’s just a coincidence!1! RTR was not a priority in March amirite?

Actually it was, though I suspect you aren't a regular WAN Show viewer; it has been talked about extensively on there, along with promotion of the funding campaign for it. It's a fairly regular topic on WAN Show, and if not a topic of its own, it comes up within other topics a lot.


4 hours ago, leadeater said:

Aka the requirements to run AutoCAD are "be a computer"

 

Your requirements are for your workloads and your projects, not the requirements of AutoCAD nor the requirements of everyone else. AutoCAD projects come in all different sizes and complexities; not everyone is designing a Boeing 747, and not everyone is designing a simple PC case either.

I think a lot of people on this forum tend to forget that their use case might not be the same as everyone else's.

I have taken some basic courses in AutoCAD, Blender, Photoshop and Adobe Premiere, and I completed those courses on a Core 2 Duo laptop with 4GB of RAM, a 5400 RPM hard drive and integrated Intel graphics. It was a bit slow from time to time, especially the startup time for the "larger" (relatively speaking) projects, but it worked just fine. It wasn't so slow that I felt "I need a new laptop!" either.

 

It's the same with people saying you "need" an 8-core CPU and an RTX 3070 to play games. You absolutely don't. You might want to have it, but not everyone needs it. It depends on the use case.

 

I mean, the example of Photoshop speaks volumes. 200 layers? I wonder how many Photoshop users actually have such massive projects. If you are designing a thumbnail for your YouTube video, or are participating in "photoshop battles" on Reddit, then you probably don't even need 5 layers, and if that's what you are doing you are probably fine with it being a bit slow as well.


1 hour ago, leadeater said:

Actually it was, though I suspect you aren't a regular WAN Show viewer; it has been talked about extensively on there, along with promotion of the funding campaign for it. It's a fairly regular topic on WAN Show, and if not a topic of its own, it comes up within other topics a lot.

 

Linus has been a RTR advocate for as long as I can remember, not denying that.

The timing and the reach of an LTT main channel video is what I’m looking at. 

The reach can't be compared to a segment lost in a 1.5-hour WAN Show.

The timing sounds a bit like “look, we just had to compliment Apple’s new iMac in our last video, but make no mistake..”. PC partners will like that. Even though most of them could be bashed for RTR issues as well. But the video is a “take a vodka shot every time Linus mentions Apple” game. 

 

I should check again, but it also looks like the video has zero mentions of the Apple products announced this Tuesday. That makes it look like it was ready for some time, just waiting to be uploaded in whatever spring week Apple chose to hold their event.


3 minutes ago, saltycaramel said:

 

Linus has been a RTR advocate for as long as I can remember, no denying that.

The timing and the reach of an LTT main channel video is what I’m looking at. 

The reach can't be compared to a segment lost in a 1.5-hour WAN Show.

The timing sounds a bit like “look, we just had to compliment Apple’s new iMac in our last video, but make no mistake..”. PC partners will like that. Even though most of them could be bashed for RTR issues as well. But the video is a “take a vodka shot every time Linus mentions Apple” game. 

 

I should check again, but it also looks like the video has zero mentions of the Apple products announced this Tuesday. That makes it look like it was ready for some time, just waiting to be uploaded in whatever spring week Apple chose to hold their event.

The video was probably in the works or ready a few days ago. LMG has an upload schedule with certain priorities, and the video was most likely in the works for a while, since I'd imagine writing the script for it wasn't the easiest thing to do. Also, Apple's event had no relevance whatsoever; like, what was he going to say about it, 'oh look, Apple released devices recently and they're probably difficult to repair too'? Additionally, 'PC partners' don't play a huge part in being against right to repair to my knowledge; they may have practices in place that go against RTR, but they're not actively working against it like John Deere or Apple.

 

I'll be happily proved wrong if you can provide evidence for your theories that LMG was specifically waiting to upload this right after the Apple event, but for now, your claims are unfounded and unproven.



I'm going to be buying the iPad Pro 12.9" 128GB WiFi so I can finally play my mobile rhythm games not on an emulator using my Cintiq Pro. I was hoping to upgrade my PC this year, but it seems I'll have a better chance of getting an iPad Pro than a high-end CPU at MSRP these days (also because DDR5 is hopefully out next year). I wouldn't get the iPad Pro for just gaming, but my other VTuber face tracking software requires Face ID, which only the iPad Pros have.

 

1 hour ago, LAwLz said:

 I completed those courses on a Core 2 Duo laptop with 4GB of RAM, a 5400 RPM hard drive and integrated Intel graphics.

It cracks me up because this is the exact configuration I had to deal with when I was an intern at an engineering firm. It wasn't the best, but it did the job well enough as long as I didn't push my luck. A lot of the full time employees also used similar setups.


6 hours ago, leadeater said:

 it's up to the buyer to know what they require and purchase that.

Yeah, I'm literally not moving to an M1 Mac till I gain a better understanding of what RAM means to it. I have 32GB in my current iMac, and it sometimes seems too little just when I have browsers and chat windows open... but how does that translate to the M1 paradigm?

 

2 hours ago, saltycaramel said:

Ah, I like the smell of a right-to-repair LTT video in the morning, just after a positive news cycle for Apple. (But that's just a coincidence!1! RTR was not a priority in March, amirite?)

....

Now he puts up a whole RTR video mere hours after the ....

 

To be fair, he talked about this video coming this week at least last Friday on WAN Show, and on the previous one he said they'd be doing it, which might have been before the Apple event was announced.

 

23 minutes ago, AndreiArgeanu said:

I'll be happily proved wrong if you can provide evidence for your theories that LMG was specifically waiting to upload this right after the Apple event, but for now, your claims are unfounded and unproven.

 

TO ALSO BE FAIR, he said on WAN Show that this would be the Monday video. So it could have taken longer to wrap up, or it could have been delayed specifically.

🖥️ Motherboard: MSI A320M PRO-VH PLUS  ** Processor: AMD Ryzen 2600 3.4 GHz ** Video Card: Nvidia GeForce 1070 TI 8GB Zotac 1070ti 🖥️
🖥️ Memory: 32GB DDR4 2400  ** Power Supply: 650 Watts Power Supply Thermaltake +80 Bronze Thermaltake PSU 🖥️

🍎 2012 iMac i7 27";  2007 MBP 2.2 GHZ; Power Mac G5 Dual 2GHZ; B&W G3; Quadra 650; Mac SE 🍎

🍎 iPad Air2; iPhone SE 2020; iPhone 5s; AppleTV 4k 🍎


As we know, this virtual event was originally rumored for March 23, then it got "postponed" (in the rumor-sphere) week after week. Plenty of wiggle room to time the subsequent RTR video.

 

@AndreiArgeanu

This might be me not being a native English speaker, but since when does a mild, timid "makes it look like" call for a full "show your evidence" escalation, like we're talking about weapons of mass destruction? This is not a Senate committee; we share impressions. My impression is that it's a non-insignificant coincidence that on the main LTT channel (with its huge reach) the video immediately after the video about the first ever really new Apple Silicon Mac (not an old chassis from the Intel era with an M1 shoved in it) is a video about RTR. You've got a different impression, good. Only a few LMG employees know what the reasoning behind this is: maybe just optimization for views, maybe it also appeases the PC-building audience and PC partners like I imagined. Or maybe it really is just a 1-in-56 coincidence for this video to be released exactly this week. We'll likely never know.

 


Also, thank you Linus for trolling me (not me specifically) again with the “iPad on a stand” comments about the M1 iMac 🙏🏻 😏


An “iPad on a stand” with the first and only “larger than 4K” 16:9 display affordable by the masses. And with a full desktop OS, and virtualized Windows. 

 


3 hours ago, scottyseng said:

 I wouldn't get the iPad Pro for just gaming, but my other VTuber face tracking software requires Face ID, which only the iPad Pros have.

 

An iPhone X, SE (2020), or XR is all you need for the VTuber camera device. The M1 iMac/Mac mini, however, will be slightly nerfed, as one of the libraries (OVRLipSync) made by Oculus (Facebook) does not have an M1-native build, but that's for audio lip sync. The iPhone XS, 11, or 12 will also work. Those using the iPad have to rig up stands that obscure the rest of their equipment, and prefer the phone over the iPad. ARKit supports 52 blendshapes, which still puts it ahead of USB webcams, and completely buries Android's ARCore.
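For anyone curious what "ARKit's 52 blendshapes" look like from the app side, here is a minimal, hypothetical Swift sketch (not the actual code of any VTuber app) of reading the per-frame coefficients from a face anchor:

```swift
import ARKit

// Minimal face-tracking session: ARKit delivers a dictionary of blendshape
// coefficients (0.0–1.0) per frame, which a VTuber app would map onto its avatar rig.
final class FaceTracker: NSObject, ARSessionDelegate {
    private let session = ARSession()

    func start() {
        // Not all devices support face tracking; check before running the session.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let face as ARFaceAnchor in anchors {
            let jawOpen   = face.blendShapes[.jawOpen]?.floatValue ?? 0
            let blinkLeft = face.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
            // ...forward the full coefficient dictionary to the avatar renderer.
            _ = (jawOpen, blinkLeft)
        }
    }
}
```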

 

That said, I'd still happily buy the iPad Pro as a portable device instead of using a laptop when travelling. It lasts far, far longer than an Intel laptop using an iGPU, and makes all "U"-series CPUs look like a joke.


6 hours ago, hishnash said:

You mean running Vulkan or DX? VM GPU performance (for DX) is OK on the M1 (you are losing about 20% of perf compared to running native Metal); the DX driver is written by the VM provider and maps the DX calls into Metal instructions.

The 20% thing I was already counting on. Even at 20% slower, the thing is still fast enough to game on, CPU-wise. Not the very fastest, mind you, but fast enough to work OK, which is really all that is needed. The question is: will they be able to beat, or even get near, a 5700 XT using emulation? There are GPUs that are more than 20% faster than a 5700 XT. They're not integrated though.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


16 hours ago, Kisai said:

 

Their marketing used the 8GB model, which is the one I was explicitly pointing out as being under-spec'd for EVERYTHING; comparing it to the iMacs with GPUs, the GPUs had an additional 4GB. 16GB should have been a standard configuration in all devices since 2016. The fact that we're still seeing 8GB in systems that otherwise might have good specs, and can't be upgraded, is just wasteful.

 

Don't bring personal attacks here, bud. You're not doing AutoCAD on an iGPU any more than you're going to do Photoshop on an iGPU. You're wasting your own productivity time running AutoCAD at 4fps. I have a hard time believing you or anyone else employed in a professional capacity is being deployed chromebook-tier computers, unless the IT department is incompetent or run by accountants. I am actually working at a Fortune 500 engineering company, and it's usually the employee or their manager that misunderstands the specs the employee needs.

 

So I had to spell it out:

17" Laptop with a Quadro (A/T/P 3000), 64GB = Engineers running AutoCAD/Civ3D/ArcGIS/etc

15" Laptop with a Quadro (A/T/P 1000), 32GB = Office staff who review CAD, Acrobat, Photoshop, Premiere Pro and the ones working on massive spreadsheets. These are 30% below the recommended specs for CAD.

12/13/14" Laptop with iGPU, 16GB = This the the minimum required to run All Adobe software, Office Software, and is 85% below the recommended requirements for AutoCAD and 30% below the minimum requirements for AutoCAD.

 

 

If I even suggest that a lesser laptop can handle a program, they (people who haven't been informed) will take that as a cue that they can. Just because the software says "4GB minimum" does not mean you can load every project. There are people I know who use Photoshop professionally who need 64GB+ of RAM because they are working on projects with 200 layers. There are people I know who actually use AutoCAD professionally who need 64GB of RAM just to open their projects, never mind edit them. There are engineers who mistakenly picked 14" laptops with iGPUs and have wasted clients' time by being unable to load their billion-dollar project. Time is money.

 

If you are nickel-and-diming over the cost of the computer, when the cost of the computer is a drop in the bucket, you are not the right person for the job. Plenty of people here on this forum and elsewhere make this mistake, especially when they use the argument of "I can build a cheaper PC for the cost of a Mac". Nobody cares about that in a professional environment. Either the computer meets the requirements of the PROJECT/department, or you don't even consider it.

 

The iMacs here? Probably sufficient for schools, where Apple has traditionally been the computer of choice. They're certainly underwhelming for typical office work, especially the 8GB models. I'd argue the iPad Pros are a better option than the iMacs, because you at least get something portable for the same lifetime.

 

And never mind scope creep. Apple has a missing middle in their lineup, and has had that missing middle ever since the G5. All the M1 switch has done is make that missing middle wider.

 

It's 2021, if you are still selling hardware with 8GB, you're targeting the chromebook market, not the home professional, not the enterprise business.

 

I don't think many people realise that a different architecture and OS often requires far less memory for the same task. A case in point is massive databases run on mainframes: often these will run superbly with just 8GB of memory in their VM, while even part of that same DB running on a Windows server with 128GB will struggle. I have seen and done that first hand.
 

We also have use cases. 8GB on an M1 Mac may not sound like much when running huge numbers of layers in PS, but coupled with fast SSD storage it actually works very well. This is especially true where applications are optimised for the architecture.

 

People often read spec sheets and make assumptions without ever using a system. Marketing spec sheets rarely paint a true picture.


13 hours ago, Video Beagle said:

Yeah, I'm literally not moving to an M1 Mac till I gain a better understanding of what RAM means to it. I have 32GB in my current iMac, and it sometimes seems too little just when I have browsers and chat windows open... but how does that translate to the M1 paradigm?

 

RAM is RAM bro, regardless of architecture. 


1 hour ago, Umberto said:

RAM is RAM bro, regardless of architecture. 

Sure, but it's used differently. In an older iMac with a dedicated GPU, if you had 8GB of RAM, those 8GB were all allocated to the CPU, and the GPU had its own dedicated memory, aka VRAM. In more recent iMacs and MacBooks that memory is shared, meaning the RAM now has to be split between the CPU and the iGPU, so 8GB of RAM in an M1 Mac is not the same as 8GB of memory on an older Mac with a dGPU.


1 hour ago, AndreiArgeanu said:

Sure, but it's used differently. In an older iMac with a dedicated GPU, if you had 8GB of RAM, those 8GB were all allocated to the CPU, and the GPU had its own dedicated memory, aka VRAM. In more recent iMacs and MacBooks that memory is shared, meaning the RAM now has to be split between the CPU and the iGPU, so 8GB of RAM in an M1 Mac is not the same as 8GB of memory on an older Mac with a dGPU.

It is worth noting that for regular desktop tasks (not gaming etc.), in a dedicated-GPU world the GPU memory is largely a direct copy of the CPU memory. This was very bad in the Intel integrated days, when you used the same 8GB but the GPU and CPU could not point to the same bytes, so all data needed to be copied (duplicated on the same memory chips). With M1 GPUs, the GPU shares the memory controller with the CPU cores, so you can share memory pointers with the GPU the same way you can share memory between CPU cores. This means you do not need to copy data to pass it to the GPU; just pass the pointer, and the GPU can read/write it just like any other CPU core on the chip.

But I would add: if you are doing anything other than the most simple of tasks, you should get 16GB of memory. However, if you consider the devices these are replacing, most buyers are for sure only doing the most basic of basic tasks.
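To make the pointer-sharing point concrete, here is a minimal Metal sketch (my own illustration with hypothetical names, not Apple sample code): a buffer created with shared storage is visible to both the CPU and the GPU, so the CPU writes through the same pointer the GPU will read, with no staging copy in between.

```swift
import Metal

// Hypothetical illustration of unified memory: one allocation, two "views".
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }

var values: [Float] = [1, 2, 3, 4]

// .storageModeShared: the CPU and GPU address the same physical memory,
// so no separate VRAM allocation or blit/copy is needed before the GPU can use it.
let buffer = device.makeBuffer(bytes: &values,
                               length: values.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// CPU-side write through the shared pointer...
let ptr = buffer.contents().bindMemory(to: Float.self, capacity: values.count)
ptr[0] = 42

// ...and the very same buffer would be handed straight to a (hypothetical) compute kernel:
// let encoder = commandBuffer.makeComputeCommandEncoder()!
// encoder.setBuffer(buffer, offset: 0, index: 0)
```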

 

12 hours ago, Bombastinator said:

 The question is: will they be able to beat, or even get near, a 5700 XT using emulation?

Unfortunately, due to how difficult it is to map DX (and Vulkan) to Metal, real-world performance might depend a lot on what the game is doing. Some APIs will have almost no overhead, but others will be very hard for the mapping layer to map well at runtime. The reason for this is that Apple has not constrained their hardware team to develop a GPU that matches the API; rather, they have constrained the API team to write an API that gets the most out of the GPU.

Technically you could say Nvidia (and to a lesser extent AMD) have done the same; the APIs (Vulkan and DX) are very much geared to be easy to implement well on an immediate-mode (IMR) pipeline, but AMD and Nvidia hold all the good patents and IP in that space.

 

Apple could extend Vulkan to add all the needed bits so it could really make use of their GPUs, but what would the point be? You would not be able to use that code on any other system, and it would still be a dirty hack (a bit like all the Nvidia-only extensions to Vulkan and DX).

They will have GPU options that are more powerful than a 5700 XT (a 32-core Apple GPU would provide ~10 TFLOPS at roughly 30W).
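For what it's worth, that 10 TFLOPS figure lines up with simply scaling the widely reported ~2.6 TFLOPS (FP32) of the M1's 8-core GPU linearly (my extrapolation, not a confirmed spec):

\[
32 \times \frac{2.6\ \text{TFLOPS}}{8} \approx 10.4\ \text{TFLOPS}.
\]

The ~30W part is the shakier assumption, since power rarely scales perfectly linearly with core count.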


2 hours ago, hishnash said:

It is worth noting that for regular desktop tasks (not gaming etc.), in a dedicated-GPU world the GPU memory is largely a direct copy of the CPU memory. This was very bad in the Intel integrated days, when you used the same 8GB but the GPU and CPU could not point to the same bytes, so all data needed to be copied (duplicated on the same memory chips). With M1 GPUs, the GPU shares the memory controller with the CPU cores, so you can share memory pointers with the GPU the same way you can share memory between CPU cores. This means you do not need to copy data to pass it to the GPU; just pass the pointer, and the GPU can read/write it just like any other CPU core on the chip.

But I would add: if you are doing anything other than the most simple of tasks, you should get 16GB of memory. However, if you consider the devices these are replacing, most buyers are for sure only doing the most basic of basic tasks.

 

Unfortunately, due to how difficult it is to map DX (and Vulkan) to Metal, real-world performance might depend a lot on what the game is doing. Some APIs will have almost no overhead, but others will be very hard for the mapping layer to map well at runtime. The reason for this is that Apple has not constrained their hardware team to develop a GPU that matches the API; rather, they have constrained the API team to write an API that gets the most out of the GPU.

Technically you could say Nvidia (and to a lesser extent AMD) have done the same; the APIs (Vulkan and DX) are very much geared to be easy to implement well on an immediate-mode (IMR) pipeline, but AMD and Nvidia hold all the good patents and IP in that space.

 

Apple could extend Vulkan to add all the needed bits so it could really make use of their GPUs, but what would the point be? You would not be able to use that code on any other system, and it would still be a dirty hack (a bit like all the Nvidia-only extensions to Vulkan and DX).

They will have GPU options that are more powerful than a 5700 XT (a 32-core Apple GPU would provide ~10 TFLOPS at roughly 30W).

So they will more or less do it, but there's going to be a LOT of variability as to how fast something runs, or even whether it runs at all. To make it more fun, different parts of a game could be faster than other parts. Conditional, maybe, sometimes. Might do what I need. But not a no. Argh.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


3 hours ago, Bombastinator said:

Might do what I need.  But not a no.  Argh. 

WWDC is the actual geek-aimed, tech-level show. You're probably gonna need to wait till that (and not the keynote, as that'll most likely still be aimed at the mainstream).

🖥️ Motherboard: MSI A320M PRO-VH PLUS  ** Processor: AMD Ryzen 2600 3.4 GHz ** Video Card: Nvidia GeForce 1070 TI 8GB Zotac 1070ti 🖥️
🖥️ Memory: 32GB DDR4 2400  ** Power Supply: 650 Watts Power Supply Thermaltake +80 Bronze Thermaltake PSU 🖥️

🍎 2012 iMac i7 27";  2007 MBP 2.2 GHZ; Power Mac G5 Dual 2GHZ; B&W G3; Quadra 650; Mac SE 🍎

🍎 iPad Air2; iPhone SE 2020; iPhone 5s; AppleTV 4k 🍎

