
Former Intel engineer said that Skylake was the turning point for the Mac's transition to Apple Silicon

captain_to_fire

Sources: 9to5 Mac, PC Gamer

 

Quote

François Piednoël, a former Intel engineer, told PC Gamer that Apple has been dissatisfied with Intel processors since the introduction of the Skylake architecture in 2015. The report states that Intel's Skylake processors had several problems at the time, and that Apple was the client with the highest number of complaints about the architecture.

Quote

"The quality assurance of Skylake was more than a problem," says Piednoël during a casual Xplane chat and stream session. "It was abnormally bad. We were getting way too much citing for little things inside Skylake. Basically our buddies at Apple became the number one filer of problems in the architecture. And that went really, really bad. 

“Basically the bad quality assurance of Skylake is responsible for them to actually go away from the platform. […] Apple must have really hated Skylake,” said Piednoël.

It's no secret that Intel has affected the Mac pipeline on multiple occasions, but personally I don't think that's the only reason for the transition to "Apple Silicon" Macs. Apple has always been a company that values the integration of hardware and software, and that's only possible when you have control over everything that goes inside a device.

It would be interesting to know what bugs in Skylake, other than Spectre and Meltdown, made Apple switch to its own chips. A quick Google search showed me articles about PCs crashing with Hyper-Threading enabled. I know that Apple is no longer satisfied with Intel's product roadmap, especially with Intel's failure to shrink its transistors beyond 14 nm+++++++, not to mention that the current 10th-gen Intel processors are kinda toasty too.

 


 

I think as early as the iPhone 6s, there were benchmarks showing its A9 chip to be faster than some Intel chips. ARM chips are what allow thin, light designs while bringing all-day battery life. Apple promised that with their own silicon, there would be less of a trade-off between performance and battery life. With Apple Silicon, there's no reason for Macs to have a separate T2 chip, because the Apple Silicon SoC itself houses the Secure Enclave coprocessor, and with a Neural Engine inside, Macs can finally have Face ID. Many people ask why current Macs don't have Face ID, and the answer is that the T2 chip is just a repurposed A10 Fusion chip from the iPhone 7/7 Plus, which doesn't have the Neural Engine.

Right now, Apple's iPhone chips are the fastest among phones. It will be interesting to see how Intel and AMD respond to this. Who knows? Maybe it's time for Intel and AMD to make their own ARM chips too, so that PC OEMs follow. I for one am looking forward to Apple's future ads on how much heat Intel chips produce, just like when they made an ad on why the Power Mac G3 was better than PCs with a Pentium II inside.

 

There is more that meets the eye
I see the soul that is inside

 

 


Why, because there were no more improvements? Did they really think Apple could do better than them? My, how they've given up on themselves. 😔


Let's be honest, if AMD and their (relatively) minuscule R&D budget can beat Intel, imagine what Apple can do!

Laptop: HP OMEN 15 - Intel Core i7 9750H, 16GB DDR4, 512GB NVMe SSD, Nvidia RTX 2060, 15.6" 1080p 144Hz IPS display

PC: Vacancy - Looking for applicants, please send CV

Mac: 2009 Mac Pro 8 Core - 2 x Xeon E5520, 16GB DDR3 1333 ECC, 120GB SATA SSD, AMD Radeon 7850. Soon to be upgraded to 2 x 6 Core Xeons

Phones: LG G6 - Platinum (The best colour of any phone, period), LG G7 - Moroccan Blue

 


Frenchie is highly opinionated. Take what he says with more salt than a visit to WCCFTech. 

 

The video above, from Ian Cutress's (AnandTech) personal channel, gives more background on how the industry works. Basically, another fuss over nothing.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


4 minutes ago, porina said:

Frenchie is highly opinionated. Take what he says with more salt than a visit to WCCFTech. 

Yeah, anyone who has taken even a cursory glance at his Twitter will realize he's the last person you want as an objective source.

[Out-of-date] Want to learn how to make your own custom Windows 10 image?

 

Desktop: AMD R9 3900X | ASUS ROG Strix X570-F | Radeon RX 5700 XT | EVGA GTX 1080 SC | 32GB Trident Z Neo 3600MHz | 1TB 970 EVO | 256GB 840 EVO | 960GB Corsair Force LE | EVGA G2 850W | Phanteks P400S

Laptop: Intel M-5Y10c | Intel HD Graphics | 8GB RAM | 250GB Micron SSD | Asus UX305FA

Server 01: Intel Xeon D 1541 | ASRock Rack D1541D4I-2L2T | 32GB Hynix ECC DDR4 | 4x8TB Western Digital HDDs | 32TB Raw 16TB Usable

Server 02: Intel i7 7700K | Gigabyte Z170N Gaming5 | 16GB Trident Z 3200MHz


52 minutes ago, captain_to_fire said:

 

I think as early as the iPhone 6s, there were benchmarks showing its A9 chip to be faster than some Intel chips.

When you compare a Honda Civic (MacBook Air) to a bicycle (smartphone), of course the Honda Civic looks better; that doesn't mean the Honda Civic is going to beat the Ferrari (Mac Pro). But when the bicycle gets more mileage and is a quarter of the size, you have to ask what the Honda Civic is doing wrong. Nobody questions the Ferrari.

 

That's the problem with making an ARM-to-x86 comparison: everyone is comparing the "bicycle" to another "bicycle".

 

https://browser.geekbench.com/ios_devices/iphone-6s

Quote
iPhone 6s (Apple A9 @ 1.8 GHz): 541

iPhone 11 Pro (Apple A13 Bionic @ 2.7 GHz): 1327

 

Geekbench 5 scores are calibrated against a baseline score of 1000 (which is the score of an Intel Core i3-8100). Higher scores are better, with double the score indicating double the performance.

 

OK, now watch what happens when I run Geekbench on my desktop. (those numbers are the single-core numbers btw)

 

[Geekbench screenshot: desktop results]

So by comparison, the iPhone 6s:

[Geekbench screenshot: iPhone 6s]

and the iPhone 11 Pro:

[Geekbench screenshot: iPhone 11 Pro]

Hey, look at that: according to Geekbench, the iPhone 11 is faster than a Haswell quad-core, with single-core performance nearly 50% higher.

 

Yet...

[Geekbench screenshot: Intel's latest 10-core desktop CPU]

Intel's latest desktop CPU barely squeaks past it in single-thread. The multi-core number is higher because a 10-core (20-thread) part is being compared to a 6-core A13, which has two 2.66 GHz cores and four 1.82 GHz cores, so the numbers aren't going to be a fair comparison for that reason alone.

 

But you know what else needs to be compared?

[Geekbench screenshot: AMD Ryzen 9 3950X]

Hmm, the AMD chip isn't as fast as the A13 on single-core score; how could that be? The 3950X is a 16-core (32-thread) CPU.

 

So what IS comparable right now?

https://browser.geekbench.com/processors/intel-core-i7-1068ng7

[Geekbench screenshot: Intel Core i7-1068NG7]

 

Yes, the chip in the 13" Macbook Pro

https://browser.geekbench.com/macs/macbook-pro-13-inch-mid-2020-intel-core-i7-1068ng7-2-3-ghz-4-cores , which is only unfair because of the core configuration, if the A13 was all full speed cores it would easily beat it since HT's are no replacement for cpu cores.
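
As a side note on reading these numbers, here is a tiny sketch (mine, using only the single-core scores quoted above and the stated i3-8100 baseline of 1000) of what the Geekbench 5 calibration works out to:

#include <stdio.h>

/* Geekbench 5 single-core scores quoted earlier in this post; the baseline
 * score of 1000 corresponds to an Intel Core i3-8100. */
#define BASELINE 1000.0

int main(void) {
    const struct { const char *chip; int score; } results[] = {
        { "Apple A9 (iPhone 6s)",      541  },
        { "Apple A13 (iPhone 11 Pro)", 1327 },
    };

    for (int i = 0; i < 2; ++i)
        printf("%-26s %4d  -> %.2fx the i3-8100 baseline\n",
               results[i].chip, results[i].score,
               results[i].score / BASELINE);

    /* Generational uplift implied by the two scores alone. */
    printf("A13 vs A9 single-core: %.2fx\n", 1327.0 / 541.0);
    return 0;
}

It only restates the calibration note; all the thermal and core-count caveats below still apply.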

 

Quote

ARM chips are what allow thin, light designs while bringing all-day battery life. Apple promised that with their own silicon, there would be less of a trade-off between performance and battery life. With Apple Silicon, there's no reason for Macs to have a separate T2 chip, because...

Stop stop, you're killing me.

 

While it makes sense to stuff as much as possible into an SoC, there is a reason why the T2 and such are not built into the CPU. They have to have their own security keys, and you can't write keys into the CPU at fab time. That's why things like the Nintendo Switch were easily jailbroken: the keys couldn't be changed once shipped. It's a silly thing, but I'm sure Apple doesn't want to make it easier for hackers. Besides, there was nothing stopping Apple from putting Face ID on all their Macs except the unwillingness of PC monitor vendors to put cameras in their monitors as part of the built-in USB hub. So the Mac Pro and Mac mini couldn't have it, while the iMac/MacBook/MacBook Pro had no reason not to and, as far as I'm aware, have always had the capability.


I don't believe it. Companies like Apple plan their products as far in advance as they can; they would have been looking at their own ARM silicon, alongside moves to AMD Ryzen, years before Skylake became a measurable performance problem. The fact that we have rumors for both going back several years is proof of that.

Also, you can't exactly blame Intel for poor thermal engineering; the CPUs can perform better when they aren't hamstrung with shit cooling.

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


I have doubts about that claim; if Skylake was so bad, Apple would have moved to ARM sooner, or used AMD's Threadripper and Epyc in the pro machines.

And you can't really blame Intel for the hot-running laptops; it's Apple's engineering that has been kneecapping the performance of the Intel CPUs. So it's more like Apple is giving up on x86 because there aren't any improvements when the cooler design is so inadequate that the CPU throttles under a sustained workload.


26 minutes ago, Blademaster91 said:

I have doubts about that claim; if Skylake was so bad, Apple would have moved to ARM sooner, or used AMD's Threadripper and Epyc in the pro machines.

And you can't really blame Intel for the hot-running laptops; it's Apple's engineering that has been kneecapping the performance of the Intel CPUs. So it's more like Apple is giving up on x86 because there aren't any improvements when the cooler design is so inadequate that the CPU throttles under a sustained workload.

My guess is that the MacBook Air was originally planned to be their first ARM machine, but there was a delay and they had to shoehorn an Intel chip in, which is why the cooling design is so bad. That kind of cooling would be perfectly adequate for a low/medium-power ARM chip, but even Intel's low end is too hot for it.

As for the MacBook Pro line, I honestly think they've been giving the Intel chips horrendous cooling so they can say in a slide that their ARM chips run cooler and quieter.


 


I agree that the first part makes perfect sense. I don't think they've been killing cooling for a future marketing slide, though. They just value thin and light more than thermal performance, like most other current laptop designs do.

11 minutes ago, yolosnail said:

My guess is that the MacBook Air was originally planned to be their first ARM machine, but there was a delay and they had to shoehorn an Intel chip in, which is why the cooling design is so bad. That kind of cooling would be perfectly adequate for a low/medium-power ARM chip, but even Intel's low end is too hot for it.

As for the MacBook Pro line, I honestly think they've been giving the Intel chips horrendous cooling so they can say in a slide that their ARM chips run cooler and quieter.

 

The benchmark comparisons are interesting. But something important to note here is that you're comparing a tiny, power-limited and thermally limited design in a phone to a desktop. It is absolutely true that simply by moving to a computer enclosure, with its better power delivery and cooling capacity, performance will go up; the question is exactly how much. So this further strengthens the argument that for raw horsepower, at least in general situations, Apple Silicon is likely to be better, especially once tailored to computer use with a different mix of cores.

1 hour ago, Kisai said:

When you compare a Honda Civic (MacBook Air) to a bicycle (smartphone), of course the Honda Civic looks better; that doesn't mean the Honda Civic is going to beat the Ferrari (Mac Pro). But when the bicycle gets more mileage and is a quarter of the size, you have to ask what the Honda Civic is doing wrong. Nobody questions the Ferrari. […]



44 minutes ago, Blademaster91 said:

I have doubts about that claim; if Skylake was so bad, Apple would have moved to ARM sooner, or used AMD's Threadripper and Epyc in the pro machines.

And you can't really blame Intel for the hot-running laptops; it's Apple's engineering that has been kneecapping the performance of the Intel CPUs. So it's more like Apple is giving up on x86 because there aren't any improvements when the cooler design is so inadequate that the CPU throttles under a sustained workload.

To jump onto another ISA, a lot of things have to go right simultaneously, not just the silicon. Even for Apple, this is likely to prove a formidable task, though unlike with Windows, this is actually feasible for them to do. 

My eyes see the past…

My camera lens sees the present…


4 minutes ago, justpoet said:

I agree that the first part makes perfect sense. I don't think they've been killing cooling for a future marketing slide, though. They just value thin and light more than thermal performance, like most other current laptop designs do.

The benchmark comparisons are interesting. But something important to note here is that you're comparing a tiny, power-limited and thermally limited design in a phone to a desktop. It is absolutely true that simply by moving to a computer enclosure, with its better power delivery and cooling capacity, performance will go up; the question is exactly how much. So this further strengthens the argument that for raw horsepower, at least in general situations, Apple Silicon is likely to be better, especially once tailored to computer use with a different mix of cores.

 

It's probably a bit of both, to be honest. There are ways they could have made the MacBook Pro run cooler, but is it really worth putting the effort in if it's going to get canned next year anyway?

The issue with trying to guess what performance is going to be like, even from benchmarks, is that it all comes down to optimisation. A program that is well optimised for 'Apple Silicon' (are they really going to keep calling it that?) is going to run much better than a poorly coded x86 program.

I have absolutely no doubt that the likes of Final Cut are going to fly on their own chips; they control the hardware and the software, so they can optimise the hell out of it. The question is whether others are going to optimise that well.

Windows is a much larger market, and it all still runs on x86, so is it really going to be worth developers' while to put in all the effort to optimise for 10% of the market? Why not make use of Rosetta and just let Apple do the hard work? I suppose it all depends on what the overhead of Rosetta 2 is like: if Apple Silicon is 20% faster than Intel in an equivalent machine, and Rosetta has a performance loss of 10%, would you put the effort in? (1.20 × 0.90 ≈ 1.08, so that would still be roughly an 8% net gain under emulation.)

At the end of the day, users will still see a performance increase.


 


12 minutes ago, yolosnail said:

 

Windows is a much larger market, and it all still runs on x86, so is it really going to be worth developers' while to put in all the effort to optimise for 10% of the market? Why not make use of Rosetta and just let Apple do the hard work? I suppose it all depends on what the overhead of Rosetta 2 is like: if Apple Silicon is 20% faster than Intel in an equivalent machine, and Rosetta has a performance loss of 10%, would you put the effort in?

At the end of the day, users will still see a performance increase.

Well, if they are doing 64-bit, they are supposed to use intrinsics, not hand-tuned assembly, and it's the hand-tuned assembly instructions (e.g. AVX) that are not portable in any shape or form.

 

https://docs.microsoft.com/en-us/cpp/intrinsics/compiler-intrinsics?view=vs-2019

 

Quote

If a function is an intrinsic, the code for that function is usually inserted inline, avoiding the overhead of a function call and allowing highly efficient machine instructions to be emitted for that function. An intrinsic is often faster than the equivalent inline assembly, because the optimizer has a built-in knowledge of how many intrinsics behave, so some optimizations can be available that are not available when inline assembly is used. Also, the optimizer can expand the intrinsic differently, align buffers differently, or make other adjustments depending on the context and arguments of the call.

 

The use of intrinsics affects the portability of code, because intrinsics that are available in Visual C++ might not be available if the code is compiled with other compilers and some intrinsics that might be available for some target architectures are not available for all architectures. However, intrinsics are usually more portable than inline assembly. The intrinsics are required on 64-bit architectures where inline assembly is not supported.

 

 

https://software.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/compiler-reference/intrinsics/intrinsics-for-all-intel-architectures/miscellaneous-intrinsics.html

Quote

The following tables list and describe intrinsics that you can use across all Intel® architectures, except where noted. These intrinsics are available for both Intel® and non-Intel microprocessors but they may perform additional optimizations for Intel® microprocessors than they perform for non-Intel microprocessors.

 

 

 

https://clang.llvm.org/docs/LanguageExtensions.html

https://llvm.org/devmtg/2016-11/Slides/Finkel-IntrinsicsMetadataAttributes.pdf

 

This is too jargon-heavy to quote. Suffice it to say, if you are using the intrinsics correctly, you shouldn't be writing any hand-tuned assembler at all. For legacy reasons, 32-bit code might have hand-tuned assembly in it (NT 3.1, Win32s (the 32-bit extensions to Windows 3.1x) and Win9x binaries could all have assembly blobs linked in), but on Windows, 64-bit code is expressly forbidden from having it. So a direct consequence is that 64-bit Windows code is more portable than 32-bit code. Microsoft may have intended this to allow portability to Alpha, PowerPC, MIPS and ARM (NT 4.0 supported Alpha, PowerPC and MIPS at some point), ironically because Intel was going to replace x86 with a RISC processor, and then didn't (they switched development to ARM, and then sold it off).

So Microsoft's foresight here may actually keep them relevant longer, as it enabled an easier path to ARM platforms as they matured.
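
To make the portability point concrete, here is a minimal sketch (mine, not from the post or the quoted docs; the function name and #ifdef layout are illustrative only) of the same loop written with per-ISA intrinsics. The compiler can still optimise around these calls, which is what the quoted documentation means about intrinsics beating inline assembly, but the SSE and NEON variants remain tied to their respective ISAs:

/* Minimal sketch: adding two float arrays with per-ISA intrinsics.
 * The x86 path uses SSE intrinsics from <immintrin.h>, the ARM path uses
 * NEON intrinsics from <arm_neon.h>; neither is portable to the other ISA,
 * but both are more portable than inline assembly. */
#include <stddef.h>

#if defined(__SSE__) || defined(_M_X64)
  #include <immintrin.h>
#elif defined(__ARM_NEON)
  #include <arm_neon.h>
#endif

void add_floats(float *dst, const float *a, const float *b, size_t n) {
    size_t i = 0;
#if defined(__SSE__) || defined(_M_X64)
    for (; i + 4 <= n; i += 4) {            /* 4 floats per 128-bit SSE register */
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
    }
#elif defined(__ARM_NEON)
    for (; i + 4 <= n; i += 4) {            /* 4 floats per 128-bit NEON register */
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(dst + i, vaddq_f32(va, vb));
    }
#endif
    for (; i < n; ++i)                      /* scalar tail, and fallback on other ISAs */
        dst[i] = a[i] + b[i];
}

Written this way the code simply recompiles for x86 or ARM (falling back to the scalar loop elsewhere), whereas a hand-written AVX assembly blob would not; that is the portability gap being described.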

 

 

 


3 hours ago, mr moose said:

Also you can't exactly blame Intel for poor thermal engineering.

No, but you can blame Intel for misleading Apple and other OEMs with assurances of process shrinks and increases in efficiency that never came. 

Laptop: 2019 16" MacBook Pro i7, 512GB, 5300M 4GB, 16GB DDR4 | Phone: iPhone 13 Pro Max 128GB | Wearables: Apple Watch SE | Car: 2007 Ford Taurus SE | CPU: R7 5700X | Mobo: ASRock B450M Pro4 | RAM: 32GB 3200 | GPU: ASRock RX 5700 8GB | Case: Apple PowerMac G5 | OS: Win 11 | Storage: 1TB Crucial P3 NVME SSD, 1TB PNY CS900, & 4TB WD Blue HDD | PSU: Be Quiet! Pure Power 11 600W | Display: LG 27GL83A-B 1440p @ 144Hz, Dell S2719DGF 1440p @144Hz | Cooling: Wraith Prism | Keyboard: G610 Orion Cherry MX Brown | Mouse: G305 | Audio: Audio Technica ATH-M50X & Blue Snowball | Server: 2018 Core i3 Mac mini, 128GB SSD, Intel UHD 630, 16GB DDR4 | Storage: OWC Mercury Elite Pro Quad (6TB WD Blue HDD, 12TB Seagate Barracuda, 1TB Crucial SSD, 2TB Seagate Barracuda HDD)

2 hours ago, DrMacintosh said:

No, but you can blame Intel for misleading Apple and other OEMs with assurances of process shrinks and increases in efficiency that never came. 

What assurances? Apple have full knowledge of all CPU technical requirements and limitations when they design their laptops; you can't blame Intel because they sacrificed cooling for form. Intel's CPUs perform exactly to their TDP spec, no misinformation there.

 

 

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


4 hours ago, DrMacintosh said:

No, but you can blame Intel for misleading Apple and other OEMs with assurances of process shrinks and increases in efficiency that never came. 

I don't think Intel assured Apple or anyone else that their transistor size was going to shrink below 14 nm.

2 hours ago, mr moose said:

What assurances? Apple have full knowledge of all CPU technical requirements and limitations when they design their laptops; you can't blame Intel because they sacrificed cooling for form. Intel's CPUs perform exactly to their TDP spec, no misinformation there.

This reminds me [thread]


To be fair, Apple sort of addressed the throttling issues with the first i9 MBP by releasing an over-the-air update that allowed the fans to spin faster to maintain sustained clock speeds. The only real fix was the 2019 16" MacBook Pro, which made the chassis thicker. It's possible that, now that Apple controls the SoC, they'll bring back the slimmer MacBook Pro design, because in theory ARM chips generate less heat than x86 chips.

Edited by captain_to_fire

There is more that meets the eye
I see the soul that is inside

 

 


Seems a bit of a reach. There have been rumors of Apple switching to ARM practically since the iPad 2 came out.
For me it seems more likely that, since Apple was already doing R&D into ARM for the iPad/iPhone, the idea of macOS also running on ARM has probably been around since the success of the iPad (supporting one architecture would simplify development, and advancements would benefit two categories of products).
They probably experimented with the idea (hence the early rumors), and as the years went on the experimentation went from "Is this even possible?" to "What do we need to make this work?"
By the time of Skylake I wouldn't doubt it was already pretty well into development. Skylake was probably not so much a turning point as a justification to accelerate development and make the switch sooner. Like looking at the clock and noticing you're going to be five minutes late to work, so you speed up just a bit.


44 minutes ago, captain_to_fire said:

To be fair, Apple sort of addressed the throttling issues with the first i9 MBP by releasing an over-the-air update that allowed the fans to spin faster to maintain sustained clock speeds.

The fix for this was not a change of fan speed. The root issue was a firmware bug in the CPUs Intel provided to Apple (and, by implication, not a bug present in the samples Intel provided before release).

The bug was that when the VRM sent a message to the CPU saying it could not provide the requested amount of power, the CPU dropped all cores to minimum frequency rather than reducing the frequency a little.

Apple's fix (since they can't patch the Intel firmware; see why they want their own CPUs) was to add a hook into the kernel so that this message was redirected to the macOS kernel, which handled it by reducing the CPU clock only slightly, enough to stay within the power budget.

See: https://www.kitguru.net/lifestyle/mobile/apple/matthew-wilson/macbook-pro-2018-throttling-fix/
They created a patch for this very quickly, but it required you to turn off SIP. I assume Apple just copied the community implementation.
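
Purely to illustrate the behaviour described above (a conceptual sketch with invented names and numbers, not Apple's or Intel's actual code), the difference between the two responses to a "can't supply the requested power" event looks roughly like this:

#include <stdio.h>

/* Conceptual sketch only; the struct and function names are invented. */
typedef struct {
    int cur_mhz;   /* current core clock */
    int min_mhz;   /* lowest allowed clock */
    int step_mhz;  /* how much one throttle step removes */
} cpu_clock_t;

/* Behaviour described as the bug: any "can't supply requested power" event
 * from the VRM slams every core straight to the minimum frequency. */
static void on_power_limit_buggy(cpu_clock_t *c) {
    c->cur_mhz = c->min_mhz;
}

/* Behaviour described as the fix/workaround: back off one step per event,
 * so clocks settle at the highest level the power delivery can sustain. */
static void on_power_limit_graceful(cpu_clock_t *c) {
    int next = c->cur_mhz - c->step_mhz;
    c->cur_mhz = (next > c->min_mhz) ? next : c->min_mhz;
}

int main(void) {
    cpu_clock_t buggy    = { 2900, 800, 100 };
    cpu_clock_t graceful = { 2900, 800, 100 };

    on_power_limit_buggy(&buggy);        /* one event: 2900 -> 800 MHz */
    on_power_limit_graceful(&graceful);  /* one event: 2900 -> 2800 MHz */

    printf("buggy: %d MHz, graceful: %d MHz\n", buggy.cur_mhz, graceful.cur_mhz);
    return 0;
}

Whether the root cause sat in Intel's firmware or Apple's own is disputed a few posts further down; the sketch only illustrates the throttling symptom being described.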

 

 


7 hours ago, yolosnail said:

Let's be honest, if AMD and their (relatively) minuscule R&D budget can beat Intel, imagine what Apple can do!

Budget doesn't really mean much here; often you can't know whether an architecture is going to be competitive until it's too late to start from scratch. The difference in budget only tells you how many screw-ups you can afford before you go bankrupt.

5 minutes ago, VegetableStu said:

linux people: TOPKEK STONKS Meh, RISC-V or gtfo

FTFY

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


7 hours ago, yolosnail said:

My guess is that the MacBook Air was originally planned to be their first ARM machine, but there was a delay and they had to shoehorn an Intel chip in, which is why the cooling design is so bad. That kind of cooling would be perfectly adequate for a low/medium-power ARM chip, but even Intel's low end is too hot for it.

As for the MacBook Pro line, I honestly think they've been giving the Intel chips horrendous cooling so they can say in a slide that their ARM chips run cooler and quieter.

The weird cooling design in the MacBook Air does make more sense for a low-power ARM chip, but a copper block on the CPU with no heatpipe and a fan that doesn't really cool anything still doesn't seem very efficient, IMO. The 16" model shows Apple can design a MacBook that doesn't throttle so hard; they just didn't with the 15" laptops. I agree it seems like they held back performance so they can say the ARM chip is faster and cooler.

1 hour ago, mr moose said:

What assurances? Apple have full knowledge of all CPU technical requirements and limitations when they design their laptops; you can't blame Intel because they sacrificed cooling for form. Intel's CPUs perform exactly to their TDP spec, no misinformation there.

 

 

 

 

Also, since Intel sells custom SKUs to Apple, I'd assume they work much more closely on TDP specs than with other OEMs, so I still can't blame Intel for the awful thermals.


I doubt that anything Intel has done (or in some instances didn't do) forced Apple's hand in this matter.

What I do think is that Intel's "problems" just made the decision easier for Apple.

 


6 hours ago, hishnash said:

The fix for this was not a change of fan speed. The root issue was a firmware bug in the CPUs Intel provided to Apple (and, by implication, not a bug present in the samples Intel provided before release).

The bug was that when the VRM sent a message to the CPU saying it could not provide the requested amount of power, the CPU dropped all cores to minimum frequency rather than reducing the frequency a little.

It was not a firmware bug in Intel CPUs; it was bad firmware by Apple for their own system agents and controls.

 

Quote

Following extensive performance testing under numerous workloads, we've identified that there is a missing digital key in the firmware that impacts the thermal management system and could drive clock speeds down under heavy thermal loads on the new MacBook Pro.

https://www.macrumors.com/2018/07/24/apple-addresses-macbook-pro-throttling/

 

Intel did not issue a microcode fix because it has nothing to do with their CPU at all. What you posted was merely a workaround; it helps, but it wasn't the actual problem.


8 hours ago, mr moose said:

What assurances? Apple have full knowledge of all CPU technical requirements and limitations when they design their laptops; you can't blame Intel because they sacrificed cooling for form. Intel's CPUs perform exactly to their TDP spec, no misinformation there.

 

 

 

 

IDK, maybe he's talking about the security flaws.


8 hours ago, Sauron said:

Budget doesn't really mean much here; often you can't know whether an architecture is going to be competitive until it's too late to start from scratch. The difference in budget only tells you how many screw-ups you can afford before you go bankrupt.

FTFY

Did someone say RISC-V?

 

https://abopen.com/news/huami-announces-risc-v-based-fitness-wearables-smartwatch/#:~:text=During the event%2C Huami described RISC-V – which,embedded systems and the Internet of Things (IoT).”

Quote

Huami, a subsidiary of Chinese electronics specialist Xiaomi, has announced a new family of smartwatches and fitness wearables, and in doing so is set to become the first company to bring a product based on the open RISC-V instruction set architecture (ISA) to the consumer market.

Announced by Huami at its technology event in Beijing this week, the Huangshan No. 1 system-on-chip (SoC) is based on the SiFive E31 processor core intellectual property (IP), which is itself based on the open RISC-V instruction set architecture (ISA). To launch in Amazfit-branded smartwatch and fitness band wearable devices, the Huangshan No. 1 features the SiFive E31 as its main processor, operating alongside an always-on (AON) module designed to transfer sensor data to internal static RAM without waking the primary processor, plus dedicated accelerators for neural network workloads.

During the event, Huami described RISC-V – which began life just eight years ago at the University of California, Berkeley, and which requires no expensive licensing in order to develop open- or closed-hardware implementations – as “the processor architecture of the era,” stating that it is “very suitable for small embedded systems and the Internet of Things (IoT).”

 

One day I will be able to play Monster Hunter Frontier in French/Italian/English on my PC, it's just a matter of time... 4 5 6 7 8 9 years later: It's finally coming!!!

Phones: iPhone 4S/SE | LG V10 | Lumia 920

Laptops: Macbook Pro 15" (mid-2012) | Compaq Presario V6000

 

<>EVs are bad, they kill the planet and remove freedoms too some/<>


I still think even if Intel hadn't had these issues, Apple would've made the switch anyhow.

 

They hired Jim Keller back in the iPhone 4 days to lay the groundwork for their custom silicon. I would presume that they had already planned for an eventual transition of all their products to their own platform. Intel's 10nm troubles and everything else just served as a good excuse to announce it.

The Workhorse (AMD-powered custom desktop)

CPU: AMD Ryzen 7 3700X | GPU: MSI X Trio GeForce RTX 2070S | RAM: XPG Spectrix D60G 32GB DDR4-3200 | Storage: 512GB XPG SX8200P + 2TB 7200RPM Seagate Barracuda Compute | OS: Microsoft Windows 10 Pro

 

The Portable Workstation (Apple MacBook Pro 16" 2021)

SoC: Apple M1 Max (8+2 core CPU w/ 32-core GPU) | RAM: 32GB unified LPDDR5 | Storage: 1TB PCIe Gen4 SSD | OS: macOS Monterey

 

The Communicator (Apple iPhone 13 Pro)

SoC: Apple A15 Bionic | RAM: 6GB LPDDR4X | Storage: 128GB internal w/ NVMe controller | Display: 6.1" 2532x1170 "Super Retina XDR" OLED with VRR at up to 120Hz | OS: iOS 15.1

