Former Intel engineer said that Skylake was the turning point for the Mac's transition to Apple Silicon

Recommended Posts

Posted · Original Poster

Sources: 9to5 Mac, PC Gamer

 

Quote

François Piednoël, a former Intel engineer, told PC Gamer that Apple became dissatisfied with Intel processors after the introduction of the Skylake architecture in 2015. The report states that Intel's Skylake processors had several problems at the time, and that Apple was the client that filed the highest number of complaints about the architecture.

Quote

"The quality assurance of Skylake was more than a problem," says Piednoël during a casual Xplane chat and stream session. "It was abnormally bad. We were getting way too much citing for little things inside Skylake. Basically our buddies at Apple became the number one filer of problems in the architecture. And that went really, really bad. 

“Basically the bad quality assurance of Skylake is responsible for them to actually go away from the platform. […] Apple must have really hated Skylake,” said Piednoël.

It's no secret that Intel affected the Mac pipeline on multiple occasions, but personally I don't think that's the only reason for the transition to Apple Silicon. Apple has always been a company that values the integration of hardware and software, and that's only possible when you have control over everything that goes inside a device.

It would be interesting to know which bugs in Skylake, other than Spectre and Meltdown, made Apple switch over to its own chips. A quick Google search showed me articles about PCs crashing with Hyper-Threading enabled. I know that Apple is no longer satisfied with Intel's product roadmap, especially with Intel's failure to shrink its transistors beyond 14 nm+++++++, not to mention the current 10th-gen Intel processors are kinda toasty too.

 


 

I think as early as the iPhone 6s there were benchmarks showing its A9 chip to be faster than some Intel chips. ARM chips are what allow thin, light designs while bringing all-day battery life, and Apple promised that with its own silicon there would be less of a trade-off between performance and battery life.

With Apple Silicon, there's no reason for Macs to have a separate T2 chip, because the SoC itself houses the Secure Enclave coprocessor, and with a Neural Engine inside, Macs can finally have Face ID. Many people ask why current Macs don't have Face ID, and the answer is that the T2 chip is just a repurposed A10 Fusion chip from the iPhone 7/7 Plus, which doesn't have a Neural Engine.

Right now, Apple's iPhone chips are the fastest among phones. It will be interesting to see how Intel and AMD respond to this. Who knows? Maybe it's time for Intel and AMD to make their own ARM chips too, so that PC OEMs follow. I for one am looking forward to Apple's future ads on how much heat Intel chips produce, just like back when they made an ad on why the Power Mac G3 was better than PCs with a Pentium II inside.

 


There is more that meets the eye
I see the soul that is inside

 

Making Windows Defender as good or even better than paid options

Link to post
Share on other sites

Why, because there were no more improvements? Did they really think Apple could do better than them? My, how they've given up on themselves. 😔

Link to post
Share on other sites

Let's be honest, if AMD and their (relatively) minuscule R&D budget can beat Intel, imagine what Apple can do!


Laptop:

Spoiler

HP OMEN 15 - Intel Core i7 9750H, 16GB DDR4, 512GB NVMe SSD, Nvidia RTX 2060, 15.6" 1080p 144Hz IPS display

PC:

Spoiler

Vacancy - Looking for applicants, please send CV

Mac:

Spoiler

2009 Mac Pro 8 Core - 2 x Xeon E5520, 16GB DDR3 1333 ECC, 120GB SATA SSD, AMD Radeon 7850. Soon to be upgraded to 2 x 6 Core Xeons

Phones:

Spoiler

LG G6 - Platinum (The best colour of any phone, period)

LG G7 - Moroccan Blue

 

Link to post
Share on other sites

Frenchie is highly opinionated. Take what he says with more salt than a visit to WCCFTech. 

 

The video above gives more background on how the industry works, from Ian Cutress's (AnandTech) personal channel. Basically, another fuss over nothing.


Main system: Asus Maximus VIII Hero, i7-6700k stock, Noctua D14, G.Skill Ripjaws V 3200 2x8GB, Gigabyte GTX 1650, Corsair HX750i, In Win 303 NVIDIA, Samsung SM951 512GB, WD Blue 1TB, HP LP2475W 1200p wide gamut

Desktop Gaming system: Asrock Z370 Pro4, i7-8086k stock, Noctua D15, Corsair Vengeance Pro RGB 3200 4x16GB, Asus Strix 1080Ti, NZXT E850 PSU, Cooler Master MasterBox 5, Optane 900p 280GB, Crucial MX200 1TB, Sandisk 960GB, Acer Predator XB241YU 1440p 144Hz G-sync

TV Gaming system: Asus X299 TUF mark 2, 7920X @ 8c8t, Noctua D15, Corsair Vengeance LPX RGB 3000 8x8GB, Gigabyte RTX 2070, Corsair HX1000i, GameMax Abyss, Samsung 970 Evo 500GB, LG OLED55B9PLA

VR system: Asus Z170I Pro Gaming, i7-6700T stock, Scythe Kozuti, Kingston Hyper-X 2666 2x8GB, Zotac 1070 FE, Corsair CX450M, Silverstone SG13, Samsung PM951 256GB, Crucial BX500 1TB, HTC Vive

Gaming laptop: Asus FX503VD, i5-7300HQ, 2x8GB DDR4, GTX 1050, Sandisk 256GB + 480GB SSD

Link to post
Share on other sites
4 minutes ago, porina said:

Frenchie is highly opinionated. Take what he says with more salt than a visit to WCCFTech. 

Yeah, anyone who has taken even a cursory glance at his Twitter will realize he is the last person you want as an objective source.


[Out-of-date] Want to learn how to make your own custom Windows 10 image?

 

Desktop: AMD R9 3900X | ASUS ROG Strix X570-F | Radeon RX 5700 XT | EVGA GTX 1080 SC | 32GB Trident Z Neo 3600MHz | 1TB 970 EVO | 256GB 840 EVO | 960GB Corsair Force LE | EVGA G2 850W | Phanteks P400S

Laptop: Intel M-5Y10c | Intel HD Graphics | 8GB RAM | 250GB Micron SSD | Asus UX305FA

Server 01: Intel Xeon D 1541 | ASRock Rack D1541D4I-2L2T | 32GB Hynix ECC DDR4 | 4x8TB Western Digital HDDs | 32TB Raw 16TB Usable

Server 02: Intel i7 7700K | Gigabye Z170N Gaming5 | 16GB Trident Z 3200MHz

Link to post
Share on other sites
7 minutes ago, porina said:

Basically, big fuss over nothing.

That's what I was thinking in the first place. I really doubt that the error reports and things related to that are the reason. It is most likely about the money, and the fact that they get full control over their own products, which isn't possible when they go with any other CPU manufacturer.

Link to post
Share on other sites
52 minutes ago, captain_to_fire said:

 

I think as early as the iPhone 6s there are benchmarks shown that its A9 chip to be faster than some Intell chips already.

When you compare a Honda Civic (MacBook Air) to a bicycle (smartphone), of course the Honda Civic looks better; that doesn't mean the Honda Civic is going to beat the Ferrari (Mac Pro). But when the bicycle gets more mileage and is a quarter of the size, you have to ask what the Honda Civic is doing wrong. Nobody questions the Ferrari.

That's the problem with making an ARM-to-x86 comparison: everyone is comparing the "bicycle" to another "bicycle".

 

https://browser.geekbench.com/ios_devices/iphone-6s

Quote
iPhone 6s
Apple A9 @ 1.8 GHz

541

 

iPhone 11 Pro
Apple A13 Bionic @ 2.7 GHz

1327

 

Geekbench 5 scores are calibrated against a baseline score of 1000 (which is the score of an Intel Core i3-8100). Higher scores are better, with double the score indicating double the performance.

 

OK, now watch what happens when I run Geekbench on my desktop. (those numbers are the single-core numbers btw)

 

[screenshot: Geekbench results from my desktop]

So by comparison, the iphone 6s:

[screenshot: iPhone 6s comparison]

and the iPhone 11 Pro:

[screenshot: iPhone 11 Pro comparison]

Hey, look at that: according to Geekbench, the iPhone 11 is faster than a Haswell quad-core, with single-core performance nearly 50% higher.

 

Yet...

[screenshot: latest Intel desktop CPU results]

Intel's latest desktop CPU barely squeaks past it in single thread. The multi-core number is higher because it's a 10-core (20-thread) part being compared to a 6-core A13, which has two 2.66 GHz cores and four 1.82 GHz cores, so the numbers aren't a fair comparison for that reason alone.

 

But you know what else needs to be compared?

[screenshot: AMD Ryzen 9 3950X results]

Hmm, the AMD chip isn't as fast as the A13 on single-core score; how could that be? The 3950X is a 16-core (32-thread) CPU.

 

So what IS comparable right now?

https://browser.geekbench.com/processors/intel-core-i7-1068ng7

[screenshot: Core i7-1068NG7 results]

 

Yes, the chip in the 13" MacBook Pro:

https://browser.geekbench.com/macs/macbook-pro-13-inch-mid-2020-intel-core-i7-1068ng7-2-3-ghz-4-cores

That comparison is only unfair because of the core configuration; if the A13 were all full-speed cores it would easily win, since hyper-threads are no replacement for real CPU cores.

 

Quote

ARM chips is what allows thin, light designs while bringing all day battery life. Apple promised that with their own Silicon, there would be lesser trade off between performance and battery life. With the Apple Silicon, there's no reason for Macs to have a separate T2 chip because...

Stop stop, you're killing me.

 

While it makes sense to stuff as much as possible into an SoC, there is a reason why the T2 and the like are not built into the CPU: they have to have their own security keys, and you can't write keys into the CPU at fab time. That's why things like the Nintendo Switch were easily jailbroken; the keys couldn't be changed once shipped. It's a silly thing, but I'm sure Apple doesn't want to make it easier for hacks. Besides, there was nothing stopping Apple from putting Face ID on all their Macs except the unwillingness of PC monitor vendors to put cameras in their monitors as part of the built-in USB hub. So the Mac Pro and Mac mini couldn't have it, while the iMac/MacBook/MacBook Pro had no reason not to, and as far as I'm aware, have always had the capability.

Link to post
Share on other sites

I don't believe it. Companies like Apple plan their products as far in advance as they can; they would have been looking at their own ARM silicon, alongside moves to AMD Ryzen, years before Skylake became a measurable performance problem. The fact that we have rumors going back several years for both is proof of that.

 

Also, you can't exactly blame Intel for poor thermal engineering; the CPUs can perform better when they aren't hamstrung with shit cooling.

 

 


QuicK and DirtY. Read the CoC it's like a guide on how not to be moron.  Also I don't have an issue with the VS series.

Sometimes I miss contractions like n't on the end of words like wouldn't, couldn't and shouldn't.    Please don't be a dick,  make allowances when reading my posts.

Link to post
Share on other sites

I have doubts about that claim; if Skylake was so bad, Apple would've moved to ARM sooner, and used AMD's Threadripper and Epyc in the pro machines.

And you can't really blame Intel for the hot-running laptops; it's Apple's engineering that has been kneecapping the performance of the Intel CPUs. So it's more like Apple is giving up on x86 because there aren't any improvements when the cooler design is so inadequate that the CPU throttles under a sustained workload.

Link to post
Share on other sites
26 minutes ago, Blademaster91 said:

I have doubts with that claim, and if Skylake was so bad Apple would've been moving to ARM sooner, and used AMD's Threadripper and Epyc in the pro machines.

And you can't really blame Intel for the hot running laptops, it's Apple's engineering that has been kneecapping the performance of the Intel CPU's. So its more like Apple is giving up on x86 because there aren't any improvements when the cooler design is so inadequate the CPU throttles under a sustained workload.

My guess is that the MacBook Air was originally planned to be their first ARM machine, but there was a delay and they had to shoehorn an Intel chip in, which is why the cooling design is so bad. That kind of cooling would be perfectly adequate for a low- to medium-power ARM chip, but even Intel's low end is too hot for it.

 

As for the MacBook Pro line, I honestly think they've been giving the Intel chips horrendous cooling so they can say in a slide that their ARM chips run cooler and quieter



Link to post
Share on other sites

I agree with the first part making perfect sense.  I don't think they've been killing cooling for a future marketing slide though.  They just value thin and light more than thermal performance, like most other laptop designs currently do as well.

11 minutes ago, yolosnail said:

My guess is that the MacBook Air was originally planned to be their first ARM machine, but there was a delay and they had to shoehorn an Intel chip in which is why the cooling design is so bad. That kind of cooling would be perfectly adequate for a low/medium power ARM chip, but even Intel's low end is too hot for it.

 

As for the MacBook Pro line, I honestly think they've been giving the Intel chips horrendous cooling so they can say in a slide that their ARM chips run cooler and quieter

 

The benchmark comparisons are interesting. But something important to note here is that you're also comparing a tiny, power-limited and thermally limited design in a phone to a desktop. It is absolutely true that simply by moving to a computer enclosure, with its power delivery and cooling capacity, performance will go up; the question is exactly how much. So this further strengthens the argument that for raw horsepower, at least in general situations, Apple Silicon is likely to be better, especially once tailored to computer use with different core configurations.

1 hour ago, Kisai said:

When you compare a Honda Civic to a Bicycle (Smartphones), of course the Honda Civic (MacBook Air) looks better, that doesn't mean the Honda Civic (Macbook Air) is going to beat the Ferrari (Mac Pro.) But when the Bicycle can get more mileage and is 1/4 of the size, you have to ask what the Honda Civic is doing wrong. Nobody questions the Ferrari. 

 

[…]

Link to post
Share on other sites
44 minutes ago, Blademaster91 said:

I have doubts with that claim, and if Skylake was so bad Apple would've been moving to ARM sooner, and used AMD's Threadripper and Epyc in the pro machines.

And you can't really blame Intel for the hot running laptops, it's Apple's engineering that has been kneecapping the performance of the Intel CPU's. So its more like Apple is giving up on x86 because there aren't any improvements when the cooler design is so inadequate the CPU throttles under a sustained workload.

To jump onto another ISA, a lot of things have to go right simultaneously, not just the silicon. Even for Apple, this is likely to prove a formidable task, though unlike with Windows, this is actually feasible for them to do. 


The pursuit of knowledge for the sake of knowledge.

Forever in search of my reason to exist.

Link to post
Share on other sites
4 minutes ago, justpoet said:

I agree with the first part making perfect sense.  I don't think they've been killing cooling for a future marketing slide though.  They just value thin and light more than thermal performance, like most other laptop designs currently do as well.

 

The comparisons are interesting in the benchmarks.  But something important to note here as well is that you're comparing a tiny power limited and thermally limited design in a phone to a desktop as well.  It is absolutely true that simply by using a computer enclosure instead of a phone enclosure, for both power delivery and thermal cooling capacity, that performance will go up.  The question is exactly how much.  So, this further enhances the argument that for raw horsepower, at least in general situations, the Apple Silicon is likely to be better, especially once tailored to computer use with differing core inclusions.

 

It's probably a bit of both, to be honest. There are ways they could have made the MacBook Pro run cooler, but is it really worth putting the effort in if it's going to get canned next year anyway?

 

The issue with trying to guess what performance is going to be like, even looking at benchmarks, is that it all comes down to optimisation. A program that is well optimised for 'Apple Silicon' (are they really going to keep calling it that?) is going to run much better than a poorly coded x86 program.

 

I have absolutely no doubt that the likes of Final Cut are going to fly on their own chips; they control the hardware and the software, so they can optimise the hell out of it. The question is whether others will optimise that well.

 

Windows is a much larger market, and it all still runs on x86, so is it really worth their while putting in all the effort to optimise for 10% of the market? Why not make use of Rosetta and just let Apple do the hard work? I suppose it all depends on what the overhead of Rosetta 2 is like: if Apple Silicon is 20% faster than Intel in an equivalent machine, and Rosetta has a performance loss of 10%, would you put the effort in?

 

At the end of the day, users will still see a performance increase



Link to post
Share on other sites
12 minutes ago, yolosnail said:

 

Windows is a much larger market, and they all still run on X86, is it really going to be worth their while putting in all the effort to optimise for 10% of the market? Why not make use of Rosetta and just let Apple do the hard work. I suppose it all depends what the overhead of Rosetta 2 is like, if Apple Silicon is 20% faster than Intel in an equivalent machine, and Rosetta has a performance loss of 10%, would you put the effort in?

 

At the end of the day, users will still see a performance increase

Well, if they are doing 64-bit, they are supposed to use intrinsics, not hand-tuned assembly, and it's the hand-tuned assembly (e.g. AVX) instructions that are not portable in any shape.

 

https://docs.microsoft.com/en-us/cpp/intrinsics/compiler-intrinsics?view=vs-2019

 

Quote

If a function is an intrinsic, the code for that function is usually inserted inline, avoiding the overhead of a function call and allowing highly efficient machine instructions to be emitted for that function. An intrinsic is often faster than the equivalent inline assembly, because the optimizer has a built-in knowledge of how many intrinsics behave, so some optimizations can be available that are not available when inline assembly is used. Also, the optimizer can expand the intrinsic differently, align buffers differently, or make other adjustments depending on the context and arguments of the call.

 

The use of intrinsics affects the portability of code, because intrinsics that are available in Visual C++ might not be available if the code is compiled with other compilers and some intrinsics that might be available for some target architectures are not available for all architectures. However, intrinsics are usually more portable than inline assembly. The intrinsics are required on 64-bit architectures where inline assembly is not supported.

 

 

https://software.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/compiler-reference/intrinsics/intrinsics-for-all-intel-architectures/miscellaneous-intrinsics.html

Quote

The following tables list and describe intrinsics that you can use across all Intel® architectures, except where noted. These intrinsics are available for both Intel® and non-Intel microprocessors but they may perform additional optimizations for Intel® microprocessors than they perform for non-Intel microprocessors.

 

 

 

https://clang.llvm.org/docs/LanguageExtensions.html

https://llvm.org/devmtg/2016-11/Slides/Finkel-IntrinsicsMetadataAttributes.pdf

 

This is too jargon-heavy to quote. Suffice it to say, if you are using the intrinsics correctly, you shouldn't be writing any hand-tuned assembler at all. For legacy reasons, 32-bit code might have hand-tuned assembly in it (NT 3.1, Win32s (the 32-bit extensions to Windows 3.1x), and Win9x binaries could all have assembly blobs linked in), but 64-bit code is expressly forbidden from having it on Windows. A direct consequence is that 64-bit Windows code is more portable than 32-bit code. Microsoft may have intended this to allow portability to Alpha, PowerPC, MIPS and ARM (NT 4.0 supported Alpha, PowerPC and MIPS at some point), ironically because Intel was going to replace x86 with a RISC processor, and then didn't (they switched development to ARM, and then sold it off).

 

So Microsoft's foresight here may actually keep them relevant longer, as it enabled an easy path to ARM platforms as they matured.
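To make the intrinsics point concrete, here's a minimal sketch of my own (an illustrative example, not from either vendor's docs): the same function written once with x86 SSE intrinsics, which the compiler can inline and optimize, and once as the portable plain-C fallback the same source compiles to on ARM or anywhere else. No inline assembly anywhere.

```c
#include <stddef.h>

#if defined(__SSE__) || defined(_M_X64)
#include <xmmintrin.h>  /* x86 SSE intrinsics: visible to the optimizer */

/* Sum an array four floats at a time with SSE, then mop up the tail. */
float sum_floats(const float *v, size_t n) {
    __m128 acc = _mm_setzero_ps();
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(v + i));
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float s = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; i++)
        s += v[i];
    return s;
}
#else
/* Same contract in plain C: this is what builds on ARM (or you'd swap
   in NEON intrinsics behind the same interface). */
float sum_floats(const float *v, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += v[i];
    return s;
}
#endif
```

Either path compiles from one source tree, which is exactly why intrinsics-based 64-bit code ports to ARM far more easily than inline-assembly blobs.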

 

 

 

Link to post
Share on other sites
3 hours ago, mr moose said:

Also you can't exactly blame Intel for poor thermal engineering.

No, but you can blame Intel for misleading Apple and other OEMs with assurances of process shrinks and increases in efficiency that never came. 


Laptop: 2020 13" MacBook Pro i5, 512GB, G7 Graphics, 16GB LPDDR4X | Phone: iPhone 8 Plus 64GB | Wearables: Apple Watch Sport Series 2 | CPU: R5 2600 | Mobo: ASRock B450M Pro4 | RAM: 16GB 2666 | GPU: ASRock RX 5700 8GB | Case: Apple PowerMac G5 | OS: Win 10 | Storage: 480GB PNY SSD & 2TB WD Green HDD | PSU: Corsair CX600M | Display: Dell 27 Gaming Monitor S2719DGF 1440p @155Hz, Dell UZ2215H 21.5" 1080p, ViewSonic VX2450wm-LED 23.6" 1080p | Cooling: Wraith Prism | Keyboard: G610 Orion Cherry MX Brown | Mouse: G303 | Audio: Audio Technica ATH-M50X & Blue Snowball
Link to post
Share on other sites
2 hours ago, DrMacintosh said:

No, but you can blame Intel for misleading Apple and other OEMs with assurances of process shrinks and increases in efficiency that never came. 

What assurances? Apple has full knowledge of all CPU technical requirements and limitations when they design their laptops; you can't blame Intel because Apple sacrificed cooling for form. Intel's CPUs perform exactly to their TDP spec, so no misinformation there.

 

 

 

 



Link to post
Share on other sites
Posted (edited) · Original Poster
4 hours ago, DrMacintosh said:

No, but you can blame Intel for misleading Apple and other OEMs with assurances of process shrinks and increases in efficiency that never came. 

I don't think Intel assured Apple or anyone that their transistors were gonna shrink below 14 nm.

2 hours ago, mr moose said:

What assurances?  Apple have full knowledge of all CPU technical requirements and limitations when they design their laptops, you can't blame Intel because they sacrificed cooling for form.  Intel's CPU's perform exactly to their TDP spec. no misinformation there.

This reminds me [thread]


To be fair, Apple sort of addressed the throttling issues with the first i9 MBP by releasing an over-the-air update that allowed the fans to spin up more to maintain sustained clock speeds. The only real fix was the 2019 16" MacBook Pro, which made the chassis thicker. It's possible that now that Apple is in control of the SoC, they'll bring back the slimmer MacBook Pro design, because in theory ARM chips generate less heat than x86 chips.

Edited by captain_to_fire


Link to post
Share on other sites

Seems a bit of a reach. There have been rumors of Apple switching to ARM practically since the iPad 2 came out.
To me it seems more likely that, since Apple was already doing R&D into ARM for the iPad/iPhone, the idea of macOS also running on ARM has probably been around since the success of the iPad (supporting one architecture would simplify development, and advancements would benefit two categories of products).
They probably experimented with the idea (hence the early rumors), and as the years went on the experimentation went from "Is this even possible?" to "What do we need to make this work?"
By the time of Skylake, I wouldn't doubt it was already pretty well into development. Skylake was probably not so much a turning point as a justification to accelerate development and make the switch sooner. Like looking at the clock and noticing you're gonna be 5 minutes late to work, so you speed up just a bit.

Link to post
Share on other sites
44 minutes ago, captain_to_fire said:

To be fair, Apple sort of addressed the throttling issues with the first i9 MBP by releasing an over the air update that allowed the fans to spin more to maintain sustained clock speeds.

The fix for this was not a change of fan speed. The root issue was a firmware bug in the CPUs Intel provided to Apple (and, by implication, not a bug present in the samples Intel provided before release).

 

The bug was that when the VRM sent a message to the CPU saying it could not provide the requested amount of power, the CPU dropped all cores to minimum frequency rather than reducing the frequency a little bit.

Apple's fix (since they can't patch the Intel firmware; see why they want their own CPUs) was to add a hook into the kernel so that the CPU redirected this message to the kernel, letting the macOS kernel handle it (and reduce the CPU clock only a little, so that the power draw stayed safe).

See: https://www.kitguru.net/lifestyle/mobile/apple/matthew-wilson/macbook-pro-2018-throttling-fix/
The community created a patch for this very quickly, but it required you to turn off SIP. I assume Apple just copied the community implementation.
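As an illustration of the behavior described above (this is my own toy sketch of the reported symptom, not Apple's or Intel's actual code): on a "power budget exceeded" message from the VRM, the buggy path slammed every core to minimum frequency, while the fixed path steps the clock down gradually until demand fits the budget.

```c
enum { FREQ_MIN_MHZ = 800, FREQ_STEP_MHZ = 100 };

/* Reported buggy response: any power-limit message drops straight
   to minimum frequency, ignoring the current clock entirely. */
int throttle_buggy(int cur_mhz) {
    (void)cur_mhz;
    return FREQ_MIN_MHZ;
}

/* Patched behavior: back off one step per message, never below the
   minimum, so sustained clocks settle just under the power budget. */
int throttle_fixed(int cur_mhz) {
    int next = cur_mhz - FREQ_STEP_MHZ;
    return next > FREQ_MIN_MHZ ? next : FREQ_MIN_MHZ;
}
```

The difference between the two responses is exactly why the bug showed up as clock speeds collapsing under load instead of a mild, sustained throttle.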

 

 

Link to post
Share on other sites
7 hours ago, yolosnail said:

Let's be honest, if AMD and their (relatively) minuscule R&D budget can beat Intel, imagine what Apple can do!

Budget doesn't really mean much here, often you can't know if an architecture is going to be competitive until it's too late to start from scratch. The difference in budget only tells you how many screw ups you can afford before you go bankrupt.

5 minutes ago, VegetableStu said:

linux people: TOPKEK STONKS Meh, RISC-V or gtfo

FTFY


...is there a question here? 🤔

sudo chmod -R 000 /*

What is scaling and how does it work? Asus PB287Q unboxing! Console alternatives :D Watch Netflix with Kodi on Arch Linux Sharing folders over the internet using SSH Beginner's Guide To LTT (by iamdarkyoshi)

Sauron's™ Product Scores:

Spoiler

Just a list of my personal scores for some products, in no particular order, with brief comments. I just got the idea to do them so they aren't many for now :)

Don't take these as complete reviews or final truths - they are just my personal impressions on products I may or may not have used, summed up in a couple of sentences and a rough score. All scores take into account the unit's price and time of release, heavily so, therefore don't expect absolute performance to be reflected here.

 

-Lenovo Thinkpad X220 - [8/10]

Spoiler

A durable and reliable machine that is relatively lightweight, has all the hardware it needs to never feel sluggish and has a great IPS matte screen. Downsides are mostly due to its age, most notably the screen resolution of 1366x768 and usb 2.0 ports.

 

-Apple Macbook (2015) - [Garbage -/10]

Spoiler

From my perspective, this product has no redeeming factors given its price and the competition. It is underpowered, overpriced, impractical due to its single port and is made redundant even by Apple's own iPad pro line.

 

-OnePlus X - [7/10]

Spoiler

A good phone for the price. It does everything I (and most people) need without being sluggish and has no particularly bad flaws. The lack of recent software updates and relatively barebones feature kit (most notably the lack of 5GHz wifi, biometric sensors and backlight for the capacitive buttons) prevent it from being exceptional.

 

-Microsoft Surface Book 2 - [Garbage - -/10]

Spoiler

Overpriced and rushed, offers nothing notable compared to the competition, doesn't come with an adequate charger despite the premium price. Worse than the Macbook for not even offering the small plus sides of having macOS. Buy a Razer Blade if you want high performance in a (relatively) light package.

 

-Intel Core i7 2600/k - [9/10]

Spoiler

Quite possibly Intel's best product launch ever. It had all the bleeding edge features of the time, it came with a very significant performance improvement over its predecessor and it had a soldered heatspreader, allowing for efficient cooling and great overclocking. Even the "locked" version could be overclocked through the multiplier within (quite reasonable) limits.

 

-Apple iPad Pro - [5/10]

Spoiler

A pretty good product, sunk by its price (plus the extra cost of the physical keyboard and the pencil). Buy it if you don't mind the Apple tax and are looking for a very light office machine with an excellent digitizer. Particularly good for rich students. Bad for cheap tinkerers like myself.

 

 

Link to post
Share on other sites
7 hours ago, yolosnail said:

My guess is that the MacBook Air was originally planned to be their first ARM machine, but there was a delay and they had to shoehorn an Intel chip in which is why the cooling design is so bad. That kind of cooling would be perfectly adequate for a low/medium power ARM chip, but even Intel's low end is too hot for it.

 

As for the MacBook Pro line, I honestly think they've been giving the Intel chips horrendous cooling so they can say in a slide that their ARM chips run cooler and quieter

The weird cooling design in the MacBook Air does make more sense for a low-power ARM chip, but a copper block on the CPU with no heatpipe, and a fan that doesn't really cool anything, still doesn't seem very efficient imo. The 16" model shows Apple can design a MacBook that doesn't throttle so hard; they just didn't with the 15" laptops. I agree it seems like they held back performance so they can say the ARM chip is faster and cooler.

1 hour ago, mr moose said:

What assurances?  Apple have full knowledge of all CPU technical requirements and limitations when they design their laptops, you can't blame Intel because they sacrificed cooling for form.  Intel's CPU's perform exactly to their TDP spec. no misinformation there.

 

 

 

 

Also, since Intel sells custom SKUs to Apple, I'd assume they work much more closely on TDP specs than with other OEMs, so I still can't blame Intel for the awful thermals.

Link to post
Share on other sites

I doubt that anything Intel has done (or in some instances didn't do) forced Apple's hand in this matter.

What I do think is that Intel's "problems" just made the decision easier for Apple.

 

Link to post
Share on other sites
6 hours ago, hishnash said:

The fix for this was not a change of fan speed. The root issue was due to a firmware bug in the cpus intel provided to apple (and by implication not a bug present in the samples intel provided before release). 

 

The bug was that when the VRM send a message to the cpu saying it could not provide the requested amount of power, the cpu dropped all cores to min frequency rather than reducing the frequency a little bit. 

It was not a firmware bug in Intel CPUs; it was bad firmware by Apple for their own system agent and controls.

 

Quote

Following extensive performance testing under numerous workloads, we've identified that there is a missing digital key in the firmware that impacts the thermal management system and could drive clock speeds down under heavy thermal loads on the new MacBook Pro.

https://www.macrumors.com/2018/07/24/apple-addresses-macbook-pro-throttling/

 

Intel did not issue a microcode fix because it had nothing to do with their CPU at all. What you posted was merely a workaround that helps but wasn't actually the problem.

Link to post
Share on other sites
8 hours ago, mr moose said:

What assurances?  Apple have full knowledge of all CPU technical requirements and limitations when they design their laptops, you can't blame Intel because they sacrificed cooling for form.  Intel's CPU's perform exactly to their TDP spec. no misinformation there.

 

 

 

 

idk maybe he is talking about the security flaws

Link to post
Share on other sites
