
Intel caught fudging benchmarks in M1 vs Core i7 11th Gen comparison

Jet_ski

For once, I don't think Intel's in the wrong. I think their arguments in the slidedeck make sense.

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022)

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),

Link to comment
Share on other sites


53 minutes ago, AluminiumTech said:

For once, I don't think Intel's in the wrong. I think their arguments in the slidedeck make sense.

Really? PCWorld and others note that Intel is cherry-picking tests and, in some cases, changing the test machines in ways that cast doubts on its claims. It's right about gaming, but you don't need Intel to tell you that a MacBook isn't a premier gaming machine.

 

To borrow from Shakespeare: methinks Intel doth protest too much. If a company releases defensive benchmarks like this, it's because the competition poses a serious threat. Intel wouldn't do this if the M1 was a mediocre chip, and there are areas where Apple absolutely kicks Intel's ass.

 

And like many have pointed out, this is just Apple's first in-house silicon. Souped-up chips for higher-end machines, not to mention second-generation hardware, will likely push things further still. Remember how Apple's iPhone chips went from okay substitute to performance leader in a few years? That's what Intel's afraid of — that there may come a point in a few years where Apple has an unassailable lead and people say "if you want the fastest laptop, you have to buy a Mac."

Link to comment
Share on other sites


5 hours ago, Jet_ski said:

*snip*

There's a lot of stuff in here to talk about. I will preface by saying that overall I agree that Intel is being misleading; however, I do feel that a lot of the criticism Intel are receiving from MacRumors here is a bit... unfair. IMO PCWorld is far more fair in their comparisons.

 

Content Creation Performance


For example, MacRumors is quick to point out that, yes, Topaz Labs' apps are designed to take advantage of Intel's hardware acceleration suite. But they neglect to mention, as the original PCWorld article pointed out:

Quote

This is the same magic that's made Apple's phones shine for so long.

Which is completely true. This is exactly how Apple is squeezing so much performance out of their processors (M1 included) in the first place - that tight integration between software and hardware goes a long way to boosting Apple's position here. Intel's chips - running plain old Windows - don't get to benefit from that level of integration, which does hurt their position when making direct comparisons. As those who have been playing around with the M1 have found out, performance is lost when running other operating systems on M1 Macs - far beyond the level expected from being run through a hypervisor. It's a bit disingenuous to criticise Intel for choosing one benchmark where they have a hardware-level advantage, but to then neglect to mention when Apple has that same advantage for pretty much all of the other tests conducted. If you're going to criticise Intel, you have to at least acknowledge Apple's position as well.

 

To which some may say "but who cares because that's how most people will use an M1 Mac - running Apple software." - which is a correct and valid point. But, if you're ignoring this source of bias when comparing hardware, then you're telling no more complete a story than Intel - you're picking the use cases that fit your perspective. If you want to truly compare the hardware - as Intel seem to be trying to do - you should be running identical sets of software. That's the whole point of a benchmark after all. By swapping out Windows for MacOS in your test suite for the M1 system you are handing the M1 an optimisation advantage in many of the tests performed, which needs to be at the very least mentioned.

 

 

Productivity Performance

 

Another criticism from both OP and "Apple Columnist Jason Snell" quoted by MacRumors is the lack of information surrounding the exact tests being conducted:

Quote

And what is this pdf they exported?!

However, anyone who has paid attention to any Intel presentation in the last few years will know that Intel provides all of this information in a separate document that accompanies the slides. For public presentations at events like Computex or Hot Chips, this document - alongside the slides - is then published on Intel's website. For presentations like this that are given directly to a media outlet, the outlet itself will usually discuss its contents if they deem them relevant, but they aren't normally open for the public to view - but neither is the full presentation.

 

This document exists for this test suite. We can see it mentioned at the bottom of each of the slides:

Quote

See backup for workloads and configurations.

and this will have been provided to PCWorld for their article. So when I see PCWorld saying that the productivity tests look like legitimate, real-world use cases rather than questioning whether these are meaningless metrics made up for marketing purposes, I'm inclined to believe them over an outlet that doesn't have access to this information like MacRumors, who is simply writing a commentary piece about PCWorld's work. (Also I respect Gordon and trust his opinions on this kind of stuff. If he's saying it looks legit, it almost certainly is.)

 

 

Battery Life

 

I agree that Intel shouldn't have swapped CPUs for their battery life test. How much difference does it make? I don't know. In a battery life test like this, neither CPU will be running at full load, and so given they're basically just the same silicon but with a different clock speed, it's highly likely that their battery life will be identical (or within margin of error). That the 1185G7 can use more power by clocking up higher is pretty irrelevant if you're measuring battery life through a load where that will never happen. Both chips will likely be pretty much idling while their identically-specced iGPUs take up the Netflix decoding work.
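To put rough numbers on that reasoning, here's a back-of-the-envelope sketch in Python (every wattage and the battery capacity below are illustrative assumptions, not measured figures):

# Illustrative sketch: why swapping a 1165G7 for a 1185G7 barely moves a
# video-playback battery test. All figures here are assumed placeholders.

BATTERY_WH = 53.0  # assumed battery capacity in watt-hours

def runtime_hours(display_w, soc_playback_w, rest_of_system_w):
    # Estimated runtime = capacity / total average draw.
    return BATTERY_WH / (display_w + soc_playback_w + rest_of_system_w)

# The identically-specced iGPU media engine handles the decode, so the SoC draw
# is assumed equal for both SKUs; only a sustained CPU load would separate them.
for sku in ("i7-1165G7", "i7-1185G7"):
    hours = runtime_hours(display_w=2.0, soc_playback_w=1.5, rest_of_system_w=1.5)
    print(f"{sku}: ~{hours:.1f} h")

Because neither SKU leaves its low-power state in this workload, the estimate comes out identical for both - which is the point: the SKU swap is sloppy, but it shouldn't change the result much.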

 

Intel's test was - as pointed out by PCWorld - actually an arguably better measurement of battery life than Apple's. As PCWorld said:

Quote

Intel did add that Apple’s “8 clicks up” is about 125 nits of brightness on the MacBook Air which is pretty dim.

(These numbers line up with those listed on the Apple website - 50% brightness is indeed at "8 clicks up" given that Macs have 16 levels of screen brightness by default.)

Quote

Our take: We agree 125 nits is a pretty silly brightness to use for video testing. After all, who wants to “watch” a movie on a laptop but then dims the screen so much you can’t actually see most of it?

so Intel's numbers not lining up with Apple's set makes perfect sense. I agree with PCWorld that 125 nits isn't an appropriate test and that Intel testing at a more reasonable 250 nits is not deceptive. Using different browsers on each system also makes sense - Chrome's battery life sucks on MacOS vs Safari, and you can't get Safari on Windows.

 

The most deceptive part of this test is merely the switch to the MacBook Air, rather than sticking to the MacBook Pro. This gave Intel a battery capacity advantage of ~12% over the MacBook Air (~56 watt-hours vs ~50), instead of a ~4% capacity disadvantage vs the 58 watt-hour MacBook Pro. If we correct for this, the MacBook Pro would likely come in at ~11.5-12 hours of battery life in a similar test.
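Worked through with the capacities above (a minimal sketch in Python; the ~10-hour Air figure is an assumed stand-in for Intel's own result, and runtime is assumed to scale linearly with battery capacity):

# Back-of-the-envelope correction for Intel's MacBook Air -> MacBook Pro swap.
# Capacities are the approximate figures above; the Air runtime is an assumption.

AIR_WH, PRO_WH, INTEL_WH = 50.0, 58.0, 56.0   # approximate capacities from above
AIR_RUNTIME_H = 10.0                           # assumed Air result in Intel's test

print(f"Capacity edge vs Air: {INTEL_WH / AIR_WH - 1:+.1%}")             # +12.0%
print(f"Capacity edge vs Pro: {INTEL_WH / PRO_WH - 1:+.1%}")             # -3.4%, i.e. roughly -4%
print(f"Scaled Pro estimate:  {AIR_RUNTIME_H * PRO_WH / AIR_WH:.1f} h")  # 11.6 h

which lands in the ~11.5-12 hour range mentioned above.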

 

This doesn't sound great for Intel on paper, but honestly I don't think it's a bad performance at all. MacOS is far better than Windows at power management - a fact which has been known for years - due to its driver optimisations, and the ARM architecture is lauded for its power efficiency. That Intel is able to get within 10% of what the MacBook Pro can (in theory) do is impressive. I'm excited for what Alder Lake's big.LITTLE-style architecture can do here.

CPU: i7 4790k, RAM: 16GB DDR3, GPU: GTX 1060 6GB

Link to comment
Share on other sites


2 hours ago, Vishera said:

As a matter of fact a process shrink requires the same number of pins or fewer, but not more of them.

Intel are probably using the headroom that the process shrink gives them to increase performance, so it may require more pins.

There are two types of pin: data and power. If a shrink allows a reduction in power consumption then, assuming each pin is equal, you might get by with fewer in that case. Data, however, depends on what external connectivity you expect.
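As a rough illustration of how the power side scales (a minimal sketch; the per-pin current rating and core voltage are illustrative assumptions, not real packaging specs):

import math

# Power pin count roughly tracks the current the package must deliver, so a
# shrink that cuts power draw can cut power pins; I/O pin count only changes
# if the external connectivity does. Figures below are illustrative only.
def power_pins_needed(package_power_w, vcore=1.0, amps_per_pin=0.5):
    current_a = package_power_w / vcore          # I = P / V
    return math.ceil(current_a / amps_per_pin)   # ignores ground pins, margin, extra rails

print(power_pins_needed(28))    # mobile-class chip -> 56 pins on these assumptions
print(power_pins_needed(125))   # desktop-class chip -> 250 pins

So a shrink that lowers power draw can shave power pins, while adding PCIe lanes or memory channels pulls the data pin count the other way.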

 

 

 

On the main point of this thread, it just seems like more petty Intel bashing. It's marketing. Apple do it. AMD do it. Nvidia do it. I'd rather they all didn't, but we don't live in an ideal world.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible

Link to comment
Share on other sites


3 hours ago, TetraSky said:

It is kind of sad that they can't even win against the M1 in some of their benchmarks... Even more so in Gaming.

 

Also, this is typical of Intel, it's not the first time they've been caught screwing around with benchmarks.
PDF export... AH!

What would be the Reverse Uno of the decade is for Apple to suddenly launch a gaming focused SoC (complete with Ray Tracing and fast x86 emulation), and capitalize on the crap availability of gaming hardware to seize a chunk of the market. 
 

The amount of dumbfounding this would create is likely to be palpable...

My eyes see the past…

My camera lens sees the present…

Link to comment
Share on other sites


2 hours ago, tim0901 said:

That Intel is able to get within 10% of what the MacBook Pro can (in theory) do is impressive. I'm excited for what Alder Lake's big.LITTLE-style architecture can do here.

For me, it boils down to this: Tiger Lake is actually a much better product than a lot of people realise.

 

No, it does not deliver the same earth-shattering multicore performance AMD's Ryzen 7 4800U would provide (though to be fair, it's not exactly an easy chip to find), nor is it as headline-grabbing and potentially revolutionary as Apple's M1. But they did deliver a very solid product that's much better than Ice Lake in pretty much every way and packs in interesting features like its incredible integrated graphics and host of accelerators.

 

Aside from the terrible naming scheme adopted from Ice Lake, Intel's marketing of the chip, which basically consists of cheap potshots at AMD and Apple, doesn't do it any favors. It just reeks of desperation that only serves to paint the company even further as a technological underdog and buries a lot of what makes TGL good.

The Workhorse (AMD-powered custom desktop)

CPU: AMD Ryzen 7 3700X | GPU: MSI X Trio GeForce RTX 2070S | RAM: XPG Spectrix D60G 32GB DDR4-3200 | Storage: 512GB XPG SX8200P + 2TB 7200RPM Seagate Barracuda Compute | OS: Microsoft Windows 10 Pro

 

The Portable Workstation (Apple MacBook Pro 16" 2021)

SoC: Apple M1 Max (8+2 core CPU w/ 32-core GPU) | RAM: 32GB unified LPDDR5 | Storage: 1TB PCIe Gen4 SSD | OS: macOS Monterey

 

The Communicator (Apple iPhone 13 Pro)

SoC: Apple A15 Bionic | RAM: 6GB LPDDR4X | Storage: 128GB internal w/ NVMe controller | Display: 6.1" 2532x1170 "Super Retina XDR" OLED with VRR at up to 120Hz | OS: iOS 15.1

Link to comment
Share on other sites


31 minutes ago, D13H4RD said:

For me, it boils down to this: Tiger Lake is actually a much better product than a lot of people realise.

I suspect a large part of the audience here only cares about what you can get on desktop. Laptop CPUs just don't grab the same attention.

 

Quote

No, it does not deliver the same earth-shattering multicore performance AMD's Ryzen 7 4800U would provide

Intel recently announced 8 core Tiger Lake models coming "soon", so that might be more interesting against Zen 3 laptop CPUs which I understand are a little closer to market. 

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible

Link to comment
Share on other sites


3 hours ago, Commodus said:

To borrow from Shakespeare: methinks Intel doth protest too much. If a company releases defensive benchmarks like this, it's because the competition poses a serious threat. Intel wouldn't do this if the M1 was a mediocre chip, and there are areas where Apple absolutely kicks Intel's ass.

They see Apple as a huge threat. Literally, in his first meeting, the new CEO told others that they have to come up with something better than anything "the lifestyle company in Cupertino can produce"

3 hours ago, tim0901 said:

Content Creation Performance


For example, MacRumors is quick to point out that, yes, Topaz Labs' apps are designed to take advantage of Intel's hardware acceleration suite. But they neglect to mention, as the original PCWorld article pointed out:

Which is completely true. This is exactly how Apple is squeezing so much performance out of their processors (M1 included) in the first place - that tight integration between software and hardware goes a long way to boosting Apple's position here. Intel's chips - running plain old Windows - don't get to benefit from that level of integration, which does hurt their position when making direct comparisons. As those who have been playing around with the M1 have found out, performance is lost when running other operating systems on M1 Macs - far beyond the level expected from being run through a hypervisor. It's a bit disingenuous to criticise Intel for choosing one benchmark where they have a hardware-level advantage, but to then neglect to mention when Apple has that same advantage for pretty much all of the other tests conducted. If you're going to criticise Intel, you have to at least acknowledge Apple's position as well.

Except no one really ever compared something like Final Cut Pro and Adobe Premiere, where the former plays into Apple's hardware/software integration advantage while the latter doesn't.

 

What we've always done with the M1 is compare apples to apples: benchmark scores between CPUs that don't really have anything to do with their operating system or any integration. It's just a basic "how much time does this chip take to do X amount of work".

 

Apple's CPUs aren't fast because of some magical code optimization. They're fast because they're fast. Non-native OSes don't have proper drivers and run through translation and emulation, and yet it's much faster than expected (I don't know where you got the claim that it isn't)

3 hours ago, tim0901 said:

To which some may say "but who cares because that's how most people will use an M1 Mac - running Apple software." - which is a correct and valid point. But, if you're ignoring this source of bias when comparing hardware, then you're telling no more complete a story than Intel - you're picking the use cases that fit your perspective. If you want to truly compare the hardware - as Intel seem to be trying to do - you should be running identical sets of software. That's the whole point of a benchmark after all. By swapping out Windows for MacOS in your test suite for the M1 system you are handing the M1 an optimisation advantage in many of the tests performed, which needs to be at the very least mentioned.

There's no optimisation advantage. Give me a break. An optimization advantage is when you compare things like Final Cut X and Premiere. One is clearly written better than the other. In Intel's case they cherry-picked software that distinctly takes advantage of Intel's acceleration.

3 hours ago, tim0901 said:

This document exists for this test suite. We can see it mentioned at the bottom of each of the slides:

and this will have been provided to PCWorld for their article. So when I see PCWorld saying that the productivity tests look like legitimate, real-world use cases rather than questioning whether these are meaningless metrics made up for marketing purposes, I'm inclined to believe them over an outlet that doesn't have access to this information like MacRumors, who is simply writing a commentary piece about PCWorld's work. (Also I respect Gordon and trust his opinions on this kind of stuff. If he's saying it looks legit, it almost certainly is.)

Remember that you are comparing a low-end, low-spec MacBook Air/Pro with no fan/a minimal fan against a fully kitted out i7, with no data on acoustics or heat output. That's enough for it to be embarrassing. They should've ideally compared it with their i3s and i5s

3 hours ago, tim0901 said:

 

Battery Life

 

I agree that Intel shouldn't have swapped CPUs for their battery life test. How much difference does it make? I don't know. In a battery life test like this, neither CPU will be running at full load, and so given they're basically just the same silicon but with a different clock speed, it's highly likely that their battery life will be identical (or within margin of error). That the 1185G7 can use more power by clocking up higher is pretty irrelevant if you're measuring battery life through a load where that will never happen. Both chips will likely be pretty much idling while their identically-specced iGPUs take up the Netflix decoding work.

The MacBook Pro has a bigger battery and it is indeed rated for more hours, which has definitely lined up with real-world tests. So they in essence took Apple's worst product and compared it with their best under the guise that the two are equal.

 

This tactic is enough of a reason to throw this whole chart out of the window completely and publicly shame Intel for it.

3 hours ago, tim0901 said:

Intel's test was - as pointed out by PCWorld - actually an arguably better measurement of battery life than Apple's. As PCWorld said:

(These numbers line up with those listed on the Apple website - 50% brightness is indeed at "8 clicks up" given that Macs have 16 levels of screen brightness by default.)

so Intel's numbers not lining up with Apple's set makes perfect sense. I agree with PCWorld that 125 nits isn't an appropriate test and that Intel testing at a more reasonable 250 nits is not deceptive. Using different browsers on each system also makes sense - Chrome's battery life sucks on MacOS vs Safari, and you can't get Safari on Windows.

 

The most deceptive part of this test is merely the switch to the MacBook Air, rather than sticking to the MacBook Pro. This gave Intel a battery capacity advantage of ~12% over the MacBook Air (~56 watt-hours vs ~50), instead of a ~4% capacity disadvantage vs the 58 watt-hour MacBook Pro. If we correct for this, the MacBook Pro would likely come in at ~11.5-12 hours of battery life in a similar test.

 

This doesn't sound great for Intel on paper, but honestly I don't think it's a bad performance at all. MacOS is far better than Windows at power management - a fact which has been known for years - due to its driver optimisations, and the ARM architecture is lauded for its power efficiency. That Intel is able to get within 10% of what the MacBook Pro can (in theory) do is impressive. I'm excited for what Alder Lake's big.LITTLE-style architecture can do here.

I see you mentioned the MBP, but still reduced the whole thing to OS optimizations and whatnot. No, you can look at Intel MacBooks, and they have the same battery life as equivalent Windows ultrabooks. Any gains are purely due to it being an ARM device. And I genuinely don't think Intel was honest about this test, as the M1 devices have proven to be far better in real-world battery life tests than whatever Intel laptops have so far shown (except of course on Intel's slides)

Link to comment
Share on other sites


9 hours ago, D13H4RD said:

The really funny thing is....Intel is really just acting like the spoiled, salty kid by pushing obviously skewed "benchmarks" like these. 

 

The Tiger Lake CPUs are an objectively good product for ultraportables and it plus the 10nm SuperFIN process they're on is definitely what they needed after the disappointment that was Ice Lake. However, there's no denying that Apple's new M1 chip was quite the disruptor. 

 

Mentioned it before and I think @Taf the Ghost had a similar opinion but it's really a shame that many of Tiger Lake's strengths are basically hidden away under what's basically the marketing equivalent of a child throwing a tantrum. TGL-U is a better all-around processor than many realize, but shit like this doesn't help.

Yup, Intel have a good product and their marketing is basically hiding it by being unreasonable.

Link to comment
Share on other sites


8 hours ago, tim0901 said:

Chrome's battery life sucks on MacOS vs Safari, and you can't get Safari on Windows.

Did Apple discontinue Safari for Windows? There used to be a Windows version; it wasn't any good, but it did exist at one point, and way back you actually had to install it to work around a bug in iTunes for Windows and proxy servers. So dumb.

Link to comment
Share on other sites


7 hours ago, gabrielcarvfer said:

And with a worse node. 

Personally I'd like to learn a lot more about Intel's 10nm nodes before saying whether they are worse than TSMC 5nm, or at the very least in what ways. Because I bet there are areas where Intel's 10nm is actually better, as was, and still is, true of Intel's 14nm vs TSMC 7nm.

Link to comment
Share on other sites


6 hours ago, Zodiark1593 said:

What would be the Reverse Uno of the decade is for Apple to suddenly launch a gaming focused SoC (complete with Ray Tracing and fast x86 emulation), and capitalize on the crap availability of gaming hardware to seize a chunk of the market. 
 

The amount of dumbfounding this would create is likely to be palpable...

If the Apple VR headset is real then I actually think they are jumping into this area, just not that quickly. M1 and future SoCs from Apple actually have enough performance, at low enough power, to not need a PC while also being able to play top-tier titles. Apple is currently the most likely to make wireless high-performance VR a reality with the optimization and polish required for people to actually want to use it.

Link to comment
Share on other sites


12 minutes ago, leadeater said:

Did Apple discontinue Safari for Windows?

The last version of Safari for PC was version 5. The problem with it was that it didn't have the same features as the Mac equivalent:

  • The PC version was 32-bit, while the Mac version was 64-bit
  • The Mac version ran plugins like Flash outside of the browser process (as of version 4), hence fewer crashes; the PC version ran plugins in the same process as the browser, so should Flash crash, it would take the entire browser down with it
  • Apple Software Update for Windows at the time was clunky and required UAC prompts
  • I remember that on a 4GB Windows 7 PC back in 2010, Safari was a worse memory hog than Chrome, not to mention it frequently froze
  • While it used the Google Safe Browsing API even back then, the PC version didn't always sync with Google's servers to fetch signatures
  • Even the last version, Safari 5 for Windows, wasn't sandboxed AFAIK, unlike the Mac version

There is more that meets the eye
I see the soul that is inside

 

 

Link to comment
Share on other sites


4 hours ago, RedRound2 said:

There's no optimisation advantage. Give me a break. An optimization advantage is when you compare things like Final Cut X and Premiere. One is clearly written better than the other. In Intel's case they cherry-picked software that distinctly takes advantage of Intel's acceleration.

So does Final Cut for Intel CPUs; that's what makes it so good compared to software intended to run on a generalized, non-standardized set of hardware. So focusing in on a single set of hardware acceleration features that isn't actually found on higher-end workstations is pointless.

 

4 hours ago, RedRound2 said:

Apple's CPUs aren't fast because of some magical code optimization. They're fast because they're fast. Non-native OSes don't have proper drivers and run through translation and emulation, and yet it's much faster than expected (I don't know where you got the claim that it isn't)

It's not translation or emulation. If you run Linux natively or by way of a hypervisor, then what is being said is that the performance is significantly worse, and that has basically nothing to do with drivers or the like at all, especially when running from within a hypervisor (I don't know, I haven't personally checked, but what was being said isn't what you are thinking). The performance difference will 100% be software-optimization related: M1 is a new and very different hardware architecture, and it is extremely unlikely any Linux software packages have been optimized for it in any way. This situation can of course be remedied by those that choose to do said optimization.

 

Also, saying MacOS and Apple software are better optimized for the hardware they run on isn't a negative or a criticism; it's more praise than anything else. It's just context that does need to be considered when talking about other hardware, operating systems and software.

Link to comment
Share on other sites


7 hours ago, Zodiark1593 said:

What would be the Reverse Uno of the decade is for Apple to suddenly launch a gaming focused SoC (complete with Ray Tracing and fast x86 emulation), and capitalize on the crap availability of gaming hardware to seize a chunk of the market. 
 

The amount of dumbfounding this would create is likely to be palpable...

@Apple take notes

✨FNIGE✨

Link to comment
Share on other sites


32 minutes ago, leadeater said:

If the Apple VR headset is real then I actually think they are jumping into this area, just not that quickly. M1 and future SoCs from Apple actually have enough performance, at low enough power, to not need a PC while also being able to play top-tier titles. Apple is currently the most likely to make wireless high-performance VR a reality with the optimization and polish required for people to actually want to use it.

Honestly, if they can manage to be high-end "competition" against the Quest 2, even at that price, with a much better game library and higher performance... that could be better than consoles. Actually it would be THE console for VR, with a ridiculously high entry price... most VR enthusiasts have already paid that much for VR stuff though, so it makes sense anyway.

Link to comment
Share on other sites


16 minutes ago, gabrielcarvfer said:

I meant in transistor density. Intel 10nm has 100M transistors per mm^2. TSMC 7nm has 91M and 5nm has 170M transistors per mm^2.
SuperFin, the MIM capacitors and the architecture clearly did something to make 10nm competitive. How are the yields? Nobody knows.

True, it's a lot better in transistor density, however I do wonder about other factors like power scaling; that's what made Intel's 14nm so good for a long time and then able to stretch so far at the very end, even though power is now getting rather high. 14nm was much better able to support an architecture with higher frequencies without hitting the vertical wall of the voltage curve as rapidly.
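Putting the density figures quoted above side by side (a quick calculation using only the numbers cited; achievable density in a real product also depends on cell libraries and the design being ported):

# Relative logic density from the MTr/mm^2 figures quoted above.
densities = {"Intel 10nm": 100, "TSMC 7nm": 91, "TSMC 5nm": 170}

baseline = densities["Intel 10nm"]
for node, mtr in densities.items():
    print(f"{node}: {mtr} MTr/mm^2 ({mtr / baseline:.2f}x Intel 10nm)")

On those numbers Intel 10nm is marginally denser than TSMC 7nm, while TSMC 5nm holds roughly a 1.7x density advantage, which is exactly why the power-scaling and frequency questions matter for judging which node makes the better product.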

 

So while you may be able to get more transistors in the same area, or have a smaller die, you might be better off, depending on the target application, using a node with worse transistor density and trading that off for other benefits instead. This is where I want to know more about Intel's 10nm technologies, because it's all well and good to say Intel is significantly behind TSMC in fab technology, but using TSMC's nodes may well not actually let Intel create a better product than using their own 10nm - it could even be worse.

Link to comment
Share on other sites


10 hours ago, porina said:

Intel recently announced 8 core Tiger Lake models coming "soon", so that might be more interesting against Zen 3 laptop CPUs which I understand are a little closer to market. 

Yep, although that is a 45W H-series part. I mentioned the 4800U specifically because it competes in the same 15-25W bracket as the TGL-U series CPUs, which are currently limited to quad-core offerings.

 

10 hours ago, RedRound2 said:

Apple's CPUs aren't fast because of some magical code optimization. They're fast because they're fast.

So are we just going to throw away just how much effort Apple's software engineers put into making sure a lot of the M1's potential actually translates to strong real-world performance? One of Apple's key pièces de résistance is the really tight integration between hardware and software, now even more so thanks to in-house silicon. You can keep parroting the "oh, Apple's fast because they're brute forcing it" phrase, but that doesn't make Apple's software work any less important. The fact that a whole new architecture with a different instruction set hasn't been a right mess and even runs emulation without much issue is nothing short of remarkable.

 

10 hours ago, RedRound2 said:

 

They should've ideally compared it with their i3s and i5s

 

You should take another look at the TGL-U product stack. There is no Core i3 variant as of yet, and the Core i5 and Core i7 variants are essentially identical in terms of CPU, differing only slightly in clock speeds, but most notably in the Xe iGPU, with the i5-1135G7 having an 80EU SKU while both i7s (1165G7/1185G7) have a 96EU version.

 

ERRATUM - There is a Core i3 1115G4 variant though it is relatively unpopular. 

The Workhorse (AMD-powered custom desktop)

CPU: AMD Ryzen 7 3700X | GPU: MSI X Trio GeForce RTX 2070S | RAM: XPG Spectrix D60G 32GB DDR4-3200 | Storage: 512GB XPG SX8200P + 2TB 7200RPM Seagate Barracuda Compute | OS: Microsoft Windows 10 Pro

 

The Portable Workstation (Apple MacBook Pro 16" 2021)

SoC: Apple M1 Max (8+2 core CPU w/ 32-core GPU) | RAM: 32GB unified LPDDR5 | Storage: 1TB PCIe Gen4 SSD | OS: macOS Monterey

 

The Communicator (Apple iPhone 13 Pro)

SoC: Apple A15 Bionic | RAM: 6GB LPDDR4X | Storage: 128GB internal w/ NVMe controller | Display: 6.1" 2532x1170 "Super Retina XDR" OLED with VRR at up to 120Hz | OS: iOS 15.1

Link to comment
Share on other sites


ITT: People who for some reason STILL think that first party benchmarks can be trusted. Every single manufacturer cherry picks things that make them look the best in their press releases.

🌲🌲🌲

 

 

 

◒ ◒ 

Link to comment
Share on other sites


1 minute ago, Arika S said:

ITT: People who for some reason STILL think that first party benchmarks can be trusted. Every single manufacturer cherry picks things that make them look the best in their press releases.

But clearly, our beloved AMD would never stoop that low, right? /s

The Workhorse (AMD-powered custom desktop)

CPU: AMD Ryzen 7 3700X | GPU: MSI X Trio GeForce RTX 2070S | RAM: XPG Spectrix D60G 32GB DDR4-3200 | Storage: 512GB XPG SX8200P + 2TB 7200RPM Seagate Barracuda Compute | OS: Microsoft Windows 10 Pro

 

The Portable Workstation (Apple MacBook Pro 16" 2021)

SoC: Apple M1 Max (8+2 core CPU w/ 32-core GPU) | RAM: 32GB unified LPDDR5 | Storage: 1TB PCIe Gen4 SSD | OS: macOS Monterey

 

The Communicator (Apple iPhone 13 Pro)

SoC: Apple A15 Bionic | RAM: 6GB LPDDR4X | Storage: 128GB internal w/ NVMe controller | Display: 6.1" 2532x1170 "Super Retina XDR" OLED with VRR at up to 120Hz | OS: iOS 15.1

Link to comment
Share on other sites


3 hours ago, D13H4RD said:

But clearly, our beloved AMD would never stoop that low, right? /s

AMD is wholesome 100 big chungus Keanu Reeves, they would never! /s

Link to comment
Share on other sites


30 minutes ago, gabrielcarvfer said:

If anything, they can create competitive products performance-wise, but using way more silicon, which costs more and reduces yields.

True, but the usual offset for that is that a less dense process node is cheaper than a denser one (though not currently, since both came to market at around the same time). That's only because of generational R&D costs, which are not on Intel's side this time around. At least when talking about 14nm, Intel has already recouped the cost of it, so it's largely just operating and materials cost; ROI has already been achieved.

 

Just remember it's product performance that sells the product, not backend cost advantages between nodes. If the product can be more performant for the consumer using Intel 10nm compared to TSMC 5nm, then Intel 10nm is better for that product. That's why I'm not so keen on saying which is better: it might very well be dependent on the product's use case.

 

30 minutes ago, gabrielcarvfer said:

AFAIK they're still using conventional lithography instead of EUV, and spend much more time than TSMC to complete each wafer.

EUV is still only used for some layers though; next-generation 3nm is supposed to allow 20 or more layers to be EUV. 7nm++ was 4 or 5 layers and 5nm is 14 layers. Interestingly enough, for more complicated things EUV still requires multi-patterning, so it ends up being around the same wafer output - that was for 7nm though, so not sure about 5nm.

Link to comment
Share on other sites


Can someone genuinely explain to me why anyone would care about Office 365 performance? Like, cool, it can export PowerPoint to PDF and sort stuff in Excel a little faster, but can you actually tell the difference when using it?

Link to comment
Share on other sites


22 minutes ago, descendency said:

Apple should release a benchmark of performance without a fan using a macbook air. . . and have all of those 0s for intel

I believe Chinese scientists used fan-less Intel nodes to make this happen:

 

China turns on nuclear-powered 'artificial sun' (Update)

https://phys.org/news/2020-12-china-nuclear-powered-artificial-sun.html

Link to comment
Share on other sites

