I Put a GPU in the M2 Mac Pro

Apple’s M2 Ultra powered Mac Pro is the final step in their Apple Silicon transition. But without GPU support or meaningful expansion, is it worth nearly double the price of a comparable Mac Studio?
 

 

Emily @ LINUS MEDIA GROUP                                  

congratulations on breaking absolutely zero stereotypes - @cs_deathmatch


Not gonna lie, this one feels too clickbaity: yes, you installed a GPU, but it didn't work. It's like a half-truth. Not mad or anything, just got excited that someone found a way to use a GPU, and the result was disappointing.

hi


Just now, MarcLmao said:

Not gonna lie, this one feels too clickbaity: yes, you installed a GPU, but it didn't work. It's like a half-truth. Not mad or anything, just got excited that someone found a way to use a GPU, and the result was disappointing.

There's extensions that fix that.


I'm not actually trying to be as grumpy as it seems.

I will find your mentions of Ikea or Gnome and I will /s post. 

Project Hot Box

CPU 13900k, Motherboard Gigabyte Aorus Elite AX, RAM CORSAIR Vengeance 4x16gb 5200 MHZ, GPU Zotac RTX 4090 Trinity OC, Case Fractal Pop Air XL, Storage Sabrent Rocket Q4 2tbCORSAIR Force Series MP510 1920GB NVMe, CORSAIR FORCE Series MP510 960GB NVMe, PSU CORSAIR HX1000i, Cooling Corsair XC8 CPU block, Bykski GPU block, 360mm and 280mm radiator, Displays Odyssey G9, LG 34UC98-W 34-Inch,Keyboard Mountain Everest Max, Mouse Mountain Makalu 67, Sound AT2035, Massdrop 6xx headphones, Go XLR 

Oppbevaring

CPU i9-9900k, Motherboard, ASUS Rog Maximus Code XI, RAM, 48GB Corsair Vengeance LPX 32GB 3200 mhz (2x16)+(2x8) GPUs Asus ROG Strix 2070 8gb, PNY 1080, Nvidia 1080, Case Mining Frame, 2x Storage Samsung 860 Evo 500 GB, PSU Corsair RM1000x and RM850x, Cooling Asus Rog Ryuo 240 with Noctua NF-12 fans

 

Why is the 5800x so hot?

 

 


14 minutes ago, MarcLmao said:

Not gonna lie, this one feels too clickbaity: yes, you installed a GPU, but it didn't work. It's like a half-truth. Not mad or anything, just got excited that someone found a way to use a GPU, and the result was disappointing.

Yeah, I agree. But the comparisons between the Mac Pro and Mac Studio were really interesting.


Hey LTT!

 

Hoping this message finds its way to the editors. I think my YT comments have been getting drowned out.

 

For almost all of LTT's recent videos, there's a low-frequency hum in all recorded audio. It doesn't seem to appear in sponsor spots. This started happening fairly recently, so I assume someone's preset changed without notice or something. It just needs a high-pass filter and you're good to go <3
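For what it's worth, the fix really is that simple in any DAW, or even in code. Here's a minimal sketch of a high-pass filter using SciPy (the 120 Hz cutoff and the test tones are just illustrative values; a real mix would pick the cutoff by ear):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass(audio, sr, cutoff=120.0, order=4):
    """Butterworth high-pass: attenuates rumble/hum below `cutoff` Hz."""
    sos = butter(order, cutoff, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, audio)

# Demo: a 50 Hz "mains hum" mixed with a 1 kHz stand-in for speech.
sr = 48_000
t = np.arange(sr) / sr
hum = 0.5 * np.sin(2 * np.pi * 50 * t)
voice = 0.5 * np.sin(2 * np.pi * 1000 * t)
cleaned = highpass(hum + voice, sr)  # hum heavily attenuated, voice passes
```

A 4th-order filter at 120 Hz knocks a 50 Hz hum down by roughly 30 dB while leaving speech frequencies essentially untouched.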


Enjoyed the video, but I raised this concern about the previous Mac Studio video (see the MarkBench thread, which seemed the most apt place for the actual discussion of benchmarking compilation).

 

You can't just give "Chromium compile, no symbols" as the only details for compilation, particularly given the ambiguity over whether it's targeting native or a constant platform across all machines.

 

This is a serious problem: if it's targeting native builds, then this test basically invalidates the meaningfulness of showing the x86 bench on the same graph, as they're effectively compiling two completely different programs for two completely different platforms with wildly different optimisation backends and system libraries, meaning there is no apples-to-apples comparison. This needs to be resolved for these compilation tests to be useful, because currently they're not.

 

Tell us the compilation settings. Show two bars for each, one targeting ARM and the other targeting x86. Ensure they're targeting the same OS; Linux is probably the easiest to build for on different machines and avoids an explosion of sub-bars for each OS+architecture pairing. This would provide a much more informative graph.

 

I mean, imagine if the Mac machines were rendering a completely different camera shot of a scene, or doing game benchmarks on different levels, compared to the x86 bench. Sure, it's the same renderer/game, but the details have to be the same, otherwise the numbers you get don't describe relative performance. That's the kind of benchmarking error we're seeing here with compilation.

 

Hopefully this can be resolved, and it was just too soon after the Mac Studio video to make sufficient changes, but this is a problem that has to be addressed. Or just take compilation benchmarks out of the videos from now on, which seems a shame, but if the goal is accurate, meaningful testing that is informative to potential customers, then it's an option, because currently this is a misleading benchmark and graph that isn't informative or reproducible.
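To make the ask concrete: pinning the target down in Chromium's GN args would make the benchmark fully specified and reproducible. A sketch of what that could look like (these are standard Chromium GN args, but the exact set would need to match whatever MarkBench actually runs; the values here are illustrative):

```sh
# Same OS and CPU target on every machine under test:
gn gen out/bench --args='
  target_os = "linux"
  target_cpu = "arm64"   # or "x64" for the second bar on the graph
  is_debug = false
  symbol_level = 0       # the "no symbols" the graph already mentions
'
autoninja -C out/bench chrome
```

Publishing even just that args block alongside the graph would resolve the ambiguity.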


1 hour ago, IkeaGnome said:

There's extensions that fix that.


Woah. What's that called, and is there a Firefox build?



Another one for the editors: the *pop* as the arrows come in on graphs is identical to the default notification sound in Mozilla Thunderbird. I thought I was getting tons of emails because I wasn't directly watching the video.


4 minutes ago, MarcLmao said:

Woah. What's that called, and is there a Firefox build?

I use a combination of two. I don't really want to derail an official thread too much, but DeArrow has a thread on here.

Spoiler

One blocks ads, but I've got YT Premium so I wouldn't really be getting ads anyway. It also adds functionality to the scrubber bar to highlight in-video ads and different parts of the video. The sponsor tag by the title is SponsorBlock. It's wrong on this one, but normally it's pretty accurate.

Spoiler


https://addons.mozilla.org/en-US/firefox/addon/sponsorblock/

DeArrow is the one that does the titles and thumbnails. Users can make their own titles and thumbnails with it for each video, then people can "vote" on the one they like and that's how the video shows up for people with DeArrow.

https://addons.mozilla.org/en-US/firefox/addon/dearrow/

 



I'd be interested in the Mac Pro. Looking to build a streaming setup; I have 8 GoPros, so the Pro would fit those Blackmagic capture cards perfectly!


Disappointed that, when talking about the PCIe setup, you didn't mention that its total potential PCIe bandwidth is only a touch more than your 13900K test bench's. (https://social.treehouse.systems/@marcan/110494017883893557)

Such a joke of a "pro" product.

50 minutes ago, themrsbusta said:

If it is detected, the driver can be made.

Only if Apple signs it. Nvidia famously made a Turing driver that Apple told them to fuck off with. (https://appleinsider.com/articles/19/02/14/video-nvidia-support-was-abandoned-in-macos-mojave-and-heres-why)

Main Gaming PC - i9 10850k @ 5GHz - EVGA XC Ultra 2080ti with Heatkiller 4 - Asrock Z490 Taichi - Corsair H115i - 32GB GSkill Ripjaws V 3600 CL16 OC'd to 3733 - HX850i - Samsung NVME 256GB SSD - Samsung 3.2TB PCIe 8x Enterprise NVMe - Toshiba 3TB 7200RPM HD - Lian Li Air

 

Proxmox Server - i7 8700k @ 4.5Ghz - 32GB EVGA 3000 CL15 OC'd to 3200 - Asus Strix Z370-E Gaming - Oracle F80 800GB Enterprise SSD, LSI SAS running 3 4TB and 2 6TB (Both Raid Z0), Samsung 840Pro 120GB - Phanteks Enthoo Pro

 

Super Server - i9 7980Xe @ 4.5GHz - 64GB 3200MHz Cl16 - Asrock X299 Professional - Nvidia Telsa K20 -Sandisk 512GB Enterprise SATA SSD, 128GB Seagate SATA SSD, 1.5TB WD Green (Over 9 years of power on time) - Phanteks Enthoo Pro 2

 

Laptop - 2019 Macbook Pro 16" - i7 - 16GB - 512GB - 5500M 8GB - Thermal Pads and Graphite Tape modded

 

Smart Phones - iPhone X - 64GB, AT&T, iOS 13.3 iPhone 6 : 16gb, AT&T, iOS 12 iPhone 4 : 16gb, AT&T Go Phone, iOS 7.1.1 Jailbroken. iPhone 3G : 8gb, AT&T Go Phone, iOS 4.2.1 Jailbroken.

 


8 hours ago, babel said:

Tell us the compilation settings. Show two bars for each, one targeting ARM and the other targeting x86. Ensure they're targeting the same OS; Linux is probably the easiest to build for on different machines and avoids an explosion of sub-bars for each OS+architecture pairing. This would provide a much more informative graph.

 

While I agree they should clearly mark the target being compiled for, at some point this becomes very synthetic. With all tests you should ask the question: would anyone intentionally get this machine to compile a Linux build of Chromium while running macOS?

If you're planning on using a Mac for dev and are going to be using it for cross-compilation, then the software you're going to be compiling is much more likely to be some server-side target, something you would then deploy on a Linux server but want to build locally. While it's not common, it is much more legitimate to have, say, a PostgreSQL compile task than a Chromium one.
 

They should also have local arch/platform compile tasks, as this is also an important factor. Yes, the target OS and CPU arch affect the speed here, but if you're using the machine for iterative development and end up doing a lot of recompiles (without using a cache), then you're likely doing them during dev targeting your local arch and OS; for such builds I would say they should be debug builds, including symbols etc. For most devs this local development build time is a much more important number than the final release build time, which is almost always going to run in a CI/CD environment anyway, not on your workstation.


 


4 hours ago, Hunter259 said:

Only if Apple signs it. Nvidia famously made a Turing driver that Apple told them to fuck off with. (https://appleinsider.com/articles/19/02/14/video-nvidia-support-was-abandoned-in-macos-mojave-and-heres-why)

NV still could have shipped without an Apple signature, but the limitation would be that only apps where the devs opt in to trust NV would use the GPU; other apps (and the OS window manager etc.) would not. In effect, GPU drivers use a DLL-injection-style approach, where they swap out the dynamic lib loaded and run by every GUI app on your system. Yes, your GPU driver is injecting code into every app on the OS. The default for a while now in macOS is for devs to use the hardened runtime, which means your apps only load dynamic libs signed by Apple, signed by you, or explicitly loaded by you.
 

For stuff like CUDA, NV could have shipped (and still could ship) a driver for the GPUs without any signatures from Apple, since all apps that use CUDA already include the NV SDK and thus don't require DLL injection by the CUDA driver, as there is no system lib the driver would be replacing.
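To make the opt-in concrete: under the hardened runtime, an app that wanted to load such an unsigned (or differently signed) driver library would have to ship with library validation disabled. A rough sketch, with hypothetical app and file names (the entitlement key itself is Apple's real `com.apple.security.cs.disable-library-validation`):

```sh
# entitlements.plist would need to contain:
#   <key>com.apple.security.cs.disable-library-validation</key>
#   <true/>
codesign --force --sign "Developer ID Application: Example Corp" \
         --options runtime \
         --entitlements entitlements.plist \
         MyComputeApp.app
```

That's the per-app "trust NV" decision described above; nothing system-wide changes, and unsigned-driver code never reaches apps that don't opt in.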


. . . Why put a mobile chip in a "pro" workstation?

 

The humongous 840 mm² M2 Ultra chip is tuned for efficiency, which is meaningless for a workstation that has a power cord and where you care about speed and RAM.

For comparison, an RTX 4090 is 600 mm² and obliterates the M2 Ultra in performance. While drawing more power, of course.

A Xeon processor is about 300 mm² in size, and again trounces the M2 Ultra in RAM and speed.

Efficiency-wise, the M2 Ultra is in a class of its own. Cool, I guess? And slow.


1 hour ago, 05032-Mendicant-Bias said:

. . . Why put a mobile chip in a "pro" workstation?

 

The humongous 840 mm² M2 Ultra chip is tuned for efficiency, which is meaningless for a workstation that has a power cord and where you care about speed and RAM.

I think I've been to music festivals with more people than there are M2 Mac Pro customers in the world. A high-performance version of the M2 Ultra would require a complete redesign to allow for higher clocks and higher power draw. It's not worth the effort for such a small market.

Ian Cutress talks about scaling (in the other direction) in this video (a comparison of Zen 4 and Zen 4c):

 

 

I think "malicious compliance" is a good term to describe what Apple is doing.

The Mac Pro was once a quite popular and price-competitive machine, with updates at least every other year, up until the trashcan came along. With the only expansion being (expensive) Thunderbolt and no hardware updates for six years, Apple pretty much killed the entire segment with one flop of a product. The 2019 Mac Pro was a necessary step back to the original concept, but too expensive and too little, too late.

Since the trashcan Mac Pro, Apple has had a real chicken-and-egg problem on their hands. They have a very small customer base for this segment, so they don't want to spend much money on the Mac Pro. In my opinion this is a dilemma of their own making: they could not accept that the Mac Pro was a bad product, fell for the sunk cost fallacy, didn't update the Mac Pro for way too long, and eventually drove their customers away. Regaining these customers would require investments Apple is simply not willing to make.

 

It seems like the Mac Pro is nearing its end. I cannot see many people buying the Mac Pro when you can get the same performance in the Mac Studio for a fraction of the cost. The Mac Studio is the spiritual successor of the trashcan, with only Thunderbolt expansion. Nevertheless, the huge cost gap still makes it more compelling for most customers than the Mac Pro.


19 hours ago, babel said:

You can't just give "Chromium compile, no symbols" as the only details for compilation, particularly given the ambiguity over whether it's targeting native or a constant platform across all machines.

Not their first time doing this, and I doubt it'll be the last. I also voiced this concern in another video of theirs. LTT makes it really hard to give them credibility whenever they try to do anything more serious.

16 hours ago, themrsbusta said:

If it is detected, the driver can be made.

I mean, you can slap Asahi on it and should be able to use any GPU in there (even Nvidia, since they also offer ARM drivers).

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


18 hours ago, hishnash said:

NV still could have shipped without an Apple signature, but the limitation would be that only apps where the devs opt in to trust NV would use the GPU; other apps (and the OS window manager etc.) would not. In effect, GPU drivers use a DLL-injection-style approach, where they swap out the dynamic lib loaded and run by every GUI app on your system. Yes, your GPU driver is injecting code into every app on the OS. The default for a while now in macOS is for devs to use the hardened runtime, which means your apps only load dynamic libs signed by Apple, signed by you, or explicitly loaded by you.
 

For stuff like CUDA, NV could have shipped (and still could ship) a driver for the GPUs without any signatures from Apple, since all apps that use CUDA already include the NV SDK and thus don't require DLL injection by the CUDA driver, as there is no system lib the driver would be replacing.

Which makes it such a crappy driver that it likely would have resulted in more complaints than it was worth. Apple should have just signed the damn thing and gotten over themselves, but we all know that doesn't seem possible anymore. I believe AMD will never have a driver for ARM Macs unless pros manage to scream louder than they did over the 2013 Mac Pro, and I just don't think Apple will care at that point.



1 hour ago, Hunter259 said:

I believe AMD will never have a driver for ARM Macs unless pros manage to scream louder than they did over the 2013 Mac Pro, and I just don't think Apple will care at that point.

Doesn't matter how much pros scream; even if Apple wanted to, they wouldn't do an AMD driver for Apple Silicon Macs, as there are a load of key Metal features that are just not supportable on AMD GPUs (features Apple added just for their GPUs). Even if Apple signed an AMD driver, it would be missing so many modern Metal features that almost no modern apps would support those GPUs. Devs are not going to put in dev work for a GPU that is an optional aftermarket addition to a Mac Pro and will be 0.1% of their users.

 

1 hour ago, Hunter259 said:

Which makes it such a crappy driver that it likely would have resulted in more complaints than it was worth. Apple should have just signed the damn thing and gotten over themselves, but we all know that doesn't seem possible anymore.

For CUDA (which is what this was always about), there is no difference whether Apple signs it or not. The use case for the Apple signature is for these drivers to be able to replace the Metal OS driver; due to API differences, NV GPUs would not provide very good Metal feature support (Apple is not adding features to Metal that line up with NV hardware).

 

From a third-party GPU support angle, what would be nice to see Apple add is an easy way to have PCIe passthrough to a VM. Apps could then use the Virtualization framework Apple provides to very easily embed an ultra-lightweight Linux VM (within the app) and take PCIe passthrough from the attached PCIe GPUs for compute, using whatever Linux driver they want within the VM. This would allow compute-heavy apps to offer support for NV or even AMD GPUs without those card vendors needing to maintain macOS drivers, just Linux ARM64 16K or 4K page-size drivers. For long-running GPU compute tasks, performance would not be impacted much at all.


Hmm... can you run Linux natively on the M2 Ultra Mac Pro, and then load drivers for AMD and Nvidia GPUs that way?

That'd mean it's an artificial driver lock on macOS. Metal is supported for AMD GPUs on Intel Macs; no reason why it wouldn't be on M2, aside from a purposeful lack of drivers.


On 7/27/2023 at 7:36 PM, Hunter259 said:

Only if Apple signs it. Nvidia famously made a Turing driver that Apple told them to fuck off with. (https://appleinsider.com/articles/19/02/14/video-nvidia-support-was-abandoned-in-macos-mojave-and-heres-why)

Only if you want to use macOS. Apple already said you can install Windows on ARM, and there are some Linux projects that run on Apple Silicon.

macOS for work, Linux or Windows for gaming.

Made In Brazil 🇧🇷


23 minutes ago, themrsbusta said:

Only if you want to use macOS. Apple already said you can install Windows on ARM, and there are some Linux projects that run on Apple Silicon.

macOS for work, Linux or Windows for gaming.

AMD has no Windows-on-ARM drivers AFAIK, and Nvidia's is under an NDA and not easily accessible.



7 hours ago, igormp said:

AMD has no Windows-on-ARM drivers AFAIK, and Nvidia's is under an NDA and not easily accessible.

Again: it can be made (and with the rise of ARM computers, it probably will be), and it doesn't need Apple's permission.



On 7/28/2023 at 4:21 AM, hishnash said:

While I agree they should clearly mark the target being compiled for, at some point this becomes very synthetic. With all tests you should ask the question: would anyone intentionally get this machine to compile a Linux build of Chromium while running macOS?

If you're planning on using a Mac for dev and are going to be using it for cross-compilation, then the software you're going to be compiling is much more likely to be some server-side target, something you would then deploy on a Linux server but want to build locally. While it's not common, it is much more legitimate to have, say, a PostgreSQL compile task than a Chromium one.
 

They should also have local arch/platform compile tasks, as this is also an important factor. Yes, the target OS and CPU arch affect the speed here, but if you're using the machine for iterative development and end up doing a lot of recompiles (without using a cache), then you're likely doing them during dev targeting your local arch and OS; for such builds I would say they should be debug builds, including symbols etc. For most devs this local development build time is a much more important number than the final release build time, which is almost always going to run in a CI/CD environment anyway, not on your workstation.

While any specific benchmark is absolutely synthetic (indeed, that's what a benchmark is, because that's how you get reproducible and repeatable measurements), this is not an inherent issue with the choice of benchmark. Large software builds such as Chromium do many disparate things, which is why they take so long to compile. The benefit of picking Chromium is that it stresses many aspects of the compiler with a realistic distribution, making it more "indicative" of what other pieces of software will see, rather than an overly synthetic benchmark that stresses "every" part of compilation and in doing so stresses many things that never actually come up normally.

 

I disagree with your assessment of developers on Macs, though what you say does line up with some users. Cross-compilation is used for a variety of reasons, for example by people who develop software on a Mac but target users on a range of platforms. Indeed, with the rising adoption of Electron as a cross-platform toolchain, many app developers will quite conceivably want to compile their app on their Mac, which will in turn compile a large portion of Chromium, and do so for users on Windows and Linux, in addition to Mac x86 and Apple Silicon! They may even be targeting ARM Linux (perhaps even Apple Silicon, but also Raspberry Pi or the like). This is just one thing developers may want to do, and if they have a powerful Mac or MacBook, it may even be faster and simpler for them to use that instead of a CI/CD machine on a server somewhere.

 

There is, of course, the class of those for whom compilation is something done by a build server. However, this is largely constrained to the professional space, where we're talking about programmers working on company code that (understandably) they might not have the resources to build on their machine, making local compilation seem redundant. But that doesn't mean they don't compile anything locally; most dev work involves compiling at least some portion of your toolchain or stack locally, particularly when configuration is handled at compile time. We can also see why some setups use a single build step to avoid having a fleet of servers each rebuilding everything themselves, as you indicate, but then we still want benchmarks like this, because they remain a reasonable approximation of real-world builds that aren't Chromium.

 

Would multiple benchmarks in this category be useful? Possibly. Different programs, and indeed different builds of the same program, stress compilers in very different ways, so more indicators can be valuable. On the other hand, too many benchmarks result in information overload and can be hard to interpret (perhaps one benchmark is much faster on a given platform than another because it doesn't need to compile as much "driver"-style code, since that's provided by the OS, which is exactly the kind of problem we need to see addressed).

 

A hidden problem with this is the question of which benchmarks? We can all pick our favourites, but we must not fall under the pretence that something we have to compile frequently is something most others compile frequently. Unfortunately, assessing this is extremely difficult, so finding benchmarks representative of what "people compile often" will be an extremely arduous process. The classic example was the Linux kernel, yet few Linux users compile that themselves, as the kernel is provided by their distro already, often as a pre-compiled image! So, since we can't easily assess which codebases are good candidates, we shift to benchmarks that are representative of "many programs" themselves. Again, this is a tricky problem, but a little more tractable: look for big, complex pieces of software that serve many uses. This has an added benefit: since larger programs take longer to compile, it's easier to observe differences in performance. The Linux kernel is an okay choice, but it's missing a lot, and much of its compilation now is that of the drivers living in its codebase. Chromium is a much better choice, as it contains an astounding number of subsystems, each complex and significant in its own right, with a large, primarily C++ codebase that is far more indicative of other software projects (compared to Linux's C, which has nothing like C++'s template system, which can significantly affect compilation times).

 

At the very least, considering the above, we can largely agree on the final point, though I would say the performance of native builds would be indicated if the benchmarks followed the guidance I originally put forward, since the different targets should naturally include the native platform of each machine (just in such a way that we compare like to like). Unfortunately, we don't have actual details of the builds specified, only that they are without debug symbols, so the problem there is one of underspecification that can be remedied with a little more transparency into the benchmark. Iteration speed is important, and that's one of the reasons compilation benchmarks are actually quite useful for devs looking for new hardware. But, as above, devs work under a range of different conditions, so assumptions about how any given developer (or small cohort) does things should not be taken as how all devs do. Local debug builds may matter to some devs for iteration, while other devs have test suites that benefit from more optimised builds, and a third group relies on incremental builds to speed up testing and iteration, which is very much not indicated by cold-start builds (without caching).

 

This is why the real solution to this isn't simply to pick XYZ, but rather to be transparent about whatever was picked; the more details we know about the build, the more we can assess for ourselves how useful it is as an indicator for our own uses. Which, in my opinion, is what benchmarks are all about, anyway.
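Even a tiny bit of published metadata would deliver that transparency: label every bar with the full configuration tuple instead of a bare "Chromium compile, no symbols". A toy sketch of what "fully specified" could mean (the field names and values are mine, not MarkBench's actual schema):

```python
from itertools import product

# Every axis that changes what is actually being compiled. A bar on a
# graph is only comparable to another bar with an identical label.
TARGET_OS = ["linux"]                  # hold the OS constant across machines
TARGET_CPU = ["x64", "arm64"]          # compile both targets on every machine
BUILD_TYPE = ["debug", "release-nosym"]

def benchmark_labels():
    """Enumerate fully specified compile-benchmark configurations."""
    return [
        f"chromium/{os_}/{cpu}/{build}"
        for os_, cpu, build in product(TARGET_OS, TARGET_CPU, BUILD_TYPE)
    ]

# A bar labeled 'chromium/linux/arm64/release-nosym' says exactly what
# was built; the same scheme extends to any other codebase they pick.
```

With labels like these, anyone can tell at a glance whether two bars are actually comparable, which is the whole point of putting them on one graph.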

