Apple M1 Ultra - 2nd highest multicore score, lost to 64-core AMD Threadripper.

TheReal1980

If the M1 Ultra is 100% as good as Apple claims it is then I wonder if miners will scalp the Mac Studio? I mean close to 3090 performance in a 100W envelope sounds like a dream for mining.


33 minutes ago, AndreiArgeanu said:

If the M1 Ultra is 100% as good as Apple claims it is then I wonder if miners will scalp the Mac Studio? I mean close to 3090 performance in a 100W envelope sounds like a dream for mining.

If miners take an interest in the Mac Studio: hello, $3.2 trillion market cap for Apple. 


33 minutes ago, AndreiArgeanu said:

If the M1 Ultra is 100% as good as Apple claims it is then I wonder if miners will scalp the Mac Studio? I mean close to 3090 performance in a 100W envelope sounds like a dream for mining.

Probably not. While it's good value given the current GPU prices, the Apple Silicon Macs haven't attracted "miner" interest.

https://9to5mac.com/2021/11/10/m1-pro-macbook-pro-cryptocurrency-mining/

Quote

Running the numbers through a crypto calculator shows that the profit, after allowing for electricity costs, is just $12.82 per month – or around 42 cents per day. If you were buying the MacBook Pro purely for mining, that means it would pay for itself in… 17 years!
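
To put that payback figure in context: break-even time is just the hardware price divided by the monthly profit. A minimal sketch of that arithmetic (the $2,000 price below is an assumed placeholder, not a number from the article):

```python
# Rough break-even estimate for mining on a machine bought at retail.
# The hardware price is an assumption for illustration; the monthly
# profit is the figure quoted in the 9to5mac article above.
hardware_price_usd = 2000.0      # assumed purchase price
profit_per_month_usd = 12.82     # net of electricity, per the article

months = hardware_price_usd / profit_per_month_usd
print(f"Break-even after ~{months:.0f} months (~{months / 12:.1f} years)")
# A pricier MacBook Pro configuration lands around the article's "17 years".
```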

 


Apple not selling parts individually also helps. 
 

If you are a miner, you can buy just the graphics card (for, oh, $2K), and put multiple of them in one case. No need for more than 1 CPU or that much RAM - just enough to keep the GPUs fed.

 

Whereas, Apple sells their GPU as a complete system. If you are a miner, you are forced to buy a whole case, CPU, 1TB of storage, and 64 GB of RAM that you don’t need, repeatedly.

 

It’s like if a miner was trying to build a farm entirely from gaming PC prebuilts at Best Buy.

 

The Mac Studio is a fine value as a complete system, but terrible value if you just want the GPU. That's going to work in Apple's favor here, no doubt.


11 hours ago, Dracarris said:

Well okay, that means you cannot trust the speeds and timings that GB reports - but how exactly does this affect the score? After all, the score tells us in what time machine X gets workload Y done. The clock speeds and memory timings it ran at are pretty secondary, especially with RISC machines, where e.g. CPU clock is not comparable to CISC anyway. Here we are interested in how Apple Silicon compares to x86 machines, not in scores on two identical CPU models.

I can produce two extremely different scores with GB reporting identical hardware configurations. This means that if you search their database for, say, a 12900K to compare against a 5950X, you'll draw incorrect conclusions depending on which result you pick. It's partly why we get these CPU performance leaks claiming "5950X 50% faster than 10900K!" - they end up being based on two cherry-picked GB results that perpetuate the claim.

 

For the example comparison you mentioned, comparing Apple Silicon to x86 will give different performance metrics depending on which GB result you select. At that point, do you average out all of the GB scores? Or do you compare both the highest and lowest scores and hope that's an accurate representation of the products across their configuration stacks?
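
For what it's worth, if someone did want to use all the results for a given chip instead of cherry-picking one, a robust summary such as the median (or a trimmed mean) is far less sensitive to outlier runs than quoting the single highest score. A quick sketch with made-up numbers:

```python
import statistics

# Hypothetical multi-core scores pulled from the browser for one CPU model
# (values are invented purely to illustrate the aggregation, not real results).
scores = [16800, 17250, 17300, 17350, 17400, 17450, 17500, 19100]

print("max     :", max(scores))                            # what cherry-picked leaks tend to quote
print("median  :", statistics.median(scores))              # robust to a single hot run
print("trimmed :", statistics.mean(sorted(scores)[1:-1]))  # drop the best and worst run
```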

 

I just don't see this being an accurate way to measure performance between identical or different products, not until they fix how hardware configuration is reported. At best, it serves to determine whether changes you are making to your specific system configuration are improving or decreasing performance for a specified workload, not as a comparison tool.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


So you buy a processor and you don't care whether it will benefit YOUR workload, only how it does in the most generic test in existence. If you call that an informed customer, good luck with that. Needless to mention that this specific processor is no good for non-macOS apps.


Man this is so fun. 😄 

 

Exactly the same thing that happened when the OG M1 was released is currently happening with this GB result on PC-centric internet forums. 

 

A lot of denial, a lot of excuses. 

 

Man give it to Apple that they once again made tech forum debates interesting. 


21 hours ago, TheReal1980 said:

Summary

Looks like we have a rocket on our hands! Second highest multicore score of any CPU, not bad.

 

m1-ultraa-benchmark.jpg

 

Quotes

 

My thoughts

This thing is a rocket, it seems. Higher score than the Intel 12900 and almost as fast in multi-core as the 3990X 64-core AMD Threadripper (which is #1). Apple is moving forward; looking forward to an octa-CPU Mac Pro in the near future.

 

Sources

https://forums.macrumors.com/threads/m1-ultra-outperforms-28-core-intel-mac-pro-in-first-leaked-benchmark.2337039/

I wish that instead of this big "SoC" thing, Apple would just release dedicated GPUs and CPUs. User upgradability would be a thing then, too. 

CPU-AMD Ryzen 7 7800X3D GPU- RTX 4070 SUPER FE MOBO-ASUS ROG Strix B650E-E Gaming Wifi RAM-32gb G.Skill Trident Z5 Neo DDR5 6000cl30 STORAGE-2x1TB Seagate Firecuda 530 PCIE4 NVME PSU-Corsair RM1000x Shift COOLING-EK-AIO 360mm with 3x Lian Li P28 + 4 Lian Li TL120 (Intake) CASE-Phanteks NV5 MONITORS-ASUS ROG Strix XG27AQ 1440p 170hz+Gigabyte G24F 1080p 180hz PERIPHERALS-Lamzu Maya+ 4k Dongle+LGG Saturn Pro Mousepad+Nk65 Watermelon (Tangerine Switches)+Autonomous ErgoChair+ AUDIO-RODE NTH-100+Schiit Magni Heresy+Motu M2 Interface


2 hours ago, Ryan829 said:

I wish that instead of this big "SoC" thing, Apple would just release dedicated GPUs and CPUs. User upgradability would be a thing then, too. 

It's only this efficient BECAUSE it's an SoC and on a locked-down platform where Metal is specifically tailored to the unified memory architecture. The consoles are similar but not quite identical, as they still have external RAM to keep costs down.

The problem is that as hardware gets further away from the CPU/GPU, bottlenecks occur. Socketed hardware is even worse: you increase power consumption and latency and decrease bandwidth due to losses and crosstalk in the traces and socket. This is why PCIe 5 is so hard to implement and may not be wired to every PCIe slot; it becomes insanely complicated to keep the signal integrity over a relatively short distance.

Apple have basically side-stepped generations of technical obstacles by using an SoC. The most efficient GPUs use similar techniques, putting everything on the same chip too, but it works out REALLY expensive to do so, and the PCIe interface becomes even more of a bottleneck to tapping into that performance, which is why those kinds of cards are used for jobs that can fit entirely in the VRAM and caches.

Router:  Intel N100 (pfSense) WiFi6: Zyxel NWA210AX (1.7Gbit peak at 160Mhz)
WiFi5: Ubiquiti NanoHD OpenWRT (~500Mbit at 80Mhz) Switches: Netgear MS510TXUP, MS510TXPP, GS110EMX
ISPs: Zen Full Fibre 900 (~930Mbit down, 115Mbit up) + Three 5G (~800Mbit down, 115Mbit up)
Upgrading Laptop/Desktop CNVIo WiFi 5 cards to PCIe WiFi6e/7


On 3/10/2022 at 2:56 AM, gjsman said:

then double it. 3090 territory? 

I will frankly be amazed if they manage linear scaling, though looking at how the rest of their GPUs scale, it would not be out of the blue.


16 hours ago, Spindel said:

Man this is so fun. 😄 

 

Exactly the same thing that happened when the OG M1 was released is currently happening with this GB result on PC-centric internet forums. 

 

A lot of denial, a lot of excuses. 

 

Man give it to Apple that they once again made tech forum debates interesting. 

And it will be the same for M2 as well when it comes out.

 

5 years ago: ARM CPUs are never going to be a thing in desktops! Apple would be stupid, and it would be their death sentence (and keep in mind this was when Intel had 8-10% improvement in perf each gen).

2 years ago, at the Apple Silicon announcement: LTT making fun of Apple Silicon, calling it "ASS", and saying Rosetta was never going to work. Apple was going to lose a lot of customers and devs.

2 years ago, before the embargo: the M1 presentation claims are wild and unbelievable. They're just stuffing iPad chips into Macs. And it's definitely a bad sign since the graphs weren't labelled.

0.5 years ago, M1 Pro and Max: basically the same complaints about graphs, and deflecting the conversation completely away from their impressive chips to the notch.

On the phone side of things, it's been well known how far ahead of the competition the A series has been for many, many years, but then the haters change positions and keep saying "Why wOULd aNyOne NeED ThAT kInD of PeRFoRMance?" - Well, shithead, a lot of on-device ML, AI, image processing and video processing requires more and more performance, and generally, as a tech enthusiast, you should never be complaining about hardware becoming more powerful and efficient.

And here we are now. I'm not saying the M1 Ultra is going to be the best shit to exist. It may not be, and we need to get it to independent reviewers before we come to that conclusion, but given their extensive track record so far, it will be close to the top. It irritates me how Apple doesn't get enough recognition from the "tech nerds" group, who could actually admire the engineering marvel they pulled off, because "brrr APple evil" - yet they would happily pounce on them for literally the smallest thing (I remember seeing people on this forum discrediting Apple's donation for hurricane relief as a marketing move with an ulterior motive).


Anyone remember the PPC G3 advertisements about the "Megahertz Myth" back in the day? Apple is very good at cherry-picking data from carefully selected performance metrics. No doubt the M1 chips are amazing in their own right, but these claims by Apple that they will trounce a Threadripper and a 3090 while sipping power are just not accurate for all workloads.

My Current Setup:

AMD Ryzen 5900X

Kingston HyperX Fury 3200mhz 2x16GB

MSI B450 Gaming Plus

Cooler Master Hyper 212 Evo

EVGA RTX 3060 Ti XC

Samsung 970 EVO Plus 2TB

WD 5400RPM 2TB

EVGA G3 750W

Corsair Carbide 300R

Arctic Fans 140mm x4 120mm x 1

 


1 hour ago, RedRound2 said:

And it will be the same for M2 as well when it comes out.

 

5 years ago: ARM CPUs are never going to be a thing in desktops! Apple would be stupid, and it would be their death sentence (and keep in mind this was when Intel had 8-10% improvement in perf each gen).

2 years ago, at the Apple Silicon announcement: LTT making fun of Apple Silicon, calling it "ASS", and saying Rosetta was never going to work. Apple was going to lose a lot of customers and devs.

2 years ago, before the embargo: the M1 presentation claims are wild and unbelievable. They're just stuffing iPad chips into Macs. And it's definitely a bad sign since the graphs weren't labelled.

0.5 years ago, M1 Pro and Max: basically the same complaints about graphs, and deflecting the conversation completely away from their impressive chips to the notch.

On the phone side of things, it's been well known how far ahead of the competition the A series has been for many, many years, but then the haters change positions and keep saying "Why wOULd aNyOne NeED ThAT kInD of PeRFoRMance?" - Well, shithead, a lot of on-device ML, AI, image processing and video processing requires more and more performance, and no one should ever complain about hardware becoming more powerful and efficient.

And here we are now. I'm not saying the M1 Ultra is going to be the best shit to exist. It may not be, and we need to get it to independent reviewers before we come to that conclusion, but given their extensive track record so far, it will be close to the top. It irritates me how Apple doesn't get enough recognition from the "tech nerds" group, who could actually admire the engineering marvel they pulled off, because "brrr APple evil" - yet they would happily pounce on them for literally the smallest thing (I remember seeing people on this forum discrediting Apple's donation for hurricane relief as a marketing move with an ulterior motive).

I call it an autistic reluctance to change. 
 

x86 is dying and ARM is the future. 


50 minutes ago, atxcyclist said:

Anyone remember the PPC G3 advertisements about the "Megahertz Myth" back in the day? Apple is very good at cherry-picking data from carefully-selected performance metrics. No doubt the M1 chips are amazing in their own right, but these claims by Apple that they will trounce a Threadripper and a 3090 while sipping power, are just not accurate for all workloads.

Well, Apple was not wrong about the Max matching a 3080 mobile while sipping power, with apps written with the correct API. 
 

Or are we arguing that it can only be compared with software that needs at least one translation layer for the CPU and at least one translation layer for the GPU?


19 minutes ago, Spindel said:

Well, Apple was not wrong about the Max matching a 3080 mobile while sipping power, with apps written with the correct API. 
 

Or are we arguing that it can only be compared with software that needs at least one translation layer for the CPU and at least one translation layer for the GPU?

 

"with the correct API" is essentially what I'm getting at. This is the inherent downside of RISC architecture, there are correct APIs and then there are not. Apple is trying to obfuscate that by carefully selecting benchmarks that their hardware excels at, but in general performance it doesn't really live up to their claims. With Apple having near complete control over the OS and hardware they sell, this might be fine for the workloads they allow, but it's still not 100% straightforward of a comparison.

 

As far as your estimation that x86 is dying: attempts to bring RISC systems into the mainstream market for good have been made and have failed multiple times. Apple has done it before, Sun Microsystems did it before, other companies like Fujitsu have done it before. If ARM/RISC were the future, we'd be in it by now. You are one of many people who have claimed this over the years, and all of you have been proven wrong.

My Current Setup:

AMD Ryzen 5900X

Kingston HyperX Fury 3200mhz 2x16GB

MSI B450 Gaming Plus

Cooler Master Hyper 212 Evo

EVGA RTX 3060 Ti XC

Samsung 970 EVO Plus 2TB

WD 5400RPM 2TB

EVGA G3 750W

Corsair Carbide 300R

Arctic Fans 140mm x4 120mm x 1

 


2 minutes ago, atxcyclist said:

"with the correct API" is essentially what I'm getting at. This is the inherent downside of RISC architecture, there are correct APIs and then there are not. Apple is trying to obfuscate that by carefully selecting benchmarks that their hardware excels at, but in general performance it doesn't really live up to their claims. With Apple having near complete control over the OS and hardware they control, this might be fine for the workloads they allow, but it's still not 100% straightforward of a comparison.

This is such a stupid argument. Why won't people use the correct APIs? Do you have a death wish for your own business, intentionally sticking with legacy technology and not taking advantage of the massive performance and efficiency gains from newer APIs?

 

The simple fact is that Apple's hardware can perform when apps are written correctly, and when they are not, it still performs about as well as its reasonable counterparts. Those are extremely impressive achievements, and people who actually use these machines wholeheartedly agree. So I don't know why you keep bringing up such arguments.

2 minutes ago, atxcyclist said:

As far as your estimation that x86 is dying: attempts to bring RISC systems into the mainstream market for good have been made and have failed multiple times. Apple has done it before, Sun Microsystems did it before, other companies like Fujitsu have done it before. If ARM/RISC were the future, we'd be in it by now. You are one of many people who have claimed this over the years, and all of you have been proven wrong.

That is again an extremely dumb conclusion. Whatever is better ends up leading the industry. Historically, CISC-based designs were simply better than RISC-based ones; that's why RISC died off. Except today Apple Silicon has the best perf/watt, and its iGPUs are on par with dedicated graphics cards with dedicated coolers. The only reason ARM will take more time to take over x86 is that Apple only makes chips for itself. So we will need other companies, Qualcomm or anyone else, to catch up to Apple first before we even think about the end of x86.

 

The adoption rate is defined by mass consumers, not small tech groups like us, and believe me, they don't give jack shit about ARM vs x86. They only care about how much work gets done in a given period and how good the efficiency/battery life is, and Apple Silicon has done wonders in both.


RISC vs CISC doesn't matter nowadays. Everything (more powerful than integrated controllers) sits somewhere in between RISC and CISC. 
 

That holds true for both ARM and x86


Okay, since a few people in this thread have questioned Geekbench and said things like "it's just one benchmark" and "it doesn't test real world stuff", I thought I'd investigate a little bit.

Here are some facts about Geekbench.

 

Geekbench is a benchmarking suite. It is not a single benchmark, but rather it is a collection of various benchmarks where the score is aggregated into a single number at the end. 5% of the score is based on the result from the cryptographic workloads. 65% is based on the integer workloads, and the remaining 30% is based on the FP workloads.
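
To make that weighting concrete, here is a minimal sketch of how three subsection scores could be combined with those weights. This assumes a simple weighted average; Geekbench's actual aggregation (geometric means within each subsection, and so on) may differ, and the subsection numbers below are invented:

```python
# Section weights as described above: 5% crypto, 65% integer, 30% floating point.
WEIGHTS = {"crypto": 0.05, "integer": 0.65, "float": 0.30}

def overall_score(subscores: dict) -> float:
    """Combine subsection scores into one number using the stated weights."""
    return sum(WEIGHTS[name] * value for name, value in subscores.items())

# Invented subsection scores, just to show how little the crypto section moves the total:
print(overall_score({"crypto": 2000, "integer": 1700, "float": 1800}))  # 1745.0
print(overall_score({"crypto": 3000, "integer": 1700, "float": 1800}))  # 1795.0
```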

 

As for how real world it is, here is a list of applications that are included in the Geekbench 5 CPU suite, and what real life workload they match:

 

Cryptographic workload:

  • Encrypt a file with AES-XTS using a 256-bit key - This is done all the time: when you visit a website, when you save a file to an encrypted drive, and so on. You do it hundreds of times a day without even realizing it.

Integer workload:

  • Compress and decompress an ebook using LZMA - As the test itself describes, this is useful whenever you open a compressed file that uses LZMA. It is a very widely used compression algorithm, found in everything from ebook formats to 7-Zip; in fact, it is the foundational compression algorithm used in 7-Zip.
  • Compress and decompress a JPEG file (using libjpeg-turbo) and a PNG file (using libpng) - Do I even need to explain why this is very "real world"? This happens all the time.
  • Calculate the route from one point in Ontario, Canada to another, with 12 destinations along the way, using Dijkstra's algorithm - This is not just the type of calculation done in GPS applications, but also the kind used for pathfinding for AIs in games (see the small sketch after this list).
  • Create a DOM element in an HTML file and then extend it using JavaScript - A decent HTML5 test, which is something we exercise every day when browsing the web.
  • Run a bunch of queries against an SQLite database (Geekbench acts as both the client and the server) - This is similar to how quite a few applications work, storing data in an internal database rather than in files. Not sure how common it is in everyday applications, but it is used a lot in enterprise-grade applications. In any case, it's a good test.
  • Render a PDF file using the PDFium library, which is lifted out of Chrome - Ever opened a PDF file in Chrome? Then this test applies to you.
  • Open a markdown-formatted text document and render it as a bitmap - Maybe not something you do every day, but I still think it's a pretty interesting test. It also does some interesting things like aligning the text so that it doesn't spill outside the bitmap. Not sure how widely used these things are; it would save on storage at the expense of compute in, for example, a game, but I am not sure developers actually do that.
  • Compile a 729-line C file using Clang - A lot of people use Clang/LLVM when they compile code, so this test is highly relevant.
  • Image manipulation: crop a photo, adjust contrast, apply filters like blur and the like, then compress it into a JPEG file, generate a thumbnail and store it in an SQLite database - This is what people do on their phones all the time. It is highly real world.
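
To make the route-finding bullet above concrete, here is a tiny, stripped-down version of that kind of workload: Dijkstra's algorithm over a small weighted graph. The graph and its weights are invented for illustration and have nothing to do with Geekbench's actual Ontario dataset:

```python
import heapq

# Toy road network: node -> {neighbour: distance}. Purely illustrative values.
GRAPH = {
    "A": {"B": 4, "C": 2},
    "B": {"C": 5, "D": 10},
    "C": {"D": 3},
    "D": {},
}

def shortest_distances(source: str) -> dict:
    """Classic Dijkstra with a binary heap; returns the distance from source to every node."""
    dist = {node: float("inf") for node in GRAPH}
    dist[source] = 0
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist[node]:
            continue  # stale queue entry; a shorter path was already found
        for neighbour, weight in GRAPH[node].items():
            if d + weight < dist[neighbour]:
                dist[neighbour] = d + weight
                heapq.heappush(queue, (d + weight, neighbour))
    return dist

print(shortest_distances("A"))  # {'A': 0, 'B': 4, 'C': 2, 'D': 5}
```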

 

 

Floating Point Workloads:

  • N-body physics - Simulate gravitational forces in 3D space. To be more precise, it simulates 16,384 planets orbiting around a black hole. Maybe not that "real world", but a good test nonetheless.
  • Rigid body physics - Simulates bodies that move around, with collision detection and friction calculation/simulation. Used in games all the time.
  • Gaussian blur - This is used EVERYWHERE, from making UI elements blurry (for example, if you have transparency on in Windows, things like the taskbar blur whatever is behind them) to Photoshop-like programs (a small example follows after this list).
  • Face detection - This is getting more and more common. Used in things like cameras to determine where to focus.
  • Horizon detection - Take a crooked 9-megapixel photo and rotate it so that it is straight. Mostly used in camera apps, but probably in games too when determining the orientation of some user input.
  • "Magic eraser" on a picture - Basically, you mark a section of an image and it automatically removes whatever is there and fills it in with something else. Got a pimple on your face? Remove it with this. Used a lot in beauty filters and the like.
  • Combine 4 SDR images into a single HDR image - Used a lot in today's cameras.
  • Ray tracing - Do I even need to say more?
  • Use the "Structure from Motion" algorithm to create 3D coordinates from 2D images - This is used in things like AR when determining the size of something from camera input.
  • Speech recognition with PocketSphinx - A very widely used speech recognition library. 
  • Image classification using MobileNet v1 - A pretty typical machine learning workload where it classifies pictures based on what it detects in them. Used in a lot of modern gallery apps, among other things.
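
As a concrete example of the Gaussian blur bullet above, this is roughly what the operation looks like with Pillow. Geekbench ships its own implementation, so this is only the same idea rather than its code, and "photo.jpg" is a placeholder path:

```python
from PIL import Image, ImageFilter

# Load an image, blur it with a Gaussian kernel, and save the result.
image = Image.open("photo.jpg")             # placeholder input file
blurred = image.filter(ImageFilter.GaussianBlur(radius=4))
blurred.save("photo_blurred.jpg")
```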

 

 

The main criticism of Geekbench I have seen, from people who aren't just parroting what they have heard others say without understanding why, is that the dataset is rather small. This means a lot of the workloads don't have to wait for data to be fetched from RAM or the HDD/SSD, so a processor with a weak and slow memory interface is not penalized, because the RAM is barely used. 

 

The datasets in SPEC are much larger; as a result, those tests can take hours to run and put more importance on RAM.

 

The reasons why I don't think this argument (the only somewhat valid criticism of Geekbench I have seen so far) holds much water when talking about the M1 are:

1) Most people don't load 100GB databases. Doing image manipulation on 9MP images, and things like that, is a pretty good indication of performance for what the average Joe might do on their phone or computer. 

 

2) The datasets have gotten bigger in later versions of Geekbench. Still nowhere near as big as those used in SPEC, but I would say they are a decent size these days.

 

3) The M1 and its derivatives have a FANTASTIC memory interface, way better than what Intel and AMD offer. If Geekbench put more pressure on memory, chances are the M1 models would pull even further ahead of AMD and Intel than they already do. If anything, Geekbench favours Intel and AMD over Apple, since it doesn't let Apple's chips show off their far superior memory interface.


34 minutes ago, RedRound2 said:

This is such a stupid argument. Why won't people use the correct APIs? Do you have a death wish for your own business, intentionally sticking with legacy technology and not taking advantage of the massive performance and efficiency gains from newer APIs?

The simple fact is that Apple's hardware can perform when apps are written correctly, and when they are not, it still performs about as well as its reasonable counterparts. Those are extremely impressive achievements, and people who actually use these machines wholeheartedly agree. So I don't know why you keep bringing up such arguments.

That is again an extremely dumb conclusion. Whatever is better ends up leading the industry. Historically, CISC-based designs were simply better than RISC-based ones; that's why RISC died off. Except today Apple Silicon has the best perf/watt, and its iGPUs are on par with dedicated graphics cards with dedicated coolers. The only reason ARM will take more time to take over x86 is that Apple only makes chips for itself. So we will need other companies, Qualcomm or anyone else, to catch up to Apple first before we even think about the end of x86.

The adoption rate is defined by mass consumers, not small tech groups like us, and believe me, they don't give jack shit about ARM vs x86. They only care about how much work gets done in a given period and how good the efficiency/battery life is, and Apple Silicon has done wonders in both.

Because what Apple thinks are the correct APIs and what the rest of the industry thinks are the correct APIs can be two different things, as evidenced by this test where the Apple hardware excels but many others where it gets worked. I don't use Photoshop or video editing software, but for the almost 20 years I've worked in the professional architectural BIM/CAD environment, Apple hardware/OS has had somewhere between no support and shitty support. That only changed with Boot Camp, when they were running compatible hardware and you could run Windows, but even then it was a waste of money to pay more for a Mac with limited upgrade potential and less end-user control over the hardware.

 

x86 is going to continue leading the industry for a long time; you guys are delusional. Again, this has been attempted multiple times and failed, because open hardware standards are going to triumph in the desktop space over locked-in environments. Outside of a few niche professions, the range of hardware offered in x86 systems is more important to users. Apple and Qualcomm have no interest in opening up hardware, so that alone is never going to tip the scales in ARM's favor.

 

And how many x86 systems are being purchased and used every day? How many SIs assemble x86 systems? How many people are buying used x86 parts on eBay or Craigslist and throwing computers together? x86 has dominated and will keep dominating the industry. I don't really have issues with Apple hardware (I can scan around my room and see 3 Macs, including one with a G4, and I have a G3 in another room), but you guys are just fanboying incredibly hard and it's a bit weird. All data, logic, and historical precedent refute this claim that ARM will be replacing x86.

My Current Setup:

AMD Ryzen 5900X

Kingston HyperX Fury 3200mhz 2x16GB

MSI B450 Gaming Plus

Cooler Master Hyper 212 Evo

EVGA RTX 3060 Ti XC

Samsung 970 EVO Plus 2TB

WD 5400RPM 2TB

EVGA G3 750W

Corsair Carbide 300R

Arctic Fans 140mm x4 120mm x 1

 


1 hour ago, atxcyclist said:

Because what Apple thinks are the correct APIs and what the rest of the industry thinks are the correct APIs can be two different things, as evidenced by this test where the Apple hardware excels but many others where it gets worked. I don't use Photoshop or video editing software, but for the almost 20 years I've worked in the professional architectural BIM/CAD environment, Apple hardware/OS has had somewhere between no support and shitty support. That only changed with Boot Camp, when they were running compatible hardware and you could run Windows, but even then it was a waste of money to pay more for a Mac with limited upgrade potential and less end-user control over the hardware.

Apple has their own ecosystem and OS. They decide what is correct and what is not; that is established. If anyone is developing something for macOS, I think it is a basic expectation that they use Apple's technology as well. If they don't, someone else will. So again, unless you have a death wish for your own business, you had better migrate to the new technologies Apple uses. The reason Windows is as fragmented and clunky as it is today is that Microsoft has been forced to support all the old shit.

1 hour ago, atxcyclist said:

x86 is going to continue leading the industry for a long time; you guys are delusional. Again, this has been attempted multiple times and failed, because open hardware standards are going to triumph in the desktop space over locked-in environments. Outside of a few niche professions, the range of hardware offered in x86 systems is more important to users. Apple and Qualcomm have no interest in opening up hardware, so that alone is never going to tip the scales in ARM's favor.

People were calling us delusional when we said Apple was going to switch Macs to ARM. People called us delusional when the M1 and its performance claims were released.

What is open about x86 that isn't about ARM? Isn't x86 owned by Intel and just licensed to AMD, and x86-64 vice versa?

As far as the industry is concerned, ARM seems to be a much more open-to-anyone platform, where anyone can develop their own custom chips for a modest fee paid to ARM. It's much more difficult for a new x86 company to come up than it is to adopt ARM. And if we're talking about RISC-V, it's free.

 

Just because shitty attempts were made in a completely different era doesn't mean it's never going to happen in the future. That is probably the stupidest thing anyone could say. Just because da Vinci and the Wright brothers failed in their first attempts at flying doesn't mean flying machines were always going to be impossible. Oh wait, whoops, airplanes are actually a thing today.

 

1 hour ago, atxcyclist said:

And how many x86 systems are being purchased and used every day? How many SIs assemble x86 systems? How many people are buying used x86 parts on eBay or Craigslist and throwing computers together? x86 has dominated and will keep dominating the industry. I don't really have issues with Apple hardware (I can scan around my room and see 3 Macs, including one with a G4, and I have a G3 in another room), but you guys are just fanboying incredibly hard and it's a bit weird. All data, logic, and historical precedent refute this claim that ARM will be replacing x86.

You are talking about today. Do you not have any concept of time? Obviously x86 has been ruling the consumer space for a long while, hence why it's widely available on Craigslist, in shops, etc. But in 5 years, all Apple products and Macs (which basically exist independently in their own bubble) will be completely ARM. Those same places will be selling Apple Silicon-equipped computers.

 

And the fact that Apple was able to pull this off and basically catch up to, and even exceed, established long-term players in the CPU and GPU space proves in concept the potential of ARM chips going forward. Now it is just a matter of time before someone else, whether it's Qualcomm or anyone else, replicates this success and provides chips to every non-Apple company. If Intel and AMD keep up with performance and efficiency, they will last longer, but the moment they lose ground it's basically game over for them, unless they also start doing hybrid architectures and basically shift to ARM to keep up.


On 3/9/2022 at 10:35 PM, AluminiumTech said:

If you take Apple's word for it, the M1 Ultra graphics is 80% faster than a W6900X which is a 6900XT and thus the 3090. 🤣

The GPU is the most interesting thing about the M1 Ultra but not because Apple claims it's as fast as a 3090. People just brush over the fact that the M1 Ultra is the first non-monolithic GPU in the consumer space. How does the interconnection between the two dice work? Where are the bottlenecks? How will it scale with different workloads? So many questions, so many things that could go horribly wrong. I'm excited. 

 

The CPU, on the other hand, is just the same boring two-year-old design. So who cares? 

 


1 hour ago, LAwLz said:

Okay, since a few people in this thread have questioned Geekbench and said things like "it's just one benchmark" and "it doesn't test real world stuff", I thought I'd investigate a little bit.


You forgot another complaint against GB: that it is short and bursty, thus allowing CPUs to stay more or less constantly at full boost. 
 

Of course, that's again a factor that favours x86 over AS, since AS does not boost. 


17 minutes ago, HenrySalayne said:

The GPU is the most interesting thing about the M1 Ultra but not because Apple claims it's as fast as a 3090. People just brush over the fact that the M1 Ultra is the first non-monolithic GPU in the consumer space. How does the interconnection between the two dice work? Where are the bottlenecks? How will it scale with different workloads? So many questions, so many things that could go horribly wrong. I'm excited. 

That's very true.

Their interconnect is probably the most impressive new piece of info from the presentation.

 

Not that many months ago, AMD announced that they had created "Infinity Fabric 3.0" which allowed up to 400GB/s and would be used in their multi-chip GPUs.

Meanwhile, Apple casually drops an announcement of a 1.25TB/s interconnect just like that (and that's the conservative reading, assuming the 2.5TB/s number Apple posted counts both directions).

 

 

The latest EPYC processors do not support anything higher than 1600MHz Infinity Fabric, which in turn means it caps at 187GB/s.
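
Putting the figures quoted above side by side (taking 1.25TB/s as the conservative, one-direction reading of Apple's number):

```python
# Interconnect bandwidth figures from the posts above, in GB/s.
bandwidth_gbps = {
    "Apple UltraFusion (conservative reading)": 1250,
    "AMD Infinity Fabric 3.0 (announced for multi-chip GPUs)": 400,
    "EPYC Infinity Fabric at 1600MHz": 187,
}

baseline = bandwidth_gbps["EPYC Infinity Fabric at 1600MHz"]
for name, gbps in bandwidth_gbps.items():
    print(f"{name}: {gbps} GB/s ({gbps / baseline:.1f}x the EPYC link)")
```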

 

Apple's "UltraFusion" interconnect is honestly amazing.

 

 

 

If other vendors start implementing similar things, we might see a drastic change in how GPUs are manufactured. Instead of having a massive die that costs a fortune to make, maybe we can link multiple chiplets together using these types of interconnects and still present them to the OS as a single GPU. It would probably result in far lower manufacturing costs.


8 minutes ago, Spindel said:

Of course, that's again a factor that favours x86 over AS, since AS does not boost. 

You mean AS could be faster but Apple forgot to add the boosting functionality? 

What an oversight! Disable boosting on all other products so they won't have an advantage! 

 

I don't know why people try to split hairs over a few percent of performance difference every single time. The truth is, 30% is the "it's noticeable" threshold in most scenarios. 

Here two completely different architectures are put to the test and the results are (as expected) all over the place. No clear winner in any category but power consumption. But it's basically still the same M1 that launched in 2020. 

 

The more interesting design will be the M2. Apple cannot switch to another architecture again to gain a massive performance/watt increase, and Intel and AMD are not sleeping (any more ^^): we have seen around 20% intergenerational IPC increases with both of their latest iterations. It will be quite exciting to see whether Apple can keep up the pace with their silicon, or whether we see stuff like "boosting" or much higher power requirements in the future. 


12 minutes ago, LAwLz said:

If other vendors start implementing similar things, we might see a drastic change in how GPUs are manufactured. Instead of having a massive die that costs a fortune to make, maybe we can link multiple chiplets together using these types of interconnects and still present them to the OS as a single GPU. It would probably result in far lower manufacturing costs.

This is not the holy grail of GPU design. It's not like Apple reinvented the wheel here; they just used an expensive piece of technology and somehow mitigated the drawbacks. Rumor has it that the tight integration of CPU, GPU and software is what makes this feasible. But we will have to wait and see. 

The manufacturing costs of a silicon-interposer design are generally higher than those of a monolithic design. 

