
Apple M1 Ultra - 2nd highest multicore score, lost to 64-core AMD Threadripper.

TheReal1980
2 hours ago, atxcyclist said:

 

"with the correct API" is essentially what I'm getting at. This is the inherent downside of RISC architecture, there are correct APIs and then there are not. Apple is trying to obfuscate that by carefully selecting benchmarks that their hardware excels at, but in general performance it doesn't really live up to their claims. With Apple having near complete control over the OS and hardware they sell, this might be fine for the workloads they allow, but it's still not 100% straightforward of a comparison.

 

What the heck are you getting at? "Correct API"? APIs are software and have nothing to do with hardware in that context.

 

There is nothing stopping Microsoft or Apple from just implementing CUDA other than not-invented-here-ism. If Google can re-implement Java and get away with it, so can Apple and Microsoft. They just won't, because they would rather you use their tools and hardware. The fact is, if I were to write something for Apple's ML frameworks, it would work on Intel, AMD and Apple Silicon. That's just not what ML is written in; effectively 100% of it targets NVIDIA, because NVIDIA has the first-mover advantage, and hence the TensorFlow- and PyTorch-based frameworks only support CUDA and are written with that assumption. Adding an Intel or Apple plugin to TensorFlow doesn't automatically make existing code use that hardware or those CPU implementations; it just makes them an option within the TensorFlow or PyTorch API.
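To make the plugin point concrete, here's a minimal sketch of what that looks like from the TensorFlow side (assuming Apple's tensorflow-metal PluggableDevice package is installed on an Apple Silicon Mac; exact package and device names are assumptions):

```python
# Minimal sketch of the plugin route described above: with a vendor
# PluggableDevice installed (e.g. "pip install tensorflow tensorflow-metal"
# on an Apple Silicon Mac), TensorFlow simply exposes an extra GPU device.
import tensorflow as tf

# With the plugin present this should list a GPU; on a plain CPU-only install
# the list is just empty. The model code itself is identical either way.
print("Visible GPU devices:", tf.config.list_physical_devices("GPU"))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")  # ops land on whatever device is available
```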

 

That lack of CUDA also means games won't use it, because it's not available on consoles that don't use an NVIDIA GPU. It's essentially a PC-game-only feature.


If you marry your game to NVIDIA GameWorks, then you're going to have to avoid using features that can't run on the console parts. The same holds true for games running on top of Proton on Linux, or on the Mac.

 

These are not APIs that are inherently married to NVIDIA hardware; it's just not in NVIDIA's interest to support AMD, Intel and Apple Silicon GPUs. As NVIDIA, you want your customers to use your hardware.

 

Apple is no different here. What makes Apple attractive over Windows is video and audio work, because the high-end parts can use hardware encoding/decoding frameworks in the OS that don't exist on Windows. Windows software does still have software options, but they tend to be worse or hard to enable (e.g. h.265 support might only be available on RTX cards or 7th-gen Intel GPUs, but not AMD GPUs on Windows). Sometimes those codecs are actually driven by CUDA in the Windows versions, so Intel and AMD don't get acceleration at all.
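If you want to see which of those hardware paths your own machine actually exposes, here's a minimal sketch (assuming ffmpeg is installed and on PATH; hardware-backed entries such as hevc_videotoolbox, hevc_nvenc, hevc_qsv or hevc_amf only show up if ffmpeg was built with that support):

```python
# Minimal sketch: list which H.265/HEVC encoders a local ffmpeg build exposes.
# Assumes ffmpeg is installed and on PATH; hardware encoders only appear if
# the build includes them (VideoToolbox, NVENC, QuickSync, AMF, ...).
import subprocess

out = subprocess.run(
    ["ffmpeg", "-hide_banner", "-encoders"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if "hevc" in line:
        print(line.strip())
```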

 

Anyway. All this complaining about APIs applies just as well to DirectX/Direct3D on Windows. Games built for an ARM Windows machine aren't going to happen for years, and that's Microsoft's own fault for not making Visual Studio produce "fat binaries" the way Xcode does.
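On the fat-binary point, here's a minimal sketch of checking which architectures a macOS binary actually contains (assumes macOS with the Xcode command-line tools installed; the path below is just an example):

```python
# Minimal sketch: inspect which CPU architectures a macOS "fat"/universal
# binary contains, using the lipo tool from the Xcode command line tools.
import subprocess

binary = "/Applications/Safari.app/Contents/MacOS/Safari"  # example path
archs = subprocess.run(
    ["lipo", "-archs", binary],
    capture_output=True, text=True, check=True,
).stdout.split()
print(f"{binary}: {archs}")  # e.g. ['x86_64', 'arm64'] for a universal binary
```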


1 hour ago, HenrySalayne said:

The CPU, on the other hand, is just the same boring two-year-old design. So who cares?

But isn't the CPU doubled as well as the GPU? So what it can do seems like it will be pretty interesting to find out too.

🖥️ Motherboard: MSI A320M PRO-VH PLUS  ** Processor: AMD Ryzen 2600 3.4 GHz ** Video Card: Nvidia GeForce 1070 TI 8GB Zotac 1070ti 🖥️
🖥️ Memory: 32GB DDR4 2400  ** Power Supply: 650 Watts Power Supply Thermaltake +80 Bronze Thermaltake PSU 🖥️

🍎 2012 iMac i7 27";  2007 MBP 2.2 GHZ; Power Mac G5 Dual 2GHZ; B&W G3; Quadra 650; Mac SE 🍎

🍎 iPad Air2; iPhone SE 2020; iPhone 5s; AppleTV 4k 🍎


2 hours ago, Video Beagle said:

But isn't the CPU doubled as well as the GPU? So what it can do seems like it will be pretty interesting to find out too.

Meh. Using CPU chiplets is nothing new; AMD made their Infinity Fabric work pretty well (and they not only have CCDs but also an IO die). So why should Apple, with a silicon interposer and 2.5 TB/s of bandwidth, do any worse? And they have full control over the software and the scheduler (which is a bigger concern for CPU performance than the bus connecting the cores, as we have seen with the scheduler problems of Windows 11).


27 minutes ago, HenrySalayne said:

Meh. Using CPU chiplets is nothing new; AMD made their Infinity Fabric work pretty well (and they not only have CCDs but also an IO die). So why should Apple, with a silicon interposer and 2.5 TB/s of bandwidth, do any worse?

We have yet to see how their die-to-die interface is implemented in detail. It might be that they are much closer to actually abutting the two dies rather than going die-to-substrate/interposer-to-die. At this scale, every off-die/TSV via and every micrometre of added bond wire can be significant. If their presentation is anything to go by, this is not the classic multi-chiplet approach we know from AMD. Does anyone know the bandwidth of Infinity Fabric? By the way, not having a separate IO die is a huge advantage in terms of speed and power. However, it also limits scalability, so for the next generation they'll need interfaces on at least two edges of the die to scale beyond two dies.


5 hours ago, RedRound2 said:

Apple has their own ecosystem and OS. They decide what is correct and what is not. This is something that is established. If anyone is developing something for macOS, I think it is a basic expectation that they use Apple's technology as well. If they don't, someone else will. So again, unless you have a death wish for your own business, you had better migrate to the new technologies Apple uses. The reason Windows is as fragmented and clunky as it is today is because Microsoft has been forced to support all the old shit.

People were calling us delusional when we said Apple was going to switch to ARM on Macs. People called us delusional when M1 and its performance claims were released.

What is open about x86 that isn't true of ARM? Isn't x86 owned by Intel and merely licensed to AMD, and x86-64 the other way around?

As far as the industry is concerned, ARM seems to be a much more open platform, where anyone can develop their own custom chips for a modest fee paid to ARM. It's much harder for a new x86 company to come up than it is to adopt ARM. And if we're talking about RISC-V, it's free.

 

Just because shitty attempts were made in a completely different era doesn't mean it's never going to happen in the future. That is probably the stupidest thing anyone could say. Just because da Vinci and the Wright brothers failed in their first attempts at flight doesn't mean flying machines were always going to be impossible. Oh wait, whoops, airplanes are actually a thing today.

 

You are talking about today. Do you have no concept of time? Obviously x86 has ruled the consumer space for a long while, which is why it's so widely available on Craigslist, in shops, and so on. But in five years, all Apple products and Macs (which basically exist in their own bubble) will be completely ARM, and those same places will be selling Apple Silicon computers.

 

And the fact that Apple was able to pull this off and basically catch up with, and even exceed, established long-term players in the CPU and GPU space is a proof of concept for the potential of ARM chips. Now it is just a matter of time before someone else, whether it's Qualcomm or not, replicates this success and provides chips to every non-Apple company. If Intel and AMD keep up on power and efficiency they will last longer, but the moment they lose ground it's basically game over for them, unless they also adopt hybrid architectures and basically shift to ARM to keep up.

Nothing Apple makes is focused on my industry, and none of the software I use will work in whatever limited ecosystem Apple allows, because it never has. You can write some huge wall of text, but it doesn't change the fact that you have no clue what other professional people use their hardware for.

 

Have fun fanboying; power x86 users are not moving to ARM and x86 will always be dominant in many industries, so chill out.

My Current Setup:

AMD Ryzen 5900X

Kingston HyperX Fury 3200mhz 2x16GB

MSI B450 Gaming Plus

Cooler Master Hyper 212 Evo

EVGA RTX 3060 Ti XC

Samsung 970 EVO Plus 2TB

WD 5400RPM 2TB

EVGA G3 750W

Corsair Carbide 300R

Arctic Fans 140mm x4 120mm x 1

 


34 minutes ago, atxcyclist said:

power x86 users are not moving to ARM

Really? Because Apple has literally been the fastest-growing PC maker in every quarter of the last year at 28% growth. 

 

Watch your back - you'll find your next employer using Apple Silicon before you know it. All Apple has to do is have every product available and competitive when Windows 10's end of life arrives in 2025. They've already made easy-to-use management software available ("Apple Business Essentials") for a compelling price.


1 hour ago, Dracarris said:

Does anyone know the bandwidth of Infinity Fabric?

Yes. It depends on which generation and your RAM, but I was quoting maximum numbers earlier:

5 hours ago, LAwLz said:

Not that many months ago, AMD announced that they had created "Infinity Fabric 3.0" which allowed up to 400GB/s and would be used in their multi-chip GPUs.

Meanwhile, Apple casually drops an announcement of a 1.25TB/s interconnect just like that (and that's assuming the 2.5TB/s number Apple posted was not bi-directional).

 

The latest EPYC processors do not support anything higher than 1600MHz Infinity Fabric, which in turn means it caps at 187GB/s.
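Putting the figures quoted in this thread side by side, here's a minimal arithmetic sketch (each number is treated as a peak, one-direction figure, which is an assumption):

```python
# Minimal sketch: compare the interconnect numbers quoted in this thread.
# All treated as peak one-direction figures, which is an assumption.
GB = 1e9  # decimal gigabytes, as the marketing numbers use

links = {
    "EPYC Infinity Fabric @ 1600 MHz (quoted cap)": 187 * GB,
    "Infinity Fabric 3.0 (announced, multi-chip GPUs)": 400 * GB,
    "Apple UltraFusion if 2.5 TB/s is bi-directional": 1250 * GB,
    "Apple UltraFusion if 2.5 TB/s is one-way": 2500 * GB,
}

baseline = links["Infinity Fabric 3.0 (announced, multi-chip GPUs)"]
for name, bw in links.items():
    print(f"{name}: {bw / GB:,.0f} GB/s ({bw / baseline:.1f}x IF 3.0)")
```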

 


30 minutes ago, atxcyclist said:

Nothing Apple makes is focused on my industry, and none of the software I use will work in whatever limited ecosystem Apple allows, because it never has. You can write some huge wall of text, but it doesn't change the fact that you have no clue what other professional people use their hardware for.

 

Have fun fanboying; power x86 users are not moving to ARM and x86 will always be dominant in many industries, so chill out.

Xilinx got bought by AMD, and Altera is now "Intel FPGA" - I feel you, being an EE who sometimes needs to work with FPGAs. I highly doubt that _these_ companies (despite focusing on Linux) offer ARM64 binaries.

With that, an M1 Mac is out of the question for any FPGA use case (yes, yes, I know, Parallels running Windows-on-ARM emulation, yada yada - no professional would do that, because there is no licence for it).
 

I wish that Altium Designer would get ported to macOS (it's currently Windows-only as far as I know), because... Altium is a de-facto standard in my industry. The same goes for NVIDIA JetPack, which requires an Ubuntu 18.04 (yes, they are STILL on kernel 4.9!) x86_64 machine for flashing the Jetson modules.
So many development processes require x86, simply because other companies don't provide ARM/macOS binaries.
 

The recent Macs look so appealing to me, but I'm stuck with x86.


I wonder if Microsoft will ever accept Apple's invitation to make Windows 11 available on macOS. Apple has told them to go right ahead - they have no objection - but Microsoft apparently signed a ~5 year long exclusivity deal on Windows on ARM with Qualcomm, who proceeded to do laughably little with their half decade of exclusivity.

 

Can't help but pity them both a bit. Half a decade of exclusivity should be motivation for any chip maker to do the most they can with it. Instead, Qualcomm just iterated on a smartphone design (the "8cx", which has had a Gen 1, Gen 2, and Gen 3), and Microsoft, realizing this was going nowhere, lagged behind on 64-bit Intel app support, which in turn hurt sales, which only incentivized Qualcomm to keep lagging behind and not invest in it.

 

Now that that exclusivity deal may have recently expired (depending on what "soon" meant back in November), maybe Microsoft is working on it right now and we'll get a WWDC surprise. But I wouldn't hold my breath.


10 hours ago, RedRound2 said:

irritates me how Apple doesn't get enough recognition from the "tech nerds" group, who can actually admire the engineering marvel they pulled off, because "brrr Apple evil".

It would help if their OS wasn't a big steaming pile of crap.

 

I think it's great that they've kept the UI basically the same since the early days of macOS; it made jumping back in relatively easy, even though the last time I used a Mac was an LC II. But they're doing some very weird stuff in the underlying OS that is hugely frustrating.

 

I've seen plenty of posts from people like myself who jumped into the Mac for the first time in decades when the M1 launched, so I wouldn't say it's universal that all tech nerds are hating on them. It's basically the same sort of people who have been hating on Intel and NVIDIA: fanboys. Brand loyalty never makes any sense.

Router:  Intel N100 (pfSense) WiFi6: Zyxel NWA210AX (1.7Gbit peak at 160Mhz)
WiFi5: Ubiquiti NanoHD OpenWRT (~500Mbit at 80Mhz) Switches: Netgear MS510TXUP, MS510TXPP, GS110EMX
ISPs: Zen Full Fibre 900 (~930Mbit down, 115Mbit up) + Three 5G (~800Mbit down, 115Mbit up)
Upgrading Laptop/Desktop CNVIo WiFi 5 cards to PCIe WiFi6e/7


1 hour ago, Laborant said:

I wish that Altium Designer would get ported to macOS (it's currently Windows-only as far as I know), because... Altium is a de-facto standard in my industry.

That, and every industry-standard CAD suite like AutoCAD, Siemens NX, SolidWorks and so on finally needs to be available on macOS. The stubbornness in this industry, the refusal to open up to new horizons, is staggering. They are as bad as banks and insurance companies who will keep doing things the way they've been done for 30 years, because comfort, tradition and reasons.


5 hours ago, Video Beagle said:

But isn't the CPU doubled as well as the GPU? So what it can do seems like it will be pretty interesting to find out too.

I'm personally more interested in the doubling of the Neural Engine; this could be huge for bringing the performance of AI workloads closer to the PC.

On the M1 Pro, software like Topaz Video Enhance didn't get close to the 2080, because the Neural Engine was the same as the M1's. It DID improve, in fact by a fair bit thanks to some recent tweaks that use the GPU cores more, so this could be pretty huge. Unfortunately, none of the reviewers seem to bother testing AI workloads.
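For anyone wanting to poke at the Neural Engine themselves, here's a minimal sketch using the coremltools Python package (the model file name is a placeholder, and whether work actually lands on the Neural Engine is ultimately up to Core ML's scheduler):

```python
# Minimal sketch: asking Core ML to use the Neural Engine from Python.
# "SomeUpscaler.mlpackage" is a placeholder model file; Core ML decides at
# runtime which ops actually run on the ANE, GPU or CPU.
import coremltools as ct

# ALL = CPU + GPU + Neural Engine; CPU_AND_NE is handy when probing ANE behaviour.
model = ct.models.MLModel("SomeUpscaler.mlpackage",
                          compute_units=ct.ComputeUnit.ALL)
ane_biased = ct.models.MLModel("SomeUpscaler.mlpackage",
                               compute_units=ct.ComputeUnit.CPU_AND_NE)
```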



2 hours ago, gjsman said:

Really? Because Apple has literally been the fastest-growing PC maker in every quarter of the last year at 28% growth. 

 

Watch your back - you'll find your next employer using Apple Silicon before you know it. All Apple has to do is have every product available and competitive when Windows 10's end of life arrives in 2025. They've already made easy-to-use management software available ("Apple Business Essentials") for a compelling price.

Yes, really. Apple having a closed ecosystem and non-upgradable hardware will not fly in many industries. You as well as many of the other Apple fanboys here don’t understand the needs/wants of x86 power users.

 

I don’t have to watch my back, lol. You Apple fanboys are in the idle threats game now? You going to break my kneecaps if I don’t change?
 

My industry doesn't use and isn't supported by Apple hardware or software. You fanboys need to put the Kool-Aid down and realize we don't all live in your little ecosystem and don't want to.
 

I am the IT person at my office, and we're never switching to Apple because it doesn't work for us. I know this better than you do; I've been in this business for two decades.



33 minutes ago, atxcyclist said:

My industry doesn't use and isn't supported by Apple hardware or software. You fanboys need to put the Kool-Aid down and realize we don't all live in your little ecosystem and don't want to.

I rarely comment, but...

You do get the irony of your statement, right?

"The most important step a man can take. It’s not the first one, is it?
It’s the next one. Always the next step, Dalinar."
–Chapter 118, Oathbringer, Stormlight Archive #3 by Brandon Sanderson

 

 

Older stuff:


"A high ideal missed by a little, is far better than low ideal that is achievable, yet far less effective"

 

If you think I'm wrong, correct me. If I've offended you in some way tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; Being wrong helps you learn what's right.

 


16 minutes ago, J-from-Nucleon said:

I rarely comment, but...

You do get the irony of your statement, right?

Just because Apple has had growth doesn't mean they're dominating the prosumer and workstation sector. x86 hardware adoption is historically so far beyond Apple's currently deployed share in many industries that, even at current growth, it won't be touched any time soon. It also doesn't make them any more relevant to people in industries that need upgradability and are not supported by Apple's proprietary hardware and software.

 

There's nothing ironic about what I've stated. In this thread a lot of Apple fanboys have tried to explain to me how my industry works, and I'm not inclined to agree with them because I know better than they do.



15 minutes ago, Paul Thexton said:

Some reference I don’t get.gif

 

It’s no surprise that knowledge of the industry you work in overrides fanboys and trolls.



11 hours ago, LAwLz said:

Okay, since a few people in this thread have questioned Geekbench and said things like "it's just one benchmark" and "it doesn't test real world stuff", I thought I'd investigate a little bit.

Here are some facts about Geekbench.

 

Geekbench is a benchmarking suite. It is not a single benchmark, but rather it is a collection of various benchmarks where the score is aggregated into a single number at the end. 5% of the score is based on the result from the cryptographic workloads. 65% is based on the integer workloads, and the remaining 30% is based on the FP workloads.
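As a rough illustration of how those weights combine the subsection results into one number, here's a minimal sketch (assuming a simple weighted arithmetic mean of subsection scores; Geekbench's exact aggregation method may differ):

```python
# Minimal sketch: combining subsection scores with the 5/65/30 weighting
# mentioned above. Assumes a weighted arithmetic mean; Geekbench's exact
# aggregation method may differ.
WEIGHTS = {"crypto": 0.05, "integer": 0.65, "float": 0.30}

def overall_score(subsections):
    """Weighted mean of the crypto/integer/float subsection scores."""
    return sum(WEIGHTS[name] * score for name, score in subsections.items())

# Hypothetical subsection scores, just to show the arithmetic:
print(overall_score({"crypto": 2500, "integer": 1700, "float": 1800}))  # 1770.0
```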

 

As for how real world it is, here is a list of applications that are included in the Geekbench 5 CPU suite, and what real life workload they match:

 

Cryptographic workload:

  • Encrypt a file with AES-XTS using a 256-bit key - This is done all the time: when you visit a website, when you save a file to an encrypted drive, and so on. You do it hundreds of times a day without even realizing it.

Integer workload:

  • Compress and decompress an ebook using LZMA - As the test itself describes, this is relevant whenever you open a compressed file that uses LZMA. It is a very widely used compression algorithm, found in everything from ebook formats to 7-zip; in fact, it is the foundational compression algorithm used in 7-zip. (A minimal sketch of this round-trip follows the list below.)
  • Compress and decompress a JPEG file (using libjpeg-turbo) and a PNG file (using libpng). - Do I even need to explain why this is very "real world"? It happens all the time.
  • Calculate the route from one point in Ontario, Canada to another, with 12 destinations along the way, using Dijkstra's algorithm. - This is not just the type of calculation done in GPS applications, but also for pathfinding AI in games.
  • Create a DOM element in an HTML file and then extend it using JavaScript. - Again, this is a decent HTML5 test, which is something we exercise every day when browsing the web.
  • Run a bunch of queries against an SQLite database (Geekbench acts as both the client and the server). - This is similar to how quite a few applications work, storing data in an internal database rather than in files. I'm not sure how common it is in everyday applications, but it is used a lot in enterprise-grade software. In any case, it's a good test.
  • Render a PDF file using the PDFium library, which is lifted out of Chrome. - Ever opened a PDF file in Chrome? Then this test applies to you.
  • Open a markdown-formatted text document and render it as a bitmap. - Maybe not something you do every day, but I still think it's a pretty interesting test. It also does some interesting things like aligning the text so that it doesn't spill outside the bitmap. I'm not sure how widely used these things are; it would save storage at the expense of compute in, for example, a game, but I am not sure if developers actually do that.
  • Compile a 729-line C file using Clang - A lot of people use Clang/LLVM when they compile code, so this test is highly relevant.
  • Image manipulation: crop a photo, adjust contrast, apply filters like blur and the like, then compress it into a JPEG file. It then generates a thumbnail and stores it in an SQLite database. - This is what people do on their phones all the time. It is highly real-world.
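As referenced in the LZMA item above, here's a minimal sketch of that compress/decompress round-trip using Python's built-in lzma module (the sample text is arbitrary):

```python
# Minimal sketch: LZMA compress/decompress round-trip with the standard
# library, the same kind of work as the ebook test above.
import lzma

original = ("Call me Ishmael. " * 1000).encode("utf-8")  # arbitrary sample text
compressed = lzma.compress(original)
restored = lzma.decompress(compressed)

assert restored == original
print(f"{len(original)} bytes -> {len(compressed)} bytes compressed")
```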

 

 

Floating Point Workloads:

  • N-Body physics - Simulate gravitational forces in 3D space. To be more precise, it simulates 16,384 planets orbiting around a black hole. Maybe not that "real world", but a good test nonetheless. (A minimal single-step sketch follows the list below.)
  • Rigid Body Physics - Simulates bodies that move around, then does things like collision detection and friction calculation. Used in games all the time.
  • Gaussian Blur - This is used EVERYWHERE: from making UI elements blurry (for example, with transparency on in Windows, you can see that things like the taskbar blur whatever is behind them) to Photoshop-like programs.
  • Face detection - This is getting more and more common. Used in things like cameras to determine where to focus.
  • Horizon detection - Take a crooked 9-megapixel photo as input and rotate it so that it is straight. Mostly used in camera apps, but probably in games too when determining the orientation of some user input.
  • "Magic eraser" on a picture - Basically, you mark a section of an image and it automatically removes whatever is there and fills it in with something else. Got a pimple on your face? Remove it with this. Used a lot in beauty filters and the like.
  • Combine 4 SDR images into a single HDR image. - Used a lot in today's cameras.
  • Ray Tracing - Do I even need to say more?
  • Use the "Structure from Motion" algorithm to create 3D coordinates from 2D images. - This is used in things like AR when determining the size of something from camera input.
  • Speech recognition with PocketSphinx - A very widely used speech recognition library.
  • Image classification using MobileNet v1 - A pretty typical machine learning workload that classifies pictures based on what it detects in them. Used in a lot of modern gallery apps, among other things.
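And as referenced in the N-Body item above, here's a minimal single-timestep sketch of that kind of gravitational simulation using NumPy (the particle count and constants are arbitrary, not Geekbench's actual parameters):

```python
# Minimal sketch: one Euler timestep of an N-body gravity simulation, the same
# flavour of floating-point work as the N-Body test above. Particle count and
# constants are arbitrary, not Geekbench's actual parameters.
import numpy as np

N, G, DT, SOFTENING = 1024, 6.674e-11, 1e-3, 1e-9
rng = np.random.default_rng(0)
pos = rng.standard_normal((N, 3))
vel = np.zeros((N, 3))
mass = np.full(N, 1e3)

# Pairwise displacement vectors and softened inverse-cube distances.
diff = pos[None, :, :] - pos[:, None, :]            # (N, N, 3)
dist2 = (diff ** 2).sum(-1) + SOFTENING             # (N, N)
inv_d3 = dist2 ** -1.5
np.fill_diagonal(inv_d3, 0.0)                       # no self-interaction

acc = G * (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)
vel += acc * DT
pos += vel * DT
print("mean speed after one step:", np.linalg.norm(vel, axis=1).mean())
```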

 

 

The main criticism I have seen of Geekbench, from people who aren't just parroting what they have heard others say without understanding why, is that the dataset is rather small. That means a lot of the workloads don't have to wait for data to be fetched from RAM or the HDD/SSD, so a processor with a weak and slow memory interface would not be penalized, because the RAM is barely used.

 

The datasets in SPEC are much larger; as a result those tests can take hours to run and put more importance on RAM.

 

The reasons why I don't think this argument (which is the only somewhat valid criticism of Geekbench I have seen so far) holds much water when talking about the M1 are:

1) Most people don't load 100GB databases. Doing image manipulation on 9MP images, and things like that, is a pretty good indication of performance for what the average Joe might do on their phone or computer.

 

2) The datasets have gotten bigger in later versions of Geekbench. Still nowhere near as big as those used in SPEC, but I would say they are at a decent size these days.

 

3) The M1 and its derivatives have a FANTASTIC memory interface. Way better than what Intel and AMD offer. If Geekbench put more pressure on the memory, then chances are the M1 models would pull even further ahead of AMD and Intel than they already do. If anything, Geekbench favours Intel and AMD over Apple, since it doesn't let Apple's chips show off their far superior memory interface.

One point that you left out is that Geekbench doesn't "force" raw CPU usage, and most of those tasks are offloaded to dedicated hardware, especially on the M1, which has tons of dedicated hardware for a myriad of tasks. It does reflect how well an entire device performs in a general, real-life scenario, but I don't think it's really suitable for measuring raw CPU performance.

 

8 hours ago, Kisai said:

The fact is, if I were to write something for Apple's ML frameworks, it would work on Intel, AMD and Apple Silicon. That's just not what ML is written in; effectively 100% of it targets NVIDIA, because NVIDIA has the first-mover advantage, and hence the TensorFlow- and PyTorch-based frameworks only support CUDA and are written with that assumption. Adding an Intel or Apple plugin to TensorFlow doesn't automatically make existing code use that hardware or those CPU implementations; it just makes them an option within the TensorFlow or PyTorch API.

I get what you were trying to say, but the TensorFlow plugin does make TF use the GPU and CPU extensions for accelerated training, so it does make full use of the hardware.

Intel has nothing like that available at the moment, and AMD has ROCm (which is pretty shit).

 

4 minutes ago, atxcyclist said:

It’s no surprise that knowledge of the industry you work in overrides fanboys and trolls.

So you clearly don't know my industry lol

 

We're adding ARM instances to our k8s clusters because they're cheap and we need more machines; at peak load we regularly have over 5,000 clusters running (each with 3–50 instances). For devs it doesn't make a difference whether they're using M1 Macs or x86 ThinkPads; a laptop is nothing more than a dumb terminal anyway.

 

Your industry might not need or use ARM, but don't generalize from that, because it makes you look as stupid as those Apple fanboys.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


3 hours ago, gjsman said:

Really? Because Apple has literally been the fastest-growing PC maker in every quarter of the last year at 28% growth. 

 

Watch your back - you'll find your next employer using Apple Silicon before you know it. All Apple has to do is have every product available and competitive when Windows 10's end of life arrives in 2025. They've already made easy-to-use management software available ("Apple Business Essentials") for a compelling price.

This is way too cheerleadery, and I use a ton of Apple gear.

 

To start: Mac sales have grown well since M1 arrived, but fastest-growing PC maker in every quarter? Cite your source for that, please, because I've seen other vendors do well (at least worldwide) in other market share estimates.

 

And while Apple Silicon-based Macs are a big step up, you're acting as if Apple merely needs to show up to conquer the workplace... er, no. While the hardware is genuinely better at some important tasks, and Apple has gotten better about enterprise support, there are some areas where Macs (or even ARM-based PCs in general) either lag or aren't advantageous enough to justify the outlay.

 

Industry-specific software is one thing, but there's also the simple matter of cost. Let's say you're running a reasonably large company and have to replace hundreds of computers people use to run Office and some cloud services. Now, a Mac mini will be faster (and quite possibly better-behaved) than a starter business desktop like Dell's entry-level Vostro, but is a budget-conscious IT manager going to buy a fleet of $699 Mac minis (plus the not-included peripherals), or a suite of $489 Vostros with peripherals already included? That's not including Dell's greater experience with business support. You can discuss Mac reliability and relative security as much as you like, but those aren't going to mean much to an IT guy who knows the Windows PCs will be good enough while saving tens of thousands of dollars.
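A quick back-of-the-envelope sketch of that fleet math (the 300-seat quantity is a made-up example; the unit prices are the ones quoted above):

```python
# Minimal sketch: back-of-the-envelope fleet cost difference. The 300-unit
# quantity is a hypothetical example; unit prices are the ones quoted above.
mac_mini, vostro, units = 699, 489, 300
delta = (mac_mini - vostro) * units
print(f"Price gap for {units} seats: ${delta:,}")  # -> $63,000, before peripherals
```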

 

Apple has made progress in business, but it has a long way to go. Right now, its strong suits remain in audiovisual editing and general-purpose computing (at mid-range and premium prices, that is). It'll have to rethink its model strategy and encourage wider software development if it's going to claim a significant stake of work PC sales.

 

The irony: this is the same mindset that toppled Microsoft's dominance of the tech industry. Steve Ballmer acted as if every human being on Earth naturally preferred Windows, and merely making Windows or Office available on a device was enough to sway users. Smash your iPhone against the wall, this is a Windows Phone! The truth is that a platform's success in the market is dictated by a range of factors, and you should never assume victory will be automatic.


16 minutes ago, atxcyclist said:

It’s no surprise that knowledge of the industry you work in overrides fanboys and trolls.

I’ve been a software engineer for 22 years. I’m well aware that software originally written for one platform provides an almost insurmountable amount of inertia when it comes to porting that solution to something different. 
 

You do seem to be coming across as particularly angry in this thread though. Personally I’m just choosing to giggle at predictions of x86_64 somehow becoming defunct (it isn’t, and won’t)


33 minutes ago, igormp said:

So you clearly don't know my industry lol

 

We're adding ARM instances to our k8s clusters because they're cheap and we need more machines; at peak load we regularly have over 5,000 clusters running (each with 3–50 instances). For devs it doesn't make a difference whether they're using M1 Macs or x86 ThinkPads; a laptop is nothing more than a dumb terminal anyway.

 

Your industry might not need or use ARM, but don't generalize from that, because it makes you look as stupid as those Apple fanboys.


And quite frankly, that isn't of any concern to me; do whatever works for your tasks.

 

Throughout this entire discussion I have been agnostic about other industries, with the exception of the one I work in. I've not told anyone in any specific industry that they shouldn't switch, only that my industry needs compatibility and upgradability in our workstation hardware, and Apple's ecosystem doesn't offer either, especially not support for our software.

 

And to me, people making wild claims that Apple/ARM will rise up and dominate the personal computing world just seem ignorant. Many companies have tried and they have all failed; that is reality.

 



3 minutes ago, Paul Thexton said:

I’ve been a software engineer for 22 years. I’m well aware that software originally written for one platform provides an almost insurmountable amount of inertia when it comes to porting that solution to something different. 
 

You do seem to be coming across as particularly angry in this thread though. Personally I’m just choosing to giggle at predictions of x86_64 somehow becoming defunct (it isn’t, and won’t)

My annoyance has everything to do with people calling me out, when I simply stated some industries can not and will not use Apple hardware. 



1 minute ago, atxcyclist said:

My annoyance has everything to do with people calling me out, when I simply stated some industries can not and will not use Apple hardware. 

Yeah that’s fair.

 

For me, I think the biggest hurdle to overcome is software support, and it can be a catch-22.

 

Even if a different platform offers a lot of the features you want, if you can't work without {x} software you'll frequently find that the vendor simply turns around and says "there isn't high enough demand for that". I've worked at companies where that statement is given out without any effort actually being put into measuring the demand, and in my experience it really means "we don't want to invest more than a skeleton staff in maintaining this" (that hasn't been true 100% of the time, but it's not been uncommon in my career).


21 minutes ago, atxcyclist said:

My annoyance has everything to do with people calling me out, when I simply stated some industries can not and will not use Apple hardware. 

And that's true; however, the way you phrased it made it sound like you were both generalizing and also kinda aggressive.



28 minutes ago, igormp said:

And that's true, however the way you phrased that made it sound like you were both generalizing and also kinda aggressive.

I mean, get dog-piled on for no reason and see if you find yourself getting annoyed by it.

 

Too many people took my factual statements about historical actions, including the ones about my industry, personally, so I'm out of this thread.


