
Early benchmarks (Geekbench) of an Asus Windows 10 on ARM device got leaked — and it sucks

I thought we all agreed Geekbench is a terrible way to compare CPUs because it's just flat-out inaccurate.

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022)

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),


x86 processors don't have much of a chance to catch up with their low-power ARM competitors, regardless of Intel's manufacturing advantage. The instruction decoding overhead of the complicated x86 ISA drains too much power, along with the other workarounds that mitigate the bottlenecks of decades-old legacy support. Intel tried to address this with its Atom series of simplified CPU architectures, but the performance hit was too large to be competitive.
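To illustrate the decode argument with a toy sketch (not how real hardware is built): x86 instructions are anywhere from 1 to 15 bytes long, so finding where the next instruction starts depends on having length-decoded the current one, while a fixed-width ISA such as 64-bit ARM can slice a fetched block into instructions trivially. The x86_length() function below is a made-up stand-in for the real, far more involved length decode.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Made-up stand-in: real x86 length decoding must examine prefixes,
// opcode maps, ModRM/SIB and displacement bytes just to find the length.
std::size_t x86_length(std::uint8_t first_byte) {
    return 1 + (first_byte & 0x0e);  // toy rule, 1..15 bytes
}

// Variable-length ISA: each boundary depends on the previous instruction's
// length, so the walk is inherently serial; hardware spends extra logic
// (and power) on predecode/length-marking to parallelise it.
std::vector<std::size_t> variable_boundaries(const std::vector<std::uint8_t>& code) {
    std::vector<std::size_t> starts;
    for (std::size_t off = 0; off < code.size(); off += x86_length(code[off]))
        starts.push_back(off);
    return starts;
}

// Fixed 4-byte encodings: every boundary is known up front, so a wide
// decoder can look at many instructions in parallel.
std::vector<std::size_t> fixed_boundaries(std::size_t code_size) {
    std::vector<std::size_t> starts;
    for (std::size_t off = 0; off < code_size; off += 4)
        starts.push_back(off);
    return starts;
}

int main() {
    std::vector<std::uint8_t> blob(64, 0x03);
    std::printf("%zu variable-length vs %zu fixed-width slots\n",
                variable_boundaries(blob).size(), fixed_boundaries(blob.size()).size());
}
```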

 

There's hope that after 2020, with the deprecation of old BIOS support in both hardware and software, future x86 CPU generations can shed some legacy dead weight (namely, 16-bit real mode) and clean up the architecture for more efficient operation.


15 hours ago, Trixanity said:

It'll drive up costs. It would be better if Intel or AMD integrated ARM cores on die.

AMD already had Project SkyBridge back on the 2015 roadmap. I'm not sure if they ever released a product. 

                     ¸„»°'´¸„»°'´ Vorticalbox `'°«„¸`'°«„¸
`'°«„¸¸„»°'´¸„»°'´`'°«„¸Scientia Potentia est  ¸„»°'´`'°«„¸`'°«„¸¸„»°'´


Let them first finish developing before you judge the outcome!

SW optimisation can improve speed a lot, especially if you do low-level stuff. I once got a 10x improvement in performance from changing a single letter in the code.
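Not the poster's actual change, obviously, but as an illustration of how a tiny edit can swing performance by an order of magnitude: the classic loop-interchange example, where the only difference between the two functions is which index runs in the inner loop (the exact speedup depends on matrix size and cache hierarchy).

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

constexpr std::size_t N = 4096;

// Walks a row-major matrix column by column: consecutive accesses are
// N doubles apart, so nearly every load misses the cache.
double sum_columns_first(const std::vector<double>& m) {
    double s = 0.0;
    for (std::size_t j = 0; j < N; ++j)
        for (std::size_t i = 0; i < N; ++i)
            s += m[i * N + j];
    return s;
}

// Identical arithmetic with the loops swapped: memory is now read
// sequentially, and on typical hardware this is several times faster.
double sum_rows_first(const std::vector<double>& m) {
    double s = 0.0;
    for (std::size_t i = 0; i < N; ++i)
        for (std::size_t j = 0; j < N; ++j)
            s += m[i * N + j];
    return s;
}

int main() {
    std::vector<double> m(N * N, 1.0);
    std::printf("%f %f\n", sum_columns_first(m), sum_rows_first(m));
}
```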

Mineral oil and 40 kg aluminium heat sinks are a perfect combination: 73 cores and a Titan X, Twenty Thousand Leagues Under the Oil


56 minutes ago, vorticalbox said:

AMD already had Project SkyBridge back on the 2015 roadmap. I'm not sure if they ever released a product. 

All of AMD's ARM endeavors have been in the server space, and most have been cancelled as far as I know, including K12. I don't think they've ever worked on integrating low-power cores into mobile processors.


Geekbench... just to add, people even compare the likes of Apple's AX chips to desktop CPUs with it.

Anyway, it would definitely be epic to see x86 take over mobile, but we'll see what happens in the future. Seeing that would be way more interesting.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver) | Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


13 hours ago, Bit_Guardian said:

You couldn't be farther from the truth. Every emulator on the planet does exactly what the JVM does: interpret code and respond with expected values. How it gets to those values is irrelevant (hence many JVMs, but same behavior). The JVM is in fact a more complicated emulator than just about any other, video game consoles included. You're trying to elevate the problem, but I'm sorry, it's reducible to code interpretation.

All right, if we want to be pedantic, perhaps we can draw a parallel between the JVM and an emulator at a theoretical level, but in practice the difference is massive, particularly in performance. The JVM interprets Java code much more efficiently than an x86-to-ARM translator ever could. On top of that, x86-specific compilers deliberately transform code into assembly in a way that's efficient for that architecture, which may be terrible on a different one (and it often is), and that can't be compared with the tailor-built JVM, which still deals with high-level abstractions and can interpret them however it wants.

A competent emulator can convert individual instructions efficiently, but it can't take whole chunks of them and convert them into something more efficient for its target architecture without a huge expenditure of resources. This isn't necessarily about the complexity of the problem, but about how demanding it is to perform. The JVM skips the middleman that is the compiler (sure, it compiles to "Java bytecode", but that bytecode is still tailor-made for the JVM to execute as fast as possible).
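For what it's worth, here's roughly what "interpreting" means at its simplest: a dispatch loop over a compact instruction set designed to be easy for the VM to consume (a toy stack machine below, not real JVM bytecode). A real JVM adds a JIT on top, compiling hot bytecode straight to native code for whichever CPU it's running on; an x86-to-ARM translator has no such luxury, because its input was already lowered, scheduled and register-allocated for a different architecture.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Toy, VM-friendly instruction set -- NOT real JVM bytecode.
enum Op : std::uint8_t { PUSH, ADD, MUL, PRINT, HALT };

// The classic interpreter dispatch loop: fetch an opcode, act on it, repeat.
// A JIT would instead translate hot sequences of these ops into native code
// for whatever host CPU it happens to be running on.
void run(const std::vector<std::uint8_t>& code) {
    std::vector<std::int64_t> stack;
    for (std::size_t pc = 0; pc < code.size();) {
        switch (code[pc++]) {
            case PUSH:  stack.push_back(code[pc++]); break;
            case ADD:   { auto b = stack.back(); stack.pop_back(); stack.back() += b; } break;
            case MUL:   { auto b = stack.back(); stack.pop_back(); stack.back() *= b; } break;
            case PRINT: std::printf("%lld\n", static_cast<long long>(stack.back())); break;
            case HALT:  return;
        }
    }
}

int main() {
    // Computes (2 + 3) * 4 and prints 20.
    run({PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT});
}
```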

13 hours ago, Bit_Guardian said:

ARM is not true RISC, and modern x86 processors use RISCy micro-ops anyway. There's an interpretation (read: emulation) layer literally built in. In short, that's not the performance bottleneck on merit.

This doesn't matter much. We don't know how big the performance difference is, but we definitely know it's there, especially since, as I wrote above, compilers will generate code that's efficient for x86 without considering how well it would translate to other architectures. I'm sure the difference in this case is exacerbated by Geekbench being a shitty benchmark that favours ARM to begin with, but it's delusional to think there is going to be NO performance overhead in normal use.

14 hours ago, Bit_Guardian said:

No, it does not require specific hardware support, not unless that console had something truly exotic such as fixed-function pipeline hardware or a special DSP. This is exactly why languages like C++ are so powerful. They continually prove every point you've made here wrong.

If you'd reread what I wrote you'd notice I never said emulators require special hardware, only that they have "different requirements" - in terms of performance. I probably could have written that better. I'm... not sure why C++ specifically would prove that point (which I didn't make) wrong, but I agree that it would be wrong.

14 hours ago, Bit_Guardian said:

And actually while we're on this subject, the JVM is a monolithic cross-architecture codebase (aka a NoArch executable, which you will observe if you install it on Linux).

Well, the JVM is part of the JRE, so that's sort of a moot point...

 

If you mean the JRE, can you link me a source that proves that the compiled JRE executable runs on different platforms and architectures without magic involved, and explains how it achieves that? I thought the java packages were simply bash scripts which then downloaded the correct executables, as is generally the case with NoArch packages for binary programs. How do you explain the presence of so many different binaries on the website if it is universal?

[attached screenshot: the download page listing separate binaries for each platform]

14 hours ago, Bit_Guardian said:

HSA (and in fact OpenCL 2.1 and the last 3 versions of CUDA) do not require the GPU to report back. It can literally pick up and run. C++ SYCL, OpenACC, and HPX do not require this either.

Except at some point you'll want to know the result of that calculation, at least in most cases, which is what I was referring to. It's technically not required to run the calculation, but in practical uses it almost always is. But that's beside the point.

 

14 hours ago, Bit_Guardian said:

It is possible. It's a solved problem. It's baked into the most recent standards of every major parallel programming framework today. Get started on OpenMP and work your way up and out.

First line of the Wikipedia page on OpenMP:

Quote

OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared memory multiprocessing

The key phrase being "shared memory", which I don't think is possible between a Snapdragon 835 and a Core i5. If it is, I stand corrected, but even then it would require some pretty extensive rewriting of the Windows kernel, wouldn't it? The only "fix" to the shared memory problem would be the (discontinued) Cluster OpenMP, but at that point we're talking about clusters, and to build a cluster out of a Snapdragon and a Core i5 you'd need to use Ethernet (obviously not fast enough) or invent a new standard that lets them communicate on-board, which may or may not require modifications to the CPUs themselves depending on how they're designed. And once again, it would mean drastically changing Windows' codebase.
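For reference, this is what that shared-memory assumption looks like in code: every thread below touches the same vector through ordinary loads and stores, which only works because they all sit behind one coherent memory system. A minimal C++/OpenMP sketch, built with something like g++ -fopenmp:

```cpp
#include <cstdio>
#include <vector>
#include <omp.h>

int main() {
    std::vector<double> data(1 << 20, 1.0);
    double sum = 0.0;

    // All threads read and write the *same* array with plain loads/stores;
    // classic OpenMP assumes one coherent address space shared by every
    // thread, not cores on two different ISAs talking over a fabric.
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < static_cast<long>(data.size()); ++i) {
        data[i] *= 2.0;
        sum += data[i];
    }

    std::printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
}
```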

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


2 hours ago, Sauron said:

The key phrase being "shared memory", which I don't think is possible between a Snapdragon 835 and a Core i5. If it is, I stand corrected, but even then it would require some pretty extensive rewriting of the Windows kernel, wouldn't it? The only "fix" to the shared memory problem would be the (discontinued) Cluster OpenMP, but at that point we're talking about clusters, and to build a cluster out of a Snapdragon and a Core i5 you'd need to use Ethernet (obviously not fast enough) or invent a new standard that lets them communicate on-board, which may or may not require modifications to the CPUs themselves depending on how they're designed. And once again, it would mean drastically changing Windows' codebase.

This very problem is being worked on by the Gen-Z consortium, and it's in very early stages. The only working prototype system is HPE's The Machine, which has a very different purpose in mind; still, what's being worked on would allow this type of thing to happen.

 

Long and short of it: it's not ready yet, won't be for a while, and will show up first in HPC clusters, not in laptops.

 

Quote

Gen-Z: An open systems Interconnect designed to provide memory semantic access to data and devices via direct-attached, switched or fabric topologies.

Quote

Gen-Z is the solution, an open systems interconnect designed to provide memory semantic access to data and devices via direct-attached, switched or fabric topologies. This means Gen-Z will allow any device to communicate with any other device as if it were communicating with its own local memory using super-simple commands, sometimes called load/store protocol, we refer to it as a “memory semantic communications” because it uses the same language as local memory does today. Memory-semantic communications are used to move data between buffers located on different components with minimal overhead. For example, Gen-Z-attached memory can be mapped into a processor memory management unit (MMU) such that any processor load, store, or atomic operation is transparently translated into Gen-Z read, write, or atomic operation and transported to the destination memory component. Similarly, Gen-Z supports buffer put and get operations to move up to 2³² bytes of data between buffers without any processor involvement.

http://genzconsortium.org/about/
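As a loose, software-side analogy of what "memory semantic access" means (a sketch only; this is not Gen-Z's actual programming model, and /dev/fabric_mem0 is a made-up device node): once a fabric-attached region is mapped into the process, the application just issues ordinary loads and stores against it instead of driving an explicit I/O protocol, and the translation into fabric reads and writes happens underneath.

```cpp
#include <cstddef>
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    // Hypothetical device node standing in for a fabric-attached memory
    // region; in a Gen-Z system the MMU/fabric would do the translation.
    int fd = open("/dev/fabric_mem0", O_RDWR);
    if (fd < 0) { std::perror("open"); return 1; }

    const std::size_t len = 4096;
    void* region = mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { std::perror("mmap"); close(fd); return 1; }

    // "Memory semantic" access: plain stores and loads, no read()/write()
    // system calls and no command packets visible to the application.
    auto* p = static_cast<volatile unsigned char*>(region);
    p[0] = 0x42;
    std::printf("read back: 0x%02x\n", static_cast<unsigned>(p[0]));

    munmap(region, len);
    close(fd);
}
```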


Not gonna get into the whole debate about how Java is a prime example of hardware instruction emulation.

 

Microsoft's original focus was to let business folks access their applications on their mobile devices. Remember HP with their phone/laptop hybrid earlier this year? Microsoft always targets business or developer requests before features trickle into the consumer market. So a lot of these benchmarks will be confusing; then again, they're just tests, and we can't see the actual goals from them. Unless Asus is willing to tell us what their goal is, we can assume it's supposed to be an Android/Windows tablet, which Atom CPUs still do a bad job at if you're running mainly Android, better yet running Android with Windows apps on top.

 

Quote

Microsoft promised users that many of their existing x86/64 applications will run just fine on an ARM Windows 10 device.

"Fine" as in it doesn't crash: good enough for an employee to access a business application built specifically for one platform with too many Microsoft dependencies.

Also the project probably cost them $10M and they aren't going to spend more to run it on Android.

 

Everyone has to treat every hardware post like it's for gamers or something; there are other markets too. And no, not the school market for this one.

Information Security is my thing.

Running an entry/mid-range PC, upgrading it slowly.


Simply put, unless you can control almost everything (like Apple does) ARM CPUs just aren’t there yet. 

 

 

Laptop: 2019 16" MacBook Pro i7, 512GB, 5300M 4GB, 16GB DDR4 | Phone: iPhone 13 Pro Max 128GB | Wearables: Apple Watch SE | Car: 2007 Ford Taurus SE | CPU: R7 5700X | Mobo: ASRock B450M Pro4 | RAM: 32GB 3200 | GPU: ASRock RX 5700 8GB | Case: Apple PowerMac G5 | OS: Win 11 | Storage: 1TB Crucial P3 NVME SSD, 1TB PNY CS900, & 4TB WD Blue HDD | PSU: Be Quiet! Pure Power 11 600W | Display: LG 27GL83A-B 1440p @ 144Hz, Dell S2719DGF 1440p @144Hz | Cooling: Wraith Prism | Keyboard: G610 Orion Cherry MX Brown | Mouse: G305 | Audio: Audio Technica ATH-M50X & Blue Snowball | Server: 2018 Core i3 Mac mini, 128GB SSD, Intel UHD 630, 16GB DDR4 | Storage: OWC Mercury Elite Pro Quad (6TB WD Blue HDD, 12TB Seagate Barracuda, 1TB Crucial SSD, 2TB Seagate Barracuda HDD)

44 minutes ago, DrMacintosh said:

Simply put, unless you can control almost everything (like Apple does) ARM CPUs just aren’t there yet. 

 

 

I wouldn't say that necessarily. Emulating x86 on ARM was certainly never going to win any performance contests. Offering another option for businesses and enterprise users seems to be key for the time being. I'm frankly impressed the performance is as close as it is rather than orders of magnitude apart.

My eyes see the past…

My camera lens sees the present…


10 minutes ago, Zodiark1593 said:

I wouldn't say that necessarily. Emulating x86 on ARM was certainly never going to win any performance contests. Offering another option for businesses and enterprise users seems to be key for the time being. I'm frankly impressed the performance is as close as it is rather than orders of magnitude apart.

If only the likes of Qualcomm released drivers for more than two years...

There is more that meets the eye
I see the soul that is inside

 

 

