
Practical use of Assembly/X86_64

Guest

TL;DR
Have you written a full program (not just a test project) in assembly?
Was it as painful as people make it seem?


Long:
I saw a video from a YouTube channel called "Dave's Garage", run by a guy who used to work for Microsoft.

He made one video called "The lost art of assembly" where you get really close to the metal & do a lot of crazy things.

 

From what I understand, a lot of common functionality in "high level" languages already exists in assembly:
Jump -> if statement
add -> + sign

Functions just exist

Jump can also be used for loops and other "moving around" functionality.

 

 

Many have proclaimed that writing in assembly is a bad idea because compilers can write better assembly than you. But a lot of programmers write code in languages like Java & Python, so apparently it doesn't matter how "fast" the compiled code is, since that code is already objectively slower because it has to be interpreted.

If people just wrote assembly, wouldn't it become "just as quick" to develop programs?

 

If you did use assembly in any non-experimental capacity, am I being absurd? If so, why?
If I'm not being absurd, how much of my theory is wrong?

Obviously assembly is processor dependent, and it looks like we're migrating to a new instruction set with Arm, but it's not machine language, so it still compiles to relative offsets instead of hard-coded values.


4 minutes ago, fpo said:

If people just wrote assembly, wouldn't it become "just as quick" to develop programs?

No, because it's way too verbose. A dev programming in Python will get the job done faster than someone doing it in C, given equivalent experience in the respective languages. The same applies to something like ASM vs C.

 

5 minutes ago, fpo said:

If you did use assembly in any non-experimental capacity, am I being absurd? If so, why?

You're giving up portability and many possible compiler optimizations, unless you know how and when to use things like AVX/NEON yourself.

 

6 minutes ago, fpo said:

but it's not machine language

Except it is: it's just mnemonics for machine language. Your ADD instruction translates directly to a binary opcode. Different ISAs have different opcodes/instructions, available registers, and instruction operands.
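
To make that concrete, here's a minimal C sketch (an illustration of mine, not from the reply above; it assumes a POSIX system that still permits writable+executable mappings): the byte values below are exactly the machine code the mnemonics stand for, and the program executes them directly.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* Machine code for:  mov eax, edi ; add eax, esi ; ret
       i.e. "return the sum of the first two int arguments" under the
       x86-64 System V calling convention. */
    unsigned char code[] = { 0x89, 0xF8,   /* mov eax, edi */
                             0x01, 0xF0,   /* add eax, esi */
                             0xC3 };       /* ret */

    void *buf = mmap(NULL, sizeof code, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;       /* hardened kernels may refuse W+X pages */
    memcpy(buf, code, sizeof code);

    int (*add)(int, int) = (int (*)(int, int))buf;
    printf("%d\n", add(2, 3));             /* prints 5 */
    return 0;
}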

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga

Link to comment
Share on other sites

Link to post
Share on other sites

14 minutes ago, fpo said:

If you did use assembly in any non-experimenting capacity, am I being absurd? If so, why?

I learned Motorola 68K assembly (twice) in college. There are definitely advantages to understanding assembly and what compilers generate for each high-level programming concept, but actually writing it takes forever, even for seemingly trivial programs. Things like loops are a two- or three-line affair in most high-level languages, whereas in assembly you have to keep track of your starting label, jump back there every time, juggle registers/memory along the way, and write every single exit jump individually.
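
As a sketch of that difference (mine, not the poster's; x86-64 with the System V calling convention assumed): the C loop is one line, while the hand-written equivalent below it carries all the label, counter, and exit-jump bookkeeping yourself.

/* Summing 0..n-1: the loop is a one-liner in C. */
int sum_below(int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += i;
    return total;
}

/* Roughly the hand-written x86-64 version (n arrives in edi,
   the result is returned in eax):

       xor  eax, eax        ; total = 0
       xor  ecx, ecx        ; i = 0
   .loop:
       cmp  ecx, edi        ; i < n ?
       jge  .done           ; exit jump, written by hand
       add  eax, ecx        ; total += i
       inc  ecx             ; i++
       jmp  .loop           ; jump back to the label you track yourself
   .done:
       ret
*/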

 

The main advantage I see is resource management: high-level languages keep track of your pointers and memory allocations for you (usually in the background), while in assembly you have to do all of that manually, and god help you if you accidentally clobber some data where you weren't supposed to, especially if you're on a platform without memory space protections.

 

12 minutes ago, fpo said:

Many have proclaimed that writing in assembly is a bad idea because compilers can write better assembly than you

This is true for the vast majority of programming paradigms out there. The people writing compilers have seen (and optimized for) it all, so why waste time reinventing the (assembly) wheel every time when you can spend less time writing in an easier-to-read language?

 

24 minutes ago, fpo said:

but a lot of programmers write code in languages like Java & Python, so apparently it doesn't matter how "fast" the compiled code is, since that code is already objectively slower because it has to be interpreted.

I can't speak to Python, but Java generates bytecode when compiled, and that bytecode is then run inside a "VM" of sorts. However, for a lot of operations, all that VM is doing is translating the Java opcodes into the opcodes of the platform it's running on (one of the selling points of Java is its portability) while keeping track of resources in its own pool. Sure, Java is a tad slower because of this, but a lot of tweaking has been done in the last decade to make it much faster than it used to be.

 

Also, no computer will ever be using 100% of every CPU segment at all times, and CPUs are now so fast that a slightly less efficient language may result in processing times that are microseconds longer, which is still far smaller than network latency or even pixel-to-screen latency. Most of the time, a computer is doing nothing. There are applications where latency matters (real-time systems, HF trading), but for a webserver, simplicity/security/portability will win out over raw response rate very quickly.

Main System (Byarlant): Ryzen 7 5800X | Asus B550-Creator ProArt | EK 240mm Basic AIO | 16GB G.Skill DDR4 3200MT/s CAS-14 | XFX Speedster SWFT 210 RX 6600 | Samsung 990 PRO 2TB / Samsung 960 PRO 512GB / 4× Crucial MX500 2TB (RAID-0) | Corsair RM750X | a 10G NIC (pending) | Inateck USB 3.0 Card | Hyte Y60 Case | Dell U3415W Monitor | Keychron K4 Brown (white backlight)

 

Laptop (Narrative): Lenovo Flex 5 81X20005US | Ryzen 5 4500U | 16GB RAM (soldered) | Vega 6 Graphics | SKHynix P31 1TB NVMe SSD | Intel AX200 Wifi (all-around awesome machine)

 

Proxmox Server (Veda): Ryzen 7 3800XT | AsRock Rack X470D4U | Corsair H80i v2 | 64GB Micron DDR4 ECC 3200MT/s | 4x 10TB WD Whites / 4x 14TB Seagate Exos / 2× Samsung PM963a 960GB SSD | Seasonic Prime Fanless 500W | Intel X540-T2 10G NIC | LSI 9207-8i HBA | Fractal Design Node 804 Case (side panels swapped to show off drives) | VMs: TrueNAS Scale; Ubuntu Server (PiHole/PiVPN/NGINX?); Windows 10 Pro; Ubuntu Server (Apache/MySQL)


Media Center/Video Capture (Jesta Cannon): Ryzen 5 1600X | ASRock B450M Pro4 R2.0 | Noctua NH-L12S | 16GB Crucial DDR4 3200MT/s CAS-22 | EVGA GTX750Ti SC | UMIS NVMe SSD 256GB / TEAMGROUP MS30 1TB | Corsair CX450M | Viewcast Osprey 260e Video Capture | Mellanox ConnectX-2 10G NIC | LG UH12NS30 BD-ROM | Silverstone Sugo SG-11 Case | Sony XR65A80K

 

Camera: Sony ɑ7II w/ Meike Grip | Sony SEL24240 | Samyang 35mm ƒ/2.8 | Sony SEL50F18F | Sony SEL2870 (kit lens) | PNY Elite Performance 512GB SDXC card

 

Network:

                           ┌─────────────── Office/Rack ────────────────────────────────────────────────────────────────────────────┐
Google Fiber Webpass ────── UniFi Security Gateway ─── UniFi Switch 8-60W ─┬─ UniFi Switch Flex XG ═╦═ Veda (Proxmox Virtual Switch)
(500Mbps↑/500Mbps↓)                             UniFi CloudKey Gen2 (PoE) ─┴─ Veda (IPMI)           ╠═ Veda-NAS (HW Passthrough NIC)
╔═══════════════════════════════════════════════════════════════════════════════════════════════════╩═ Narrative (Asus USB 2.5G NIC)
║ ┌────── Closet ──────┐   ┌─────────────── Bedroom ──────────────────────────────────────────────────────┐
╚═ UniFi Switch Flex XG ═╤═ UniFi Switch Flex XG ═╦═ Byarlant
   (PoE)                 │                        ╠═ Narrative (Cable Matters USB-PD 2.5G Ethernet Dongle)
                         │                        ╚═ Jesta Cannon*
                         │ ┌─────────────── Media Center ──────────────────────────────────┐
Notes:                   └─ UniFi Switch 8 ─────────┬─ UniFi Access Point nanoHD (PoE)
═══ is Multi-Gigabit                                ├─ Sony Playstation 4 
─── is Gigabit                                      ├─ Pioneer VSX-S520
* = cable passed to Bedroom from Media Center       ├─ Sony XR65A80K (Google TV)
** = cable passed from Media Center to Bedroom      └─ Work Laptop** (Startech USB-PD Dock)

Retired/Other:


Laptop (Rozen-Zulu): Sony VAIO VPCF13WFX | Core i7-740QM | 8GB Patriot DDR3 | GT 425M | Samsung 850EVO 250GB SSD | Blu-ray Drive | Intel 7260 Wifi (lived a good life, retired with honor)

Testbed/Old Desktop (Kshatriya): Xeon X5470 @ 4.0GHz | ZALMAN CNPS9500 | Gigabyte EP45-UD3L | 8GB Nanya DDR2 400MHz | XFX HD6870 DD | OCZ Vertex 3 Max-IOPS 120GB | Corsair CX430M | HooToo USB 3.0 PCIe Card | Osprey 230 Video Capture | NZXT H230 Case

TrueNAS Server (La Vie en Rose): Xeon E3-1241v3 | Supermicro X10SLL-F | Corsair H60 | 32GB Micron DDR3L ECC 1600MHz | 1x Kingston 16GB SSD / Crucial MX500 500GB


38 minutes ago, fpo said:

If you did use assembly in any non-experimental capacity, am I being absurd? If so, why?

If I'm not being absurd, how much of my theory is wrong?

Obviously assembly is processor dependent, and it looks like we're migrating to a new instruction set with Arm, but it's not machine language, so it still compiles to relative offsets instead of hard-coded values.

I did a little bit of ARM ASM about a decade ago when I was into calculators. I dropped it once Lua and C became available, so I have only some tangential experience, but here are my 2 cents anyway. If you're good, or become an expert, you can probably write as good a piece of code in ASM as you can in C, but it's one less layer of abstraction working for you, so I don't think you'll develop any faster. Add to that that a year from now I'd make more sense out of high-level code than out of a bunch of assembly instructions. I haven't touched it since my calculator days though. For me it's in the regime where, if you know you can do it well (because of some application- or target-specific tricks, for example), go ahead; otherwise I leave it up to the compiler, or stick to even higher-level languages. Most of my work gets by just fine in Python.

 

55 minutes ago, fpo said:

If people just wrote assembly, wouldn't it become "just as quick" to develop programs?

I think it's a bit like typesetting something in LaTeX vs typing it in Word. I have much more control over my document in the former, and I can make things I wouldn't know how to do in Word. But at the same time I need to declare the start and end of the document; text formatting turns into commands for every word that needs formatting (things like \textbf{boldtext} or {\color{red} red text}); figures and tables come in special environments; and I need to think about which package provides which functionality, etc. Even hyperlinks are non-trivial and require a package. I may be able to make as beautiful a document/poster/whatever as I could with something else, and with much more control, but it comes at the expense of being more time-consuming and requiring much more thinking on my part about how exactly things fit together. The cost of being more powerful is that it won't be as quick to write.

Crystal: CPU: i7 7700K | Motherboard: Asus ROG Strix Z270F | RAM: GSkill 16 GB@3200MHz | GPU: Nvidia GTX 1080 Ti FE | Case: Corsair Crystal 570X (black) | PSU: EVGA Supernova G2 1000W | Monitor: Asus VG248QE 24"

Laptop: Dell XPS 13 9370 | CPU: i5 10510U | RAM: 16 GB

Server: CPU: i5 4690k | RAM: 16 GB | Case: Corsair Graphite 760T White | Storage: 19 TB


I ran into multiple linear algebra libraries that can use SIMD instructions, but I never had to write any assembly code myself. I think this is as practical as it gets. CyberSec is another example, where assembly knowledge can be useful during reverse engineering.

ಠ_ಠ


Although not really related to x86, you still see occasional assembly usage in the embedded world, particularly on microcontrollers. You typically don't ever want to use assembly, but when you need things to be microscopic and/or you need ultra-fine control, it's hard to get away from it.

 

As @shadow_ray mentioned, you will often see math libraries utilize SIMD asm instructions; often the headers for SSE or similar SIMD extensions are little more than an extremely thin wrapper over the assembly instructions. For example, the SSE instruction mnemonic for packed add is:

addps xmm0, xmm1

This adds the packed values of register xmm1 to register xmm0 and stores the result in xmm0.

In contrast, the intrinsics provided by the <xmmintrin.h> header look like this:

#include <xmmintrin.h>

__m128 v1 = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
__m128 v2 = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);

v1 = _mm_add_ps(v1, v2);  /* compiles down to a single addps */

 

Another usage I can think of that is useful (albeit a bit hacky) is using assembly instructions to trigger breakpoints, e.g.

INT3

This is a single-byte instruction that raises a software interrupt for a debug exception (breakpoint). The instruction is only valid on x86 architectures but is portable across operating systems. You can use it to pause an application on a condition. Combine it with some macro and preprocessor magic and you have an assert that triggers a breakpoint and can be stripped out of production builds.
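
For illustration, a minimal sketch of that macro trick, assuming x86 and GCC/Clang inline assembly (the name DBG_ASSERT is made up here):

#include <stdio.h>

#ifdef NDEBUG
  #define DBG_ASSERT(cond) ((void)0)  /* stripped out of production builds */
#else
  #define DBG_ASSERT(cond)                                               \
      do {                                                               \
          if (!(cond)) {                                                 \
              fprintf(stderr, "assert failed: %s (%s:%d)\n",             \
                      #cond, __FILE__, __LINE__);                        \
              __asm__ volatile ("int3");  /* break into the debugger */  \
          }                                                              \
      } while (0)
#endif

int main(void) {
    DBG_ASSERT(1 + 1 == 2);  /* passes silently */
    DBG_ASSERT(0);           /* raises SIGTRAP / stops in the debugger */
    return 0;
}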

 

Windows has an intrinsic that performs this same operation, except it's portable across hardware architectures.

__debugbreak()

This will generate the appropriate breakpoint instruction for whichever architecture is being compiled for. However, it's only available in MSVC, so if you're on Linux and want GCC you're shit out of luck.

 

Personally I don't think assembly is going away anytime soon, as it's still useful, but I also don't think we should be writing programs in assembly. 99.9% of the time you're better served by a programming language (assembly isn't really a language per se). Compilers are much better at generating efficient instructions in bulk than humans are. That said, a compiler still cannot match a human with a proper understanding of the problem and the computer architecture. But producing better asm than a compiler takes considerable time, and it's not easy to get right.

 

Writing even shit assembly is time-consuming and error-prone. This is why we have high-level programming languages. (By high-level I mean basically anything higher than asm.)

 

As a programmer, even if you never write assembly, you should be able to read and understand it. In performance-critical applications and constrained systems, it helps to know what the programming language/compiler generates, so that you can make informed decisions when optimizing a section of code. Simply put, you know why something is slow rather than trying random things and hoping it speeds up. You also learn how to write code that the compiler can optimize more easily. Even small changes can have massive effects on what a compiler spits out for an optimized build.
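
For instance (an illustration of mine, assuming GCC or Clang), a quick way to look at what the compiler actually emits:

/* example.c: compile with  gcc -O2 -S -masm=intel -o - example.c
   (or paste it into godbolt.org) */
unsigned div10(unsigned n) {
    /* With optimizations on, compilers typically replace this division
       with a multiply-and-shift sequence; reading the generated
       assembly is the only way you'd notice. */
    return n / 10;
}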

CPU: Intel i7 - 5820k @ 4.5GHz, Cooler: Corsair H80i, Motherboard: MSI X99S Gaming 7, RAM: Corsair Vengeance LPX 32GB DDR4 2666MHz CL16,

GPU: ASUS GTX 980 Strix, Case: Corsair 900D, PSU: Corsair AX860i 860W, Keyboard: Logitech G19, Mouse: Corsair M95, Storage: Intel 730 Series 480GB SSD, WD 1.5TB Black

Display: BenQ XL2730Z 2560x1440 144Hz


On 7/26/2022 at 11:14 AM, fpo said:

If people just wrote assembly, wouldn't it become "just as quick" to develop programs?

Java exists because of the concept of portability; things such as C exist because they make writing programs easier. Different programming languages have different benefits. Modern languages essentially abstract away some of the tedious/hard-to-do kinds of work (think polymorphism). Assembly has the benefit of being the "quickest" in terms of runtime... but that assumes you know exactly what you are doing (including optimizations).

 

Here is an example:

//c style
int main() {
  int val = 123;
  int val2 = 123;
  if(val < 123 && val2 < 123) {
    return 1;
  }
  else if(val > 123 && val2 > 123) {
    return 2;
  }
  else {
    return 0;
  }
}

Assemblyish:
        push    rbp
        mov     rbp, rsp
        mov     DWORD PTR [rbp-4], 123
        mov     DWORD PTR [rbp-8], 123
        cmp     DWORD PTR [rbp-4], 122
        jg      .L2
        cmp     DWORD PTR [rbp-8], 122
        jg      .L2
        mov     eax, 1
        jmp     .L3
.L2:
        cmp     DWORD PTR [rbp-4], 123
        jle     .L4
        cmp     DWORD PTR [rbp-8], 123
        jle     .L4
        mov     eax, 2
        jmp     .L3
.L4:
        mov     eax, 0
.L3:
        pop     rbp
        ret

This is just a simple example as well [admittedly my assembly isn't great]. The assembly just makes the program more prone to programming errors... see how many different operations there are for a simple if / else-if / else statement. Overall, writing assembly has a higher learning curve and is more prone to user programming errors.

 

The other thing is that if you have it written in something like C, it's easier to recompile with better optimizations (as the compiler gets better). See Super Mario 64, where setting the -O3 flag for GCC allowed modders to eliminate the lag on some levels. Or consider a change in architecture from x86 to x64: making that change in assembly would be a lot harder than recompiling the C code for x64.

 

In general we went away from assembly because it doesn't make sense to program in it anymore. It is still used for some functions that get called a lot, where it's critical to squeeze out every last ounce of performance... but really that is few and far between. Computers for the most part have gotten quick enough that in most cases the added time and effort isn't worth that 1-2% change, and that's in the best case. Sometimes the compiler can do tricks that you wouldn't really think of.

 

The only time I use assembly is when I'm trying to figure out what a program is doing (or hooking into it) when I don't have the source code.

3735928559 - Beware of the dead beef


Oh, to add onto things I've said: here is a simple prime-number program written in Python, C/C++ and assembly, where the guy tracked the time it took to write each version vs the speed the algorithm ran at.

 

 

I think it's actually a pretty good case for why we don't use assembly as much anymore: twice the time to write, twice as many lines of code, and overall it isn't as readable as something like C.

3735928559 - Beware of the dead beef


On 7/26/2022 at 8:14 PM, fpo said:

From what I understand, a lot of common functionality in "high level" languages already exists in assembly.

All of those are low-level operations. Their mere existence does not determine whether a language is high-level. Assembler is a 1:1 representation of machine code. From this it follows that anything that can be done in a higher-level language must be doable in assembly, because at the end of the day your higher-level language is translated into machine code for execution.

 

On 7/26/2022 at 8:14 PM, fpo said:

Many have proclaimed that writing in assembly is a bad idea because compilers can write better assembly than you. But a lot of programmers write code in languages like Java & Python, so apparently it doesn't matter how "fast" the compiled code is, since that code is already objectively slower because it has to be interpreted. If people just wrote assembly, wouldn't it become "just as quick" to develop programs?

Java is not an interpreted language. It is compiled to (platform-independent) Java bytecode, which is then compiled "just in time" into platform-specific machine code by the Java runtime (JRE) when the program executes. There is some startup cost associated with this, but good Java code isn't slow per se.

 

It would not be "just as quick" to write things in Assembler, simply because of the amount of code you'd have to write to achieve the same thing as in higher-level languages. And more lines of code means more potential for bugs.

 

You could speed development up over time by moving commonly used code into functions that encapsulate and abstract away its complexity, at which point you're slowly and painfully recreating a higher-level language.

 

As an additional bonus: the code I work on for a living is written in Java. That program runs on various Linuxes, Windows, macOS (x86) and macOS (M1), all from the same code base, without having to rewrite or recompile for any of these platforms. You'd be hard pressed to achieve the same with most other languages, let alone Assembler.

 

On 7/26/2022 at 8:14 PM, fpo said:

If you did use assembly in any non-experimental capacity, am I being absurd? If so, why?

Writing good assembler requires a ton of knowledge about the hardware you're writing code for. I'm sure you can find some edge case where hand-written assembler is faster than code produced by a compiler, simply because you know things about your program that the compiler can't. But by and large, the general optimizations done by a modern compiler are going to outperform the large majority of developers out there, both in speed at runtime and in speed of development.

 

The time you'd need to invest to benchmark and optimize your Assembler code is going to be comically absurd compared to the development speed in a modern language. And at the end of the day, your hand-written Assembler isn't going to outperform e.g. Java to such a degree that it makes this cost worth it.

 

On 7/26/2022 at 8:14 PM, fpo said:

Obviously assembly is processor dependent, and it looks like we're migrating to a new instruction set with Arm, but it's not machine language, so it still compiles to relative offsets instead of hard-coded values.

As was already pointed out above, assembler is machine language: it's a set of mnemonics that translate 1:1 into machine code. Code written for x86 is not going to run on ARM and vice versa. With a high-level language you can cross-compile, because the code itself isn't married to one particular ISA.

 

Since it wasn't explicitly mentioned yet: Assembler is as low-level as you can get, which means you'll be primarily concerned with moving bits around rather than solving your actual problem. In many cases that's a needless distraction.

 

For example, if I need an array of numbers in e.g. Java, I can just do something like

int numbers[] = new int[] { 10, 20, 30, 40, 50 };

How would you achieve the same thing in Assembler? You're suddenly concerned with allocating memory, moving data into registers, writing it out into memory, making sure memory is freed when you're done, and so on. All of these are needless distractions from the (business) problem you're actually trying to solve. The more time I can spend thinking about the problem I'm trying to solve, rather than the technical process of moving bits around, the more productive I'm going to be.
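
To give a feel for that bookkeeping (a sketch of mine, one level above assembler): even in C, the same five-element array already demands the manual steps the Java one-liner hides, and real assembler adds register and addressing-mode choices on top.

#include <stdlib.h>

int sum_array(void) {
    int *numbers = malloc(5 * sizeof *numbers);  /* you allocate */
    if (numbers == NULL) return -1;              /* you handle failure */
    for (int i = 0; i < 5; i++)
        numbers[i] = (i + 1) * 10;               /* you store each value */

    int sum = 0;
    for (int i = 0; i < 5; i++)
        sum += numbers[i];

    free(numbers);                               /* you release the memory */
    return sum;
}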

Remember to either quote or @mention others, so they are notified of your reply


2 hours ago, wanderingfool2 said:

Oh, to add onto things I've said: here is a simple prime-number program written in Python, C/C++ and assembly, where the guy tracked the time it took to write each version vs the speed the algorithm ran at.

That comparison would probably be even less favorable to Assembler if you were writing anything non-trivial, where you actually have to think about what you're doing rather than just typing the algorithm from memory. He's basically measuring raw typing speed at that point, not the amount of time you need to spend thinking about the algorithm (and the language-specific "distractions" taking time away from thinking about the algorithm).

 

I've recreated his program in Python, Java and C and tried it on my own computer… the result is:

  • C: 1.67s
  • Java: 1.68s
  • Python: 31.68s

So at least in this case, Java is essentially the same speed as C, which I'm going to assume is the same speed as Assembler, as it was on their machine.

Remember to either quote or @mention others, so they are notified of your reply


Assembler is still used in programs where it's really important to squeeze out performance and to control the data flow precisely.

For example, the software video encoder x264 uses A LOT of assembly, because in lots of cases the hand-optimized routines do better than the code the compiler generates. The code can also select routines optimized for whichever instruction-set extensions are available on the processor being used. In some cases, they may intentionally choose a different instruction set than the compiler would.

You can also do some really tricky optimizations, like placing instructions in a specific way to take advantage of hyper-threading and branch prediction, or controlling how much data and how many instructions are loaded into the CPU's L1, L2 and L3 caches, and lots of other things.

 

See for example the code here: https://code.videolan.org/videolan/x264/-/tree/master/common/x86

 


When learning about OS dev, I used assembly to get out of the boot sector, but even then I wrote C everywhere I could. Occasionally, for really low-level stuff, it is helpful to use inline asm for some specific things, but usually I would use light wrappers for that; the additional cognitive load of asm is rarely worth it with modern compilers, in my opinion.
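
A minimal sketch of the kind of light wrapper meant here (the classic x86 port-I/O helpers, assuming GCC/Clang extended inline asm):

#include <stdint.h>

/* Write one byte to an x86 I/O port. */
static inline void outb(uint16_t port, uint8_t value) {
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}

/* Read one byte from an x86 I/O port. */
static inline uint8_t inb(uint16_t port) {
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}

The rest of the driver code can then stay in plain C and just call outb()/inb().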

If your question is answered, mark it so.  | It's probably just coil whine, and it is probably just fine |   LTT Movie Club!

Read the docs. If they don't exist, write them. | Professional Thread Derailer

Desktop: i7-8700K, RTX 2080, 16G 3200Mhz, EndeavourOS(host), win10 (VFIO), Fedora(VFIO)

Server: ryzen 9 5900x, GTX 970, 64G 3200Mhz, Unraid.

 


Yeah I did: I wrote a text-based game as a class project back in college, in MIPS assembly, run and debugged on a MIPS emulator. I hate it with a passion.

 

For user-level applications, people use higher-level languages for a reason. For bare-metal contexts like an operating system kernel, some parts are written in assembly for a reason.
 

Right tool for the right job. 

Sudo make me a sandwich 


2 hours ago, Sakuriru said:

.NET will even work embedded these days.

Are you referring to the .NET IoT libs?

 

I didn't think you could write .NET stuff on bare metal, since it needs the runtime. I'm interested in this if you have an example to share.

If your question is answered, mark it so.  | It's probably just coil whine, and it is probably just fine |   LTT Movie Club!

Read the docs. If they don't exist, write them. | Professional Thread Derailer

Desktop: i7-8700K, RTX 2080, 16G 3200Mhz, EndeavourOS(host), win10 (VFIO), Fedora(VFIO)

Server: ryzen 9 5900x, GTX 970, 64G 3200Mhz, Unraid.

 


Well, that is pretty legit! Thanks for sharing.

If your question is answered, mark it so.  | It's probably just coil whine, and it is probably just fine |   LTT Movie Club!

Read the docs. If they don't exist, write them. | Professional Thread Derailer

Desktop: i7-8700K, RTX 2080, 16G 3200Mhz, EndeavourOS(host), win10 (VFIO), Fedora(VFIO)

Server: ryzen 9 5900x, GTX 970, 64G 3200Mhz, Unraid.

 


The main thing is that asm doesn't really scale once an application is built up to enough complexity and a high enough level.
 

A higher-level programming language is simply an abstraction of a lower-level assembly language, or whatever the lower-level intermediate language is. Assembly language, in turn, is itself an abstraction of the machine code.
 

This is no different from how HTTP abstracts away the lower-level network layers like TCP and UDP sockets, or how an API abstracts the inner workings of the code that implements it. When writing applications at that level, it is hell to literally rewrite everything from scratch and reinvent the whole fcking wheel pretty much every time.
 

In assembly, you do not even have the concept of a loop or of subroutine + stack memory management (basic features of every programming language) unless you code them in yourself.

 

Would you rather move data between registers and the stack every time you need to, say, invoke or conclude a subroutine, or would you rather just do it with a pair of brackets and a function call like xyz(...) as in so many higher-level languages?
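
A sketch to illustrate (mine; x86-64 System V assumed): the one-line call in C versus the manual argument and return-value shuffling described above.

int square(int x) { return x * x; }

int use(void) {
    return square(7);    /* one line: brackets and parentheses */
}

/* Roughly the hand-written equivalent:

   square:
       imul edi, edi        ; x * x (the argument arrived in edi)
       mov  eax, edi        ; the return value goes in eax
       ret                  ; pop the return address, jump back
   use:
       mov  edi, 7          ; you place the argument yourself
       call square          ; push the return address, jump
       ret                  ; the result is already in eax
*/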

 

The downside, of course, is that you have less control and also some overhead. But to be honest, if you use something like malloc in C, do you actually need to look into libc, check how it implements malloc, and start tweaking things? Do you feel its performance is subpar and that you need to tweak its dynamic memory allocation algorithm? Even those who know what they are doing avoid that like the plague. The same reasoning applies here: people avoid asm because a higher-level language has so many features and benefits that giving up the control, and the tedious need to write all those lower-level details, is 100% worth it.

 

Sudo make me a sandwich 


45 minutes ago, Franck said:

If you can compile on the device, you can compile with .NET Native, which makes the code on par, performance-wise, with C++.

.NET Native means it compiles to machine code instead of bytecode?


56 minutes ago, fpo said:

.net native means it compiles to machine code instead of bytecode?

It embeds the .NET runtime into the binary itself, so you have a single fat binary that contains both the .NET runtime and the "bytecode" for it to execute, akin to how Go works.

1 hour ago, Franck said:

If you can compile on the device, you can compile with .NET Native, which makes the code on par, performance-wise, with C++.

I know it can target Linux, but can it target bare metal?

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


15 minutes ago, igormp said:

It embeds the .NET runtime into the binary itself, so you have a single fat binary that contains both the .NET runtime and the "bytecode" for it to execute, akin to how Go works.

I know it can target Linux, but can it target bare metal?

It translates to machine code, so it's specific to the platform you are compiling on. If it's Windows x86 it will be for Windows 32-bit; if it's Linux x64 it will be for Linux 64-bit. Exactly like C++, it compiles for many platforms. I haven't played with it enough to test every possibility, and it might not support all .NET features, but so far I haven't had a case where it didn't.


12 minutes ago, Franck said:

It translates to machine code, so it's specific to the platform you are compiling on. If it's Windows x86 it will be for Windows 32-bit; if it's Linux x64 it will be for Linux 64-bit. Exactly like C++, it compiles for many platforms. I haven't played with it enough to test every possibility, and it might not support all .NET features, but so far I haven't had a case where it didn't.

I'm aware of the platforms where you have an OS, but I was asking about an actual freestanding binary. After some googling, it seems like that's a no.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga

