
SiFive now has a Mini-ITX RISC-V board being released soon.

1 minute ago, Bombastinator said:

Both of these seem to go to the point that I said the assembly language is always different, rather than can be, and that is not always the case because manufacturers arrange it that way for simplicity.

The language (e.g. GAS and NASM) is the same. Grammar, syntax, parser, infrastructure, etc.

 

Instructions may differ (e.g. when you compile something targeting Skylake, you may use instructions added with that microarchitecture, which will cause the application to fail on microarchitectures that lack them, e.g. Westmere, Piledriver, Zen).

 

If manufacturers want, they could release one assembly language per architecture, but that would be pretty dumb. Notice I'm not talking about machine code, which definitely is dependent on the underlying architecture.

 

17 minutes ago, Bombastinator said:

As an aside, you may also have an issue with Wikipedia then, because they seem to be saying always.

Wikipedia isn't a trustworthy source.

I'm pretty sure they're wrong on that, because I've researched it as a tutor for basic software classes (assemblers, compilers, static and dynamic linkers, virtual machines) at my university.

6 hours ago, igormp said:

 

 

"as you can't run most Linux docker's on Windows". You can. Previously it was leveraged by running a linux VM on the background, but now it makes use of WSL2 and works for 95% or even more of the containers found on dockerhub.

 

"and all the AI/neuralnet stuff is using docker because of how sloppy Nvidia is. ". They are not, have a look at most models available on github and you will rarely ever see a Dockerfile in there. Applications using those models to leverage docker due do deployment reasons, and that has NOTHING to do with "how sloppy nvidia is" (what even did you mean by that?).

 

Every, single, thing I've seen built on a CNN has a Dockerfile to run a Linux image with a specific set of nVidia drivers, CUDA version, cuDNN, Python, Tensorflow/PyTorch, etc., and that's before you even get to the application written in Python. You cannot just "update" these things, because it breaks everything that runs on top of them.

https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html

 

Got a new Ampere card? You can't use it with cuDNN older than 8.0.0, which requires CUDA 11.0 and driver r450.

Maybe you're not even using the GPU, because nVidia doesn't make it easy to get cuDNN: https://github.com/pytorch/pytorch/issues/17445

 

It's a mess for end users to set up. One specific project requires CUDA 10.0 only, cuDNN 7.6.5 for CUDA 10.0, Python 3.7 only, PyTorch for CUDA 10.0, and Tensorflow 1.14. And what if you want to use something that requires a newer version side-by-side? You can't. You cannot run drivers side-by-side except via virtualization, and even then, you still need two separate GPUs.
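
For illustration, here is a minimal Python sketch of what checking that kind of pin list looks like at runtime (the version numbers are just the ones from this post, so treat it as an example, not a definitive check):

import sys

import tensorflow as tf
import torch

# This hypothetical project pins Python 3.7, TF 1.14, a CUDA 10.0 build of
# PyTorch and cuDNN 7.x; bail out early if the environment has drifted.
assert sys.version_info[:2] == (3, 7), "project expects Python 3.7"
assert tf.__version__.startswith("1.14"), "project expects TensorFlow 1.14"
assert torch.version.cuda == "10.0", "project expects a CUDA 10.0 build of PyTorch"
assert torch.backends.cudnn.version() // 1000 == 7, "project expects cuDNN 7.x"
print("environment matches the pinned versions")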

 


Anyway, these Docker containers don't work on Windows because they require the GPU to be virtualized, which previously meant it wasn't available for the host OS to use. It's only recently that any work on this has become possible, and it's not even in the current release build of Windows.

https://developer.nvidia.com/blog/announcing-cuda-on-windows-subsystem-for-linux-2/

 

https://docs.nvidia.com/cuda/wsl-user-guide/index.html#known-issues

 

To go back to the topic about bare metal, though: it sounds like these are all FPGA-adjacent technologies.

 

https://sifive.github.io/freedom-metal-docs/introduction.html#what-is-freedom-metal

Quote

What is Freedom Metal?

Freedom Metal enables portable, bare-metal application development for all of SiFive’s RISC-V IP, FPGA evaluation targets, and development boards.

Freedom Metal provides:
  • A bare-metal C application environment

  • An API for controlling CPU features and peripherals

  • The ability to retarget to any SiFive RISC-V product

This makes Freedom Metal suitable for:
  • Writing portable hardware tests

  • Bootstrapping bare metal application development

  • A RISC-V hardware abstraction layer

  • And more!

Then look at what it actually runs on:

Quote

Board Support Packages (found under bsp/)

Of those SiFive boards, the third one in that list is marketed as being able to be a Linux PC. The two above it are embedded. The QEMU targets are of course simulated CPUs.

 

https://www.sifive.com/boards/hifive-unmatched is the latest one, available for pre-order.

https://sifive.cdn.prismic.io/sifive/c05b8ddd-e043-45a6-8a29-2a137090236f_HiFive+Unmatched+Product+Brief+(released).pdf

 

Note "bare metal with Freedom-E SDK"; that is separate from the Freedom-U SDK, which is built on top of Linux.

Quote

Freedom U-SDK
The Freedom U-SDK allows you to create a custom Linux distribution and is based on the collaborative open-source Yocto Project. The layer model makes it easy to add or remove system components from the reference configuration to customize and build your own Linux based system.

 

That very much is back into the realm of using system images in the same way docker is used.

3 minutes ago, Kisai said:

Every, single, thing I've seen built on a CNN has a Dockerfile to run a Linux image with a specific set of nVidia drivers, CUDA version, cuDNN, Python, Tensorflow/PyTorch, etc., and that's before you even get to the application written in Python. You cannot just "update" these things, because it breaks everything that runs on top of them.

https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html

 

Got a new Ampere card? You can't use it with cuDNN older than 8.0.0, which requires CUDA 11.0 and driver r450.

Maybe you're not even using the GPU, because nVidia doesn't make it easy to get cuDNN: https://github.com/pytorch/pytorch/issues/17445

 

It's a mess for end users to set up. One specific project requires CUDA 10.0 only, cuDNN 7.6.5 for CUDA 10.0, Python 3.7 only, PyTorch for CUDA 10.0, and Tensorflow 1.14. And what if you want to use something that requires a newer version side-by-side? You can't. You cannot run drivers side-by-side except via virtualization, and even then, you still need two separate GPUs.

You can't pack drivers inside Docker, fwiw. I don't know where you are looking for those networks, but many of the SOTA ones barely give you a requirements.txt. As I said, docker is mostly used for deployments, not when doing research, and the most you need for that is a virtual env, and that's more than enough to have specific versions of tf, pytorch, python itself,  and whatever else you may need without touching your system files. 

 

"You can not just "update" these things, because it breaks everything that runs on top of it.". You can, I've done so myself. Takes roughly 30 minutes in case some previously used API was deprecated, otherwise it's fine most of the time.

 

"And what if you want to use something that requires a newer version side-by-side? you can't." You can, virtual environments exists due to that and are common place in many python projects, not only ML-related ones.

 

"You can not side-by-side drivers except via virtualization" You don't need to, you can run 2 networks in parallel using the same GPU and sharing resources. A GPU isn't locked to a single application, you know? Otherwise one wouldn't be able to train networks while browsing the web.

 

Please, stop spouting stuff that you've read superficially or heard from others without actual, personal experience; you're just spreading misinformation this way.

 

 

12 minutes ago, Kisai said:

That very much is back into the realm of using system images in the same way docker is used.

Not really. Although the whole "layer" idea from both projects may seem similar, Docker's layers (as in, one per line of your Dockerfile) are just tar "diffs" based on the previous layer, in order to take advantage of caching (so you can rebuild stuff faster). The "image" here is just a userland that you can basically chroot into, agnostic to the underlying kernel and system.

 

Yocto, on the other hand, uses layers to separate the different parts and requirements of a distro, such as BSPs, kernel config files and any other extra binaries. It takes all of those, merges them into a single thing and then builds them all at once, generating a fully bootable distro at the end with the settings you made. The "image" here is a really hardware-specific distro that can be booted on bare-metal boards. Yocto is more akin to something like Buildroot than to Docker (which is similar to a chroot, as I said before).


9 hours ago, igormp said:

You can't pack drivers inside Docker, fwiw. I don't know where you are looking for those networks, but many of the SOTA ones barely give you a requirements.txt. As I said, docker is mostly used for deployments, not when doing research, and the most you need for that is a virtual env, and that's more than enough to have specific versions of tf, pytorch, python itself,  and whatever else you may need without touching your system files. 

 

"You can not just "update" these things, because it breaks everything that runs on top of it.". You can, I've done so myself. Takes roughly 30 minutes in case some previously used API was deprecated, otherwise it's fine most of the time.

 

"And what if you want to use something that requires a newer version side-by-side? you can't." You can, virtual environments exists due to that and are common place in many python projects, not only ML-related ones.

 

"You can not side-by-side drivers except via virtualization" You don't need to, you can run 2 networks in parallel using the same GPU and sharing resources. A GPU isn't locked to a single application, you know? Otherwise one wouldn't be able to train networks while browsing the web.

 

Please, stop spouting stuff that you've read superficially or heard from others without actual, personal experience; you're just spreading misinformation this way.

 

 

Not really. Although the whole "layer" idea from both projects may seem similar, Docker's layers (as in, one per line of your Dockerfile) are just tar "diffs" based on the previous layer, in order to take advantage of caching (so you can rebuild stuff faster). The "image" here is just a userland that you can basically chroot into, agnostic to the underlying kernel and system.

 

Yocto, on the other hand, uses layers to separate the different parts and requirements of a distro, such as BSPs, kernel config files and any other extra binaries. It takes all of those, merges them into a single thing and then builds them all at once, generating a fully bootable distro at the end with the settings you made. The "image" here is a really hardware-specific distro that can be booted on bare-metal boards. Yocto is more akin to something like Buildroot than to Docker (which is similar to a chroot, as I said before).

You can absolutely install drivers inside a Docker container, but they don't do anything; the OS you're running Docker on is the real "controller" of the hardware. BUT you can absolutely pass hardware through to the container and make use of it.

This can be achieved several ways depending on the version of Docker and whether you're running on swarm or not; an example of this: https://github.com/NVIDIA/nvidia-docker

Yocto is made to build embedded OSes; it acts as a packager of sorts.

@Kisai Docker doesn't use system images in the sense you're thinking of; this is the base for almost all the Docker images you run: https://hub.docker.com/_/scratch. You don't need to worry about the underlying hardware you're running on, whereas with Yocto you're building an OS that's going to run on the target hardware.

3 hours ago, zhnu said:

You can absolutely install drivers inside a Docker container, but they don't do anything; the OS you're running Docker on is the real "controller" of the hardware. BUT you can absolutely pass hardware through to the container and make use of it.

This can be achieved several ways depending on the version of Docker and whether you're running on swarm or not; an example of this: https://github.com/NVIDIA/nvidia-docker

Indeed, it's just that the step of "passing hardware" to Docker relies on Linux namespaces and cgroups to allow such access, and Kisai meant (in my understanding) something more akin to virtualization, where even the host system would no longer have access to it.

(nvidia-docker is now built into the official Docker, fwiw :P)
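
For anyone curious what that looks like from code, something along these lines should work with a reasonably recent docker-py (the image tag is just an example, and the exact kwargs may vary between docker-py versions):

import docker

client = docker.from_env()

# Ask for all GPUs to be exposed to the container, the programmatic
# equivalent of `docker run --gpus all ... nvidia-smi`.
logs = client.containers.run(
    "nvidia/cuda:11.0-base",
    "nvidia-smi",
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,
)
print(logs.decode())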


19 hours ago, gabrielcarvfer said:

The language (e.g. GAS and NASM) is the same. Grammar, syntax, parser, infrastructure, etc.

 

Instructions may differ (e.g. when you compile something targeting Skylake, you may use instructions added with that microarchitecture, which will cause the application to fail on microarchitectures that lack them, e.g. Westmere, Piledriver, Zen).

 

If manufacturers want, they could release one assembly language per architecture, but that would be pretty dumb. Notice I'm not talking about machine code, which definitely is dependent on the underlying architecture.

 

Wikipedia isn't a trustworthy source.

I'm pretty sure they're wrong on that, because I've researched it as a tutor for basic software classes (assemblers, compilers, static and dynamic linkers, virtual machines) at my university.

Assembly languages are generally one-to-one with a specific architecture, according to my computer architecture professor. With the example of modern x86 processors (Skylake, Piledriver, Zen, Sunny Cove), they can technically all be categorized under the broad x86 family of computer architectures, which is targeted by a whole family of slightly differing x86 assembly languages. Many x86 assembly languages are able to target a variety of x86-capable processors precisely because modern x86 processors are intentionally designed to support decades of legacy x86 assembly instructions (which some are starting to view as more and more of a hassle). You could even say that assembly and architecture are so closely tied that, even decades later, the reliance on and ubiquity of x86 assembly has implications for modern x86 processor design.

 

From what I know, manufacturers MUST release a new assembly language per new architecture, since no new assembly language implies no new architecture.

28 minutes ago, thechinchinsong said:

Assembly languages are generally one-to-one with a specific architecture, according to my computer architecture professor.

Are you sure your professor said that? Because this is incorrect. I think one of you misunderstood something, or maybe nomenclature changed over time.

 

28 minutes ago, thechinchinsong said:

With the example of modern x86 processors (Skylake, Piledriver, Zen, Sunny Cove), they can technically all be categorized under the broad x86 family of computer architectures, which is targeted by a whole family of slightly differing x86 assembly languages.

There are different assembly languages (GAS, NASM, MASM, Intel, ...), but not one for each architecture. Instruction changes do not create a new assembly language by themselves.

 

28 minutes ago, thechinchinsong said:

You could even say that assembly and architecture are so closely tied that, even decades later, the reliance on and ubiquity of x86 assembly has implications for modern x86 processor design.

Not really. Some languages can map assembly instructions into new hardware instructions, requiring the assembly code to be reassembled (just like you can recompile C code to different architectures). NASM also has higher-level pseudo instructions that can be mapped to multiple instructions or single instructions on x86 CPUs. (EDIT: Wrong assembly... NASM is x86 only. GAS supports multiple architectures but I'm not sure if it has pseudo instructions).

What is closely tied to the architecture are the instruction sets, which are assembled into native machine code. Those were maintained to keep binary compatibility without requiring some form of dynamic recompilation/emulation.


 

1 hour ago, gabrielcarvfer said:

Are you sure your professor said that? Because this is incorrect. I think one of you misunderstood something, or maybe nomenclature changed over time.

 

There are different assembly languages (GAS, NASM, MASM, Intel, ...), but not one for each architecture. Instruction changes do not create a new assembly language by themselves.

 

Not really. Some languages can map assembly instructions into new hardware instructions, requiring the assembly code to be reassembled (just like you can recompile C code to different architectures). NASM also has higher-level pseudo instructions that can be mapped to multiple instructions or single instructions on x86 CPUs. (EDIT: Wrong assembly... NASM is x86 only. GAS supports multiple architectures but I'm not sure if it has pseudo instructions).

Instruction sets and instructions are what make up a large portion of what assembly languages are. Like how a very basic MIPS-32 architecture is based around an instruction set, which is used in the assembly code, which is then assembled into native machine code. Of course these were all kept in the case of x86 architectures in order to maintain compatibility, which is precisely why all these architectures can be said to be grouped under the x86 architecture. NASM is an example of an assembly language specific to x86 that is able to be somewhat ported to PowerPC and SPARC through the use of cross-compilers. Of course instruction changes do not create new assembly languages by themselves, since introducing new instructions does not necessarily mean it is a new architecture (but oftentimes this is the case). If a manufacturer were to create a new architecture today, with completely new instruction sets, it would be 100% necessary to create a new assembly language alongside that brand new architecture. If said architecture were already compatible with an existing assembly language like x86, PowerPC, SPARC, ARM, MIPS, RISC-V, etc., then it would not be considered a new architecture, as it would need to implement physical design changes specifically to do so.

 

Yes, the terminology of "Skylake" or "Sunny Cove" as an architecture is correct and used widely throughout industry, but that does not mean they don't fall under the broader category of "x86" architectures (or at least x86-compatible). What this means is that if one were to come up with a "new" arch that also supports previous assembly languages like ARM, RISC-V, MIPS, x86, etc., that said "new" arch would need physical hardware specifically tuned to accept each assembly language.

 

The way I say "x86 assembly" is directly tied to your next statement, which I should have made clear.

1 hour ago, gabrielcarvfer said:


What is closely tied to the architecture are the instruction sets, which are assembled into native machine code. Those were maintained to keep binary compatibility without requiring some form of dynamic recompilation/emulation.

According to "Computer Organization and Design" (Patterson & Hennessy 5th edition), the numeric version of instructions is called machine language simply to distinguish it from the more human readable form (assembly language).

 

The point is that since assembly is the way to use instructions from a specific instruction set, and since each architecture has a different instruction set, the assembly must be different. Again, the main point is that assembly is, for the most part, a line-for-line translation of machine code, which you acknowledge is architecture dependent.

 

For example in MIPS:

add t1 t2 t3 (a single line using a supported MIPS instruction) ---> 0x014B4820 (hex, since it's easier to read than binary)

This adds the values in t2 (register 10) and t3 (register 11) and stores the result in t1 (register 9)

 

In RISCV:

add t1 t2 t3,

Would still add the values in t2 (register 7) and t3 (register 28) and store the result in t1 (register 6), but directly translating it to hexadecimal would yield 0x01C38333.

 

Each of these 32 binary digits corresponds to a datapath in the specific computer architecture itself, which must be explicitly designed to support the instruction, and thus the assembly language. The fact is that you cannot take 0x01C38333 and run it on a MIPS architecture and expect the same result; MIPS would require 0x014B4820 in order to produce that result.
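
To make the comparison concrete, here is a small Python sketch that packs that same add into both R-type layouts (the helper names are mine, but the field positions and register numbers are the architectural ones):

def mips_add(rd, rs, rt):
    # MIPS R-type: opcode(6)=0 | rs(5) | rt(5) | rd(5) | shamt(5)=0 | funct(6)=0x20
    return (rs << 21) | (rt << 16) | (rd << 11) | 0x20

def riscv_add(rd, rs1, rs2):
    # RV32I R-type: funct7(7)=0 | rs2(5) | rs1(5) | funct3(3)=0 | rd(5) | opcode(7)=0x33
    return (rs2 << 20) | (rs1 << 15) | (rd << 7) | 0x33

print(hex(mips_add(9, 10, 11)))   # MIPS:   add t1, t2, t3 -> 0x14b4820
print(hex(riscv_add(6, 7, 28)))   # RISC-V: add t1, t2, t3 -> 0x1c38333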

 

Since the assembly language is itself tied to the instructions it is allowed to use, and the instruction sets are closely tied to the architecture, does that not mean that the assembly language itself is tied to the architecture?

3 hours ago, thechinchinsong said:

Instruction sets and instructions are what make up a large portion of what assembly languages are.

Correct, but not due to the underlying microarchitecture.

3 hours ago, thechinchinsong said:

Like how a very basic MIPS-32 architecture is based around an instruction set, which is used in the assembly code, which is then assembled into native machine code.

Exactly.

3 hours ago, thechinchinsong said:

Of course these were all kept in the case of x86 architectures in order to maintain compatibility, which is precisely why all these architectures can be said to be grouped under the x86 architecture.

Which may support different combinations of instruction sets and still share a single language, even though not every instruction is supported by all microarchitectures. It is a clear example of what I said.

 

3 hours ago, thechinchinsong said:

If a manufacturer were to create a new architecture today, with completely new instruction sets, it would be 100% necessary to create a new assembly language alongside that brand new architecture.

This is the point I still didn't manage to get across. No, they don't. They just need to map their registers and instructions. The only scenario where it is absolutely necessary to create a new assembly language is when your architecture doesn't share the same computational model we are accustomed to working with (e.g. move data to registers, operate on data in the registers, move data back to memory). The only reason you don't see cross-architecture assemblers is that most architectures are very niche (x86 and ARM being notable exceptions) and manufacturers don't think it is worth the trouble.
 

3 hours ago, thechinchinsong said:

According to "Computer Organization and Design" (Patterson & Hennessy 5th edition), the numeric version of instructions is called machine language simply to distinguish it from the more human readable form (assembly language).

 

The point is that since assembly is the way to use instructions from a specific instruction set, and since each architecture has a different instruction set, the assembly must be different. Again, the main point is that assembly is, for the most part, a line-for-line translation of machine code, which you acknowledge is architecture dependent.

This is an oversimplification for undergrad students. The link I've provided earlier shows that this is definitely not the case.
Assembly language is a way to represent instructions executed by a machine in a readable form. It isn't bound to a specific instruction set, even though most assembly languages are.

"For the most part" means that it isn't. What I said is that the machine code is architecture dependent.

 

3 hours ago, thechinchinsong said:

The fact is that you cannot take 0x01C38333 and run it on a MIPS architecture and expect the same result; MIPS would require 0x014B4820 in order to produce that result.

This is binary compatibility. Never said this was the case.


What prevents an assembler from mapping an ADD $(r1), $(r2), $(r3) into x86 or MIPS R3000/R4000 machine code?
Both have 32-bit registers, which can add, subtract, multiply, load, store, move data, etc.
x86 has fancy instructions that can do the loads, register operations and stores with a single instruction (reducing the binary size).
MIPS would need at least a lw t1, $(r2); lw t2, $(r3); add t3, t1, t2; sw t3, $(r1), plus a few instructions to save previous values if they were previously used (which can later be removed by the linker).
It's a trivial mapping of pseudo-instructions and pseudo-registers onto architecture-specific registers and instructions, which are then used to generate the proper binaries with machine code.
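
To put that mapping in code, here is a toy Python sketch of lowering a made-up three-operand ADD pseudo-instruction to two different targets (the register names and emitted sequences are purely illustrative, not a real assembler):

def lower_add(target, dst, src1, src2):
    # Map one generic "ADD dst, src1, src2" onto target-specific mnemonics.
    if target == "x86":
        # x86 add is two-operand, so copy first, then accumulate.
        return [f"mov {dst}, {src1}", f"add {dst}, {src2}"]
    if target == "mips":
        # MIPS add is already three-operand.
        return [f"add {dst}, {src1}, {src2}"]
    raise ValueError(f"unknown target: {target}")

print(lower_add("x86", "eax", "ebx", "ecx"))
print(lower_add("mips", "$t3", "$t1", "$t2"))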


 

 

2 hours ago, gabrielcarvfer said:

Which may support different combinations of instruction sets and still share a single language, even though not every instruction is supported by all microarchitectures. It is a clear example of what I said.

That's precisely my point. All of those microarchitectures fall under the x86 architecture family. They share a single x86 assembly language that can vary in which specific instructions are supported or extended, but all of these microarchitectures are known as x86 architectures; hence, they all use the x86 assembly language.

2 hours ago, gabrielcarvfer said:

This is the point I still didn't manage to get across. No, they don't. They just need to map their registers and instructions. The only scenario where it is absolutely necessary to create a new assembly language is when your architecture doesn't share the same computational model we are accustomed to working with (e.g. move data to registers, operate on data in the registers, move data back to memory). The only reason you don't see cross-architecture assemblers is that most architectures are very niche (x86 and ARM being notable exceptions) and manufacturers don't think it is worth the trouble.

Yes, by mapping their registers and instructions, as in the MIPS/ARM example where registers 3-5 might be 28-30 in a different architecture, the way they must write the assembly, and thus the machine code, is different. Yes, of course most architectures today will share many instructions and many will be extremely similar, but that is the obvious convergence driven by all computational machines needing some of the same basic functionality. Yes, architectures share computational models such as operating with registers and memory, but differences in the architectures such as register size and different pipelining/forwarding implementations cause them to differ. I'm not saying cross-architecture assemblers don't exist, but the very reason they do is that there are different architectures that would require different assembly code (languages) were it not for the translation capability of that software.

2 hours ago, gabrielcarvfer said:

This is an oversimplification for undergrad students. The link I've provided earlier shows that this is definitely not the case.
Assembly language is a way to represent instructions executed by a machine in a readable form. It isn't bound to a specific instruction set, even though most assembly languages are.

"For the most part" means that it isn't. What I said is that the machine code is architecture dependent.

Can you show me the link again? Is it from a much earlier comment? If assembly is used to represent instructions, and the instruction set is specific to each microarchitecture, how is it then that assembly isn't specific to each microarchitecture? Instructions used by the assembly language belong to the Instruction Set Architecture (ISA) of the hardware, which is, by definition, the architecture of the hardware, meaning that the assembly language for one ISA will differ from the assembly language of another ISA.

2 hours ago, gabrielcarvfer said:

This is binary compatibility. Never said this was the case.


What prevents an assembler from mapping an ADD $(r1), $(r2), $(r3) into x86 or MIPS R3000/R4000 machine code?
Both have 32-bit registers, which can add, subtract, multiply, load, store, move data, etc.
x86 has fancy instructions that can do the loads, register operations and stores with a single instruction (reducing the binary size).
MIPS would need at least a lw t1, $(r2); lw t2, $(r3); add t3, t1, t2; sw t3, $(r1), plus a few instructions to save previous values if they were previously used (which can later be removed by the linker).
It's a trivial mapping of pseudo-instructions and pseudo-registers onto architecture-specific registers and instructions, which are then used to generate the proper binaries with machine code.

This does not preclude the fact that assembly code written for the MIPS ISA would need to be translated into a form recognizable by another ISA, hence translating to another, different assembly code. Yes, much of the assembly code for one architecture is easily assembled into another, but it is precisely the slight differences and different instructions that require a different assembler to be used and slightly different notation (such as which t2, s3, r0, etc.). Again, I'm not denying the existence of cross-assemblers, but don't they differ fundamentally from how the assembly language needed by one ISA is different from the assembly language of another? Even in the example of ARM and x86, where cross-assemblers exist, the fact that they do exist, and require specific software to perform those translations, suggests that you are translating from one to another, no matter how similar or different the exact languages they use.

 

I think I'm understanding your point about the similarities between ISAs better, but I'm struggling to understand why, if assembly language is a way to represent instructions executed by a machine in a readable form, it isn't bound to a specific instruction set, even though most assembly languages are. Even assembly languages like GAS that are cross-compile capable are created specifically to support the multiple architectures they can target. I'm not saying there can't exist assembly languages created independently of microarchitectures that are able to target multiple ISAs; I'm saying you can't have a new microarchitecture without having a new assembler (thus dictating the new assembly language syntax), or it would fall under an already existing microarchitecture.

 

Edit: Perhaps I've been misunderstanding the whole time. Are you referring to different microarchitectures specifically, like Nehalem/Skylake/Sunny Cove, ARM32/ARM64, MIPS32/MIPS64, or Bulldozer/Matisse? I've been thinking about ISAs specifically. Different microarchitectures like Nehalem/Skylake/Sunny Cove will all implement the x86 ISA in differing ways. Hence, these microarchitectures are developed in the absence of any new assembly language, while x86 is still the underlying ISA.

