
GPU instruction sets?

BenCamps

Not sure if this is the right place to ask this question, but since it has more to do with computer science than general GPU discussion, I'm going to park it here.

The other day I was trying to explain GPUs to someone and why they are different from CPUs. The person I was talking to was aware that desktop CPUs typically use x86/x64 and phones typically use ARM, so they asked, "What instruction set do GPUs use? And do Nvidia and AMD use different instruction sets?"

I drew a blank, because I couldn't recall ever hearing anyone talk about processor architectures on GPUs. I realized that they wouldn't be using x86 or x64... but then what are they using? Or does everything I understand about CPU architectures not apply to GPUs? What about other coprocessors?

 

I did a little bit of searching around, but the closest things I could find were

PTX - a pseudo-assembly language used in Nvidia's CUDA programming environment

and SPIR - an intermediate language for parallel compute and graphics, originally developed for use with OpenCL. (Note: OpenCL is not OpenGL.)

Because GPUs are programmed through CUDA/OpenGL/DirectX/OpenCL etc., the instruction set never really needs to be exposed to the programmer. Nvidia/AMD will create an instruction set for their new GPU architecture without worrying about binary compatibility with older architectures. All of that is handled by the driver.

As far as I'm aware, there's no proper name for any of the ISAs used.
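
To make that concrete, here's a minimal sketch on the CUDA side (the kernel and the exact compiler flags are just for illustration). The source never mentions the GPU's native instruction set; the toolchain can embed portable PTX, and the driver finishes the translation for whatever GPU is actually installed:

// add.cu: nothing in the source names the GPU's native ISA.
__global__ void add(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one array element per thread
    if (i < n)
        c[i] = a[i] + b[i];
}

// Building with something like:
//   nvcc -gencode arch=compute_70,code=compute_70 add.cu
// embeds PTX (a portable pseudo-assembly) in the binary, and the GPU driver
// JIT-compiles that PTX into the card's real machine code at run time, which
// is how compatibility across architectures ends up being handled.

If you're curious what the native instructions actually look like, tools like cuobjdump can disassemble the compiled machine code (SASS) out of a CUDA binary.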

AMD Ryzen R7 1700 (3.8ghz) w/ NH-D14, EVGA RTX 2080 XC (stock), 4*4GB DDR4 3000MT/s RAM, Gigabyte AB350-Gaming-3 MB, CX750M PSU, 1.5TB SDD + 7TB HDD, Phanteks enthoo pro case

What does the GPU in my smartphone use?

Sudo make me a sandwich 

10 hours ago, logic_error said:

It's not as simple as that, though. Your GCN link is for the first generation only. Vega cards use a much enhanced version of the ISA with extra functions, such as packed math. https://developer.amd.com/wp-content/resources/Vega_Shader_ISA_28July2017.pdf

Gaming Rig:CPU: Xeon E3-1230 v2¦RAM: 16GB DDR3 Balistix 1600Mhz¦MB: MSI Z77A-G43¦HDD: 480GB SSD, 3.5TB HDDs¦GPU: AMD Radeon VII¦PSU: FSP 700W¦Case: Carbide 300R

 

  • 1 month later...

This is a relatively old post but I wanted to chime in for any interested readers.

As @logic_error and @Madgemade pointed out, instruction sets do exist for GPUs, and they are usually tied to a specific architecture. For example, AMD/ATI used a VLIW (very long instruction word) architecture before moving on to GCN (Graphics Core Next), and each had its own instruction set.

For the most part, the developer does not need to know anything about the native instruction set of the GPU, because they program the GPU through APIs such as DirectX or OpenGL. These are high-level APIs, which means the developer focuses on what they want the graphics card to do, not how to do it. Think of it as talking to the GPU in a more conversational manner, like "draw a circle", as opposed to mathematically describing to the computer which pixels to fill in to draw a circle. It is the graphics driver's job to take the simple language of the API, convert it into machine code for the GPU's instruction set, and send it to the GPU in real time!
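
To make the "draw a circle" idea concrete, here's a rough sketch using old-style (immediate mode) OpenGL; the calls are real legacy GL functions, but the snippet is only illustrative. The developer just describes the shape, and it's the driver's job to turn these calls into the GPU's native instructions:

#include <GL/gl.h>
#include <math.h>

// Describe a filled circle as a triangle fan. The API says *what* to draw;
// the driver and GPU decide *how* the pixels actually get filled in.
void draw_circle(float cx, float cy, float r)
{
    glBegin(GL_TRIANGLE_FAN);
    glVertex2f(cx, cy);                                   // centre of the fan
    for (int i = 0; i <= 64; ++i) {
        float a = 2.0f * 3.1415926f * (float)i / 64.0f;   // angle around the circle
        glVertex2f(cx + r * cosf(a), cy + r * sinf(a));
    }
    glEnd();                                              // driver translates and submits
}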

Low-level APIs also exist, such as DirectX 12, Vulkan and Metal. These offer developers a way to code that is just one step above writing out the machine language directly, letting them tap into the specific instructions of the GPU, or commonly used strings of instructions. In the 1990s, it was common for PC games to be released in several editions supporting several different APIs, because each graphics manufacturer had its own API. I remember MechWarrior 2 being released with a CPU-only version, an Intel MMX CPU version, an ATI 3D Rage version, an Nvidia version, a 3dfx version, a Matrox version, and eventually a Direct3D version! They actually wrote the graphics engine for that game for each brand of card! Once Direct3D and OpenGL took off, though, this was no longer the norm.

DirectX 12 and Vulkan have taken off in a quest for efficiency rather than convenience. The driver doesn't always do a good job of taking the plain language of a high-level API and converting it into computationally efficient code. That's where driver optimizations come in: AMD and Nvidia routinely run diagnostics on how their drivers behave with different games, and they can see how efficiently the compiled machine code is running. If they spot inefficiencies, they will make the driver behave differently for specific games. Developers have more control over the GPU and can be more efficient using a low-level API, at the cost of convenience. By using a high-level API, however, the burden of optimizing a game falls more on AMD or Nvidia. There are pluses and minuses to both approaches, but the good thing is that developers now have options in how they approach programming graphics.

On 12/15/2018 at 2:53 PM, BenCamps said:

Or does everything I understand about CPU architectures not apply to GPUs?

 

That depends on how low in the architecture you're talking about. Broadly, the same general ideas in design show through: they are both still programmable processors. There are, however, a few key differences between GPUs and CPUs:

 

  • CPUs are more concerned with achieving high instruction throughput, while GPUs are more concerned with achieving high data throughput.
  • CPUs are (generally) not doing time-sensitive tasks. GPUs have to have low latency, especially if they want to support VR.
  • CPUs are concerned with maintaining separation between multiple "active" programs. GPUs (hardware) run a single program at a time.
  • CPUs handle many types of I/O. GPUs handle only a few types of I/O.

 

So now that we've identified the key differences in the needs of the different processors, we can translate that into differences in architectural needs:

  • CPUs need to have deep pipelines and be able to predict which instructions might come next, while GPUs need to have many cores that run the same stream of instructions on different data (see the sketch just below this list).
  • CPUs can (generally) complete tasks in indeterminate time. GPUs need to complete tasks in determinate time.
  • CPUs need some method of managing multiple active tasks. GPUs need only manage one task.
  • CPUs need to get many kinds of input from the outside world. GPUs need only get streams of instructions and data, and output data or video.
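
Here's that "same instructions, different data" point as a small, made-up CUDA sketch (the names and sizes are just for the example): the CPU version is one instruction stream stepping through the elements in order, while the GPU version launches that same instruction stream across thousands of threads, each grabbing its own element.

// CPU style: a single instruction stream walks the array serially.
void scale_cpu(float* x, float s, int n)
{
    for (int i = 0; i < n; ++i)
        x[i] *= s;
}

// GPU style: every thread executes this same code, but each one computes a
// different index, so one instruction stream covers thousands of elements.
__global__ void scale_gpu(float* x, float s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= s;
}

// Launched as, e.g.:  scale_gpu<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);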

Now that we've identified the architectural differences, we can start to identify differences in the ISAs:

  • CPUs need memory management, multithreading, I/O, logic, and math instructions.
  • GPUs need instructions that perform the same actions on wide data sets.

Since processors are just hardware implementations of an ISA, we can start to see how things are going to look:

  • A CPU will be geared towards quickly completing single instructions, while GPUs will be geared towards quickly operating on lots of data.

The two tasks are clearly very different from each other; however, we can't really get much more in-depth than that, because GPU manufacturers aren't really going around sharing their ISAs publicly.

However, if you want to see how they might work, here is the Instruction Set Reference for the PowerVR line of embedded GPUs. Compare it to the AVR Instruction Set Reference.

 

The two are similar in complexity in terms of the number of instructions, but the PowerVR has fewer instructions for I/O and conditional execution and many more for math. Conversely, the AVR has many more instructions for conditional execution and I/O, and far fewer for math.

ENCRYPTION IS NOT A CRIME

On 12/15/2018 at 5:11 PM, wasab said:

What does the GPU in my smartphone use?

Smartphone GPUs can also have their own instruction sets (I know for a fact that ARM Mali does). Unlike AMD's GCN and the like, however, I haven't seen any public documentation for them.

On 12/15/2018 at 8:53 PM, BenCamps said:

and SPIR - an intermediate language for parallel compute and graphics, originally developed for use with OpenCL. (Note: OpenCL is not OpenGL.)

SPIR-V - the fifth iteration of SPIR - is an important (and cross-platform) assembly-like language for GPUs. It allows the front end of the compilation process to be done ahead of time, and with a single front end.

 

Unlike OpenGL, Vulkan doesn't take shader source directly; instead, it uses SPIR-V. The shaders are compiled to SPIR-V, and these shaders (in theory) will work on any platform with Vulkan support, be it a PC with a Radeon VII, a Nintendo Switch with its Nvidia Tegra GPU, or a smartphone with a Mali or Adreno GPU.
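
As a rough sketch of what that looks like on the application side (the file name is just an example, and error handling is omitted): the shader is compiled to SPIR-V ahead of time, for example with glslangValidator, and the Vulkan program simply hands the binary to the driver, which compiles it the rest of the way down to the GPU's native ISA.

#include <vulkan/vulkan.h>
#include <cstdint>
#include <fstream>
#include <vector>

// Load a SPIR-V binary produced offline, e.g. with:
//   glslangValidator -V shader.frag -o shader.spv
// and wrap it in a VkShaderModule; the driver lowers it to native GPU code.
VkShaderModule load_spirv(VkDevice device, const char* path)
{
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    size_t size = static_cast<size_t>(file.tellg());
    std::vector<uint32_t> code((size + 3) / 4);          // SPIR-V is a stream of 32-bit words
    file.seekg(0);
    file.read(reinterpret_cast<char*>(code.data()), static_cast<std::streamsize>(size));

    VkShaderModuleCreateInfo info{};
    info.sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    info.codeSize = size;                                // size in bytes
    info.pCode    = code.data();

    VkShaderModule module = VK_NULL_HANDLE;
    vkCreateShaderModule(device, &info, nullptr, &module);   // error check omitted
    return module;
}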

 

It can even be used when developing for iOS and macOS, although it has to be converted into MSL (Metal Shading Language), using tools like those in MoltenVK.
