Programming can be a useful skill to have, as it's the skill of telling computers what to do. One of the problems, though, is that there are so many programming languages, or ways to tell a computer what to do. Programming languages can be thought of like natural languages, in that they have clearly defined syntax, or rules, but they all revolve around doing the same things. So why have so many languages? Because some people thought there were better ways to do certain things. It's like how many East Asian languages chose pictographs for words, whereas Western languages chose combinations of letters to form words.

This guide is a primer on those programming concepts, or at least what most, if not all, programming languages use to achieve the result of telling a computer what to do. It doesn't teach any particular language, as I don't feel it's beneficial to the reader to use one language as an example.

But why bother writing about programming concepts instead of making a tutorial on a particular language? Because once you have the concepts down, learning a language becomes much easier.

The Overall Concepts

For the purposes of this topic, I will go over two main concepts: the main features of most programming languages and how programming languages become... well, programs. Note that while there are other "languages" that technically tell a computer what to do, they have their defining characteristics:

- A programming language describes, and I quote Wikipedia, programs. Programs directly control the behavior of the machine or express algorithms.
- A scripting language is something that usually supplements programs. While a scripting language can be similar to a programming language, the defining point here is that a program must already be running to take in the scripting language, read it, figure out what to do, then execute it. A programming language, when turned into a program, needs no further processing.
- A markup language is something that describes a data set in order to present something. The name originates from the idea of "marking up a paper" in editing.

The Main Features of a Programming Language

I've found that most programming languages have four main features:

- Symbols that represent data
- Operators that modify the data
- Conditional statements to control the flow of a program
- The ability to jump around the program at will

Symbols (and Scope)

Symbols are a way of giving a piece of data a name. Otherwise, you would be referring to data in terms of address locations in system memory. Typically symbol names must be unique, but how unique depends on another concept in some programming languages: scope. Scope is a way of specifying context within a section or subsection of a program.

To put this into a real-world concept, let's pretend symbols are like an address in the real world. And let's take an example address: 1313 Disneyland Dr, Anaheim, CA 92802 USA

For those of you not familiar with the United States addressing system, this is what it breaks down to:

- 1313 is the street number. This tells us where this location is on the street (which is the next level up). There can be only one street number per street.
- Disneyland Dr. is the name of the street the location is on. There can be only one street name per town/city/etc.
- Anaheim is the city the street is in. There can be only one town/city/etc. with this name in the state/province/etc.
- CA is shorthand for California, the state the city is in. There can be only one state/province with this name in the country.
- 92802 is what's known as the ZIP code. Some countries use a postal code. This is more for the post office's benefit, as it quickly narrows down an area. Like states and provinces, there can be only one ZIP code with this number in the country.
- USA is the country the state belongs to. There can be only one country with this name in the world.
Notice how I kept saying "there can be only one" of something. Let's talk about this, but go backwards:

- A country must be unique within the scope of the world.
- With states and provinces, they must be unique within the scope of the country. However, another country can have a state/province with the same name. There's Punjab, a state in India, and Punjab, a province in Pakistan.
- With cities/towns/etc., they must be unique within the scope of the state/province. However, within the country, two or more states/provinces can have a city with the same name. For example, there's a Kansas City, Missouri and a Kansas City, Kansas. Or if you want to go beyond countries, there's a Dublin, California, USA, and a Dublin, Leinster, Ireland.
- A street must be unique within the scope of a town/city. But other towns/cities can have the same street name. I don't think I have to tell you how many "Main Streets" there are.
- A street number must be unique within the scope of a street.

Within programming languages, there are concepts similar to the address system our civilization has come up with. You have a global scope, which anything in the program can see. You have a file scope, which anything within the same source file can see. And then you have various levels of "local" scope, such as a subroutine scope, an if-block scope, and others. These types of local scope vary depending on the language you're using.

This is why I mentioned that the uniqueness of a symbol name depends on its scope. For example, if I had a symbol named "lolcats" in the global scope, then I can't (or should not be able to) use "lolcats" for anything else. If I tried to say in a file "I want to use 'lolcats' for something else," different things can happen depending on the programming language. Some programming languages won't let you use a symbol in the global scope for anything other than the global scope. Other languages require you to explicitly state which scope you mean.
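As a sketch of how this can play out, here's how Python (used purely as an example language) handles a name that exists in both the global scope and a subroutine's local scope:

```python
lolcats = "global"  # defined at the global (module) scope

def inner():
    # This "lolcats" shadows the global one; it exists only
    # within this subroutine's local scope.
    lolcats = "local"
    return lolcats

print(inner())    # prints "local"
print(lolcats)    # prints "global" -- the outer symbol is untouched
```

Other languages behave differently: some would reject the inner definition outright, and some would make you write something explicit to distinguish the two.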
However, if I had "lolcats" in one file at the file scope level, I can generally use "lolcats" in another file without worry.

Scope is important, since it limits ambiguity during the step of turning the source files into a program. In the real world, this can be a problem. I know someone in Dublin, California. However, I have to specify "Dublin, California," otherwise people think I'm talking about Dublin, Ireland. Likewise, if I say "New York," people may get confused. Am I talking about New York City (as New York City is often shortened to just "New York"), or am I talking about New York, the state? Or if you really want to go further, am I talking about one of the many places named New York in various states of the US? By limiting the scope of where I'm talking about, I create context so people can understand me.

Summary: Symbols are names for data. They must be uniquely named depending on the scope. Scope is a way of specifying the context in a section or subsection of the program.

Operators

Operators are ways to modify the data. Despite the myriad of ways you can appear to modify data, there's really only a handful of things that are done to it:

- Basic arithmetic: addition, subtraction, multiplication, and division.
- Logical operations, such as NOT, AND, OR, and XOR, or combinations thereof.
- Bitwise operations, such as shifting, rotating, or masking.
- Assignment, or telling a piece of data it equals something.

Programming languages may let you chain operations together, which creates a bit of a problem: in what order do the operations complete? Recall that in math class you learned the order of operations. A similar thing applies here, and like in math, they're more or less the same.
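As a quick illustration (in Python, purely as an example), precedence determines the result when operations are chained, and parentheses override it:

```python
# Multiplication binds tighter than addition...
result = 2 + 3 * 4       # evaluates as 2 + (3 * 4) = 14

# ...unless parentheses force the addition to happen first.
grouped = (2 + 3) * 4    # evaluates as 5 * 4 = 20

print(result, grouped)
```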
As various languages have different operators, I'll generalize the order of operations that can be more or less expected out of a given language:

1. Anything in parentheses or brackets
2. Multiplication or division
3. Addition or subtraction
4. Logical or bitwise operations
5. Assignment

Summary: Operators modify the data. Generally there's also an order the operations are done in, regardless of how the expression is written. If you want to ensure a program does something first (or sooner), wrap it in parentheses.

Flow Control

This idea is arguably what makes programming powerful; however, it's also its greatest weakness. Flow control examines the state of the data and makes a decision based on it. For example, if you want to ride a bicycle, what's the weather like? I'm sure you wouldn't want to ride in the rain.

Most programming languages have two types of flow control: if statements and loops.

An if statement examines one or more pieces of data and sees if it matches what was specified in the source file. If it does, then the program goes down one path in the code. If not, it goes down the other. Many languages support if-else, which is a way of chaining if statements together or providing a default case if the data does not meet any of the criteria.

A loop is a section of the program that gets repeated as long as some condition exists. In a naive sense, this is to prevent you from copying and pasting a piece of code over and over again. But if you don't know how many times you need to repeat part of the program, you can't exactly copy and paste it so many times, can you? A lot of languages have two forms of loops:

- While-loop: This is a loop that will keep repeating as long as some condition is true. This is useful if the number of iterations cannot be predicted. For example, if you are waiting on user input and inputting a certain letter quits the loop, you do not know when the user will stop the program.
- For-loop.
This is a loop meant for repeating the same part of a program a set number of times. This is useful if the number of iterations can be known. For example, you can use a for-loop to compute the value of an exponential. Since an exponential is some number multiplied by itself so many times, the number of iterations is known.

The reason I say flow control is also programming's greatest weakness is twofold. The biggest reason is that it creates complexity. For every if-condition you have, you double the possible outcomes of your program. Rampant use of if statements can lead to situations where your program does not behave the way you expect because it hit just the right combination of data you weren't expecting. Debugging this is a nightmare as you try to figure out what caused the problem and how it got there. One approach, if you are checking many things at once, is to create a single state variable that changes based on what happened. The other reason is that if the CPU supports branch prediction, if statements can hurt performance. Branch prediction tries to preload the instructions the CPU thinks the program will run. If it mispredicts, it has to dump everything it loaded, creating a stall in execution.

Summary: Flow control changes where the code will run based on the state of some data and what the programmer specifies. And while powerful, it also creates complexity in programs.

Jumping around the program at will

This ability allows programmers to create sections of code that can be used whenever they're needed. Imagine for a moment you're writing a program and you want it to print something to the display. Would you rather:

- Copy and paste the steps to print that thing to the display every time you need to use it?
- Have a way to write the instructions once, then any time you want to print something to the display, jump to that set of instructions with some parameters?
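To make the second option concrete, here's a sketch in Python (used purely as an example language; the subroutine name and messages are made up for illustration). It also happens to use the flow control from the previous section: an if statement providing a default case and a for-loop with a known iteration count:

```python
def make_banner(message):
    # Written once; any part of the program can jump here.
    if len(message) == 0:          # an if statement providing a default case
        message = "(empty)"
    line = ""
    for _ in range(len(message)):  # a for-loop: the iteration count is known
        line += "-"
    return line + "\n" + message + "\n" + line

# Called ("jumped to") as many times as needed, with different
# parameters, instead of copy-pasting the same steps each time.
print(make_banner("hello"))
print(make_banner(""))
```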
To tie this to something in the real world, there are plenty of commands and tasks we do that involve more steps than the command by itself would suggest. If I told you "make me a peanut butter and jelly sandwich," chances are I don't need to tell you "go get the loaf of bread and the peanut butter in the cupboard, get the jelly out of the refrigerator, get a knife from the drawer, open the bag the loaf of bread is in, get two slices of bread out, open the..." Okay, hopefully you get the idea. Not only does this make code more readable, but it significantly reduces the size of the program by avoiding repeats of the same operation every place it's needed.

There are typically two ways of jumping around a program:

- Unconditional jumping, commonly referred to as the goto statement. I'm mentioning this for historical reasons (unless you get down into assembly language). This statement will jump to some label in the program. It's heavily fallen out of use because misuse of it created something called spaghetti code, where the code appeared to have no flow or structure due to how many goto statements were in the program.
- Subroutines, which take the form in other languages of procedures, functions, or methods. This is a more organized way of achieving what the goto statement was trying to do. Here, you call a subroutine by name, either with parameters or without, and when the subroutine is done, the program jumps back to where it was called from.

Summary: The ability to jump at will allows the creation of routines and subroutines for readability and to keep program sizes small.

How Programs are Made

Programs are made by, strangely enough, another program taking in what the user has inputted and turning that into a program.
In the context of making a program, there are generally two types of programming languages, depending on whether the language talks directly to the hardware or is more generalized:

- Low-level programming language: This type of programming language talks directly to hardware. As a consequence, a program written in a low-level language is only understood by the hardware it was written for. Generally speaking, all forms of low-level language are called assembly language. If you want to get technical, there is also "machine language," which is literally the pattern of 0s and 1s fed into the computer, but writing programs in machine language is almost never done these days. The process of turning a low-level language into a program is called assembling.
- High-level programming language: This is a, usually, more natural expression of what we want a computer to do. Since high-level languages are also more generalized, they are not bound to any machine. However, some sort of "translation" must be done to convert the high-level language into a lower-level form. The process of turning a high-level language into something that can be executed is called either compiling or interpreting.

While there are a lot of variations, these three methods cover the gist of it:

- Ahead-of-Time Compiling (usually just "compiling" or AoT): This turns the source code into a program that can be run as-is. This has the fastest execution time and uses the least amount of resources. Examples of normally Ahead-of-Time compiled languages are C, C++, and Fortran.
- Just-in-Time Compiling (usually shortened to JIT): The source code is compiled into an intermediate form. Parts of that intermediate form are then compiled further for the machine when needed, hence "just in time." However, this requires a special framework to run. While execution time can be almost as fast as AoT, it takes up more resources. Examples of normally JIT-compiled languages are Java and C#.
- Interpreting: This takes each line of source code and runs it one by one. Languages that can be interpreted (but are often compiled for speed) include Python, BASIC, and scripting languages like JavaScript.

Concepts you should also know

Computers start at 0

Everything is indexed from 0, because 0 is a valid address! So any time you see me starting at 0, this is why.

Don't question it, it's just how programmers roll.

Most significant digit and least significant digit

Let's start this one with a number we're probably familiar with: 123,456,789. The most significant digit is the left-most one, which carries the highest value. In this case, it's not really a "1," but "100,000,000." The least significant digit in this number is the right-most one, or 9, which represents plain ol' 9.

A slight sidetrack is something called endianness. Endianness describes whether the leftmost digit is the most significant digit (big endian) or the least significant digit (little endian). This isn't that important to know for programming unless you're dealing down at the bit level in an architecture or communication protocol. Endianness can also extend to byte order. If you have a four-byte value that's read naturally as 0x12 34 56 78, little endian will store it as 0x78 56 34 12, whereas big endian will store it as you would read it naturally.

Binary and Hexadecimal

It's very handy to know these two number systems. Binary is base 2, meaning there are only two symbols per digit (0 and 1). Hexadecimal (or hex) is base 16, which in addition to the usual 0-9 we're used to, extends on to A-F, so the sequence is 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F.

Converting between binary and hex is surprisingly easy: every four digits in a binary number map to a single digit in a hex number. It's converting either binary or hex into decimal, of course, that causes some headaches.
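If you have Python handy (again, just as one example language), you can check these conversions with its built-in number literals and formatting:

```python
value = 0b10110011          # a binary literal (179 in decimal)
print(hex(value))           # '0xb3' -- each group of 4 binary digits
                            # (1011 and 0011) became one hex digit
print(int("B3", 16))        # 179 -- parse hex back into decimal
print(format(179, "b"))     # '10110011' -- decimal back to binary
```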
If you need a list to get started:

Binary | Hex | Decimal
0      = 0   = 0
1      = 1   = 1
10     = 2   = 2
11     = 3   = 3
100    = 4   = 4
101    = 5   = 5
110    = 6   = 6
111    = 7   = 7
1000   = 8   = 8
1001   = 9   = 9
1010   = A   = 10
1011   = B   = 11
1100   = C   = 12
1101   = D   = 13
1110   = E   = 14
1111   = F   = 15

Data sizes

Every computer (at least any built after 1980) uses 8 bits per byte. Bytes can be grouped into sizes of 1, 2, 4, or 8, which gives you 8-bit, 16-bit, 32-bit, and 64-bit values. Some programming languages specify symbol data types by their size, while others don't care and use the largest size they can get away with. For example, in C, there's a specific data type for each data size (char, short, int, etc.). In JavaScript, every number is a 64-bit floating point number.

Data sizes are important to know because they limit the range of values a symbol can have. For example, 8 bits has a range of 256 values and 16 bits has a range of 65,536 values.

Number representation

Numbers are represented in three primary ways in a computer:

- Unsigned integers: The entire range of values is whole positive numbers.
- Signed integers: Half the range of values is positive, the other half is negative, while still representing whole numbers. Most use the two's complement representation, which takes a bit of explaining. Let's use decimal as an example. In a decimal system, each digit represents a power of 10: the least significant digit is 10^0, the next is 10^1, then 10^2, and so on. So when you have the decimal number 1,234, what this really means is (1 * 10^3) + (2 * 10^2) + (3 * 10^1) + (4 * 10^0). Likewise, in binary, each digit represents a power of 2. So the binary number 1011 means (1 * 2^3) + (0 * 2^2) + (1 * 2^1) + (1 * 2^0) = 8 + 0 + 2 + 1 = 11. What two's complement does is make the most significant digit a negative number.
So if 1011 were in two's complement, it would really mean -(1 * 2^3) + (0 * 2^2) + (1 * 2^1) + (1 * 2^0) = -8 + 0 + 2 + 1 = -5.
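Here's a small sketch of reading a bit pattern as a two's complement value, written in Python as an example (the function name is made up for illustration):

```python
def from_twos_complement(value, bits):
    # If the most significant bit is set, it contributes a *negative*
    # power of two, so subtract 2^bits from the raw unsigned value.
    if value & (1 << (bits - 1)):
        return value - (1 << bits)
    return value

print(from_twos_complement(0b1011, 4))   # -8 + 0 + 2 + 1 = -5
print(from_twos_complement(0b0101, 4))   # top bit clear, so just 5
```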

Two's complement is used because it allows the widest range of values, and you don't have oddities like +0 and -0. So, for example, an 8-bit signed number has a range of -128 to 127.

- Floating point numbers: These represent non-whole numbers, like halves and quarters. Most follow the IEEE 754 floating point representation. I'm not going to explain it in detail, since even I have a hard time with it, but there is one caveat with floating point numbers: they have a limited amount of precision. This lack of precision can at times create odd results. Like how, supposedly, in the early 90s, Microsoft's calculator would return something like 0.1111112 when subtracting, say, 1.111111 - 1. It wasn't the calculator app that was the problem; it was that the floating point representation had issues once you started using really small numbers.

There's also another representation for non-whole numbers you can use, called fixed point, but this is kind of an intermediate topic and should only be used if speed is of higher concern than precision or accuracy. Otherwise you should use floating point for non-whole numbers.

Some Concepts You May Want to Know

Reference vs. Value

This topic can be confusing to start with. Symbols can hold two different types of data: an actual value, or a reference to where another symbol is, which in turn can hold a value or another reference.

It's a simplified analogy, but we can think of this like topics on Wikipedia. Let's say the topic at hand is GPUs made by NVIDIA. The value of the topic is the entire page (or at least the markup you can edit when you press the "Edit" button). The reference to that page is its URL: https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units. However, you can also get to this page by going to https://en.wikipedia.org/wiki/Nvidia_gpus, which contains a reference (or rather a redirect) to https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units.
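Here's one way the difference can show up in practice, sketched in Python (where numbers behave like plain values in assignment, while lists are handled through references):

```python
# Numbers act like values: assigning copies the value.
a = 5
b = a
b = b + 1
print(a, b)              # a is still 5; b is 6

# Lists are reached through references: both names point
# at the same underlying data.
original = [1, 2, 3]
alias = original         # a reference to the same list, not a copy
alias.append(4)
print(original)          # [1, 2, 3, 4] -- changed through the alias
```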
Some programming languages let you choose whether a symbol is a reference or a value. Others will only let you use one or the other for a particular type of symbol.

Truthiness

This is a supplement to explaining program control. At the most basic level in a computer, 0 means false and 1 means true. Programming languages may explicitly define a "truthiness" data type known as a boolean, which can be either true or false. However, some languages don't have an explicit truthiness data type. In that case, you can expect 0 to always mean false, and anything not 0 to mean true. For example, if(0) will never run, while if(1) or if(255) or if(1000000) will always run.

(may add more here later if anyone suggests something)
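As a sketch of the truthiness rules above, in Python (just one example language; its bool() function reports what a value counts as in an if statement):

```python
print(bool(0))         # False -- zero is always false
print(bool(1))         # True
print(bool(255))       # True -- anything non-zero counts as true

if 255:
    print("this branch runs, because 255 is not 0")
```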