How do modern computers perform calculations?

Colt_0pz

It seems like a silly question, but what does a computer do to perform, let's say, the four basic operations?

There is a clock and it's pumping waves; each wave is a 1 or 0, so if you want four basic operations it might be 16 waves, so that's 16 Hz

The bits are the switches, so you hit the switches just like a low rider, and based on the logic gates it lights up the LCD digits on the calculator. It's the same on a monitor, except you've got 1080p. Those logic gates are those little black things on the motherboard.

17 minutes ago, ManuelNigrito said:

The bits are the switches, so you hit the switches just like a low rider, and based on the logic gates it lights up the LCD digits on the calculator. It's the same on a monitor, except you've got 1080p. Those logic gates are those little black things on the motherboard.

 

17 minutes ago, VegetableStu said:

 

Thank you both for showing me.

It's called FM for a reason and I don't mean frequency modulation.

And if it's a Pentium 4, it only knows how to add and multiply.

There is no enemy. The foe on the battlefield is merely the manifestation of that which we must overcome. The doubt, and fear, and despair. Every battle is fought within. Conquer the battlefield that lies inside you, and the enemy disappears like the illusion it is.

Just a quick note ... if you are going to post multiple pictures, please keep them in a single post if possible.

10 hours ago, ManuelNigrito said:

There is a clock and it's pumping waves; each wave is a 1 or 0, so if you want four basic operations it might be 16 waves, so that's 16 Hz

woah dude. No!
 

10 hours ago, Colt_0pz said:

It seems like a silly question, but what does a computer do to perform, let's say, the four basic operations?

The question is actually very hard to answer fully. The answer to "How do I program a computer to conduct a varying sequence of mathematical operations?" requires a relatively large amount of background knowledge. I will attempt to provide a basic overview, but be aware that it is an oversimplification of how things work.

Things are inserted into spoilers to keep the reply from being a mile and a half long. You should read the spoilers in order.
 

Basics:

Spoiler

A computer has some memory, which we'll call Instruction Memory. The instruction memory just contains codes that our processor will execute. It takes as input the address of the instruction to output, and a code telling it whether to output the instruction or not.

 

A computer has some memory, which we'll call a register file. A register file is a collection of single registers. A single register takes a command that tells it either to output its current contents or to accept new contents. For the sake of argument, let's say that the contents of a register can be any number.

A computer also has another single register, which we will call the Accumulator. The Accumulator is just another register, and just holds regular numbers, but we reserve it for a special purpose. We'll learn what that special purpose is later.

 

A computer also has an Arithmetic Logic Unit, or ALU. The ALU takes as inputs the accumulator, the contents of a register, and a command telling it whether to add, subtract, multiply, or divide, and it outputs the result of the operation to the accumulator.

A computer also has another single register, which we'll call the Program Counter. The program counter is responsible for holding the address of the currently executing instruction. It takes as input a code that tells it whether to add one to its value or whether it should output its value.

 

Finally, a computer has a device which we shall call the Controller. The controller takes as input an instruction and outputs commands that other components take as inputs.

So, to sum up what we have just learned, a computer has:

  • Instruction Memory, which holds instructions.
    • Instructions are commands which tell the computer to do something.
    • Address is the location in memory that we want to look at.
  • Register File, which holds registers.
    • Registers are just a place to store a number.
  • Accumulator, which is just a register.
  • ALU, which carries out mathematical operations.
  • Program Counter, which is just a register that can add one to its value on command.
  • Controller, which outputs the commands that the other components take.

So, what does such a computer look like? Rather than explaining it, I will draw an image:
[Image: basic computer architecture diagram]

In this image, the big arrows show value transfers, and the small lines indicate commands from the controller.
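
If it helps to see the same picture as code, here is a minimal Python sketch of those components. The class names are invented purely for illustration; real hardware is wires and flip-flops, not Python objects.

# A rough model of the components described above (names are made up for illustration).
class RegisterFile:
    def __init__(self, size):
        self.registers = [0] * size        # each register just holds a number

    def read(self, address):
        return self.registers[address]     # "output current contents"

    def write(self, address, value):
        self.registers[address] = value    # "accept new contents"

class Computer:
    def __init__(self, program):
        self.instruction_memory = program  # holds the instructions to execute
        self.register_file = RegisterFile(8)
        self.accumulator = 0               # the one special-purpose register
        self.program_counter = 0           # address of the currently executing instruction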

 

Getting the Computer to do Things

Spoiler

So, here is how we get the computer to do things:

  1. Put the value of the Program Counter in the Instruction Memory Address.
  2. Put the value of the Instruction Memory into the Controller.
  3. Tell the Program Counter to add one to its value.
  4. Carry out the Instruction.

Clearly then, in order to understand what a computer does we must understand its instructions. But we don't have any instructions yet, so let's define the few necessary to add, subtract, multiply, or divide two numbers and store the result back in memory. But before we can describe what an instruction does, we must first describe the information that an instruction contains. This simple architecture only needs one instruction format, which looks like:

  • NAME [address]
    • Name is the name for the instruction. It says what the instruction does.
    • address is the register that we want to operate on.

So, now we can list the instructions that we need:

  • LOAD [address]
    • Loads the accumulator with the contents of the specified register.
  • STORE [address]
    • Stores the contents of the accumulator in the specified register.
  • ADD [address]
    • Adds the contents of the specified register to the contents of the accumulator, and stores the result in the accumulator.
  • SUB [address]
    • Subtracts the contents of the specified register from the contents of the accumulator, and stores the result in the accumulator.
  • MUL [address]
    • Multiplies the contents of the specified register with the contents of the accumulator, and stores the result in the accumulator.
  • DIV [address]
    • Divides the contents of the accumulator by the contents of the specified register, and stores the result in the accumulator.

So, finally, all that's left to do is describe the "program", or the list of instructions that we put into Instruction Memory to tell the processor to calculate something:

 

To add two numbers, assuming that the first number is in register 0 and the second in register 1, we will put the result in register 2:

  1. LOAD [0]
  2. ADD [1]
  3. STORE [2]

To multiply two numbers, assuming the same:

  1. LOAD [0]
  2. MUL [1]
  3. STORE [2]

And so on to divide and subtract.
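
To make the "a program is just a list of instructions" idea concrete, here is how those two programs could be written down as plain data in Python (the tuple encoding is my own invention for illustration, not a real machine format):

# Each instruction is a (NAME, address) pair, matching the NAME [address] format above.
add_program = [
    ("LOAD", 0),    # accumulator <- register 0
    ("ADD", 1),     # accumulator <- accumulator + register 1
    ("STORE", 2),   # register 2  <- accumulator
]

multiply_program = [
    ("LOAD", 0),
    ("MUL", 1),
    ("STORE", 2),
]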

 

Bringing it all together:

Spoiler

Next, to fully understand what we need to do, let's combine our description of getting the computer to do things with our description of instructions by looking at exactly what happens when we want to run the add two numbers program.

  1. The contents of the program counter are moved into the instruction memory address.
  2. The contents of the instruction memory are loaded into the controller.
  3. The program counter adds one to its value.
  4. The controller decodes the instruction, and decides what to do next.
  5. We tell register 0 to output, and the accumulator to accept input. This moves register 0 to the accumulator. [LOAD instruction]
  6. The contents of the program counter are moved into the instruction memory address.
  7. The contents of the instruction memory are loaded into the controller.
  8. The program counter adds one to its value.
  9. The controller decodes the instruction, and decides what to do next.
  10. We tell register 1 to output, the accumulator to output, the ALU to add, and the accumulator to accept input. [ADD instruction]
  11. The contents of the program counter are moved into the instruction memory address.
  12. The contents of the instruction memory are loaded into the controller.
  13. The program counter adds one to its value.
  14. The controller decodes the instruction, and decides what to do next.
  15. We tell register 2 to accept input, and the accumulator to output. [STORE instruction]

And that's it: register 2 now contains register 0 added to register 1.
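
If you want to play with this, here is a rough Python sketch of the whole fetch/decode/execute loop described above. It simulates only this toy accumulator machine, not any real processor, and the details are my own simplification.

# Toy simulator of the accumulator machine described in this post (illustrative only).
def run(program, registers):
    accumulator = 0
    program_counter = 0
    while program_counter < len(program):
        # Fetch: program counter -> instruction memory address, instruction -> controller.
        name, address = program[program_counter]
        # The program counter adds one to its value.
        program_counter += 1
        # Decode and execute: the "controller" decides what the other components do.
        if name == "LOAD":
            accumulator = registers[address]
        elif name == "STORE":
            registers[address] = accumulator
        elif name == "ADD":
            accumulator += registers[address]
        elif name == "SUB":
            accumulator -= registers[address]
        elif name == "MUL":
            accumulator *= registers[address]
        elif name == "DIV":
            accumulator //= registers[address]   # integer division, for simplicity
    return registers

# Add the numbers in registers 0 and 1 and put the result in register 2:
print(run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], [7, 5, 0]))   # -> [7, 5, 12]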


Closing Remarks and Further Resources:

Spoiler

Of course, all of this was a simplification of how it works; in most ways, an oversimplification. But what it does do is give you enough information to know what to ask Google for to dig deeper into the question, if you so desire. If you're really interested in learning how a basic computer might work, I would strongly recommend starting out with Ben Eater's wonderful 8-bit breadboard computer YouTube series. It is a wonderful and fairly in-depth introduction to how processors work.

In the case that you prefer reading over watching, All About Circuits has some wonderful textbook style material: All About Circuits Digital Textbook

 

ENCRYPTION IS NOT A CRIME

9 hours ago, Colt_0pz said:

It seems like a silly question, but what does a computer do to perform, let's say, the four basic operations?

@straight_stewie gave a crash course on the basics of computer systems, but this is a pretty broad question. Is there anything specific you want to know about?

Boolean logic.

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  

"how" computers complete calculations is exactly why they are so "slow" 

 

We really need a paradigm shift in how computations are made. Part of the reason quantum computers are so powerful is that they are tertiary, not binary. Quantum doesn't work for everything, though. I'm betting in the next 5 years you will see some sort of change to the very basic mechanics of computers that puts Moore's law back on course.

In a very basic, stripped-down form (for the sake of ease of explanation), a computer is built as follows:

(indent means that the components are parts of what's above it)

- CPU (central processing unit)

  - ALU (arithmetic logic unit, this does the actual work)

    - <insert all functions of the CPU here: multiply, divide, add, subtract, etc.>

  - CU (control unit, which activates the proper parts of the CPU)

  - registers (basically a very small amount of memory for holding data to be processed, and instructions)

- I/O busses

- memory

 

Basically, a CPU contains a number of "functional blocks" that have basic functionalities like "A + B", "A * B", "not A" and so on.

The control unit gets a stream of instructions (the program you're running) and, according to these instructions, activates the right portions of the CPU to execute them.

This concept hasn't really changed since the days when computers were rooms full of clicking relays.

 

(FYI: "0x" implies the value is hexadecimal)

A theoretical CISC processor (complex instruction set computing, like x86) may, for example, have "0xABBA" be the instruction for adding a static value passed along with the instruction to a value from a given register; the address of this register is passed along as a second parameter with the instruction.

This example instruction is then a "3-word instruction": the instruction itself and two variables, which as a result means it will take 3 clock cycles to process this instruction.

So, what this theoretical instruction does is the following:

- the CU gets fed "0xABBA"

- it knows the next cycle will be a static value, followed by an address that contains the second value, and these need to go through the "full adder" (the full adder is the logic block that does A+B)

- the CU gets fed "0x0010"

- it places this value in one of the two input registers for the "full adder" in the ALU

- the CU gets fed "0x0010"

- it knows this is an address, and moves the value from address "0x0010" to the second input register for the "full adder".

- the full adder does its job, and the resulting value is now in its output register.

- the CU now will get its next instruction.

 

In a very simplified (and unfortunately still very complex) example, this is how a computer works. The only things between this example and the computer you're typing this on are the number of available instructions and the complexity of those instructions.

 

In this example the instruction took 3 cycles to process, so if this were a 3 GHz processor, it could do 1 000 000 000 of these instructions per second. (spaces added for clarity)
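
As a rough sketch in Python (the 0xABBA opcode and its behaviour are the made-up example from this post, not a real x86 instruction), the control unit's part of that sequence could look like this:

# Sketch of the made-up 3-word instruction above: 0xABBA means "add a static value
# (second word) to the value stored at a register address (third word)".
def run_cu(instruction_stream, registers):
    pc = 0
    while pc < len(instruction_stream):
        word = instruction_stream[pc]
        if word == 0xABBA:
            static_value = instruction_stream[pc + 1]    # cycle 2: first input of the full adder
            address = instruction_stream[pc + 2]         # cycle 3: where the second input lives
            result = static_value + registers[address]   # the "full adder" does its job
            print(hex(result))                           # result sits in the adder's output register
            pc += 3                                      # 3 words consumed -> 3 cycles in the example
        else:
            pc += 1                                      # a real CU would decode other opcodes here

# The example from the post: add static 0x0010 to the value held at address 0x0010.
run_cu([0xABBA, 0x0010, 0x0010], {0x0010: 0x0005})       # prints 0x15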

On 2/28/2019 at 7:26 AM, ManuelNigrito said:

There is a clock and it's pumping waves; each wave is a 1 or 0, so if you want four basic operations it might be 16 waves, so that's 16 Hz

Not at all. The clock only periodically activates the chip; it doesn't contain the information needed for the operations. Basic operations are run by dedicated circuits, which can perform an add operation in a single cycle (sometimes more than one). This isn't related to the frequency of the clock, which measures how many cycles happen in a second and, only in modern hardware, partially depends on the current CPU load.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*

2 hours ago, JCBiggs said:

We really need a paradigm shift in how computations are made. Part of the reason quantum computers are so powerful is that they are tertiary, not binary. Quantum doesn't work for everything, though. I'm betting in the next 5 years you will see some sort of change to the very basic mechanics of computers that puts Moore's law back on course.

There are a couple of problems with this. I am only pointing them out because the OP is trying to understand computers more fully, and misinformation can be extremely detrimental to someone who's just starting to learn a field. I apologize in advance if that makes me a butthole.
 

  1. To call quantum computers ternary is misleading, at best.
    1. The "qubits", as they are popularly called, exist in a superposition of states.
      1. A qubit can be on
      2. A qubit can be off
      3. A qubit can be in any mixture of on and off simultaneously.
    2. None of that was true. Quantum computers work by resolving the state of a particle, which we say exists in a "superposition" of multiple states at once as a way of describing the probability of the particle being in one state or another. The particle is only ever in one state at a time. This is why, when we observe the particle and it's only in one state, we say that a "quantum decision was made" and that it "chose what state to be in". This idea is fundamental to quantum mechanics, and it does not mean that the qubits are actually in more than one state at once.
    3. The terms ternary, quaternary, or binary, when applied to quantum computers, are best used to represent the number of classical bits required to represent the state of the quantum computer at the last measurement.
      1. However, the search space of a qubit-based quantum computer is 2^num_qubits, so any given state can be represented by log2(2^num_qubits) = num_qubits classical bits (see the small numeric example after this list).
    4. The restriction to two states was a lie. The general case of quantum computers actually uses "qudits" (not a typo) as the base unit. A qudit is an n-dimensional set of base units.
      1. This is similar to the idea that we can have 8 bit, 16 bit, 24 bit, 32 bit... classical computers, except that each quantum bit is that many... suffice it to say that it's difficult to grasp.
      2. This means that the search space of a qudit computer is base_states^num_qudits, where base_states is the number of states a "quantum digit" can be in. So, the number of classical digits (in base base_states) necessary to represent the state of a qudit quantum computer is log_base_states(base_states^num_qudits) = num_qudits.
  2. People seem to think that Moore's law means that performance will increase, or that transistors will get smaller. This is completely false. Moore's law simply states that the number of transistors in integrated circuits will roughly double every 18 months, and that is all that it states.
    1. The real question is: Does that trend happen because Moore's law exists, or does Moore's law exist because that trend happens?
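
To put actual numbers on the 2^num_qubits point above, here is a tiny Python check (it is just arithmetic, nothing quantum about it, and the qudit parameters are arbitrary):

import math

num_qubits = 3
search_space = 2 ** num_qubits                  # 8 basis states for 3 qubits
print(math.log2(search_space))                  # 3.0 classical bits per measurement

base_states, num_qudits = 5, 3                  # hypothetical 5-level qudits
qudit_space = base_states ** num_qudits         # 125 basis states
print(math.log(qudit_space, base_states))       # ~3 base-5 digits per measurement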

ENCRYPTION IS NOT A CRIME

9 minutes ago, straight_stewie said:

There are a couple of problems with this. I am only pointing them out because the OP is trying to understand computers more fully, and misinformation can be extremely detrimental to someone who's just starting to learn a field. I apologize in advance if that makes me a butthole.
 

  1. To call quantum computers ternary is misleading, at best.
    1. The "qubits", as they are popularly called, exist in a superposition of states.
      1. A qubit can be on
      2. A qubit can be off
      3. A qubit can be in any mixture of on and off simultaneously.
    2. None of that was true. Quantum computers work by resolving the state of a particle, which we say exists in a "superposition" of multiple states at once as a way of describing the probability of the particle being in one state or another. The particle is only ever in one state at a time. This is why, when we observe the particle and it's only in one state, we say that a "quantum decision was made" and that it "chose what state to be in". This idea is fundamental to quantum mechanics, and it does not mean that the qubits are actually in more than one state at once.
    3. The terms ternary, quaternary, or binary, when applied to quantum computers, are best used to represent the number of classical bits required to represent the state of the quantum computer at the last measurement.
      1. However, the search space of a qubit-based quantum computer is 2^num_qubits, so any given state can be represented by log2(2^num_qubits) = num_qubits classical bits.
    4. The restriction to two states was a lie. The general case of quantum computers actually uses "qudits" (not a typo) as the base unit. A qudit is an n-dimensional set of base units.
      1. This is similar to the idea that we can have 8 bit, 16 bit, 24 bit, 32 bit... classical computers, except that each quantum bit is that many... suffice it to say that it's difficult to grasp.
      2. This means that the search space of a qudit computer is base_states^num_qudits, where base_states is the number of states a "quantum digit" can be in. So, the number of classical digits (in base base_states) necessary to represent the state of a qudit quantum computer is log_base_states(base_states^num_qudits) = num_qudits.
  2. People seem to think that Moore's law means that performance will increase, or that transistors will get smaller. This is completely false. Moore's law simply states that the number of transistors in integrated circuits will roughly double every 18 months, and that is all that it states.
    1. The real question is: Does that trend happen because Moore's law exists, or does Moore's law exist because that trend happens?

That's tertiary. I'm not reading the rest.

6 minutes ago, JCBiggs said:

That's tertiary. I'm not reading the rest.

No it's not:

"any mixture of the two" means that it is a ratio, which means that it is an irrational number, which means that there are an infinite number of states that the thing could be in at any time... e.g. There are an infinite number of values in the range [0, 1]

Ternary means that a thing can be: 0, 1, Z, and no other values or mixtures of values, and that Z is a predefined state.

ENCRYPTION IS NOT A CRIME

2 minutes ago, straight_stewie said:

No it's not:

"any mixture of the two" means that it is a ratio, which means that it is an irrational number, which means that there are an infinite number of states that the thing could be in at any time...

Ternary means that a thing can be: 0, 1, Z, and no other values or mixtures of values...

The whole POINT of a quantum computer is having the 3rd state. LOL.  If it didn't use the 3rd state it might as well be a classical computer.  Your point is invalid due to the basic mechanics of a quantum computer.  The whole aspect of quantum computers quickly breaking encryption and the like is based around their ability to use superposition.  Just because you program it with 1s and 0s doesn't make it binary.

Just now, JCBiggs said:

The whole POINT of a quantum computer is having the 3rd state. LOL.  If it didn't use the 3rd state it might as well be a classical computer.  Your point is invalid due to the basic mechanics of a quantum computer.  The whole aspect of quantum computers quickly breaking encryption and the like is based around their ability to use superposition.  Just because you program it with 1s and 0s doesn't make it binary.

Read the rest of my previous reply that you refuse to read...

ENCRYPTION IS NOT A CRIME

22 minutes ago, JCBiggs said:

The whole POINT of a quantum computer is having the 3rd state. LOL.  If it didn't use the 3rd state it might as well be a classical computer.  Your point is invalid due to the basic mechanics of a quantum computer.  The whole aspect of quantum computers quickly breaking encryption and the like is based around their ability to use superposition.  Just because you program it with 1s and 0s doesn't make it binary.

Ternary computers existed before: https://en.wikipedia.org/wiki/Ternary_computer

 

The advantage of quantum computers is the way they do math to solve a certain class of problems, not that they use ternary. And I believe most quantum computing models use qubits, not qutrits. You're welcome to provide sources that say otherwise.

ones and zeros

Slayerking92

<Type something witty here>
<Link to some pcpartpicker fantasy build and claim as my own>
