okaysalami

How come PCs use so little power compared to other machines?

Recommended Posts

29 minutes ago, comander said:

Those have more complex instruction sets than the Complex Instruction Set Computers historically described in the literature. Many of the instructions they can execute in a single cycle used to take multiple cycles on CISC computers.

They aren't pure RISC designs. They have largish, complex instruction sets and do instructions that were historically only present on CISC processors. 

Name something with a similar number of instructions as the original RISC designs and with similar limitations. 

Now you are putting caveats into every argument you make. You asked for RISC-based implementations, so I gave you some. I could give more, such as the ARM-based HPE Apollo 70 servers: https://www.hpe.com/uk/en/product-catalog/servers/apollo-systems/pip.hpe-apollo-70-system.1010742472.html

 

They are considered RISC based, but you are changing the definition. However, that does not get away from your original point, that desktops have more performance than tablets. I have given you an example where that is not always true. While I understand what you are getting at, in that you cannot compare the original RISC designs with current RISC designs, the same can be said for CISC-based compute.
 

It is now 3:20am here and I have spent the last four hours trying to resolve a particularly annoying crash on a high end x86 box. My eyes are going squiffy after hours of going through error logs etc. I have finally got to the bottom of it and have a workaround but will need to speak to the devs for a permanent fix. So I need sleep before I collapse in a heap. Been nice chatting, at least a break from the task I was doing.

7 minutes ago, mr moose said:

heat is not work.

Obviously. I feel like you don't know what "work" means in the context of thermodynamics. Work isn't some form of capacitance; it's going to be released as energy... which is heat.

 

Look, I'm not the most knowledgeable on the subject and my science degree is in computing, but this is pretty elementary stuff that you learn from the start. There are a number of YouTube videos and articles on the subject that can explain it for you. 

 

Again, a 1000W leaf blower is going to raise the temperature of a sealed environment by the same amount a 1000W computer will, factoring out semantics like capacitors and such. If your argument is that the leaf blower is releasing kinetic energy that turns into heat after the fact, that's not valid. The same can be said about computing, it just happens at a stupidly smaller scale.
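(As a rough sketch of that claim: assuming a sealed, perfectly insulated room and that every watt of electrical input ends up as heat in the air, the temperature rise depends only on the power and the time, not on which device supplies it. The room size and air properties below are illustrative assumptions.)

```python
# Temperature rise of the air in a sealed, perfectly insulated room
# when a 1000 W device runs for one hour. Room size, air density and
# specific heat are textbook-style assumptions for illustration only;
# real rooms leak heat and the walls/furniture absorb much of it.
power_w = 1000.0              # leaf blower or computer - same electrical input
run_time_s = 3600.0           # one hour
room_volume_m3 = 4 * 5 * 2.5  # assumed 4 m x 5 m room with a 2.5 m ceiling
air_density = 1.2             # kg/m^3 near room temperature
air_cp = 1005.0               # J/(kg*K), specific heat of air

energy_j = power_w * run_time_s
air_mass_kg = room_volume_m3 * air_density
delta_t = energy_j / (air_mass_kg * air_cp)
print(f"Air temperature rise: ~{delta_t:.0f} K")  # same answer for either device
```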

38 minutes ago, Vitamanic said:

Obviously. I feel like you don't know what "work" means in the context of thermodynamics. Work isn't some form of capacitance; it's going to be released as energy... which is heat.

 

Look, I'm not the most knowledgeable on the subject and my science degree is in computing, but this is pretty elementary stuff that you learn from the start. There are a number of YouTube videos and articles on the subject that can explain it for you. 

 

Again, a 1000W leaf blower is going to raise the temperature of a sealed environment by the same amount a 1000W computer will, factoring out semantics like capacitors and such. If your argument is that the leaf blower is releasing kinetic energy that turns into heat after the fact, that's not valid. The same can be said about computing, it just happens at a stupidly smaller scale.

You are trying to simplify the laws of thermodynamics in a way that is a little disingenuous to what they actually say.  The first law of thermodynamics is basically the law of conservation of energy written as an expression of energy moving into or out of a system.  The internal energy of that system is conserved, i.e. the total energy in a system is heat energy plus work.  Work cannot equal heat as you put it, or the total energy could not be conserved.  If you could get 100% heat energy and 100% kinetic work you would have a perpetual motion machine and break the first law of thermodynamics.
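(For reference, a minimal statement of the first law being described here; sign conventions for work differ between textbooks.)

```latex
% First law of thermodynamics for a closed system:
% the change in internal energy U equals the heat Q added to the system
% plus the work W done on the system. Some texts write \Delta U = Q - W,
% with W defined as work done *by* the system.
\Delta U = Q + W
```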

 

 

I am sorry, I misread one of your earlier posts. I see no need in discussing something that doesn't need to be discussed.


QuicK and DirtY. Read the CoC, it's like a guide on how not to be a moron.  Also I don't have an issue with the VS series.

Sometimes I miss contractions like n't on the end of words like wouldn't, couldn't and shouldn't.    Please don't be a dick,  make allowances when reading my posts.

2 hours ago, Phill104 said:

That is not always the case. A RISC chip can in many ways be more efficient performing the same task as an x86 chip, therefore using less energy. Dedicated hardware in one device can perform tasks that the other needs extra hardware and CPU load to accomplish. It is not possible to make a sweeping statement in defence of one architecture compared to another in this instance.

This reminded me of a video with Steve Furber, one of the designers of the first ARM chip.

 

It's a great video for anyone interested in microcomputer history. Anyway, he tells an anecdote about one of the first tests, where they forgot to connect power to the chip.

 

It's at 29 mins

 

 

1 hour ago, Phill104 said:

Now you are putting caveats into every argument you make. You asked for RISC-based implementations, so I gave you some. I could give more, such as the ARM-based HPE Apollo 70 servers: https://www.hpe.com/uk/en/product-catalog/servers/apollo-systems/pip.hpe-apollo-70-system.1010742472.html

 

They are considered RISC based, but you are changing the definition. However, that does not get away from your original point, that desktops have more performance than tablets. I have given you an example where that is not always true. While I understand what you are getting at, in that you cannot compare the original RISC designs with current RISC designs, the same can be said for CISC-based compute.
 

It is now 3:20am here and I have spent the last four hours trying to resolve a particularly annoying crash on a high end x86 box. My eyes are going squiffy after hours of going through error logs etc. I have finally got to the bottom of it and have a workaround but will need to speak to the devs for a permanent fix. So I need sleep before I collapse in a heap. Been nice chatting, at least a break from the task I was doing.

You said pure RISC. 

Pretty much EVERYTHING RISC based has been borrowing CISC tricks for like 20+ years. Multiplication, division, non-uniform encoding... it's all been absorbed into "RISC"... and more. 

A "RISC" processor in 2020 would probably have been classified as a CISC processor in the 1970s, when people smarter than me realized that they could shave off a few thousand transistors and get a bit more frequency (assuming you aren't limited by some other design factor) by swapping things up. After 40+ years, those benefits of cutting down and simplifying CPU essentials have close to zero practical value (less than a tenth of a cent saved per system, no real performance difference)... Similarly, the downsides have been ameliorated as well (compilers are better, saving a few kilobytes of RAM doesn't matter), which is why EVERYONE is using some sort of hybrid to get the best of both worlds for 99% of uses (and if the outlier is truly important, it's new instruction time or ASIC time or GPU time or accelerator time or... )


I want to emphasize, there are no "pure" RISC or CISC processors anymore (the only "CISC" parts available are better thought of as RISC-like processors with a conversion layer for legacy CISC instruction implementations). It is not even particularly important to discuss these concepts when other things matter MUCH MUCH more (instructions supported, pipeline length, number of pipelines, cache implementation, manufacturing process, front-end design, back-end design, threading implementation, memory controller implementation, internal buses/meshes/cross-bars, I/O, etc.). When modern CPUs have 1 million times as many transistors (they do), you can do A LOT of stuff that wouldn't make sense when you have fewer resources. (Also, doing more of any one thing generally yields diminishing returns, so you end up doing a little of everything.)
Zen2 has ~6 billion transistors, the 8080 had ~3.5 thousand
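(A quick back-of-the-envelope check of that "1 million times" figure, using the approximate transistor counts quoted above:)

```python
# Ratio of the transistor counts quoted in the post (Zen 2 vs. the 8080).
# These are the rounded figures given above, not exact die data.
zen2_transistors = 6_000_000_000   # ~6 billion
i8080_transistors = 3_500          # ~3.5 thousand

ratio = zen2_transistors / i8080_transistors
print(f"Zen 2 has roughly {ratio:,.0f}x as many transistors")  # ~1,714,286x
```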


[Image: transistor count and Moore's law chart]


 


R9 3900x; 64GB RAM | RTX 2080 | 1.5TB Optane P4800x

1TB ADATA XPG Pro 8200 SSD | 2TB Micron 1100 SSD
HD800 + SCHIIT VALI | Topre Realforce Keyboard

2 hours ago, comander said:

You said pure RISC. 

 

 

 

Semantics, and you know it.

2 hours ago, comander said:



A "RISC" processor in 2020 would probably have been classified as a CISC processor in the 1970s, when people smarter than me realized that they could shave off a few thousand transistors and get a bit more frequency (assuming you aren't limited by some other design factor) by swapping things up. After 40+ years, those benefits of cutting down and simplifying CPU essentials have close to zero practical value (less than a tenth of a cent saved per system, no real performance difference)... Similarly, the downsides have been ameliorated as well (compilers are better, saving a few kilobytes of RAM doesn't matter), which is why EVERYONE is using some sort of hybrid to get the best of both worlds for 99% of uses (and if the outlier is truly important, it's new instruction time or ASIC time or GPU time or accelerator time or... )


I want to emphasize, there are no "pure" RISC or CISC processors anymore (the only "CISC" parts available are better thought of as RISC-like processors with a conversion layer for legacy CISC instruction implementations). It is not even particularly important to discuss these concepts when other things matter MUCH MUCH more (instructions supported, pipeline length, number of pipelines, cache implementation, manufacturing process, front-end design, back-end design, threading implementation, memory controller implementation, internal buses/meshes/cross-bars, I/O, etc.). When modern CPUs have 1 million times as many transistors (they do), you can do A LOT of stuff that wouldn't make sense when you have fewer resources. (Also, doing more of any one thing generally yields diminishing returns, so you end up doing a little of everything.)
Zen2 has ~6 billion transistors, the 8080 had ~3.5 thousand

 

 

The point I was originally making was a simple one: to highlight the changes in the computing environment and the things being done to reduce the problems of power and heat in data centres. My first post was to point out that for the average home computer, a lot of power is used behind the scenes, so while 400W may seem very little, a lot of energy goes into the daily use of that machine.

 

You then made a very generalised statement, one that is not always true. Instead of discussing that statement, you have introduced an incredible amount of waffle to try and prove your incorrect statement was right. Maybe you should become a politician.

2 hours ago, Marbo said:

This reminded me of a video with Steve Furber, one of the designers of the first ARM chip.

 

It's a great video for anyone interested in microcomputer history. Anyway, he tells an anecdote about one of the first tests, where they forgot to connect power to the chip.

 

It's at 29 mins

 

 

He gave a talk at university back in the 90s. I was quite gripped by everything he was saying including that story.

17 hours ago, RiktigaRonny said:

I often see topics about how PCs use a lot of power and how to make your PC use less, but is 300-500W really that much? I mean other machines such as leaf blowers sometimes use 3000W. I understand those require a lot of hard physical movement to do their job, but things inside a PC like processors do billions of calculations per second. How come they don't draw as much power as let's say a drilling machine with cable?

PCs don't draw a constant amount of power over sustained lengths of time. A PC with a 100W PSU or a 1000W PSU consumes the same power if all the parts together don't draw more than 100W. That's unlike, say, a fridge, heater or air conditioner, which will consume roughly its rated power constantly while it is running (and more than rated in certain environments). Those appliances run until they reach the temperature needed to stop, and then don't start again until the temperature drifts back past the set point. In the case of A/C units and heaters, they may still run their fans even when the heating/cooling is off, because they still blow the air over the hot/cold parts.
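(As a rough illustration with made-up component numbers, and ignoring PSU efficiency losses: the draw at the wall is set by what the parts actually pull, not by the PSU's rating.)

```python
# Hypothetical component loads for a desktop at light load, in watts.
# The numbers are illustrative assumptions, not measurements.
component_draw_w = {
    "cpu": 45,               # well below its rated TDP at light load
    "gpu": 15,               # idling at the desktop
    "drives_fans_board": 25,
}
psu_rating_w = 1000          # the PSU *can* deliver this, but only supplies what is asked for

total_draw_w = sum(component_draw_w.values())
print(f"Actual draw ~{total_draw_w} W from a {psu_rating_w} W PSU")  # ~85 W
```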

 

This is why inefficient heating systems cost a lot of money. Your PC consumes somewhere between 1 and 30 light bulbs' (25W each) worth of power, and most of the time it's only pulling about 4-8 bulbs' worth. Your heater or air conditioner, however, is pulling 60 light bulbs' worth as long as the heating or cooling is running, and maybe 2-10 bulbs' worth when only the fan is running.

 

Central heating/cooling is more efficient than little space heaters/air conditioners (because it is a bigger unit and can move the air faster), and more efficient than baseboard radiators (which have no fans, so they heat the wall more than they do the room). Central electric heating pulls about 3500 watts (140 light bulbs) and costs literally dollars per hour if you have expensive energy costs.
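(A quick sanity check on that "dollars per hour" figure; the electricity rate below is an assumed example, not a quoted one.)

```python
# Rough hourly running cost of a ~3.5 kW central electric heater.
heater_power_kw = 3.5
price_per_kwh = 0.30   # assumed rate in $/kWh; tariffs vary widely by region

cost_per_hour = heater_power_kw * price_per_kwh
print(f"~${cost_per_hour:.2f} per hour while the element is running")  # ~$1.05
```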

 

CPUs are often rated by TDP, which is basically "how much it cooks": the bigger the number, the more power it consumes. Your appliances, on the other hand, are rated in kWh, i.e. how much energy they consume over a given period (per year on the labels). If you go look at EnerGuide or similar decals on appliances you'll see stuff like this:

 

[Image: EnerGuide energy-consumption label for a refrigerator]

 

So by comparison, an i7 PC has an energy consumption of 36 kWh to 285 kWh per year.

Lowest i7 (DELL - D10U : OptiPlex 7060 Micro):

https://www.energystar.gov/productfinder/product/certified-computers/details/2326381

Highest i7 (OMEN X by HP Desktop PC - 900)

https://www.energystar.gov/productfinder/product/certified-computers/details/2322471

 

Note the latter is a large, fancy 2016 HP PC with a K-series CPU and a GTX 1080, while the Dell is a tiny NUC-style machine from 2019 with a non-K CPU and no dGPU. So the spread in those figures comes down to the hardware in the box, not the "i7" badge.
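(A back-of-the-envelope way to turn wattage into an annual kWh figure like the ones on those listings; the average draw and daily hours below are assumptions for illustration.)

```python
# Estimate annual energy use of a desktop from average wall draw and usage.
# Both inputs are assumed values; the 36-285 kWh range above comes from the
# Energy Star listings, not from this calculation.
avg_draw_w = 100     # assumed average draw at the wall while in use
hours_per_day = 4    # assumed daily usage

annual_kwh = avg_draw_w * hours_per_day * 365 / 1000
print(f"~{annual_kwh:.0f} kWh per year")  # ~146 kWh, inside the quoted range
```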

 
