DON'T Water Cool Your PC


2 hours ago, Nystemy said:

So to a large degree, we don't have to emulate x86 for compatibility reasons. Every now and then, yes, but by and large, no.

Though recompiling software for the other platform often works fairly flawlessly (unless one's code is I/O- or accelerator-heavy; but at least accelerators can be emulated in code, and I/O can be buffered and handled with libraries if we target a specific platform). And a fair few other architectures do have rather extensive software catalogues available as is. So if one goes about things wisely, one won't have to do much redevelopment (depending on the project).

That's all true, BUT this part right here is crucial. The critical customers for chip makers, software companies, etc. in the PC sector are the big institutional purchasers. Such purchasers run software that is very old, written in languages known by people who are also very old, and integral to their work. I recall seeing, even in the days of Windows 95 through XP, a modern Windows desktop with a terminal open on it, where something running on a VAX/VMS server, or in a VAX/VMS emulator somewhere, was part of their work. This was in a government office, probably my college.

Lots of companies are like that. It is why their web apps often still need Internet Explorer to this day. If an office gets ARM PCs that can't run the obscure piece of mission-critical software, which may be two versions out of date BUT with a license structure where one copy can be legally installed 1,000,000 times... that's a deal breaker. If it means moving to some new subscription-based software or having to retrain workers, that's a deal breaker.

Then there are the consumer enthusiasts. If your ARM PC can't run Crysis, or Cyberpunk 2077, or GTA 6, that's a problem. Especially when consoles and games are optimized for x86 (unless they are mobile).

 

Over time you are right, though. If we can get something like the Transmeta Crusoe, but using an ARM or RISC-V chip to low-level emulate x86, that would be a first step on the ladder to an ARM future.
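For anyone wondering what "low-level emulating x86" actually involves, here is a deliberately tiny sketch in C. It shows the naive interpreter baseline: decode one guest instruction, perform the equivalent host operation, repeat. The per-instruction dispatch cost this incurs is exactly why real systems such as Transmeta's Code Morphing Software or Apple's Rosetta 2 instead translate whole blocks of guest code to native code and cache them. The opcodes and structure below are invented purely for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy "guest" ISA: three made-up opcodes operating on a register file. */
enum guest_op { G_ADD = 0x01, G_MOV_IMM = 0x02, G_HALT = 0xFF };

typedef struct {
    uint32_t regs[8];   /* pretend these are EAX..EDI */
} guest_cpu;

/* One interpreted step: decode a guest instruction and perform the
 * equivalent host operation. A real translator would instead emit
 * native ARM/RISC-V code into a translation cache and jump to it. */
static const uint8_t *step(guest_cpu *cpu, const uint8_t *pc, int *halted)
{
    switch (*pc) {
    case G_MOV_IMM:                    /* mov reg, imm8 */
        cpu->regs[pc[1]] = pc[2];
        return pc + 3;
    case G_ADD:                        /* add reg, reg  */
        cpu->regs[pc[1]] += cpu->regs[pc[2]];
        return pc + 3;
    case G_HALT:
    default:
        *halted = 1;
        return pc;
    }
}

int main(void)
{
    /* "Guest program": mov r0,40 ; mov r1,2 ; add r0,r1 ; halt */
    const uint8_t program[] = { G_MOV_IMM, 0, 40, G_MOV_IMM, 1, 2,
                                G_ADD, 0, 1, G_HALT };
    guest_cpu cpu = {0};
    const uint8_t *pc = program;
    int halted = 0;

    while (!halted)
        pc = step(&cpu, pc, &halted);

    printf("r0 = %u\n", cpu.regs[0]);   /* prints: r0 = 42 */
    return 0;
}
```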


The power requirements of CPUs and GPUs keep rising. I wonder when Nvidia or AMD will begin to focus on power efficiency instead of making the most powerful hardware, which drives up both power supply requirements and the size of the cards. Systems built in the ITX form factor seem to be harder to cool than ever before; hopefully CPUs and GPUs become more power efficient with future architectures.


I posted this over on the video, but this has me questioning the mobile processors as well.

Granted, Intel's mobile i7 and i9 in 12th gen are not the same chips as their desktop counterparts and don't have quite as much performance. Still, within the confines of a laptop and its cooling, how does a power user, developer, or gamer weigh the performance characteristics of throughput, power, and cooling?

What I am seriously wondering here is whether even the 12th-gen i9s in many gaming machines by Razer or MSI may be undesirable because of similar (but perhaps not as extreme) thermal characteristics. I spent most of the spring waiting to see what happened with 12th-gen vs. 11th-gen i7s and i9s, and now I'm wondering if the 12th-gen i9s will have a shortened lifespan, even if you aren't overclocking or performance tuning them like you might in a desktop gaming system.

Any thoughts on this?


43 minutes ago, Uttamattamakin said:

Over time you are right, though. If we can get something like the Transmeta Crusoe, but using an ARM or RISC-V chip to low-level emulate x86, that would be a first step on the ladder to an ARM future.

Emulating, even at the hardware level, doesn't make one less dependent on the thing one desires to replace.

 

Transitioning is more about accepting that one has to clean out the trash when moving to a new home. One has to discuss what one actually needs to bring and what one doesn't, and also what one might have to remake anew. The comfort of backwards compatibility is something one often has to give up to actually get the benefits of a new platform.

 

I personally think x86 will stay for at least the next decade, maybe two, before it starts becoming irrelevant.

ARM has a fair few issues in regards to fragmentation that make it a bit unfriendly as a PC replacement at the moment; however, this is likely going to change slowly over time.

RISC-V is likely not going to replace the CPU in the PC space, or even in servers, for the foreseeable future; as an architecture it is a bit too scatterbrained to be practical. (It is designed as a jack of all trades but a master of none, except implementation cost as far as licensing goes; the core is rather cheap unless one wants a tape-out-ready one.) To a degree, PowerPC likely has greater potential for taking over as a new CPU architecture for the PC market than RISC-V, since it is a more unified design aimed at higher-performance computing and is also open source, and there are already chips on the market that are quite decent spec-wise. However, it is currently focused a bit too much on mainframe applications.

 

However, the most important thing to remember is that most power users don't have just one system, and neither do businesses. Systems and workloads go more or less hand in hand, so depending on the need, the system specs and features will follow. There are reasons why IBM still sells mainframes, and why some supercomputers use ARM, SPARC, x86, or even more application-specific architectures; the same story goes for higher-end workstations. (To a degree, we should also not forget about smartphones, which predominantly run ARM, and how they have helped migration to ARM far more than most other efforts before them.)


21 hours ago, Nystemy said:

Personally, I am looking forward to potential multi-socket solutions becoming a thing again. (Partly since that is what my own hobby architecture is specifically designed for.)

20 hours ago, Crunchy Dragon said:

That would be quite nice.

 

There could also still be a case for larger sockets and packages, similar to the LGA 2011 of old.

It would be nice to get a triple socket: 2 CPUs and 1 controller/manager (one CPU could do physics, the other could do other stuff). That's me speaking to my GPU about why it's so roundabout with the problem; just become a server, it said, and turn your PC at home into a consumer server, to be the new trend like Linus wanted in his house.

jk, I want my CPU stacked like pancakes 🙂


33 minutes ago, Quackers101 said:

It would be nice to get a triple socket: 2 CPUs and 1 controller/manager (one CPU could do physics, the other could do other stuff). That's me speaking to my GPU about why it's so roundabout with the problem; just become a server, it said, and turn your PC at home into a consumer server, to be the new trend like Linus wanted in his house.

jk, I want my CPU stacked like pancakes 🙂

Why stop at stacking CPUs like pancakes when one can stack the whole computer like pancakes:
[Image: a stack of PC/104 boards]

And yes, this is an actual computer standard, called PC/104.


24 minutes ago, Nystemy said:

Why stop at stacking CPUs like pancakes when one can stack the whole computer like pancakes:

And yes, this is an actual computer standard, called PC/104.

That's illegal; I'm calling all mothers of motherboards to stack your stack to be unstacked.


Can't wait for the copper version to release in 2023 (hopefully early 2023 xD). This thing is gonna be a godsend for future LGA 2000-2500 Intel CPUs.


8 hours ago, kriegalex said:

Also, if a card goes from 200W to 300W and is 1.5x faster, I don't call it a better card. I just call it a factory super-overclocked card...

 

Onboard SLI is, IMO, closer to the truth here...

 

Let's see if they can get some actual benefits from moving to the next smaller process, and whether they waste it all to claim a slight lead over the competition.
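To put numbers on kriegalex's point: if performance and power both rise by the same 1.5x, performance per watt has not moved at all, which is the sense in which such a card is "factory super-overclocked" rather than better. A trivial check in C, using the hypothetical figures from the quote rather than any real card:

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical figures from the quote above, not a real product. */
    double old_perf = 100.0, old_watts = 200.0;  /* arbitrary perf units  */
    double new_perf = 150.0, new_watts = 300.0;  /* "1.5x faster" at 300W */

    printf("old: %.2f perf/W\n", old_perf / old_watts);  /* 0.50 */
    printf("new: %.2f perf/W\n", new_perf / new_watts);  /* 0.50 */
    /* Identical efficiency: the uplift is bought entirely with power. */
    return 0;
}
```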


At 9:10 Linus mentions the extended version of the Kallmekris collab, but I can't find it on Floatplane, and no unextended cut has been released that I know of. Anyone know what's up with that?


19 hours ago, Kronoton said:

Let's see if they can get some actual benefits from moving to the next smaller process

This is anti-shareholder behaviour and a recipe for company disaster (especially for Nvidia; they cannot afford a situation like the current Intel GPU series or the AMD 5000 series, where nothing above mid-range gets released). Similarly, neither AMD nor Intel can afford to do the same for CPUs.

19 hours ago, Kronoton said:

whether they waste it all to claim a slight lead over the competition

This is the only correct path from their standpoint; any other path has a higher potential to lead to disaster (mostly in the fiscal-responsibility and mindshare departments, but in other ways too). We are in a part of the cycle where it's not possible to be competitive without more power, due to the laws of physics. This cycle ends and a new one begins with the implementation of GAAFETs and, more importantly, PowerVia (backside power delivery) in 2024/2025.

 

MCM GPUs are a wildcard; another possible wildcard would be a complete architecture redesign not reliant on reducing compute (i.e., on increased clocks for gaming). But it's highly unlikely any of that happens before 2026-2029 (depending on which company it is).


On 7/18/2022 at 7:06 AM, Uttamattamakin said:

Basically we need something like this that can translate x86 instructions into ARM or RISC-V instructions. It would mean a fraction of the potential computing power being used to run this hard-coded VM. Say, a chip with 16 big ARM or RISC-V cores that can then run cool and quiet and present itself to legacy Windows or Linux as a 12-core processor.

As I said, making a translator isn't the problem; the performance is. The funny thing is that even when Transmeta CPUs were a thing, a lot of people claimed they worked just fine and were worth it because they had the features of an Intel processor and could run the same software. The point omitted even then was that the Transmeta Crusoe was released in 2000, the same year Intel released the Pentium 4. A direct quote from Wikipedia:
 

Quote

A 700 MHz Crusoe ran x86 programs at the speed of a 500 MHz Pentium III x86 processor,[5] although the Crusoe processor was smaller and cheaper than the corresponding Intel processor.[5]

That is a 200 MHz loss of computing speed to the translation, and the top-of-the-line Crusoe was only 800 MHz. The Intel Pentium 4 ran at 1.4/1.5 GHz at launch, which was also in 2000.
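Expressed as a ratio rather than in absolute clocks, that Wikipedia figure works out to a translation overhead of roughly 1 - 500/700, i.e. about 29%, which lines up with the ~30% loss mentioned further down. A quick sanity check in C:

```c
#include <stdio.h>

int main(void)
{
    /* Figures from the Wikipedia quote above. */
    double crusoe_mhz   = 700.0;   /* Crusoe clock                  */
    double equiv_p3_mhz = 500.0;   /* Pentium III it performed like */

    double overhead = 1.0 - equiv_p3_mhz / crusoe_mhz;
    printf("effective translation overhead: %.0f%%\n", overhead * 100.0);
    /* prints ~29% */
    return 0;
}
```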

 

In 2004 Transmeta brought out the Efficeon, which on paper sounds like a much better deal: double the speed and minimized power draw. Too bad the second generation's 1-1.7 GHz speeds were not really competitive; by 2004 Intel was already tickling 3.0 GHz, and with the Celeron even over it. But as said, there are few to no published comparisons between Transmeta's Efficeons and Intel's CPUs, most probably for the same reason there are few to no comparisons between a Toyota Yaris and a Lamborghini Aventador on a race track.

 

Also, on the performance front, we need to remember that Transmeta operated when we only used single-core CPUs. If we assume about a 30% loss in performance, as with the Transmeta Crusoe (i.e., what it could do with its native code vs. what it did with non-native code), a multi-core environment is going to be a lot worse, because of all the optimization that goes into scheduling the loads and managing which operations are cheapest to do in which order. And now comes the hard piece to chew: ARM and RISC-V still don't perform as well under general loads as AMD and Intel (and while the Apple M1 and M2 are powerful, they have Apple Magic in them, and as always with Apple Magic, Apple isn't sharing it).

Spoiler

I put this in here just because I wrote it, and there are surprisingly many people who like to point this one out whenever architectures become the topic.

 

Of course, there are the RISC-V- and ARM-based supercomputers (which people seem to take as "true proof that they are possibilities"), and they have performed fantastically, but with them we are talking about special use cases and unobtainium: they aren't consumer things, and they are used very, very differently. And even then, the ARM-based Fugaku is #2 in the world of computing, losing to the AMD EPYC-based Frontier by about 2.5 times in performance (440 PFLOPS vs. 1,100 PFLOPS). And the big difference, core count, is quite a huge thing: Frontier has 591,872 compute cores (9,248 x 64-core) while Fugaku has a whopping 7,630,848 cores (158,976 x 48-core); that is over 17 times as many CPUs and almost 13 times the core count to get to second place with ARM. As for the thing in question now, power consumption and thus excess heat, that also isn't on ARM's side this time: Frontier lives in just 74 racks while Fugaku lives in 432, and probably no one needs a lot of math to guess which is the more power-efficient solution.
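Those ratios are easy to verify from the figures as quoted (nothing here is measured; the numbers are exactly the ones in the paragraph above):

```c
#include <stdio.h>

int main(void)
{
    /* Figures exactly as quoted above. */
    double frontier_pflops = 1100.0,   fugaku_pflops = 440.0;
    double frontier_cores  = 591872.0, fugaku_cores  = 7630848.0;
    double frontier_cpus   = 9248.0,   fugaku_cpus   = 158976.0;
    double frontier_racks  = 74.0,     fugaku_racks  = 432.0;

    printf("performance ratio: %.1fx\n", frontier_pflops / fugaku_pflops); /* 2.5x  */
    printf("CPU ratio:         %.1fx\n", fugaku_cpus / frontier_cpus);     /* 17.2x */
    printf("core ratio:        %.1fx\n", fugaku_cores / frontier_cores);   /* 12.9x */
    printf("PFLOPS per rack:   %.1f vs %.1f\n",
           frontier_pflops / frontier_racks,   /* ~14.9 */
           fugaku_pflops / fugaku_racks);      /* ~1.0  */
    return 0;
}
```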

So, in short, not only would obtainable ARM and RISC-V CPUs need to match the performance of Intel and AMD CPUs; with ISA translation, ARM and RISC-V would need to surpass Intel and AMD. And this would only be the ISA side of things; as @Nystemy mentioned, there are a lot of other things that would also need to be translated, correctly accessed, and optimized (probably case by case), and then you have a ton of work to do.


6 minutes ago, Thaldor said:

As I said, making a translator isn't the problem; the performance is.

*looks at intel*


On 7/18/2022 at 4:08 PM, Veretax said:

What I am seriously wondering here is whether even the 12th-gen i9s in many gaming machines by Razer or MSI may be undesirable because of similar (but perhaps not as extreme) thermal characteristics. I spent most of the spring waiting to see what happened with 12th-gen vs. 11th-gen i7s and i9s, and now I'm wondering if the 12th-gen i9s will have a shortened lifespan, even if you aren't overclocking or performance tuning them like you might in a desktop gaming system.

Any thoughts on this?

A modern processor will reduce performance (and power usage) to avoid any serious problems due to temperature.

 

However, what you should be concerned about is power usage (higher power obviously lowers battery life), or at least the temperature of the system, which might get uncomfortable (and if you never carry the laptop anyway and keep it on a desk, just get a desktop).
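To make the throttling behaviour concrete, here is a crude toy model in C of the control loop involved: heat input rises with clock speed, and once the die crosses its limit the clocks get shed until things stabilize. All constants are invented for illustration; Intel's actual thermal management is a far more sophisticated hardware/firmware loop.

```c
#include <stdio.h>

int main(void)
{
    /* Crude toy model: heating scales with clock speed, the cooler
     * removes a fixed amount per step, and the CPU sheds clocks once
     * past its limit. All constants are invented for illustration. */
    double temp_c    = 80.0;          /* die temperature under load   */
    double clock_ghz = 5.0;           /* current boost clock          */
    const double t_limit   = 100.0;   /* throttle threshold ("Tjmax") */
    const double clock_min = 2.0;     /* base clock floor             */

    for (int step = 0; step < 14; step++) {
        temp_c += clock_ghz * 2.0 - 6.0;   /* heating minus cooling */

        if (temp_c > t_limit && clock_ghz > clock_min)
            clock_ghz -= 0.5;              /* throttle: shed clocks */

        printf("step %2d: %.1f GHz, %5.1f C\n", step, clock_ghz, temp_c);
    }
    return 0;
}
```

Run it and you can watch the clocks drop once the limit is crossed: the chip protects itself long before any damage, which is why sustained performance, rather than longevity, is usually what thin laptop cooling costs you.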


47 minutes ago, Thaldor said:

As I said, making a translator isn't the problem; the performance is.

YES!
Real-time translation isn't a solution unless emulation is a "requirement", usually when a proper transition is cost-prohibitive, or illegal from a copyright/license standpoint.


Other than that, migrating code from one architecture to another more permanently is actually often not all that hard, depending on how many dependencies one has. Sometimes it is as easy as getting a compatible compiler; other times it is as easy as just moving the application over to a ported version of the OS. And other times more effort will be needed.
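As a small illustration of what "just getting a compatible compiler" looks like when code does touch the ISA: keep a portable fallback and gate any architecture-specific fast path behind the compiler's predefined macros. The SSE2 and NEON intrinsics named below are real; the overall structure is just one common pattern, not a prescription.

```c
#include <stdio.h>

/* Add four floats pairwise. The fast paths use ISA-specific SIMD;
 * the portable fallback means the same source recompiles anywhere. */
#if defined(__SSE2__)
  #include <emmintrin.h>
  static void add4(const float *a, const float *b, float *out) {
      _mm_storeu_ps(out, _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b)));
  }
#elif defined(__ARM_NEON)
  #include <arm_neon.h>
  static void add4(const float *a, const float *b, float *out) {
      vst1q_f32(out, vaddq_f32(vld1q_f32(a), vld1q_f32(b)));
  }
#else
  static void add4(const float *a, const float *b, float *out) {
      for (int i = 0; i < 4; i++)      /* plain C fallback */
          out[i] = a[i] + b[i];
  }
#endif

int main(void)
{
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];
    add4(a, b, out);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```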

 

However, a large portion of software doesn't actually run on the hardware directly. (Even C/C++ often runs in what amounts to an emulated environment if we look at it from the CPU's perspective; the OS really adds a lot of abstraction layers, for better or worse.)


Hey, I was wondering if I could get some advice from you. I'm looking to upgrade my computer on a budget, but I'm still trying to get the best performance. I play a lot of mods on GTA V, which eat up a lot of my performance. I'm just trying to have a better experience than with what came with the computer. Any recommendations for the CPU, graphics card, and power supply that I would need to change? This is my setup:

Lenovo - Legion Tower 5i Gaming Desktop - Intel Core i5-11400 - 16GB Memory - NVIDIA GeForce GTX 1660 Super - 256GB SSD + 1TB HDD

