DON'T Water Cool Your PC

Plouffe

Intel's Core i9-12900K is a beast that needs taming, and the IceGiant team thinks they might have a solution. Can ANY of our coolers keep up?

 

Check out IceGiant's Campaign Page! https://www.startengine.com/icegiant

 

 

 


These are crazy expensive so that's a nope.

As for the crazy power-consuming components that AMD, Intel, and NVIDIA have planned for us, I would stay away from them and stick to CPUs and GPUs up to 170W.

A PC Enthusiast since 2011
AMD Ryzen 7 5700X@4.65GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2085MHz Memory 5000MHz
Cinebench R23: 15669cb | Unigine Superposition 1080p Extreme: 3566

I understand clickbait titles, but this one is straight-up misleading. Does LTT think you should not use water cooling? I don't think so, and it isn't said in the video either. But that's what this title suggests.

 

There was a video 3 years ago titled "why you shouldn't watercool your PC". That video actually explained if and why you shouldn't use water cooling.

 

This one has basically the same title, except "shouldn't" is replaced with "DON'T" (in all caps). And... it's a cooler review.


Yes. The industry has gotten a bit too crazy in regards to "performance", completely forgetting power consumption in the process.

 

The somewhat silly power densities of these individual components are, however, likely not only impacting cooling requirements but also power delivery. And it isn't as if Intel hasn't burnt out socket power pins on earlier generations (or NVIDIA burnt up power connectors, for that matter; there are reasons they have current shunts per connector these days).

 

At least AMD's multi-chip approach spreads out the hotspots a fair bit more than a monolithic approach does. But we are talking a few mm here, so it doesn't buy them much in terms of easier cooling. (This is also why both AMD and Intel have historically placed the cores at the edges of the die and everything else towards the center; a few mm of extra separation does make a measurable difference.)

 

 

Personally I am rather uninterested in any component that requires anything extra fancy to keep it running "in spec".

 

Imagine how the computer/electronics magazines of the 80s would have reacted if we traveled back in time and explained what it takes to run a moderately average computer these days. Back then a CPU often didn't have a heatsink at all, and GPUs were barely a thing. The thought of a system needing large heatsinks, potentially water cooling, a smattering of high-performance (by 80s standards) fans, and still having to do this fancy thing called thermal throttling, something people didn't even consider until the early 2000s, would have seemed absurd. People have slowly and steadily been eased towards running on the bleeding edge in terms of cooling, and we have all more or less accepted it.

 

However, computing trends over the last few years seem to be climbing that wall a bit faster than most of the market is actually interested in. Perhaps it is a simple wake-up call, and people are starting to think that things have gone a bit far.

 

But looking back, people have complained about "excessive power consumption and hard-to-cool chips" since before the 80s. Back then the solution was a bit of airflow in the case; now it is phase-change or water-cooling solutions. Who knows, in another decade or so we might consider immersion cooling the norm, and something even sillier as the crazy stuff.

 

Personally, I am looking forward to potential multi socket solutions becoming a thing again. (partly since that is what my own hobby architecture is specifically designed for.)


12 minutes ago, Nystemy said:

Personally, I am looking forward to potential multi socket solutions becoming a thing again. (partly since that is what my own hobby architecture is specifically designed for.)

That would be quite nice.

 

There could also still be a case for larger sockets and packages, similar to the LGA 2011 of old.

Quote or tag me( @Crunchy Dragon) if you want me to see your reply

If a post solved your problem/answered your question, please consider marking it as "solved"

Community Standards // Join Floatplane!


22 minutes ago, Ydfhlx said:

There was a video 3 years ago titled "why you shouldn't watercool your PC". That video actually explained if and why you shouldn't use water cooling.

 

This one has basically the same title, except "shouldn't" is replaced with "DON'T" (in all caps). And... it's a cooler review.

Yeah, this is more "Do You Really NEED Water Cooling?"

I sold my soul for ProSupport.


14 minutes ago, Nystemy said:

and still having to do this fancy thing called thermal throttling, something people didn't even consider until the early 2000s

 

Sure, but not for that reason. Run a 20th-century CPU with insufficient cooling and you will see it either crash or burn out. A.k.a. it's a feature, not a bug.

 

Back in the day you had really big boxes, both PCs and otherwise (Amiga 2000 and 3000/4000T), which had 170-300W PSUs that were only needed when you packed all the slots and all the drive bays.

 

At the moment everyone except Apple seems to be going in the "more power more good" direction, which at some point will hit a hard wall with most consumers.


Sorry, I'm a bit confused: since it works using gravity, doesn't the orientation have to be planar? Doesn't that rule out most PC cases, since the motherboard is vertical? Don't get me wrong, I guess you could lay your PC down flat, but still.


5 minutes ago, Brown_Thunder said:

Sorry, I'm a bit confused: since it works using gravity, doesn't the orientation have to be planar?

 

Sure.

 

- this cooler seems to be mostly marketed at people who might run an open bench anyway
- redoing it in a 90° version should be fairly easy


Just now, Crunchy Dragon said:

That would be quite nice.

 

There could also still be a case for larger sockets and packages, similar to the LGA 2011 of old.

Larger and smaller sockets each have their pros and cons (smaller = cheaper, larger = more data lanes).

Though an ideal socket would just be a set of memory channels, some high-speed data lanes (likely PCIe, but my own architecture uses another bus), and power delivery. Beyond that, a system clock and a reset pin are all that is needed, so that a given socket isn't generationally bound by control lines (these can be carried over the regular data lanes, which is why I don't use PCIe for my architecture) but is instead usable for as long as its associated buses aren't deprecated. (And with a good foundation, deprecating a bus is unlikely.)

 

And preferably, a good multi-socket architecture should support systems of non-symmetrical CPUs. That is, over time one just adds new CPUs to the system until all sockets are populated, then starts swapping out whichever ones are considered insufficient. It also allows an end user to configure their system with different types of CPUs: a high serial-performance one for single-threaded tasks, a more multi-core CPU for other tasks, or CPUs with specific hardware accelerators, among other things; potentially even socketed GPUs, thanks to on-module memory (HBM). In short, any mix of performance they need. And of course the OS needs to keep all of this in mind.

 

Another thing a good multi-socket system would center around is modularity, to the point where the concept of a "motherboard" becomes rather antiquated, since it negates future expandability. Making this largely "future-proof" wouldn't be that hard: a given interconnect lasts for many years (look at PCIe slots as an example), and a given connector/interface can coexist with its predecessor for a set of years (PCI and PCIe are again a good example, and PC/104 systems are an even better one). A system would then consist of a socket board, an IO board, and likely some auxiliary board for internal IO; if one wants more sockets, one adds another socket board. There would also be a system controller board that boots the whole contraption and interfaces with the PSU and the all-important power button.


2 minutes ago, Nystemy said:

And preferably, a good multi-socket architecture should support systems of non-symmetrical CPUs.

 

A multi-socket motherboard only makes sense if you need to populate those sockets on day 1. While the possibility of upgrading that way, or beyond a generation, is a nice accident, designing something for that purpose is a futile task and an expensive failure.

 

What is really needed (and started in many tech sectors decades ago) is asking the question "when is enough enough?". So, in the context of gaming, does having 2 or 3 times the CPU/GPU power, pushing 33.17 million pixels 360 times a second, really improve the experience?
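(A quick back-of-the-envelope check of those numbers, assuming "33.17 million pixels" refers to an 8K panel and "360 times a second" to a 360 Hz refresh rate:)

```python
# Sanity check: 8K is 7680x4320; at 360 Hz that is a lot of pixels to push.
width, height, refresh_hz = 7680, 4320, 360

pixels_per_frame = width * height
print(f"{pixels_per_frame / 1e6:.2f} million pixels per frame")                # ~33.18
print(f"{pixels_per_frame * refresh_hz / 1e9:.2f} billion pixels per second")  # ~11.94
```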

 

For most people the answer is already a clear "no", and hence you will see more and more gaming PCs with low-end CPUs and GPUs, or even APUs, even among enthusiasts.


A Noctua NH-D15 would beat my 360mm radiator just about any day... I wonder if this siphon will do the same, if not better.

Useful threads: PSU Tier List | Motherboard Tier List | Graphics Card Cooling Tier List ❤️

Baby: MPG X570 GAMING PLUS | AMD Ryzen 9 5900x /w PBO | Corsair H150i Pro RGB | ASRock RX 7900 XTX Phantom Gaming OC (3020MHz core & 2650MHz memory) | Corsair Vengeance RGB PRO 32GB DDR4 (4x8GB) 3600 MHz | Corsair RM1000x | WD_BLACK SN850 | WD_BLACK SN750 | Samsung EVO 850 | Kingston A400 | PNY CS900 | Lian Li O11 Dynamic White | Display(s): Samsung Odyssey G7, ASUS TUF GAMING VG27AQZ 27" & MSI G274F

 

I also drive a Volvo, as one does being Norwegian haha, a Volvo V70 D3 from 2016.

Reliability was a key thing, and it's my second car; it's working pretty well for its 6 years of age xD


28 minutes ago, Kronoton said:

 

Sure, but not for that reason. Run a 20th-century CPU with insufficient cooling and you will see it either crash or burn out. A.k.a. it's a feature, not a bug.

 

Back in the day you had really big boxes, both PCs and otherwise (Amiga 2000 and 3000/4000T), which had 170-300W PSUs that were only needed when you packed all the slots and all the drive bays.

 

At the moment everyone except Apple seems to be going in the "more power more good" direction, which at some point will hit a hard wall with most consumers.

I think you missed my point, so I'll rephrase.

 

Back when thermal throttling first came onto the market, it was indeed just a safety feature to keep the chip from burning up. Even a rather poorly cooled CPU wouldn't get up to those temps.

 

These days, both CPUs and GPUs opportunistically boost their clocks until thermal limits are reached. And then they stay there for as long as their power budget allows.

 

Back before the 2000s, VRM designs simply weren't up to delivering that kind of power, so CPUs were mainly power-limited, not thermally limited. So to people in the 80s, the concept of a CPU needing to pull back its clock so as not to overheat would have seemed utterly crazy.
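As a minimal sketch of that opportunistic boost behavior (the control loop and numbers here are made up for illustration and are not any vendor's actual algorithm; 241W happens to match the 12900K's advertised PL2):

```python
# Toy model of opportunistic boosting: raise the clock until either the
# thermal limit or the power budget is hit, then back off and hold there.
TEMP_LIMIT_C = 100      # illustrative thermal throttle point
POWER_BUDGET_W = 241    # illustrative sustained power limit

def next_clock(clock_mhz: float, temp_c: float, power_w: float) -> float:
    """One control-loop step: boost if there is headroom, else back off."""
    if temp_c >= TEMP_LIMIT_C or power_w >= POWER_BUDGET_W:
        return clock_mhz - 100.0   # limited: shed clock speed
    return clock_mhz + 100.0       # headroom left: keep boosting

clock = 3200.0
for temp, power in [(60, 120), (85, 200), (99, 238), (101, 250)]:
    clock = next_clock(clock, temp, power)
    print(f"temp={temp}C power={power}W -> clock={clock:.0f}MHz")
```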

 

4 minutes ago, Kronoton said:

A multi-socket motherboard only makes sense if you need to populate those sockets on day 1. While the possibility of upgrading that way, or beyond a generation, is a nice accident, designing something for that purpose is a futile task and an expensive failure.

 

What is really needed (and started in many tech sectors decades ago) is asking the question "when is enough enough?". So, in the context of gaming, does having 2 or 3 times the CPU/GPU power, pushing 33.17 million pixels 360 times a second, really improve the experience?

 

For most people the answer is already a clear "no", and hence you will see more and more gaming PCs with low-end CPUs and GPUs, or even APUs, even among enthusiasts.

That depends on the motherboard design. As I postulated later in my post: the concept of a motherboard is antiquated as far as a proper multi-socket system is concerned.

 

The main advantage of multi-socket isn't speeding up one application; that is often not ideal unless the application is inherently scalable without latency becoming a big issue. The main advantage of having more sockets is running more applications, or offloading background tasks, but also spreading out thermal dissipation and in effect making cooling easier.

 

Diminishing returns is a thing, and yes, at some point more isn't strictly needed as far as most users are concerned.
For truly scalable applications, however, diminishing returns is less of an issue, and budgetary concerns play the main role instead. (But even then, there are practical upper limits to a system.)

 

9 minutes ago, Kronoton said:

While the possibility of upgrading that way, or beyond a generation, is a nice accident, designing something for that purpose is a futile task and an expensive failure.

This is something I will have to strongly disagree with.

Yes, a lot of current architectures are not good candidates for multi-socket systems, especially considering how the industry has approached the problem from both the hardware and software perspectives.

 

An architecture has to be designed with multi-core and multi-socket systems in mind for this to be practical. My own architecture, as an example, is designed with surprise CPU hot-swapping in mind. The OS and software suite have to play ball as well, but architecturally it isn't all that complex to deal with: most of the "dealing with it" part is realizing that the other CPU is gone, eventually timing out its calls, and noticing that a new CPU has appeared a while later for the kernel to boot up and assign address space to. The rest is handled in kernel land.

In regards to core configuration and feature differences, these are simply presented to the kernel as a printout of sorts: the base feature set is the same on all CPUs, while add-ons, core count, clock speed, accelerators, etc. all vary. The kernel then just has to avoid scheduling tasks onto a core/CPU that doesn't support them, and likewise each task specifies what features it needs. As an exceptionally rough analogy, adding a new CPU isn't technically much different from adding a new storage device: it has X capacity, and it is up to the kernel/user to decide what to put on there. (The real complexities come from memory sharing between the CPUs, cache coherency, memory security, handling IO devices, etc. But even this is still quite manageable.)
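A very rough sketch of that feature-aware scheduling idea (socket names and feature flags here are hypothetical, purely for illustration):

```python
# Toy feature-mask scheduler for a non-uniform multi-socket system.
# Each CPU advertises a set of feature flags; a task declares what it
# needs; the scheduler only considers CPUs that have all of them.
cpus = {
    "socket0": {"base", "simd256", "crypto_accel"},
    "socket1": {"base", "simd512"},
}

def eligible_cpus(required: set) -> list:
    """Return the sockets whose feature set is a superset of `required`."""
    return [name for name, feats in cpus.items() if required <= feats]

print(eligible_cpus({"base"}))              # ['socket0', 'socket1']
print(eligible_cpus({"base", "simd512"}))   # ['socket1']
print(eligible_cpus({"base", "quantum"}))   # [] -> reject, or emulate in software
```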


11 minutes ago, Nystemy said:

As I postulated later in my post: the concept of a motherboard is antiquated as far as a proper multi-socket system is concerned.

No matter how you do it, modularity will always add cost and complexity and have a negative impact on performance.

 

We also live in a world where 99.99% of users would be more than satisfied with an existing SoC with little to no expansion. For the real heavy lifting, software has been adapted to run on multiple systems connected by high-speed LAN. Some of these will use multiple sockets, but if you don't populate all of them, the ROI will be suboptimal.

 

A lot of what you are describing falls under "solution in search of a problem", which is all nice and good as a study project but will simply have no implications in the real world.


Maybe it's back to one of those oil-immersion cooling solutions, where the whole motherboard is submerged in mineral oil.

 


Does anyone know if this cooler fits in a 4U chassis? If not, I'd have to go with a 360mm AIO.


6 minutes ago, Kronoton said:

No matter how you do it, modularity will always add cost and complexity and have a negative impact on performance.

 

We also live in a world where 99.99% of users would be more than satisfied with an existing SoC with little to no expansion. For the real heavy lifting, software has been adapted to run on multiple systems connected by high-speed LAN. Some of these will use multiple sockets, but if you don't populate all of them, the ROI will be suboptimal.

 

A lot of what you are describing falls under "solution in search of a problem", which is all nice and good as a study project but will simply have no implications in the real world.

The idea isn't really aimed at the normal consumer.

 

It's more for servers and workstations.

In regards to unpopulated sockets: a lot of servers in a lot of datacenters aren't fully populated with CPUs. Sockets aren't actually that expensive, and most of these not-fully-populated systems are there for ease of future expandability. (That often never ends up happening anyway, since by the time the upgrade is needed, the motherboard is outdated from a feature standpoint. Although most servers also have auxiliary components that make this more debatable, and it is here that splitting those off onto a separate board has advantages, something the industry is already doing: mezzanine, OCP, and regular add-in cards are fairly common. The main thing that makes a motherboard outdated is the chipset, something my own architecture doesn't have; just like AMD's server CPUs, and AMD is on their 3rd generation of the SP3 socket now.)


(There are also additional ways of reducing socket-associated costs. One idea that pops up as a potential future in the industry every now and then is integrating the VRM onto the CPU module itself, both reducing the number of socket power pins and freeing them up for IO. Power density and cooling issues have been the main setbacks for this idea, but in a truly multi-socket system, power dissipation per CPU can step back in favor of efficiency, giving thermal headroom for the VRM.)
 

Secondly, the idea of multi-generational support isn't particularly new. Both Intel and AMD have done it more than once before, and a lot of their customers do upgrade their systems to new CPUs as they come out; replacing a whole server is more expensive than replacing just a component or two. (The most notable example being the AM4 socket for consumers.)

 

Making an architecture designed with multi-generational support in mind isn't a disadvantage, especially with hot-swapping of CPUs in such a system. Imagine not having to inform your virtualization customers of scheduled downtime to migrate their VMs to another system; being able to just upgrade the system in situ is a value-add from an administrative standpoint alone. (And PowerPC systems, to my knowledge, already do this. It isn't that hard to implement in hardware.)

 

Likewise, it is nice if a CPU failure doesn't crash the whole system. Resiliency is in itself a large value-add for a lot of customers.

 

 

So no, it isn't a solution in search of a problem.
It is an observation of what the industry has been doing for a rather long time now, optimized a bit by removing some of the minor complexities surrounding long-term socket support (mainly the dedicated control signal lines to the socket), and by putting a requirement on kernel designers to better handle non-uniform systems.


"Don't ship your PC with this installed" Sums it up right there, because the title of the video really should have been DON'T BUY THIS WHACK JOB COOLER.

 

  1. It's an engineering sample, not even a retail product. Has LTT become paid quality-control testers, or does money totally trump common sense with Linus now???
  2. It failed. Spectacularly. It cannot keep a consumer chip cool, never mind a high-end part.
  3. It only works in ONE orientation, yet PCs come in MULTIPLE orientations depending on your case. That immediately relegates it to a test-bench product at best.
  4. It has clearance issues with RAM and mounting issues with the socket, both problems that simply aren't there when installing a custom water block.
  5. It looks ugly AF, regardless of the concept it is trying to prove.

Custom water cooling doesn't come without its problems either, but those are pretty much mitigated to the point where, if you've installed it right, you won't have any issues. This crazy contraption introduces more headaches than the one it's supposed to actually solve: cooling. Also, keep in mind that all AIO coolers have very little actual coolant in them, due to the absence of a reservoir and how thin their radiators are. That puts a hard cap on the heat they can dissipate effectively, which would explain why the D15 was beating many AIO coolers.
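On the "very little actual coolant" point, the small volume mainly limits how much of a transient buffer the loop has before the radiator and fans must keep up; a rough estimate, assuming about 200 ml of water-based coolant:

```python
# How long can ~200 ml of coolant absorb a 300 W load before warming by 20 K?
# Transient-buffer estimate only; steady-state dissipation is set by the
# radiator and fans, not by the coolant volume.
mass_kg = 0.20      # assumed ~200 ml of water-based coolant
c_water = 4186      # specific heat of water, J/(kg*K)
delta_t = 20        # allowed coolant temperature rise, K
load_w = 300        # heat dumped into the loop

seconds = mass_kg * c_water * delta_t / load_w
print(f"~{seconds:.0f} s of buffering before the coolant is 20 K warmer")  # ~56 s
```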

 

LTT should have told IceGiant to go take a hike with their garbage cooler. The fact that you even made a video with this POS and added such a misleading title shows HOW LOW your standards have gone. Very disappointed!!! 👎👎


So Intel is already hitting the good old AMD FX-9000 problem. A recap for those not informed: there never were separate FX-9000 chips; they were all just top-of-the-line FX-8350/8370 chips given a "dirty" factory overclock. And by "dirty" I mean max core voltage (1.5375V, that's crispy), yeet the clocks, and hope for the best. The kicker was that, by thermal design, the FX-8350/8370 were just 125W TDP chips, while the FX-9370/9590 were a whopping 220W TDP, and actually cooling one straight out of the box was a nightmare if you didn't want thermal throttling whenever the CPU decided to yeet the clocks.

The fun takeaway is that before the FX-9000 launched, you could get those golden FX-8350s that would later have been binned as FX-9000s (some golden ones still made it into the FX-8350/8370 bins) and overclock them to 5.0GHz without going exotic on the cooling. Likewise (I still believe), you could take any FX-9370/9590, manually overclock it to 5.0GHz, and get better results (especially thermally) than with the out-of-the-box factory overclock. And then there was the best-of-the-best option: get the FX-8370E, the low-power edition of the FX-8370, which was just 95W TDP and by spec boosted to 4.3GHz at only 1.3750V. With a golden FX-8370E, 5.0GHz was very manageable thermally.
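For a sense of why those voltage bumps hurt so much: CMOS dynamic power scales roughly with V² times frequency, so comparing the two voltages quoted above at the same clock gives a quick estimate (an illustrative first-order model, ignoring leakage):

```python
# Rough CMOS dynamic-power scaling: P is proportional to V^2 * f.
# Compare the quoted FX-8370E boost voltage with the FX-9000-style voltage.
v_stock, v_yeet = 1.3750, 1.5375

ratio = (v_yeet / v_stock) ** 2
print(f"~{ratio:.2f}x dynamic power from the voltage bump alone")  # ~1.25x
# Stack the higher clocks on top of that and the 125 W -> 220 W TDP jump
# stops looking surprising.
```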

 

Though the whole AMD FX lineup was a failure from the beginning, because of its awful design and the abysmal performance that came from that design, I believe Intel is now doing the exact same thing with the i9-12900K, and especially with its "performance mode": a plain dirty and awful factory overclock that delivers the performance, but whose thermal side is just garbage, because "yeet the numbers and frack everything!". And I would bet that you could actually get much better performance and thermals out of them by just handling the overclocking yourself the old way, if that's still possible.


1 hour ago, Thaldor said:

So Intel is already hitting the good old AMD FX-9000 problem. A recap for those not informed: there never were separate FX-9000 chips; they were all just top-of-the-line FX-8350/8370 chips given a "dirty" factory..........

 

Though the whole AMD FX lineup was a failure from the beginning, because of its awful design and the abysmal performance that came from that design, I believe Intel is now doing the exact same thing with the i9-12900K, and especially with its "performance mode": a plain dirty and awful factory overclock that delivers the performance, but whose thermal side is just garbage, because "yeet the numbers and frack everything!". .....

WAY back in 2006, Intel changed from the NetBurst architecture, with clocks pushing toward 4GHz, to the lower-clocked Core architecture: better or equal performance with FAR less heat and power needed. SOMEONE needs to come up with a better x86 architecture. AMD or Intel need an x86_RISC or x86_ARM processor: something that can run x86_64 natively but with the power savings of ARM. Maybe a processor that JIT-translates the x86 code into ARM or RISC-V instructions; essentially hard-coded, hardware-level emulation.

I'd be shocked if they have not tried, or are not working on, something like this.


1 hour ago, Uttamattamakin said:

Something that can run x86_64 natively but with the power savings of ARM. Maybe a processor that JIT-translates the x86 code into ARM or RISC-V instructions; essentially hard-coded, hardware-level emulation.

As far as I know, this is actually a really huge problem.

 

Making a direct translator like Apple's Rosetta isn't the problem; the problem is that there is only so much it can do. The main issue is code optimization: really optimizing your code for one architecture is already a big job, but with a CPU that accepts x86-64 instructions while actually being an ARM/RISC-V CPU that translates the code, you would need to optimize for x86-64, for the translator, or for the underlying ARM/RISC-V, or in the worst case for all of them. With that comes the question of whether it would be wiser to just build for one architecture and put up a big warning that the user is an idiot for not using that architecture.

 

Now, before anyone says "Apple did it, so it's possible": that is Apple. Unlike Intel and AMD, Apple doesn't need to account for a thousand and one different hardware combinations, a thousand and one different use cases, and a million and one different software developers. And even Apple still offers the Mac Pro and Mac mini with Intel CPUs, because they still have demand (they wouldn't be offering them otherwise).

And before someone says "Rosetta just works": yes, it works, but AFAIK you wouldn't even try to run something like Adobe Premiere with 8K footage, Autodesk 3ds Max with movie-grade high-poly modelling, or SolidWorks with a thousand-part assembly through it; anything that goes balls-to-the-wall, where you already start asking "is there better software for this, because this is heavy", brings pretty much everything to its knees, and the only answer is to throw more hardware at it. With modern hardware we are again approaching the point where changing one "while" loop to a "for" loop can have a huge effect on performance, and that level of optimization is impossible for a translator/emulator without coding specific exceptions for it, at which point you would be optimizing your code for one architecture while also optimizing for the translator.

 

Intel still has the jump to Intel 5 manufacturing ahead of it, and we will get our first taste of a 5nm-class node later with the AMD Zen 4 release, but lately the benefits of node changes have been running out rather quickly. It also says a lot about how much AMD expects TSMC's 5nm to help that Zen 5, on a TSMC 3nm node, has already been shown on the roadmap.

With that, I believe in the future we might need to make some radical changes and accept the compromises they come with. Maybe the powerful gaming PC of 2025-2030 will be exclusively EATX or XL-ATX sized; maybe we even need to ditch ATX, jump into WTX territory, and use something more like an SWTX motherboard. Instead of trying to make one CPU as powerful as possible, we may have to accept that one just isn't enough, that a joined dual-die CPU doesn't spread the heat enough, and that we need to start using 2 or more CPUs. Probably move even further towards server-style building and start using plastic ducts to direct air to the components.


2 hours ago, Thaldor said:

With that, I believe in the future we might need to make some radical changes and accept the compromises they come with. Maybe the powerful gaming PC of 2025-2030 will be exclusively EATX or XL-ATX sized; maybe we even need to ditch ATX, jump into WTX territory, and use something more like an SWTX motherboard. Instead of trying to make one CPU as powerful as possible, we may have to accept that one just isn't enough, that a joined dual-die CPU doesn't spread the heat enough, and that we need to start using 2 or more CPUs. Probably move even further towards server-style building and start using plastic ducts to direct air to the components.

While there will always be a place for that, I think there is SO much money and interest in ARM. There was a processor that did essentially what I had in mind already: the Transmeta Crusoe.

 

https://en.wikipedia.org/wiki/Transmeta_Crusoe

 

Quote

Instead of the instruction set architecture being implemented in hardware, or translated by specialized hardware, the Crusoe runs a software abstraction layer, or a virtual machine, known as the Code Morphing Software (CMS). The CMS translates machine code instructions received from programs into native instructions for the microprocessor. In this way, the Crusoe can emulate other instruction set architectures (ISAs). This is used to allow the microprocessors to emulate the Intel x86 instruction set.

Basically we need something like this that can translate x86 instructions into ARM or RISC-V instructions. It would mean only a fraction of the potential computing power goes to running this hard-coded VM. Say, a chip with 16 big ARM or RISC-V cores that can run cool and quiet while presenting itself to legacy Windows or Linux as a 12-core x86 processor.
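As a toy illustration of that code-morphing idea (a translation cache mapping guest instruction blocks to native micro-ops; every mnemonic below is invented and is not a real x86 or ARM encoding):

```python
# Toy "code morphing" layer: translate a guest (x86-like) basic block into
# native (ARM-like) micro-ops once, then reuse the cached translation.
TRANSLATION_RULES = {
    "MOV eax, 5":   ["mov x0, #5"],
    "INC eax":      ["add x0, x0, #1"],
    "ADD eax, ebx": ["add x0, x0, x1"],
}

translation_cache = {}

def morph(guest_block):
    """Translate a guest block, caching the result like Transmeta's CMS did."""
    key = "\n".join(guest_block)
    if key not in translation_cache:
        native = []
        for insn in guest_block:
            native.extend(TRANSLATION_RULES[insn])
        translation_cache[key] = native
    return translation_cache[key]

print(morph(["MOV eax, 5", "INC eax", "ADD eax, ebx"]))
# ['mov x0, #5', 'add x0, x0, #1', 'add x0, x0, x1'], and a second call
# for the same block would come straight from the cache.
```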

SO where did the IP for doing this wonderful thing go? Intel wasn't interested, so Transmeta sold out to a company called Novafora, which (I checked) also went bust. Get that IP onto an ARM chip, SOMEONE. At this rate global warming won't be because of carbon in the atmosphere but because of all the CISC chips.


5 hours ago, Uttamattamakin said:

While there will always be a place for that, I think there is SO much money and interest in ARM. There was a processor that did essentially what I had in mind already: the Transmeta Crusoe.

 

https://en.wikipedia.org/wiki/Transmeta_Crusoe

 

Basically we need something like this that can translate x86 instructions into ARM or RISC-V instructions. It would mean only a fraction of the potential computing power goes to running this hard-coded VM. Say, a chip with 16 big ARM or RISC-V cores that can run cool and quiet while presenting itself to legacy Windows or Linux as a 12-core x86 processor.

SO where did the IP for doing this wonderful thing go? Intel wasn't interested, so Transmeta sold out to a company called Novafora, which (I checked) also went bust. Get that IP onto an ARM chip, SOMEONE. At this rate global warming won't be because of carbon in the atmosphere but because of all the CISC chips.

Here is a fun thing with software.

 

Most of it isn't compiled down to machine code. A lot of it runs in runtime environments and low-level interpreters, not to mention dynamically linked libraries (even if not all OSes call them that).

 

Port over the libraries, interpreters, and runtime environments, and suddenly the new platform supports a large share of applications. However, we still need to port over the OS as well, since a given application needs the correct flavor of software glue for it to work. (Unless one develops for a runtime environment; then one just needs one's virtual environment, and most things are fine.)

 

So to a large degree, we don't have to emulate x86 for compatibility reasons. Every now and then, yes, but by and large, no.

That said, recompiling software for the other platform often works fairly flawlessly (unless one's code is IO- or accelerator-heavy; but at least accelerators can be emulated in code, and IO can be buffered and handled with libraries if we target a specific platform). And a fair few other architectures have rather extensive software catalogues available as-is. So if one goes about things wisely, one won't have to do much redevelopment (depending on the project).

 

To be fair, GNU/Linux has shown quite a bit of architecture independence for a while now, since, as stated, most applications don't run on the hardware but rather on top of the OS. Performance mostly boils down to how well the various OS pieces can be implemented on the other architecture. (Porting GNU/Linux to every new platform with sufficient RAM is literally a hobby for some Linux folks, even a PIC microcontroller with only a few KB of RAM, although then you get a very slim version of the OS.)
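That architecture independence is easy to see from interpreted code: the same script runs unchanged on x86-64, ARM64, or RISC-V, because only the interpreter underneath had to be ported. For example:

```python
# The same script runs unchanged across architectures; only the Python
# interpreter and its C libraries had to be ported to each platform.
import platform

print(platform.machine())   # e.g. 'x86_64', 'aarch64', or 'riscv64'
print(platform.python_implementation(), platform.python_version())
```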


Thank god content creators like Linus are finally starting to complain about the state of TDPs of modern chips...

 

Call me a boomer, but I don't want more than 100-150W on my CPU and 200-250W on my GPU. 250W for GPUs used to be reserved for high-end Ti cards... Also, if a card goes from 200W to 300W and is 1.5x faster, I don't call it a better card; I just call it a factory super-overclocked card...
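That intuition is just performance-per-watt arithmetic:

```python
# 1.5x the performance for 1.5x the power leaves efficiency unchanged.
old_perf, old_watts = 1.0, 200
new_perf, new_watts = 1.5, 300

print(old_perf / old_watts)   # 0.005 perf/W
print(new_perf / new_watts)   # 0.005 perf/W: same efficiency, more heat
```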

 

400W is already a lot of heat (and energy), and with heat waves becoming more and more common in the summer, I don't want an 800W tower heating an already super hot room...

 

Yeah, sure, AC exists, but I think we know deep down it's not the best solution (I won't go down that rabbit hole in an online forum...). Not convinced? LTT is making videos about putting your PC in a grow tent, so... yeah, we are there already 🙂

Gaming: Windows 10 - Intel i7 9900K - Asus RTX 2080 Strix OC - GIGABYTE Z390 AORUS MASTER - O11 Dynamic

