
How big and fast do you think GPUs and CPUs will be in the future?

This post explores the current state of CPU and GPU architecture, the limitations we're facing, and the potential future developments in the computing world. It delves into topics like Moore's Law, transistor size, and the challenges of quantum effects, while also discussing the potential for new materials, manufacturing processes, and designs to overcome these hurdles. It also contemplates the idea of having powerful computers as small as our phones and acknowledges the difficulties humans have in predicting the future.

 

 

Let's be honest, we are starting to hit the limit of GPU and CPU architecture improvements.

 

For those who know Moore's Law, it's an observation that the number of transistors in a computer chip doubles every two years or so. We're seeing that cards and chips don't have the same rapid improvement as they used to; for example, comparing a CPU from 2000 with one from 2005 shows a significant performance difference and plenty of overclocking headroom. But now, look at the 13th and 14th generation Intel chips.
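
As a rough illustration of what that doubling rate means, here's a little Python sketch; the starting transistor count is just an assumed ballpark for a circa-2000 desktop CPU, not a measured figure:

def projected_transistors(n0: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, assuming doubling every `doubling_period` years."""
    return n0 * 2 ** (years / doubling_period)

n_2000 = 40e6  # assumed ballpark for a desktop CPU around the year 2000
for years in (0, 5, 10, 20):
    print(f"+{years:2d} years: ~{projected_transistors(n_2000, years):,.0f} transistors")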

 

Cards are still seeing performance improvements, unlike CPUs. I don't know when they will start to progress more slowly. How big can GPUs get? (The 40-series NVIDIA cards are massive, and if we continue like this, they'll become even bigger.)

 

We used to have computers as large as a room that packed only a few kilobytes or megabytes of memory, and now we have gaming PCs in the corner of our rooms.

 

Transistors were massive, but now they're really small: currently they're roughly 7-10 nanometers in size, and they're on track to shrink further to 5 nanometers.
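
For a sense of scale, here's a quick calculation using silicon's roughly 0.543 nm lattice constant (treating the marketing node names as if they were physical lengths, which is a simplification):

SI_LATTICE_NM = 0.543  # approximate silicon lattice constant in nanometers

for feature_nm in (10, 7, 5, 3):
    # How many repetitions of the silicon crystal cell span the feature size.
    print(f"{feature_nm} nm is about {feature_nm / SI_LATTICE_NM:.0f} silicon lattice constants wide")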

 

As transistors continue to shrink, they approach the scale of individual atoms. At these dimensions, quantum effects become more pronounced and can lead to issues such as quantum tunneling and increased leakage current. (Quantum tunneling occurs when electrons pass through barriers they normally wouldn't be able to cross, due to their wave-like properties.) This can lead to increased power consumption and reduced stability in the transistors. To mitigate these quantum effects, new materials, manufacturing processes, and transistor designs are being researched. For example, Intel has been developing 3D transistor designs like FinFET and Gate-All-Around (GAA) to improve control over the flow of current in the transistors, reducing leakage and improving stability. Other companies and researchers are exploring alternative materials and technologies, such as carbon nanotubes, graphene, and 2D materials, in an effort to create smaller, more efficient, and more stable transistors.
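
To get a feel for why shrinking makes tunneling explode, here's a minimal sketch of the standard rectangular-barrier approximation, T ≈ exp(-2κd); the barrier height, electron energy, and widths below are purely illustrative numbers:

import math

HBAR = 1.054e-34        # J*s
M_ELECTRON = 9.109e-31  # kg
EV = 1.602e-19          # J per electron-volt

def tunneling_probability(barrier_ev: float, energy_ev: float, width_nm: float) -> float:
    """Approximate T = exp(-2*kappa*d) for an electron hitting a rectangular barrier."""
    kappa = math.sqrt(2 * M_ELECTRON * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Illustrative numbers: a ~1 eV barrier, a 0.5 eV electron, and a shrinking barrier width.
for width in (5.0, 3.0, 2.0, 1.0):
    print(f"{width:.1f} nm barrier -> T ~ {tunneling_probability(1.0, 0.5, width):.2e}")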

 

We all know that we could get video games with lifelike graphics, but due to limitations some corners have to be cut. Who knows, maybe we could have a computer as small as our phones with a beast of power beyond our imagination.

 

We as humans are terrible at prediction, as history has told us, but we can only estimate.

 

* What do you think the new architecture will be, and when might it come, even though it's not coming anytime soon?

 

Note: this is a compilation of my thoughts and research.

 

Do you ever get that feeling when you're on a forum from 2000-2006 and think, "Wow, this is old. This is how they thought about [***] back then"? Well, someday, someone might look at this conversation the same way, or perhaps not. Who knows?


16 minutes ago, Blazepoint5 said:

For those who know Moore's Law, it's an observation that the number of transistors in a computer chip doubles every two years or so.

Yeah, it's called a law, but it's not really a "law"; it's more of an observation of a historical trend.

 

16 minutes ago, Blazepoint5 said:

Let's be honest, we are starting to hit the limit of GPU and CPU architecture improvements.

 

Video games are starting to hit the same limit, yet there are exceptions that always surprise you.

 

The same can happen with GPUs: we didn't have ray tracing before, and look where we are now.

18 minutes ago, Blazepoint5 said:

Intel has been developing 3D transistor designs like FinFET and Gate-All-Around (GAA) to improve control over the flow of current in the transistors, reducing leakage and improving stability.

That sounds interesting. Why not apply the same logic of 3D-stacking cells, as used in storage and cache, to other circuits? It could potentially work.

 

19 minutes ago, Blazepoint5 said:

* What do you think the new architecture will be, and when might it come, even though it's not coming anytime soon?

For current technologies, I'm wondering whether AMD or Intel will eventually come up with something that can match Nvidia's RT performance, for example.

 

21 minutes ago, Blazepoint5 said:

We all know that we could get video games with lifelike graphics, but due to limitations some corners have to be cut. Who knows, maybe we could have a computer as small as our phones with a beast of power beyond our imagination.

Honestly, going back to video games, some games are already leaning into realism quite hard, both through their level of detail and through the simple existence of VR.

 

GTA 5 is from 2015, and it's quite impressive how realistic and immersive its graphics can be.

Note: Users receive notifications after Mentions & Quotes. 

Feel free to ask any questions regarding my comments/build lists. I know a lot about PCs but not everything.

PC:

Ryzen 5 5600 | 16GB DDR4 3200MHz | B450 | GTX 1080 Ti

PCs I used before:

Pentium G4500 | 4GB/8GB DDR4 2133MHz | H110 | GTX 1050

Ryzen 3 1200 3.5GHz / OC: 4GHz | 8GB DDR4 2133MHz / 16GB 3200MHz | B450 | GTX 1050

Ryzen 3 1200 3.5GHz | 16GB 3200MHz | B450 | GTX 1080 Ti


5 minutes ago, Blazepoint5 said:

Let's be honest, we are starting to hit the limit of GPU and CPU architecture improvements.

I'm sure there's a lot that could still be done, the bigger issue is both cost and software compatibility. Smaller nodes cost more and more to develop. More specialized accelerators require software support. Multiple cores require multi-threaded software to take advantage and that is, comparatively speaking, worlds more complex than single threaded (and not every problem can be divided easily or at all)

 

9 minutes ago, Blazepoint5 said:

Cards are still seeing performance improvements, unlike CPUs.

Graphics is an embarrassingly parallel problem. Until your card has as many cores as there are pixels on your screen, you could always go bigger (provided memory bandwidth can keep up)
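
A toy sketch of what "embarrassingly parallel" means here; shade() is a made-up stand-in, not any real shader:

from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT = 320, 180

def shade(pixel_index: int) -> float:
    # Each pixel's value depends only on its own coordinates, so no coordination is needed.
    x, y = pixel_index % WIDTH, pixel_index // WIDTH
    return ((x * 31 + y * 17) % 255) / 255.0

def render_parallel(workers: int = 4) -> list[float]:
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # chunksize keeps inter-process overhead reasonable for tiny per-pixel tasks.
        return list(pool.map(shade, range(WIDTH * HEIGHT), chunksize=WIDTH))

if __name__ == "__main__":
    frame = render_parallel()
    print(f"Rendered {len(frame)} pixels independently")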

 

Additionally we just got a "brand new" technology in terms of real time ray tracing. New technologies naturally see much greater improvements across generations, compared to more mature technologies (i.e. rasterization). There are improvements in hardware and software (e.g. ReSTIR) that, when combined, result in rapid improvements. But that will end once things become more mature.

 

Then of course you have software/hardware improvements like DLSS that can reduce the load on the GPU. However algorithms that provide interpolated results aren't universally applicable. It mostly works for images where somewhat less precise results are acceptable. You couldn't necessarily do the same in, for example, a more scientific context.

 

In many ways CPUs are much more complex beasts, because they are more generalized. A CPU core can do a lot more than a GPU core. And there are many tasks that run on a CPU that can't be split across multiple cores, so they really only benefit from faster cores or alternatively more specialized hardware.

Remember to either quote or @mention others, so they are notified of your reply


3 minutes ago, podkall said:

Yeah, it's called a law, but it's not really a "law"; it's more of an observation of a historical trend.

 

Video games are starting to hit the same limit, yet there are exceptions that always surprise you.

 

The same can happen with GPUs: we didn't have ray tracing before, and look where we are now.

That sounds interesting. Why not apply the same logic of 3D-stacking cells, as used in storage and cache, to other circuits? It could potentially work.

 

For current technologies, I'm wondering whether AMD or Intel will eventually come up with something that can match Nvidia's RT performance, for example.

 

Honestly, going back to video games, some games are already leaning into realism quite hard, both through their level of detail and through the simple existence of VR.

 

5 minutes ago, podkall said:

GTA 5 is from 2015, and it's quite impressive how realistic and immersive its graphics can be.

 

Indeed random friend.

 

Ray tracing and path tracing are an insane improvement to technology and video games, knowing this tech runs in real time with only a few milliseconds of delay, which is to be expected.

 

To be honest, GTA 5 could have turned out even better: it's a 2013 game that saw improvements over the years, with the 2015 release and the PS5 upgrade. Rockstar had to cut corners to deliver on expectations, which shows that even by that year there was the capability to achieve something beyond what we got. Due to those limitations we still got a masterpiece for its time, and to this day it's still good, although the graphics are its flaw as they're showing their age.

 

I think in the future there is going to be something new, something that will blow our minds and be talked about by the community.


Very fast

I don't think we have any idea of the ceiling at this point.

5950X/3080Ti primary rig  |  1920X/1070Ti Unraid for dockers  |  200TB TrueNAS w/ 1:1 backup


12 minutes ago, Eigenvektor said:

I'm sure there's a lot that could still be done, the bigger issue is both cost and software compatibility. Smaller nodes cost more and more to develop. More specialized accelerators require software support. Multiple cores require multi-threaded software to take advantage and that is, comparatively speaking, worlds more complex than single threaded (and not every problem can be divided easily or at all)

 

Graphics is an embarrassingly parallel problem. Until your card has as many cores as there are pixels on your screen, you could always go bigger (provided memory bandwidth can keep up)

 

Additionally we just got a "brand new" technology in terms of real time ray tracing. New technologies naturally see much greater improvements across generations, compared to more mature technologies (i.e. rasterization). There are improvements in hardware and software (e.g. ReSTIR) that, when combined, result in rapid improvements. But that will end once things become more mature.

 

Then of course you have software/hardware improvements like DLSS that can reduce the load on the GPU. However algorithms that provide interpolated results aren't universally applicable. It mostly works for images where somewhat less precise results are acceptable. You couldn't necessarily do the same in, for example, a more scientific context.

 

In many ways CPUs are much more complex beasts, because they are more generalized. A CPU core can do a lot more than a GPU core. And there are many tasks that run on a CPU that can't be split across multiple cores, so they really only benefit from faster cores or alternatively more specialized hardware.

The cost of developing smaller nodes is indeed increasing, and this can limit the pace of innovation. As transistors become smaller and more densely packed, the manufacturing processes become more complex and expensive. This is a major concern for chip manufacturers, as they need to balance the cost of innovation with the need to keep their products affordable for consumers.

 

Specialized accelerators can significantly improve performance for specific tasks, but they require software support to be utilized effectively. This is a chicken-and-egg problem, as software developers may be hesitant to invest in optimizing their applications for specialized hardware if there is not a large enough user base, and users may be hesitant to adopt specialized hardware if there is not enough software that can take advantage of it.

 

The difference in the nature of GPUs and CPUs, as you mentioned, is also an important factor. GPUs excel at handling embarrassingly parallel problems, and as such, they can benefit from simply having more cores. On the other hand, CPUs are designed to handle a wider range of tasks, and not all of these tasks can be easily parallelized. This means that increasing the core count of a CPU may not always result in a proportional increase in performance.

 

Real-time ray tracing and other advancements in graphics technology have certainly allowed for significant improvements in GPU performance, but as you pointed out, these improvements may slow down as the technology matures. The same can be said for techniques like DLSS, which can greatly improve performance in specific scenarios but are not universally applicable.


2 minutes ago, Blazepoint5 said:

Indeed random friend.

 

Ray tracing and path tracing are an insane improvement to technology and video games, knowing this tech runs in real time with only a few milliseconds of delay, which is to be expected.

 

To be honest, GTA 5 could have turned out even better: it's a 2013 game that saw improvements over the years, with the 2015 release and the PS5 upgrade. Rockstar had to cut corners to deliver on expectations, which shows that even by that year there was the capability to achieve something beyond what we got. Due to those limitations we still got a masterpiece for its time, and to this day it's still good, although the graphics are its flaw as they're showing their age.

 

I think in the future there is going to be something new, something that will blow our minds and be talked about by the community.

I mean, I could have mentioned better examples, like Cyberpunk, Death Stranding, the list goes on. FH5 is 3 years old now, and the cars still look stunning.

 

2 minutes ago, Blazepoint5 said:

The cost of developing smaller nodes is indeed increasing, and this can limit the pace of innovation. As transistors become smaller and more densely packed, the manufacturing processes become more complex and expensive. This is a major concern for chip manufacturers, as they need to balance the cost of innovation with the need to keep their products affordable for consumers.

not just products, capitalism, they have to pay for their electricity, yacht fuel, etc.



1 hour ago, Blazepoint5 said:

Let's be honest, we are starting to hit the limit of GPU and CPU architecture improvements.

 

 

We already hit the limit where it makes any financial sense to keep shrinking. Plus, many things already do not benefit from further die shrinks (e.g. storage and networking) and have switched to stacking and MCMs, basically going from 2D to 3D, but so far CPUs and GPUs can't stack this way. We've seen Apple stack RAM on the CPU, but we haven't seen CPU cores stacked on CPU cores. Yet, anyway.

 

Given the amount of power CPUs and GPUs are drawing, there is a physical limit (15 A / 1800 W on a standard 120 V circuit) where the consumer starts needing a dedicated 20 A or 240 V circuit. So that power budget is split between the CPU and GPU, and we might start seeing situations where people buy computers with fewer CPU cores in favor of a higher-power GPU.
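
Rough arithmetic behind that wall-outlet limit; the 80% continuous-load derating and ~90% PSU efficiency are assumptions for illustration, as are the component draws:

CIRCUIT_VOLTS = 120
CIRCUIT_AMPS = 15
CONTINUOUS_DERATING = 0.8   # common guidance for sustained loads on a breaker
PSU_EFFICIENCY = 0.90       # assumed roughly Gold-level efficiency

wall_limit = CIRCUIT_VOLTS * CIRCUIT_AMPS            # 1800 W peak at the outlet
sustained_wall = wall_limit * CONTINUOUS_DERATING    # ~1440 W sustained
dc_budget = sustained_wall * PSU_EFFICIENCY          # ~1296 W available to components

gpu_draw = 450        # e.g. a 4090-class card
cpu_draw = 250        # a high-end desktop CPU under load
rest_of_system = 150  # assumed fans, drives, RAM, board (monitor not included)

headroom = dc_budget - (gpu_draw + cpu_draw + rest_of_system)
print(f"Sustained wall budget: {sustained_wall:.0f} W")
print(f"Usable DC budget:      {dc_budget:.0f} W")
print(f"Headroom left:         {headroom:.0f} W")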

 


I'm paraphrasing Jim Keller: technical progress is jumping from one S-shaped (sigmoid) curve to another.

1980-2020 or so was mostly focused on general compute as the way ahead. Flexibility mattered most since everything was getting 2x faster every 1-3 years during most of that span. 


2020 onward will likely be more focused on compute specialization and finding the right mix for a given use case.

I can imagine GPUs becoming 10x more effective in the next 10 years (a mix of upscaling, ray tracing and tensor calculations) at the types of workloads that will be common in 10 years, and maybe 2-3x more effective at today's workloads.
I can see a similar trend for CPUs, just less extreme: 1.5-2x more effective per core but ~3x more effective at the operations of the future. I can see core counts going up to 2-4x today's level. A large chunk of this might come from smaller "mini" cores though.

 

These are ballpark guesstimates.
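
A toy model of that S-curve hopping, with completely made-up midpoints and heights, just to show the shape:

import math

def sigmoid(x: float, midpoint: float, steepness: float) -> float:
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

def capability(year: float) -> float:
    # Wave 1: general-purpose compute, roughly the 1980-2020 ramp.
    # Wave 2: specialized compute (GPUs, tensor units, accelerators), ramping after 2020.
    wave1 = 100 * sigmoid(year, midpoint=2000, steepness=0.15)
    wave2 = 1000 * sigmoid(year, midpoint=2035, steepness=0.15)
    return wave1 + wave2

for year in (1980, 1990, 2000, 2010, 2020, 2030, 2040, 2050):
    print(f"{year}: relative capability ~{capability(year):7.1f}")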

 

1 hour ago, podkall said:

not just products, capitalism, they have to pay for their electricity, yacht fuel, etc.

You could pick just about any economic system for this critique. Resources are never infinite. 

Feudalism... if the nobility spend too much on compute you'll get peasant revolts because everyone is starving. 
"communism" - we already saw the masses rebel against the people at the top spending like idiots. This system is SUPER vulnerable to getting top heavy and corrupt. 
Capitalism is pretty much the most robust of those systems when it comes to keeping people at the top from exploiting those at the bottom. Which is crazy given how many critiques are levied against it for that very thing.

And yeah, as long as we live on a finite planet... tossing TONS AND TONS AND TONS of resources into converting sand into magic electricity switches has very real trade offs. No "solutions" just trade-offs. 

3900x | 32GB RAM | RTX 2080

1.5TB Optane P4800X | 2TB Micron 1100 SSD | 16TB NAS w/ 10Gbe
QN90A | Polk R200, ELAC OW4.2, PB12-NSD, SB1000, HD800
 


11 minutes ago, cmndr said:

Capitalism is pretty much the most robust of those systems when it comes to keeping people at the top from exploiting those at the bottom. Which is crazy given how many critiques are levied against it for that very thing.

It's very funny.

 

20 minutes ago, Kisai said:

We already hit the limit where it makes any financial sense to keep shrinking. Plus, many things already do not benefit from further die shrinks (e.g. storage and networking) and have switched to stacking and MCMs, basically going from 2D to 3D,

"oh wait a minute, we have plenty of empty space in the PC's case, why should we further reduce the SSD size, when we can double it's capacity with a fractional increase of it's size?"



2 hours ago, Blazepoint5 said:

Cards are still seeing performance improvements, unlike CPUs. I don't know when they will start to progress more slowly. How big can GPUs get? (The 40-series NVIDIA cards are massive, and if we continue like this, they'll become even bigger.)

If you look at silicon terms, it could be said GPUs are getting smaller. Looking at recent consumer tier 102 offerings:

 

2080 Ti 754mm2 250W

3090 628mm2 350W

3090 Ti 628mm2 450W

4090 609mm2 450W

 

What we're instead seeing is pushing more power through those GPUs, which requires bigger power delivery and cooling.

 

Above is only looking at the highest end. Those limits might continue to be pushed. I feel that in the mass market mainstream power isn't increasing much if at all.
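
Running quick arithmetic on those same die sizes and board powers shows the trend in power density:

cards = {
    "2080 Ti": (754, 250),   # (die area mm^2, rated power W), figures from the list above
    "3090":    (628, 350),
    "3090 Ti": (628, 450),
    "4090":    (609, 450),
}

for name, (area_mm2, watts) in cards.items():
    # Power density rises generation over generation even as die area shrinks slightly.
    print(f"{name:8s} {watts / area_mm2:.2f} W/mm^2")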

 

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


We can only speculate. There are many advances in new materials beyond silicon, as well as in optical computing. So: new materials that don't heat up nearly as much, along with much higher frequencies and new architectures that can be built with new materials and nodes.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


Very big, not so much faster. Very power hungry.

 

PC Builder Creates an Nvidia RTX 4090 for April Fools' Day and It's Massive  | PCMag

 

/obviously this is a joke.

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro


I mean, what we had at the peak of even just 10 years ago is nothing compared to the processing power we have today. But I don't think they will get exponentially faster; surely there has to be a limit somewhere, at least without too many trade-offs such as power draw and thermals.


17 hours ago, Kisai said:

 

We already hit the limit where it makes any financial sense to keep shrinking. Plus, many things already do not benefit from further die shrinks (e.g. storage and networking) and have switched to stacking and MCMs, basically going from 2D to 3D, but so far CPUs and GPUs can't stack this way. We've seen Apple stack RAM on the CPU, but we haven't seen CPU cores stacked on CPU cores. Yet, anyway.

 

Given the amount of power CPUs and GPUs are drawing, there is a physical limit (15 A / 1800 W on a standard 120 V circuit) where the consumer starts needing a dedicated 20 A or 240 V circuit. So that power budget is split between the CPU and GPU, and we might start seeing situations where people buy computers with fewer CPU cores in favor of a higher-power GPU.

 

 

17 hours ago, Kisai said:

we might start seeing situations where people buy computers with fewer CPU cores in favor of a higher-power GPU.

I totally agree with this.


I think we are closer to stagnating when it comes to CPUs than we are on GPUs.

 

But there will always be improvements to be made; we might see more specialised parts that do just some things but do them very fast, for example RT cores or some of what Apple has been doing with the M1.

“Remember to look up at the stars and not down at your feet. Try to make sense of what you see and wonder about what makes the universe exist. Be curious. And however difficult life may seem, there is always something you can do and succeed at. 
It matters that you don't just give up.”

-Stephen Hawking


19 hours ago, Kisai said:

 

We already hit the limit where it makes any financial sense to keep shrinking. Plus, many things already do not benefit from further die shrinks (e.g. storage and networking) and have switched to stacking and MCMs, basically going from 2D to 3D, but so far CPUs and GPUs can't stack this way. We've seen Apple stack RAM on the CPU, but we haven't seen CPU cores stacked on CPU cores. Yet, anyway.

 

Given the amount of power CPUs and GPUs are drawing, there is a physical limit (15 A / 1800 W on a standard 120 V circuit) where the consumer starts needing a dedicated 20 A or 240 V circuit. So that power budget is split between the CPU and GPU, and we might start seeing situations where people buy computers with fewer CPU cores in favor of a higher-power GPU.

 

It's likely that we'll see a shift in how computing systems are designed and optimized. For instance, manufacturers may focus on improving the efficiency of individual cores rather than simply increasing the core count. Similarly, there may be a greater emphasis on developing specialized hardware accelerators that can offload specific tasks from the CPU or GPU, thus improving overall system efficiency and performance.


18 hours ago, cmndr said:

I'm paraphrasing Jim Keller: technical progress is jumping from one S-shaped (sigmoid) curve to another.

1980-2020 or so was mostly focused on general compute as the way ahead. Flexibility mattered most since everything was getting 2x faster every 1-3 years during most of that span. 


2020 onward will likely be more focused on compute specialization and finding the right mix for a given use case.

I can imagine GPUs becoming 10x more effective in the next 10 years (a mix of upscaling, ray tracing and tensor calculations) at the types of workloads that will be common in 10 years, and maybe 2-3x more effective at today's workloads.
I can see a similar trend for CPUs, just less extreme: 1.5-2x more effective per core but ~3x more effective at the operations of the future. I can see core counts going up to 2-4x today's level. A large chunk of this might come from smaller "mini" cores though.

 

These are ballpark guesstimates.

 

You could pick just about any economic system for this critique. Resources are never infinite. 

Feudalism... if the nobility spend too much on compute you'll get peasant revolts because everyone is starving. 
"communism" - we already saw the masses rebel against the people at the top spending like idiots. This system is SUPER vulnerable to getting top heavy and corrupt. 
Capitalism is pretty much the most robust of those systems when it comes to keeping people at the top from exploiting those at the bottom. Which is crazy given how many critiques are levied against it for that very thing.

And yeah, as long as we live on a finite planet... tossing TONS AND TONS AND TONS of resources into converting sand into magic electricity switches has very real trade offs. No "solutions" just trade-offs. 

Your predictions for the next decade are interesting. GPUs becoming more effective at future workloads through advancements in upscaling, ray tracing, and tensor calculations seems plausible, as these technologies continue to evolve and find new applications. Similarly, CPUs becoming more effective per core, but also more efficient at handling future operations, makes sense given the need to optimize performance without solely relying on increasing core counts.

Your point about capitalism and the finite nature of resources is also valid. No economic system is immune to the constraints imposed by limited resources.


18 hours ago, podkall said:

"Oh wait a minute, we have plenty of empty space in the PC's case, why should we further reduce the SSD's size when we can double its capacity with a fractional increase in its size?"

There are trade-offs to consider, such as the potential increase in heat generated by more densely packed SSDs, and the diminishing returns in terms of performance gains from increased density. It's an ongoing balance between size, performance, cost, and power efficiency that manufacturers have to navigate.


18 hours ago, porina said:

If you look at silicon terms, it could be said GPUs are getting smaller. Looking at recent consumer tier 102 offerings:

 

2080 Ti 754mm2 250W

3090 628mm2 350W

3090 Ti 628mm2 450W

4090 609mm2 450W

 

What we're instead seeing is pushing more power through those GPUs, which requires bigger power delivery and cooling.

 

Above is only looking at the highest end. Those limits might continue to be pushed. I feel that in the mass market mainstream power isn't increasing much if at all.

 

When looking at the actual silicon size, it's clear that GPUs are indeed getting smaller, as seen in the examples you provided. However, as you also noted, this reduction in silicon size is accompanied by an increase in power consumption, which requires larger power delivery systems and cooling solutions.


4 hours ago, TetraSky said:

Very big, not so much faster. Very power hungry.

 

PC Builder Creates an Nvidia RTX 4090 for April Fools' Day and It's Massive  | PCMag

 

/obviously this is a joke.

Why does this 4090 have 8 fans, and why is it so massive 💀? Is it four 4090s stacked or something?


2 hours ago, Blazepoint5 said:

There are trade-offs to consider, such as the potential increase in heat generated by more densely packed SSDs, and the diminishing returns in terms of performance gains from increased density. It's an ongoing balance between size, performance, cost, and power efficiency that manufacturers have to navigate.

I mean, some of the 3D cache or 3D NAND didn't really impact heat; some of the 3D NAND SSDs run as cool as they can be.

 

Perhaps the only issue would be the 3D cache's sensitivity to high heat, which is seen in the AMD X3D CPUs that have a lower maximum allowed Tj max temperature.



2 hours ago, Blazepoint5 said:

Why does this 4090 have 8 fans, and why is it so massive 💀? Is it four 4090s stacked or something?

It's a joke picture based on how massive Nvidia GPUs are right now.



I think one of three things could mark the end of transistor shrinking and the start of us just making bigger processing units.

 

1) They start to melt themselves because of how much heat is being generated by the number of tiny transistors in a small space.

 

2) We all agree that we can stop shrinking them (this is probably the most unlikely explanation).

 

3) As some have pointed out, quantum tunneling starts to really stop us from shrinking them.

 

Now I will say, it's possible that we find some new technologies that allow us to continue shrinking transistors without having to worry as much about heat and quantum tunneling. Then what would stop us would be either the difficulty of manufacturing something that small, or the fact that they simply can't get that small. Even if we stop being able to shrink transistors, there are still many more things we can do to improve computing. Either way, I am excited to see just how good we can make computers.


17 hours ago, Milesqqw2 said:

We all agree that we can stop shrinking them (this is probably the most unlikely explanation).

I think, since they're already partially doing that (bigger dies "for no reason"), this is one of the more likely outcomes.

 

You *cannot* shrink them indefinitely; it's physically impossible, at least with the tech known to us.

 

Also, for anyone claiming Moore's law is *not* dead: can you point me to the info that shows transistor count actually doubling every 2 years for the last 10 years or so? To my knowledge that isn't the case, but I haven't seen hard proof for either side so far.

 

Just people saying "nah, it's not dead, man" (which leads me to believe it's indeed probably dead, dead, dead 🙂).

 

 

The direction tells you... the direction

-Scott Manley, 2021

 

Software used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 

