28nm to 20nm. What does it mean for us?

MooseParade

I've read articles that AMD and Nvidia won't be moving to 20nm until sometime next year. My question might be kind of noobish, but what will the 20nm process mean for consumers and enthusiasts?


TSMC's 20nm process technology can provide 30 percent higher speed, 1.9 times the density, or 25 percent less power than its 28nm technology

 

The quote is straight from TSMC's website. Is that what we're actually going to get? Will there be new features?
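To put those quoted numbers in concrete terms, here's a rough back-of-the-envelope sketch. The 28nm baseline figures are made up for illustration, and the three improvements are roughly either/or options, not all at once:

```python
# Rough illustration of TSMC's quoted 20nm-vs-28nm tradeoffs.
# Baseline figures below are hypothetical; you get roughly one of
# these improvements at a time, not all three combined.

baseline = {"clock_ghz": 1.0, "transistors_m": 3500, "power_w": 250}

speed_option = baseline["clock_ghz"] * 1.30       # 30% higher speed
density_option = baseline["transistors_m"] * 1.9  # 1.9x density, same die area
power_option = baseline["power_w"] * 0.75         # 25% less power, same design

print(f"clock option:       {speed_option:.2f} GHz")
print(f"density option:     {density_option:.0f} M transistors")
print(f"power option:       {power_option:.1f} W")
```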

"Energy drinks don't make my mouth taste like yak buttholes like coffee does, so I'll stick with them." - Yoinkerman


faster

the idea of less power doesnt  really matter because nvidia will see that as more head room for higher clocks

If your grave doesn't say "rest in peace" on it You are automatically drafted into the skeleton war.


Faster performance, less heat output, less power consumption, smaller package. That's what 20nm is all about.

Main Rig: CPU: AMD Ryzen 7 5800X | RAM: 32GB (2x16GB) KLEVV CRAS XR RGB DDR4-3600 | Motherboard: Gigabyte B550I AORUS PRO AX | Storage: 512GB SKHynix PC401, 1TB Samsung 970 EVO Plus, 2x Micron 1100 256GB SATA SSDs | GPU: EVGA RTX 3080 FTW3 Ultra 10GB | Cooling: ThermalTake Floe 280mm w/ be quiet! Pure Wings 3 | Case: Sliger SM580 (Black) | PSU: Lian Li SP 850W

 

Server: CPU: AMD Ryzen 3 3100 | RAM: 32GB (2x16GB) Crucial DDR4 Pro | Motherboard: ASUS PRIME B550-PLUS AC-HES | Storage: 128GB Samsung PM961, 4TB Seagate IronWolf | GPU: AMD FirePro WX 3100 | Cooling: EK-AIO Elite 360 D-RGB | Case: Corsair 5000D Airflow (White) | PSU: Seasonic Focus GM-850

 

Miscellaneous: Dell Optiplex 7060 Micro (i5-8500T/16GB/512GB), Lenovo ThinkCentre M715q Tiny (R5 2400GE/16GB/256GB), Dell Optiplex 7040 SFF (i5-6400/8GB/128GB)


Less power, more heat :D

It's actually less heat (assuming the same number of transistors and the same workload).

 

OP, the smaller transistors get, the less power they use, the less heat they produce, and the more of them you can fit into a given area.

So the move from 28nm to 20nm would let manufacturers make more powerful components, or make them smaller, cheaper, and more energy efficient.
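As a rough sketch of that scaling argument: dynamic (switching) power goes roughly as C·V²·f, and a shrink lowers the gate capacitance C and usually allows a lower supply voltage V. The percentage reductions below are illustrative, not measured values:

```python
# Sketch of why smaller transistors use less power: dynamic power
# scales roughly as P = C * V^2 * f. A die shrink lowers gate
# capacitance C and usually permits a lower supply voltage V.

def dynamic_power(c_rel: float, v_rel: float, f_rel: float) -> float:
    """Relative dynamic power vs. a baseline of 1.0."""
    return c_rel * v_rel**2 * f_rel

p_28nm = dynamic_power(1.0, 1.0, 1.0)
# Assume (hypothetically) the shrink cuts C by ~20% and V by ~10%,
# at the same clock frequency:
p_20nm = dynamic_power(0.8, 0.9, 1.0)
print(f"relative power after shrink: {p_20nm / p_28nm:.2f}")  # 0.65
```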


Thanks for all the replies. I appreciate the insight. One more quick question, are there any downsides? Prices rising from the new process costing more to manufacture?

"Energy drinks don't make my mouth taste like yak buttholes like coffee does, so I'll stick with them." - Yoinkerman


Moooorrreeeee heat


Heat output depends on density. A 20nm chip with 1.9x density (compared to 28nm, same number of transistors, same switching frequency) will run hotter because the area of contact between the die and the cooler's contact plate will be so much smaller. Idle power will be better but full load will be brutal for some coolers.
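That cooling concern can be put in numbers: the same wattage pushed through a smaller die means higher power density at the cooler's contact plate. The die size and board power below are hypothetical, chosen only for scale:

```python
# The cooling argument in numbers: a shrink concentrates the same heat
# into less area, raising power density (W/mm^2), which is what makes
# life hard for the cooler. All figures are hypothetical.

def power_density(watts: float, area_mm2: float) -> float:
    return watts / area_mm2

w = 250.0                    # board power, unchanged by the shrink
area_28nm = 550.0            # 28nm die area in mm^2 (hypothetical)
area_20nm = area_28nm / 1.9  # same transistor count at 1.9x density

print(f"28nm: {power_density(w, area_28nm):.2f} W/mm^2")
print(f"20nm: {power_density(w, area_20nm):.2f} W/mm^2")
```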

 

I'd take matured 28nm over fresh and hip 20nm any day of the week. 28nm has been perfected to a state where you can use a full GK104 (GTX 880M ~ GTX 770 ~ GTX 680) in a laptop without thermal issues. Density is not great with 28nm, Titan Black / GTX 780Ti is basically the limit. On 20nm they will build chips the size of GK104 (GTX 770) with the performance of the GK110 (GTX 780Ti). But they won't be the best 20nm has to offer, not until the process matures and they start thinking about 1Xnm (probably 14nm).

 

At the beginning of its life, 20nm will be leaky, hot, and full of issues. Early adopters will watch their expensive cards be outclassed by revision "B" chips that run faster and cooler, as happened with the GTX 680 (see GTX 770) and the GTX 780 (see GTX 780Ti).

There are more things in heaven and earth than are dreamt of in your philosophy.


20nm means high priced cards.

Le Bastardo+ 

i7 4770k + OCUK Fathom HW labs Black Ice 240 rad + Mayhem's Gigachew orange + 16GB Avexir Core Orange 2133 + Gigachew GA-Z87X-OC + 2x Gigachew WF 780Ti SLi + SoundBlaster Z + 1TB Crucial M550 + 2TB Seagate Barracude 7200rpm + LG BDR/DVDR + Superflower Leadex 1KW Platinum + NZXT Switch 810 Gun Metal + Dell U2713H + Logitech G602 + Ducky DK-9008 Shine 3 MX Brown

Red Alert

FX 8320 AMD = Noctua NHU12P = 8GB Avexir Blitz 2000 = ASUS M5A99X EVO R2.0 = Sapphire Radeon R9 290 TRI-X = 1TB Hitachi Deskstar & 500GB Hitachi Deskstar = Samsung DVDR/CDR = SuperFlower Golden Green HX 550W 80 Plus Gold = Xigmatek Utguard = AOC 22" LED 1920x1080 = Logitech G110 = SteelSeries Sensei RAW

Faster performance, less heat output, less power consumption, smaller package. That's what 20nm is all about.

 

Dat smaller but better E-peen :P


Heat output depends on density. A 20nm chip with 1.9x density (compared to 28nm, same number of transistors, same switching frequency) will run hotter because the area of contact between the die and the cooler's contact plate will be so much smaller. Idle power will be better but full load will be brutal for some coolers.

 

I'd take matured 28nm over fresh and hip 20nm any day of the week. 28nm has been perfected to a state where you can use a full GK104 (GTX 880M ~ GTX 770 ~ GTX 680) in a laptop without thermal issues. Density is not great with 28nm, Titan Black / GTX 780Ti is basically the limit. On 20nm they will build chips the size of GK104 (GTX 770) with the performance of the GK110 (GTX 780Ti). But they won't be the best 20nm has to offer, not until the process matures and they start thinking about 1Xnm (probably 14nm).

 

At the beginning of its life, 20nm will be leaky, hot, and full of issues. Early adopters will watch their expensive cards be outclassed by revision "B" chips that run faster and cooler, as happened with the GTX 680 (see GTX 770) and the GTX 780 (see GTX 780Ti).

I see that you're fairly new to technology. Ivy Bridge was an exception rather than the rule. In most other die shrinks the heat output has gone down.

Intel just blamed the higher heat output on density but the real reason was that they cut corners when it came to the thermal interface material between the die and the heat spreader.

 

If density and heat were related, then we would have 200+ Celsius GPUs at load today, since the far less dense GPUs from 10-20 years ago also ran at 50-100 degrees. You just have to look back a few generations to see that the theory of "heat output = density" is wrong.

 

It's very unwise to say that you would "take matured 28nm over fresh and hip 20nm" as well, since we do not know how good/bad it will be.

If you look back at previous generations, the first products with the new transistor size have often been really good compared to the previous generation (with Ivy being an exception).


Here's an article if anyone wants to read: http://www.tweaktown.com/news/37219/tsmc-unlikely-to-make-20nm-gpu-chips-for-amd-and-nvidia/index.html


I see that you're fairly new to technology. Ivy Bridge was an exception rather than the rule. In most other die shrinks the heat output has gone down.

Intel just blamed the higher heat output on density but the real reason was that they cut corners when it came to the thermal interface material between the die and the heat spreader.

 

If density and heat were related, then we would have 200+ Celsius GPUs at load today, since the far less dense GPUs from 10-20 years ago also ran at 50-100 degrees. You just have to look back a few generations to see that the theory of "heat output = density" is wrong.

 

It's very unwise to say that you would "take matured 28nm over fresh and hip 20nm" as well, since we do not know how good/bad it will be.

If you look back at previous generations, the first products with the new transistor size have often been really good compared to the previous generation (with Ivy being an exception).

 

Well, no...

 

Intel's process since 2011 is based on tri-gate "3-D" tech, which is different from planar tech. Intel's 22nm process isn't even really 22nm; it's more a very refined 32nm. All they did was shuffle some very important stuff around: instead of building their gates on a single plane, they moved key components of the design a bit more "off-plane". Following industry standards (if you can define any when the tech is so different), it looks like 22nm, but technically it's not. They increased density without sacrificing projected "contact" areas, which allowed them to keep die areas small (or within reason). They escaped the pitfalls of planar technology, but Intel's case can't be used as a guideline for other process manufacturers. This is why I didn't even mention Intel.

 

Also you cannot compare CPUs with GPUs. But that's another story.

 

When we're talking about planar technology, density is an important factor when it comes to heat output. Now I see that you might be the one new to technology. Old GPUs were cooled by mere foils of aluminium with tiny little fans and very bad goo applied to them, and they didn't really have the problems with heat output that GPUs today have. Now we have vapor chambers with a gazillion fans and AIO water coolers. Performance-wise they are of course laughable, but we're not comparing performance, we're comparing the technology inside those chips.

 

Hopefully TSMC's 20nm process won't be planar. But TSMC isn't Intel. So I'm not getting my hopes up.

 

As for the maturity of the process, TSMC always had problems with their first batches. They were always late. Few of the dies were functional, so we had shortages of certain GPUs. So we do know that it will not be the best 20nm that can be produced at first. Hell, they're already behind schedule, we should've had 20nm GPUs on sale today. You don't fall behind because things are good.

 

I'm also concerned about the longevity of 20nm planar tech based chips. Again, another story...

There are more things in heaven and earth than are dreamt of in your philosophy.


-snip-

OK, riddle me this: what would happen if AMD took, let's say, the 290X GPU and shrunk it to 20nm? Do you honestly believe that it would suddenly produce more heat? Of course it wouldn't; it would produce less heat. Each transistor needs less energy, which means less wasted energy turned into heat.

 

Look back at the 200 series of cards. They ran hot and they had big coolers just like we do today. Yet they were much less dense and much less powerful. How do you explain that?

Want another example? Look at the 980X and compare it to the 930. With the move from 45nm to 32nm Intel could add 2 extra cores and they both had the same TDP. If they had just shrunk the die without increasing the core count the TDP would have been lower. Whaaat!? Lower TDP by just moving to smaller transistors? Yes, that's how it works. It's the same story every time we move to smaller transistors.
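That 980X-vs-930 comparison in rough per-core terms (both parts were rated at 130 W TDP; splitting the TDP evenly per core is a simplification that ignores uncore power):

```python
# i7-930 (45nm, 4 cores) vs. i7-980X (32nm, 6 cores), both 130 W TDP.
# Dividing TDP evenly per core is a crude simplification, used here
# only to show the direction of the scaling.

tdp_w = 130.0
cores_45nm = 4   # i7-930
cores_32nm = 6   # i7-980X

per_core_45nm = tdp_w / cores_45nm  # 32.5 W per core
per_core_32nm = tdp_w / cores_32nm  # ~21.7 W per core

print(f"45nm: {per_core_45nm:.1f} W per core")
print(f"32nm: {per_core_32nm:.1f} W per core")
# Shrinking without adding cores would have lowered the total TDP.
```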

 

Just because a new transistor node isn't as good as it can be doesn't mean it is worse than the previous one. Sure, we might be able to get some additional performance out of 20nm further down the line, but that doesn't mean the first batch of 20nm components will be worse than the 28nm components. What you are saying does not make any sense whatsoever.


faster

the idea of less power doesnt  really matter because nvidia will see that as more head room for higher clocks

Nvidiot: say something doesnt matter, then give a reason for why it should matter.

no.

CPU: Ryzen 2600 GPU: RX 6800 RAM: ddr4 3000Mhz 4x8GB  MOBO: MSI B450-A PRO Display: 4k120hz with freesync premium.


Nvidiot: say something doesnt matter, then give a reason for why it should matter.

no.

AMD fanboy: Lacks reading comprehension.

 

He is not saying the lower power consumption of 20nm transistors doesn't matter. He is saying we won't see lower power consumption because Nvidia will probably use that to increase clocks (giving even higher performance at the same power consumption as the 28nm part).

It's kind of strangely worded, but I am 99% sure that's what qwertywarrior meant. He didn't mean lower power consumption was useless, just that we shouldn't expect to see it in the high end parts.


AMD fanboy: Lacks reading comprehension.

 

He is not saying the lower power consumption of 20nm transistors doesn't matter. He is saying we won't see lower power consumption because Nvidia will probably use that to increase clocks (giving even higher performance at the same power consumption as the 28nm part).

It's kind of strangely worded, but I am 99% sure that's what qwertywarrior meant. He didn't mean lower power consumption was useless, just that we shouldn't expect to see it in the high end parts.

yes thats what i meant 

If your grave doesn't say "rest in peace" on it You are automatically drafted into the skeleton war.


OK, riddle me this: what would happen if AMD took, let's say, the 290X GPU and shrunk it to 20nm? Do you honestly believe that it would suddenly produce more heat? Of course it wouldn't; it would produce less heat. Each transistor needs less energy, which means less wasted energy turned into heat.

 

Look back at the 200 series of cards. They ran hot and they had big coolers just like we do today. Yet they were much less dense and much less powerful. How do you explain that?

Want another example? Look at the 980X and compare it to the 930. With the move from 45nm to 32nm Intel could add 2 extra cores and they both had the same TDP. If they had just shrunk the die without increasing the core count the TDP would have been lower. Whaaat!? Lower TDP by just moving to smaller transistors? Yes, that's how it works. It's the same story every time we move to smaller transistors.

 

Just because a new transistor node isn't as good as it can be doesn't mean it is worse than the previous one. Sure, we might be able to get some additional performance out of 20nm further down the line, but that doesn't mean the first batch of 20nm components will be worse than the 28nm components. What you are saying does not make any sense whatsoever.

 

It's about the options that a new process node gives you. You can have the density or you can have the lower power consumption, but not both. Not in 1:1 ratio. With 20nm we can get 1.9x density or 25% lower power consumption. So we can cram 90% more transistors in a die size similar to the 290X with an unspecified power consumption or we can build the same 290X on 20nm with 25% lower power consumption.

 

Some GPUs run hot because the manufacturer goes for density. Some GPUs run cooler because they went for efficiency. Also, the way a GPU is designed can influence the heat output a lot. You can have designs with the same number of transistors in the same die size but with different power consumption numbers. Another factor is that heat influences power consumption, there's a certain point on the temperature curve where the GPU is at peak efficiency.

 

I wasn't comparing 28nm and 20nm directly. I was stressing the point that early adopters usually get the short end of the stick. If I buy the new GTX 880 card on 20nm on launch day, a few months down the line, they'll release the GTX 880Ti that will be better performing in the same power envelope, even if it's the same chip.

 

I was comparing revision 'Ax' 28nm chips with revision 'Bx' 28nm chips. And I wanted to explain that I won't buy the 880 on launch day (like I did with the 680) because it's not going to be the best thing it can be. The GTX 770 is basically the same card as the GTX 680, but it's better: it has a better cooler, a more efficient GPU, better memory. All because of a new revision of the same GK104 GPU that's built on a more mature 28nm process. That chip has gotten so much more efficient that they can put it in a laptop as a fully unlocked GTX 880M.

There are more things in heaven and earth than are dreamt of in your philosophy.


It's about the options that a new process node gives you. You can have the density or you can have the lower power consumption, but not both. Not in 1:1 ratio. With 20nm we can get 1.9x density or 25% lower power consumption. So we can cram 90% more transistors in a die size similar to the 290X with an unspecified power consumption or we can build the same 290X on 20nm with 25% lower power consumption.

 

Some GPUs run hot because the manufacturer goes for density. Some GPUs run cooler because they went for efficiency. Also, the way a GPU is designed can influence the heat output a lot. You can have designs with the same number of transistors in the same die size but with different power consumption numbers. Another factor is that heat influences power consumption, there's a certain point on the temperature curve where the GPU is at peak efficiency.

I have no idea what you are replying to here. Now you're talking about power consumption and I was talking about heat output. Are you deliberately dodging the question?

Yes, you have to pick between more transistors and power consumption. You don't have to pick between density and power consumption, though. For example, a die shrink of the 290X's GPU would increase density but reduce power consumption. I think you are getting "density" and "transistor count" mixed up. One is just how densely packed the transistors are (1B transistors in 2 cm² is less dense than 1B transistors in 1 cm²) and the other is how many transistors are on the die.

Transistor count increases heat output and power consumption. Density does not.
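The density-vs-count distinction in numbers, using the figures from the post as a worked example of a pure shrink:

```python
# The distinction at issue: density is transistors per unit area,
# while heat tracks transistor count (and the power each one draws),
# not density itself. A pure die shrink raises density; the count
# of switching transistors stays fixed.

transistors = 1_000_000_000  # 1B transistors, same chip before and after

area_before_cm2 = 2.0  # die area at the older node
area_after_cm2 = 1.0   # die area after the shrink

density_before = transistors / area_before_cm2  # 0.5B per cm^2
density_after = transistors / area_after_cm2    # 1.0B per cm^2

print(f"density before: {density_before:.2e} per cm^2")
print(f"density after:  {density_after:.2e} per cm^2")
# Density doubled; the number of transistors drawing power did not change.
```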


I wasn't comparing 28nm and 20nm directly. I was stressing the point that early adopters usually get the short end of the stick. If I buy the new GTX 880 card on 20nm on launch day, a few months down the line, they'll release the GTX 880Ti that will be better performing in the same power envelope, even if it's the same chip.

 

I was comparing revision 'Ax' 28nm chips with revision 'Bx' 28 nm chips. And I wanted to explain that I won't buy the 880 on launch day (like I did with the 680) because it's not going to be the best thing it can be. The GTX 770 is basically the same card as the GTX 680, but it's better. It has a better cooler, a more efficient GPU, better memory. All because a new revision of the same GK104 GPU that's build on a more mature 28nm process. That chip has gotten so much more efficient that they can put it in a laptop in the form of a GTX 880M fully unlocked.

Yes you were actually comparing 28nm to 20nm directly. Here is a direct quote:

I'd take matured 28nm over fresh and hip 20nm any day of the week.

 

Sure they will release a better GPU based on 20nm later down the line, but by that logic you shouldn't buy the better 20nm part because by that time we aren't that far away from 14nm. You buy what you need, when you need it, otherwise you end up waiting forever. Just because the 20nm process might not be fully matured when the first batch of 20nm components are released does not mean they are worse than the late 28nm part.


No, I'm not dodging anything. I'm replying to the 290X statement. You can't expect that they'll give you exactly the same GPUs, just on 20nm. They need to justify their existence by making them at least 25% better performing for all mid to upper performance brackets. It will be better from the efficiency stand point, but will not necessarily run cooler. It's about real expectations.

 

I think they will rebrand the 290X as the 280X, hopefully in 20nm form, but again, stories for another time.

 

The quote is about the fact that I prefer a fully baked process over one rushed to market. I will eventually buy the 20nm GTX 880... but only the "at least 6 months later" revision.

 

The question was "28nm to 20nm. What does it mean for us?". My answer is "Faster cards of course (25%+), hotter running cards (because of the limits of planar transistor tech), possibly cards that will die faster from overclocking."

 

There is a reason Intel went to 3-D tri-gate tech instead of continuing with planar tech at 22nm. Maybe planar would've been OK for 22nm, but for the next step, surely not. They must've feared electromigration and heat output.

There are more things in heaven and earth than are dreamt of in your philosophy.


New benchmark wars

CPU AMD FX 8350 @5GHz. Motherboard Asus Crosshair V Formula Z. RAM 8GB G.Skill Sniper. GPU Reference Sapphire Radeon R9 290X. Case Fractal Design Define XL R2. Storage Seagate Barracuda 1TB HDD and 120GB Kingston HyperX 3K. PSU XFX 850BEFX Pro 850W 80+ Gold. Cooler XSPC RayStorm

