28nm to 20nm. What does it mean for us?

MooseParade

No, I'm not dodging anything. I'm replying to the 290X statement. You can't expect them to give you exactly the same GPUs, just on 20nm. They need to justify the new cards' existence by making them at least 25% faster across all mid-to-upper performance brackets. It will be better from an efficiency standpoint, but will not necessarily run cooler. It's about realistic expectations.

 

I think they will rebrand the 290X as the 280X, hopefully in 20nm form, but that's a story for another time.

 

The quote is about the fact that I prefer a fully baked process over one rushed to market. I will eventually buy the 20nm GTX 880... but only after at least six months of revisions.

 

The question was "28nm to 20nm. What does it mean for us?". My answer: faster cards, of course (25%+); hotter-running cards (because of the limits of planar transistor tech); and possibly cards that will die faster from overclocking.

 

There is a reason Intel moved to 3-D tri-gate tech instead of continuing with planar transistors at 22nm. Maybe planar would have been OK for 22nm, but for the next step, surely not. They must have feared electromigration and heat output.

The 290X was just an example. You said that higher density equals more heat, which is false. More transistors equal more heat. The same number of transistors, only smaller, equals less heat.

Density and heat are not related. I asked you a very simple question: if AMD simply shrunk the 290X to 20nm, would it run hotter? If your earlier post were correct, then yes, it would. That, however, is false; it would run cooler. That's why, instead of continuing to talk about heat output, you started bringing power consumption into the discussion.

If they increased the number of transistors while using smaller ones, then it might run hotter (depending on how many more they add and what clocks they run at).
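The claim above (same transistor count on a smaller node means less heat; more transistors may mean more) follows directly from the standard dynamic-power model. A minimal sketch, with every number invented purely for illustration, none of them real 290X figures:

```python
# Dynamic switching power: P = N * a * C * V^2 * f
# N = transistor count, a = activity factor, C = switched capacitance
# per transistor, V = supply voltage, f = clock frequency.

def dynamic_power(n, activity, cap_f, volts, freq_hz):
    """Total dynamic power in watts."""
    return n * activity * cap_f * volts**2 * freq_hz

# Hypothetical 28nm baseline: 6.2e9 transistors, 1 fF each, 1.0 V, 1 GHz.
p_28 = dynamic_power(6.2e9, 0.05, 1.0e-15, 1.00, 1.0e9)

# Same transistor count after a shrink: capacitance and voltage both
# drop somewhat, so total power (and therefore heat) drops too.
p_20_same_count = dynamic_power(6.2e9, 0.05, 0.8e-15, 0.95, 1.0e9)

# Pack in ~40% more transistors at the same clock, and the 20nm chip
# can end up putting out as much heat as the 28nm one, or more.
p_20_more = dynamic_power(8.7e9, 0.05, 0.8e-15, 0.95, 1.0e9)

print(f"28nm, same count: {p_28:.0f} W")
print(f"20nm, same count: {p_20_same_count:.0f} W")
print(f"20nm, 40% more:   {p_20_more:.0f} W")
```

The relative ordering, not the absolute wattages, is the point: the shrink alone lowers heat, and only adding transistors (or clocks) brings it back up.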



- i7-2600k @ 4.7GHz - MSI 1070 8GB Gaming X - ASUS Maximus V Formula AC3 Edition - 16GB G.SKILL Ripjaws @ 1600Mhz - Corsair RM1000 - 1TB 7200RPM Seagate HDD + 2TB 7200 HDD + 2x240GB M500 RAID 0 - Corsair 750D - Samsung PX2370 & ASUS ROG SWIFT -


faster

the idea of less power doesn't really matter, because Nvidia will see it as more headroom for higher clocks

 

It kind of does matter, then, because we get higher clocks and lower TDPs. That's kind of a big deal lol

Stuff:  i7 7700k | ASRock Z170M OC Formula | G.Skill TridentZ 3600 c16 | EKWB 1080 @ 2100 MHz  |  Acer X34 Predator | R4 | EVGA 1000 P2 | 1080mm Radiator Custom Loop | HD800 + Audio-GD NFB-11 | 850 Evo 1TB | 840 Pro 256GB | 3TB WD Blue | 2TB Barracuda

Hwbot: http://hwbot.org/user/lays/ 

FireStrike 980 ti @ 1800 Mhz http://hwbot.org/submission/3183338 http://www.3dmark.com/3dm/11574089


Thanks for all the replies. I appreciate the insight. One more quick question, are there any downsides? Prices rising from the new process costing more to manufacture?

You might want to edit that into your main post, because many won't see your updated question.

CPU: Ryzen 2600 GPU: RX 6800 RAM: ddr4 3000Mhz 4x8GB  MOBO: MSI B450-A PRO Display: 4k120hz with freesync premium.


AMD fanboy: Lacks reading comprehension.

 

He is not saying the lower power consumption of 20nm transistors doesn't matter. He is saying we won't see lower power consumption because Nvidia will probably use that to increase clocks (giving even higher performance at the same power consumption as the 28nm part).

It's kind of strangely worded, but I am 99% sure that's what qwertywarrior meant. He didn't mean lower power consumption was useless, just that we shouldn't expect to see it in the high end parts.

faster

the idea of less power doesn't really matter

 

Nvidiot: says something doesn't matter, then gives a reason why it should matter.

no.

 

 

 

Then I shall retype it:

Nvidiot (not to be taken seriously): says something doesn't really matter, then gives a reason why it should matter.

An explanation for you, so I hopefully don't have to hold your understanding in even lower esteem:

"because nvidia will see that as more head room for higher clocks" is the reason I'm referring to.

 

If you disagree, then you're implying (or ignoring the post) that "because nvidia will see that as more head room for higher clocks" doesn't matter.

 

We'll see who lacks reading or tech comprehension. I did not read the rest of your post. I can defend everything I say, and I can attack and refute the arguments I disagree with, but I don't guarantee any interest in doing so.



AMD

It's kind of strangely worded, but I am 99% sure that's what qwertywarrior meant. He didn't mean lower power consumption was useless, just that we shouldn't expect to see it in the high end parts.

Well, my fucking post disappeared. I'll just conclude: his writing does not reflect his thoughts, and that is the source of our disagreement. If we both agree on this, then we don't need to debate, but I assure you I can refute every argument I disagree with, and refute the counterarguments against mine, and so on.

 

I caricatured parts of what he wrote; I can get back to that if the post comes back, and modify it to address your objection. And "Nvidiot" is not to be taken seriously; it's just a prejudice that stupid people lean, statistically, toward being Nvidiots. Don't drag me into an annoying loop of "why, why, why", because I don't have time for obvious things you might doubt my awareness of (statistics based on my lived observation).

 

If you want to know what my observations are based on, then you're stupid, or trolling, or we have another misunderstanding.



It's actually less heat (assuming the same number of transistors and the same workload).

 

OP: the smaller transistors get, the less power they use, the less heat they produce, and the more of them you can fit into a given area.

So the move from 28nm to 20nm would let manufacturers make more powerful components, or make them smaller, cheaper, and more energy efficient.

Less heat overall, yes, but higher temps. To illuminate: smaller process nodes inhibit heat dissipation, so even as total heat drops, the temperature rises, because the concentrated heat can't escape.

One thing is sure: heat per unit of area goes up as transistor size goes down.

 

As I read it, anyway.
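The total-heat versus temperature distinction being argued here can be put into numbers: on a straight shrink, total power drops, but ideal die area scales with (20/28)² ≈ 0.51, so the heat the cooler must extract per mm² actually rises. A rough sketch, with the power figure purely hypothetical:

```python
# Straight-shrink thought experiment: less total heat, but a smaller
# die, so higher heat *density*.  Power figures are hypothetical.

die_area_28 = 438.0            # mm^2, roughly Hawaii/290X class
power_28 = 250.0               # W, hypothetical

area_scale = (20 / 28) ** 2    # ideal area scaling on a node shrink
die_area_20 = die_area_28 * area_scale
power_20 = power_28 * 0.75     # assume the claimed ~25% power saving

density_28 = power_28 / die_area_28
density_20 = power_20 / die_area_20

print(f"28nm: {power_28:.0f} W over {die_area_28:.0f} mm^2 = {density_28:.2f} W/mm^2")
print(f"20nm: {power_20:.0f} W over {die_area_20:.0f} mm^2 = {density_20:.2f} W/mm^2")
```

Under these assumptions the shrunk part dissipates less heat in total but concentrates noticeably more of it per mm², which is exactly why lower heat and higher temperatures are not contradictory.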



LAwLz, in theory, you are correct.

 

But... when we had the switch to 28nm, efficiency improved by about 35%. That improvement came from reducing leakage by a staggering 50% compared to the previous process node, while active power was reduced by just 15%. And with that 35% in mind, think about the GTX 480.

 

Now, with 20nm, 25% efficiency improvement is claimed. Still a large number, but not as high as before. When you're using basically the same process technology and performing just a node shrink, the active power can't be reduced too much, so I'm guessing that leakage did not improve enough this time around. A leaky chip is a hot chip (see AMD's CPUs).
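Taking the figures in the post at face value (active power down 15%, leakage down 50%, overall efficiency up 35%), one can back out the leakage share of total power they imply, since total power is just active plus leakage. This is purely a consistency check on the quoted numbers, nothing more:

```python
# total = active + leakage.  After the shrink: 0.85*active + 0.50*leakage.
# Find the leakage share x for which the overall saving is 35%:
#   0.85*(1 - x) + 0.50*x = 0.65   =>   x = (0.35 - 0.15) / (0.50 - 0.15)

active_cut, leak_cut, overall_cut = 0.15, 0.50, 0.35
leak_share = (overall_cut - active_cut) / (leak_cut - active_cut)
print(f"implied leakage share of total power: {leak_share:.0%}")  # ~57%
```

If those three percentages are right, leakage must have been well over half of total power on the old node, which shows just how much the 28nm gains leaned on the leakage reduction.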

 

Simply shrinking the 290X doesn't come with any guarantee. The transistor design used for 28nm, which is already very leaky in AMD's case (the 95C threshold tells me this), might be extremely leaky on 20nm. And that 25% average efficiency gain might go up in smoke. Combine that with a smaller contact area with the cooler... and you might get the same or worse temps. So, in theory, I can make a hotter 290X on 20nm. But that's just theory, just as correct as your own theory.

 

For the high end, they will choose density (more SPs at roughly the same die size) over efficiency (a smaller die); they have to in order to create a "Titan+1" GPU. And with all the leakage (wasted energy, really), I expect hot-running GPUs, at least in the first revisions.

 

There are many other factors to consider. Take the types of transistors used: on the same process node, you can design a transistor that is less leaky than another (think GTX 480 vs. GTX 580). Or frequency: they could reduce clocks by designing a more efficient SP, and that would keep power and heat well in check.

 

I'm basically making a prediction. You're talking theory, which is nice and all (also safe and easy), but I'm thinking about the actual products that we might get.

There are more things in heaven and earth than are dreamt of in your philosophy.


LAwLz, in theory, you are correct.

I am not just correct in theory, I am correct in practice as well. Just look at the previous die shrinks. The TDP always goes down. I don't understand where you got this crazy idea from that something that is more efficient will produce more heat.

 

 

 

But... when we had the switch to 28nm, efficiency improved by about 35%. That improvement came from reducing leakage by a staggering 50% compared to the previous process node, while active power was reduced by just 15%. And with that 35% in mind, think about the GTX 480.

 

Now, with 20nm, 25% efficiency improvement is claimed. Still a large number, but not as high as before. When you're using basically the same process technology and performing just a node shrink, the active power can't be reduced too much, so I'm guessing that leakage did not improve enough this time around. A leaky chip is a hot chip (see AMD's CPUs).

 

Simply shrinking the 290X doesn't come with any guarantee. The transistor design used for 28nm, which is already very leaky in AMD's case (the 95C threshold tells me this), might be extremely leaky on 20nm. And that 25% average efficiency gain might go up in smoke. Combine that with a smaller contact area with the cooler... and you might get the same or worse temps. So, in theory, I can make a hotter 290X on 20nm. But that's just theory, just as correct as your own theory.

Leakage is a different issue not related to this. We might have gotten lower leakage but the transistors themselves would still have used less power and produced less heat than before, even with exactly the same leakage as the previous generation.

We might have gotten a bigger boost because of it, but that does not mean we wouldn't have gotten any benefit at all if it weren't for the reduced leakage. You keep trying to drift further and further away from the core subject.

I am going to ask you again, since you avoided the question last time: if AMD were to shrink the 290X to 20nm transistors, do you think it would have a higher, the same, or a lower TDP than the 28nm version? It's a simple question with three answers, and only one is correct (if history is any indicator).

 

Do you even know what leakage is? AMD's chips are not necessarily hot because they leak a lot. You can have a chip with very little leakage that's still hot as hell. It depends on how efficient the architecture is. Leakage plays some role, but your post makes it seem like all heat comes from leakage, which is far from true. If you think you can tell how much leakage the transistors have by looking at the Tcase, then you're very clueless. Sorry, but it's true.

By the way, leakage is a property of the transistor, not the architecture. AMD and Nvidia both use the same transistors (from TSMC), so they should have more or less the same amount of leakage. Leakage is not what causes some architectures to be hotter than others; the efficiency of the architecture is what causes them to differ in heat output.

 

OK, sure: 20nm could be very leaky and therefore very inefficient. But how big is the risk of that happening, and of it ending up in consumer products? How often has that happened in the last 10 years? You are grasping at straws here.

Smaller contact area shouldn't really matter that much because that is exactly why we have heat spreaders. To distribute the heat generated from a small area to a big area which we can cool.

Please stop misusing the word "theory" by the way. What I am saying is based on scientific observations. What you are saying is just grabbed out of thin air. Our "theories" are not equal.

 

 

For the high end, they will choose density (more SPs at roughly the same die size) over efficiency (a smaller die); they have to in order to create a "Titan+1" GPU. And with all the leakage (wasted energy, really), I expect hot-running GPUs, at least in the first revisions.

What leakages are you talking about? Even if you weren't just making stuff up about how 20nm will have leakage issues you are still not making any sense. We never even mentioned high end vs low end. This discussion is about 20nm vs 28nm, not current products (like the 780 Ti) vs future products (like the 880 or whichever GPU will use 20nm transistors). In order to compare 20nm to 28nm you have to keep everything else equal. You can't just say "well 20nm runs hotter because the 880 will have more cores, run at higher clocks etc etc". When you make that statement then you're no longer talking about 20nm vs 28nm, which is what OP asked for. What is true for 20nm vs 28nm will be true across the entire product range, not just the low end or the high end. How Nvidia and AMD choose to take advantage of 20nm might vary in the low and high end, but the benefits are still the same. Less heat generated and more efficient (per transistor).

 

 

There are many other factors to consider. Take the types of transistors used: on the same process node, you can design a transistor that is less leaky than another (think GTX 480 vs. GTX 580). Or frequency: they could reduce clocks by designing a more efficient SP, and that would keep power and heat well in check.

Of course, but that doesn't mean 20nm won't be better than 28nm. The 580 was much better than the 480 when it came to heat output and yes that was partially because the 40nm transistors had matured but also because they made the architecture more efficient.

The 480 to 580 is a good example of just how unrelated density is to TDP. The 580 runs significantly cooler even though it is slightly denser. So, like I said before, density and heat output are not related.

 

 

I'm basically making a prediction. You're talking theory, which is nice and all (also safe and easy), but I'm thinking about the actual products that we might get.

Then I suggest you make it clear in your future posts that you are just making guesses not based on actual facts, instead of just going "this is what will happen!" like you did in this thread.


It's actually less heat (assuming the same number of transistors and the same workload).

 

OP: the smaller transistors get, the less power they use, the less heat they produce, and the more of them you can fit into a given area.

So the move from 28nm to 20nm would let manufacturers make more powerful components, or make them smaller, cheaper, and more energy efficient.

 

Smaller die = harder to transfer heat

Curing shitposts by shitposts


I am going to ask you again, since you avoided the question last time: if AMD were to shrink the 290X to 20nm transistors, do you think it would have a higher, the same, or a lower TDP than the 28nm version? It's a simple question with three answers, and only one is correct (if history is any indicator).

[...]

What I am saying is based on scientific observations. What you are saying is just grabbed out of thin air. Our "theories" are not equal.

[...]

...you're no longer talking about 20nm vs 28nm, which is what OP asked for.

[...]

Then I suggest you make it clear in your future posts that you are just making guesses not based on actual facts, instead of just going "this is what will happen!" like you did in this thread.

 

The 20nm 290X question is hypothetical in nature. One can give whatever answer one likes to a question like that. The card could be so efficient that they bundle it with a hamster wheel to power it. It's unlikely, but until they release it, the answer can be both true and false at the same time. It's the "cat in a box" argument. I answered that question three times, each time with a different answer, and each of those answers can be true.

 

A hypothetical product consumes just as much power as it takes to talk about it. Maybe more if you're really inefficient at talking.

 

There are no "facts" about 20nm. There are no TSMC released products on 20nm. You can claim you base your assumptions on observation and make an educated guess, but you can't really directly observe the future. You can only make predictions about the future. So when you claim "facts" about 20nm, those are actually called "predictions".

 

The thread is in the "Computer hardware -> Graphics cards" section. The OP cares about graphics cards. He doesn't care about the intricacies of the 20nm process technology. He wants to know what the cards built on 20nm will be like.

 

I'm not drifting... I'm just explaining that there's more to density than just a number. "Density" is a nice buzzword that's good for marketing purposes, but it comes with great technological obstacles. The downsides of achieving high density can influence things in really bad ways. You keep trying to isolate the word "density". Video cards are made of much more than just the word "density". I can't list all the factors; I don't know all of them, and I don't have the curiosity to.

 

I too have made an observation about the past and the present: Intel abandoned planar transistor technology when they switched to 22nm. There are some smart people at Intel, so there must be a good reason. TSMC's 20nm process is very late. They must have a good reason for that. Maybe they switched to a pseudo 3-D process tech too and called it 20nm.

 

Anyway, I'm bored. If the alpha dog in you wants to win, go ahead, claim victory. But it's not really about that.



Uhm, yeah, that escalated quickly. I think I have more questions than answers after reading all that. I think I'll just wait for next year when 20nm actually hits consumers.

"Energy drinks don't make my mouth taste like yak buttholes like coffee does, so I'll stick with them." - Yoinkerman


Yeah, sorry MooseParade. What LAwLz said is actually true: 20nm should indeed enable the GPU manufacturers to bring better cards to market. That's what should happen, the happily-ever-after fairytale version of an uncertain future. But usually there are a lot of leaks and information coming from various sources. This time there aren't. There's a lot of silence, and it's this silence that worries me. 20nm could be a rough patch for TSMC. If it is, as I suspect it will be, then what I said could very well come true. We'll just have to wait and see.



Smaller die = harder to transfer heat

Smaller transistors => smaller die => less heat

Also, we have heat spreaders to help with the "less contact area for the cooler".
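The heat-spreader point can be made concrete with a one-resistor thermal model: the temperature rise above the cooler is ΔT = P · θ, and θ falls as the effective cooled contact area grows. Every number below is invented purely to show the trend:

```python
# One-resistor thermal model.  theta_mm2 is a made-up specific thermal
# resistance (degC * mm^2 / W); dividing by area gives theta in degC/W.

def temp_rise(power_w, theta_mm2, area_mm2):
    """Junction temperature rise above the cooler, in degC."""
    return power_w * (theta_mm2 / area_mm2)

power = 190.0           # W, hypothetical GPU
bare_die = 225.0        # mm^2 of contact area without a spreader
with_spreader = 900.0   # mm^2 once a spreader fans the heat out

print(f"bare die:      {temp_rise(power, 40.0, bare_die):.1f} C rise")
print(f"with spreader: {temp_rise(power, 40.0, with_spreader):.1f} C rise")
```

Same power, larger effective area, smaller temperature rise: that is the whole job of a heat spreader, and why a shrunken die alone doesn't doom a card to high temps.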

 

 

The 20nm 290X question is hypothetical in nature. One can give whatever answer one likes to a question like that. The card could be so efficient that they bundle it with a hamster wheel to power it. It's unlikely, but until they release it, the answer can be both true and false at the same time. It's the "cat in a box" argument. I answered that question three times, each time with a different answer, and each of those answers can be true.

Cat in a box argument? Never heard of that before. If you're talking about Schrödinger's cat then no, this is nothing like that. The question can't be both true and false at the same time. Also, I am obviously talking about the most likely scenario, not something ridiculous.

You haven't answered my question; you always dance around it or grasp at straws like you're doing now ("I can't answer it because it could be so efficient that a hamster could power it!"). The answer I was looking for was "it would have a lower TDP", because that's what history tells us. You just have to look at previous generations to see this (like the 980X I used as an example before).

 

 

A hypothetical product consumes just as much power as it takes to talk about it. Maybe more if you're really inefficient at talking.

I have no idea what you are trying to say with this.

 

 

There are no "facts" about 20nm. There are no TSMC released products on 20nm. You can claim you base your assumptions on observation and make an educated guess, but you can't really directly observe the future. You can only make predictions about the future. So when you claim "facts" about 20nm, those are actually called "predictions".

Reread my post. I said that I base everything I say on previous generations. Of course I can't prove that 20nm will use less power and produce less heat, just like I can't prove that I would die if I jumped off a skyscraper. But if we look at all previous generations, and at people who have jumped off skyscrapers before, then the conclusion is more than "just a theory that's worth as much as any other theory". Your logic doesn't hold up. You're basically arguing that because I am not 100% sure it will happen, my theory is just as valid as any other, which is not true. Saying that I would die from jumping off a skyscraper is worth more than saying I would gain super powers and fly to safety, because one is based on historical evidence and the other is dragged out of thin air with no supporting evidence whatsoever.

 

 

 

I'm not drifting... I'm just explaining that there's more to density than just a number. "Density" is a nice buzzword that's good for marketing purposes, but it comes with great technological obstacles. The downsides of achieving high density can influence things in really bad ways. You keep trying to isolate the word "density". Video cards are made of much more than just the word "density". I can't list all the factors; I don't know all of them, and I don't have the curiosity to.

1) Density is not a buzzword. It has a very clear definition, and most people I see using it use it correctly. The number of transistors in a given area gives you the density. Increase the number of transistors in the same area and you get higher density. It's as simple as that.

2) That's not what you said before. You said, and I quote "Heat output depends on density". I am the one who has been arguing that density is not related to heat output. Right now you're saying the exact opposite of what you said in your first post. It's nice to see that you are finally agreeing with me though.

 

 

 

I too have made an observation about the past and the present: Intel abandoned planar transistor technology when they switched to 22nm. There are some smart people at Intel, so there must be a good reason. TSMC's 20nm process is very late. They must have a good reason for that. Maybe they switched to a pseudo 3-D process tech too and called it 20nm.

 

Anyway, I'm bored. If the alpha dog in you wants to win, go ahead, claim victory. But it's not really about that.

Yes they are definitely having issues with it. That's why I said "It's very unwise to say that you would "take matured 28nm over fresh and hip 20nm" as well, since we do not know how good/bad it will be.".

I am not here to "win" anything. I just don't want misinformation circulating that heat output is tied to density, or that smaller transistors will generate more heat. This forum has enough misinformation as it is.

 

People should wait until products are actually out before saying they will be worse than the previous generation (which is effectively what you said with "I'd take matured 28nm over fresh and hip 20nm any day of the week").

