
NVIDIA Pascal Mythbusting

Glenwing

My main concern is coil whine. After fruitlessly trying to upgrade my GTX 760, I finally gave up after three GTX 970s with unbearable coil whine (I returned all of them - the GTX 760 has none) and decided to wait for Pascal instead.

 

I hope they will do something about it.



R9 390?

Archangel (Desktop) CPU: i5 4590 GPU:Asus R9 280  3GB RAM:HyperX Beast 2x4GBPSU:SeaSonic S12G 750W Mobo:GA-H97m-HD3 Case:CM Silencio 650 Storage:1 TB WD Red
Celestial (Laptop 1) CPU:i7 4720HQ GPU:GTX 860M 4GB RAM:2x4GB SK Hynix DDR3Storage: 250GB 850 EVO Model:Lenovo Y50-70
Seraph (Laptop 2) CPU:i7 6700HQ GPU:GTX 970M 3GB RAM:2x8GB DDR4Storage: 256GB Samsung 951 + 1TB Toshiba HDD Model:Asus GL502VT

Windows 10 is now MSX! - http://linustechtips.com/main/topic/440190-can-we-start-calling-windows-10/page-6


I could try one, yes, but I don't know if there's a point to it when Pascal is so close. I have nothing against AMD personally, but after two burnt Radeon HD 2900s I kinda prefer Nvidia.

 

I also plan on 4K gaming and am looking to buy a 4K monitor, and an R9 390 isn't enough to achieve 60 fps in most games at 4K, IIRC.


My main concern is coil whine. After fruitlessly trying to upgrade my GTX 760, I finally gave up after three GTX 970s with unbearable coil whine (I returned all of them - the GTX 760 has none) and decided to wait for Pascal instead.

 

I hope they will do something about it.

There is nothing for NVIDIA to do about it; the coils are not part of the GPU. The whine comes from the power-delivery inductors on the board, not the GPU itself, so the architecture or company doesn't matter. It varies on a unit-by-unit basis, so it has nothing to do with which GPU model it is.


We know die shrinks do not help increase performance, as seen with recent die shrinks. Do not expect a massive jump, as NVIDIA/AMD will want to release the GPUs slowly (starting Q2 2016) with incremental performance gains until you get to high-end Pascal (Q2 2017).

Would you care to back these statements up with facts?

4790k 4.9GHz @ 1.375v, HD 7970, 850 EVO SSD's, Corsair 750D Airflow Edition, SeaSonic 860w Platinum.


It's what NVIDIA have done with their last two architectures. They release a mid-high chip as the "top-end card", like the GTX 680 and 980, with a bit of performance gain and lower power consumption, then release the top-end chip later as the real flagship (GTX 780/Ti, GTX 980 Ti) after people have bought their 680s and 980s.

As for die shrinks not affecting performance, that's a well-known fact that you can observe with Intel's recent generations, where they keep the architecture largely the same and just do a die shrink, and there are no notable performance gains resulting from it, such as between Sandy Bridge and Ivy Bridge, or between Haswell and Broadwell.


The data we have doesn't support what you're saying. The largest gains come at the process node shrinks (aside from 2008 with the GTX 280, which was an outlier with a MASSIVE, at the time, 576 mm² die).

Full data sheets: https://docs.google.com/spreadsheets/d/1awOqzOXrnhwgcUP7ORB4vzbtdjXwQ-2t6xiwRiYkENo/edit?usp=docslist_api



 

As I mentioned, Sandy to Ivy, Haswell to Broadwell. Or does that data not count because it doesn't demonstrate your point?

 

In any case all that's being said is that it isn't because of the node shrinks, it's because of the new architectures that came with them. If history repeats itself then yes Pascal will be a nice leap up. But you cannot predict new architectures based on history. Processor design doesn't come from forces of nature. It comes from engineering teams, and isn't beholden to patterns of history.


Ivy Bridge to Haswell had the exact same number of transistors. You're comparing apples to oranges.


 

Not at all. The argument is that die shrinks themselves do not increase performance, that's all. If you want to argue transistor counts increase performance then say that.


Wow. I didn't realize we were on that level of pedantry. Nobody thinks Pascal and Arctic Islands will be the exact same chips shrunk to a 14/16nm process.

Here's my previous point more clearly worded:

"Historically, nVidia and AMD flagships show the biggest improvements in the generations that the process node shrinks, therefore I strongly suspect the same to hold true for flagships from Pascal and Arctic Islands families of chips"

Feel free to agree or disagree with that.


 

Sure. But every time there's been a node shrink, there's been a core architecture change too. It's just odd you seem to be trying really hard to relate the performance to the die shrinks when it's very well known that they aren't related at all.

 

My issue was that someone else pointed out node shrinks don't improve performance, and you asked for facts to back it up. That's what I was responding to.

 

What you're essentially saying is "I've found a pattern, therefore I suspect the pattern will continue". Why? There's no actual argument there. I suspect Pascal will bring a large performance increase too, but it has nothing to do with the node shrink. It's a new architecture and has a lot more transistors. You're trying to establish a relationship where none exists.


So the fact that performance has improved more during architecture changes that coincided with node shrinks is purely coincidental?


You guys know the performance jump will be similar to the GTX 780 Ti to GTX 980, as the new process is really dense, therefore overclocking will be limited. Also, why make an 80 percent increase in performance when they can do two GPUs with a 40 percent increase each?

GPU[Two GTX 980ti Lightnings overclocked]-CPU[Core i7 5930K, 4.4Ghz]-Motherboard[MSI X99 Godlike]-Case[Corsair 780t Black]-RAM[32GB Corsair Dominator 3000Mhz Light bars]-Storage[samsung 950 Pro 512GB, Samsung 850 Pro 1TB and Samsung 850 Pro 512GB]-CPU Cooler[EK Predator 360mm]-PSU[EVGA 1600w T2 Individual cables Red]-Monitor[ASUS PG348Q]-Keyboard[Corsair K70 Red]-Mouse[Corsair M65 RGB]-Headset[sennheiser G4me one]-Case Fans[beQuiet Silent Wings 2]


 

I doubt that. Here's why:

  • The 780 Ti to 980 improvement was the smallest improvement in the last 7 years; it was also the first step backwards in transistor count in nvidia's flagships. As there was no process shrink to go with the die shrink, the only thing carrying the 980 forward over the 780 Ti was architecture, so even with ~35% fewer transistors, it still managed roughly 10% more performance.
  • Moving from the 40nm process to the 28nm process was approx. a 42% shrink. In practice, nvidia managed to put more than twice as many transistors per sq. mm. The shrink from 28nm to 14nm (or 16nm) approximately halves the feature size again. If this means nvidia doubles their transistors per sq. mm. again, they are in for a serious increase in performance.
  • If nvidia goes with a relatively conservative die size of 400 mm² (approx. the same as the 980), they should manage to get approximately 10.35 billion transistors on the die. Pascal architecture SHOULD only be an improvement on Maxwell architecture, so even if it only equaled Maxwell performance on a transistor-for-transistor level (which it won't, it'll outdo it), it would be a 29% faster card than a 980 Ti.
  • The smallest flagship die nvidia's released since 2008 was the 680 in 2012. If they copied that die size, it might have approximately 7.8 billion transistors, which would be just shy of a 980 Ti. If Pascal's architecture is as much of an improvement as Maxwell was over Kepler, you're still looking at a large performance increase.

There are no signs pointing to this being an incremental upgrade for nvidia or AMD. Everything says that this will be the biggest jump we've seen since 2012.
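For what it's worth, the back-of-envelope math in those bullets checks out in a few lines (my own sketch, assuming the published figures of ~5.2 billion transistors on the 398 mm² GM204 and 8 billion on GM200):

```python
# Back-of-envelope check of the bullet-point math (assumed published figures)
gm204_transistors = 5.2e9   # GTX 980 (GM204)
gm204_die_mm2 = 398         # GTX 980 die area
gm200_transistors = 8.0e9   # GTX 980 Ti / Titan X (GM200)

# Transistor density on 28nm, and a hypothetical doubling at 16/14nm
density_28nm = gm204_transistors / gm204_die_mm2
density_16nm = density_28nm * 2

# A conservative ~400 mm^2 Pascal die at the doubled density
pascal_transistors = density_16nm * 400
print(f"{pascal_transistors / 1e9:.1f}B transistors")       # ~10.5B

# Even at Maxwell's performance per transistor, the lead over GM200:
print(f"{pascal_transistors / gm200_transistors - 1:.0%}")  # ~31%
```

This lands near the ~10.35 billion / 29% figures quoted above; the small difference is just rounding in the die area and density.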


So the fact that the performance increase has increased more during architecture changes that coincided with node shrinks is purely coincidental?

 

Basically, yes. The power consumption and die size characteristics brought by smaller nodes may affect how far they decide to scale up their new architecture, but it doesn't directly affect performance.


No, it's not coincidence. You're absolutely right in that the shrink itself doesn't increase performance, but every single time they've shrunk the process node, they've packed more transistors onboard, even when they've also shrunk the die.

4790k 4.9GHz @ 1.375v, HD 7970, 850 EVO SSD's, Corsair 750D Airflow Edition, SeaSonic 860w Platinum.


 

It is a coincidence. NVIDIA could have made Maxwell on the 20nm node if it had been available, and the performance characteristics of the 980 would have been the same; the fact that the 20nm production capacity wasn't there is just the fault of TSMC. The reason the 980 was not a large performance increase was, as you correctly pointed out, that the die size and transistor count were much lower. The architecture is better, but it was not scaled up as much as GK110. Looking at that is how you predict characteristics. And again I point to Sandy Bridge to Ivy Bridge: a die shrink with few other changes and a small performance gain.

You can use a die shrink to fit more transistors, but it doesn't have to be that way. The fact that NVIDIA and AMD have chosen to use the die shrinks to fit more transistors every time, creating your pattern, is their choice; there is no actual reason they have to do that every time. So, basically a coincidence. There are business reasons of course, but in engineering terms there's no direct tie between them.

 

If I asked you "Pascal will be shrunk to 16nm (or whatever), will there be a big performance increase?" you can reply "maybe, because in the past they've used node shrinks to fit more transistors in a given space, and doing that would increase performance", but that in itself doesn't really make a prediction. To do that you must ask a follow-up question, which is "Cool, are they doing that this time?", and you would reply "Yes, Pascal will have 17 billion transistors which is twice as much as Maxwell GM200".

 

I just don't understand why you're trying to center things around the die shrink, when you could just skip that and cut to the real question, which is "Pascal will have 17 billion transistors, will there be a big performance increase?" to which the reply is "probably, because that's a lot more transistors than GM200". The die shrink is what enables that, but it in itself is irrelevant to the performance question, and the fact that these large increases are only made possible by a die shrink doesn't automatically mean future die shrinks will always be used that way. To find out you would need to look at transistor counts to see how far the architecture has been scaled up, but you can just do that right from the start and ignore the die shrink.


 

Didn't I just answer everything you asked?

 

There's no way that nvidia is going to put out a flagship on a sub-300 mm² die, therefore I can pretty well guarantee you that we are going to see an increase of, at a MINIMUM, whatever the improvement in architecture will be. The difference from Kepler to Maxwell was approximately a 50% increase in performance, transistor for transistor (a 5.2-billion-transistor 980 performed 10% better than a 7.08-billion-transistor 780 Ti, or approximately equivalent to a theoretical 7.79-billion-transistor 780 Ti).
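That transistor-for-transistor comparison works out as follows (a quick sketch using the same assumed figures: a 5.2-billion-transistor 980 performing ~10% above a 7.08-billion-transistor 780 Ti):

```python
# Kepler -> Maxwell per-transistor gain, per the figures in this thread
kepler_780ti = 7.08e9   # GTX 780 Ti transistor count
maxwell_980 = 5.2e9     # GTX 980 transistor count
perf_ratio = 1.10       # the 980 is taken as ~10% faster than the 780 Ti

# Transistor count a Kepler chip would need to match the 980
equivalent_kepler = kepler_780ti * perf_ratio
print(f"{equivalent_kepler / 1e9:.2f}B")   # 7.79B

# Maxwell's improvement per transistor over Kepler
improvement = equivalent_kepler / maxwell_980 - 1
print(f"{improvement:.0%}")                # 50%
```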

 

Nobody can say that Pascal architecture will be a straight up transistor for transistor improvement of 50% over Maxwell, but between Kepler and Maxwell, all that changed was the architecture. Between Maxwell and Pascal, we get new architecture, we could potentially see more transistors, we have a new memory bus and we have all the other improvements listed in your original post.

 

Take it to the bank. Pascal is going to be the biggest increase in performance we've seen since the 680 was over the 580 (which was approximately 40%).


 

It might be the biggest jump, but that will be spread over many different GPUs. Also, why release one big GPU and then nothing until Volta (2018 or 2019) if you can do small incremental upgrades and get more money? Also expect a Q1 2017 release for high-end Pascal.


Sorry metro. You're wrong again. Quote me. The biggest jump will be from the first generation of Pascal GPUs, not the second or third generation on the 14/16nm FinFET process.

You're also wrong about the release date. AMD just announced that Arctic Islands will hit mid-2016, and they displayed a working competitor to the GTX 950 with a power draw of approximately 40 W. We are going to see flagships by mid to late 2016 guaranteed, and like I said before: take it to the bank and quote me, they will be the biggest increase in performance since the 680 over the 580 (40%) or the 7970 over the 6970 (60+%).


 

I expect low end and mid end Pascal to be Q2 2016, then high end Pascal in Q2 2017

 

One reason is that the 16nm fab is new...and untested...and you can't go large and power-hungry on first-generation chips. Don't believe how unreliable a new fab can be? Look at how long it took Nvidia to pick between Samsung and TSMC, finally siding with TSMC due to their longer working history and reliability.

Second point is...they will kill their own sales by offering their best card right at the release of the new generation. They will not be able to give any reason at all to their enthusiast market to upgrade their card within 3 years of launch. Whereas when you look at Kepler, people bought the 680, then the Titan came out, then the 780, and most people who bought the 680 eventually upgraded to those cards. And then when Maxwell launched, they started with just the 750 and 750ti. Then released the 980 that some bought, and finally after a while, the Titan X and 980ti which gave people (enthusiast market, again) a reason to upgrade.

Nvidia likes incremental upgrades. They don't want you to buy a card now, and launch another card in a year that completely destroys the card you bought. They put out one card, then 12 months later they bring out another card that is generally just 10-20% faster, so it's better than what they offered before, but not enough to **** off anyone who bought their last gen cards. And then another 6-12 months later they put out an even better card, with 50%+ better performance than their older cards, and people start upgrading again.

If on day one of the Pascal launch, if they come out with their absolute best card, they are going to have nothing interesting to bring to market for 2 to 3 years until Volta comes out. And that would be silly. Even if it were possible with the new 16nm fab. In terms of business, you need to offer a product that is a bit better than your competition, without being too much better/too costly for you. So they just need to put out a slight performance increase, but sell the card on much lower power consumption, wait for AMD to release something else, and then launch a bigger die version themselves, and back/forth they go. Just a quick reference:

FERMI

GTX 580 = 520mm2

KEPLER

GTX 680 = 294mm2

GTX Titan = 551mm2

MAXWELL

GTX 750ti = 148mm2

GTX 980 = 398mm2

GTX Titan X = 601mm2

Do you see the pattern? Small, Medium, Big, restart. Don't think about it based on die size. Because it's really just about transistor count. Pascal at 16nm, even a 294mm2 sized die like the GTX 680, along with HBM2, would result in performance close to the Titan X...and perhaps even higher due to lower heat/power consumption allowing higher clocks.
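As a rough sanity check on that last claim (my own arithmetic, assuming the Titan X's published 8 billion transistors on its 601 mm² die and a doubling of transistor density at 16nm):

```python
# Would a GTX 680-sized (294 mm^2) Pascal die land near the Titan X?
titan_x_transistors = 8.0e9   # GM200 transistor count
titan_x_die_mm2 = 601         # GM200 die area

# Assume the 16nm node roughly doubles Maxwell's transistor density
density_16nm = (titan_x_transistors / titan_x_die_mm2) * 2
small_pascal = density_16nm * 294
print(f"{small_pascal / 1e9:.1f}B")   # ~7.8B, just shy of GM200's 8B
```

So even before counting any architectural or clock-speed gains, a 680-sized Pascal die would sit right around Titan X transistor counts.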

Hope that helps.

Also, NVIDIA is launching the GTX 990, so it would not make sense to release high end Pascal in Q2 2016

We also know HBM2 is going to have supply problems with the number of stacks, as there was not enough HBM1 even for AMD. So if both NVIDIA and AMD are using HBM2, there will not be enough (a Titan O will need 16GB of HBM2). This is why NVIDIA will have to wait for Samsung HBM2 (which starts production in Q2 2016), so it will take until 2017 for there to be enough and for NVIDIA to implement it on the GTX 1080ti and Titan O.

 

 

Also, give me a link where AMD says they will release high-end Arctic Islands in Q2 2016.


I think we are having a fundamental disagreement over the terminology of "high end".

 

 

I expect low end and mid end Pascal to be Q2 2016, then high end Pascal in Q2 2017

 

On release, the 480, 580, 680, 780, 780 Ti, 980 and 980 Ti have been the high end flagship cards for nVidia. Why ignore the Titans you ask? Because they're $999+ cards that are exorbitantly priced and not worth considering as consumer cards. (Also, they support what I'm saying).

 

400 Series Launch date: March 26, 2010.

480 Release date: March 26, 2010.

 

500 Series Launch date: November 9, 2010

580 Release date: November 9, 2010

 

600 Series Launch date: March 22, 2012

680 Release date: March 22, 2012

 

700 Series Launch date: May 23, 2013

780 Release date: May 23, 2013

 

900 Series Launch date: September 18, 2014

980 Release date: September 18, 2014

 

Are you starting to follow what I'm conveying? nVidia debuts every generation with its flagship. That means that the GTX 1080 (or whatever they decide to name it), will be released at the launch of this coming generation, which will likely happen in mid 2016. It will be the highest performance consumer card nvidia has to offer. It will likely later be surpassed by a 1080 Ti, but that's neither here nor there. The jump from 980 Ti to 1080 will very likely be larger than 1080 to 1080 Ti (as history has shown with the jumps from the 285 to the 480 and the 580 to the 680, both of which were process shrinks).

Can we stop with the bullshit, now?


No, the GTX 1080 is a mid-end GPU with a small die size. The GTX 1080ti is high end with a large die size. Also, the GTX 1080ti will be the bigger jump. Why would GTX 980ti (high end) to GTX 1080 (mid end) be the bigger jump in performance? The GTX 1080 will be the same performance jump as the GTX 780ti to GTX 980.


 

Nowadays the 780 Ti has roughly the same performance as a 960.

 

Just whackin' that in. Anybody want popcorn?

Hiya :)

Feel free to quote me in a reply so I can see your reply :)

