
NVIDIA Pascal Mythbusting

Glenwing

No, the GTX 1080 is a midrange GPU with a small die; the GTX 1080 Ti is high end, with a large die. The GTX 1080 Ti will also be the bigger jump. Why would going from a GTX 980 Ti (high end) to a GTX 1080 (midrange) be the bigger performance jump? The GTX 1080 will be about the same performance jump as the GTX 780 Ti to the GTX 980.

Well, we've said our points. We'll see who was right.



That Popcorn was great.



Well, we've said our points. We'll see who was right.

 

I expect to upgrade my GTX 980 Ti SLI to GTX 1080 Ti SLI anyway, so it will be interesting to find out what happens.



I expect to upgrade my GTX 980 Ti SLI to GTX 1080 Ti SLI anyway, so it will be interesting to find out what happens.

You have no need to upgrade anyhow, unless they somehow come out with a 2160p ultrawide or 120 Hz. 980 Tis in SLI are about as good as performance gets these days. It won't matter whether people upgrade at the initial flagship launch (the 1080) or when the larger die (the Ti model) arrives later on; either way you'll have the best consumer card on the market until the following release knocks the crown off.



I would definitely go with high-end Volta SLI; however, it depends on whether they make a 5120x2160 95 Hz monitor. If they do, I will get high-end Pascal, although I expect that monitor wouldn't be released until 2018-2019.

 

I will have to look at the performance difference between the GTX 980 Ti and the GTX 1080 Ti.



I would definitely go with high-end Volta SLI; however, it depends on whether they make a 5120x2160 95 Hz monitor. If they do, I will get high-end Pascal, although I expect that monitor wouldn't be released until 2018-2019.

I will have to look at the performance difference between the GTX 980 Ti and the GTX 1080 Ti.

I wouldn't be the least bit surprised if we get 3-5 GPU generations on the 14/16 nm process, possibly lasting longer than the stale 28 nm process we have now. If 10 nm was enough to stall Intel, I don't have higher hopes for the graphics firms.



Also, NVIDIA is launching the GTX 990, so it would not make sense to release high-end Pascal in Q2 2016.

 

Has this been announced yet? I know I've seen photos of AMD's Fury X2 board, but I don't recall seeing a 990 PCB. I know there were hints that they were both planning to build a dual-GPU flagship before the end of this generation.

 

Now that AMD has teased a working Polaris unit (Arctic Islands, not Greenland; probably a low-end gaming part to replace the 360 or 370 lines), it surprises me that they've delayed the Fury X2. It seems like enthusiasts know a new, faster card is on the way; mind you, this dual-GPU card will still be faster once CrossFire is supported.



Yeah, they had their conference about it a few months ago. I expect they are waiting for AMD to release the Fury X2.

 

The GPU AMD showed was a low-end part aimed at mobile and power efficiency, so I expect high-end Arctic Islands will not be out this year.



Will the professional cards be able to play video games well? I am about to pull the trigger on 4 Titan Xs, but if the Pascal professional cards come out in Q2, I will wait to purchase one of those instead, since the primary focus of my rig is computational work. One professional Pascal card may be better than 4 Titan Xs, although probably equivalent in cost.


The GPU AMD showed was a low-end part aimed at mobile and power efficiency, so I expect high-end Arctic Islands will not be out this year.

 

I suspect they didn't want to give NVIDIA any idea of how far they went with their top release GPU, so they showcased one for power efficiency.

 

You can put me on record as saying there will be a 490 (or whatever they decide to call it) when the GPUs are finally launched, or within a month-ish of launch (and a GTX 1080 from NVIDIA, too).



I thought I read/heard somewhere that SLI would allow all the RAM (the RAM in multiple cards) to be utilized. Is this the case? I can't find the information again.

Thanks-


I thought I read/heard somewhere that SLI would allow all the RAM (the RAM in multiple cards) to be utilized. Is this the case? I can't find the information again.

Thanks-

That is a feature of DirectX 12. It is up to individual games programmed with DirectX 12 to employ that feature, and it will only work in that particular game; it's not a feature of the graphics card.


That is a feature of DirectX 12. It is up to individual games programmed with DirectX 12 to employ that feature, and it will only work in that particular game; it's not a feature of the graphics card.

DirectX 12's Multi-Display Adapter mode. It also allows you to use dissimilar GPUs, even mixing NVIDIA and AMD. So, as long as the developer of the game you're playing has supported this feature, you could pair your 980 Ti with an AMD 390X or Fury X.

 

I feel like you could run into driver issues pairing two separate GPU brands, but you never know. At least it's a cool possibility.



@Glenwing can you do the same for AMD, in the same post?

I don't intend to make an "information round-up" for every new generation of graphics cards from both companies. I only made this thread because of the amount of BS being spread by news sites' bad reporting and the number of misconceptions floating around. I don't see a similar amount of confusion about AMD's next generation, and I haven't seen much information about their new stuff in the first place.


Thanks for posting this. I've been IV'ing the Pascal hype like a PC gaming fiend. Trying to separate the accurate information from the hype can be outrageously difficult this far from launch.


It’s high time to start shutting down some of these myths. There have been too many poorly written and misleading articles published on various tech “news” websites, generating hype out of nothing. Pascal is entirely focused on HPC (high-performance computing, a.k.a. supercomputers and servers), and NVIDIA hasn’t said so much as a word about gaming. But at every turn people keep trying as hard as they can to interpret every statement as “amazing for gamers!”, and every time NVIDIA specifically says “compute performance” the words somehow turn into “gaming performance” in people’s minds, leading to a lot of false impressions and expectations. That’s not to say Pascal won’t be great. What I’m saying is that we really don’t know much about Pascal in terms of gaming; NVIDIA has said essentially nothing on the topic, and most of what they have said so far isn’t applicable to gaming.

But anyway... let’s get busting!

 

 

This is a complete myth from start to finish. The belief here is that Pascal will have ten times the general computational power, and therefore 10x the gaming performance, of Maxwell. That is specifically NOT what NVIDIA said:

 

[Keynote slide: NVIDIA’s “10x Maxwell” claim for Pascal in deep learning workloads]

 

 

 

 

What NVIDIA said was that Pascal would have ten times Maxwell’s throughput in deep learning compute workloads utilizing mixed precision, which sounds impressive until you’re made aware that Maxwell is (by design) completely horrible at many compute tasks to begin with, in some cases actually many times slower than Kepler. In double-precision compute workloads, for example, Maxwell GM200’s performance in FLOPS is about 1/32 of its single-precision (normal workload) performance, while Kepler GK110 puts out 1/3 of its normal performance. So ten times the performance of Maxwell in compute tasks is not necessarily as earth-shattering as it sounds.
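To put rough numbers on that, here’s a back-of-the-envelope sketch (the TFLOPS figures are the published theoretical peaks for the Titan X and GTX Titan Black, so treat them as approximations):

```python
# Theoretical FP64 throughput, derived from each chip's FP32 peak and its
# FP64:FP32 rate ratio. These are paper specs, not measured performance.
chips = {
    "Maxwell GM200 (Titan X)":    (6.1, 1 / 32),  # (FP32 TFLOPS, FP64:FP32 ratio)
    "Kepler GK110 (Titan Black)": (5.1, 1 / 3),
}

for name, (fp32, ratio) in chips.items():
    print(f"{name}: {fp32:.1f} TFLOPS FP32 -> {fp32 * ratio:.2f} TFLOPS FP64")

# Maxwell GM200 (Titan X): 6.1 TFLOPS FP32 -> 0.19 TFLOPS FP64
# Kepler GK110 (Titan Black): 5.1 TFLOPS FP32 -> 1.70 TFLOPS FP64
```

On paper, the older Kepler flagship is nearly 9x faster than GM200 in FP64, so “10x Maxwell” is a much lower bar in compute than it sounds.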

 

It’s also important to keep in mind that NVIDIA didn’t say the GPU would be 10x as powerful. They said it would have 10x the mixed precision throughput. Gaming workloads and compute workloads are very different, and stress different parts of the GPU system. Compute, for example, is much heavier on memory bandwidth, memory capacity, and communication with the CPU compared to gaming. These aspects of the system can be a major bottleneck in compute applications, but not in gaming. This means that a 10x increase in mixed precision throughput doesn’t really tell us anything about the power of the GPU itself, as a lot of that gain can come from improvements to the memory and system interconnects rather than the GPU, which would improve compute performance without improving gaming performance.
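To illustrate the point, here’s a toy model. The runtime splits are entirely made up, and the per-phase improvement factors are only loosely inspired by the kinds of numbers floating around (FP16 arithmetic, HBM2 bandwidth, NVLink bandwidth); the point is the shape of the result, not the exact values:

```python
def overall_speedup(phases):
    """Amdahl-style composition: divide each phase's share of runtime by
    how much faster that phase gets, then invert the new total."""
    return 1 / sum(frac / factor for frac, factor in phases.values())

# A transfer- and bandwidth-heavy deep learning job that can use FP16.
# (runtime fraction, improvement factor) -- all illustrative guesses.
compute_job = {
    "arithmetic (FP16 mixed precision)": (0.40, 4.0),
    "memory traffic (HBM2)":             (0.30, 3.0),
    "CPU-GPU transfers (NVLink)":        (0.30, 5.0),
}

# A game: almost entirely FP32 shading; bandwidth and PCIe barely register.
game = {
    "arithmetic (FP32 shading)": (0.95, 2.0),
    "memory traffic (HBM2)":     (0.05, 3.0),
}

print(f"compute job: {overall_speedup(compute_job):.1f}x")  # ~3.8x
print(f"game:        {overall_speedup(game):.1f}x")         # ~2.0x
```

The same set of hardware improvements multiplies into a big speedup for a transfer-bound compute job while barely moving a shading-bound game.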

 

[Chart: NVIDIA’s breakdown of the “10x” figure; most of the contributing items are memory bandwidth and interconnect improvements, with mixed precision the only compute-side gain]

 

If you take a look at the chart which accompanied NVIDIA’s “10x” statement above, that is exactly what you see: more than half the items on the chart contributing to the “10x” figure are memory bandwidth and interconnect improvements, things that won’t significantly impact gaming performance.

Mixed precision performance is the only thing on there that actually has to do with the computational power of the GPU. What we know so far is that Pascal’s support for mixed precision compute mode means it can do FP16 calculations at double the FP32 (standard) rate, which is all well and good for deep learning but not particularly useful for games. NVIDIA stated that Pascal would have 4x the FP16 performance: 2x from the addition of mixed precision mode, and 2x from the Pascal architecture itself. So at best we can predict that the highest-end Pascal chip might have up to 2x the performance of Maxwell GM200, if we’re feeling very optimistic. Obviously that would still make for an incredibly powerful GPU, don’t get me wrong; I’m just saying it’s not 10x. Not at all.
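Spelled out, the arithmetic behind that prediction looks like this:

```python
# Decomposing NVIDIA's "4x FP16" claim into its two stated factors.
mixed_precision_gain = 2.0  # FP16 runs at twice the FP32 rate in mixed precision mode
architecture_gain    = 2.0  # raw throughput gain from the Pascal architecture itself

fp16_total   = mixed_precision_gain * architecture_gain  # 4x FP16 vs Maxwell
fp32_implied = architecture_gain                         # ~2x FP32: the part games would see

print(f"FP16 (deep learning):         {fp16_total:.0f}x")
print(f"FP32 (optimistic gaming cap): {fp32_implied:.0f}x")
```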

 

You can watch the actual keynote if you want; you’ll notice that the presentation is entirely focused on the benefits for deep learning compute algorithms, and absolutely nothing is said about gaming performance.

 

 

NVLink is an interface for connecting multiple GPUs and/or the CPU. NVLink can be used in place of PCI Express, with about 5x the bandwidth of PCI Express 3.0 x16. The full implementation of NVLink, where it entirely replaces PCIe, is only for supercomputers and servers, which will require purpose-built motherboards and CPUs with NVLink controllers instead of PCIe controllers. So far the only CPUs we know of that will fully implement NVLink in the near future are custom IBM POWER processors to be used in the Summit supercomputer being jointly designed and built by IBM and NVIDIA. The full NVLink implementation will not be used in the general desktop market.
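For a sense of scale, here’s where that “about 5x” lands. The PCIe 3.0 numbers come straight from the spec (8 GT/s per lane, 128b/130b encoding); the NVLink figure simply applies the claimed multiplier:

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
lane_gbps = 8 * (128 / 130)     # ~7.88 Gbit/s of payload per lane
pcie3_x16 = lane_gbps * 16 / 8  # ~15.8 GB/s per direction for an x16 slot

nvlink = 5 * pcie3_x16          # "about 5x" -> ~79 GB/s per direction

print(f"PCIe 3.0 x16: {pcie3_x16:.1f} GB/s per direction")
print(f"NVLink (~5x): {nvlink:.0f} GB/s per direction")
```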

 

In compute applications, like what supercomputers do, the connection between the GPU and the CPU (currently PCI Express) is heavily used and can be a significant bottleneck; removing that bottleneck by replacing PCIe with NVLink will increase overall throughput by quite a lot in those applications.

 

However, games do not need to use that connection very heavily, and so the communication bus between the GPU and CPU sees much less traffic in gaming compared to compute. PCI Express is nowhere near being a bottleneck for gaming. So even if full NVLink appeared on the desktop as a GPU-CPU interconnect, it wouldn’t provide any significant benefit to gaming performance compared to PCI Express.
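A crude illustration of why (both workload sizes below are invented for the example):

```python
pcie_gbs, nvlink_gbs = 15.8, 79.0  # GB/s per direction, from the sketch above

# Compute: shuttling a hypothetical 4 GB working set to the GPU each
# iteration -- the interconnect eats a big chunk of the runtime.
working_set_gb = 4.0
print(f"compute: {working_set_gb / pcie_gbs * 1e3:.0f} ms over PCIe vs "
      f"{working_set_gb / nvlink_gbs * 1e3:.0f} ms over NVLink")

# Gaming: maybe a few dozen MB of buffer/texture updates per frame (a rough
# guess), against a 16.7 ms frame budget at 60 FPS -- PCIe is nowhere near
# saturated, so a faster bus buys almost nothing.
per_frame_gb = 0.032
print(f"gaming:  {per_frame_gb / pcie_gbs * 1e3:.1f} ms of a 16.7 ms frame over PCIe")
```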

 

NVLink can also be used purely as an interconnect between multiple GPUs, without an NVLink connection from the GPUs to the CPU. So basically, it might be implemented as a replacement for the SLI bridge. If this happens, it is not clear yet whether it would be done with a new external bridge, or whether it would be integrated into a proprietary connector or interface on the motherboard itself, though I think that NVIDIA is unlikely to implement it that way unless they can make the cards still compatible with PCI Express slots.

 

But it is not even clear whether NVLink will appear on the desktop market at all; NVIDIA has not mentioned NVLink in the context of multi-GPU gaming or consumer GPUs, and so far they’ve only talked about how useful it will be for supercomputers.

 

 

This will only be possible over NVLink, meaning it will only appear in specialized systems. Unless NVLink replaces the SLI implementation we have now in consumer cards, 8-way GPU setups likely won’t see use outside of NVIDIA’s devboxes, which NVIDIA announced would have up to 8 GPUs in the future with Pascal; that is the only mention of 8-way GPU configurations thus far. Various tech “news” websites, as usual, chose to interpret that as “8-way SLI will now be a thing!!!!”

 

 

Even if NVLink turns out to be used in the desktop market as a multi-GPU interconnect, 8-way SLI doesn’t really have any impact on most consumers. Sure, it will be fun to marvel at the one or two showcase builds that will surely appear at conventions, but realistically hardly anyone is going to use this feature even if it comes to the desktop. So it is a bit funny to see people say “You should wait for Pascal because...” and list 8-way SLI as a bullet point, as if it’s a useful consideration for regular people.

But this all depends on whether NVLink will be available as a GPU interconnect in regular graphics cards in the first place. We don’t know enough at this time.

 

*UP TO 32GiB of memory. The highest-end Pascal-based GPU will be equipped with 32GiB of memory, and that is the highest end out of ALL Pascal cards, including Quadros and Teslas. It’s extremely unlikely any gaming card will have more than 16GiB, and even that will probably only be seen on a TITAN-class card, while the regular lineup will probably max out at 8GiB. That’s still quite a lot, and unlikely to make any difference for gamers, but all the same it isn’t 32GiB. Saying “Pascal’s gonna be amazing for gaming! It’s gonna have 32GiB of memory, which is way more than AMD’s next-gen gaming cards will have!” is misleading when 32GiB won’t be found outside of workstation cards that gamers don’t care about.

I mean, you could do the same kind of thing with generations gone by and say that AMD’s Hawaii GPU was equipped with 16GiB of memory (!), or that Kepler cards had up to 12GiB, by including FirePros and Quadros in your tally. But pretty much anyone will agree that, as far as gamers are concerned, Hawaii had up to 8GiB at the very most and 4GiB normally at the high end, while Kepler had 6GiB at the very most and 3GiB normally. So once again, the 32GiB figure is unlikely to apply to any cards gamers are interested in; just like the 16GiB Hawaii cards and 12GiB Kepler cards, these capacities will only be found in high-end Quadros and Teslas.

 

So yes, a Pascal card of some kind will have 32GiB of memory, but it’s probably not a good idea to get hyped about 32GiB cards being on the market if the only market you pay attention to is the gaming market, because it’s unlikely you’ll see any 32GiB cards appearing there.

 

HBM2 is an advanced type of memory which will replace GDDR5 on the new Pascal cards. It can be scaled to much higher densities and much higher bandwidth than GDDR5 while taking less space and using less power. These benefits will be seen on compute and gaming cards alike, in the form of smaller cards and higher power efficiency. But as far as performance goes, the benefit for gaming is marginal at best, since games are rarely limited by memory bandwidth.

 

Compute applications are a lot heavier on memory bandwidth than gaming, so they’ll benefit a lot more. Gaming performance may see some small improvements, but it’s not going to be much. We’ve already seen the impact of massive memory bandwidth increases with the AMD R9 Fury X; it does help the card pull a little bit ahead at higher resolutions, but it’s not a game changer by any means. With HBM, the R9 Fury X has certainly pushed well beyond any kind of bandwidth bottleneck at this point, so adding even more with Pascal won't really help beyond that. The diminishing returns barrier has been crossed into the no-returns zone. For gaming, anyway.

 

We’ve heard so far from NVIDIA that Pascal will have a terabyte per second of memory bandwidth in its full form (twice as much as the Fury X).
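The arithmetic behind both figures, as a sketch (the Fury X numbers are the published HBM1 specs; the Pascal line assumes one plausible HBM2 configuration that would produce NVIDIA’s claimed figure, which NVIDIA has not confirmed):

```python
# Memory bandwidth = interface width (bits) * per-pin rate (Gbit/s) / 8.
# Fury X (HBM1): 4 stacks, 1024-bit interface each, 1 Gbit/s per pin.
fury_x = 4 * 1024 * 1.0 / 8  # = 512 GB/s

# Pascal's claimed ~1 TB/s: the same 4096-bit total width with HBM2 at
# 2 Gbit/s per pin would get there (assumed configuration).
pascal = 4 * 1024 * 2.0 / 8  # = 1024 GB/s

print(f"Fury X: {fury_x:.0f} GB/s, full Pascal (claimed): {pascal:.0f} GB/s")
```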

 

 

So, I don’t mean to dampen the mood or say Pascal won’t be great, not at all; the reality is we have absolutely no idea how Pascal will perform in gaming, as there’s been no information on that topic yet. It might be only a marginal improvement; it might be totally amazing. Everything we’ve heard so far is about compute capabilities. As much as the sensationalist “news” sites across the web want to make it seem like all these things are applicable to gaming, they simply aren’t. So far the Pascal architecture seems to be entirely centered on high-performance compute and accelerated computing.

 

We’ll see what kind of gaming performance Pascal brings to the table in due time. For now, just sit back, relax, and be careful of what you read on the Internet.

 

 

 

 

They may put 8 GPU chips on one card for all I care... we all know that just means they'll make each one weaker, so you'll only get a small improvement.


I still don't want to upgrade my mobo yet, so I'm not interested in NVLink. I'll just take two of the next mainstream GM204 equivalents; I doubt they can ditch PCIe without going bankrupt yet.



I want to buy a new rig and probably use a 970 in it. Is it better to wait for Pascal or buy now?


That article is mostly rumors. But I'm still unsure whether to upgrade to Pascal or wait for Volta. It will depend on the Pascal 1070's price/performance.



That article is mostly rumors. But I'm still unsure whether to upgrade to Pascal or wait for Volta. It will depend on the Pascal 1070's price/performance.

Volta will be some time away.  If you are trying to run higher resolutions while using cards under the 9xx series, I would just upgrade to Pascal.  I need to upgrade because even a GTX 980 struggles to perform adequately at 1440p.


I am going to get a 1080/Ti or whatever AMD has, since I am using a backup 7950 because my 780 Ti broke.

Even if the new GPUs are only 20% faster, I think it would be stupid to buy the current top end right now.

I still hope that NVIDIA won't call a GPU the "1080".



Volta will be some time away.  If you are trying to run higher resolutions while using cards under the 9xx series, I would just upgrade to Pascal.  I need to upgrade because even a GTX 980 struggles to perform adequately at 1440p.

 

Well, I do have SLI 970s for 1440p 144 Hz. I reckon they will be perfectly fine until the end of 2017, but if Volta isn't out until 2018 then that's too far off. If it's out next year then I don't need to bother with Pascal.

 

But that's also assuming Volta will still be made for PCIe and not entirely for NVLink, and I doubt they can make that switch immediately with Pascal, so I'll likely just upgrade to a pair of Pascal 1070s or 1080s depending on the price/performance difference.

 

The whole NVLink thing is going to seriously dent my original plan of running my i7-980 until at least 2020 or until it dies. A new mobo + CPU in the same range costs far too much for too little gain, but if I do upgrade my mobo, it will be best to do it after NVLink has been out a while and developed into its second or even third generation.



Well, I do have SLI 970s for 1440p 144 Hz. I reckon they will be perfectly fine until the end of 2017, but if Volta isn't out until 2018 then that's too far off. If it's out next year then I don't need to bother with Pascal.

But that's also assuming Volta will still be made for PCIe and not entirely for NVLink, and I doubt they can make that switch immediately with Pascal, so I'll likely just upgrade to a pair of Pascal 1070s or 1080s depending on the price/performance difference.

The whole NVLink thing is going to seriously dent my original plan of running my i7-980 until at least 2020 or until it dies. A new mobo + CPU in the same range costs far too much for too little gain, but if I do upgrade my mobo, it will be best to do it after NVLink has been out a while and developed into its second or even third generation.

SLI isn't supported in the majority of games. Single cards have a lot more benefits, and I highly doubt you will get strong performance out of ports for the next few years. SLI support in ports has been abysmal.

