
Nvidia thinks "Pascal is just unbeatable" and already decided not to move Volta ahead

Misanthrope
1 hour ago, rrubberr said:

Their profit margin has been negative for the past four years virtually constantly. Apparently, it isn't working.

Only negative when taking into account paying off debt; reducing debt is a good thing and means more than just posting a net profit.


6 hours ago, Jito463 said:

I'm not talking about the FX series, I'm talking about Ryzen/TR/Epyc.  We don't know the results of that yet, as the products only launched 6 months ago.  They couldn't price the FX line too high, as they just didn't have the performance to compete against Intel in anything but the low-mid range.  Now they do.

Zen CPUs are selling en masse at higher price points than most of the FX series ever did. And that's just Ryzen.

 

Oh, and it's cheaper to produce!


8 hours ago, Tiwaz said:

100% agree

Ryzen is great for rendering but it just sucks at single-threaded performance; the single-threaded performance of a Ryzen is comparable to an Ivy Bridge CPU

Pretty sure the IPC of Ryzen is comparable to Broadwell. Then again, the IPC of Broadwell and Ivy Bridge aren't that far apart, so I could see why one might think that.
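For anyone who wants to sanity-check claims like this themselves: an IPC comparison is basically a single-threaded score divided by clock speed. A quick sketch of that arithmetic, with completely made-up placeholder scores and clocks (not real benchmark data for any of these chips):

```python
# Rough IPC comparison: normalize a single-threaded benchmark score by
# clock speed. All numbers below are hypothetical placeholders, NOT
# real results for Ryzen, Broadwell, or Ivy Bridge.

def ipc_score(single_thread_score: float, clock_ghz: float) -> float:
    """Performance per GHz, a crude stand-in for IPC."""
    return single_thread_score / clock_ghz

chips = {
    "Ryzen (hypothetical)":      ipc_score(150.0, 3.9),
    "Broadwell (hypothetical)":  ipc_score(160.0, 4.0),
    "Ivy Bridge (hypothetical)": ipc_score(155.0, 4.2),
}

for name, score in sorted(chips.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f} points/GHz")
```

The point of dividing by clock is that a chip can post similar absolute scores while having better IPC at a lower clock, which is exactly the Broadwell-vs-Ivy comparison above.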


19 hours ago, Sakkura said:

RX Vega 56 has much better efficiency than the RX 580, so...

Until you add even a tiny bit more voltage, at which point power draw almost doubles and has been measured at nearly as much as an entire rig, just for the GPU.

 

500w for a single GPU?

 

On topic, I think Nvidia is just being business smart here. Jensen is correct: Vega 64 trades blows with the 1080, gets wrecked by the 1080 Ti, and the Titan Xp is in another league entirely. Why would Nvidia release the next big step at this point? It's better for them to wait as long as possible and squeeze as much profit out of Pascal as they can.

 

They could hold off until Q2/Q3 2018, release Volta as the Titan Xp replacement, then do the old reshuffle where everything else moves down a tier (so the Titan Xp becomes the 1180 Ti, the 1080 Ti becomes the 1180, the 1080 becomes the 1170, etc.) and still be ahead of AMD in terms of performance.

 

Unlike Intel, Nvidia hasn't been sitting on its ass for four years.

Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600Mhz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144hz HDR FreeSync 2 | Ubuntu 20.04.2 LTS |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)


People seem really bothered by this news, but did Volta really need to come out this year?

Pascal only just rolled out in 2016, and the V100 chip alone came from an R&D expenditure to the tune of 3 billion USD.

Nothing's wrong with the Pascal architecture: it's a significant improvement over Maxwell and is very power efficient for the frames it puts out (compared to the competition).

 

If you want an upgrade, just buy a Pascal card.

If you already have one, buy a better one.

If you have a Titan XP, buy a second one.

If you have Titan XPs in SLI... then you can complain, but not before posting rig pics, because c'mon. We need to see that loop, bruh.

 

xD


20 hours ago, Terryv said:

Are there any comparisons between the current iteration of Volta and its equivalent in Pascal?

 

P100 vs V100

There's probably a slide deck or a summary sheet from Nvidia sales that's given to interested enterprise customers, but those aren't people who will put out a review or share that kind of information.


1 hour ago, Master Disaster said:

Until you add even a tiny bit more voltage, at which point power draw almost doubles and has been measured at nearly as much as an entire rig, just for the GPU.

Why would you, though, when it should be more than enough out of the box for any OC? You'll probably hit silicon limits before anything else.

 

I'd say it's better to lower the voltage, raise the power target, and get lower temps, noise and power draw while holding boost speeds better (the undervolt should offset the increase from the higher power target, I believe).
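Worth spelling out why undervolting pays off so much here: dynamic power scales roughly with frequency times voltage squared (P ≈ C·f·V²), so voltage changes hit power quadratically while clock changes are only linear. A toy illustration of that scaling, with made-up clocks and voltages (not actual Vega figures):

```python
# Dynamic power scales roughly as P = C * f * V^2, so a small voltage
# bump costs far more power than the clock gain it buys. All numbers
# here are illustrative placeholders, not real Vega specs.

def dynamic_power(c_eff: float, freq_mhz: float, volts: float) -> float:
    """Relative dynamic power under the classic CMOS scaling model."""
    return c_eff * freq_mhz * volts ** 2

stock       = dynamic_power(c_eff=1.0, freq_mhz=1500, volts=1.00)
overvolted  = dynamic_power(c_eff=1.0, freq_mhz=1650, volts=1.20)  # +10% clock, +0.2 V
undervolted = dynamic_power(c_eff=1.0, freq_mhz=1500, volts=0.95)

print(f"overvolt:  {overvolted / stock:.2f}x stock power")   # 1.58x for a 10% clock gain
print(f"undervolt: {undervolted / stock:.2f}x stock power")  # 0.90x at the same clock
```

That quadratic voltage term is why "a tiny bit more V" nearly doubles power draw on cards already pushed near their efficiency limit, and why dropping voltage recovers so much headroom.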


4 hours ago, Brooksie359 said:

Pretty sure the IPC of Ryzen is comparable to Broadwell. Then again, the IPC of Broadwell and Ivy Bridge aren't that far apart, so I could see why one might think that.

Though for budget it's awesome. I mean, you can get an R5 1400, OC that thing to 3.8GHz and get 1600X-like performance.


I think this is a bad move for them. They're starting to pull an "Intel" and not treating AMD as a threat.

Intel Xeon X5650 OC'd to 4Ghz  Sapphire R9 290 Vapor X 4GB  |  Vengeance® K70 & M65  W10 Pro



13 hours ago, JurunceNK said:

 

The 56 draws less power than my Fury, and I'm fine with my 530W PSU, so 750W is a bit extreme; 600W should be enough for almost everyone, granted my CPU (a 6600K) isn't that power hungry. With power saving mode, the 56 is on par with reference 1070s while drawing around 30W more; that's not bad for efficiency in itself.


25 minutes ago, MrMarriarty said:

I think this is a bad move for them. They're starting to pull an "Intel" and not treating AMD as a threat.

Well maybe Pascal is unbeatable because they can't get Volta to beat it...


1 hour ago, laminutederire said:

Well maybe Pascal is unbeatable because they can't get Volta to beat it...

The information about the GV100 suggests almost no IPC increase over the GP100 outside of the tensor cores. There will be a little with regard to gaming, but expect the 1170 to be pretty much a 1080 with some slight uplift. Their dies are just going to get bigger.

 

Also, it's going to run hotter than Pascal, it seems. (Though without a die shrink or a completely new uArch, that was probably a given. Granted, Pascal is an all-timer.)


4 hours ago, LyondellBasell said:

There's probably a slide deck or a summary sheet from Nvidia sales that's given to interested enterprise customers, but those aren't people who will put out a review or share that kind of information.

Details on the Volta arch improvements: Here

Some comparisons between P100 and V100: Here

Everyone loves slides, so here are a couple of slides: Slides


38 minutes ago, Taf the Ghost said:

The information about the GV100 suggests almost no IPC increase over the GP100 outside of the tensor cores. There will be a little with regard to gaming, but expect the 1170 to be pretty much a 1080 with some slight uplift. Their dies are just going to get bigger.

 

Also, it's going to run hotter than Pascal, it seems. (Though without a die shrink or a completely new uArch, that was probably a given. Granted, Pascal is an all-timer.)

Well, the SMs have been changed, along with SIMT; also, MPS is now hardware accelerated instead of software based like it was on Pascal, among other enhancements to the architecture outside of the tensor cores.

 

They aren't slated to run any hotter; the SXM2 version is allowed to run up to a 300W TDP, and the PCIe card is constrained to 250W, the same as Pascal.
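For readers wondering what the tensor cores everyone keeps mentioning actually compute: each one performs a fused 4×4 matrix multiply-accumulate per clock, D = A×B + C, with FP16 inputs and FP32 accumulation. A numpy emulation of the math of a single such op (illustrative only; actually programming them goes through CUDA's WMMA API, and these input matrices are arbitrary examples):

```python
import numpy as np

# Each Volta tensor core does a fused 4x4 matrix multiply-accumulate,
# D = A @ B + C, with FP16 multiplies and FP32 accumulation. This just
# emulates the arithmetic of one such op in numpy.

A = np.arange(16, dtype=np.float16).reshape(4, 4) / 16  # FP16 input
B = np.eye(4, dtype=np.float16)                          # FP16 input
C = np.ones((4, 4), dtype=np.float32)                    # FP32 accumulator

# FP16 multiply, FP32 accumulate: the mixed-precision trick that gives
# the big deep-learning throughput numbers.
D = A.astype(np.float32) @ B.astype(np.float32) + C

print(D.shape)         # (4, 4)
print(float(D[0, 0]))  # A[0,0] * 1 + 1 = 1.0
```

This is also why the GV100 uplift is so workload-dependent: the tensor cores are a huge win for matrix-heavy compute, but graphics shaders don't get to use them.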


22 minutes ago, Dylanc1500 said:

Well, the SMs have been changed, along with SIMT; also, MPS is now hardware accelerated instead of software based like it was on Pascal, among other enhancements to the architecture outside of the tensor cores.

 

They aren't slated to run any hotter; the SXM2 version is allowed to run up to a 300W TDP, and the PCIe card is constrained to 250W, the same as Pascal.

Well, I should have been more careful. The GV100 will have all of the tweaks Nvidia can throw at it for HPC and the other tasks the non-consumer GPU is being pushed to handle. The released GP100-to-GV100 information doesn't point to anything large hitting the consumer graphics versions of Volta, but we can expect the GPUs to get bigger, as it's not a true node shrink.

 

At the same time, the word is the gaming SKUs will run higher power because, well, they'll have more SMs per tier.


2 minutes ago, Taf the Ghost said:

Well, I should have been more careful. The GV100 will have all of the tweaks Nvidia can throw at it for HPC and the other tasks the non-consumer GPU is being pushed to handle. The released GP100-to-GV100 information doesn't point to anything large hitting the consumer graphics versions of Volta, but we can expect the GPUs to get bigger, as it's not a true node shrink.

 

At the same time, the word is the gaming SKUs will run higher power because, well, they'll have more SMs per tier.

I can't say much about that. I can only speak in a general sense about the architecture as a whole, and specifically about the GV100 (Tesla).


5 minutes ago, Dylanc1500 said:

I can't say much about that. I can only speak in a general sense about the architecture as a whole, and specifically about the GV100 (Tesla).

Thanks for catching that, though. I was thinking about the latter part of where I was going before I finished the first part, and messed up the information when I should have been more careful.


2 minutes ago, Taf the Ghost said:

Thanks for catching that, though. I was thinking about the latter part of where I was going before I finished the first part, and messed up the information when I should have been more careful.

It's no big deal. Heck, if I delved deeper into the more in-depth and specific improvements of the arch, most of the less technically inclined people would be entirely lost. (Not that you're one of those people; I just don't know your level of technical knowledge.)


3 minutes ago, Dylanc1500 said:

It's no big deal. Heck, if I delved deeper into the more in-depth and specific improvements of the arch, most of the less technically inclined people would be entirely lost. (Not that you're one of those people; I just don't know your level of technical knowledge.)

GPU technicals get so deep. Was talking with an engineer I know (not at any GPU company), and it's an entire design world unto itself. Plus you tend to lose most people when you talk about "pipeline length". Frankly, I try to not get too deep into that, simply because I like CPUs more. I just don't like saying stupid things because I'm shoving 3 thoughts together, haha.


7 minutes ago, Taf the Ghost said:

GPU technicals get so deep. Was talking with an engineer I know (not at any GPU company), and it's an entire design world unto itself. Plus you tend to lose most people when you talk about "pipeline length". Frankly, I try to not get too deep into that, simply because I like CPUs more. I just don't like saying stupid things because I'm shoving 3 thoughts together, haha.

I've read through the 2000+ page Intel x86 and IBM POWER instruction manuals and found them much easier to understand than the inner workings of modern GPUs at the metal level. Especially with NVLink 2.0 and the GPU-to-CPU communication between Nvidia and the new POWER9.

 

Don't get me started on having multiple thoughts; I'll ramble and not finish any of them, lol.


2 hours ago, Taf the Ghost said:

The information about the GV100 suggests almost no IPC increase over the GP100 outside of the tensor cores. There will be a little with regard to gaming, but expect the 1170 to be pretty much a 1080 with some slight uplift. Their dies are just going to get bigger.

 

Also, it's going to run hotter than Pascal, it seems. (Though without a die shrink or a completely new uArch, that was probably a given. Granted, Pascal is an all-timer.)

Yeah, I know. That's why I find it extremely arrogant of them to say that, since they just don't have a better value proposition yet; it all revolves around a few additions that don't improve gaming much, plus a core count increase. They've been doing the same thing since Maxwell, just tweaking and changing manufacturing processes. On the one hand that's good for current users, as driver support will hold up better over time than when they changed architectures more radically; but on the other, they shouldn't be arrogant, especially when Vega comes closer to their perf/watt figures (at least the 56 does).


15 hours ago, rrubberr said:

I would never recommend using a consumer grade CPU for real work, much less rendering for days or hours on end, as is my use case.

 

Comparing it to a 7900X, I don't quite see what Threadripper is better at, though. Based on the three videos and two reviews I've seen, it doesn't show a performance gain despite 60% more cores and an extra 30 watts.

 

Why wouldn't my workstation perform within my "expectations?" 

 

 

Then why do Celeron/Pentium/Core i3, the lowest end consumer chips, support ECC memory?



7 minutes ago, laminutederire said:

Yeah, I know. That's why I find it extremely arrogant of them to say that, since they just don't have a better value proposition yet; it all revolves around a few additions that don't improve gaming much, plus a core count increase. They've been doing the same thing since Maxwell, just tweaking and changing manufacturing processes. On the one hand that's good for current users, as driver support will hold up better over time than when they changed architectures more radically; but on the other, they shouldn't be arrogant, especially when Vega comes closer to their perf/watt figures (at least the 56 does).

Lol, I was just responding about the quoted part being a mess of thoughts.

 

Maxwell + Pascal are some all-time great uArchs, but you can only stretch a uArch so far. As you shrink nodes, new issues crop up/new abilities become available. Nvidia won't be on anything radically different until 2020 with whatever replaces Volta. RTG's Navi is going to really shift the market around.


14 minutes ago, laminutederire said:

but on the other, they shouldn't be arrogant, especially when Vega comes closer to their perf/watt figures (at least the 56 does).

What world do you live in?

Vega 56 is nowhere near close in terms of perf/watt, at least not in games.

 

In the best case for AMD, 4K gaming, even the 1060 gives about 30% more performance per watt.

The 1070 has over 40% higher performance for the same watt.

The 1080? Over 50% higher performance for the same watt.

 

I wouldn't call 50%, 40% or even 30% "close".

 

 

And in terms of performance the 1080 Ti is still a league of its own with about 30% higher performance than the Vega 64 model (and the gap gets even wider when you start factoring in overclocking of both cards).


2 hours ago, LAwLz said:

What world do you live in?

Vega 56 is nowhere near close in terms of perf/watt, at least not in games.

 

In the best case for AMD, 4K gaming, even the 1060 gives about 30% more performance per watt.

The 1070 has over 40% higher performance for the same watt.

The 1080? Over 50% higher performance for the same watt.

 

I wouldn't call 50%, 40% or even 30% "close".

 

 

And in terms of performance the 1080 Ti is still a league of its own with about 30% higher performance than the Vega 64 model (and the gap gets even wider when you start factoring in overclocking of both cards).

Where have you seen those figures though?

Basing my calculations on this review:

With power saving mode, you get around 25% more performance per watt for the 1070, not 40%. That's nearly half your figure.

Besides, I said closer, compared to Polaris etc. At least in Hitman there's a significant perf/watt improvement under power saving mode (13% in that game). I don't have the figures for every game, but that's probably a valid trend for the improvement from Polaris to Vega (which could be worse vs. the 1070, as Hitman is a good game for AMD, but closer anyway).
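The perf/watt arithmetic both sides of this argument are doing is just average FPS divided by board power. A quick sketch with hypothetical FPS and wattage figures (placeholders to show the math, not numbers taken from any particular review):

```python
# Performance-per-watt comparison. The FPS and wattage figures below
# are hypothetical placeholders illustrating the arithmetic, not
# measured data for these cards.

def perf_per_watt(avg_fps: float, board_power_w: float) -> float:
    return avg_fps / board_power_w

vega56_saver = perf_per_watt(avg_fps=60.0, board_power_w=165.0)  # power-saving mode
gtx1070      = perf_per_watt(avg_fps=62.0, board_power_w=150.0)

lead = gtx1070 / vega56_saver - 1.0
print(f"1070 leads Vega 56 (power save) by {lead:.1%} perf/watt")
```

Which card "wins" this metric swings heavily on the power figure used (total board power vs. whole-system draw, stock BIOS vs. power-saving mode), which is likely why the two of you are quoting such different percentages.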

