Vega shader count has almost no impact on gaming performance

Agost
6 minutes ago, cj09beira said:

work as ≠ being

They will probably have 2 dies that could each work alone, working together just like Ryzen. If that's true, it means there will be 8 shader engines to work with, not 4, because it will be like CrossFire but better.

But they still have to increase ROPs and geometry engines for each die too, assuming those are the limiting factors.

On a mote of dust, suspended in a sunbeam


1 hour ago, Agost said:

But they still have to increase ROPs and geometry engines for each die too, assuming those are the limiting factors.

Given that it's 2 GPUs, each of which by my calculations is almost a full Vega 10 (based on a die size of around 256 mm²), it would have around 56 CUs and 64 ROPs per die, so 128 ROPs total, and 23 TFLOPS of compute at just 1600 MHz (probably higher than this).

It actually could be the full-fat Vega 10, as Vega 10 is only 484 mm², so on 7 nm it would be around 242 mm², plus whatever is needed for the GPU-to-GPU communication.
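
For what it's worth, those figures check out (a minimal sketch in Python; 64 shaders per CU and 2 FLOPs per clock per shader are the standard GCN figures, everything else is the estimate above):

# Sanity-check the dual-die estimate (figures from the post above)
cus_per_die = 56
shaders_per_cu = 64        # standard for GCN
dies = 2
clock_ghz = 1.6            # "just 1600 MHz"
flops_per_clock = 2        # one fused multiply-add per shader per clock

shaders = cus_per_die * shaders_per_cu * dies
tflops = shaders * flops_per_clock * clock_ghz / 1000
print(shaders, "shaders,", round(tflops, 1), "TFLOPS")  # 7168 shaders, 22.9 TFLOPS

# Die-size scaling: a full node shrink roughly halves the area
vega10_mm2 = 484
print(vega10_mm2 / 2, "mm^2 per die at 7 nm")           # 242.0 mm^2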


11 hours ago, Agost said:

UPDATE: GN is probably going to run the same test on NVIDIA cards, according to a comment under the YouTube video.

It's not the same as testing the 1070/1080 or 1080 Ti/Titan Xp, but you can look at results for the 980 Ti and Titan X Maxwell. The only difference GPU-wise was about 10% fewer shaders (and TMUs); otherwise their clock speeds were the same. And the 980 Ti was less than 2% slower than the Titan X.

 

And for all intents and purposes, Maxwell and Pascal are pretty much the same.
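
To quantify that (a rough sketch using the published core counts; the under-2% gap is the figure cited above):

# GTX 980 Ti vs Titan X (Maxwell): same official clocks, different shader counts
titan_x_shaders = 3072
gtx_980_ti_shaders = 2816

deficit = 1 - gtx_980_ti_shaders / titan_x_shaders
print(f"shader deficit: {deficit:.1%}")  # shader deficit: 8.3%

# If performance scaled linearly with shader count, the 980 Ti should be ~8% slower.
# The observed gap was under 2%, so the shaders clearly weren't the bottleneck.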


4 hours ago, cj09beira said:

Given that it's 2 GPUs, each of which by my calculations is almost a full Vega 10 (based on a die size of around 256 mm²), it would have around 56 CUs and 64 ROPs per die, so 128 ROPs total, and 23 TFLOPS of compute at just 1600 MHz (probably higher than this).

It actually could be the full-fat Vega 10, as Vega 10 is only 484 mm², so on 7 nm it would be around 242 mm², plus whatever is needed for the GPU-to-GPU communication.

But 64 ROPs still don't seem to be enough for 56 GCN CUs... and the ratio would be the same if you doubled both.
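
Put in numbers (trivial, but it makes the ratio argument explicit):

# ROPs per CU is unchanged if you scale both with a second die
for dies in (1, 2):
    rops, cus = 64 * dies, 56 * dies
    print(f"{dies} die(s): {rops} ROPs / {cus} CUs = {rops / cus:.2f} ROPs per CU")
# prints 1.14 ROPs per CU either way: adding a die doesn't improve the ratio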

3 hours ago, M.Yurizaki said:

It's not the same as testing the 1070/1080 or 1080 Ti/Titan Xp, but you can look at results for the 980 Ti and Titan X Maxwell. The only difference GPU-wise was about 10% fewer shaders (and TMUs); otherwise their clock speeds were the same. And the 980 Ti was less than 2% slower than the Titan X.

 

And for all intents and purposes, Maxwell and Pascal are pretty much the same.

Both the 980 Ti and 1080 Ti boost to higher frequencies than their Titan counterparts. Almost all Pascal cards can clock the same, so the difference caused by shader count among GPUs is definitely there.

On a mote of dust, suspended in a sunbeam


43 minutes ago, Agost said:

Both the 980 Ti and 1080 Ti boost to higher frequencies than their Titan counterparts. Almost all Pascal cards can clock the same, so the difference caused by shader count among GPUs is definitely there.

Officially, the 980 Ti and Titan X Maxwell boost to the same speed. And I'm going to assume that's what the FE cards run at, since nobody seems to have clock-speed-over-time data on those things.


8 hours ago, M.Yurizaki said:

Officially, the 980 Ti and Titan X Maxwell boost to the same speed. And I'm going to assume that's what the FE cards run at, since nobody seems to have clock-speed-over-time data on those things.

Official Maxwell and Pascal clocks mean nothing, thanks to GPU Boost 2.0/3.0.

 

The 1080 Ti and Titan Xp have the same "official" boost but a very small difference in performance, due to the minimal decrease in shaders and an actually higher boost on the 1080 Ti, even if small, given by the higher power available for the shaders (300 W IIRC).

 

980 Tis... their official boost was something like 300 MHz lower than what most people would experience by buying an aftermarket card.

 

The main difference between the 980 Ti/1080 Ti and the Titan X Maxwell/Xp is in fact the cooling, which lets them get on par with or surpass the performance obtained from slightly more cores (256 more in both cases).

 

In fact, liquid-cooled Titans always perform a bit better than the cut-down Tis.

 

On a mote of dust, suspended in a sunbeam


9 hours ago, Agost said:

But 64 ROPs still don't seem to be enough for 56 GCN CUs... and the ratio would be the same if you doubled both.

Both the 980 Ti and 1080 Ti boost to higher frequencies than their Titan counterparts. Almost all Pascal cards can clock the same, so the difference caused by shader count among GPUs is definitely there.

Increasing ROPs and other front-end stuff over what is on Fiji/Vega seems to need a lot of work, so they are fixing the issue another way: by having 2 GPUs, each with its own front end. It would still be limiting, yes, but does it matter when you have 2 front ends, i.e. 128 ROPs, 8 shader engines, etc.?

I didn't suggest a GPU with fewer CUs per die and the same ROPs because AMD will probably still need it for server use too, so the extra CUs will be needed, just not for gaming (although AMD has been pushing compute for games since 2011).
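
The fill-rate side of that argument in numbers (a sketch; peak pixel fill rate is ROPs × clock, and the 1.6 GHz clock is carried over from the earlier estimate):

# Peak pixel fill rate scales with the number of front ends
rops_per_front_end = 64
clock_ghz = 1.6  # assumed, from the earlier dual-die estimate

for front_ends in (1, 2):
    rops = rops_per_front_end * front_ends
    print(f"{front_ends} front end(s): {rops} ROPs -> {rops * clock_ghz:.0f} GPixel/s")
# 1 front end(s): 64 ROPs -> 102 GPixel/s
# 2 front end(s): 128 ROPs -> 205 GPixel/s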


3 hours ago, Agost said:

Official Maxwell and Pascal clocks mean nothing, thanks to GPU Boost 2.0/3.0.

Except nobody has any real data on clock speeds over time. At least, not from the channels where I get my data. So I have nothing else to go off of.

 

Quote

The 1080 Ti and Titan Xp have the same "official" boost but a very small difference in performance, due to the minimal decrease in shaders and an actually higher boost on the 1080 Ti, even if small, given by the higher power available for the shaders (300 W IIRC).

I never compared the 1080 Ti to the Xp. I only suggested that Maxwell and Pascal aren't really any different.

 

Quote

980 Tis... their official boost was something like 300 MHz lower than what most people would experience by buying an aftermarket card.

 

The main difference between the 980 Ti/1080 Ti and the Titan X Maxwell/Xp is in fact the cooling, which lets them get on par with or surpass the performance obtained from slightly more cores (256 more in both cases).

 

In fact, liquid-cooled Titans always perform a bit better than the cut-down Tis.

I don't consider custom AIB or aftermarket solutions because they change the playing field and add more variability. My performance assessment was based on the 980 Ti FE, which, as far as I know, uses the same cooler as the Titan X.

 

Also, I thought the discussion here was "given the same clock speeds, does increasing the number of shaders do anything?" I pointed out the 980 Ti FE and Titan X because they have practically the same specs except for the number of shaders (which is the variable we wanted to test) and the amount of memory (which shouldn't have been a factor, since most games used under 6 GB back when those cards were relevant). So unless you have something that shows the 980 Ti FE and Titan X clock speeds over time during benchmark runs, the only reasonable assumption to make is that they were more or less matching. And that would allow us to figure out how much shaders do matter.


1 hour ago, M.Yurizaki said:

Except nobody has any real data on clock speeds over time. At least, not from the channels where I get my data from. So I have nothing else to go off of.

980 Ti vs X Maxwell

1080 Ti vs Xp vs X Pascal

Those are the only reviews I could find with the Titan Xp vs 1080 Ti vs Titan X (Pascal) and the 980 Ti vs Titan X Maxwell (on a first look).

The 980 Ti was averaging the same clocks as the Titan X (Maxwell) in that test, maybe a bit higher, but it was seriously crippled by the cooler. It held higher clocks in the torture test (same power limit, fewer shaders to feed), yet was on par with or slower than the Titan X (Maxwell) in the gaming tests.

Their 1080 Ti was averaging higher clocks than the Titan X Pascal and somehow lower than the Xp, but in other reviews I've seen much higher average clocks for the reference 1080 Ti (like this).

 

Quote

I never compared the 1080 Ti to the Xp. I only suggested that Maxwell and Pascal aren't really any different.

The 1080 Ti and Titan Xp are the current NVIDIA flagships and are both based on the same die, with the 1080 Ti having 256 fewer CUDA cores and a 32-bit narrower VRAM bus; that's a good example for testing. Yes, Maxwell and Pascal are practically the same thing for the most part.

 

Quote

I don't consider custom AIB or aftermarket solutions because they change the playing field and add more variability. My performance assessment was based on the 980 Ti FE, which, as far as I know, uses the same cooler as the Titan X.

 

Also, I thought the discussion here was "given the same clock speeds, does increasing the number of shaders do anything?" I pointed out the 980 Ti FE and Titan X because they have practically the same specs except for the number of shaders (which is the variable we wanted to test) and the amount of memory (which shouldn't have been a factor, since most games used under 6 GB back when those cards were relevant). So unless you have something that shows the 980 Ti FE and Titan X clock speeds over time during benchmark runs, the only reasonable assumption to make is that they were more or less matching. And that would allow us to figure out how much shaders do matter.

AIB cards are the most common ones; few people will buy a reference card if they can find a better-cooled one for the same price. Again, because of GPU Boost you can't just say "the 980 Ti/1080 Ti and Titan X Maxwell/Xp have the same clock", since it will vary from card to card. It's much easier to run an AIB card at a given frequency with fewer fluctuations than a reference one (unless you're underclocking it).

On a mote of dust, suspended in a sunbeam


7 minutes ago, Agost said:

-Snip-

While the 980 Ti and 1080 Ti do boost higher than the Titan X Maxwell and Titan Xp, it's only a ~2% clock speed boost. In the case of the 980 Ti, a ~2% clock speed bump shouldn't make up for 10% fewer shaders, and yet the GTX 980 Ti performs almost identically to the GTX Titan X Maxwell.

 

However, the whole thing does get thrown out the window in that the Titan Xp performs better than the GTX 1080 Ti despite running at roughly the same clock speed, presumably thanks to its 7% more shaders.
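
A naive linear model makes the contrast concrete (a sketch; it assumes performance scales with shaders × clock, which the 980 Ti result contradicts and the Titan Xp result roughly follows):

# Naive model: performance ~ shaders x clock
def rel_perf(shaders_a, clock_a, shaders_b, clock_b):
    return (shaders_a * clock_a) / (shaders_b * clock_b)

# 980 Ti vs Titan X Maxwell: ~2% higher boost, ~8-10% fewer shaders
print(f"{rel_perf(2816, 1.02, 3072, 1.00):.3f}")  # 0.935 -> predicts ~6% slower; observed <2%

# Titan Xp vs 1080 Ti: same clocks, ~7% more shaders
print(f"{rel_perf(3840, 1.00, 3584, 1.00):.3f}")  # 1.071 -> here the naive model roughly holds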

 

13 minutes ago, Agost said:

AIB cards are the most common ones; few people will buy a reference card if they can find a better-cooled one for the same price. Again, because of GPU Boost you can't just say "the 980 Ti/1080 Ti and Titan X Maxwell/Xp have the same clock", since it will vary from card to card. It's much easier to run an AIB card at a given frequency with fewer fluctuations than a reference one (unless you're underclocking it).

I'm not saying all 980 Tis will boost the same as the Titan X Maxwell. I was saying the 980 Ti FE card is likely to match the Titan X Maxwell based on official specifications. But you provided data that says otherwise.

 

However, I will not consider anything other than stock configurations for low-level comparisons like this, because you can always guarantee stock-level performance; you cannot guarantee overclocked performance. As you said, there are variations between cards (and GPUs) that prevent the same overclock from working on all cards. This is why I use the FE cards.


4 minutes ago, M.Yurizaki said:

However, I will not consider anything other than stock configurations for low-level comparisons like this, because you can always guarantee stock-level performance; you cannot guarantee overclocked performance. As you said, there are variations between cards (and GPUs) that prevent the same overclock from working on all cards. This is why I use the FE cards.

As seen from the difference between TPU and Tom's, even reference card clocks can vary a lot, by as much as 100 MHz, just from temperature and silicon quality (but mostly ambient temperature, if the target is set at the stock value).

AIB cards don't have to be overclocked; they just have the cooling headroom to make GPU Boost work properly, which gives them more stable and easier-to-compare clocks. You can make every Pascal card run at 1700 MHz to test shader impact, even if that's mostly underclocking; there's no need to push FE cards to 2000 MHz (although they can actually reach that with maxed-out fan speed, power, and temperature targets).
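
That normalization is easy to sketch (the FPS numbers below are hypothetical placeholders, only there to show how the method would isolate shader count):

# Pin every card to the same clock; any remaining FPS gap then reflects shader count
fixed_clock_mhz = 1700

# Hypothetical FPS figures, purely for illustration (not measured data):
results = {
    "GTX 1080 Ti": (3584, 100.0),  # (shader count, FPS at the fixed clock)
    "Titan Xp":    (3840, 103.0),
}

for card, (shaders, fps) in results.items():
    print(f"{card}: {fps / shaders * 1000:.2f} FPS per 1000 shaders at {fixed_clock_mhz} MHz")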

On a mote of dust, suspended in a sunbeam


Just now, Agost said:

As seen from the difference between TPU and Tom's, even reference card clocks can vary a lot, by as much as 100 MHz, just from temperature and silicon quality (but mostly ambient temperature, if the target is set at the stock value).

AIB cards don't have to be overclocked; they just have the cooling headroom to make GPU Boost work properly, which gives them more stable and easier-to-compare clocks. You can make every Pascal card run at 1700 MHz to test shader impact, even if that's mostly underclocking; there's no need to push FE cards to 2000 MHz (although they can actually reach that with maxed-out fan speed, power, and temperature targets).

However, if we're comparing anything against a Titan, and considering the Titan doesn't come with anything other than NVIDIA's stock cooler, the only fair comparison is to use an FE card.


20 minutes ago, M.Yurizaki said:

However, if we're comparing anything against a Titan, and considering the Titan doesn't come with anything other than NVIDIA's stock cooler, the only fair comparison is to use an FE card.

Or liquid-cool the hell out of them.

On a mote of dust, suspended in a sunbeam


Who's honestly surprised by this?

I loved an article by Tom's Hardware recently, "Radeon through the ages" or something along those lines...

Basically, what I got from it is that they've barely increased the number of ROPs per compute unit since the 2000s, which is ultimately the culprit for 3D and graphics performance.

For compute workloads, however, the shaders do matter, and the difference is noticeable.

CPU: Intel i7 5820K @ 4.20 GHz | Motherboard: MSI X99S SLI PLUS | RAM: Corsair LPX 16GB DDR4 @ 2666MHz | GPU: Sapphire R9 Fury (x2 CrossFire)
Storage: Samsung 950Pro 512GB // OCZ Vector150 240GB // Seagate 1TB | PSU: Seasonic 1050 Snow Silent | Case: NZXT H440 | Cooling: Nepton 240M
FireStrike // Extreme // Ultra // 8K // 16K

 


4 hours ago, DXMember said:

Who's honestly surprised by this?

I loved an article by Tom's Hardware recently, "Radeon through the ages" or something along those lines...

Basically, what I got from it is that they've barely increased the number of ROPs per compute unit since the 2000s, which is ultimately the culprit for 3D and graphics performance.

For compute workloads, however, the shaders do matter, and the difference is noticeable.

And they've been stuck on 64 ROPs since 2013 -___________-

On a mote of dust, suspended in a sunbeam

