AMD RX Vega reviews

thexzenon
1 hour ago, leadeater said:

I think Nvidia just tailors their architecture better to the specific task it's likely to be used for, gaming. The Gx100 dies actually have slightly different designs than the Gx10x lesser dies, I think that might only be a recent thing though? Pascal onward.

 

AMD is a bit like the good old 'Jack of all trades, master of none' kind of thing.

 

I don't think everyone quite remembers just how big a leap Pascal was, and even that really wasn't made clear until the 1080Ti. The original 1080 consumed less power than the 980Ti and far outperformed it; then the 1080Ti came along and basically crushed that again, a huge performance increase for the same power as the 980Ti.

 

The Fury X during gaming didn't really use much more power than the 980Ti, and Vega is the same GPU architecture only scaled up in clock rate to increase the performance; the extra transistors required to do that need power, so it must use more than the Fury X. TBR is supposed to reduce power, but it's unclear if it's actually on even for RX Vega. I'm going to have to assume yes, since overall RX Vega is better in performance and power than Vega FE.

I just look at GCN from inception until now and every generation seems to take longer and the performance enhancements each time are diminishing. Almost like the fundamental structure of GCN turned out not to be a good long term prospect.



5 minutes ago, mr moose said:

I just look at GCN from inception until now and every generation seems to take longer and the performance enhancements each time are diminishing. Almost like the fundamental structure of GCN turned out not to be a good long term prospect.

Yeah, seems like it. It was rather good across the first few generations, but after five it seems sorely in want of a more major redesign. I think it was Anandtech that pointed out rather well in their review that Vega is still almost exactly Fiji in almost every significant way. Sure, the memory controller has been changed, the interconnect is now IF with its own clock reference, and it has HBM2, yada yada, but the GPU itself is much the same. Adding pipeline stages to increase clocks isn't what I'd call an architecture redesign anyway.

 

I like the extra features that were added to Vega; they sound really useful, just not so much for gaming.


11 hours ago, lots of unexplainable lag said:

May I add that it's doing so with most of its new features still disabled by drivers (rasterizer, HBCC, primitive shaders)? Sure, it's still a half-arsed rushed launch, but if it's already on par with or beating the green team, I think it's safe to say FineWine will start doing its thing in a month or 2-3.

9 hours ago, LAwLz said:

[Citation Needed]

I would also like to know what you think those features are and what they will do to performance. There is a lot of misunderstanding about Vega right now and I certainly don't want more people just parroting the hype train on /r/AMD.

Still waiting...


11 hours ago, lots of unexplainable lag said:

May I add that it's doing so with most of its new features still disabled by drivers (rasterizer, HBCC, primitive shaders)? Sure, it's still a half-arsed rushed launch, but if it's already on par with or beating the green team, I think it's safe to say FineWine will start doing its thing in a month or 2-3.

I would like to see the source too please.

Obviously Vega's raw performance will get optimized over time by AMD, but this is the first I'm hearing about wholesale features being disabled.

 

At the Vega FE launch I know that stuff was still disabled, but now?


I have a question: I see that AMD is doing better at compute while being cheaper than Nvidia.

But isn't the increased power draw going to be an issue? Sure, for the one computer at your home it won't matter, but I can see this being a problem when you have a lot more of them and suddenly have to take that into account.


15 hours ago, VagabondWraith said:

Then wait for Volta xD

Not good enough. We need to wait for the GPU after that! 

 

I'm amazed how long this 980Ti is going to last me. Previously the oldest GPU I had in use was an 8800 Ultra. (BFG upgraded me from the GTX because I RMA'd due to coil whine.)

 

I'll likely wait for the Volta Titan and just get that; it should last another 3 years before an upgrade.


2 hours ago, LAwLz said:

Still waiting...

Here: https://www.computerbase.de/2017-08/radeon-rx-vega-64-56-test/8/

 

Under the image of Vega 64 with the spare die:

 

Definitiv nicht fertig sind die Treiber. Der Draw Stream Binning Rasterizer (DSBR) ist zwar offenbar noch in letzter Sekunde fertig geworden, der High Bandwith Cache Controller (HBCC) ist aber immer noch standardmäßig abgeschaltet und Primitive Shader fehlen komplett.

 

aka

 

The drivers are definitely not finished. The Draw Stream Binning Rasterizer (DSBR) was apparently only finished at the last second, but the High Bandwidth Cache Controller (HBCC) is still switched off by default and Primitive Shaders are missing completely.

 

Ye ole' train


5 hours ago, AnonymousGuy said:

I'll bet money no one is going to actually be able to buy a Vega card at anywhere near MSRP so it's pointless to even say "it's better value".

 

Basically AMD is going into a gun fight with a wet noodle with Vega vs. Volta.  They might not even survive long enough for Navi to come out.

Still, my monitor would have cost about 220 USD extra (converted from NOK) had I gone with the Nvidia option. Vega 56 can cost 100 more than the 1070 and, to me, it will still be a serious contender.

Currently, the Vega 64 is slightly cheaper than the cheapest stock-cooled Nvidia 1080 in Norway. There are no prices available anywhere for the Vega 56 yet.


3 hours ago, Humbug said:

I would like to see the source too please.

HBCC is currently off by default; however, with 8 gigs of RAM, the only way you would see a significant difference in performance in current games is if you loaded up Fallout 4 at 4K with every possible bloated texture mod you could think of. The one reviewer I saw who turned it on saw at most a 2 FPS increase in current games, since nothing is memory-bound as of yet. Maybe Cyberpunk 2077 will be able to push it, or Vega 11 will have smaller memory sizes which will allow it to actually help.


16 hours ago, dorin said:

 

You need to get your facts straight. AMD hasn't hyped Vega for quite a while now, and the only performance numbers they showed were accurate. The overhyping hasn't been there for a while either; it was only kept alive by some individuals who weren't part of AMD in any way.

Both products aren't that bad. They're not great, but they remain okay anyway.


AMD haven't hyped Vega that much; it's the AMD fans that have hyped it up.


2 hours ago, lots of unexplainable lag said:

Here: https://www.computerbase.de/2017-08/radeon-rx-vega-64-56-test/8/

 

Under the image of Vega 64 with the spare die:

 

Definitiv nicht fertig sind die Treiber. Der Draw Stream Binning Rasterizer (DSBR) ist zwar offenbar noch in letzter Sekunde fertig geworden, der High Bandwith Cache Controller (HBCC) ist aber immer noch standardmäßig abgeschaltet und Primitive Shader fehlen komplett.

 

aka

 

The drivers are definitely not finished. The Draw Stream Binning Rasterizer (DSBR) was apparently only finished at the last second, but the High Bandwidth Cache Controller (HBCC) is still switched off by default and Primitive Shaders are missing completely.

 

Tile-based rendering is used in the power saving mode to save 50 to 100W on the V56 and 80 to 150W on the V64, judging from TechReport's article on the matter, while giving up less than a 5% decrease in performance, which is a somewhat good sign. It's not perfect, but it can have decent efficiency when it's less powerful, which could get interesting with multi-die products.
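Those reported savings imply a sizable swing in performance per watt. A quick back-of-envelope sketch, where every number is an assumption: ~295W typical gaming draw for a stock Vega 64, the ~100W midpoint of the range above, and a 5% performance penalty:

```python
# Rough perf/W estimate for Vega 64's power-save mode.
# Assumed numbers: ~295W stock gaming draw, ~100W saved (midpoint of the
# 80-150W range quoted above), and a 5% performance penalty.
stock_power_w = 295.0
saved_w = 100.0
perf_loss = 0.05

rel_perf = 1.0 - perf_loss                              # performance vs stock
rel_power = (stock_power_w - saved_w) / stock_power_w   # power draw vs stock
rel_perf_per_watt = rel_perf / rel_power

print(f"Power-save mode: ~{(rel_perf_per_watt - 1) * 100:.0f}% better perf/W")
```

If those assumed figures hold, power-save mode ends up roughly 40% more efficient than stock, which is exactly what would make a multi-die product interesting.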


2 hours ago, lots of unexplainable lag said:

Here: https://www.computerbase.de/2017-08/radeon-rx-vega-64-56-test/8/

 

Under the image of Vega 64 with the spare die:

 

Definitiv nicht fertig sind die Treiber. Der Draw Stream Binning Rasterizer (DSBR) ist zwar offenbar noch in letzter Sekunde fertig geworden, der High Bandwith Cache Controller (HBCC) ist aber immer noch standardmäßig abgeschaltet und Primitive Shader fehlen komplett.

 

aka

 

The drivers are definitely not finished. The Draw Stream Binning Rasterizer (DSBR) was apparently only finished at the last second, but the High Bandwidth Cache Controller (HBCC) is still switched off by default and Primitive Shaders are missing completely.

 

OK, and what are those features and what do you think they will do to performance?

I've seen a lot of people shout about how "this isn't enabled!" and trying to build hype and future expectations based on that, but then when asked about the features they are hyping they have no clue what they are or even what they do.


8 minutes ago, LAwLz said:

OK, and what are those features and what do you think they will do to performance?

I've seen a lot of people shout about how "this isn't enabled!" and trying to build hype and future expectations based on that, but then when asked about the features they are hyping they have no clue what they are or even what they do.

For DSBR, it seems to be a power efficiency feature in power saving mode.

As I saw in techreport it does that:

 

14 minutes ago, laminutederire said:

Tile-based rendering is used in the power saving mode to save 50 to 100W on the V56 and 80 to 150W on the V64, judging from TechReport's article on the matter, while giving up less than a 5% decrease in performance

(Article link)

It's a good sign if Navi indeed is a multi-die solution akin to Ryzen, because the power efficiency gain from power saving (plus a bit of undervolting, maybe?) is significant, making those dies, shrunk to 7nm and then glued together, quite a compelling solution.


9 minutes ago, laminutederire said:

For DSBR, it seems to be a power efficiency feature in power saving mode.

As I saw in techreport it does that:

 

(Article link)

It's a good sign if Navi indeed is a multi-die solution akin to Ryzen, because the power efficiency gain from power saving (plus a bit of undervolting, maybe?) is significant, making those dies, shrunk to 7nm and then glued together, quite a compelling solution.

Isn't it amazing that I called that last month while others were telling me TBR would bring a 50% performance increase?

 

This is why I am asking lag about the features. Because I have seen a copious amount of people try to hype things they don't understand the basics of.

They just hear that some feature is disabled and then they start hyping it like crazy when they don't even know what it does.

 

Check out this post:

On 2017-07-21 at 6:27 AM, Captain_Tom said:

However I have to say that the people doubting there is a driver issue...... Well you are idiotic to be frank.   

 

1. Tiled Rasterization is turned OFF. This could bring a full 50% increase in performance.

And here was my response:

On 2017-07-21 at 0:14 PM, LAwLz said:

Based on other implementations and just logical sense, 50% seems way overboard. The numbers I've seen being talked about by far more knowledgeable people than myself are more along the lines of ~5%, and that's for very high resolution gaming (like 4K) where there is pressure on the memory.

Where did you get that 50% number from? Tile based rasterization has never been touted as a performance increasing feature. It's about efficiency, not performance.


1 hour ago, ravenshrike said:

HBCC is currently off by default; however, with 8 gigs of RAM, the only way you would see a significant difference in performance in current games is if you loaded up Fallout 4 at 4K with every possible bloated texture mod you could think of. The one reviewer I saw who turned it on saw at most a 2 FPS increase in current games, since nothing is memory-bound as of yet. Maybe Cyberpunk 2077 will be able to push it, or Vega 11 will have smaller memory sizes which will allow it to actually help.

HBCC seems to have a twofold purpose: 1) future-proof the GPU and allow smaller-SKU GPUs to avoid bottlenecking in the future, and 2) run multiple memory types from a unified memory controller architecture (that's probably more for future APU functions).


9 minutes ago, LAwLz said:

Isn't it amazing that I called that last month while others were telling me TBR would bring a 50% performance increase?

 

This is why I am asking lag about the features. Because I have seen a copious amount of people try to hype things they don't understand the basics of.

They just hear that some feature is disabled and then they start hyping it like crazy when they don't even know what it does.

 

Check out this post:

And here was my response:

 

It does provide a nice 20% or so boost in performance per watt, which is a good improvement in power efficiency, unlike what the rest of this launch seems to be.


40 minutes ago, laminutederire said:

For DSBR, it seems to be a power efficiency feature in power saving mode.

As I saw in techreport it does that:

 

(Article link)


I don't see mention of DSBR being enabled ONLY in power saving mode anywhere.

I just think the power saver mode exploits GCN's capability to save a lot of power just by cutting the power target and clocks, or better, by making the GPU run at frequencies more suited to Vega, since AMD has practically been forced to run its video cards outside their higher-efficiency range ever since the 7970 GHz Edition.

By now we all know that GCN scales badly when going up in frequency (and core count), but dialing the core clock back a little gives big improvements in power consumption. Just think of the RX 4xx series and the R9 Nano.
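Why a small clock cut saves disproportionate power: dynamic power scales roughly with f * V^2, and near the top of the voltage/frequency curve the voltage has to climb with frequency, so power grows close to cubically. A toy model (the 1:1 voltage-frequency scaling is purely an illustrative assumption, not a measured Vega curve):

```python
# Toy model: dynamic power ~ f * V^2. Near the top of the V/f curve we
# assume voltage tracks frequency 1:1, which makes power roughly cubic in f.
def rel_dynamic_power(rel_freq: float) -> float:
    rel_voltage = rel_freq  # assumption: V scales linearly with f up here
    return rel_freq * rel_voltage ** 2

for cut in (0.95, 0.90, 0.85):
    saving = (1.0 - rel_dynamic_power(cut)) * 100
    print(f"{(1 - cut) * 100:.0f}% clock cut -> ~{saving:.0f}% less dynamic power")
```

Under that model a 10% clock cut already drops dynamic power by roughly a quarter, which is consistent with how efficient the R9 Nano and RX 4xx cards were at their lower clocks.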


46 minutes ago, laminutederire said:

It's a good sign if Navi indeed is a multi-die solution akin to Ryzen, because the power efficiency gain from power saving (plus a bit of undervolting, maybe?) is significant, making those dies, shrunk to 7nm and then glued together, quite a compelling solution.

I would temper those expectations of a multi-GPU MCM design for Navi. Infinity Fabric really is a very good technology with a great foundation, but there are a few things that make this unlikely in this sort of time frame.

If we look at Zen's Infinity Fabric as a baseline for where the technology is at now, we can make some assessments, not very good ones but all we have, about what might be needed for a GPU.

On the same CPU package, Zen has an IF bandwidth of 42GB/s between dies. We can rule out the bandwidth to the memory controllers etc., as that is in use now with Vega on an extremely wide bus. So, to keep a long story short (unless you do want to go into it deeper), for an IF-based MCM GPU we would likely need to increase the IF bandwidth by 10 times, which I just don't think is going to happen in the required time frame using a low enough amount of die space and transistors.

The upcoming APUs will tell us much more than speculation and bandwidth calculations will, though; also remember the GPUs in APUs are very low end, so the bandwidth required isn't that high. If we see the link between Zen and Vega on the APU at a similar 42GB/s mark, then count MCM out for Navi until more solid information comes out.
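The "roughly 10 times" figure can be sanity-checked with a quick ratio. The 42GB/s Zen inter-die figure is from the post above; the ~484GB/s HBM2 bandwidth of Vega 64 is my assumption for the bandwidth class a die-to-die GPU link would need to approach:

```python
# Sanity check of the ~10x IF scaling claim.
# 42 GB/s: Zen's inter-die Infinity Fabric bandwidth (from the post above).
# 484 GB/s: assumed Vega 64 HBM2 bandwidth, used here as a proxy for the
# bandwidth a die-to-die GPU link would need to approach.
zen_if_bw_gbs = 42.0
vega_hbm2_bw_gbs = 484.0

scale_needed = vega_hbm2_bw_gbs / zen_if_bw_gbs
print(f"IF would need to scale roughly {scale_needed:.1f}x")
```

That lands around 11-12x, in line with the order-of-magnitude gap described above.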


4 minutes ago, leadeater said:

I would temper those expectations of a multi-GPU MCM design for Navi. Infinity Fabric really is a very good technology with a great foundation, but there are a few things that make this unlikely in this sort of time frame.

If we look at Zen's Infinity Fabric as a baseline for where the technology is at now, we can make some assessments, not very good ones but all we have, about what might be needed for a GPU.

On the same CPU package, Zen has an IF bandwidth of 42GB/s between dies. We can rule out the bandwidth to the memory controllers etc., as that is in use now with Vega on an extremely wide bus. So, to keep a long story short (unless you do want to go into it deeper), for an IF-based MCM GPU we would likely need to increase the IF bandwidth by 10 times, which I just don't think is going to happen in the required time frame using a low enough amount of die space and transistors.

The upcoming APUs will tell us much more than speculation and bandwidth calculations will, though; also remember the GPUs in APUs are very low end, so the bandwidth required isn't that high. If we see the link between Zen and Vega on the APU at a similar 42GB/s mark, then count MCM out for Navi until more solid information comes out.

IF bandwidth is based on the memory in use, so with Navi it could easily scale up to hundreds of GB/s. Moreover, in both cases (Navi and the Vega APUs) there could be bus width differences and/or some kind of clock multiplier to increase the IF frequency.

I don't think the RTG team is so stupid as to limit Navi to 40GB/s. In any case, a hypothetical 2x2048SP Navi GPU won't perform like a full 4096SP one based on the same architecture; that's the same "issue" Nvidia is analyzing for future MCM solutions.


7 minutes ago, leadeater said:

I would temper those expectations of a multi-GPU MCM design for Navi. Infinity Fabric really is a very good technology with a great foundation, but there are a few things that make this unlikely in this sort of time frame.

If we look at Zen's Infinity Fabric as a baseline for where the technology is at now, we can make some assessments, not very good ones but all we have, about what might be needed for a GPU.

On the same CPU package, Zen has an IF bandwidth of 42GB/s between dies. We can rule out the bandwidth to the memory controllers etc., as that is in use now with Vega on an extremely wide bus. So, to keep a long story short (unless you do want to go into it deeper), for an IF-based MCM GPU we would likely need to increase the IF bandwidth by 10 times, which I just don't think is going to happen in the required time frame using a low enough amount of die space and transistors.

The upcoming APUs will tell us much more than speculation and bandwidth calculations will, though; also remember the GPUs in APUs are very low end, so the bandwidth required isn't that high. If we see the link between Zen and Vega on the APU at a similar 42GB/s mark, then count MCM out for Navi until more solid information comes out.

Do we know if they can run multiple IF "lanes" between connection points to multiply the bandwidth? Depending on how the IF attaches itself and how the Navi uArch is going to operate, you don't necessarily need the full memory-level bandwidth for communication between the GPUs. But this is also a story of some ultra-high-level design concepts that's a bit past my depth when it comes to GPUs.


3 minutes ago, Agost said:

IF bandwidth is based on the memory in use, so with Navi it could easily scale up to hundreds of GB/s. Moreover, in both cases (Navi and the Vega APUs) there could be bus width differences and/or some kind of clock multiplier to increase the IF frequency.

I don't think the RTG team is so stupid as to limit Navi to 40GB/s. In any case, a hypothetical 2x2048SP Navi GPU won't perform like a full 4096SP one based on the same architecture; that's the same "issue" Nvidia is analyzing for future MCM solutions.

Only on Zen; IF on Vega isn't tied to the memory clock. And remember, even on Zen it's only tied to the clock; the IF itself is its own thing that requires die space and transistors. GPUs simply require way more bandwidth than CPUs do, so it's purely an issue of making the IF 10 times faster without it taking up 20%-30% of the entire GPU die.


21 hours ago, nawaf said:

massive failure 

I see you have mastered the art of extreme and total over-exaggeration.


11 minutes ago, Taf the Ghost said:

Do we know if they can run multiple IF "lanes" between connection points to multiply the bandwidth? Depending on how the IF attaches itself and how the Navi uArch is going to operate, you don't necessarily need the full memory-level bandwidth for communication between the GPUs. But this is also a story of some ultra-high-level design concepts that's a bit past my depth when it comes to GPUs.

You can scale it out for more bandwidth, yes, but how far do you need to go? How much better is the IF in Vega than in Zen, and how much better will it be next year? Will it be 3 times faster, so that an effectively 10-times-faster link might actually be viable for the space it would require?

 

Even with the huge number of PCIe lanes between EPYC CPUs, the total bandwidth is only 152GB/s, and that's using 4 dies with 4 PCIe controllers and half of each of them to do it.


7 minutes ago, leadeater said:

You can scale it out for more bandwidth, yes, but how far do you need to go? How much better is the IF in Vega than in Zen, and how much better will it be next year? Will it be 3 times faster, so that an effectively 10-times-faster link might actually be viable for the space it would require?

 

Even with the huge number of PCIe lanes between EPYC CPUs, the total bandwidth is only 152GB/s, and that's using 4 dies with 4 PCIe controllers and half of each of them to do it.

I don't see MCM being a good (permanent) solution for performance scaling except in very specific workloads. IBM moved away from MCMs themselves.

 

Who knows, maybe they will license NVLink from Nvidia for that nice 300GB/s link... not.

