
AMD confirms Vega 20 on 7nm is being tested in 2018.

Kamjam21xx
3 hours ago, laminutederire said:

It might or might not be the case, but I don't understand what people are expecting from this. Isn't it supposed to be a machine learning card anyway? How well it performs in gaming shouldn't matter that much, granted it performs well in machine learning tasks.

 

It is the case :|.

 

Yeah, it's just a machine learning card.

 

7nm is supposed to give a 30% increase in performance or a 60% drop in power consumption.

 

Vega 20 and 10 look to be the same chip on different nodes, so it looks to me like, for this ES, power consumption has just dropped 60%.
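As a rough illustration of how the quoted node trade-off combines (a sketch only; the 30%/60% figures come from the post above, not from AMD, and the starting numbers are made up):

```python
# Sketch of the quoted 7nm trade-off: spend the node gain either on
# ~30% more performance at the same power, or ~60% less power at the
# same performance. Percentages are the post's figures, not official
# ones; the 100-unit and 300 W starting points are arbitrary examples.

def perf_at_same_power(perf, gain=0.30):
    """Performance if the whole node budget goes to speed."""
    return perf * (1 + gain)

def power_at_same_perf(watts, drop=0.60):
    """Board power if clocks are held where they were."""
    return watts * (1 - drop)

print(perf_at_same_power(100))   # ~130 (arbitrary perf units)
print(power_at_same_perf(300))   # ~120 W
```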

 

Vega's power envelope craziness is due to voltages across the chip, as explained earlier. That will not change too much with node changes, also explained earlier. The node will let Vega 20 drop voltages, but the new node will be running, relative to the old node, at voltages near the upper limits of the architecture's power envelope.

 

I don't know if I wrote that well enough to be understood.

 

But it's like this. Hypothetically, if Vega 10's range of clock speed to power consumption is 10 MHz to 20 MHz for 100-150 watts at .012 mV, then the new node for Vega 20 can give 10 MHz to 20 MHz at 40-60 watts with .01 mV. The problem here is that .01 mV is your new norm for the ideal power envelope, which will give the same frequencies too.
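The voltage argument above roughly follows the standard first-order CMOS dynamic-power relation, P ≈ C·V²·f. A sketch with hypothetical voltages (the capacitance, voltages, and clock below are invented for illustration, not the post's or AMD's numbers):

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f.
# Shows the post's point: most of a node's power saving comes from
# running the same frequency at a lower voltage. All inputs here
# are made-up illustrative values.

def dynamic_power(c_eff, volts, freq_hz):
    """Switching power for effective capacitance, voltage, clock."""
    return c_eff * volts**2 * freq_hz

old = dynamic_power(1.0, 1.2, 1.5e9)   # old node at 1.2 V
new = dynamic_power(1.0, 0.9, 1.5e9)   # same clock at 0.9 V
print(round(new / old, 4))  # 0.5625: ~44% saved from voltage alone
```

Because voltage enters squared, even a modest voltage drop at unchanged clocks cuts power sharply, which is why the power envelope moves much more than the frequency range.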

 

Nodes don't give frequency jumps like 20 years ago; a new node by itself can only give a 200 MHz increase or so. If the uarch doesn't change, the chip won't really clock higher, nor will the relative energy efficiency envelopes change.

 

This is the problem with GCN: we can look at r2xx, r3xx, Tonga, Fiji, Polaris, and Vega, each against its same-generation competition, and all of them are hitting similar limits. So, node aside (since we are talking about spanning three different nodes here), they all hit a uarch limit.

Link to comment
Share on other sites

Link to post
Share on other sites

5 minutes ago, Razor01 said:

This is the problem with GCN: we can look at r2xx, r3xx, Tonga, Fiji, Polaris, and Vega, each against its same-generation competition, and all of them are hitting similar limits. So, node aside (since we are talking about spanning three different nodes here), they all hit a uarch limit.

It's true that AMD hasn't focused on increasing clocks until Vega, but I still believe that going from a 20nm-derived FinFET low-power node (GloFo's 14nm LPP) to a node that aims at 5 GHz vs 3 GHz on the old one (7nm LP) will bring significant frequency gains.

Vega 20, according to the leaks, will also have full FP64 support, something AMD is currently missing.


53 minutes ago, cj09beira said:

It's true that AMD hasn't focused on increasing clocks until Vega, but I still believe that going from a 20nm-derived FinFET low-power node (GloFo's 14nm LPP) to a node that aims at 5 GHz vs 3 GHz on the old one (7nm LP) will bring significant frequency gains.

Vega 20, according to the leaks, will also have full FP64 support, something AMD is currently missing.

 

If the uarch doesn't change, don't expect miracles in frequency changes AT ALL. The node is practically irrelevant when it comes to frequency. GCN's frequency changes from the first iteration through Vega read like an identical map of what Intel did with the Pentium 4: they lengthened pipelines to get that extra frequency, but that only goes so far and doesn't get much extra.

On 28nm we had three different GCN variations, Hawaii, Tonga, and Fiji, and all of them hit the same frequency levels. On 14nm, Vega got a small bump in frequencies, but it wasn't much; 14nm is essentially a two-node change from 28nm, yet we only see a 500-600 MHz difference. Then you had Polaris in the middle there on 14nm, which got to 1300 MHz or so. You can see they have been doing as much as they can tweaking the GCN uarch to get as much frequency as possible, but it's still not that much of a change. If a node change can give ~150 MHz per node and we got two node drops all at once, their uarch tweaks are only giving 200 more MHz.
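The decomposition in that last sentence can be written out explicitly (a back-of-envelope sketch; all MHz figures are the post's estimates, not measurements):

```python
# Rough decomposition of the frequency argument above: 28nm -> 14nm
# counted as two node steps at ~150 MHz each, with the remainder of
# the observed gain attributed to GCN uarch tweaks. The per-node and
# observed-gain numbers are the post's estimates.

MHZ_PER_NODE = 150
NODE_STEPS = 2
OBSERVED_GAIN_MHZ = 500      # low end of the quoted 500-600 MHz

node_gain = MHZ_PER_NODE * NODE_STEPS        # MHz from silicon alone
uarch_gain = OBSERVED_GAIN_MHZ - node_gain   # MHz left for the uarch
print(node_gain, uarch_gain)  # 300 200
```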

 

Well, the last ROCm slides still show a 1/16 DP rate for Vega 20. Didn't they? Or was that for Vega 10 x2?

Yeah, that was for Vega 10 x2. Looks to me like Vega 20, with its DP, will be aimed at HPC, since AI doesn't really need DP.

 


1 hour ago, cj09beira said:

It's true that AMD hasn't focused on increasing clocks until Vega, but I still believe that going from a 20nm-derived FinFET low-power node (GloFo's 14nm LPP) to a node that aims at 5 GHz vs 3 GHz on the old one (7nm LP) will bring significant frequency gains.

Vega 20, according to the leaks, will also have full FP64 support, something AMD is currently missing.

 

28 minutes ago, Razor01 said:

 

If the uarch doesn't change, don't expect miracles in frequency changes AT ALL. The node is practically irrelevant when it comes to frequency. GCN's frequency changes from the first iteration through Vega read like an identical map of what Intel did with the Pentium 4: they lengthened pipelines to get that extra frequency, but that only goes so far and doesn't get much extra.

On 28nm we had three different GCN variations, Hawaii, Tonga, and Fiji, and all of them hit the same frequency levels. On 14nm, Vega got a small bump in frequencies, but it wasn't much; 14nm is essentially a two-node change from 28nm, yet we only see a 500-600 MHz difference. Then you had Polaris in the middle there on 14nm, which got to 1300 MHz or so. You can see they have been doing as much as they can tweaking the GCN uarch to get as much frequency as possible, but it's still not that much of a change. If a node change can give ~150 MHz per node and we got two node drops all at once, their uarch tweaks are only giving 200 more MHz.

Well, the last ROCm slides still show a 1/16 DP rate for Vega 20. Didn't they? Or was that for Vega 10 x2?

Yeah, that was for Vega 10 x2. Looks to me like Vega 20, with its DP, will be aimed at HPC, since AI doesn't really need DP.

 

I think the point of Vega 20 is just to be the first design through the 7nm libraries/manufacturing process, allowing for a very large area shrink. They are going to be running 32 GB of HBM2 (per the sample that showed up in testing), which really says this is just for some compute load. There must be a market for that much on-board memory, but I wouldn't expect much of anything from this except maybe lower heat output and a much smaller die.

 

Unless AMD can pull off a true MCM on Navi, we're really at the limits of what GCN can do, which is why Navi should be the last generation of it. It's a brilliant iGPU architecture, but it doesn't scale to high-end dGPUs. Hopefully whatever next-gen architecture comes scales a lot better.


1 hour ago, Razor01 said:

 

It is not definitely the case. For all we know it IS running at 1000 MHz. Or it's an error and it could be at 2000 MHz and perform like ass. It's a leak, so we don't know. You can be skeptical and say that it's not good.

You can also look at the fact that they have often had 1 GHz engineering samples in the past for Vega and Polaris, and there was no error for some of them (if not all; I'd have to recheck).

It may not help with clocks, but at least it will help with power consumption, which is still nice in itself, especially for data centers. Which is why Vega 7nm is only for ML: since it's not cost-effective as-is on a new node, it makes sense to sell them at high prices for data centers!


2 hours ago, laminutederire said:

It is not definitely the case. For all we know it IS running at 1000 MHz. Or it's an error and it could be at 2000 MHz and perform like ass. It's a leak, so we don't know. You can be skeptical and say that it's not good.

You can also look at the fact that they have often had 1 GHz engineering samples in the past for Vega and Polaris, and there was no error for some of them (if not all; I'd have to recheck).

It may not help with clocks, but at least it will help with power consumption, which is still nice in itself, especially for data centers. Which is why Vega 7nm is only for ML: since it's not cost-effective as-is on a new node, it makes sense to sell them at high prices for data centers!

LOL, yeah, we saw those performance numbers for the 1 GHz parts for Vega and Polaris, but when they came out in real life the performance figures were pretty much the same.

 

Come on man.  All the excuses in the world won't change what is what.

 

Don't even try that crap with me: "drivers aren't ready", "clock speeds" BS. I know how these things are made, and it just doesn't happen that way.


4 hours ago, Taf the Ghost said:

 

I think the point of Vega 20 is just to be the first design through the 7nm libraries/manufacturing process, allowing for a very large area shrink. They are going to be running 32 GB of HBM2 (per the sample that showed up in testing), which really says this is just for some compute load. There must be a market for that much on-board memory, but I wouldn't expect much of anything from this except maybe lower heat output and a much smaller die.

Unless AMD can pull off a true MCM on Navi, we're really at the limits of what GCN can do, which is why Navi should be the last generation of it. It's a brilliant iGPU architecture, but it doesn't scale to high-end dGPUs. Hopefully whatever next-gen architecture comes scales a lot better.

 

Look Navi = GCN

 

Does GCN = MCM? How people put those together baffles me. Anyone who understands what it takes to get to a true MCM design for graphics workloads understands that the architecture must change radically.

So that means the ISA will change for MCM-type tech. If the ISA changes, it's not GCN anymore.

AND everyone needs to keep this in mind: AMD stated power consumption for Vega 20 at anywhere from 150 watts to 300 watts. Does that make sense with the 2-GPU cards I kind of hinted at? A single Vega 20 at 150 watts? A double Vega 20 at 300 watts?

Let's take Vega 10 right now and crunch it down with the 7nm power consumption savings... it ends up under 150 watts, but add in the extra set of VRAM and we can say around ~150 watts.
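That estimate can be sketched numerically. The 295 W board power for Vega 10 and the per-stack HBM2 figure are assumptions for illustration, not numbers from the thread or from AMD; the 60% saving is the figure quoted earlier in the thread:

```python
# Back-of-envelope version of the estimate above: take an assumed
# Vega 10 board power, apply the quoted 60% node saving, and add a
# guessed allowance for the extra HBM2 stacks. The 295 W and
# 15 W-per-stack values are assumptions for illustration.

VEGA10_TDP_W = 295
NODE_POWER_FACTOR = 0.4        # the earlier post's 60% drop
EXTRA_HBM_W = 2 * 15           # two additional stacks, guessed

gpu_w = VEGA10_TDP_W * NODE_POWER_FACTOR
total_w = gpu_w + EXTRA_HBM_W
print(round(gpu_w), round(total_w))  # 118 148 -> "around ~150 watts"
```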


38 minutes ago, Razor01 said:

 

Look Navi = GCN

 

Does GCN = MCM? How people put those together baffles me. Anyone who understands what it takes to get to a true MCM design for graphics workloads understands that the architecture must change radically.

So that means the ISA will change for MCM-type tech. If the ISA changes, it's not GCN anymore.

AND everyone needs to keep this in mind: AMD stated power consumption for Vega 20 at anywhere from 150 watts to 300 watts. Does that make sense with the 2-GPU cards I kind of hinted at? A single Vega 20 at 150 watts? A double Vega 20 at 300 watts?

Let's take Vega 10 right now and crunch it down with the 7nm power consumption savings... it ends up under 150 watts, but add in the extra set of VRAM and we can say around ~150 watts.

AMD engineers have said in the past that if they wanted to / had the funds, they could increase the shader engine count and make GCN great for larger dies. The problem until now seems to have been lack of funding: with AMD's CPU division running on fumes, R&D had to be reduced. Whether Navi will be that or not is anyone's guess; I also don't expect Navi to be an MCM, but there is a bigger chance of it having more shader engines.

Even though I'm saying it has a bigger chance, I also realize it might not happen, because if AMD has a whole new architecture coming after Navi, it would be a weird move to solve its main issue right before moving away from it; the ROI of that investment wouldn't be great.

On the other hand, if they don't do that, at most they'll have a midrange GPU to sell.


1 hour ago, cj09beira said:

AMD engineers have said in the past that if they wanted to / had the funds, they could increase the shader engine count and make GCN great for larger dies. The problem until now seems to have been lack of funding: with AMD's CPU division running on fumes, R&D had to be reduced. Whether Navi will be that or not is anyone's guess; I also don't expect Navi to be an MCM, but there is a bigger chance of it having more shader engines.

Even though I'm saying it has a bigger chance, I also realize it might not happen, because if AMD has a whole new architecture coming after Navi, it would be a weird move to solve its main issue right before moving away from it; the ROI of that investment wouldn't be great.

On the other hand, if they don't do that, at most they'll have a midrange GPU to sell.

 

Well, relatively, GCN cores have been just as big as NV's for the biggest chips each company has, and when using HBM, AMD saves die space on the memory controller.

I doubt it would even be wise for AMD to just throw shader units into their GCN architectures, as they have done, past a certain amount, which seems to be around 2500 units; GCN just doesn't scale well, and that die space would be better used for other things.

Well, they need to increase shader counts somewhat even to stay in the midrange segment coming from Polaris. Polaris has 2304 shaders; I expect that to go up by 30% or so for Navi. AMD stated Navi is midrange, which makes sense.


49 minutes ago, Razor01 said:

 

Well, relatively, GCN cores have been just as big as NV's for the biggest chips each company has, and when using HBM, AMD saves die space on the memory controller.

I doubt it would even be wise for AMD to just throw shader units into their GCN architectures, as they have done, past a certain amount, which seems to be around 2500 units; GCN just doesn't scale well, and that die space would be better used for other things.

Well, they need to increase shader counts somewhat even to stay in the midrange segment coming from Polaris. Polaris has 2304 shaders; I expect that to go up by 30% or so for Navi. AMD stated Navi is midrange, which makes sense.

If they want to keep up with Nvidia on anything high-end, they need to scale the GPU, and the limiting factor is having only 4 shader engines, each with a max of 16 CUs. With how much 7nm reduces die size, I expect AMD to release Navi with a full 64 ROPs / 64 CUs (that's in the scenario of no significant changes to the GPU since Vega), just to compete in the midrange.

I think you misunderstood what I'm trying to say: I'm not saying just more shaders, I'm saying more shader engines, which encompass the geometry processor, the rasterizer, and the CUs themselves. Doing so would let GCN scale much bigger, as in my eyes GCN's problem is simply too many CUs relative to the rest (ROPs, RBEs, geometry, etc.).

[Spoiler attachment: ShaderEngine.jpg]

 


29 minutes ago, cj09beira said:

If they want to keep up with Nvidia on anything high-end, they need to scale the GPU, and the limiting factor is having only 4 shader engines, each with a max of 16 CUs. With how much 7nm reduces die size, I expect AMD to release Navi with a full 64 ROPs / 64 CUs (that's in the scenario of no significant changes to the GPU since Vega), just to compete in the midrange.

I think you misunderstood what I'm trying to say: I'm not saying just more shaders, I'm saying more shader engines, which encompass the geometry processor, the rasterizer, and the CUs themselves. Doing so would let GCN scale much bigger, as in my eyes GCN's problem is simply too many CUs relative to the rest (ROPs, RBEs, geometry, etc.).

[Spoiler attachment: ShaderEngine.jpg]

 

AMD already stated Navi is only midrange, so I don't see them doing that. I see it as a Polaris replacement: just a 30% increase in units; ROPs possibly won't even increase that much, and geometry units are up in the air, but for now not much change there either. I expect to see 1070-level performance from Navi at 250 bucks.

 

I don't see them going into the performance bracket for Navi.


6 hours ago, cj09beira said:

If they want to keep up with Nvidia on anything high-end, they need to scale the GPU, and the limiting factor is having only 4 shader engines, each with a max of 16 CUs. With how much 7nm reduces die size, I expect AMD to release Navi with a full 64 ROPs / 64 CUs (that's in the scenario of no significant changes to the GPU since Vega), just to compete in the midrange.

I think you misunderstood what I'm trying to say: I'm not saying just more shaders, I'm saying more shader engines, which encompass the geometry processor, the rasterizer, and the CUs themselves. Doing so would let GCN scale much bigger, as in my eyes GCN's problem is simply too many CUs relative to the rest (ROPs, RBEs, geometry, etc.).

[Spoiler attachment: ShaderEngine.jpg]

 

 

6 hours ago, Razor01 said:

AMD already stated Navi is only midrange, so I don't see them doing that. I see it as a Polaris replacement: just a 30% increase in units; ROPs possibly won't even increase that much, and geometry units are up in the air, but for now not much change there either. I expect to see 1070-level performance from Navi at 250 bucks.

 

I don't see them going into the performance bracket for Navi.

Given the area shrink, AMD really could run a full 64 CUs as the Polaris replacement, though it might be more like 48 CUs, given they won't be using HBM2 in the mainline consumer space. 48 CUs seems like it'd make more sense with GDDR6. Still, that "scalable" tag should have meant they did something serious to make Navi scale better than Polaris. At least, I'd hope.


7 hours ago, cj09beira said:

If they want to keep up with Nvidia on anything high-end, they need to scale the GPU, and the limiting factor is having only 4 shader engines, each with a max of 16 CUs. With how much 7nm reduces die size, I expect AMD to release Navi with a full 64 ROPs / 64 CUs (that's in the scenario of no significant changes to the GPU since Vega), just to compete in the midrange.

I think you misunderstood what I'm trying to say: I'm not saying just more shaders, I'm saying more shader engines, which encompass the geometry processor, the rasterizer, and the CUs themselves. Doing so would let GCN scale much bigger, as in my eyes GCN's problem is simply too many CUs relative to the rest (ROPs, RBEs, geometry, etc.).

[Spoiler attachment: ShaderEngine.jpg]

 

Hmmm, I thought that was already present.


On 27.4.2018 at 8:15 PM, BuckGup said:

I think if they had stock. I think gamers care more about using FreeSync than an extra 100W.

I doubt that.

I don't know a single person using FreeSync (but plenty of people claiming how awesome it is because it's almost free, yet not using it themselves, lol).

On the other hand, 100W = heat, and heat = noise. A topic everyone and their mother seems to care about.

Yeah, the "person I know" argument is always flawed due to small sample size, but being the guy who repairs PCs as a side income, that sample size is actually in the three digits for me. And not having a SINGLE FreeSync user in there should at least be an indicator.


57 minutes ago, Rattenmann said:

I doubt that.

I don't know a single person using FreeSync (but plenty of people claiming how awesome it is because it's almost free, yet not using it themselves, lol).

On the other hand, 100W = heat, and heat = noise. A topic everyone and their mother seems to care about.

Yeah, the "person I know" argument is always flawed due to small sample size, but being the guy who repairs PCs as a side income, that sample size is actually in the three digits for me. And not having a SINGLE FreeSync user in there should at least be an indicator.

I mean, idc... I'm all about those professional cards.


Just slap on a nice 2560-bit bus like the PlayStation 2 had and it will be a huge success.


13 hours ago, cj09beira said:

AMD engineers have said in the past that if they wanted to / had the funds, they could increase the shader engine count and make GCN great for larger dies,

Sounds interesting.

Link please?

 

13 hours ago, cj09beira said:

the problem until now seems to have been lack of funding; with AMD's CPU division running on fumes, R&D had to be reduced

Based on what Raja Koduri said in interviews, there was a time when a lot of the industry, including AMD, mispredicted and thought that discrete GPUs were going to go extinct. So AMD lost focus on discrete GPU R&D during that era (no doubt compounded by the fact that they were making big losses too). This has since been rectified with the formation of RTG, but it takes years to bear fruit in terms of shipping products.


4 hours ago, Taf the Ghost said:

 

Given the area shrink, AMD really could run a full 64 CUs as the Polaris replacement, though it might be more like 48 CUs, given they won't be using HBM2 in the mainline consumer space. 48 CUs seems like it'd make more sense with GDDR6. Still, that "scalable" tag should have meant they did something serious to make Navi scale better than Polaris. At least, I'd hope.

They could, but I don't think they will, lol. Yeah, 48 CUs is more likely. Also, AMD doesn't need to go with GDDR6 on this card; GDDR6 is going to be expensive, and with GDDR6 the card will no longer be in the midrange price range. GDDR5X would probably be a better fit for such a chip anyway.

"Scalable" means the ability to scale across all market sectors, just the way NV is able to create a GPU that is modular, cut out the DP units, and use it for HPC, DL, and gaming without redesigning their GPUs; this is what AMD wants to do. Remember, at that point Navi was not GCN either; their roadmap shifted with the launch of Vega.


43 minutes ago, Razor01 said:

Remember, at that point Navi was not GCN either; their roadmap shifted with the launch of Vega.

Pls explain?


Prior to Raja leaving, after Vega was released, Navi was hinted to be a new architecture, not another GCN. It wasn't until after Raja left that AMD stated Navi was GCN and that the true successor to GCN won't be around until 2020 or 2021. I think this is what Raja meant when he asked RTG to stick with the roadmap; it's changed. I don't even think Navi will have the scalability I talked about in the previous post.


3 minutes ago, Razor01 said:

Prior to Raja leaving, after Vega was released, Navi was hinted to be a new architecture, not another GCN. It wasn't until after Raja left that AMD stated Navi was GCN and that the true successor to GCN won't be around until 2020 or 2021. I think this is what Raja meant when he asked RTG to stick with the roadmap; it's changed. I don't even think Navi will have the scalability I talked about in the previous post.

Vega was actually supposed to be a shift away from GCN, but it ended up as a redesigned GCN (NGC or something?). It seems things have partially changed, but not enough that you'd really say they've gone away from GCN. As for memory, I expect we'll see the slate of GDDR5, 5X, and 6 going up the product stack. It really just depends where the Navi parts end up on performance.

I still think we'll see some wonky compute cards with MCM before the end of the Navi generation. They'd end up looking a lot like Nvidia's Tesla cards with on-board NVLink. We keep getting enough wonky rumors from different directions that at least someone in AMD R&D has put some effort into it. I'm not sure it'd work for 3D graphics at all, but when the GPU acts as a mass-scale parallel computation device, die-to-die latency isn't much of a factor.


19 minutes ago, Taf the Ghost said:

Vega was actually supposed to be a shift away from GCN, but it ended up as a redesigned GCN (NGC or something?).

NCU, from memory: Next Compute Unit.


3 hours ago, Taf the Ghost said:

Vega was actually supposed to be a shift away from GCN, but it ended up as a redesigned GCN (NGC or something?). It seems things have partially changed, but not enough that you'd really say they've gone away from GCN. As for memory, I expect we'll see the slate of GDDR5, 5X, and 6 going up the product stack. It really just depends where the Navi parts end up on performance.

I still think we'll see some wonky compute cards with MCM before the end of the Navi generation. They'd end up looking a lot like Nvidia's Tesla cards with on-board NVLink. We keep getting enough wonky rumors from different directions that at least someone in AMD R&D has put some effort into it. I'm not sure it'd work for 3D graphics at all, but when the GPU acts as a mass-scale parallel computation device, die-to-die latency isn't much of a factor.

Compute is not an issue with MCM cards at all; the problems only show up with graphical workloads. So yeah, I expect to see a 2x Vega 20 card coming out, lol.


Personally, I really wish they would release a GPU for gamers. FreeSync is so much easier to get than G-Sync when it comes to monitors, and the power draw doesn't matter to most people unless they're dealing with a small power supply.

Who needs fancy graphics and high resolutions when you can get a 60 FPS frame rate on iGPUs?


I'm genuinely curious why people even use AMD graphics cards at the high end. Nvidia just dominates the high end. Can someone explain to me what the advantages are?

