AMD confirms Vega 20, the 7nm GPU, is being tested in 2018.

Kamjam21xx
7 hours ago, Trixanity said:

I think the extra bandwidth would be useful for chipset lanes but as you say I doubt it can be done without redesigning the boards. Isn't PCI-E 5 around the corner though? Like spec completion this or next year?

 

It will be interesting with Zen 2: if we assume Rome has PCI-E 4 support, then naturally Ryzen 3 has it too, but that board design would limit it to PCI-E 3.

PCIe 5.0 should be laid down by 2019, we're told, but PCIe standards always take a long while to reach products. 4.0 seems to have been delayed by board design issues more than anything else, which is why I would expect it to work in Rome (Zen 2 servers) while everything else just runs in PCIe 3.0 mode. That's actually what AM3 did with PCIe 2.0 & 3.0, something I'd forgotten about. (The fact that it takes a Titan V to saturate a 2.0 x16 or 3.0 x8 link explains part of why that's workable.)
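For a back-of-the-envelope check on that "2.0 x16 vs 3.0 x8" point, per-lane PCIe throughput is just transfer rate times encoding efficiency. A quick sketch (rates and encodings from the published specs, not from this thread):

```python
# Per-lane PCIe throughput: transfer rate (GT/s) x encoding efficiency.
# One transfer carries one bit per lane, so divide by 8 for bytes.
GENS = {
    1: (2.5, 8 / 10),     # 8b/10b encoding
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),  # 128b/130b from gen 3 onward
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

def link_gb_s(gen, lanes):
    """Theoretical one-direction bandwidth of a PCIe link in GB/s."""
    rate, eff = GENS[gen]
    return rate * eff * lanes / 8

print(link_gb_s(2, 16))  # 8.0 GB/s
print(link_gb_s(3, 8))   # ~7.88 GB/s -- almost identical to 2.0 x16
```

Which is why a card falling back to an older link width or generation often loses almost nothing in practice.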

 

AMD seems to be homing in on the storage & many-core segments of the server market, which is part of why I expect Rome (Epyc 2) will have PCIe 4.0 lanes at full speed, especially since they connect the dies in 2U systems that way. I don't think AM4 would allow the doubled lane speed to be used on consumer parts, but hey, you never know with X570 boards. If it's doable, it could be one of the more hilarious F-U moves AMD could pull. Threadripper 3 would be lit, that's for sure.

6 hours ago, Sampsy said:

 

You're right, but I think it's still possible. The next gen of Ryzen boards may well have been designed from the beginning to support PCI-E 4.0. I imagine it could be possible to continue using the same socket, but I lack the expertise to know. You're also right that there isn't exactly a dire need for it. That said, it can still be useful. I believe the current generation of M.2 SSDs is almost saturating the PCI-E 3.0 x4 link, and fast I/O such as Thunderbolt 3 is also hungry for PCI-E lanes. And whatever platform gets PCI-E 4.0 first can claim the "future proof" advantage over its competitor. Although, as you mentioned, if you wanted a new platform to be future proof it should really have both PCI-E 4.0 and DDR5 support, and I believe the latter is still not ready.

 

 

I meant first on a consumer platform 9_9.

DDR5 is scheduled for 2020. The very first test chips only arrived late last year. There's still a lot of work to do, especially as DDR5 is a pretty big change from DDR3/4. Intel & AMD should be finishing up designs this year that use it, which is why we got that rumor about AMD putting a lot of R&D into the space. (And "Zen 5", which might be called Genoa.)

 

Intel's migration to DDR5 should come with Sapphire Rapids, which is their next big architecture update. That's the post-Tiger Lake design, though there's apparently going to be a pretty harsh split between server & consumer at that point. (One rumor is that Sapphire Rapids will drop a bunch of older aspects of x86, so we'll see. It'd be interesting if Intel finally decides to deprecate parts of x86 in the server space.)

Intel is in trouble; wouldn't it be nice to see Nvidia scared as well :D


Still, I think it's most likely not going to happen. Nvidia actually kept pushing themselves instead of just milking small incremental improvements. It will be hard to get an 1180 competitor from AMD.

On 4/27/2018 at 2:30 PM, dizmo said:

I really wish they'd get their power requirements under control.

I'm fine with any power draw they want.

9 minutes ago, asus killer said:

Intel is in trouble; wouldn't it be nice to see Nvidia scared as well :D


Still, I think it's most likely not going to happen. Nvidia actually kept pushing themselves instead of just milking small incremental improvements. It will be hard to get an 1180 competitor from AMD.

I'm really hoping to see some new form of the W9100 and WX 9100. I feel so alone being pumped about AMD's professional cards, though.

 

I think Intel is going to have a rough 2019.

 

Well "relatively" 

7 minutes ago, Kamjam21xx said:

I'm fine with any power draw they want.

I'm not.

 

I don't mind higher power draw when it's like RX 580 vs GTX 1060, or at most Vega 56 vs GTX 1070 Ti. Product quality, driver quality, etc. take precedence.

 

But when you have differences as massive as Vega 64 vs GTX 1080, AMD clearly has a serious problem. It's obvious that they have pushed clocks and voltages on that part way outside the architecture's efficient range in an attempt to reach the performance level of the GTX 1080.
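That "outside the efficient range" point follows from the usual dynamic-power model, P ∝ f·V². A toy sketch (illustrative ratios, not measured Vega numbers):

```python
# Toy CMOS dynamic-power model: P is proportional to f * V^2.
# Hypothetical ratios, just to show why the last few percent of clock cost so much.
def relative_power(freq_ratio, volt_ratio):
    """Power relative to baseline when clock and voltage both scale."""
    return freq_ratio * volt_ratio ** 2

# +10% clock that needs +10% voltage to stay stable:
print(relative_power(1.10, 1.10))  # ~1.33 -> ~33% more power for ~10% more perf
# -5% clock with a -10% undervolt:
print(relative_power(0.95, 0.90))  # ~0.77 -> ~23% less power for ~5% less perf
```

The asymmetry is the whole story: clocking past the sweet spot buys performance linearly but costs power roughly cubically.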

4 minutes ago, Humbug said:

I'm not.

 

I don't mind higher power draw when it's like RX 580 vs GTX 1060, or at most Vega 56 vs GTX 1070 Ti. Product quality, driver quality, etc. take precedence.

 

But when you have differences as massive as Vega 64 vs GTX 1080, AMD clearly has a serious problem. It's obvious that they have pushed clocks and voltages on that part way outside the architecture's efficient range in an attempt to reach the performance level of the GTX 1080.

Well, a WX 9100 is slightly under the performance of dual 1080 Tis in many applications and doesn't draw that much. So it just depends on what card you buy.

 

Idk why I compared those, tbh; they're not even in the same ballpark.

4 hours ago, Humbug said:

I'm not.

 

I don't mind higher power draw when it's like RX 580 vs GTX 1060, or at most Vega 56 vs GTX 1070 Ti. Product quality, driver quality, etc. take precedence.

 

But when you have differences as massive as Vega 64 vs GTX 1080, AMD clearly has a serious problem. It's obvious that they have pushed clocks and voltages on that part way outside the architecture's efficient range in an attempt to reach the performance level of the GTX 1080.

At least part of it is the fact that GloFo's 14nm sucks balls at high clocks, but they really need to find a way to set voltage per card instead of using the same voltage across all cards, since a huge part of the problem seems to be how different the best GPUs are from the worst ones.

1 hour ago, cj09beira said:

At least part of it is the fact that GloFo's 14nm sucks balls at high clocks, but they really need to find a way to set voltage per card instead of using the same voltage across all cards, since a huge part of the problem seems to be how different the best GPUs are from the worst ones.

Yep. In that case 7nm could really help.

 

Also, if I were AMD I would not have had two Vega gaming products.

I would have had just one Vega, fully unlocked like the Vega 64 but clocked more like the Vega 56. It would have performed in between the GTX 1070 Ti and the GTX 1080, and they would have had to price it accordingly.

They should have recognized the limitations and configured the product accordingly. If end users want to extend the TDP and overclock past GTX 1080 performance, they can do so... but I don't see the point of desperately trying to push the stock card outside its comfort zone.

Maybe they were too embarrassed about bringing so little high-end performance so long after Nvidia.

2 hours ago, cj09beira said:

At least part of it is the fact that GloFo's 14nm sucks balls at high clocks, but they really need to find a way to set voltage per card instead of using the same voltage across all cards, since a huge part of the problem seems to be how different the best GPUs are from the worst ones.

 

AMD's Vega voltage problems come from the design of Vega itself. This has always been a problem with GCN; we can see a correlation across all GCN products, where they are clocked very close to the upper limits of their power consumption curves.

11 hours ago, Kamjam21xx said:

I'm fine with any power draw they want.

Right now, the reason they are behind is their excessive power draw for the performance they deliver. They are a solid generation behind with Vega vs Pascal (one could say 1.5 generations because of how late Vega arrived relative to Pascal), and with Volta and its gaming cards, AMD will be behind 2-3 generations. That's not good. The 7nm node is the only thing that is going to keep them relevant in some parts of the market (midrange).

 

The largest gen-to-gen leap in performance we've seen was from the FX series to the 6800 series on the same node: a 60% increase in performance with a 60% perf/watt increase (all without AA; the FX series got hit hard by AA, another design issue). The thing is, the FX series had some serious design flaws lol, to the point that MS had to make a new version of DX for it.

 

GCN doesn't have issues like that. Well, for geometry throughput it does, but that can't be changed unless they add more geometry units, which is more a matter of not enough space on the die.

 

Shader throughput is something they can't just magically fix; they'd need to change the architecture for that.

 

They have just held on to GCN for too long, and still are.

15 minutes ago, Razor01 said:

 

AMD's Vega voltage problems come from the design of Vega itself. This has always been a problem with GCN; we can see a correlation across all GCN products, where they are clocked very close to the upper limits of their power consumption curves.

Both yes and no. What he's referring to is that many cards have room for undervolting - some significantly. Not only does it lower power consumption but it also increases performance.

 

However GCN cards are clocked above their efficiency curve in general and it does not look good. It gives a little extra performance for a lot of extra power. Frankly, I think it'd look better if they dialed back clocks to get a better perf/W but I guess AMD management wants absolute numbers first despite the fact the arch can't deliver.
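That undervolt headroom falls out of binning: one factory voltage has to cover the worst chip off the line, so better chips ship with margin to spare. A sketch with made-up numbers:

```python
# Why a single factory voltage leaves undervolt headroom on good chips.
# min_stable_v holds each sampled chip's minimum stable voltage (hypothetical values).
min_stable_v = [1.00, 1.05, 1.08, 1.12, 1.15]  # volts
guardband = 0.05  # safety margin on top of the worst chip

factory_v = max(min_stable_v) + guardband   # one setting for every card shipped
headroom = [round(factory_v - v, 2) for v in min_stable_v]
print(headroom)  # best chip can shed 0.20 V; worst only the 0.05 V guardband
```

Set voltage per card instead and every chip runs at its own minimum plus guardband, which is exactly the fix suggested above.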

4 minutes ago, Trixanity said:

Both yes and no. What he's referring to is that many cards have room for undervolting - some significantly. Not only does it lower power consumption but it also increases performance.

 

However GCN cards are clocked above their efficiency curve in general and it does not look good. It gives a little extra performance for a lot of extra power. Frankly, I think it'd look better if they dialed back clocks to get a better perf/W but I guess AMD management wants absolute numbers first despite the fact the arch can't deliver.

The undervolt/overclock nature of Polaris & Vega, on some of the GPUs, is one of those little things that has made the 14nm parts quite odd.

They need them clocked to the point where they meet a certain performance target, though because of the nature of GCN, that appears to mean it's a far better compute card at every stage.

21 minutes ago, Trixanity said:

Both yes and no. What he's referring to is that many cards have room for undervolting - some significantly. Not only does it lower power consumption but it also increases performance.

 

However GCN cards are clocked above their efficiency curve in general and it does not look good. It gives a little extra performance for a lot of extra power. Frankly, I think it'd look better if they dialed back clocks to get a better perf/W but I guess AMD management wants absolute numbers first despite the fact the arch can't deliver.

Undervolting is a design thing too. Some Vega cards can undervolt very well, but that is because of design; the design causes a "wide" bell curve when it comes to voltages, meaning some chips can sustain clocks at lower voltages while others can't. Voltage regulation and signal cohesion across large chips are hard to do, and they come down to design. The node has very little to do with this.

 

We can see nV's attention to detail in the transistor layouts of Maxwell and then Pascal, and see this in practice. This is why the architecture needs to change; you can't use the same thing over and over again with just tweaks. Problems will arise as more transistors are used. You can't just double the transistors and expect everything to work fine lol.

 

The thing about clocking is that they don't have much room to increase clocks; they actually have a good idea, when designing the chip, of what the clocks are going to be. They might have 10% leeway to change clocks, but that is about it.

1 hour ago, Trixanity said:

However GCN cards are clocked above their efficiency curve in general and it does not look good. It gives a little extra performance for a lot of extra power. Frankly, I think it'd look better if they dialed back clocks to get a better perf/W but I guess AMD management wants absolute numbers first despite the fact the arch can't deliver.

Exactly my point.

And I have no problem clocking a bit past the efficiency curve.

In fact, I still own my Sapphire R9 290 Vapor-X, a beautiful factory-overclocked GCN 1.1 card that runs cool and quiet. For sure it's clocked past the optimal efficiency range, but I still have plenty of overclocking headroom, which means they haven't pushed it to the absolute limit.

 

With Vega 64, though, they have pushed it to the absolute limit. I guess it's simply that during GCN 1.0 and GCN 1.1, AMD knew they did not need to do anything crazy to match Nvidia's high-end performance. Now they are overstretching themselves to match Nvidia's second-highest gaming product.

 

If I were them, I would have accepted that this Vega architecture isn't going to compete in that league out of the box, and just clocked and priced it appropriately; at least it would have been a good product! If end users want to overclock past GTX 1080 performance, they can remove the power limit and go crazy... but the stock product should be well balanced.

I really wish AMD would get their GPU division in line. I want an all-AMD laptop with an R7 2700 and a 150W (same as the current 1080 laptop spec) 7nm GPU that can compete with a 1080 Ti... if only.

5 hours ago, Razor01 said:

Undervolting is a design thing too. Some Vega cards can undervolt very well, but that is because of design; the design causes a "wide" bell curve when it comes to voltages, meaning some chips can sustain clocks at lower voltages while others can't. Voltage regulation and signal cohesion across large chips are hard to do, and they come down to design. The node has very little to do with this.

 

We can see nV's attention to detail in the transistor layouts of Maxwell and then Pascal, and see this in practice. This is why the architecture needs to change; you can't use the same thing over and over again with just tweaks. Problems will arise as more transistors are used. You can't just double the transistors and expect everything to work fine lol.

 

The thing about clocking is that they don't have much room to increase clocks; they actually have a good idea, when designing the chip, of what the clocks are going to be. They might have 10% leeway to change clocks, but that is about it.

Wouldn't that be the result of AMD using more automated trace layout than Nvidia (like with IF)? If you do it more manually you can optimize more, but it also costs a lot more engineering time, and thus money.

Probably, but nV's ability to do that comes straight from an architecture and design perspective. The chip was made for that :).

 

Even as nV's clocks go up, for any of the Pascal variants, the voltages don't change much, and their perf/watt ratios don't change much if the voltages don't change.

 

The other side of this is that Pascal has extremely tight voltage limits; it can't go as high as Vega's voltages. We see that Pascal will get unstable at 1.25 volts, while most Vegas are happy with that much voltage.

1 minute ago, Humbug said:

Well, if by some chance that thing is really running at 1000 MHz, then it is a massive architectural improvement over the current Vega.

Probability of that being the case = 0.

39 minutes ago, Razor01 said:

 

It's just not reading the clocks right ;)

 

https://www.3dmark.com/compare/3dm11/12756151/3dm11/12754835#

 

Here are the scores from two Vega 20 runs.

It might or might not be the case, but I don't understand what people are expecting from it. Isn't it supposed to be a machine learning card anyway? How well it performs in gaming shouldn't matter that much, provided it performs well in machine learning tasks.

5 minutes ago, laminutederire said:

It might or might not be the case, but I don't understand what people are expecting from it. Isn't it supposed to be a machine learning card anyway? How well it performs in gaming shouldn't matter that much, provided it performs well in machine learning tasks.

It's a good indication of the 7nm node's characteristics; that's why people are interested.

4 minutes ago, cj09beira said:

It's a good indication of the 7nm node's characteristics; that's why people are interested.

What would have been interesting are the clocks, power consumption and so on. Since we all know Vega suffers from being clocked outside its efficient range, that could have told us whether 7nm helps with that.
