AMD RX Vega reviews

6 minutes ago, Dylanc1500 said:

I don't see MCM being a good (permanent) solution for performance scaling except in very specific workloads. IBM moved away from MCMs themselves.

I can see it working so long as the technology is there to do it. Technology-wise we quite often move away from something only to move back to it later when the technology changes and it's now better. Just look at serial vs parallel links for a very rough/bad example: computers moved from serial links to parallel, then back to serial again, so there's nothing stopping us from going back to MCM, then likely back again heh.


3 minutes ago, leadeater said:

I can see it working so long as the technology is there to do it. Technology-wise we quite often move away from something only to move back to it later when the technology changes and it's now better. Just look at serial vs parallel links for a very rough/bad example: computers moved from serial links to parallel, then back to serial again, so there's nothing stopping us from going back to MCM, then likely back again heh.

Ya, that is very true. Hmm, imagine running USB 3.1 in parallel over printer port pins.

 

Edited (finished) my last post.


2 minutes ago, Dylanc1500 said:

Mmmm, love me some Gen-Z. It would be great if they would bring it to the consumer level as an interconnect between CPU and GPU.

You'll get it from AMD if it does; they are one of the big contributors to the spec, and it's my assumption that Infinity Fabric is based heavily on Gen-Z anyway.


5 hours ago, Dujith said:

I have a question: I see that AMD is doing better at compute while being cheaper than Nvidia.

But isn't the increased power draw going to be an issue? Sure, for the one computer at your home it won't matter, but I can see this being a problem when you have a lot more and suddenly have to take that into account.

For compute, the efficiency is roughly level with Nvidia, plus or minus, and you get better FP64 and FP16 performance (both 2x the perf).
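
A quick way to sanity-check that is to compare theoretical peak throughput from the published shader counts and clocks (peak FLOPS = shaders x 2 x clock x precision rate). A minimal sketch, assuming approximate launch specs and the usual rate ratios; these are theoretical peaks, not measured performance:

```python
# Back-of-the-envelope peak throughput: shaders * 2 FLOPs/cycle * clock.
# Spec values below are approximate launch figures (assumed), not benchmarks.

def tflops(shaders, clock_ghz, rate=1.0):
    """Theoretical peak TFLOPS at a precision rate relative to FP32."""
    return shaders * 2 * clock_ghz * rate / 1000

#        name       shaders  GHz   FP16 rate  FP64 rate
gpus = [("Vega 64",  4096,  1.55,  2.0,       1 / 16),
        ("GTX 1080", 2560,  1.73,  1 / 64,    1 / 32)]

for name, shaders, ghz, fp16, fp64 in gpus:
    print(f"{name}: FP32 {tflops(shaders, ghz):.1f}, "
          f"FP16 {tflops(shaders, ghz, fp16):.1f}, "
          f"FP64 {tflops(shaders, ghz, fp64):.2f} TFLOPS")
```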


11 hours ago, Ezilkannan said:

Not anytime soon. Nvidia is not at all threatened by Vega.

They don't need to be threatened to release anything. This was the case for the past generations of GeForce.

No threat on the high end; the battle was always at mid range for AMD, while Nvidia has been enjoying the crown for years.



2 hours ago, leadeater said:

You'll get it from AMD if it does; they are one of the big contributors to the spec, and it's my assumption that Infinity Fabric is based heavily on Gen-Z anyway.

That's a good possibility. Have they released any info on the details of the fabric? I could see IBM possibly being one of the first implementers. They are the first to release PCIe 4.0 hardware (that I'm aware of, anyway).


1 minute ago, Dylanc1500 said:

That's a good possibility. Have they released any info on the details of the fabric? I could see IBM possibly being one of the first implementers. They are the first to release PCIe 4.0 hardware (that I'm aware of, anyway).

Not really; my assumption is based simply on how AMD was saying Infinity Fabric is memory semantic and could be used over multiple different transport layers, which is similar to Gen-Z, and the fact that AMD is part of the consortium. I could be completely wrong, assumptions are good like that lol.
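
As a rough illustration of what "memory semantic over multiple transport layers" means: the protocol exposes plain loads and stores while the physical link underneath stays swappable. A minimal sketch of the idea; every class and method name here is made up for illustration, this is not AMD's or the Gen-Z consortium's actual API:

```python
# Illustrative only: a memory-semantic protocol (plain loads/stores)
# decoupled from the transport it runs over, in the spirit of how
# Gen-Z and Infinity Fabric are described. All names are hypothetical.

from abc import ABC, abstractmethod

class Transport(ABC):
    """Physical layer the fabric runs over: on-package traces, PCB, cable."""
    @abstractmethod
    def send(self, packet: bytes) -> bytes: ...

class OnPackageLink(Transport):
    def send(self, packet: bytes) -> bytes:
        return packet  # loopback stand-in for a real die-to-die link

class Fabric:
    """Exposes load/store; the transport underneath is swappable."""
    def __init__(self, transport: Transport):
        self.transport = transport

    def load(self, addr: int) -> bytes:
        # A read is just a small request packet, whatever the wire is.
        return self.transport.send(b"RD" + addr.to_bytes(8, "little"))

    def store(self, addr: int, data: bytes) -> None:
        self.transport.send(b"WR" + addr.to_bytes(8, "little") + data)

fabric = Fabric(OnPackageLink())  # the same code could run over a cable link
fabric.store(0x1000, b"\x2a")
```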


Just now, leadeater said:

Not really; my assumption is based simply on how AMD was saying Infinity Fabric is memory semantic and could be used over multiple different transport layers, which is similar to Gen-Z, and the fact that AMD is part of the consortium. I could be completely wrong, assumptions are good like that lol.

I don't get to deal with AMD much (not having much reason to in recent years), so I haven't had a whole lot of info on it. Hey, what fun would it be if we didn't make assumptions about everything!

 

My question is: if they move to an MCM design, do you think it's going to be similar to the setups of the old Obsidian multi-chip cards, or something with dies much closer, or even all in one die?


46 minutes ago, Dylanc1500 said:

My question is: if they move to an MCM design, do you think it's going to be similar to the setups of the old Obsidian multi-chip cards, or something with dies much closer, or even all in one die?

It won't be a single die, otherwise it doesn't solve the fabrication size limit and yield percentages.

 

In my dream land (I have no idea about actual GPU fabrication design) I'd move the ACEs, Hardware Schedulers, Command Processor, Workgroup Distributor, Multimedia Engine, Display Engine, HBCC, XDMA and PCIe out to their own die, then split the Graphics Pipeline, Compute Engines and L2 cache into their own die, but use a cluster of two Compute Engines, not four as in the current design.

 

You can then put down two Compute Engine dies to make a Vega 64, or just one, or three. You could also change the Geometry Engines and/or Pixel Engines now that things have been separated out, which in theory makes them easier to modify and cheaper.

 

The easy route is literally just two Polaris-sized dies on a single package connected using the IF, plus some magic to make the Command Processors coherent with each other etc.

 

I think it's a lot harder than most are expecting it to be. With CPUs, the weird performance issues and latency/bandwidth differences depending on which core and die things get executed on don't really have any visual indication, but on a GPU you could get some very weird results, like broken or delayed textures on half the screen, or horrific frame times if you try to prevent that with output buffers or something. There is less room for error in a GPU than there is in a CPU.
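
To make that dream-land split concrete, here is a toy model of the SKU scaling it would allow; the die contents and the two-engines-per-die figure come straight from the speculation above, and nothing here reflects a real AMD design:

```python
# Toy model of the split described above: one front-end die plus N
# compute dies of two Compute Engines each. Purely hypothetical.

FRONT_END_DIE = ["ACEs", "Hardware Schedulers", "Command Processor",
                 "Workgroup Distributor", "Multimedia Engine",
                 "Display Engine", "HBCC", "XDMA", "PCIe"]

def build_sku(compute_dies, engines_per_die=2):
    return {"dies": 1 + compute_dies,  # front-end + compute chiplets
            "compute_engines": compute_dies * engines_per_die}

print("front-end die:", ", ".join(FRONT_END_DIE))

# Two compute dies matches Vega 64's four Compute Engines; one or
# three dies give smaller/larger SKUs from the same pieces of silicon.
for n in (1, 2, 3):
    print(n, "compute die(s) ->", build_sku(n))
```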


14 minutes ago, leadeater said:

It won't be a single die, otherwise it doesn't solve the fabrication size limit and yield percentages.

 

In my dream land (I have no idea about actual GPU fabrication design) I'd move the ACEs, Hardware Schedulers, Command Processor, Workgroup Distributor, Multimedia Engine, Display Engine, HBCC, XDMA and PCIe out to their own die, then split the Graphics Pipeline, Compute Engines and L2 cache into their own die, but use a cluster of two Compute Engines, not four as in the current design.

 

You can then put down two Compute Engine dies to make a Vega 64, or just one, or three. You could also change the Geometry Engines and/or Pixel Engines now that things have been separated out, which in theory makes them easier to modify and cheaper.

 

The easy route is literally just two Polaris-sized dies on a single package connected using the IF, plus some magic to make the Command Processors coherent with each other etc.

 

I think it's a lot harder than most are expecting it to be. With CPUs, the weird performance issues and latency/bandwidth differences depending on which core and die things get executed on don't really have any visual indication, but on a GPU you could get some very weird results, like broken or delayed textures on half the screen, or horrific frame times if you try to prevent that with output buffers or something. There is less room for error in a GPU than there is in a CPU.

That's honestly not too bad of an idea. However, it could run back into the issue of die sizes. What sizes do you think those dies could end up being? Are you thinking same substrate or entirely separate?


Do you guys think we will see graphics cards with Vega 8, 11, or 20?

 

There were rumors a while ago about Vega 11 and 20, and we saw Vega 8 on the leaked Ryzen 5 2500U.



23 minutes ago, Dylanc1500 said:

That's honestly not too bad of an idea. However, it could run back into the issue of die sizes. What sizes do you think those dies could end up being? Are you thinking same substrate or entirely separate?

Well, most of the die size is in the Compute Engines, and there are 4 of them, so cutting down to two should save a fair amount of space. I'd imagine much like Threadripper/EPYC, with dies on the same substrate.

 

If the Command Processor die is around 100mm2 and the Compute Engine dies are 200mm2 each, you would be slightly larger in total size (the IF connections are assumed to account for the increase), but you could also go smaller for a cheaper SKU or larger for an even bigger one. You could have a monster with 2 HBM2 stacks on the left of the package, the Command Processor die middle-left beside the HBM, then 5 Compute Engine dies around it to form a square, which would make one huge GPU, but in theory at a cheaper cost since all the individual dies are small. That would be 300mm2 ish larger than Nvidia's monster GV100, heh.

 

Edit:

Thinking about it, 3 Compute Engines per die might be a better config and give more SKU options: you'd only ever need to deploy 2 dies for the biggest GPU, which is already 2 more Compute Engines than Vega 64, and you'd have options for SKUs ranging from 1 die with its Compute Engines fully enabled or not, up to 2 dies with Compute Engines fully enabled or not.
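
Running the numbers on those hypothetical sizes (the 100mm2 and 200mm2 figures are guesses from the posts above; GV100's roughly 815mm2 die is the one published figure here):

```python
# Package-area arithmetic for the hypothetical chiplet configs above.
# The die sizes are the guesses from the post; GV100 is the real figure.

CMD_DIE_MM2 = 100      # hypothetical front-end / command processor die
COMPUTE_DIE_MM2 = 200  # hypothetical compute die
GV100_MM2 = 815        # Nvidia GV100 published die size

for n in range(1, 6):
    total = CMD_DIE_MM2 + n * COMPUTE_DIE_MM2
    print(f"{n} compute dies: {total} mm2 total "
          f"({total - GV100_MM2:+d} mm2 vs GV100)")
```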


5 minutes ago, The Benjamins said:

Do you guys think we will see graphics cards with Vega 8, 11, or 20?

 

There were rumors a while ago about Vega 11 and 20, and we saw Vega 8 on the leaked Ryzen 5 2500U.

I hope we see Vega 8. I think we should start using V## to designate the name (so V64, V56), so a Vega 8 would be a V8. xD


2 hours ago, YoloSwag said:

They don't need to be threatened to release anything. This was the case for the past generations of GeForce.

No threat on the high end; the battle was always at mid range for AMD, while Nvidia has been enjoying the crown for years.

Apparently, Nvidia's CEO said they won't be launching it anytime soon for the consumer gaming space due to price constraints. http://www.pcgamer.com/nvidias-next-gen-volta-gaming-gpus-arent-arriving-anytime-soon/


13 hours ago, leadeater said:

AMD is a bit like the good old 'Jack of all trades but master of none' kind of thing.

This is the part that I find to be the funniest in all of these Intel/AMD/Nvidia threads recently... If I want a pure gaming system with the highest raw performance, then I would go with an Intel/Nvidia system. On the other hand, if I want a system that I can do more multi-tasking compute work and gaming on, then an AMD CPU with one of the new Vega cards actually looks fairly decent, especially if I like to keep some spare money in the bank...


48 minutes ago, Ezilkannan said:

Apparently, Nvidia's CEO said they won't be launching it anytime soon for the consumer gaming space due to price constraints. http://www.pcgamer.com/nvidias-next-gen-volta-gaming-gpus-arent-arriving-anytime-soon/

Yeah, this is bad news for all. They held back on the high-end market when they launched the GTX 680 because they were in a position to do so. Just like now.

 

So congrats to both AMD and Nvidia.


