
Radeon VII neck and neck with RTX 2080 in rumored 3DMark leak

exetras
39 minutes ago, Mira Yurizaki said:

NVIDIA could've asked a lot of publishers beforehand to send in some representative samples.

 

I'm not talking about DLSS though. I'm talking about the AI these people used to upscale 2D images without a reference point... because there is no reference point available. And while it was pointed out that the Doom upscale needed cleanup, no such mention was made of the Morrowind one.

 

Also, people, I'm aware of how a neural network works. What threw me off is that NVIDIA seemed to advertise DLSS as something packaged and ready to go, rather than as an entire system with a process behind it.

 

You completely threw me for a loop there as you jumped off on a super wild tangent. The reason it works is that you're dealing with textures/limited images, not full-blown full-scene renders. There's a huge difference between those. Textures are limited and pre-defined. A rendered scene, even within one frame, is enormously more complex, containing data from dozens, sometimes hundreds, of textures and lighting effects and shadows and so on and so forth. There's a huge jump in complexity to get an accurate result in that situation.

 

As for publishers sending stuff, bear in mind they probably need a fair variety of scenes, and they probably need the dev teams to go through each and every scene frame by frame to check that the 64x supersample looks the way it's supposed to. Even the best systems will produce the odd glitched result. It's not just about providing game imagery, it's about providing the right imagery.

 

Also, I raised the neural net stuff because @leadeater and I got into a discussion previously about DLSS and Tensor Core denoising, where he was calling it equivalent to pre-rendering. I was having trouble explaining it, and a computer crash eating a post meant I never got back to him, but it's clear there's a serious lack of understanding of what DLSS and the like actually do.


24 minutes ago, CarlBar said:

Also, I raised the neural net stuff because @leadeater and I got into a discussion previously about DLSS and Tensor Core denoising, where he was calling it equivalent to pre-rendering. I was having trouble explaining it, and a computer crash eating a post meant I never got back to him, but it's clear there's a serious lack of understanding of what DLSS and the like actually do.

The pre-render part was in reference to those Doom texture upscales, where the textures are upscaled and then you replace the game's texture files, so when the game loads those textures in, they are the upscaled ones. All the work that actually makes the game look better, those new and better textures, was pre-generated.

 

That's the big difference between that and DLSS: DLSS is done on every single frame, all of it, in real time.

 

Edit:

The only other time I would have referenced any pre-computing in relation to DLSS would be the training of it; the application of the trained model that runs on our GPUs when playing the game is inference, which runs on the Tensor cores.

 

Quote

Training vs Inference


Machine Learning has two distinct phases: training and inference. Training generally takes a long time and can be resource heavy. Performing inference on new data is comparatively easy and is the essential technology behind computer vision, voice recognition, and language processing tasks.

https://simpliv.wordpress.com/2018/08/14/what-is-ai/?utm_campaign=News&utm_medium=Community&utm_source=DataCamp.com

 

It takes a long time to learn to speak, or learn maths, or any other skill, but once you have learnt it (even incorrectly) it's rather simple to apply. We all hope we aren't applying incorrect knowledge. AI and machine learning aren't much different in that respect.
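
To make the training/inference split concrete, here's a minimal PyTorch-style sketch. The tiny network, the data shapes, and the training loop are all invented for illustration; this is nothing like NVIDIA's actual DLSS internals, it just shows why training is the heavy offline step and inference is the cheap per-frame one:

```python
import torch
import torch.nn as nn

# Toy upscaler for illustration only -- NOT NVIDIA's actual DLSS network.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 3 * 4, 3, padding=1),  # predict a 2x2 block of sub-pixels per pixel
    nn.PixelShuffle(2),                  # rearrange channels into 2x resolution
)

# Stand-in data: real pairs would be low-res frames matched against the
# 64x supersampled "ground truth" frames the game developer supplies.
training_pairs = [(torch.rand(1, 3, 270, 480), torch.rand(1, 3, 540, 960))
                  for _ in range(4)]

# --- Training: done once, offline, on the vendor's hardware (slow, heavy) ---
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.MSELoss()
for low_res, ground_truth in training_pairs:
    optimizer.zero_grad()
    loss = loss_fn(model(low_res), ground_truth)
    loss.backward()
    optimizer.step()

# --- Inference: done per frame, in real time, on the player's GPU (cheap) ---
model.eval()
with torch.no_grad():
    frame = torch.rand(1, 3, 270, 480)   # a freshly rendered low-res frame
    upscaled = model(frame)              # one forward pass per frame
print(upscaled.shape)                    # torch.Size([1, 3, 540, 960])
```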


13 hours ago, Madgemade said:

This has been done to death.

They can't go 8GB because they need 4 stacks to get the 1TB/s memory bandwidth. Vega 10 (Vega 56/64) has two stacks and 512 GB/s of bandwidth.

If they had two stacks it would mean no better bandwidth than old Vega. From tests and memory overclocking results it is well known that Vega 10 is bandwidth starved.

They can't use 4x 2GB stacks because they don't come in that size.

 

So there will never be an 8GB 7nm Vega; if there were, it would barely be any better than Vega 64!

 

11 hours ago, Suika said:

An 8GB Radeon VII is a trash idea.

An 8GB Radeon VII is a trash idea.

An 8GB Radeon VII is a trash idea.

An 8GB Radeon VII is a trash idea.

An 8GB Radeon VII is a trash idea.

An 8GB Radeon VII is a trash idea.

An 8GB Radeon VII is a trash idea.

An 8GB Radeon VII is a trash idea.

 

Sorry, just wanted to make sure I nailed down that a Radeon VII with 8GB is a trash idea. The absolutely insane memory bandwidth is the reason we have a Radeon VII and not just a refreshed Vega 64. Cut the memory and you cut something like 75% of the performance uplift, and suddenly AMD has to cut the price down to $500 to compete with a lower-tier card (that $150 savings means nothing at that point).

 

Radeon VII exists for publicity while we're all waiting for big Navi. NVIDIA launched shiny new things, so AMD had to pony up something.

2GB stacks of HBM2 do exist, and it would be very possible for AMD to cut down to a 4x2GB configuration and still keep the 1TB/s memory bandwidth. In fact, I'd love to see a Radeon VII M with only 56 CUs and 8GB of HBM2 in a 4x2GB configuration at a price of $500. It would outperform the 2070 and give the 2080 a run for its money, since GCN is still front-end bottlenecked.
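
For reference, the arithmetic behind keeping 1TB/s with four smaller stacks; a rough back-of-envelope sketch, assuming the 2 Gb/s-per-pin HBM2 speed grade the Radeon VII is reported to use:

```python
# Back-of-envelope HBM2 bandwidth: every stack has a 1024-bit interface,
# so total bandwidth depends on how many stacks you have, not how big they are.
PINS_PER_STACK = 1024   # interface width per stack, in bits
GBIT_PER_PIN = 2.0      # assumed 2 Gb/s per pin (Radeon VII-class HBM2)

def bandwidth_gb_s(stacks: int) -> float:
    """Aggregate bandwidth in GB/s for a given number of HBM2 stacks."""
    return stacks * PINS_PER_STACK * GBIT_PER_PIN / 8  # bits -> bytes

print(bandwidth_gb_s(2))  # 512.0  -- two stacks, Vega 56/64 territory
print(bandwidth_gb_s(4))  # 1024.0 -- four stacks, the Radeon VII's ~1 TB/s
# Stack capacity (2GB vs 4GB) never appears in the formula, which is why
# a 4x2GB configuration would keep the same ~1 TB/s figure.
```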


31 minutes ago, TheDankKoosh said:

2GB stacks of HBM2 do exist, and it would be very possible for AMD to cut down to a 4x2GB configuration and still keep the 1TB/s memory bandwidth. In fact, I'd love to see a Radeon VII M with only 56 CUs and 8GB of HBM2 in a 4x2GB configuration at a price of $500. It would outperform the 2070 and give the 2080 a run for its money, since GCN is still front-end bottlenecked.

I doubt 4 stacks of 2GB would be much cheaper, and you'd also be taking up production-line time to make this different configuration of GPU package. Slapping 'Gaming' on an existing product is a lot simpler and cheaper for AMD than creating a new product, even one that's only slightly different.

 

Hell, maybe they even overproduced MI50s and this is an easy way to move the stock more quickly.

 

In any event, a GPU that performs the same as, or close to, a competing product will be priced much the same. Drop it to 8GB and, with the card still performing on par with an RTX 2080 (or claimed to), it would be priced exactly the same, unless Nvidia dropped prices and AMD couldn't follow due to production cost, which is the only situation where I could see the 16GB being an issue.


1 hour ago, CarlBar said:

You completely threw me for a loop there as you jumped off on a super wild tangent. The reason it works is that you're dealing with textures/limited images, not full-blown full-scene renders. There's a huge difference between those. Textures are limited and pre-defined. A rendered scene, even within one frame, is enormously more complex, containing data from dozens, sometimes hundreds, of textures and lighting effects and shadows and so on and so forth. There's a huge jump in complexity to get an accurate result in that situation.

If DLSS is a post-processing technique, then the render is already a 2D image. So no, in theory it shouldn't be any more complicated than upscaling any other image. I can see some issues with the output producing a result that could confuse the upscaler, but that's about it.

 

The thing that's getting to me is that in a game there's an impractically large number of scenes and angles to choose from. It's easy for a tech demo or a benchmark to have DLSS applied, because it's nothing more than a movie where the frames are generated on the fly, and it's going to be the same frames every time.


1 minute ago, leadeater said:

I doubt 4 stacks of 2GB would be much cheaper, and you'd also be taking up production-line time to make this different configuration of GPU package. Slapping 'Gaming' on an existing product is a lot simpler and cheaper for AMD than creating a new product, even one that's only slightly different.

 

Hell, maybe they even overproduced MI50s and this is an easy way to move the stock more quickly.

 

In any event, a GPU that performs the same as, or close to, a competing product will be priced much the same. Drop it to 8GB and, with the card still performing on par with an RTX 2080 (or claimed to), it would be priced exactly the same, unless Nvidia dropped prices and AMD couldn't follow due to production cost, which is the only situation where I could see the 16GB being an issue.

Cutting down to a 4x2GB configuration would save AMD about $150, and cutting a different SKU down to 56 CUs would allow defective dies to be utilized where they would otherwise be wasted. AMD isn't making money on the Radeon VII; they merely made it to appease the greater masses so they aren't totally left behind. A theoretical Radeon VII M would allow AMD to make money where they would otherwise be losing it on defective MI50 dies.


5 minutes ago, Mira Yurizaki said:

Where are you getting that figure from?

The price figures for HBM2, which as of now are estimated at $160 for 8GB.


4 minutes ago, Mira Yurizaki said:

Where are you getting that figure from?

The price of the 16GB card is only rumor anyway, but half the memory with the same number of stacks isn't half the price. Then you need to add on the cost of a new production line for that new GPU package, further reducing any cost savings. If it were worth it, it would have been done, in my opinion.


What time do we start getting real reviews?


2 minutes ago, TheDankKoosh said:

The price figures for HBM2, which as of now are estimated at $160 for 8GB.

That price is for two 4GB stacks, at the $80-per-stack price of that capacity. A 2GB stack won't be $40 per stack.


4 minutes ago, leadeater said:

That price is for two 4GB stacks, at the $80-per-stack price of that capacity. A 2GB stack won't be $40 per stack.

I don't figure it would be much more, probably $45 per 2GB stack. I'm just kind of spitballing an idea here anyway. I just think AMD should have a mid-to-high-range competitor to combat the 2060/2070 till midrange Navi launches.


@TheDankKoosh The problem is you can't just get rid of two stacks and call it a day. Getting rid of two stacks means you lose half the bandwidth, because GPU memory is set up so that each memory chip is its own memory channel.

 

You'd have to build 4 stacks at half density or something. That's not going to save you a lot of money in the manufacturing process.


4 minutes ago, TheDankKoosh said:

I don't figure it would be much more, probably $45 per 2GB stack. I'm just kind of spitballing an idea here anyway. I just think AMD should have a mid-to-high-range competitor to combat the 2060/2070 till midrange Navi launches.

Yeah, I really don't think it'd be that cheap; it's probably more like $60-70 per stack. Otherwise I don't see why they wouldn't have just refreshed Vega 56 and 64 to remain mainstream during the RTX launch.

 

AMD had the MI50 and decided that if they just neutered some of the professional features and gave it the gamer A E S T H E T I C, they'd have a card that barely competes with RTX, just enough to remain on consumer and enthusiast minds while we wait for Navi. If AMD were serious about competing, I doubt we'd be hearing all these "lack of supply and availability" rumors that we are.

 

Seriously guys, if you really want a shaved-down Radeon VII, go buy a Vega 64. If you want lower-end Vega, it's still purchasable, but by the looks of it, nobody is buying it. I mean, it's $100-150 cheaper than an RTX 2070 usually sells for and is actually a bit competitive with it, especially considering most benchmark comparisons use the higher-quality (and more expensive) 2070 silicon.


22 minutes ago, TheDankKoosh said:

I don't figure it would be much more, probably $45 per 2GB stack. I'm just kind of spitballing an idea here anyway. I just think AMD should have a mid-to-high-range competitor to combat the 2060/2070 till midrange Navi launches.

I was thinking more around $50-$60, though probably not $60, but there is a chance. There's still the common buffer die on all HBM stacks, and on top of that you layer the memory dies; each time you do that there's a chance you wreck the whole thing. Maybe the yields on a 2-high stack really are that much better than on a 4-high, putting the cost at the lower end.

 

Then you have the same problem when making the actual GPU. You're mounting 5 components on an interposer, and if any one of those is a bad mount, or causes a defect, or becomes defective during the process, everything has to be scrapped. That's the main reason I don't see a lower-memory-capacity or lower-CU product: it's so much cheaper to multipurpose a single package than to make several. Plus, when something does go wrong during the interposer mounting phase, the defect won't be the kind where you can disable a few CUs and get a usable part; it's more typically completely ruined.

 

That's the biggest drawback to using HBM: overall package yields suck, even if the die yields are good.
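
A quick toy calculation shows how fast those multiplied failure chances eat into yields; every percentage in it is a made-up assumption purely for illustration, not a real figure:

```python
# Toy package-yield model: an HBM GPU package only works if the die, all four
# memory stacks, and every interposer mount turn out good, so the yields multiply.
die_yield = 0.90     # the GPU die itself
stack_yield = 0.95   # each HBM2 stack (taller stacks would presumably be lower)
mount_yield = 0.98   # each of the 5 mounts on the interposer (die + 4 stacks)

package_yield = die_yield * stack_yield**4 * mount_yield**5
print(f"{package_yield:.1%}")  # ~66.3% -- far below the 90% die yield alone
```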


2 minutes ago, leadeater said:

Then you have the same problem when making the actual GPU. You're mounting 5 components on an interposer, and if any one of those is a bad mount, or causes a defect, or becomes defective during the process, everything has to be scrapped. That's the main reason I don't see a lower-memory-capacity or lower-CU product: it's so much cheaper to multipurpose a single package than to make several. Plus, when something does go wrong during the interposer mounting phase, the defect won't be the kind where you can disable a few CUs and get a usable part; it's more typically completely ruined.

Honestly, I keep forgetting about this bit. It further solidifies the idea that the Radeon VII just isn't a feasible product to sell at $699; it's really just a marketing stunt for the time being.


14 hours ago, Suika said:

An 8GB Radeon VII is a trash idea.

An 8GB Radeon VII is a trash idea.

An 8GB Radeon VII is a trash idea.

An 8GB Radeon VII is a trash idea.

An 8GB Radeon VII is a trash idea.

An 8GB Radeon VII is a trash idea.

An 8GB Radeon VII is a trash idea.

An 8GB Radeon VII is a trash idea.

 

Sorry, just wanted to make sure I nailed down that a Radeon VII with 8GB is a trash idea. The absolutely insane memory bandwidth is the reason we have a Radeon VII and not just a refreshed Vega 64. Cut the memory and you cut something like 75% of the performance uplift, and suddenly AMD has to cut the price down to $500 to compete with a lower-tier card (that $150 savings means nothing at that point).

 

Radeon VII exists for publicity while we're all waiting for big Navi. NVIDIA launched shiny new things, so AMD had to pony up something.

 

15 hours ago, Madgemade said:

This has been done to death.

They can't go 8GB because they need 4 stacks to get the 1TB/s memory bandwidth. Vega 10 (Vega 56/64) has two stacks and 512 GB/s of bandwidth.

If they had two stacks it would mean no better bandwidth than old Vega. From tests and memory overclocking results it is well known that Vega 10 is bandwidth starved.

They can't use 4x 2GB stacks because they don't come in that size.

 

So there will never be an 8GB 7nm Vega; if there were, it would barely be any better than Vega 64!

2GB HBM2 stacks exist, and they could still get the same performance with 4x2GB stacks. I'm not entirely sure how much cost that would save, though; I would speculate that if 4x4GB stacks cost $300, 4x2GB stacks would cost $200.


2 hours ago, Suika said:

Honestly, I keep forgetting about this bit. It further solidifies the idea that the Radeon VII just isn't a feasible product to sell at $699; it's really just a marketing stunt for the time being.

AMD needs to be in the market, so it's not really a stunt. It's a real product. They just don't want to sell too many of them.

 

Anyone seen any more early benchmark results?


21 minutes ago, Taf the Ghost said:

Anyone seen any more early benchmark results?

Early?! It's almost Friday, everyone is late. How dare they!

 

GMT+12 haha


2 minutes ago, leadeater said:

Early?! It's almost Friday, everyone is late. How dare they!

 

GMT+12 haha

The NDA lift is supposed to be pretty early, if I remember right.


Just now, Taf the Ghost said:

The NDA lift is supposed to be pretty early, if I remember right.

So what you are saying is don't go to sleep? I can do that.


Just now, leadeater said:

So what you are saying is don't go to sleep? I can do that.

I'm doing an all-night marathon of Radeon VII reviews; in the lead-up I'm going to rewatch some RTX 2080 reviews. You know, just for fun. Oh, and I need to type "First" in the YouTube comments.

 

\s


3 minutes ago, leadeater said:

So what you are saying is don't go to sleep? I can do that.

Lol.

 

Per Steve at HUB, it seems to be about 9 AM EDT, if he was remembering right. So I think it's about 3 hours out from now.


Though the interesting story going around is the really low supply in the EU. This was always going to be a small run, but I wonder if this is almost a soft launch. (Or AMD would really like to sell them for about 75 USD more each.)


1 minute ago, Taf the Ghost said:

Though the interesting story going around is the really low supply in the EU. This was always going to be a small run, but I wonder if this is almost a soft launch. (Or AMD would really like to sell them for about 75 USD more each.)

Oh yeah, good point. The EU will get rekt; there seems to be little incentive to try and hunt one down if you live in that region. I've heard technology is expensive in the EU, but I'm not really sure why. Is it something to do with Brexit or tariffs or similar?

