
AMD reduces Navi GPU prices before launch

24 minutes ago, mr moose said:

 

They would find that in the perpetual ignorance of the internet promoting such a narrative. For every myth we successfully dispel, they simply create another five.

Those sand miners be jacking up the prices again?! Why can't they scoop up sand faster?!


1 minute ago, thorhammerz said:

Those sand miners be jacking up the prices again?! Why can't they scoop up sand faster?!

 

They need it for their silicon crystals; I heard Intel wasted a lot trying to get 10nm working.

Grammar and spelling are not indicative of intelligence/knowledge. Not having the same opinion does not always mean a lack of understanding.


5 hours ago, mr moose said:

Just want to correct one notion: Intel didn't stop or slow their innovation or development; they just couldn't develop or innovate fast enough to stay in front. And I am only saying that because it sounds like they are doing these things intentionally; they are not. No company intentionally sits on its ass or sandbags unless they are unquestionably that far in front (Intel isn't) or they don't want to be a company anymore.

At first I thought they sandbagged to avoid getting split up for being a monopoly; they needed AMD to get some market share.

 

Though it doesn't seem like that's the case now.

MSI GX660 + i7 920XM @ 2.8GHz + GTX 970M + Samsung SSD 830 256GB


Just now, Neftex said:

At first I thought they sandbagged to avoid getting split up for being a monopoly; they needed AMD to get some market share.

 

Though it doesn't seem like that's the case now.

 

I think it's fair to expect a little bit of sandbagging if they have no obvious competition on the horizon (as Intel did for a bit back in the late FX-8350 days), but that is normal business practice, and it is usually done as much to conserve costs that get channeled into further R&D; it is never a complacent or intentional choice to stop working on further products.

 

I think it's fair to say that when any company can't compete, it's because they can't, not because they didn't try. I get in trouble for saying that about AMD and their GPU lineup a lot, but it's true for all companies: X299 was a shemozzle because at that time Intel couldn't compete and had to rush a product. I think the 3000 series will show us more clearly how far along Intel is with 14nm and 10nm. If Intel doesn't respond, it's because they can't, not because they don't want to or stopped trying.

Grammar and spelling are not indicative of intelligence/knowledge. Not having the same opinion does not always mean a lack of understanding.


8 hours ago, Shorty88jr said:

Honestly, what amazes me is the fact that people still don't understand that AMD will never be competitive in the ultra high end again. Now, before people yell at me about being an Nvidia shill, I would like to say I have an AMD CPU and almost got a 5700 XT instead of the 2070 Super, but some of the features on the Nvidia card made me get the 2070 Super; that's beside the point. Nvidia isn't Intel; they aren't sitting on their butt doing nothing. Nvidia has multiple paths to take whenever AMD comes out with a semi-competitive product. They can do what they did this time and release an updated card early, or, if that doesn't work, keep dropping prices far further than AMD can, because they have higher profit margins to start with. The reason Intel fell behind is that they stopped innovating; Nvidia didn't.

It's all about scaling. If this architecture scales, AMD has a path towards the ultra high end. Hell, even a 60 CU RDNA design should be faster than a 2080 Ti. Going beyond that is the question.

 

The profit margins on the 5700 should be very high, by the way; the yields and die size should ensure that.
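
To put rough numbers on the die size/yield point (a back-of-the-envelope sketch; Navi 10's ~251 mm² is public, but the defect densities below are my own guesses, not anything AMD or TSMC has published):

```python
# Rough dies-per-wafer and yield estimate for a Navi 10-sized die.
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic approximation: wafer area / die area minus an edge-loss term."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Simple Poisson yield model: exp(-area * defect density)."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

navi10 = 251.0  # mm^2, the published Navi 10 die size
candidates = dies_per_wafer(navi10)
for d0 in (0.1, 0.2):  # guessed defect densities, defects/cm^2
    good = candidates * poisson_yield(navi10, d0)
    print(f"D0={d0}: ~{candidates} candidates, ~{good:.0f} good dies per wafer")
```

A mid-size die like that gets far more good dies per wafer than a big one would, which is where the margin headroom comes from.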


1 hour ago, Trixanity said:

It's all about scaling. If this architecture scales, AMD has a path towards the ultra high end. Hell, even a 60 CU RDNA design should be faster than a 2080 Ti. Going beyond that is the question.

 

The profit margins on the 5700 should be very high, by the way; the yields and die size should ensure that.

For RDNA to be truly great it needs more shader engines, so that it doesn't cap at 64 CUs again, and it needs work on power, though if they make a fat chip they might be able to get away with simply lowering the clocks a bit to use less power that way. Either way, their next high-end card will probably go back to HBM2/3.

PS: I wonder how the now-dual CUs affect the max CU count per shader engine.
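
On the clocks point: dynamic power scales roughly with voltage squared times frequency, and lower clocks usually let you drop voltage too, so a small clock cut saves a disproportionate amount of power. A toy illustration (the 10%/8% figures are invented, just to show the shape of the curve):

```python
# Why "make it fat and clock it lower" works: dynamic power scales
# roughly as C * V^2 * f, and lower clocks permit lower voltage.
def relative_dynamic_power(freq_scale: float, volt_scale: float) -> float:
    return volt_scale ** 2 * freq_scale

# Hypothetical numbers: 10% lower clocks enabling 8% lower voltage.
p = relative_dynamic_power(0.90, 0.92)
print(f"~{(1 - p) * 100:.0f}% dynamic power saved for a 10% clock cut")  # ~24%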


37 minutes ago, cj09beira said:

For RDNA to be truly great it needs more shader engines, so that it doesn't cap at 64 CUs again, and it needs work on power, though if they make a fat chip they might be able to get away with simply lowering the clocks a bit to use less power that way. Either way, their next high-end card will probably go back to HBM2/3.

PS: I wonder how the now-dual CUs affect the max CU count per shader engine.

It's evident that more memory bandwidth is required to feed a large chip, so the only options are wider GDDR6 or HBM. The 2080 Ti opted for the former.
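
For reference, the raw bandwidth math (a quick sketch; the bus widths and per-pin rates are the published specs of the cards named):

```python
# Peak memory bandwidth = (bus width in bits / 8) * data rate per pin.
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin  # GB/s

print(bandwidth_gb_s(256, 14.0))   # RX 5700 XT, GDDR6:  448 GB/s
print(bandwidth_gb_s(352, 14.0))   # RTX 2080 Ti, GDDR6: 616 GB/s
print(bandwidth_gb_s(4096, 2.0))   # Radeon VII, HBM2:  1024 GB/s
```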

 

It's also pretty likely that clock speed will drop a fair bit with such a big design. That would help power efficiency, but from what I can tell AMD still left some efficiency gains on the table in order to get Navi out sooner.

 

The shader engines are very different this time around. They're much bigger now, so Navi 10 only has two yet still manages roughly double the performance of Polaris' four. It depends on how they can scale that up. I don't see it as possible to use three engines for 60 CUs (because of the shared resources), so they need to alter the engine configuration to get there and/or go straight to four, but an unaltered four-engine design would obviously be a massive 80 CU chip that you would somehow need to power and feed with HBM. They need something between those two, in whatever way they hope to accomplish it.

In other words, they need a design that's on the good side of a 2080 Super as well as a Ti (that's two designs). It also makes sense to have at least four chip designs (at roughly 20/40/60/80 CUs, one for each performance level). I think if they just start watering down the shader engines you'll end up with Vega-like bottlenecked execution units, so I'm definitely interested to see how they intend to accomplish it, because the chip would become extra thiccc if they just bolt on more shader engines as-is.
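
To spell out the bookkeeping behind those options (a sketch: only the two-engine, 40 CU layout is anything like confirmed, the other rows are hypothetical scalings that keep Navi 10's per-engine layout):

```python
# CU totals for hypothetical RDNA scalings, keeping Navi 10's layout of
# 10 CUs per workgroup and 2 workgroups per shader engine.
CUS_PER_WORKGROUP = 10
WORKGROUPS_PER_ENGINE = 2

for engines in (2, 3, 4):
    cus = engines * WORKGROUPS_PER_ENGINE * CUS_PER_WORKGROUP
    label = "Navi 10" if engines == 2 else "hypothetical"
    print(f"{engines} shader engines -> {cus} CUs ({label})")
# 2 -> 40 (Navi 10), 3 -> 60, 4 -> 80
```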


44 minutes ago, Trixanity said:

It's evident that more memory bandwidth is required to feed a large chip, so the only options are wider GDDR6 or HBM. The 2080 Ti opted for the former.

 

It's also pretty likely that clock speed will drop a fair bit with such a big design. That would help power efficiency, but from what I can tell AMD still left some efficiency gains on the table in order to get Navi out sooner.

 

The shader engines are very different this time around. They're much bigger now, so Navi 10 only has two yet still manages roughly double the performance of Polaris' four. It depends on how they can scale that up. I don't see it as possible to use three engines for 60 CUs (because of the shared resources), so they need to alter the engine configuration to get there and/or go straight to four, but an unaltered four-engine design would obviously be a massive 80 CU chip that you would somehow need to power and feed with HBM. They need something between those two, in whatever way they hope to accomplish it.

In other words, they need a design that's on the good side of a 2080 Super as well as a Ti (that's two designs). It also makes sense to have at least four chip designs (at roughly 20/40/60/80 CUs, one for each performance level). I think if they just start watering down the shader engines you'll end up with Vega-like bottlenecked execution units, so I'm definitely interested to see how they intend to accomplish it, because the chip would become extra thiccc if they just bolt on more shader engines as-is.

Problem is, unless I missed something, there is only that one slide pointing towards it being two shader engines, so I'm not sure how accurate it is. And the tech day AMD did has yet to become public, which is a massive shame.


This is when you really begin to regret buying a Radeon VII early... I could really have done with this, because it looks great. That Division 2 bit at the end is SO sharp, I couldn't believe what I was watching. Nvidia has spent a bucketload on the blurfest known as DLSS, and AMD just released a simple sharpening algorithm and beat it.

 

I might trade my Radeon VII down the line for the new Navi flagship (5950 XT or whatever it'll be called) because that feature is tasty.

 

Navi is just getting better and better. Anti-lag and Radeon sharpening have both exceeded expectations and aren't what we thought they were going to be. If it weren't for the blower cooler, they'd have knocked this out of the park.
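
For anyone wondering what "a simple sharpening algorithm" even looks like, here's the textbook version. To be clear, this is a generic unsharp-mask style kernel, not AMD's actual contrast-adaptive sharpening; it just shows how little machinery basic sharpening needs next to a trained upscaler like DLSS:

```python
import numpy as np

# Textbook 3x3 sharpening kernel: boost the centre pixel and subtract
# its neighbours to amplify local contrast. Kernel sums to 1, so flat
# regions pass through unchanged.
KERNEL = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=np.float32)

def sharpen(gray: np.ndarray) -> np.ndarray:
    """Apply the kernel to a 2D grayscale image, edges clamped."""
    padded = np.pad(gray.astype(np.float32), 1, mode="edge")
    out = np.zeros(gray.shape, dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            out += KERNEL[dy, dx] * padded[dy:dy + gray.shape[0],
                                           dx:dx + gray.shape[1]]
    return np.clip(out, 0, 255).astype(np.uint8)
```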


2 hours ago, cj09beira said:

Problem is, unless I missed something, there is only that one slide pointing towards it being two shader engines, so I'm not sure how accurate it is. And the tech day AMD did has yet to become public, which is a massive shame.

I'm still waiting for AnandTech's deep dive on it, but from what I can dig up the slide is accurate enough: 5 dual compute units per workgroup (10 CUs total), with two workgroups per shader engine, meaning two shader engines for a grand total of 40 CUs, or 20 DCUs.
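
Quick sanity check that those numbers multiply out to the slide's totals:

```python
# Navi 10 compute hierarchy as described on the slide.
shader_engines = 2
workgroups_per_engine = 2
dual_cus_per_workgroup = 5
cus_per_dual_cu = 2

dcus = shader_engines * workgroups_per_engine * dual_cus_per_workgroup
print(dcus, dcus * cus_per_dual_cu)  # 20 DCUs, 40 CUs
```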


15 minutes ago, MeatFeastMan said:

This is when you really begin to regret buying a Radeon VII early... I could really have done with this, because it looks great. That Division 2 bit at the end is SO sharp, I couldn't believe what I was watching. Nvidia has spent a bucketload on the blurfest known as DLSS, and AMD just released a simple sharpening algorithm and beat it.

 

I might trade my Radeon VII down the line for the new Navi flagship (5950 XT or whatever it'll be called) because that feature is tasty.

 

Navi is just getting better and better. Anti-lag and Radeon sharpening have both exceeded expectations and aren't what we thought they were going to be. If it weren't for the blower cooler, they'd have knocked this out of the park.

BFV also looked very good, and that DLSS vs. 0.78 sharpened comparison was a complete win for AMD's implementation. DLSS made the tank surface extremely muddy.


15 hours ago, mr moose said:

Just want to correct one notion: Intel didn't stop or slow their innovation or development; they just couldn't develop or innovate fast enough to stay in front.

I feel Intel spent more time developing in areas muggles don't really care about, i.e. the server/workstation market.


This topic is now closed to further replies.
