AMD RX 5700 Navi New arch

Firewrath9

At least they finally woke up and stopped trying to make HBM a thing.

 

Now they just need to stop aiming only at the cheap seats and make a high-end card people actually want to buy.

Ketchup is better than mustard.

GUI is better than Command Line Interface.

Dubs are better than subs


9 hours ago, Firewrath9 said:

yeah, but it is more than 10% cheaper (1080s were $480 IIRC, the 1070 Ti was ~$420)

You can always OC the mem; I got +620MHz before FPS got worse (10.5GHz mem) on my 1070 SC2.

My 1070 Ti can only do +500 on the mem, so I got the metaphorical shaft in that manner. If I could do 10GHz like your 1070 then I wouldn't even be complaining, but only Samsung memory can do that as far as I can tell; mine has Micron.

8086k Winner BABY!!

 

Main rig

CPU: R7 5800x3d (-25 all core CO 102 bclk)

Board: Gigabyte B550 AD UC

Cooler: Corsair H150i AIO

Ram: 32gb HP V10 RGB 3200 C14 (3733 C14) tuned subs

GPU: EVGA XC3 RTX 3080 (+120 core +950 mem 90% PL)

Case: Thermaltake H570 TG Snow Edition

PSU: Fractal ION Plus 760w Platinum  

SSD: 1tb Teamgroup MP34  2tb Mushkin Pilot-E

Monitors: 32" Samsung Odyssey G7 (1440p 240hz), Some FHD Acer 24" VA

 

GFs System

CPU: E5 1660v3 (4.3ghz 1.2v)

Mobo: Gigabyte x99 UD3P

Cooler: Corsair H100i AIO

Ram: 32gb Crucial Ballistix 3600 C16 (3000 C14)

GPU: EVGA RTX 2060 Super 

Case: Phanteks P400A Mesh

PSU: Seasonic Focus Plus Gold 650w

SSD: Kingston NV1 2tb

Monitors: 27" Viotek GFT27DB (1440p 144hz), Some 24" BENQ 1080p IPS

 

 

 


1 hour ago, Trik'Stari said:

At least they finally woke up and stopped trying to make HBM a thing.

 

Now they just need to stop aiming only at the cheap seats and make a high-end card people actually want to buy.

HBM is great, actually, but AMD was so far ahead of its time that it just wasn't feasible from a supply and price standpoint. You can be assured it'll be a standard sometime in the future, like GDDR5/5X/6 is now, because it's really the only way forward: a PCB simply cannot carry enough traces to all the memory modules once capacities go beyond 16GB.
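The trace-count argument comes down to bus width: GDDR fans a wide bus out across the PCB to many discrete chips, while an HBM stack gets a much wider bus over a silicon interposer. A rough sketch of the bandwidth arithmetic, using illustrative figures I'm assuming here (a 256-bit GDDR6 bus at 14 GT/s vs one 1024-bit HBM2 stack at 2 GT/s), not numbers from this thread:

```python
def bandwidth_gb_s(bus_width_bits: int, transfer_rate_gt_s: float) -> float:
    # GB/s = bus width (bits) * transfer rate (GT/s) / 8 bits per byte
    return bus_width_bits * transfer_rate_gt_s / 8

# Illustrative (assumed) configurations:
print(bandwidth_gb_s(256, 14))   # 448.0 GB/s over 256 data traces on the PCB
print(bandwidth_gb_s(1024, 2))   # 256.0 GB/s over 1024 interposer wires
```

The point is that HBM reaches comparable bandwidth at a far lower per-wire rate by going very wide, which is only practical on an interposer, not on PCB traces.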


On 5/31/2019 at 7:45 AM, RejZoR said:

HBM is great, actually, but AMD was so far ahead of its time that it just wasn't feasible from a supply and price standpoint. You can be assured it'll be a standard sometime in the future, like GDDR5/5X/6 is now, because it's really the only way forward: a PCB simply cannot carry enough traces to all the memory modules once capacities go beyond 16GB.

Hmm. The Quadro RTX 8000 has 48GB of VRAM. HBM2 is good, but too expensive. It also makes cooling a bitch, as the HBM stacks might be taller or shorter than the die.


On 5/31/2019 at 7:28 AM, TheDankKoosh said:

My 1070 Ti can only do +500 on the mem, so I got the metaphorical shaft in that manner. If I could do 10GHz like your 1070 then I wouldn't even be complaining, but only Samsung memory can do that as far as I can tell; mine has Micron.

+500 on the mem is 10GHz effective clock for GDDR5...

 


10 minutes ago, Firewrath9 said:

+500 on the mem is 10GHz effective clock for GDDR5...

 

No it isn't; most Pascal cards have 8GHz memory stock, not 9GHz.


1 hour ago, Firewrath9 said:

Hmm. The Quadro RTX 8000 has 48GB of VRAM. HBM2 is good, but too expensive. It also makes cooling a bitch, as the HBM stacks might be taller or shorter than the die.

It's funny though: we make things at the 7nm scale, but have problems aligning two big-ass chips measured in millimeters to the same height on the PCB... XD


6 hours ago, RejZoR said:

HBM is great, actually, but AMD was so far ahead of its time that it just wasn't feasible from a supply and price standpoint. You can be assured it'll be a standard sometime in the future, like GDDR5/5X/6 is now, because it's really the only way forward: a PCB simply cannot carry enough traces to all the memory modules once capacities go beyond 16GB.

My issue was primarily the price, the availability, and, more importantly, that the performance just wasn't there to justify the first two. IIRC none of their HBM cards could beat an overclocked 1080 Ti.


46 minutes ago, Trik'Stari said:

My issue was primarily the price, the availability, and, more importantly, that the performance just wasn't there to justify the first two. IIRC none of their HBM cards could beat an overclocked 1080 Ti.

The memory was; the GPU itself, not so much. It's like strapping 8 channels of DDR4 memory to Bulldozer and wondering why Intel could be faster with dual-channel DDR3: if you remove the memory bottleneck but you're bottlenecked in multiple other areas, you won't accomplish much. Besides, HBM will be for ultra-high performance going forward, until HBM gets cheaper and/or GDDR is no longer viable.


5 hours ago, Firewrath9 said:

Hmm. The Quadro RTX 8000 has 48GB of VRAM. HBM2 is good, but too expensive. It also makes cooling a bitch, as the HBM stacks might be taller or shorter than the die.

 

There's a reason that's a $10k card (beyond the usual reasons Quadros are expensive). 4GB GDDR6 chips are really expensive; I don't know exact numbers, but I wouldn't be shocked to find they're on par with HBM pricing. They're also the current upper limit. Whilst the cost is going to come down and capacities will likely increase some more, it's really not a good situation for GDDR6: the top-end cards keep pushing the limits of what it can do, whilst HBM-based cards are nowhere near their limits (that's at least 128GB of capacity and 3.2TB/s of bandwidth).


Seeing that this RX 5700 GPU is possibly at RTX 2070 level, possibly at a lower price than the 2070, this might be the GPU that we (or just I) really need for over-6GB-VRAM, 1080p 144fps standards without being overkill.

 

Do you guys think so?

 

I'm still looking for a GPU for my 1080p 144fps gameplay with more than 6GB of VRAM that doesn't cost me too much money or offer bad value for my needs, and one which is power efficient.

My system specs:

Spoiler

CPU: Intel Core i7-8700K, 5GHz Delidded LM || CPU Cooler: Noctua NH-C14S w/ NF-A15 & NF-A14 Chromax fans in push-pull configuration || Motherboard: MSI Z370i Gaming Pro Carbon AC || RAM: Corsair Vengeance LPX DDR4 2x8GB 2666 || GPU: EVGA GTX 1060 6GB FTW2+ DT || Storage: Samsung 860 Evo M.2 SATA SSD 250GB, 2x 2.5" HDDs 1TB & 500GB || ODD: 9mm Slim DVD RW || PSU: Corsair SF600 80+ Platinum || Case: Cougar QBX + 1x Noctua NF-R8 front intake + 2x Noctua NF-F12 iPPC top exhaust + Cougar stock 92mm DC fan rear exhaust || Monitor: ASUS VG248QE || Keyboard: Ducky One 2 Mini Cherry MX Red || Mouse: Logitech G703 || Audio: Corsair HS70 Wireless || Other: Xbox One S Controller


John Bridgman, one of AMD's Linux driver developers, made a couple of comments on RDNA at the Phoronix forums. I cannot really interpret these, so I will just post them here.

Quote


We used to talk about GCN as an ISA, but it seems that most people outside AMD think of GCN as a micro-architecture instead (i.e. an implementation of the ISA). RDNA is the GCN ISA, but not what you think of as the GCN architecture.

 

With ISA you mean Instruction Set Architecture?

 

Right... programming model would be another term.

R3xx-R5xx was separate PS/VS with vector+scalar instructions, R6xx was unified shader VLIW SIMD, GCN is unified shader SIMD+scalar.

There seemed to be a lot of confusion between ISA and architecture, leading to unintentional forum comments that in the CPU world would translate to something like "Ryzen is never going to be much better than an 8086 because they are both x86".

 

Can you delve into this a bit deeper? There's very little actual information about Navi.

 

Sorry, but not yet. I think the next level of detail will come out around E3.

 

https://www.phoronix.com/forums/forum/phoronix/latest-phoronix-articles/1103202-amd-is-aiming-for-radeon-rx-5700-navi-support-in-linux-5-3-mesa-19-2


He's basically saying what I've complained about whenever people led with "but muh GCN" in discussions of modern Radeon cards...


14 minutes ago, RejZoR said:

He's basically saying what I've complained about whenever people led with "but muh GCN" in discussions of modern Radeon cards...

Both yes and no. We know the ISA hasn't changed since Tahiti, but we also know the uarch couldn't keep up after 2014, and that it has been largely iterative from the start with only a few novel ideas. So it's definitely a valid complaint. People use the words AMD uses in its marketing, and AMD referred to it as GCN plus a generation number for years; they only really stopped less than a week ago. There's probably a very good reason for that, and it isn't to get people to stop saying "muh GCN".


I think that's exactly why AMD is dropping it: GCN now has negativity attached to it, even if there's nothing wrong with it. A card could be a stellar performer, and if GCN were mentioned anywhere, people would have doubts purely because of "muh GCN" syndrome.


3 minutes ago, RejZoR said:

I think that's exactly why AMD is dropping it: GCN now has negativity attached to it, even if there's nothing wrong with it. A card could be a stellar performer, and if GCN were mentioned anywhere, people would have doubts purely because of "muh GCN" syndrome.

Very doubtful. It's more likely that they've actually reworked the uarch considerably. They used to update 2-3 blocks per iteration and now they've finally found the time and money to do something different to get out from under the crushing weight of a uarch that reached its limits half a decade ago.

 

It remains to be seen whether this RDNA thing can actually do something, both short term and long term.


13 minutes ago, Trixanity said:

Very doubtful. It's more likely that they've actually reworked the uarch considerably. They used to update 2-3 blocks per iteration and now they've finally found the time and money to do something different to get out from under the crushing weight of a uarch that reached its limits half a decade ago.

 

It remains to be seen whether this RDNA thing can actually do something, both short term and long term.

Yeah, that's where I stand on it atm. I think they've put significant engineering resources into reworking it.

 

Now whether it's any good both in the form of Navi and other future RDNA iterations remains to be seen. 


37 minutes ago, Trixanity said:

Both yes and no. We know the ISA hasn't changed since Tahiti, but we also know the uarch couldn't keep up after 2014, and that it has been largely iterative from the start with only a few novel ideas. So it's definitely a valid complaint. 

So you're saying that the Radeon HD 5870, 6870, or maybe even the 6970 are the same as the Radeon HD 2900 and 3850, because they're all TeraScale, based on the VLIW architecture?

"Hell is full of good meanings, but Heaven is full of good works"


32 minutes ago, Stefan Payne said:

So you're saying that the Radeon HD 5870, 6870, or maybe even the 6970 are the same as the Radeon HD 2900 and 3850, because they're all TeraScale, based on the VLIW architecture?

[image: a scarecrow, i.e. a strawman]


Just showing what AMD means by GCN, as the models I listed are all what AMD calls "TeraScale".

The HD 2000 series and the 6000 series are, however, really different.

AMD even had a VLIW4 chip instead of the usual VLIW5 ones (only the 6900 series and the Trinity + Richland APUs)...

 

By "GCN" all they mean is the way the CUs work, nothing more...

"Hell is full of good meanings, but Heaven is full of good works"

Link to comment
Share on other sites

Link to post
Share on other sites

3 minutes ago, Stefan Payne said:

Just showing what AMD means by GCN, as the models I listed are all what AMD calls "TeraScale".

The HD 2000 series and the 6000 series are, however, really different.

AMD even had a VLIW4 chip instead of the usual VLIW5 ones (only the 6900 series and the Trinity + Richland APUs)...

 

By "GCN" all they mean is the way the CUs work, nothing more...

You can't retcon this shit. Certainly not that way.


19 hours ago, CarlBar said:

 

There's a reason that's a $10k card (beyond the usual reasons Quadros are expensive). 4GB GDDR6 chips are really expensive; I don't know exact numbers, but I wouldn't be shocked to find they're on par with HBM pricing. They're also the current upper limit. Whilst the cost is going to come down and capacities will likely increase some more, it's really not a good situation for GDDR6: the top-end cards keep pushing the limits of what it can do, whilst HBM-based cards are nowhere near their limits (that's at least 128GB of capacity and 3.2TB/s of bandwidth).

Well, first of all, it uses 16Gb (2GB) GDDR6 chips, just two of them for every slot. It's basically an RTX Titan with an extra set of 2GB chips and Quadro drivers.

The Titan RTX is $2500, so at MOST the price of doubling the VRAM is $2500 (ripping the GDDR6 off a different Titan and putting it on a Titan + drivers = Titan RTX).

Most likely it's ~$200, as 12GB of 8Gb GDDR6 is ~$50-80.

 

HBM is fine for datacenter usage, but it does not belong on mainstream cards.

There are also engineering problems due to the differing stack and die heights, which is why several Vegas had TJunction issues.


23 hours ago, RejZoR said:

It's funny though: we make things at the 7nm scale, but have problems aligning two big-ass chips measured in millimeters to the same height on the PCB... XD

They would have to adjust the substrate, which I don't think is possible.


On 5/31/2019 at 10:26 AM, TheDankKoosh said:

No it isn't; most Pascal cards have 8GHz memory stock, not 9GHz.

sigh

 

+500MHz on the base memory clock is 2500MHz, which is multiplied by 4 to get the effective clock: 10GHz.

Can you do some research on things before saying them?
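Firewrath9's arithmetic above, spelled out as a quick sanity check (treating GDDR5 as quad-pumped per the post; the 2000 MHz stock base clock is an assumption for a typical Pascal card):

```python
# GDDR5 treated as 4 transfers per base clock cycle, per the post above.
base_mhz = 2000      # assumed stock base memory clock on a Pascal card
offset_mhz = 500     # the +500 offset under discussion
effective_ghz = (base_mhz + offset_mhz) * 4 / 1000
print(effective_ghz)  # 10.0
```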


25 minutes ago, Firewrath9 said:

sigh

 

+500MHz on the base memory clock is 2500MHz, which is multiplied by 4 to get the effective clock: 10GHz.

Can you do some research on things before saying them?

DDR means DOUBLE data rate; that doesn't change just because it's GDDR. Just because AMD calculates it that way does not mean it's the true way of calculating it.  
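For what it's worth, both sides of this argument can point at a real clock. GDDR5 transfers data on both edges of a write clock (WCK) that itself runs at twice the command clock, so the same part is "2x" one clock and "4x" the other; which base an overclocking tool exposes determines what a +500 offset means in effective terms. A sketch of the same 8 GT/s part described both ways:

```python
def effective_mts(base_clock_mhz: float, pump_factor: int) -> float:
    # Effective transfer rate (MT/s) = base clock * transfers per cycle.
    return base_clock_mhz * pump_factor

# The same 8 GT/s GDDR5 part, under each convention:
print(effective_mts(2000, 4))  # 8000.0 -- vs the 2000 MHz command clock (quad-pumped)
print(effective_mts(4000, 2))  # 8000.0 -- vs the 4000 MHz WCK (true double data rate)
```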

