
I don't understand AMD Radeon 7

ToneStar

Why are they keeping this card in production instead of just making a 5800 XT? Why not put a better cooler on it, clock it a little higher than the 5700 XT, pick better-yielding dies with more compute units, add 4 or 8 GB more RAM, and retire the Radeon VII? By then it will have been about six months; back in the day, product cycles were six months, so I don't think Radeon VII owners could complain much, and it would still be similar in performance and price.


It is just very confusing to me how you can have two competing architectures on the market at the same time.


What don't you get? Radeon VII = 2080

5700 XT, or whatever it is, = 2070

5700 = 2060

-13600kf 

- 4000 32gb ram 

-4070ti super duper 


Just now, Ebony Falcon said:

What don't you get? Radeon VII = 2080

5700 XT, or whatever it is, = 2070

5700 = 2060

I already explained that. If this is their new architecture, why not just make a 5800 XT: a slightly better version of the 5700 XT with a better cooler, better-yielding dies, and more RAM?


Clearly because they aren't ready to release the high-end Navi cards. They also wanted to get rid of stock and the bad eggs among the Vega 20 dies used in Radeon Instinct MI50s, so they sold them cheap to clear them out.

 

The situation would have looked worse without Radeon VII coming to market. You'd have been stuck with the 5700 XT as the best AMD card. And that would have looked bad as people could argue that AMD didn't have a 'high-end enthusiast' card.


Just now, MeatFeastMan said:

Clearly because they aren't ready to release the high-end Navi cards. They also wanted to get rid of stock and the bad eggs among the Vega 20 dies used in Radeon Instinct MI50s, so they sold them cheap to clear them out.

 

The situation would have looked worse without Radeon VII coming to market. You'd have been stuck with the 5700 XT as the best AMD card. And that would have looked bad as people could argue that AMD didn't have a 'high-end enthusiast' card.

I'm sure they could make a Navi card better than the 5700 XT right now. It only has 8 GB of RAM and a shitty cooler; just upgrading those two things would probably put it close to on par with a 2080.


2 minutes ago, ToneStar said:

I'm sure they could make a Navi card better than the 5700 XT right now. It only has 8 GB of RAM and a shitty cooler; just upgrading those two things would probably put it close to on par with a 2080.

High-end Navi will come in Q1 or Q2 2020.

I only see your reply if you @ me.



11 minutes ago, ToneStar said:

I'm sure they could make a Navi card better than the 5700 XT right now. It only has 8 GB of RAM and a shitty cooler; just upgrading those two things would probably put it close to on par with a 2080.

Yeah, no. That's not how GPU design works. If you think (using the same logic you laid out for how AMD should design GPUs) that all NVIDIA did between the RTX 2070 and RTX 2080 was add a better cooler and more RAM, you are vastly mistaken.


8 minutes ago, ToneStar said:

I'm sure they could make a Navi card better than the 5700 XT right now. It only has 8 GB of RAM and a shitty cooler; just upgrading those two things would probably put it close to on par with a 2080.

"Shitty cooler" is subjective. OEMs love blower coolers because they reduce the amount of heat dumped into the case.

 

Also, AMD does have a dual Radeon VII card for professional work; you just can't buy it for your PC.


Really simple: I think they are not ready for Navi, and Vega still has something to prove.

They need something to fill the gap.

Ryzen 5700g @ 4.4ghz all cores | Asrock B550M Steel Legend | 3060 | 2x 16gb Micron E 2666 @ 4200mhz cl16 | 500gb WD SN750 | 12 TB HDD | Deepcool Gammax 400 w/ 2 delta 4000rpm push pull | Antec Neo Eco Zen 500w


16 minutes ago, ToneStar said:

Why are they keeping this card in production instead of just making a 5800 XT? Why not put a better cooler on it, clock it a little higher than the 5700 XT, pick better-yielding dies with more compute units, add 4 or 8 GB more RAM, and retire the Radeon VII? By then it will have been about six months; back in the day, product cycles were six months, so I don't think Radeon VII owners could complain much, and it would still be similar in performance and price.

They have to make the chips for the workstation cards, which sell for thousands of dollars and go into datacenters and to lots of companies.

 

The chips that don't meet the minimums required for those cards (frequency vs. power consumption, staying within some temperature limit, etc.) are repurposed as Radeon VIIs: they cut a few shaders, lower some frequencies, increase the power budget, and the chips get used.

 

So if they make 10,000 chips and 500 of them aren't good enough for the "pro" cards, you don't throw them away; you put them on Radeon VII cards, even if they sell at practically no profit due to the price of HBM2 memory. At least you have something to compete with the RTX 2070 and higher.

 

As for why not switch to Navi for workstations and datacenters: sometimes you sign contracts and deals that guarantee availability of replacement parts, stock, and the option to buy more for the next one to two years. Would you tell a company like Pixar, which bought 10,000 cards for its rendering farm, that its next order must be Navi cards and that it has to change its software to support two different architectures, or would you just keep making the old chips?
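The binning flow described above can be sketched in a few lines. Everything here is illustrative: the clock, power, and CU thresholds are assumptions, not AMD's real binning criteria (though the real Vega 20 die does have 64 CUs, with 60 enabled on the Radeon VII).

```python
# Hypothetical binning pass: dies that miss the "pro" spec are not thrown
# away but reused for a consumer SKU with a few CUs fused off and relaxed
# clocks. All thresholds here are made up for illustration.

def bin_die(max_clock_mhz, power_w, working_cus):
    """Assign a die to a product tier based on test results."""
    if max_clock_mhz >= 1750 and power_w <= 300 and working_cus >= 64:
        return "pro card"                  # meets the full workstation spec
    if working_cus >= 60:
        return "Radeon VII (salvage)"      # fuse off CUs, lower clocks
    return "scrap"                         # too many defects to sell

dies = [
    (1800, 290, 64),   # fully working and efficient
    (1700, 310, 64),   # misses the pro clock/power spec
    (1750, 295, 61),   # a few dead CUs
    (1600, 330, 55),   # too damaged to salvage
]
for die in dies:
    print(die, "->", bin_die(*die))
```

Only the first die makes the pro tier; the next two become the consumer salvage part, which is the whole economic point of selling the Radeon VII.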

 


22 minutes ago, ToneStar said:

It is just very confusing to me how you can have two competing architectures on the market at the same time.

It's not confusing to me.  The two products don't compete because of their price.  Simple as that.  

 

This happens more often than you might realize. AMD's Ryzen 5 3400G APU is based on the same design as the Ryzen 2000 series, yet it will be on the market at the same time as the Ryzen 3000 series CPUs.


The VII is just Instinct cards that weren't good enough, so they sell them to gamers. I'm not sure what's strange here; the Instinct cards aren't going to stop being made just because Navi launched.

I spent $2500 on building my PC and all i do with it is play no games atm & watch anime at 1080p(finally) watch YT and write essays...  nothing, it just sits there collecting dust...

Builds:

The Toaster Project! Northern Bee!

 

The original LAN PC build log! (Old, dead and replaced by The Toaster Project & 5.0)

Spoiler

"Here is some advice that might have gotten lost somewhere along the way in your life. 

 

#1. Treat others as you would like to be treated.

#2. It's best to keep your mouth shut; and appear to be stupid, rather than open it and remove all doubt.

#3. There is nothing "wrong" with being wrong. Learning from a mistake can be more valuable than not making one in the first place.

 

Follow these simple rules in life, and I promise you, things magically get easier. " - MageTank 31-10-2016

 

 


I just can't think of another time a graphics card maker did this. Maybe the closest thing was the 3dfx Banshee and Voodoo 2.


3 minutes ago, ToneStar said:

I just can't think of another time a graphics card maker did this. Maybe the closest thing was the 3dfx Banshee and Voodoo 2.

Really?

 

HD 3000 (HD 2900XTX was faster than HD 3870)

GF 700 series (GTX 750 Ti was actually a baby GTX 900 series card that was ~6 months early)

HD 7000 series (HD 7790 was a newer architecture than everything else)

 

All had new architecture chips that were not "flagships".


Often, when a manufacturer moves to a new manufacturing process, they'll want to take fewer risks and start with smaller chips that are easier to test, debug, and correct, then learn from the experience and use what they learn to make the bigger chips.

 

For example, AMD was using 28nm at GlobalFoundries and TSMC to make the R7 and R9 series of video cards, and they wanted to move to a 14nm manufacturing process to make the RX 4xx series cards.

But this is not straightforward: you have to change some things you're used to; there are different rules and different tweaks.

 

It's very expensive to be on the "bleeding edge," to be one of the first making chips on a new process like 14nm. Not only are you not 100% confident the chip will work fine (some assumptions or approximations may not be good enough), but it can also take one to two years for the manufacturing plant to fine-tune, optimize, and tweak loads of parameters to reduce failure rates.

So a company may decide to go first with a smaller chip, like the ones used in the RX 460 and RX 450, before going with a huge chip like Vega. Even if there's a high failure rate because the factory isn't optimized yet and the workers lack experience, they can still recover enough working tiny chips to launch some product.

 

When the factory improves and you've learned how to work with the new process, you can apply all of that to a more complex design and be confident it will work well.

 

Going from 14nm to 12nm is less of a learning curve; it's more of a refinement, so it's not a big deal.

However, going from 14nm or 12nm to 7nm is again a big step, because 7nm involves new technologies and processes.
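The small-die-first logic can be made concrete with the classic first-order (Poisson) yield model, where the fraction of defect-free dies is exp(-area × defect density). The defect densities below are made-up placeholders, not real foundry numbers.

```python
import math

# First-order (Poisson) defect yield model: Y = exp(-A * D), with die area
# A in cm^2 and defect density D in defects/cm^2. The densities below are
# assumed values, not real foundry data.

def poisson_yield(area_cm2, defects_per_cm2):
    """Fraction of dies with zero killer defects."""
    return math.exp(-area_cm2 * defects_per_cm2)

YOUNG, MATURE = 1.0, 0.2   # defects/cm^2: new process vs. tuned process

for name, area in [("small die, 1.5 cm^2", 1.5), ("big die, 5.0 cm^2", 5.0)]:
    print(f"{name}: {poisson_yield(area, YOUNG):.1%} on a young process, "
          f"{poisson_yield(area, MATURE):.1%} once it matures")
```

The big die's yield collapses to almost nothing on the young process while the small die stays workable, which is exactly why small chips go first.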

 


41 minutes ago, ToneStar said:

I just can't think of another time a graphics card maker did this. Maybe the closest thing was the 3dfx Banshee and Voodoo 2.

 

32 minutes ago, KarathKasun said:

Really?

 

HD 3000 (HD 2900XTX was faster than HD 3870)

GF 700 series (GTX 750 Ti was actually a baby GTX 900 series card that was ~6 months early)

HD 7000 series (HD 7790 was a newer architecture than everything else)

 

All had new architecture chips that were not "flagships".

That's right. Also, the HD 4770 was (then) ATI's first GPU to use the 40nm process (prior cards like the HD 4850 and 4870 were based on 55nm). They accidentally made it too good, and it became a cheaper, better HD 4850 that used less power and ran cooler. But it was not a flagship; it was a smaller die they used to dip their toe into the next process node so they could ready themselves to make the HD 5970 (a big ol' die) and the rest of the HD 5000 series with the foundry.


2 hours ago, LogicWeasel said:

 

That's right. Also, the HD 4770 was (then) ATI's first GPU to use the 40nm process (prior cards like the HD 4850 and 4870 were based on 55nm). They accidentally made it too good, and it became a cheaper, better HD 4850 that used less power and ran cooler. But it was not a flagship; it was a smaller die they used to dip their toe into the next process node so they could ready themselves to make the HD 5970 (a big ol' die) and the rest of the HD 5000 series with the foundry.

Yep. And the 4770 was so good that it just became the 5770 with no changes. One of the longest-lived chips before the GCN-based 290.


The Radeon VII is a compute card as well.

 

My brother has a VII mining right now.

Workstation Laptop: Dell Precision 7540, Xeon E-2276M, 32gb DDR4, Quadro T2000 GPU, 4k display

Wifes Rig: ASRock B550m Riptide, Ryzen 5 5600X, Sapphire Nitro+ RX 6700 XT, 16gb (2x8) 3600mhz V-Color Skywalker RAM, ARESGAME AGS 850w PSU, 1tb WD Black SN750, 500gb Crucial m.2, DIYPC MA01-G case

My Rig: ASRock B450m Pro4, Ryzen 5 3600, ARESGAME River 5 CPU cooler, EVGA RTX 2060 KO, 16gb (2x8) 3600mhz TeamGroup T-Force RAM, ARESGAME AGV750w PSU, 1tb WD Black SN750 NVMe Win 10 boot drive, 3tb Hitachi 7200 RPM HDD, Fractal Design Focus G Mini custom painted.  

NVIDIA GeForce RTX 2060 video card benchmark result - AMD Ryzen 5 3600,ASRock B450M Pro4 (3dmark.com)

Daughter 1 Rig: ASrock B450 Pro4, Ryzen 7 1700 @ 4.2ghz all core 1.4vCore, AMD R9 Fury X w/ Swiftech KOMODO waterblock, Custom Loop 2x240mm + 1x120mm radiators in push/pull 16gb (2x8) Patriot Viper CL14 2666mhz RAM, Corsair HX850 PSU, 250gb Samsun 960 EVO NVMe Win 10 boot drive, 500gb Samsung 840 EVO SSD, 512GB TeamGroup MP30 M.2 SATA III SSD, SuperTalent 512gb SATA III SSD, CoolerMaster HAF XM Case. 

https://www.3dmark.com/3dm/37004594?

Daughter 2 Rig: ASUS B350-PRIME ATX, Ryzen 7 1700, Sapphire Nitro+ R9 Fury Tri-X, 16gb (2x8) 3200mhz V-Color Skywalker, ANTEC Earthwatts 750w PSU, MasterLiquid Lite 120 AIO cooler in Push/Pull config as rear exhaust, 250gb Samsung 850 Evo SSD, Patriot Burst 240gb SSD, Cougar MX330-X Case

 


The 5700 XT is a beta card for the consoles coming out next year; this launch is a beta run. I'm pretty sure these cards are slower than next year's consoles. Buy one and your PC will be slower than a console within 12 months, and the consoles will have ray tracing. This Navi launch is probably worse than the RTX launch last year; the cards should have been $50 cheaper.

 

The Radeon VII is the only one that makes sense: $650 for 2080 performance, and as others have said, it's a compute card.

5950x 1.33v 5.05 4.5 88C 195w ll R20 12k ll drp4 ll x570 dark hero ll gskill 4x8gb 3666 14-14-14-32-320-24-2T (zen trfc)  1.45v 45C 1.15v soc ll 6950xt gaming x trio 325w 60C ll samsung 970 500gb nvme os ll sandisk 4tb ssd ll 6x nf12/14 ippc fans ll tt gt10 case ll evga g2 1300w ll w10 pro ll 34GN850B ll AW3423DW

 

9900k 1.36v 5.1avx 4.9ring 85C 195w (daily) 1.02v 4.3ghz 80w 50C R20 temps score=5500 ll D15 ll Z390 taichi ult 1.60 bios ll gskill 4x8gb 14-14-14-30-280-20 ddr3666bdie 1.45v 45C 1.22sa/1.18 io  ll EVGA 30 non90 tie ftw3 1920//10000 0.85v 300w 71C ll  6x nf14 ippc 2000rpm ll 500gb nvme 970 evo ll l sandisk 4tb sata ssd +4tb exssd backup ll 2x 500gb samsung 970 evo raid 0 llCorsair graphite 780T ll EVGA P2 1200w ll w10p ll NEC PA241w ll pa32ucg-k

 

prebuilt 5800 stock ll 2x8gb ddr4 cl17 3466 ll oem 3080 0.85v 1890//10000 290w 74C ll 27gl850b ll pa272w ll w11

 


39 minutes ago, xg32 said:

The 5700 XT is a beta card for the consoles coming out next year; this launch is a beta run. I'm pretty sure these cards are slower than next year's consoles. Buy one and your PC will be slower than a console within 12 months, and the consoles will have ray tracing. This Navi launch is probably worse than the RTX launch last year; the cards should have been $50 cheaper.

 

The Radeon VII is the only one that makes sense: $650 for 2080 performance, and as others have said, it's a compute card.

GCN is so heavily compute-biased that it likely doesn't need as much help from things like RT-specific cores.

 

Right now, though, NVIDIA's RTX is the only thing being worked on.


This is actually all really simple if you just look at it from the manufacturing side. Want to make a big jump from 14/12nm to 7nm? Grab an existing design, especially one that can benefit from a power reduction (e.g. the Vega 64), and die-shrink it so you can see what the yields look like and what kind of power reduction and frequency boost you can get out of it. If yields aren't great, cut a few CUs out of it and Bob's your uncle: you and the fab get experience producing a GPU on a new process node.

 

Now you have a whole new architecture you want to introduce, but the process node you're using is still quite young and yields aren't perfect yet. Do you want to produce one huge die? No! You want to start with a somewhat, if not substantially, smaller die so you can see how the new architecture actually works in silicon. Getting really iffy yields on it? Introduce a second model based on the same chip, with the non-functioning "cores" disabled. Performance too close between the two? Stretch your yields even further by only validating those chips to a lower clock rate, reducing waste even more.

 

OK, the architecture is working better now, the process has matured, and yields are going up, so let's introduce a larger chip with more processing cores and additional features at a higher price, where we can make the margin we need now that we're only going to lose 5% of chips, instead of the 10% we saw on the small chip or the 15% we would have seen on this big chip when the process was young. (Totally made-up numbers, just to highlight the point.)
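Those made-up loss rates can be turned into a quick back-of-the-envelope cost comparison. The wafer cost and dies-per-wafer figures below are equally hypothetical; the point is only how yield losses feed into the cost of each sellable chip.

```python
# Back-of-the-envelope cost per *good* die. Wafer cost and dies per wafer
# are hypothetical; the loss rates reuse the made-up numbers above.

WAFER_COST = 10_000  # dollars per processed wafer (assumed)

def cost_per_good_die(dies_per_wafer, loss_rate):
    """Spread the wafer cost over only the dies that pass testing."""
    good_dies = dies_per_wafer * (1 - loss_rate)
    return WAFER_COST / good_dies

scenarios = [
    ("small chip, young process", 200, 0.10),
    ("big chip, young process",    60, 0.15),
    ("big chip, mature process",   60, 0.05),
]
for label, count, loss in scenarios:
    print(f"{label}: ${cost_per_good_die(count, loss):,.0f} per good die")
```

The big chip on the young process is the most expensive per sellable die, which is why it waits until the process matures.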

 

Why keep selling the Radeon VII until that bigger chip is ready for launch? Because they have them, they're selling, and they're making money on them. If yields on the 7nm Vega chips get high enough, they won't have to worry about the waste, and they can reduce production and then try to clear them out when the new, big 7nm Navi chip is actually ready.

 

Want a perfect example of why you shouldn't create the big Navi chip right off the bat and risk it all? Intel's 10nm. They put all their eggs in one basket thinking they could do 10nm three years ago. They lost out on the architecture upgrades designed around 10nm and had to keep trying to squeeze lemons into lemonade with 14nm++++. AMD came along, pulled off a whole new architecture on existing lithography, and broke the chips up so they could use a larger process node for the I/O, which sees minimal benefit from a smaller process. That also lets them put 8 cores on a chiplet, potentially use as few as 4 of them if yields are low, and stack two chiplets together on mainstream parts and four on HEDT. These incremental steps drastically reduce risk, both in cost and in time lost to failed processes.
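The chiplet salvage idea is easy to quantify: if each core survives manufacturing independently with some probability, the odds that a chiplet is sellable as at least a 4-core part follow a simple binomial tail. The 90% per-core survival rate below is an assumption for illustration, not a real yield figure.

```python
from math import comb

# If each of the 8 cores on a chiplet survives manufacturing independently
# with probability p, the chance of getting a sellable part with at least
# k good cores is a binomial tail. p = 0.90 is an assumed figure.

def p_at_least(k, n, p):
    """Probability that at least k of n cores are defect-free."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.90
print(f"all 8 cores good:      {p_at_least(8, 8, p):.1%}")  # full part only
print(f"at least 6 cores good: {p_at_least(6, 8, p):.1%}")  # 6-core salvage
print(f"at least 4 cores good: {p_at_least(4, 8, p):.1%}")  # 4-core salvage
```

Under these assumptions, fewer than half of chiplets are fully intact, but nearly every one can be sold as at least a 4-core part, which is the risk reduction the post describes.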

