
Navi 21/23 Cards Rumored (aka "Nvidia Killers" xD)

30 minutes ago, leadeater said:

Sadly, at least competition-wise, this is the go-to solution for anything professional and graphics. To go with AMD you either really know what you're doing and know it'll work, or you're unaware of what you are getting into and will regret the purchase.

 

The only way this will ever change is getting prompt support on issues like the ones you had and active efforts to fix them. Even if you have to wait for the next generation of cards, you at least know they acknowledge the issue and are willing to work on it. Without that as a minimum, why would you buy hardware you know you can't rely on to work or get support for?

I totally agree. I wasn't saying all that to bash directly on AMD. I meant that there are issues that leave professionals with no other option. I would love not to have to buy a $2,500 Quadro when all I need is 8GB of VRAM for light CAD work and a WX card in the $800 range does the SAME job (excluding FEA, obviously). For a drafter I can save tons of money, or upgrade other critical parts, like having at least 64GB of RAM instead of only 32GB.

 

Relaying issues is unfortunately near impossible. On the dev forum your posts get deleted when you report real bugs, and eventually your email or IP gets banned so you can't even create a new account. I stopped fighting with them in February 2018 after a three-year battle over these problems. Now I go with the three options I stated; it's simple and costs me no more than buying the same card as the client online and shipping it in for testing. Obviously we rarely get to the "throw it away and buy a new one" solution.


5 hours ago, RejZoR said:

They'll make them cheaper. Just look at Ryzen vs Intel. But there, they are doing the price magic with chiplets. So, price drop will be smaller, but you can expect it, because even AMD knows people sort of expect that from AMD.

The CPU desktop enthusiast / gamer / workstation market is more rational and less brand loyal. Whenever AMD has the technically best product it sells well, so AMD has an incentive to price well, sell large volumes, and increase their market share.

 

The GPU market is different: more brand loyal and irrational. Most people would rather buy even an inferior, slower GeForce GPU than a superior, faster Radeon GPU. Most people in this market just buy the best GeForce GPU that they can afford. So when AMD reduces their prices they take a hit to their margins with minimal market-share impact. AMD has realized this now, so they are going to focus on good margins going forward. They will only undercut Nvidia slightly, by about $50, because they know from history that either way only around 20% of consumers are going to buy Radeon, so they may as well settle for that and make as much money as possible.


8 hours ago, Humbug said:

AMD no longer has to worry about selling at a loss. Remember, as shown in the chart below, what has happened over the last few generations is that Nvidia has continuously escalated GPU pricing to ridiculous levels. In this environment it is much easier for AMD to also raise prices and sell at a good profit.

 

For example, the Navi GPUs that have launched are little more than Polaris replacements when you look at the number of compute units and the die size. However, they are priced way higher because AMD can now choose to increase their margins too, enjoying the fact that there is no price pressure from Nvidia. So they just price about USD 50-100 lower than the equivalent-performance GeForce part and still enjoy big margins. If Nvidia cuts pricing, AMD has plenty of room to cut further too.

 

[Chart: GPU launch prices escalating over the last few generations]

 

With the 2080 Ti at more than a thousand US dollars, AMD knows that the market is moving up. It would be stupid of AMD to lower prices drastically and force Nvidia down; they would rather make big margins than try to grab market share.

Yeah, that's why I'm excited for Big Navi. AMD has the flexibility to do whatever the hell they like with these cards. I'm expecting a beastly $850-900 card that matches/beats the 2080ti, with 16gb HBM2E, 7nm+, crazy die, a proper watercooler for the reference design to push clocks etc. Nvidia's greed seems to be their downfall here. The 2080's stupid pricing let leftover Instinct MI50's in through the back door with Radeon VII, and they're gonna do it again with Big Navi.

 

 


2 minutes ago, MeatFeastMan said:

Yeah, that's why I'm excited for Big Navi. AMD has the flexibility to do whatever the hell they like with these cards. I'm expecting a beastly $850-900 card that matches/beats the 2080ti, with 16gb HBM2E, 7nm+, crazy die, a proper watercooler for the reference design to push clocks etc. Nvidia's greed seems to be their downfall here. The 2080's stupid pricing let leftover Instinct MI50's in through the back door with Radeon VII, and they're gonna do it again with Big Navi.

 

 

There is still the middle one; I wonder how long until that one arrives (50-ish CUs).


2 hours ago, Humbug said:

The GPU market is different: more brand loyal and irrational. Most people would rather buy even an inferior, slower GeForce GPU than a superior, faster Radeon GPU. Most people in this market just buy the best GeForce GPU that they can afford. So when AMD reduces their prices they take a hit to their margins with minimal market-share impact. AMD has realized this now, so they are going to focus on good margins going forward. They will only undercut Nvidia slightly, by about $50, because they know from history that either way only around 20% of consumers are going to buy Radeon, so they may as well settle for that and make as much money as possible.

I also think AMD has kind of given up on gaining more market share by undercutting as much as they possibly could; AMD could probably sell the Navi cards for quite a bit less than the current prices. I really doubt the 5700 costs the same to make as the 2060, yet it sells for the same price, and the 2060 itself was probably already priced higher than Nvidia needed to sell it for.

AMD is probably really happy with the margins that Nvidia allowed them to take, and Nvidia is probably satisfied with their current margins, so neither is trying to change anything; at least that's what it looks like to me.


14 hours ago, Maticks said:

I get what you are saying here, but... there is a big difference between a 60 FPS experience and 100 FPS. Aside from fast motion in competitive gaming, I don't think 150-240 FPS is going to be nearly as noticeable as 100 FPS being brought down to 60 FPS.

From what users report, RTX seems to be very expensive to adopt: it takes a top-end card and rams it into the 60 FPS area, and anyone running at 150 FPS on average, or even 100 FPS, is going to see a massive difference in game experience.

G-Sync only really softens the jumping FPS, but when you start going under 60 FPS there is nothing G-Sync can do to make it not feel like garbage. :)

 

Completely depends on the game. I play the new Metro game with RTX on, and I much prefer it over higher FPS because it's the type of game where I prefer the eye candy.
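To put the frame-time arithmetic behind the quoted argument into numbers (nothing here beyond simple math):

def frame_time_ms(fps: float) -> float:
    # Frame time in milliseconds for a given frame rate.
    return 1000.0 / fps

# Compare how much frame time each jump actually saves.
for low, high in [(60, 100), (100, 150), (150, 240)]:
    saved = frame_time_ms(low) - frame_time_ms(high)
    print(f"{low:>3} -> {high:>3} FPS saves {saved:.1f} ms per frame")

# Output:
#  60 -> 100 FPS saves 6.7 ms per frame
# 100 -> 150 FPS saves 3.3 ms per frame
# 150 -> 240 FPS saves 2.5 ms per frame

Going from 60 to 100 FPS shaves off more than twice as much time per frame as going from 150 to 240, which is why the first jump is so much more noticeable.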


16 minutes ago, Dan Castellaneta said:

Can't wait for another disappointment from AMD.

How many times have we hyped up an AMD GPU only for it to be disappointing at launch? See: Fury X, Vega Frontier Edition, etc.

I'm actually being hopeful on these.

20% more CUs will probably be like 10-15% more performance unless they nail the scaling as everything goes up (rough numbers sketched below).

 

But I am thinking there might be low supply considering the contracts they have to fill.

 

And I wonder if they can do a chiplet design for RT without going monolithic.
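Rough back-of-envelope for the "20% more CUs, maybe 10-15% more performance" guess above. The scaling-efficiency factor is made up for illustration, not an AMD figure:

# Toy model: only a fraction of the extra CUs turns into real-world speedup.
def perf_gain(cu_increase: float, scaling_efficiency: float) -> float:
    return cu_increase * scaling_efficiency

for eff in (0.5, 0.75, 1.0):  # pessimistic, middling, perfect scaling
    gain = perf_gain(0.20, eff)  # 20% more CUs
    print(f"scaling efficiency {eff:.0%}: ~{gain:.0%} more performance")

# Output:
# scaling efficiency 50%: ~10% more performance
# scaling efficiency 75%: ~15% more performance
# scaling efficiency 100%: ~20% more performance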

 


30 minutes ago, Dan Castellaneta said:

Can't wait for another disappointment from AMD.

How many times have we hyped up an AMD GPU only for it to be disappointing at launch? See: Fury X, Vega Frontier Edition, etc.

7nm+, flexibility thanks to Nvidia's pricing, full-fat RDNA, and the fact that the RX 690-class midrange chip (the 5700 XT) is near a 1080 Ti all suggest that Nvidia's high-end lead could come to an end. And even if Nvidia reveals 7nm+ cards, AMD won't be far away and will at least compete with the 3080.

 

Unlike Vega, we know where Navi stands. And that's why we know it won't disappoint. Even if it doesn't beat the 2080ti, it is pretty much certain that AMD will have a proper high-end offering. And that to me is far from a disappointment.

 

 


4 hours ago, MeatFeastMan said:

Yeah, that's why I'm excited for Big Navi. AMD has the flexibility to do whatever the hell they like with these cards. I'm expecting a beastly $850-900 card that matches/beats the 2080ti, with 16gb HBM2E, 7nm+, crazy die, a proper watercooler for the reference design to push clocks etc. Nvidia's greed seems to be their downfall here. The 2080's stupid pricing let leftover Instinct MI50's in through the back door with Radeon VII, and they're gonna do it again with Big Navi.

 

 

Let's be honest here. The Radeon VII was a flop for anything other than being a cheaper version of the Instinct card. Unless you were using it for workstation work, it would be dumb to pick it over the 2080.


7 minutes ago, Brooksie359 said:

Let's be honest here. The Radeon VII was a flop for anything other than being a cheaper version of the Instinct card. Unless you were using it for workstation work, it would be dumb to pick it over the 2080.

Yes, but from AMD's standpoint that didn't matter. It was close enough that it would sell, and they didn't have many to shift. So the fact that their only advantages were in a few games and the 16GB of VRAM didn't really matter, because they just wanted to get rid of the MI50 leftovers.

 

It was more for the AMD fanboys like myself to buy one and have a decent high-end experience.

 

The fact they managed to sell them is all down to Nvidia's pricing. If the 2080 had been $550, or even $600, it would have been hard for AMD to sell a 16GB card like that.

 

My main reason for buying it was the Battlefield V performance. I am a Battlefield addict, and the Radeon VII stomped all over the 2080 in that title, so it made sense for me.


15 hours ago, RejZoR said:

They'll make them cheaper. Just look at Ryzen vs Intel. But there, they are doing the price magic with chiplets. So, price drop will be smaller, but you can expect it, because even AMD knows people sort of expect that from AMD.

I honestly was expecting AMD to also start doing chiplets for GPUs. I bet it's doable and it would allow for massive, 600mm2 GPUs to be produced for cheap (and with great performance of course).

 

Not to mention it trickling down the product stack.
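A rough sketch of why chiplets could make a big GPU cheaper to produce, using a simple Poisson defect-yield model. The defect density and die sizes are assumed for illustration, not real TSMC numbers:

import math

def die_yield(area_mm2: float, defects_per_mm2: float = 0.001) -> float:
    # Poisson yield model: yield = exp(-defect_density * die_area).
    return math.exp(-defects_per_mm2 * area_mm2)

print(f"600 mm^2 monolithic die yield: {die_yield(600):.0%}")  # ~55%
print(f"150 mm^2 chiplet yield:        {die_yield(150):.0%}")  # ~86%

Four small dies come off the wafer with far fewer wasted chips than one huge die; the catch, as the replies below get into, is making software treat those chiplets as a single GPU.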



6 minutes ago, Energycore said:

I honestly was expecting AMD to also start doing chiplets for GPUs. I bet it's doable and it would allow for massive, 600mm2 GPUs to be produced for cheap (and with great performance of course).

 

Not to mention it trickling down the product stack.

With just how lazy devs are, not anytime soon; only when they can make it work as a single GPU, which will take a while to do right.


1 hour ago, cj09beira said:

With just how lazy devs are, not anytime soon; only when they can make it work as a single GPU, which will take a while to do right.

 

Why are you being a stickler for monolithic chip designs? Yes, SoC designers now have to deal with latency between chiplets (down to nanoseconds), but as long as the interconnect fabric is fast/strong/versatile enough, is being able to more easily scale performance (from 2 chiplets on the low-end to 16 chiplets on the insane end) not all the more worth it?
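A toy model of the trade-off being argued here, with completely invented overhead numbers, just to show the shape of the curve: adding chiplets keeps helping only as long as the cross-chiplet penalty stays small.

# Toy scaling model: each chiplet adds compute, but a fraction of the work
# pays a cross-chiplet communication penalty. All numbers are invented.
def relative_perf(chiplets: int, cross_traffic: float = 0.15,
                  interconnect_penalty: float = 0.5) -> float:
    loss = cross_traffic * interconnect_penalty * (chiplets - 1)
    return chiplets / (1 + loss)

for n in (1, 2, 4, 8, 16):
    print(f"{n:>2} chiplets -> ~{relative_perf(n):.1f}x one chiplet")
# -> roughly 1.0x, 1.9x, 3.3x, 5.2x, 7.5x

Under these assumptions the scaling is real but well short of linear, which is roughly the gap between "the fabric will handle it" and "it isn't ready yet".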


Doesn't the arch shit itself after 56 CUs? I thought it was basically impossible to run more than 64. Maybe this could be achieved with Infinity Fabric, but as OP said, get a dump truck full of salt ready.



2 hours ago, cj09beira said:

With just how lazy devs are, not anytime soon; only when they can make it work as a single GPU, which will take a while to do right.

Devs will be quite happy to make them work for you... you merely need to foot the bill ;).


13 minutes ago, Damascus said:

Doesn't the arch shit itself after 56 CUs? I thought it was basically impossible to run more than 64. Maybe this could be achieved with Infinity Fabric, but as OP said, get a dump truck full of salt ready.

That was GCN. We haven't seen yet how RDNA scales.



13 minutes ago, System32.exe said:

That was GCN. We haven't seen yet how RDNA scales.

That's true, hopefully RDNA can get the ball rolling again



Let's just hope Navi is really able to get at least 2x the CUs; that should make high-end Radeons maybe even twice as fast as the 5700 XT (including some optimisation, considering how much time there is between the 5700 XT and the new GPUs). Then the high end, where there is competition, moves significantly higher, and I think it's unlikely both AMD and Nvidia will increase prices even further. I think the Radeon 5700 will go for less than $200 within 6 months of the new generation's release, and that would be huge.


8 hours ago, Results45 said:

 

Why are you being a stickler for monolithic chip designs? Yes, SoC designers now have to deal with latency between chiplets (down to nanoseconds), but as long as the interconnect fabric is fast/strong/versatile enough, is being able to more easily scale performance (from 2 chiplets on the low-end to 16 chiplets on the insane end) not all the more worth it?

That's a poor take on his comment. He's not a stickler for monolithic designs. He's saying developers either need to get their shit together or AMD needs to obfuscate the shit out of the chip design. We've seen how an otherwise incredibly wide uarch like GCN is sitting idle half the time and we've seen how multiGPU scaling is awful half the time and the other half it's microstuttering all over the place. Yet you're acting like it's just a matter of the interconnect and then it'll just slot in like nothing. We've also seen what a mess TR (and Ryzen in general) was on Windows until recently. These things take time to perfect and AMD has gone on record to say it isn't ready yet. So he's not just a Debbie Downer. This is a real problem. Otherwise everyone would be doing it right now. It's the most ideal scenario yet we're still doing monolithic. Things are like they are for a reason (most of the time).


45 minutes ago, Trixanity said:

He's saying developers either need to get their shit together or AMD needs to obfuscate the shit out of the chip design.

I would also amend the original statement to say development studios or project managers, because the actual developers work ridiculous hours, and releasing the product as soon as possible comes before all else, even at the cost of quality.

 

Games like Ashes of the Singularity perfectly illustrate how priorities affect the final product: when technical standards come first, with an aim to cover all available features on the market, the final product delivers on that. Stardock/Oxide Games doesn't have greater technical capability than any other development studio; they just have different priorities. Galactic Civilizations III, another Stardock-developed game, also shows this: after the game was released and CPUs with more than 4 cores came on the market, even if only in the HEDT sector, they did a major redesign of the entire game engine to allow true multi-core scaling across as many cores as the system has, and it even works across multiple sockets. This was 2 years after the game's initial release; that sort of redevelopment is very rare.

 

The willingness to walk away from multi-GPU is very short-sighted, and that goes for everyone involved. There is nothing fundamental that prevents it from working, and everyone would benefit from it. GPUs would be cheaper to buy, cheaper to make, architecture generations would evolve quicker, and there would likely be flow-on benefits to motherboard designs and CPU focus areas. Tech reviewers really need to change their tune: instead of giving the standard rundown of how multi-GPU is effectively dead and advising us to buy the single best card we can, they should actively put pressure on publishers (these are the ones that actually matter) to ship games that actually support this technology. Public shaming really does go a long way, the type I agree with.


Just now, VegetableStu said:

i kinda wonder if AMD would bring some implementation of multiGPU over infinity fabric to the consumer side o_o

There isn't a lot of point to doing so, not with DX12 and Vulkan, and also PCIe 4.0. It's currently a software issue, so the only real solution is *cough* GameWorks *cough*, you know what I mean though?

 

Tile-based rendering needs a good kick start and to be brought up a layer into the game engine and graphics APIs. Nvidia already does this, but it's down at the driver and hardware level, which isn't where you want to work from for multi-GPU.

 

Interestingly, ray tracing might actually help this become a thing again. One of the problems with multi-GPU in the past was post-processing effects and lighting/shadows. If you split those tasks across GPUs and don't evaluate the entire frame, you can get differences in shadow depth, coverage/alignment and lighting levels, which is why (from my understanding) the rendered frame is reconstructed first and the post effects are applied afterwards; that is where the majority of multi-GPU setups have the most issues (every OMG WTF moment I have seen has been either shadows or lighting). Ray tracing can allow the distribution of work across GPUs without that problem: http://khrylx.github.io/DSGPURayTracing/
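A minimal sketch of that idea (not the algorithm from the linked paper, just the general shape): every GPU traces its share of rays against the same full scene, so shadows and lighting stay consistent, and the partial results are merged into one frame. trace_on_device() is a stand-in for a real per-GPU tracing call.

from concurrent.futures import ThreadPoolExecutor

def trace_on_device(device_id: int, rays: list) -> dict:
    # Stand-in for dispatching a batch of rays to one GPU; each ray is
    # shaded with full scene information, so there is no lighting mismatch.
    return {ray_id: f"color traced on GPU{device_id}" for ray_id in rays}

def render(rays: list, num_gpus: int) -> dict:
    batches = [rays[i::num_gpus] for i in range(num_gpus)]  # round-robin split
    with ThreadPoolExecutor(max_workers=num_gpus) as pool:
        partial_results = pool.map(trace_on_device, range(num_gpus), batches)
    image = {}
    for partial in partial_results:
        image.update(partial)  # merge per-GPU results into the final frame
    return image

frame = render(rays=list(range(8)), num_gpus=2)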


32 minutes ago, leadeater said:

There isn't a lot of point to doing so, not with DX12 and Vulkan, and also PCIe 4.0. It's currently a software issue, so the only real solution is *cough* GameWorks *cough*, you know what I mean though?

Tile-based rendering needs a good kick start and to be brought up a layer into the game engine and graphics APIs. Nvidia already does this, but it's down at the driver and hardware level, which isn't where you want to work from for multi-GPU.

Interestingly, ray tracing might actually help this become a thing again. One of the problems with multi-GPU in the past was post-processing effects and lighting/shadows. If you split those tasks across GPUs and don't evaluate the entire frame, you can get differences in shadow depth, coverage/alignment and lighting levels, which is why (from my understanding) the rendered frame is reconstructed first and the post effects are applied afterwards; that is where the majority of multi-GPU setups have the most issues (every OMG WTF moment I have seen has been either shadows or lighting). Ray tracing can allow the distribution of work across GPUs without that problem: http://khrylx.github.io/DSGPURayTracing/

All of the software using transform feedback / stream output will kill the performance of a tile-based architecture. The DXVK project found a large amount of software relying on transform feedback, and it is a mandatory feature of Direct3D 11 and Direct3D 12:

 

https://docs.microsoft.com/en-us/windows/win32/direct3d12/hardware-feature-levels

 

Interestingly, they made rasterizer ordered views mandatory at feature level 12_1. It came out that this kills the performance of AMD hardware, so Microsoft really is not making life easier for AMD.


On 8/25/2019 at 6:12 AM, Trixanity said:

I think it's less than a year ago that AMD said chiplet GPUs weren't ready for gaming. Of course it could all be misdirection, but the problem is obfuscating the chiplets from software developers to avoid scaling nightmares, as well as hiding latencies and scheduling work properly. If those can be solved then it should be ready for primetime, but it could take years. I think we'll see chiplets in enterprise/HPC first before anything else. It should be much easier to implement there, assuming the workloads are insensitive to latency.

 

AMD won't be selling at a loss if they can deliver on performance and efficiency. They could just raise the price to offset higher unit costs. The reason AMD's HBM offerings have been selling without profits is that they were expensive products to make but the performance wasn't there, so they couldn't sell them at the price necessary to make a decent profit.

I doubt that they were making it up. The bandwidth needed across the dies would be enormous compared to a CPU. Just use the relative size of the memory bandwidth provided to the GPU as a rough guide to guess how much bigger it would need to be. 
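Ballpark version of that comparison, using approximate public figures (a Zen 2 CCD-to-IO-die link is on the order of tens of GB/s; the exact numbers here are rounded guesses):

# Back-of-envelope: a GPU chiplet link would need to carry something on the
# order of the card's memory bandwidth, far beyond a CPU chiplet link.
cpu_chiplet_link_gbs = 50   # ~order of a Zen 2 CCD <-> IO-die link, GB/s
rx_5700xt_mem_gbs = 448     # 256-bit GDDR6 at 14 Gbps
radeon_vii_mem_gbs = 1024   # 4096-bit HBM2

for name, bw in [("RX 5700 XT", rx_5700xt_mem_gbs),
                 ("Radeon VII", radeon_vii_mem_gbs)]:
    print(f"{name}: ~{bw / cpu_chiplet_link_gbs:.0f}x a CPU chiplet link")
# -> roughly 9x and 20x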


On 8/24/2019 at 8:15 AM, leadeater said:

@RejZoR

To keep it extremely simple what you are saying is equivalent to 'addition can only be used to count apples'. I'm pretty sure you can think of more uses for addition than to count apples. So if I were to make a hardware accelerated addition logic to count apples you're sure as heck going to be able to count oranges with it, it doesn't matter if I labeled the hardware logic as 'The Apple Adder' you can still use the hardware to count oranges so long as I give you a means of access to that hardware.

If it were a function in the iPhone, I could see Apple kicking software that used it to count oranges off their platform. :P

