Everything posted by Trixanity

  1. The issue isn't that it's not advanced or not doing a lot. It just appears to either do things incorrectly or be told to do things incorrectly. Whether it's in the name of die harvesting, or classic AMD FineWine™ where enough updates will eventually solve it, I don't know. For example, it appears to use too high a voltage at most or every frequency level, and sometimes it even attempts to raise the voltage to reach higher clock speeds when it actually needs to reduce it to hit those frequencies. That's at least partially to blame for people reporting that their product can't hit the advertised speeds when it actually could if the DVFS worked as it should. It sounds crazy to me that people have to manually undervolt their AMD products (or set offsets) by 100 mV or more to achieve optimal frequencies and efficiency. I should probably add that people still need to do that. It's been a thing for years now and AMD does not appear to want to solve it. With the yields they're getting on their chiplet philosophy and process node, it should not be necessary to blow up the voltage just to get a couple more chips off the wafer that are barely stable anyway. It was bad enough on the GPU side, but people discovering the same issue on the CPU side is mental. Their competitors do not appear to have the same issues, so I don't see a reason to defend it.
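     To make the offset idea concrete, here's a minimal sketch of what a blanket undervolt does to a voltage/frequency table. All the numbers and the table layout are made up for illustration - this is not AMD's actual firmware format.
     ```python
     # Hypothetical V/F table: frequency (MHz) -> requested voltage (mV).
     # Values are invented for illustration, not pulled from real firmware.
     stock_vf = {1600: 950, 1800: 1050, 1900: 1150, 2000: 1200}

     def apply_offset(vf_table, offset_mv):
         """Apply a fixed voltage offset (e.g. -100 mV) to every point."""
         return {f: v + offset_mv for f, v in vf_table.items()}

     undervolted = apply_offset(stock_vf, -100)
     for f in sorted(stock_vf):
         print(f"{f} MHz: {stock_vf[f]} mV -> {undervolted[f]} mV")
     ```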
  2. If there are any frequency gains to be had on 7nm+, it's probably no more than 100 MHz at the top end - more likely the improvement would be more consistent high clock speeds across the stack. The architectural improvements are probably limited as well. What AMD really needs is a team dedicated entirely to DVFS for the entire product range. All the material I can find highlights how poorly it's done on both CPUs and GPUs. A lot of efficiency has been thrown out the window by poor DVFS implementations. Get that out of the way once and for all and we might just see AMD reach complete parity with Nvidia and Intel in that metric.
  3. While the name could be arbitrary for all intents and purposes, AMD seems to consider bumping the number part of making architectural improvements. Zen+ had none, hence the different naming. Of course there's plenty of room to backpedal on it.
  4. Why not? It's pointless to refresh a product you've already replaced. Zen+ was a (so far) one-time affair caused by scheduling, design readiness and process availability and capacity.
  5. ITT: people unfamiliar with sarcasm. Also ITT: people not reading the actual article.
  6. So making the dregs of society, as you call them, feel hunted will help them onto the path of enlightenment? I somehow doubt it'll improve the situation. In fact it's more likely they'll explode if they have no outlet at all, because they're not welcome anywhere and won't feel welcome either. That's the core problem. Basically you've got a rabid dog and you're trapping it in a corner. Good luck with that.
  7. Aren't we just moving into the same territory as the war on drugs? This is essentially "if we kill all bad sites, there will be no bad people" similar to "if we try to get rid of drugs, say how bad they are and punish any seller or buyer severely; then drugs will go away". How did that one go again?
  8. I fully expect it to be a red herring. However we do know high end Navi is coming in one form or another. Keep in mind that, like all AMD cards, the 5700 series is clocked beyond the efficiency range of the silicon. So doubling the chip size does not necessarily double the power consumption, because you'd just clock it lower to keep the power down. A 5700 XT at stock (~1900 MHz) runs at 185W but can drop to 110W by running at 1700 MHz and lowering the voltage. I wouldn't claim to know how the GPU scales with CUs, but let's say for the sake of argument that 80 CUs at 1900 MHz would be 370W; lowering that to 1700 MHz would then make it a mere 220W (assuming perfect scaling). That's of course GPU power only. You still need memory power on top, and a high end card would likely roll with HBM again. So call it 250-260W for a massive 80 CU card. That's not all that bad. Realistically speaking, AMD would probably up the voltage a bit for the sake of yields and the scaling is probably not 1:1, but a 200-300W behemoth with a decent clock speed is very realistic. It's a matter of binning and a matter of tuning. It's not impossible. I have heard some claims that there's still some bottlenecking in the current RDNA uarch that prevents scaling the chips up, but a Reddit comment isn't exactly admissible in court.
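     The arithmetic above, spelled out (assuming, optimistically, that GPU power scales linearly with CU count; the wattages are the figures from the post):
     ```python
     # 5700 XT reference points from the post (40 CUs).
     power_stock = 185   # W at ~1900 MHz, stock voltage
     power_tuned = 110   # W at ~1700 MHz, lowered voltage

     def scale_with_cus(power_w, cus_from, cus_to):
         # Naive linear scaling - real chips won't scale perfectly 1:1.
         return power_w * cus_to / cus_from

     print(scale_with_cus(power_stock, 40, 80))  # 370.0 W at ~1900 MHz
     print(scale_with_cus(power_tuned, 40, 80))  # 220.0 W at ~1700 MHz
     # Add an assumed ~30-40 W for HBM and you land at the 250-260 W figure.
     ```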
  9. He calls it jebaited. As in, Nvidia responded as expected to the initial pricing, meaning AMD could act accordingly. I specifically said they wanted to keep margins high, and in that sense: mission accomplished. The pricing would only be fake if they never intended to sell at the initial price. You could argue the likelihood of that is low, considering Nvidia would have been pushing it if they had tried to raise prices on a product refresh based on minor improvements, but again: the goal was to avoid a price war, and that much worked out; Nvidia did not attempt to lower the price either. However, I also think AMD lowered the price because of consumer backlash. People weren't happy about AMD following Nvidia's pricing, so I think that gave them a reality check too. Companies never admit fault though, so it's easier to say it was part of a marketing strategy to fool Nvidia and their pricing strategy - whether or not that strategy was as fleshed out as they claim. The performance of the product is pretty good all things considered (just about double that of Polaris 10) and hits where it should, so I don't think it's a case of product performance. In addition, AMD would have known where the performance was months in advance, so it seems unlikely they would go on stage, look at the charts and say "wait a minute, the pricing is wrong". Planning things doesn't make them artificial. You could say prices are artificially high right now, but that's a market trend. AMD would, however, be grossly incompetent if they had nothing planned for Nvidia's launch.
  10. Well, they are different. If the price were fake, that would imply AMD would have lowered it at launch by $50 no matter what (which would make no sense). Saying they put out a higher price initially to see Nvidia's response, and then replied with one of several pre-planned scenarios, is different. If Nvidia had launched the 2070 Super at $600 (or even $550) instead, the price would have remained in place because perf/$ would firmly have been in AMD's favor. Would the price still be fake in that case, and if so, how? It's basic business when you know your only competitor is about to bring a refreshed stack to market at roughly the same time. I think the only disputable thing is whether it was as premeditated as AMD claims. They claim they put out a price hoping Nvidia would respond like they usually do, and it seems they did. It would have been surprising if Nvidia lowered prices (e.g. a $450 or even $400 2070 Super), and it seems AMD was afraid that if they launched aggressively at $400 or lower, Nvidia might have to match just as aggressively, forcing AMD to lower the price even further. They all care about margins, so the question is: why start an unnecessary price war if you can get both the margins you'd like and put your product in the position of being the better deal? Nvidia doesn't have to respond to the $400 price tag on the 5700 XT; at that price Nvidia will still sell cards, so they don't need to lower theirs, and they're also in a tight spot due to die size. So I think Nvidia, while not happy, is content with how things turned out. However, to reiterate: it's in no way fake pricing. The odd thing is taking the strategy public, but I'm guessing it's to control the narrative so it doesn't look like AMD is the one taking it on the chin in this bout.
  11. They are. Nvidia's margins are pressured as it is because they're spending a lot of silicon on tensor and RT features. I'm guessing the comparison they had in mind was Vega (or gross margin as reported in their financials across product stacks), otherwise it doesn't make sense. Half-sized dies on 7nm should still be cheaper per chip than big dies on 12nm, so a 5700 is cheaper to make than a 2060 Super and definitely cheaper than a 2070 Super - the 2070 Super is basically a gimped 2080, i.e. TU104 instead of TU106. In short, the whole point of this marketing exercise is to pressure Nvidia's margins and sales figures more than they already are. AMD has a lot of room to work with thanks to 7nm with good yields, and they got a lot of performance out of the silicon despite the relatively small size.
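     As a rough sanity check on the per-chip cost claim, here's the standard dies-per-wafer approximation. The die sizes are the commonly reported figures; the wafer prices are placeholder guesses, since foundry pricing isn't public.
     ```python
     import math

     def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
         # Standard approximation; ignores defect yield entirely.
         r = wafer_diameter_mm / 2
         return int(math.pi * r ** 2 / die_area_mm2
                    - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

     # (die area in mm^2, assumed wafer cost in $ - placeholders, not real pricing)
     chips = {"Navi 10 (7nm)": (251, 10000), "TU104 (12nm)": (545, 6000)}
     for name, (area, wafer_cost) in chips.items():
         dpw = dies_per_wafer(area)
         print(f"{name}: ~{dpw} dies/wafer, ~${wafer_cost / dpw:.0f}/die")
     ```
     Even with the pricier 7nm wafer assumption, the smaller die comes out ahead per chip, which is the point being made.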
  12. Look at how it scales with voltage and power targets. With AIB coolers it should do 2200+ MHz with decent temps - at least if you mess with the power tables, from what I've seen elsewhere. Right now you need to undervolt to keep temperatures in check. Power will go through the roof if you push it to 2200+, though, since it's already above the sweet spot for frequency and definitely for voltage. You can see why Samsung would want this uarch in their phones: it's power efficient when you reduce the voltage and bring the frequency back into the sweet spot. You can drop power consumption 40% and only lose 7% performance in the process.
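     That 40%-power/7%-performance trade checks out against the usual first-order model P ∝ f·V². The voltage drop below is an assumed illustrative figure, not a measurement:
     ```python
     def relative_power(f_ratio, v_ratio):
         # First-order dynamic power model: P scales with f * V^2.
         return f_ratio * v_ratio ** 2

     f_ratio = 0.93  # ~7% lower clock, and roughly ~7% lower performance
     v_ratio = 0.80  # assumed ~20% undervolt back into the sweet spot

     print(f"{relative_power(f_ratio, v_ratio):.2f}x power")  # ~0.60x, i.e. ~40% saved
     ```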
  13. No matter what the actual truth is, it's quite clear that AMD has a lot more wiggle room on margins, so the reasoning is sound. So what came first? The chicken or the egg? A planned price drop or an emergency response to unexpected performance numbers?
  14. I just looked it up. There doesn't seem to be an EU law on this but apparently it's considered bad practice in my neck of the woods to charge before shipping (with exceptions like digital goods and pre-orders) so it's mostly a de facto agreement rather than legal obligation.
  15. I'm quite sure it's illegal in the EU to charge money for an item that hasn't shipped. So it depends on legislation.
  16. He means the last part of your post, which is supposedly your own opinion but is also mandatory to post according to the forum guidelines. He seems to view the fact that posting your opinion is required as invalidating the rest, for whatever reason. Maybe I'm wrong. However, it's pretty obvious why the US government would be against cryptocurrency. Likewise it's obvious why banks are against it.
  17. Yeah but we've seen before that sometimes the price never really drops all the way down again especially when companies manage their supply in the aftermath to make sure the unit price never hits that low again. It genuinely appears that some companies view these short term disasters as a long term blessing in disguise.
  18. These would be CPU lanes, not chipset lanes. So it'll be something like only the nearest x16 slot running gen 4 as well as the m.2 slot connected to the CPU. Everything else would be gen 3 including the chipset. That is if this is accurate at all.
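     For illustration, the split being described would look something like this, assuming the usual 24-lane mainstream AMD CPU layout (16 for the GPU, 4 for NVMe, 4 for the chipset uplink) - hypothetical, like the rumor itself:
     ```python
     # Hypothetical lane map per the rumor: only CPU-attached links run gen 4.
     lane_map = {
         "primary x16 slot": {"source": "CPU",     "lanes": 16,   "gen": 4},
         "CPU M.2 slot":     {"source": "CPU",     "lanes": 4,    "gen": 4},
         "chipset uplink":   {"source": "CPU",     "lanes": 4,    "gen": 3},
         "remaining slots":  {"source": "chipset", "lanes": None, "gen": 3},
     }
     for link, cfg in lane_map.items():
         print(f"{link}: gen {cfg['gen']} via {cfg['source']}")
     ```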
  19. I'm still waiting for Anandtech's deep dive on it, but from what I can dig up the slide is accurate enough: 5 dual compute units per workgroup (10 CUs), two workgroups per shader engine, and two shader engines, for a grand total of 40 CUs or 20 DCUs.
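     Multiplying that hierarchy out recovers the totals (just the slide's figures, nothing assumed beyond them):
     ```python
     shader_engines        = 2
     workgroups_per_engine = 2
     dcus_per_workgroup    = 5
     cus_per_dcu           = 2  # "dual" compute unit

     total_cus  = shader_engines * workgroups_per_engine * dcus_per_workgroup * cus_per_dcu
     total_dcus = total_cus // cus_per_dcu
     print(total_cus, total_dcus)  # 40 CUs, 20 DCUs
     ```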
  20. It's evident that more memory bandwidth is required to feed a large chip, so the only options are wider GDDR6 or HBM; the 2080 Ti opted for the former. It's also pretty likely that clock speed will drop a fair bit with such a big design. That would help power efficiency, but from what I can tell AMD left some efficiency gains on the table to get Navi out sooner. The shader engines are very different this time around. They're much bigger now, so Navi 10 only has two yet still manages roughly double the performance of Polaris' four. It depends on how they can scale that up. I don't see three engines for 60 CUs as possible (because of the shared resources), so they'd need to alter the engine configuration to get there and/or go straight to four - but an unaltered four-engine design would obviously be a massive 80 CUs that you would somehow need to power and feed with HBM. They need something between those two, however they hope to accomplish it. In other words they need a design that lands on the good side of a 2080 Super as well as the Ti (that's two designs). It also makes sense to have at least four chip designs (at roughly 20/40/60/80? CUs, one per performance level). I think if they just start watering down the shader engines you'll end up with Vega-like bottlenecked execution units, so I'm definitely interested to see how they intend to accomplish it, because it would become extra thiccc if they just bolt on more shader engines as is.
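     For scale, the bandwidth math on those options (the GDDR6 bus widths and data rates are the well-known shipping configurations; the HBM2 line is just one possible example setup):
     ```python
     def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
         # GB/s = bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte
         return bus_width_bits * data_rate_gbps / 8

     print(bandwidth_gb_s(256, 14))   # 448.0 GB/s - 5700 XT's GDDR6
     print(bandwidth_gb_s(352, 14))   # 616.0 GB/s - 2080 Ti's wider GDDR6 bus
     print(bandwidth_gb_s(2048, 2))   # 512.0 GB/s - e.g. two HBM2 stacks at 2 Gbps
     ```
     A hypothetical 80 CU part would presumably want something approaching double the 5700 XT's 448 GB/s, which is exactly why HBM keeps coming up.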
  21. It's all about scaling. If this architecture scales, AMD has a path towards ultra high-end. Hell, even a 60 CU RDNA design should be faster than 2080 Ti. Going beyond that is the question. The profit margins on 5700 should be very high by the way. The yields and die size should ensure that.
  22. That's a software problem though. So hopefully it'll be fixed although given the timelines it seems like it might not. Depends on whether AMD will put more resources towards fixing their neglect.