
Trixanity

Member
  • Posts

    3,333
  • Joined

  • Last visited

Reputation Activity

  1. Like
    Trixanity got a reaction from CarlBar in AMD RX 5700 Navi New arch   
  2. Funny
    Trixanity got a reaction from exetras in -ENDED- AMD Next Horizon Gaming E3 2019 Live Thread   
    That's the problem, isn't it? The price is reasonable and so is the performance for the segment it's in, yet you're pretty much saying that what you want is an Nvidia card.
    Yeah, I haven't seen anyone else say $750 is expensive. In fact, I've mostly heard it's a great price for a 16-core; the TR 2950X launched at $900.
  3. Like
    Trixanity got a reaction from Taf the Ghost in -ENDED- AMD Next Horizon Gaming E3 2019 Live Thread   
    I doubt the upgrade cycle is hard-locked to product launches. Some will of course upgrade immediately, but not everyone has the disposable income to drop $500 on day one, so I wouldn't presume that those who want the performance have already bought a 2060/2070. Those in the market for a GPU now will want to buy these over their Nvidia counterparts (especially the vanilla 5700) unless they're buying brands over performance. It's not a weak release, but it's not exactly impressive either, especially in the sense that the product stack needs to be filled out; in other words, it's too limited in scope. If the reviews hold up to the preliminary data, I would definitely recommend these within their given price range, with no hesitation.
  4. Like
    Trixanity got a reaction from Humbug in AMD RX 5700 Navi New arch   
    I wouldn't even say the VRAM difference is about future-proofing (not that you said that) but an actual necessity right now. There have been multiple games (not even new games) that I've heard of being bottlenecked by 6 GB of VRAM. So, depending on what you need, the 6 GB of VRAM might be a deal breaker from the get-go. 
     
    My understanding is that it isn't even Nvidia screwing consumers over; it's a design limit of their architecture. From what I understand, the memory controllers, caches and (I think) SMs are structured so that when you disable or remove one, you have to remove the others as well, since they're linked. So when you want to make a GPU with X number of Y and, as in this case, scale it down, you end up with a narrower memory bus, and therefore a card capped at 6 GB. Either they'll have to rethink that design or they'll have to (if possible) buy higher-density VRAM chips, but I think you'd end up with a 12 GB card in that case, and that would be weird. Of course, someone with more intimate knowledge of Turing (or Nvidia's uarchs in general) may want to correct me if I'm wrong. If I recall, it's the exact same (or a similar) problem that led to the infamous GTX 970 with 3.5 GB of VRAM at full speed and about 0.5 GB at 1/8th the speed.
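    The bus-width/capacity link described above can be sketched with some simple arithmetic. This is a hypothetical illustration, not vendor data: it assumes one GDDR6 chip per 32-bit channel (standard for GDDR6) and the common 8 Gb / 16 Gb chip densities, so a 192-bit bus lands on exactly 6 GB or 12 GB.

```python
# Hypothetical sketch of how memory bus width caps VRAM capacity.
# Assumes one GDDR6 chip per 32-bit channel; densities are in gigabits.

def max_vram_gb(bus_width_bits: int, chip_density_gbit: int) -> float:
    """Capacity in GB for one memory chip per 32-bit channel."""
    chips = bus_width_bits // 32          # one chip per 32-bit channel
    return chips * chip_density_gbit / 8  # gigabits -> gigabytes

# A 192-bit bus (2060-style) with 8 Gb chips is capped at 6 GB:
print(max_vram_gb(192, 8))   # 6.0
# Swapping in higher-density 16 Gb chips jumps straight to 12 GB:
print(max_vram_gb(192, 16))  # 12.0
```

    This is why, with that bus width, there is no in-between option: the next step up from 6 GB is doubling to 12 GB.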
     
    I think the 5700 will be a neat card for a lot of consumers who need more than the 2060 can provide, but AMD really does need to fill out the product stack. Two GPUs aren't enough, not by a long shot. The most important would be a card competing above the 2080, and preferably a halo card around the 2080 Ti or higher. However, it seems we could be waiting as much as another year for that.
     
    A lot of people dismiss these because they're late to the party and don't cost $200, but I honestly think they're positioned to fill a spot between two of Nvidia's most popular GPUs, in a way that gives consumers a reason to pick them over Nvidia's offerings. On top of that, they fill a performance gap if you look at the discrete GPU market as a single entity rather than the duopoly it is.
    They're not incredible by any means but they're a sensible purchase. It's a logical choice to make.
  5. Agree
    Trixanity got a reaction from _Hustler_One_ in AMD RX 5700 Navi New arch   
    I wouldn't even say the VRAM difference is about future-proofing (not that you said that) but an actual necessity right now. There have been multiple games (not even new games) that I've heard of being bottlenecked by 6 GB of VRAM. So, depending on what you need, the 6 GB of VRAM might be a deal breaker from the get-go. 
     
    My understanding is that it isn't even Nvidia screwing consumers over; it's a design limit of their architecture. From what I understand, the memory controllers, caches and (I think) SMs are structured so that when you disable or remove one, you have to remove the others as well, since they're linked. So when you want to make a GPU with X number of Y and, as in this case, scale it down, you end up with a narrower memory bus, and therefore a card capped at 6 GB. Either they'll have to rethink that design or they'll have to (if possible) buy higher-density VRAM chips, but I think you'd end up with a 12 GB card in that case, and that would be weird. Of course, someone with more intimate knowledge of Turing (or Nvidia's uarchs in general) may want to correct me if I'm wrong. If I recall, it's the exact same (or a similar) problem that led to the infamous GTX 970 with 3.5 GB of VRAM at full speed and about 0.5 GB at 1/8th the speed.
     
    I think the 5700 will be a neat card for a lot of consumers who need more than the 2060 can provide, but AMD really does need to fill out the product stack. Two GPUs aren't enough, not by a long shot. The most important would be a card competing above the 2080, and preferably a halo card around the 2080 Ti or higher. However, it seems we could be waiting as much as another year for that.
     
    A lot of people dismiss these because they're late to the party and don't cost $200, but I honestly think they're positioned to fill a spot between two of Nvidia's most popular GPUs, in a way that gives consumers a reason to pick them over Nvidia's offerings. On top of that, they fill a performance gap if you look at the discrete GPU market as a single entity rather than the duopoly it is.
    They're not incredible by any means but they're a sensible purchase. It's a logical choice to make.
  6. Agree
    Trixanity got a reaction from leadeater in (16core added)AMD 3000 specs! 4.7 GHZ, R9 3950x, R7 3700x, 3800x.   
    @leadeater
    There are indeed a lot of interesting bits in the Anandtech article. For example, the new Windows scheduler updates (in the latest Windows 10 update) that basically schedule threads per CCX (finally, I guess) for all Zen products (gen 1 through 3). In other words, it'll fill up a CCX before moving on to the next. I think that's another reason why they want to avoid multi-die as much as possible (other than price): so they don't have to split threads across CCXs and CCDs more than necessary. Sub-8-core SKUs with a single CCD will help facilitate that.
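    The "fill one CCX before moving on" placement described above can be sketched as a toy assignment function. This is purely illustrative; the CCX count and core count per CCX are made-up parameters, not how the actual Windows scheduler is implemented.

```python
# Toy sketch of CCX-aware thread placement: fill each CCX's cores
# before spilling onto the next, so small thread counts stay within
# one CCX and avoid cross-CCX communication latency.

def assign_to_ccx(num_threads: int, ccx_count: int, cores_per_ccx: int = 4):
    """Return a CCX index per thread, filling each CCX before the next."""
    placement = []
    for t in range(num_threads):
        ccx = (t // cores_per_ccx) % ccx_count  # advance only when a CCX is full
        placement.append(ccx)
    return placement

# 6 threads on 2 CCXs of 4 cores: the first 4 land on CCX 0, the rest on CCX 1.
print(assign_to_ccx(6, 2))  # [0, 0, 0, 0, 1, 1]
```

    The payoff is that workloads with 4 or fewer threads never pay the cross-CCX latency at all, which is the behavior the scheduler update is aiming for.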
    They also enabled faster clock ramp-up on gen 3 and claim it's down to 1-2 ms (although that sounds crazy compared to Intel's Speed Shift, which I believe is advertised at around 15 ms) versus 30 ms previously. Sadly it's not backwards compatible, for whatever reason (not sure if technical or artificial; it sounds like the former).
     
    Infinity Fabric is now a lot faster (double the bus width due to PCIe 4.0) and 27% more efficient. I really do wish they'd use this to get Zen 2 APUs out the door faster, with a Navi chiplet in place of a second CCD, although I'm not sure what can be accomplished if it's limited to 80 mm^2. Alas, I guess we're stuck with monolithic for now.
     
    Also, let's rejoice in that AMD has (supposedly) provided full hardware security hardening for the known vulnerabilities.
     
    Edit: they might also want to provide an updated memory controller for the APUs anyway, the reason being that LPDDR4 support would help mobile, and since they used the same die for gen 1 and 2, it seems likely they'll want to do the same for gen 3. Then again, they might still not do it, despite the power efficiency gains.
  7. Informative
    Trixanity got a reaction from Taf the Ghost in -ENDED- AMD Next Horizon Gaming E3 2019 Live Thread   
    They didn't say it at the E3 event. As you can see I've lifted it directly from the latest GN video. Given the context I think it was a backstage event at Computex.
  8. Like
    Trixanity got a reaction from leadeater in AMD RX 5700 Navi New arch   
    The blower design is alleged to be for the benefit of system integrators. If that's the case, I don't see AMD switching cooler designs until they get feedback telling them to stop. AMD should have said something about AIB design availability, as the rumors are that they'll be ready on launch day or shortly after, but of course no confirmation was forthcoming. 
    It isn't really much of a competition between the 5700 and the 2060. AMD pretty much hinted that the 2060 was just the nearest card and they had to compare against something; the performance difference between them is bigger than between the XT and the 2070, and the 2060 is pretty gimped on multiple fronts, including VRAM. The last part is funny because Samsung just scrapped their own GPUs in favor of RDNA, so how do you figure that makes sense?
    I wouldn't put too much stock in TDP, considering GPU power consumption figures rarely match the TDP in benchmarks. Some are higher, some are lower and some are roughly similar. We'll see where these fall.
    AMD claims the noise is much reduced on the new blower design (although it also sounds like they're limiting RPMs to accomplish that; more specifically, I heard 1800 RPM) and that it measures 43 dB at full tilt (I've heard mid-to-high 50s for some of the older designs).
    It does seem like they've pulled out a lot of new tricks to make this blower design work, so I'm not sure it's possible to do much more with a blower than they've done here. I'm not exactly an expert on cooler design, so maybe someone else can chime in on that.
  9. Agree
    Trixanity got a reaction from Castdeath97 in -ENDED- AMD Next Horizon Gaming E3 2019 Live Thread   
    Interesting bits. AMD will bring raytracing with the next iteration, which makes sense; it just doesn't make sense for gaming right now. People will of course scream and shout for it anyway. Not all that different from PCIe 4.0 in that regard: it's just not useful for gaming at present. Just to throw in some extra salt: by the time raytracing makes sense, a Turing card will be irrelevant, so you're not 'future-proofing' by buying one.
     
    No variable rate shading either for now, but get this: primitive shaders are back with a vengeance, and this time they actually work (note: apparently they also worked on Vega but didn't provide any tangible performance benefit, so they were disabled). Rapid packed math is also here.
  10. Informative
    Trixanity got a reaction from Tuugeboi in (16core added)AMD 3000 specs! 4.7 GHZ, R9 3950x, R7 3700x, 3800x.   
    12nm does offer an area reduction; AMD just didn't want to spend the resources required to shrink the die or add transistors, so they simply adapted the design with more space between transistors. Minimal effort.
  11. Agree
    Trixanity got a reaction from Tuugeboi in (16core added)AMD 3000 specs! 4.7 GHZ, R9 3950x, R7 3700x, 3800x.   
    No one says you have to. You won't be seeing big leaps anymore, so most people can easily manage with the same CPU for five years. If 15% isn't enough, you'll be waiting a few years for more.
  12. Funny
    Trixanity reacted to GoldenLag in (16core added)AMD 3000 specs! 4.7 GHZ, R9 3950x, R7 3700x, 3800x.   
    That's what I said in that response...
     
    And 12nm was a node meant to just be copy-pasted from 14nm.
  13. Like
    Trixanity got a reaction from Taf the Ghost in (16core added)AMD 3000 specs! 4.7 GHZ, R9 3950x, R7 3700x, 3800x.   
    12nm does offer an area reduction; AMD just didn't want to spend the resources required to shrink the die or add transistors, so they simply adapted the design with more space between transistors. Minimal effort.
  14. Funny
    Trixanity got a reaction from BiG StroOnZ in (16core added)AMD 3000 specs! 4.7 GHZ, R9 3950x, R7 3700x, 3800x.   
    And the award for the worst post of the year goes to...
  15. Funny
    Trixanity got a reaction from leadeater in (16core added)AMD 3000 specs! 4.7 GHZ, R9 3950x, R7 3700x, 3800x.   
    And the award for the worst post of the year goes to...
  16. Agree
    Trixanity got a reaction from thorhammerz in (16core added)AMD 3000 specs! 4.7 GHZ, R9 3950x, R7 3700x, 3800x.   
    Doubt it. Intel doesn't really need to compete on price.
  17. Agree
    Trixanity got a reaction from Origami Cactus in (16core added)AMD 3000 specs! 4.7 GHZ, R9 3950x, R7 3700x, 3800x.   
    Doubt it. Intel doesn't really need to compete on price.
  18. Agree
    Trixanity got a reaction from Origami Cactus in (16core added)AMD 3000 specs! 4.7 GHZ, R9 3950x, R7 3700x, 3800x.   
    I'm not sure I follow. The stack you want to see appears to be exactly the one you got, except there's now an R9 at the top. APUs are quite removed from the CPU stack, and it'll be at least six months before we can expect to hear about Navi APUs, so the stack would be quite barebones if you only expected four products given that there are four tiers.
  19. Like
    Trixanity got a reaction from cj09beira in AMD RX 5700 Navi New arch   
    The memory was; the GPU itself, not so much. It's like strapping 8 channels of DDR4 memory to Bulldozer and wondering why Intel could be faster with dual-channel DDR3. If you remove the memory bottleneck but are still bottlenecked in multiple other areas, you won't accomplish much. Besides, HBM will be for ultra-high performance going forward, until HBM gets cheaper and/or GDDR is no longer viable.
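    The bottleneck argument above can be sketched roofline-style: performance is capped by whichever limit binds first, so once a chip is compute-bound, extra memory bandwidth buys nothing. All numbers below are invented for illustration, not Vega or Bulldozer specs.

```python
# Rough roofline-style sketch: attainable throughput is the tighter of
# raw compute and (bandwidth x arithmetic intensity). Once compute binds,
# more bandwidth (e.g. HBM vs GDDR) changes nothing.

def attainable_tflops(peak_tflops: float, bandwidth_tbs: float,
                      flops_per_byte: float) -> float:
    """Performance is capped by the tighter of the two limits."""
    return min(peak_tflops, bandwidth_tbs * flops_per_byte)

# A compute-bound chip: doubling memory bandwidth leaves it unchanged.
print(attainable_tflops(10.0, 0.5, 40))  # 10.0
print(attainable_tflops(10.0, 1.0, 40))  # 10.0 (HBM-class bandwidth, same result)
```

    That's the sense in which strapping fast memory onto an otherwise-limited design "won't accomplish much": the min() is being taken on the other side.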
  20. Funny
  21. Funny
  22. Agree
    Trixanity got a reaction from Humbug in (16core added)AMD 3000 specs! 4.7 GHZ, R9 3950x, R7 3700x, 3800x.   
    Definitely. Anything else would be crazy given the audience.
  23. Like
    Trixanity got a reaction from Taf the Ghost in (16core added)AMD 3000 specs! 4.7 GHZ, R9 3950x, R7 3700x, 3800x.   
    We still lack a lot of details. It seems like AMD is deliberately splitting the info across two events (probably for the hype factor), so we might hear more on the 10th (if not, then on 7/7). They haven't mentioned XFR yet, so either it's deprecated or it can't do anything for Zen 2. 
    So if there is any more frequency left on the table, I actually think it's smart to wait until the 10th to unveil it; it'll be all the more of a spectacle that way. Not that I'm expecting anything, just so we're clear. I fully expect it to top out at 4.6 GHz until I hear otherwise. 
  24. Funny
  25. Agree