uzzi38

Member · 16 posts


  1. As for point one: yes, but only if you keep die sizes the same. I highly doubt Nvidia will be selling consumers 700+mm^2 dies yet again given available N7 supply and price per wafer. Not only is there not enough supply to meet market demand, they'd also have barely any margin at all on dies that size. As for architectural improvements, really? "It doesn't feel right"? Come on man, what I posted isn't even a leak: it's a guy talking about CUDA 11, which has been publicly released, and how after some playing around not much seems to be different between Ampere and Turing. "It doesn't feel right" isn't a counter-point.
  2. That 350W rumour exists for a reason. I highly doubt Nvidia will be happy with only a 30% uptick this generation. Not even slightly.
  3. Yeah, so uh, we may or may not have used Guru3D's numbers originally (first search result that came up), but for some reason they compared multiple GPUs using overall 3DMark ratings, not the graphics scores like you'd expect. The new numbers are purely graphics scores, so they actually can be compared against standard 2080 Ti performance figures (see the first sketch below this list for why overall scores muddy GPU comparisons).
  4. https://hardwareleaks.com/2020/06/21/exclusive-first-look-at-nvidias-ampere-gaming-performance/ Long story short, the key details from the article: the GPU core clock reports as 1935MHz, and the memory clock reports as 6000MHz, though that's likely a misreport due to the new memory type and early drivers. EDIT: Alright, if I'm to add something original, then here we go: GA100 at least doesn't show any major uplift in performance per flop, or IPC, whatever you want to call it. There seem to be few adjustments to the uArch on the shader side. I'd hope consumer-facing Ampere is different, but I can't see some revolutionary jump in performance coming as a result of this. The score is in line with what you'd expect from the same kind of clocks plus 20-something% extra CUDA cores (rough maths in the second sketch below this list). I do hope the final clocks are at least higher though, especially given the 350W rumour. At, say, 2.3GHz it would become a ~50% lead over the 2080 Ti, which is more like what you'd expect generation on generation. In any case, the main point I'm trying to make: I'm fairly sure this is either the rumoured 3090 or a Titan.
  5. Nah, the Vega uArch itself hasn't changed with Renoir beyond the media and display engines getting swapped out for Navi's. The main uplift in performance really does come from the increase in sustained clocks: the difference scales perfectly with clocks, since higher iGPU clocks also boost the output of the associated fixed-function hardware (see the third sketch below this list). The CUs themselves are practically identical otherwise. Also, it may or may not be the final Vega iGPU product; only time or a damn good leak will tell.
  6. It's facing the 1650 Max-Q and 5500M, not the 1650 and 570. ...also, 1650 for performance and 570 for pricing? Wut? The 570 is head and shoulders above the 1650 in performance. Anyway, yeah, it's laptop-only, and by the nature of being an Intel product it'll make it into plenty of designs, because Intel.
  7. You're thinking about these chips the wrong way around. They don't exist to replace the 1600AF or 2600; they exist to stop people buying Comet Lake i3s, by offering similar gaming performance in non-AAA games on a way better platform (second-hand or even new 3700Xs will be dirt cheap a year from now, like what's happened to 1st and 2nd gen Ryzen, and there's Zen 3 next year too, as opposed to the Comet and Rocket Lake housefires). And yes, 3rd gen Ryzen will see major discounts; it already has in most places. Anyway, apparently there's something different about these chips? Maybe? I dunno, it's just a rumour, so this last bit from me might be worth ignoring.
  8. Exactly. Nobody does, which is why nobody really tests battery life like that. I mean, they often do, but nobody cares about the sustained-performance battery life figures. Stuff like PCMark 10 results is worth talking about, because more often than not, if you're on battery you're doing short, bursty workloads, which is exactly the kind of thing that suite tests.
  9. Okay, one thing I want to make clear: for the most part, battery life =/= power consumption, and higher ST frequencies don't have a huge effect on battery life. Reminder: Comet Lake-U has a 4.9GHz 1T turbo on the 10510U, yet its battery life is fantastic. The main reason for poor battery life in Intel's -H chips is how Intel builds them. All of their -H chips are really just full desktop dies put onto a different package, whereas AMD uses the same dies for laptops in both the -U and -H segments. That makes quite a big difference, because those smaller dies, often called ULV dies, cut down several features (ever wondered why Picasso, Raven Ridge and Renoir only have an x8 PCIe link for the GPU, or why AMD cut down the cache?) and have significantly more power gating and various other in-silicon optimisations to minimise leakage and idle power draw (toy model in the fourth sketch below this list). With Picasso and Raven Ridge it didn't matter for AMD, as their power gating and management were so poor they did horrendously in battery life despite using the same die for both segments. With Renoir, it looks like they put a hell of a lot of time into all of that optimisation to get a product they could use in mobile and use well, which is why the Zephyrus G14 and G15 have such good battery life despite using -H chips that are allowed to boost up to ~65W (though for <10 seconds).
  10. At this point I honestly don't know what's worse... If you want a datapoint for how bad Comet Lake is, here's one: a Cinebench R15 1T run on the 10875H at 4.9GHz (reminder: it's rated for a 5.1GHz 1T boost under the official specs, +200MHz over this via Thermal Velocity Boost) pulls 35W. And that's a relatively light workload that, iirc, doesn't include any AVX, certainly not AVX2. Imagine that: the entire power budget of the 4900HS spent on a single core at 4.9GHz. In a laptop. But I guess it's part of Intel's strategy to improve the gaming experience; think of all the RGB you get as your VRMs start glowing under multi-core workloads!
  11. It's worth pointing out that OEMs can and will set whatever the heck PL1, PL2 and Tau values they want, because Intel doesn't stop them (see the last sketch below this list for how those three knobs interact). The device here is an OEM PC, so it's altogether possible that when it comes to DIY later, the final values will be lower or, more likely, higher. Enjoy your housefire!
  12. It's also done to reduce idle power draw. SRAM is extremely difficult to power gate, so simply not carrying as much of it as the desktop dies helps.
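
A quick illustration for point 3, in Python. If memory serves, 3DMark Time Spy's overall rating is a weighted harmonic mean of the graphics and CPU scores with weights of roughly 0.85/0.15; treat those weights and all scores below as assumptions for illustration only. The point is that the overall rating moves with the CPU even when the GPU is identical, which is why only graphics scores are comparable across systems.

```python
# Why overall 3DMark ratings mislead for GPU comparisons.
# Assumed (from memory): Time Spy overall = weighted harmonic mean of the
# graphics and CPU scores, weights ~0.85/0.15. All scores are made up.
def timespy_overall(graphics, cpu, w_gpu=0.85, w_cpu=0.15):
    return 1.0 / (w_gpu / graphics + w_cpu / cpu)

# Same GPU (same graphics score), two different CPUs:
print(round(timespy_overall(14000, 12500)))  # ~13752 with a high-end CPU
print(round(timespy_overall(14000, 7000)))   # ~12174 with a mid-range CPU
```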
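Rough maths for point 4. This is a naive throughput model (performance scales with CUDA cores times clock) assuming performance per flop is unchanged versus Turing, as the GA100 numbers suggest; the ~23% core uplift and the 2080 Ti sustaining a similar ~1.9GHz in practice are assumptions, not confirmed figures.

```python
# Naive throughput model: perf scales with CUDA core count x clock.
# Assumes perf/flop parity with Turing and that a 2080 Ti sustains roughly
# the same ~1.9GHz as the leaked part's reported 1935MHz (assumptions).
def relative_perf(core_ratio, clock_ghz, baseline_clock_ghz=1.935):
    return core_ratio * (clock_ghz / baseline_clock_ghz)

print(round(relative_perf(1.23, 1.935), 2))  # 1.23 -> the ~30% lead the leak shows
print(round(relative_perf(1.23, 2.3), 2))    # 1.46 -> approaching a 50% lead
```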
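And for point 5: if rasterisation performance scales purely with clocks, the fps ratio should match the clock ratio for parts with the same CU count. All numbers below are hypothetical, just to show the check.

```python
# Sanity check for pure clock scaling: fps ratio should equal clock ratio.
# Clocks and the baseline fps are hypothetical, for illustration only.
old_clock_ghz, new_clock_ghz = 1.4, 1.75   # illustrative sustained iGPU clocks
old_fps = 40.0                             # made-up baseline
expected_new_fps = old_fps * (new_clock_ghz / old_clock_ghz)
print(expected_new_fps)  # 50.0 -> a measured ~50fps means the uplift is all clocks
```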
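A toy model for point 9, showing why idle power and leakage, not peak boost power, dominate battery life in bursty use. The battery capacity, power figures and burst duty cycle are all made up for illustration.

```python
# Toy battery-life model: average power = duty-cycled mix of burst and idle.
# All numbers are illustrative, not measurements of any real laptop.
BATTERY_WH = 60.0

def battery_hours(idle_w, burst_w, burst_fraction):
    avg_w = burst_w * burst_fraction + idle_w * (1.0 - burst_fraction)
    return BATTERY_WH / avg_w

# Boosting hard 5% of the time barely matters if the die idles efficiently...
print(round(battery_hours(idle_w=3.0, burst_w=35.0, burst_fraction=0.05), 1))  # ~13.0h
# ...but a die with poor power gating at idle tanks battery life anyway.
print(round(battery_hours(idle_w=8.0, burst_w=35.0, burst_fraction=0.05), 1))  # ~6.4h
```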
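Finally, a minimal sketch of how PL1, PL2 and Tau interact for point 11, assuming Intel's documented scheme: the package may draw up to PL2 as long as an exponentially weighted moving average of package power (with Tau as the time constant) stays below PL1. The values below are illustrative, not any SKU's defaults.

```python
# Sketch of PL1/PL2/Tau turbo behaviour: draw PL2 until the exponentially
# weighted moving average of package power reaches PL1, then hold PL1.
# PL1/PL2/Tau values are illustrative, not any specific SKU's defaults.
PL1, PL2, TAU = 45.0, 107.0, 28.0   # watts, watts, seconds
DT = 1.0                            # timestep in seconds
alpha = DT / TAU                    # EWMA smoothing factor

avg = 0.0                           # start from a cold (idle) package
for t in range(61):
    draw = PL2 if avg < PL1 else PL1    # boost while the average allows it
    avg += alpha * (draw - avg)         # EWMA update
    if t % 10 == 0:
        print(f"t={t:2d}s draw={draw:5.1f}W avg={avg:5.1f}W")
```

A bigger Tau stretches how long the chip can sit at PL2 before the average catches up, and raising PL1 lifts the sustained limit outright, which is exactly the pair of knobs OEMs like to crank.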