Everything posted by Briggsy

  1. 1nm + + + + + + + + + + + + + Joking aside, they might just move the goalposts and advertise transistor density in a different way. Transistors per mm² seems like a good metric for Intel to market, as their 10nm density isn't far off from TSMC's 7nm. At this point the whole nm designation is pure marketing bullsnot. Quick napkin math on what density-per-area looks like below.
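Just to illustrate the metric (the chip names and numbers here are made-up placeholders, not official figures):

```python
# Napkin math: density (MTr/mm^2) = transistor count / die area.
# Chip names and numbers are made-up placeholders, not official figures.
def density_mtr_per_mm2(transistors_millions: float, die_area_mm2: float) -> float:
    return transistors_millions / die_area_mm2

examples = {
    "hypothetical dense node": (10_300, 100.0),  # 10.3B transistors on 100 mm^2
    "hypothetical older node": (3_700, 100.0),   # 3.7B transistors on 100 mm^2
}
for name, (mtr, area) in examples.items():
    print(f"{name}: {density_mtr_per_mm2(mtr, area):.1f} MTr/mm^2")
```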
  2. It's a curious notion for sure. Back when AMD were overbuilding their hardware, it took months and sometimes years for the drivers to catch up, and so the fanboys fell back on the idea that AMD performance improved over time like fine wine, instead of acknowledging that AMD couldn't get their drivers optimized in a timely fashion for AAA games. The fine wine argument is a reaction to AMD having slow driver releases: if AMD's drivers were already optimized at launch, there wouldn't be any "fine wine" improvements left to find later. You can't have strong day-one drivers and fine wine at the same time. The only other aspect is the amount of VRAM that AMD uses compared to Nvidia. Go all the way back to GCN 1.1 with the R9 290 and there were 4GB and 8GB variants, while Nvidia was playing around with 3GB 780s and 6GB Titans. As far back as I can remember, AMD have always had more VRAM. I think the VRAM size might be the only meaningful way that AMD cards could age better, but at some point all the VRAM in the world isn't going to give you more performance, and in my own testing Nvidia manages VRAM usage better than AMD does, which means AMD having more VRAM might simply be compensating for less aggressive memory management.
  3. While I like rooting for the underdog, AMD is hardly the underdog anymore. They own the console market and they've got their boot on Intel's throat. Big Navi is AMD's Zen moment for sure, but I owned a Zen 1 processor at launch and it was hot garbage, so not really a good selling point for Team Red imo. Based on reviews and benchmarks, Ampere is the safer option this time. Plus, the 6900 XT is going to be as rare as hen's teeth for a long time. If you're concerned about power draw, Ampere can undervolt extremely well.
  4. It wouldn't hurt to try setting your RAM to default anyway and see if that's the problem.
  5. Don't return the card yet before checking your RAM. I was having problems with my 3090 at first, the exact same thing. I'd be fine gaming, but as soon as I started watching a YouTube video at the same time, the game I'd be playing would crash, with occasional bluescreens, etc. Turns out my system memory was getting overheated by the 3090 and becoming just unstable enough that YouTube alone couldn't be run at the same time as a game. I have excellent airflow in my case, the graphics card usually sits in the high 60s or low 70s under load and the fans are completely silent. From what I can tell it's not the air temps in my case, but the heat being transferred through the motherboard itself across to the RAM. I've since adjusted my RAM to tighter timings, lower clocks and lower voltage, and everything is 100% stable now. Set your memory in the BIOS to default/auto and see if your problem is still happening (quick way to check what your memory is actually running at below).
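If you're on Windows, this is a rough sketch of how to see what the DIMMs are actually running at without rebooting into the BIOS; it just shells out to PowerShell/WMI, so treat it as an approximation:

```python
# Rough sketch: query rated vs. currently configured memory speed on Windows
# via PowerShell/WMI. If ConfiguredClockSpeed is at the JEDEC default rather
# than the XMP rating, the BIOS is already running the memory at "default/auto".
import subprocess

query = (
    "Get-CimInstance Win32_PhysicalMemory | "
    "Select-Object BankLabel, Speed, ConfiguredClockSpeed, ConfiguredVoltage | "
    "Format-Table -AutoSize"
)
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", query],
    capture_output=True, text=True,
)
print(result.stdout)
```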
  6. Nvidia are selling mining companies the GPU chips, and the mining companies have their own PCB designs, components and drivers. Not only that, but Nvidia are selling them the chips that have too many defects to be used in a consumer product (but still have working SMs), thus giving Nvidia the ability to make up for poor Samsung yields. To be honest, I've never seen a more misinformed and entitled forum thread in my life. Outrage culture needs some valium.
  7. You ran the OC Scanner in Afterburner? That's the first thing anyone with an Ampere card should do before thinking they got a dud.
  8. Nvidia aren't selling cards to miners, only the chips. Mining companies have their own drivers, PCB layouts, etc. To think AMD are not selling chips to mining companies as well is ludicrous. Mining companies don't want the consumer cards; the reference PCB design is likely too inefficient for mining.
  9. From what I've read and heard, Nvidia aren't selling cards to miners, only the GPU chips. The mining companies have their own PCB designs, components and in-house drivers. In theory Samsung might be able to produce more chips than AIBs need, so the excess being sold to miners doesn't impact how many cards get manufactured and sold to gamers. For all we know, the exact same thing was happening with Pascal and Turing, but the demand from gamers wasn't nearly as high as it is now, hence the outrage.
  10. Totally agreed. If there's any silver lining, the rumor does suggest that it's only the chips being sold to mining companies and not assembled graphics cards, with the mining companies having their own PCB designs, components and in-house drivers. It may be that Samsung and TSMC can produce the GPU dies faster than AIBs can put cards together, allowing Nvidia and AMD to saturate the production pipeline better. But now I'm just speculating on a rumor, and the demand for graphics cards is so high right now that I can see this being AMD and Nvidia biting off more than they can chew. And then there's Intel with their Xe graphics chips; I wonder where they sit with large mining farms.
  11. Rumor is that both AMD and Nvidia are shipping large quantities of chips to crypto mining companies, so there's that.
  12. From a bird's eye view it's very easy to see where the problems are in the supply chain, but there isn't a single entity in control of everything. From the manufacturers of components to the retailers and everywhere in between, everyone involved is only responsible for a tiny slice of the whole pie, even Sony. I personally think retailers hold the lion's share of the blame, but it would be up to Sony to provide some kind of incentive for retailers to combat scalpers, because retailers are made up of individuals who are only interested in their own sales numbers, or how well they are moving product. A product manager who tries to be conscientious about getting PS5s to proper customers is risking their own job, so they won't do anything unless their boss tells them to, but their boss doesn't want to start discriminating between who is and isn't a customer; that's bad for business. I think my point is that nobody involved in the supply chain has any incentive to change how they do things, and they even risk reprimand from their superiors if they try to fight scalpers. AMD, Nvidia, Intel, Sony, Microsoft, Nintendo, etc. will continue to pay lip service to consumers and feign interest in fighting scalpers, but it won't change anything. As long as there's someone willing to pay scalper prices, there are going to be scalpers.
  13. Probably because AMD have had something for a couple of years now that works just fine without any overhead, and without needing die-space-hogging tensor cores and AI training. If my 2080 had anything like Radeon Image Sharpening, I'd play everything in 1440p and upscale to 4K for the added performance. For the RX 6000 series, AMD users will use Radeon Image Sharpening to get more performance with raytracing, without the artifacting that exists with DLSS. My biggest gripe with tech forums is that a lot of comments contain misinformed opinions. Even techtubers lack the knowledge you'd think they would have working in this space, which only serves to create an echo chamber of misinformation. I know that my own knowledge base is limited, but the number of times I've seen people asking what AMD's answer to DLSS is makes me wonder where people get their information. How about Nvidia Reflex? AMD have had their own Anti-Lag for over a year now, and it works insanely well. DLSS? AMD already have great upscaling that works with every game and requires no training. Fast Sync? Ditto.
  14. For moderate upscaling it's totally doable. See the video below starting at around the 15:20 mark. This is a video from a couple of years ago that Wendell and crew did where they talk about Radeon Image Sharpening, which basically allows you to game at 1440p and upscale to 4K with almost no discernible difference. IIRC it's the same tech the game consoles have been using for years. Right now Nvidia have nothing like this that works across all games, but I assume a more ubiquitous version of DLSS would do something similar to what AMD has. It's not going to do what the true deep-learning DLSS can do with its 240p-upscaled-to-1080p kind of sorcery, but 1440p to 4K with minimal differences is pretty good considering it's slightly more than a 2x upscale (quick pixel math below). If that's all DLSS ends up being for most games, I'd be fine with it.
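Quick pixel math behind the "slightly more than 2x" comment (just resolutions, nothing vendor-specific):

```python
# Pixel-count math behind the "slightly more than a 2x upscale" comment.
def pixels(width: int, height: int) -> int:
    return width * height

qhd = pixels(2560, 1440)  # 3,686,400 pixels
uhd = pixels(3840, 2160)  # 8,294,400 pixels
print(f"4K has {uhd / qhd:.2f}x the pixels of 1440p")  # -> 2.25x
```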
  15. Unless someone hands them a big sack of cash to make it happen
  16. The problem I see with DLSS right now is that we don't know whether DLSS is going to be a bait and switch or not. Most scaling algorithms in use (e.g. bicubic, bilinear) can be thought of as dumb algorithms (figuratively speaking). DLSS in its current form has to be trained through machine learning using high-resolution reference images supplied by the game developer to Nvidia. It's theoretically possible that at some point an AI can be trained on enough reference images to upscale any game without per-game training and provide better fidelity than dumb upscaling algorithms, but I would argue that it will never be as effective as a smart upscaler that is trained for a specific game the way Nvidia are doing it now, and there are not that many games with DLSS to begin with. I would also argue that the computational power required for a general-purpose AI upscaler might be better spent by removing the tensor cores and adding more float/integer units instead. I'm not saying DLSS is snake oil, I'm merely saying that unless Nvidia train the upscaler for a specific game, it's never going to be that much better than dumb upscaling, and even if it is, the horsepower required and the extra die space used make it a wash. I hope to be proven wrong and Nvidia will eventually release a ubiquitous form of DLSS that is better than a dumb algorithm, without sacrificing die space on the GPU. For now the extra $80 gets you 16GB of VRAM vs 8GB, and better rasterization performance. Nvidia do manage VRAM usage better than AMD in drivers, but that only gets you so far. (Rough sketch of what I mean by a "dumb" upscaler below.)
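This is the kind of fixed pipeline I mean by "dumb": a plain bicubic resize plus an unsharp mask, with no per-game training involved. Just a sketch using Pillow, and the filenames are placeholders:

```python
# Minimal sketch of a "dumb" upscale: a fixed bicubic resize plus an unsharp
# mask, with no per-game training or reference images involved.
# Requires Pillow (pip install Pillow); filenames are placeholders.
from PIL import Image, ImageFilter

def dumb_upscale(path_in: str, path_out: str, scale: float = 1.5) -> None:
    img = Image.open(path_in)
    up = img.resize(
        (int(img.width * scale), int(img.height * scale)),
        resample=Image.BICUBIC,
    )
    # Sharpen to recover some of the perceived detail lost in the resize.
    up = up.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))
    up.save(path_out)

# 1440p -> 2160p is a 1.5x scale per axis (2.25x the pixels).
dumb_upscale("frame_1440p.png", "frame_2160p.png", scale=1.5)
```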
  17. Not bad considering we're looking at the 6800, which has 60 CUs vs. the 80 CUs of the 6900 XT and 72 CUs of the 6800 XT, and these cards have 1 Ray Accelerator per CU (quick ratio math below).
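Since the Ray Accelerator count just equals the CU count, the relative RT hardware works out like this:

```python
# One Ray Accelerator per CU, so the RA count equals the CU count.
cu_counts = {"RX 6800": 60, "RX 6800 XT": 72, "RX 6900 XT": 80}
for name, cus in cu_counts.items():
    print(f"{name}: {cus} RAs ({cus / cu_counts['RX 6900 XT']:.0%} of the 6900 XT)")
```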
  18. I've recently tested this myself as part of my decision on whether to go Ampere or RDNA2 for my next daily driver for gaming, so I couldn't care less what someone else thinks vs. my own first-hand experience. First, AMD have Radeon Image Sharpening, which makes what you see on screen look better, like having SweetFX or ReShade with one click. I wouldn't confuse it with Nvidia's sharpening feature in the control panel. The other thing I noticed right away was the color quality. Nvidia colors seem washed out in comparison, but you wouldn't notice unless you did a side-by-side. Second, and it's kind of a big one, is how Nvidia manage memory usage in drivers. In particular with ARK in 4K, I noticed that my Vega 56 would be hitting the 8GB frame buffer limit all the time and only then swapping to system memory (causing brief hitches), while my 2080 seems to dump resources pre-emptively from VRAM when it passes 7-7.5GB of usage, but the quality of textures and objects in-game starts bouncing back and forth between playdough and high quality. The solution in either case was to lower a setting like LODs or texture quality to stay below 8GB. The gameplay experience is less disruptive on Nvidia because the swapping to system memory is far more aggressive and pre-emptive, but sometimes one of my tames will turn into an N64 model for a few seconds if I don't make adjustments, which is itself annoying and kills immersion. The third thing that stands out to me was the upscaling capabilities. To get the same framerate in 4K on the Vega 56 as on my 2080, I had to drop resolution scaling to about 60% on the Vega, whereas I normally play with scaling at about 90% on my 2080. For whatever reason, the Vega 56 image quality with Radeon Sharpening turned on looked better in some cases than the 2080. If I turn resolution scaling down to 60% on the 2080, the aliasing is so bad that objects 20 feet away in-game look like garbage. I suspect the upscaling AMD are using on desktop is what the XBone uses to adjust resolution on the fly. Nvidia's newest upscaling tech, DLSS, is far superior of course, but only if it's been trained. I suspect that AMD's upcoming Super Resolution feature will be an improvement on Radeon Sharpening, which is already pretty good. I'm also curious how good Nvidia's next iteration of DLSS will be, the one that is supposed to work without being trained. My guess is that it won't be much different from dumb upscaling, and only the AI-trained DLSS will be superior. One thing to note is that Nvidia themselves, back with either Maxwell or Pascal (IIRC), introduced shader optimizations in their drivers. This isn't uncommon; AMD optimize tessellation, and Nvidia optimize anisotropic filtering as well. These optimizations are not supposed to affect image quality. I can confirm that Nvidia's aggressive memory management does affect image quality, but it prevents frame drops by doing so. AMD going with 16GB on RDNA2 isn't just a flex on their part imo, I think it's because they need to for 4K gaming. It's probably cheaper to slap 16GB of VRAM on a card than to invest in driver-level memory optimizations. This is all just my own observations and/or opinions, take them for what they are. I couldn't care less if someone believes me; I originally only set out to figure this stuff out for myself and to help me decide what I'm buying this generation. (If you want to watch VRAM usage yourself, there's a rough logging sketch below.)
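For anyone who wants to repeat the VRAM part of this on an Nvidia card, here's roughly how I'd log usage while a game runs. It's just a sketch using the NVML Python bindings (pip install nvidia-ml-py), and the 7.5GB warning threshold is only an example for an 8GB card:

```python
# Rough sketch: log VRAM usage once per second while a game runs, to see how
# close the card sits to its frame buffer limit.
# Requires the NVML bindings: pip install nvidia-ml-py
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        used_gb = mem.used / 1024**3
        note = "  <-- approaching the 8GB frame buffer" if used_gb > 7.5 else ""
        print(f"VRAM used: {used_gb:.2f} GB{note}")
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```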
  19. what are you even talking about? Why respond if you're going to BS for no reason?
  20. It will do it at optimal power as well when you lock the voltage, even at idle clocks, hence why the OP is asking for advice on an automated solution.
  21. I'm curious about this as well, since in Afterburner even if you save the profile it won't load the profile properly later on. In my case I lock my 2080 to 1.000V and 1950MHz using the curve editor and save it to a profile. When I'm done gaming I restore default settings. When I load up the profile again later, the curve editor doesn't revert to what I had locked in, but loads whatever curve it feels like instead. It only takes me 10 seconds to redo the voltage lock every time I'm gaming, but yeah, it would be nice to have an easier implementation that doesn't ruin the custom curve you saved. AMD have this built into their in-game software overlay to automatically load a custom voltage curve on a per-game basis; it's little things like this that I wish Nvidia would add to their suite. (A partial scripted workaround below.)
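The closest thing I know of to scripting this on the Nvidia side is pinning the core clock with nvidia-smi around a gaming session. It locks clocks rather than a point on the voltage/frequency curve, so it's not a true replacement for the Afterburner undervolt; it also needs admin rights and a reasonably recent driver, and may not be supported on every GeForce board. The clock values are just examples. A rough sketch:

```python
# Rough sketch: pin the GPU core clock with nvidia-smi around a gaming session.
# This locks clocks, not a voltage point on the V/F curve, so it is not a 1:1
# replacement for an Afterburner undervolt. Needs admin rights, a reasonably
# recent driver, and a GPU that supports --lock-gpu-clocks.
import subprocess

def lock_clocks(min_mhz: int = 1950, max_mhz: int = 1950) -> None:
    subprocess.run(
        ["nvidia-smi", "--lock-gpu-clocks", f"{min_mhz},{max_mhz}"], check=True
    )

def reset_clocks() -> None:
    subprocess.run(["nvidia-smi", "--reset-gpu-clocks"], check=True)

if __name__ == "__main__":
    lock_clocks()  # example: hold the core at 1950MHz
    input("Clocks locked. Press Enter when you're done gaming to reset...")
    reset_clocks()
```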
  22. In AMD's footnotes for the recent presentation, they mention that with their hardware Ray Accelerators in use they got a 13.8x FPS uplift (i.e. 1,380% of the software-only result) over just using software DXR. I can't recall who, but someone on YouTube extrapolated that this would put the 6800 XT on par with the 3070 in raytracing capability. If it's true that some AIB cards have boost clocks past 2.5GHz for the 6800 XT, then maybe a little faster in a best-case scenario. The performance hit, IIRC (again, I can't remember the source), was said to be similar to Turing's.
  23. Next gen will be very interesting. The next iteration of Navi is rumored to be chiplet-based like Zen CPUs, but with how GPUs operate I don't see the same Infinity Fabric latency issues existing at all. The purchase of Xilinx may help reduce costs, who knows. Nvidia's Hopper is rumored to be a chiplet-based design as well, which if true means the RTX 3000 series and RX 6000 series might get roflstomped by their own next-gen replacements.
  24. Looking on Newegg.ca, the 3070 is priced (converted to USD) from $525 to $638. In local currency that's 700-850 loonies before sales tax (rough conversion below).
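The conversion assumes roughly 1.33 CAD per USD, which is approximate rather than an exact quote:

```python
# Rough CAD -> USD conversion behind the numbers above.
# ~1.33 CAD per USD is an approximate rate, not an exact quote.
CAD_PER_USD = 1.33
for cad in (700, 850):
    print(f"{cad} CAD ≈ ${cad / CAD_PER_USD:.0f} USD")
```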