jerubedo
Member · 1,444 posts
Everything posted by jerubedo

  1. Well, it really depends on what you're looking for. If you want the highest possible FPS, period, out of the 6800, then yes, you'll "need" a CPU upgrade. If you want 90-95% of what the 6800 has to offer, then no, you really don't need to upgrade. As for whether the 5000 series will run on B350, it's doubtful. Support on 400 series boards is already delayed until 2021, supposedly. I'd check with your board's manufacturer, though. It's possible some of them will provide support for the 5000 series, although I'd think it unlikely.
  2. In ANY way? Yes. Realistically? Not really. The 3600 is more than enough for up to 144Hz in most games. Beyond that, you'll probably want something better. The 5900X would be the one to look at, at a premium of course. Outside of that, any 5000 series would be the way to go, or maybe Rocket Lake. Intel seems to think it can still keep the gaming crown, but I'm doubting that after seeing the 5000 series numbers.
  3. @Falkentyne Okay, here's the data I've gotten. At the wall: 579W at a scene running 75 FPS, and 605W at a scene running 55 FPS. So total system draw went up, but that alone doesn't tell you which component actually used more power. Software readings (AIDA64): in the 75 FPS scenario, GPU: 353W, CPU: 110W; in the 55 FPS scenario, GPU: 315W, CPU: 150W. So according to software, the GPU dropped 38W, which sure does point to the CPU being the problem (a 9900K at 5.0GHz, by the way). However, TDP for the card is 350W, so 315W is still roughly 90% of its limit. Something still seems off, though: in the 75 FPS scenario the card is getting 0.21 frames per watt, while the 55 FPS scenario only gets 0.17 (math sketched below). Seems odd, although software-based wattage readings could be off.
  4. Oh, interesting! I'll have a look at power draw too, then! That might shed some more light on this.
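     For anyone who wants to sanity-check the efficiency comparison above, here's a minimal sketch of the frames-per-watt math using the software (AIDA64) readings; the numbers are the ones from this post, and sensor readings like these are approximate:

     ```python
     # Frames-per-watt comparison using the AIDA64 GPU readings quoted above.
     # These are software sensor values, not wall measurements, so treat them as rough.
     scenarios = {
         "75 FPS scene": {"fps": 75, "gpu_watts": 353, "cpu_watts": 110},
         "55 FPS scene": {"fps": 55, "gpu_watts": 315, "cpu_watts": 150},
     }

     for name, s in scenarios.items():
         frames_per_watt = s["fps"] / s["gpu_watts"]
         combined = s["gpu_watts"] + s["cpu_watts"]
         print(f"{name}: {frames_per_watt:.2f} frames/W on the GPU, {combined}W CPU+GPU (software)")
     ```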
  5. @Falkentyne I just re-tested 4K. I think I made a typo last night. At that same scene it's 35, not 45. I corrected that post with an edit just now.
  6. Yes, I've monitored each thread and the most loaded thread is at 80%. Also, if it were a CPU limitation, the GPU usage would not be 96%. An example of a clear CPU bottleneck is Flight Simulator 2020: there I get only 45% GPU usage and only around 45 FPS, with 2 maxed-out threads and overall CPU usage above 85%.
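     If anyone wants to do the same per-thread check themselves, here's a rough sketch using Python's psutil (my choice of tool here, not something from the game; any hardware monitor like HWiNFO shows the same data) to log per-core load while the game runs:

     ```python
     # Quick per-core load logger (psutil). A single core pinned near 100% while GPU
     # usage sits well below ~99% is the classic sign of a CPU-bound game; in this
     # case the busiest core topped out around 80%, so the GPU is the limiter.
     import psutil

     for _ in range(30):  # sample roughly once a second for ~30 seconds
         per_core = psutil.cpu_percent(interval=1.0, percpu=True)
         print(f"max core: {max(per_core):5.1f}%   avg: {sum(per_core) / len(per_core):5.1f}%")
     ```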
  7. At the scene I was at? I can largely get 60 FPS or over as well but at certain spots like the one pictured it stays at a constant 51. What average do you get on the benchmark run?
  8. RTX Ultra with DLSS off and Extra Details off actually does the trick. I guess I can try it for a bit.
  9. That would only hold true if the GPU weren't being utilized close to 100%. If a 1080p load puts 96% utilization on the GPU and it still can't maintain 60 FPS, then that GPU, regardless of what it is, can't handle the load being asked of it. Another good example is Red Dead Redemption 2. With everything maxed, it pulls about 75 FPS at 1080p in the most demanding scenes (the snowy areas at the beginning, for example) with 99% GPU utilization. At 4K those same scenes are right around 60 FPS. That makes it just fine for 144Hz 1080p or 60Hz 4K. A side note: most people assume Saint Denis is the most demanding area, but it's not; those snowy areas are the worst-case load.
  10. Well, given what you actually have, you did as well as anyone could. As @brob said, if you want to spend some extra cash to tidy up, buying shorter PSU cables would certainly help, but that comes at a premium. In that tower, the motherboard only mounts in one place. You can just rearrange the outer panels: put the panel with the feet on the side, put the side panel on the bottom, and change the orientation of the front panel. That effectively puts the motherboard in the horizontal position like you see in other pictures. I'm not sure how that helps them, though; everything internally would still be mounted just the same.
  11. There are a few considerations here: 1) Games right now are using 4-6GB of VRAM at 4K, but that's largely because consoles only have about 6GB of VRAM available to them in games (the other 2GB is for the system and its various processes, since memory is shared on both PS4 and Xbox One). Devs will absolutely eat up any and all resources given to them. PS5 and Series X will both have 16GB of shared memory. Believe me when I say we will see games using close to that! It likely won't be in any kind of optimized fashion, but that brings me to number 2: 2) Poorly optimized games. Some games will use more than 6GB of VRAM simply from being poorly optimized. "Those are just poorly optimized examples and not representative" doesn't hold as an argument because, well, if they're games you want to play, then that's what you're stuck with, unless the devs fix the optimization, which I would not count on. 3) Textures are constantly improving. 8K is just starting to gain some traction (nowhere near mainstream, of course). That's not to say these cards are even capable of 8K gaming, but using 8K textures at 4K (effectively texture down-sampling) still yields a nice uplift in quality, and that will eat up VRAM (rough numbers below).
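     As a rough illustration of point 3, here's the back-of-the-envelope texture math (assuming uncompressed RGBA8 at 4 bytes per pixel; real games use block compression and mipmaps, so absolute sizes differ, but the 4x jump from 4K to 8K textures holds either way):

     ```python
     # Back-of-the-envelope texture memory, assuming uncompressed RGBA8 (4 bytes/pixel).
     def texture_mib(width, height, bytes_per_pixel=4):
         return width * height * bytes_per_pixel / (1024 ** 2)

     print(f"4K texture (4096x4096): {texture_mib(4096, 4096):.0f} MiB")  # ~64 MiB
     print(f"8K texture (8192x8192): {texture_mib(8192, 8192):.0f} MiB")  # ~256 MiB
     ```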
  12. Yeah, I've already identified that the two biggest settings are RTX, of course, and extra details. Unfortunately, I see a WORLD of difference in reflections between RTX off and on. The extra detail is noticeable as well: with it at 100% the background objects are super sharp, but at 0% they're blurry as hell, albeit then I get 60 FPS. 50% is a good compromise, but it's still not 60 FPS (only 57), so my mentality is: if it's still not 60, I might as well roll with 51 instead of 57 and get all the goodies.
  13. Oh yeah, other games are fine, but this worries me about upcoming releases. I play a lot of Ubisoft games. Valhalla is next for me after this lol. It will probably be the same. I really hoped the 3090 would power through dev issues and bull crap.
  14. See my previous response: Yes, at 4k DLSS brings me back up to 51 FPS. But that's it. On that same scene of course.
  15. No, it gets worse at both 1440p and 4k. At 1440p it goes down to 47 FPS at the same scene (with 99% usage). And at 4k it goes down to 45 FPS (still with 99% usage). EDIT: 35 FPS at 4K, not 45.
  16. I mean, sure, but I really wasn't looking to turn down any settings on a $1,500 card at 1080p. I could understand it at 1440p or 4K, but not 1080p. It's not even a CPU issue: with 96% usage on the GPU, it's simply that the GPU can't hold 60. CPU usage is around 50-60%, and no thread is capped either; the highest thread usage is 80%.
  17. Yes, RTX on ultra, FOV 90, 4x Headlights, 100% on the slider for extra details, no DLSS (but even with it on the results are the same). Everything is max (beyond ultra). Here are the benchmark results, which are really not indicative of actual gameplay:
  18. Hi everyone! So today the patch was released that improves PC performance in Watch Dogs Legion. I'm playing at maximum settings. It definitely did improve performance: prior to today, GPU usage on the 3090 was about 80% at best. Now it's 96%, which is a great improvement, albeit still not perfect (99% would be perfect). In the screenshot attached below, I'm getting 51 FPS (yesterday it was in the low 40s). So bravo on the performance uplift! However, now that the GPU is being almost fully utilized, 51 FPS at 1080p with 96% usage??? 99% usage would only yield 1 or 2 more FPS, if that, so we'd be looking at a theoretical max of around 53 FPS at 1080p (quick math below). I know, I know, it's an Ubisoft game and they're never optimized and performance is crap, but I fully expected to power through all that with a 3090 (with a +80MHz OC on the core and +400MHz on the VRAM). So, questions: 1) Anyone else seeing the same or similar performance on a 3090 at max settings? 2) Was I expecting too much in thinking a 3090 would power through any optimization issues at 1080p?
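     For the curious, the ~53 FPS ceiling is just the observed frame rate scaled linearly with GPU utilization, which is only a rough approximation:

     ```python
     # Rough ceiling estimate: scale observed FPS linearly with GPU utilization.
     # Real scaling is rarely perfectly linear, so treat this as a ballpark figure.
     observed_fps = 51
     observed_util = 0.96
     ideal_util = 0.99

     ceiling_fps = observed_fps * (ideal_util / observed_util)
     print(f"Theoretical max at 99% usage: ~{ceiling_fps:.1f} FPS")  # ~52.6 FPS
     ```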
  19. Are you sure the residuals must be the same? Running 1.1.1, the English UI says each line item is a pass despite different residuals, and that the test as a whole is error-free. See here: Additionally, I tested my old 4790K and it doesn't have matching residuals either, at stock.
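     For anyone comparing their own runs, here's a tiny sketch of the residual-consistency check being debated; the values below are placeholders, paste in the residual column from your own LinX/Linpack log:

     ```python
     # Minimal residual-consistency check for Linpack-style runs (LinX / OCCT Linpack).
     # Whether residuals must match run-to-run is exactly what's being debated here;
     # this just tells you whether yours do. The values below are placeholders.
     residuals = [
         0.0123456,  # run 1 (placeholder, use your own log)
         0.0123456,  # run 2 (placeholder)
         0.0123456,  # run 3 (placeholder)
     ]

     if len(set(residuals)) == 1:
         print("All residuals identical across runs.")
     else:
         print("Residuals differ between runs.")
     ```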
  20. Oh, interesting! I took a look at the changelog for OCCT, and look at this, under version 5.3.5: "CPU:LINPACK: Fixed a bug where false positive could be reported in some (thankfully rare) cases", along with "Translations: Updated Korean language (Thanks again JaeHyung)". Now, at the time of writing my original post, the latest version was 5.1.0, with 5.2.0 as the beta, both before this fix. This SOUNDS promising and would also explain why a few people reported the same issue at the time. @Falkentyne @SteveOrlando
  21. I use 5.1.0, which it looks like isn't the newest one anymore, but it's only a few months old. I can try the new 5.4.0 version, too. I use the default settings myself, under the linpack tab, which is 90% RAM usage and the default size. Also, 20 runs of 0.9.6 passed just fine, as seen here. I think I'll try 1.1.1 next and I'll keep you posted:
  22. What's odd in my case, though, is that 40 or so loops produced no wrong residuals for me at stock (and yes, MCE is disabled) in LinX 0.9.5. I'm trying 0.9.6 now from the link you provided (although I can't read anything in it lol). I'll see if anything is different with this version. For now it remains that the only place I've seen an error is with the LinPack 2019 test. I'll update with my findings after a few hours of running 0.9.6. I might also try 1.1.1 as well just for good measure.