
Emberstone


  1. Not all are made equal, even from an input processing point of view. Consistent performance is key when a single frame can decide a game. There's a reason Brook boards/etc. have been around so long. But even if some fightsticks are available, that alienates people who use other controllers which aren't sticks, like those who prefer Hitboxes or Mixboxes, which require something like a Brook board to connect to an Xbox.
  2. A bit late, but fighting game players use all kinds of different controllers (arcade sticks, Hitboxes, Mixboxes, keyboards, etc.) and this will effectively destroy those communities on Xbox unless you're one of the few competitive people who use a stock Xbox controller for games like Guilty Gear Strive or SF6. For comfort reasons, it will make fighting games unplayable on Xbox for many people. When you're used to an input method, and you've been playing fighting games for years using that input method, it's INCREDIBLY difficult to change. I play on PC thankfully, but I cannot play fighting games without my arcade stick. Makes me feel bad for the FGC on Xbox.
  3. I've noticed an issue with my Samsung 6 Series 50" TV (model number UN50NU6900FXZA) now that I've fixed up my PS3 and have been playing on it quite a bit. When I don't use Game Mode, static objects on-screen such as a game's UI elements will vibrate or get very fuzzy-looking whenever I turn the camera in-game or there's a lot of motion occurring on-screen. If there isn't a ton of fast-moving motion, then static objects won't vibrate, but it is easy enough to trigger just by turning the camera in an FPS or displaying other types of motion akin to that. There is no ramp-up to the effect; it's either occurring or it isn't, and the effect is full-screen and applies evenly to everything regardless of screen position or what is occurring behind/around it. I prefer not using Game Mode in this scenario as it creates a lot of upscaling artifacts when not using a 4K signal. There will be a lot of haloing around edges and such, which do not disappear with Sharpening set to 0, and it looks pretty bad, so Game Mode off is my preferred setting for non-4K content. However, this creates the vibrating issue I mentioned above. It occurs regardless of resolution, and as far as I can tell enabling Game Mode is the only way to stop the vibrating. The other picture modes (Dynamic, Standard, Natural, and Movie) all show the issue from what I can tell. I've also tried two other Samsung TVs in the house, one of them bought around Christmas, and this same issue occurs on all of them. I've linked a video at the bottom of this post showing the issue on Minecraft's UI since it's pretty easy to see what's going on there. While turning the camera, the words displayed on the UI get very fuzzy-looking even if there's no motion occurring behind them. It's most obvious if you look at the letters' shadows. Picture Mode: Movie (the only mode with no upscaling artifacts). Digital Clean View is off. Auto Motion Plus is off. Contrast Enhancer is off. Is there something I'm missing, or a setting in the service menu I can disable that could remedy this? I'm able to access the service menu fairly easily on this TV. All I have to go off of is that something Game Mode does prevents it from happening. The picture otherwise looks identical between Movie and Game Mode. Thanks.
  4. Using an NZXT Kraken X53 240mm, I sit around 80 C with ambient floating around 21-23 C in Cinebench R23 on my 5800X3D. Considering the Arctic II 240 has extremely similar performance to the X53 in the videos I've seen comparing them, something does seem to be a little bit off unless you have high ambient temperatures. If your fans and pump are both working, high ambient is the only other thing I can think of that would cause temperatures higher than others see on their X3Ds. For what it's worth, unless I undervolt, my clock speeds in Cinebench float around 4.15 to 4.25 GHz as well and I usually score 14300-14500 depending on ambient temperature. So if you're hitting 90 C but it's still boosting over 4.0 GHz, this is still normal behavior and not technically a thermal throttle, as the boost clock is still kicking in. As the others have pointed out, open HWInfo and look at your L3 cache temp sensor and see if it is getting as hot as the cores are (there's a small sketch of checking this from an HWiNFO CSV log at the bottom of this list). I ran a quick test right now and peaked at 85 Celsius on the cores according to Ryzen Master (my furnace is currently on so it's going to be a bit warmer than usual), and the L3 cache peaked at 50 Celsius according to HWInfo. This tells me that the heat is being dissipated and things are functioning normally. Peak core boost was 4.19 GHz. If I undervolt using PBO Tuner I can get 4.4 GHz on all cores.
  5. I never made this blanket statement. Gonna stop here because I would be derailing going forward.
  6. Ah I see. I did misread it. Thanks!
  7. I ain't lying and already included a known caveat (it's spicy). I didn't say all games. Right now every high end CPU wins in many games. Up to you to figure out which wins for yours. I was making the argument from a value perspective because it does trade blows with the most expensive CPUs on the market at only $330, and not exclusively in GPU-bound scenarios. I ain't gonna clarify which games. That's Steve's job.
  8. sfc /scannow probably failed because Microsoft's support for Windows 7 ended two years ago. It can't even access Windows Update anymore, so the servers needed for sfc /scannow to pull new system files are also probably gone. You should update to Windows 10 or 11 just as a general guideline. Keeping your data shouldn't be an issue because you can save all your files to a USB drive or external hard drive pretty easily these days. Use DDU to cleanly remove any video drivers you have and try using this: https://www.nvidia.com/Download/driverResults.aspx/195294/en-us/
  9. Personally, I say just get the 5800X3D and be done with it if your mobo can handle it. AntOnline is still selling them on their online store and on eBay for $330. I got mine through them on eBay a little over a week ago, and I hear stock is starting to run low. $330 for a CPU that beats the 7950X or 13900K in many games seems like a no-brainer to me if you have the cash for it. Just be warned that you'll need some decent cooling for this chip to get its absolute max performance. That said, hiccups in mouse movement and stuff don't seem like a CPU issue. We figured out smooth mouse movement with CPUs that have double-digit MHz clock speeds back in the 90s, so it's likely software causing this issue, not your 3600X.
  10. As the others have said, CPUs these days tend to be pretty damn hot and the 5800X3D is not only no exception, it's noticeably worse in this regard than a standard 5800X because the 3D cache sits on top of the cores, between the cores and the IHS, limiting thermal transfer efficiency. I upgraded to a 5800X3D myself last week and using an NZXT Kraken X53, mine boosts to around 4.15 to 4.25 GHz in Cinebench and sits around 78-82 degrees depending on whether or not my furnace is on. If your cooling/ambient is better than mine, it'll likely boost higher than ~4.2 GHz and potentially get even hotter under full load than my temps. This can be verified by undervolting it using PBO Tuner, which makes it run cooler and sort of "tricks" Precision Boost into thinking you have stronger cooling. On my own rig I've seen 4.35 GHz sustained after a minor undervolt (-20 all cores) in CB R23 with the same temperature range noted above. I wouldn't worry too much about temperature; just make sure your pump speed is set to 100% and your radiator fans are on an aggressive curve that maxes around 70 C or so (there's a rough sketch of that kind of curve at the bottom of this list), and the CPU will take care of the rest. As long as you aren't hitting these temps under everyday or most gaming workloads, it's working as intended.
  11. First gen Ryzen chips ran pretty dang cool. Idling around 20 C when the ambient is 18 is very normal. As long as your load temps look okay you've got nothing to worry about. That said, I would push for something a little more aggressive than 3.5 GHz on an R7-1700 if you can.
  12. I do have Ryzen Master installed. I uninstalled it, rebooted, and retried everything and got the same behavior. Checked clocks in Cinebench before sleep and they floated around 4.15 to 4.2 GHz as described. I put the PC to sleep, woke it up, and I got 4.3 GHz all-cores afterward with higher temps.
  13. So I made a discovery today, and I'm wondering whether it's possible to keep this performance without having to put my PC to sleep first. Normally when I run Cinebench R23, I get boost clocks around 4.15 to 4.2 GHz all-core on my 5800X3D and I score ~14300 points. I put my PC to sleep, woke it back up and ran Cinebench again: now it reports 4.3 GHz all-core boost with a score of 14726. It also ran several degrees hotter. The higher temperature coupled with the higher score tells me this isn't a reporting error and that my CPU is actually boosting better after waking from sleep. This was consistent, replicable behavior, as I was able to do it multiple times and achieve scores over 14700 after waking from sleep. Is there any possible explanation for this? Is there any way to keep the better boosting behavior permanently without having to put my PC to sleep, such as with a BIOS setting I could look at on my Asus Tuf X570 Plus Wifi? Or is this simply a case of the Ryzen sleep bug rearing its ugly head again and this isn't actually normal? I also updated my BIOS to the latest version this week, which included the AGESA 1.2.0.7 update as well.
  14. Forgot to mention, I had already done this alongside clearing the CMOS via removing the battery for a few minutes and nothing changed. However, during my time screwing with this I think I narrowed down exactly what the symptom is after the BIOS update. If OpenRGB tries to address the memory first, the RAM's lighting becomes inoperable until the next boot and exhibits the behavior I outlined above. If I let G.Skill's software run first, then the memory works as normal and I can even use OpenRGB afterward; G.Skill's software just has to address the memory first (such as applying a color profile at startup to replace the default rainbow cycle). So I just have to use two programs for my RGB to avoid this completely (I was using just OpenRGB for a few days after discovering it works for both my mobo and RAM). Kind of annoying, but at least I figured it out (there's a small sketch of that launch order at the bottom of this list). OpenRGB hasn't been updated since December 2021, so I wonder if it just isn't playing nice with newer/certain BIOS revisions or something. Extremely strange that G.Skill's software "opens the door" for it to work on both, though. Or I suppose this might be why there are Intel and AMD variants of the Trident Z RGB.
  15. I just rebuilt my rig with an Asus Tuf X570 Plus Wifi and R7-5800X3D, and out of the box my Trident Z RGB RAM was picked up by both the G.Skill software and OpenRGB and I was easily able to change my colors. I saw my motherboard was on an old BIOS and didn't have the latest AGESA microcode, so I figured I'd go ahead and update the BIOS. Seemed innocent enough. However, after updating my BIOS, my RAM is no longer able to be addressed by the G.Skill Trident Z Lighting Control software or OpenRGB. It just has the default rainbow cycling across it. If I try to change colors in the G.Skill software or OpenRGB, nothing happens. However, if I click Save to Device in OpenRGB after selecting a color, the rainbow cycle will reset back to the beginning of the cycle. So in some fashion the RAM is seeing something, but isn't able to set a color after the BIOS update. I tried installing Armoury Crate to see if Aura could address the RAM, but it didn't detect anything but my motherboard and graphics card, so that was a bust. I uninstalled it. I went from BIOS version 4204 to version 4403. If anyone has any ideas of what I can try, please let me know. I'm probably going to try reinstalling Windows again next.
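
For the HWiNFO check in post 4, here is a minimal Python sketch of pulling the peak core and L3 cache temperatures out of an HWiNFO CSV sensor log. The log path, column headers, and file encoding are assumptions (they vary between HWiNFO versions and sensor layouts), so treat it as an illustration and adjust the names to match your own log.

    # Sketch: compare peak core vs. L3 cache temperature from an HWiNFO CSV sensor log.
    # Assumptions: logging was started in HWiNFO and written to sensors.csv, and the
    # two column headers below match the sensor names in that log -- adjust as needed.
    import csv

    LOG_FILE = "sensors.csv"              # assumed log path
    CORE_COL = "CPU (Tctl/Tdie) [°C]"     # assumed header for the core/die temperature
    L3_COL = "L3 Cache [°C]"              # assumed header for the L3/V-Cache temperature

    core_peak = l3_peak = float("-inf")
    with open(LOG_FILE, newline="", encoding="utf-8", errors="replace") as f:
        for row in csv.DictReader(f):
            try:
                core_peak = max(core_peak, float(row[CORE_COL]))
                l3_peak = max(l3_peak, float(row[L3_COL]))
            except (KeyError, ValueError):
                continue  # skip rows where either column is missing or non-numeric

    print(f"Peak core temp: {core_peak:.1f} C, peak L3 cache temp: {l3_peak:.1f} C")

A large gap between the two peaks, like the 85 C / 50 C split described in post 4, matches the "heat is being dissipated, things are functioning normally" case; the L3 sensor running nearly as hot as the cores is the situation the post suggests checking for.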
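
For the fan curve in post 10, here is a rough sketch of what "an aggressive curve that maxes around 70 C" looks like as numbers. The curve points are purely illustrative assumptions, not a real configuration file; the equivalent curve would be set in your AIO or motherboard fan software.

    # Sketch of an aggressive radiator fan curve that reaches 100% duty by ~70 C.
    # The points below are illustrative assumptions, not values from any real config.
    CURVE = [(30, 40), (50, 60), (60, 80), (70, 100)]  # (CPU temp in C, fan duty in %)

    def fan_duty(temp_c: float) -> float:
        """Linearly interpolate fan duty from the curve, clamping at both ends."""
        if temp_c <= CURVE[0][0]:
            return CURVE[0][1]
        for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
            if temp_c <= t1:
                return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
        return CURVE[-1][1]  # at or above 70 C the fans are already at 100%

    for t in (45, 65, 75, 85):
        print(f"{t} C -> {fan_duty(t):.0f}% fan duty")

The idea behind maxing out by 70 C is that the fans are already at full speed well before the CPU approaches the 90 C region mentioned in the posts above, so cooling is never limited by a lazy ramp.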
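
For the workaround in post 14, here is a minimal sketch of forcing the launch order so G.Skill's Trident Z Lighting Control addresses the RAM before OpenRGB starts. Both executable paths and the 30-second delay are assumptions about a typical install; a startup shortcut or a Task Scheduler entry with a delay accomplishes the same thing without a script.

    # Sketch: launch G.Skill's lighting software first, give it time to address the
    # DIMMs, then launch OpenRGB. Both paths below are assumptions -- point them at
    # wherever the two programs are actually installed.
    import subprocess
    import time

    GSKILL_EXE = r"C:\Program Files (x86)\G.SKILL\Trident Z Lighting Control\TridentZLightingControl.exe"  # assumed path
    OPENRGB_EXE = r"C:\Tools\OpenRGB\OpenRGB.exe"  # assumed path

    subprocess.Popen([GSKILL_EXE])   # let G.Skill's tool apply its profile to the RAM first
    time.sleep(30)                   # rough delay so the memory has been addressed before OpenRGB runs
    subprocess.Popen([OPENRGB_EXE])  # OpenRGB can then manage the motherboard and RAM as usual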