
pyrojoe34

Member
  • Posts

    1,644
  • Joined

  • Last visited

Awards

This user doesn't have any awards

1 Follower

System

  • CPU
    i7-6800K (4.0-4.2GHz)
  • Motherboard
    MSI X99A SLI Plus
  • RAM
    32GB quad-channel DDR4-2800 (Corsair Vengeance LPX)
  • GPU
    EVGA GTX 1080 SC2 iCX
  • Case
    Corsair 750D Obsidian
  • Storage
    500GB Samsung 960 Evo NVMe, 256GB Samsung 850 Pro, 3TB Toshiba HDD, 1TB Seagate HDD
  • PSU
    Corsair RM1000i
  • Display(s)
    Acer Predator XB271HUC (1440p, 144Hz), LG 29UM55 Ultrawide (2560x1080)
  • Cooling
    CPU: be quiet! Dark Rock Pro 4
  • Keyboard
    Logitech G810 Orion Spectrum
  • Mouse
    Logitech G502 Proteus Core
  • Sound
    Logitech G933 Artemis Spectrum
  • Operating System
    Windows 10 Pro 64-bit

Recent Profile Visitors

3,722 profile views
  1. Haha, whoops. Started writing it like an hour ago then had to step away from my desk and just got around to posting it... Oh well....
  2. Summary
     So AMD has given the details on their new cards: the 7900 XTX ($999) and the 7900 XT ($899). The XTX has 96 compute units and ray accelerators, 24GB VRAM, 96MB Infinity Cache, and clocks at 2300MHz, compared to the XT's 84 compute units and ray accelerators, 20GB VRAM, 80MB Infinity Cache, and 2000MHz clock speed. AMD is claiming a 1.5x-1.7x performance uplift in games for the XTX compared to the 6950 XT. Other notable details include support for DP 2.1, much lower power consumption than Nvidia, and a new version of their AI upscaler.
     Quotes
     My thoughts
     As expected, we will have to see third-party benchmarks to really know how well these perform compared to the 40-series. I doubt they will be able to compete with the 4090, but if the 7900 XTX can come close to 4080 performance at $200 less, then Nvidia may have to rethink its prices. I sure hope this helps spark some actual price competition. Unfortunately, now I have to wait for those benchmarks before I can choose between the 4080 and the 7900 XTX...
     Sources
     https://www.amd.com/en/graphics/radeon-rx-graphics
  3. From the very limited coverage I saw, it seems like you're about right on gaming, but it's honestly very close. The one big thing leaning me towards the 7700X is that when the X3D versions of Zen 4 release, I can easily do a drop-in upgrade. The lower power consumption and easier heat management are also a plus for Zen 4 for me. I want to reuse my old cooler (be quiet! Dark Rock Pro 4, 250W TDP), which is plenty for Zen 4, rather than also have to buy a 360mm AIO, which seems almost obligatory for Intel 13th gen. I was hoping for more detailed coverage to help my decision, but so far there are very few reliable or detailed reviews.
  4. So many people are only covering the 13900K and 13600K in detail. Why is that? I'm currently having a hard time deciding between the 13700K, the 7700X, and the 7900X for my new build (which will be paired with either a 4080 or a 4090). It'll be about 75% 1440p gaming and 25% 4K video production. I'm in a bit of choice paralysis, and for some reason most reviewers are kinda ignoring the 13700K.
  5. Don't bother upgrading AM3+. My overclocked FX-8350 severely bottlenecks even the GTX 760 I have in that system. It's not worth putting money into an AM3+ system at this point. Just save up and make a modern upgrade when you can.
  6. Can Intel stop artificially weakening their chips by disabling hyperthreading on everything but the flagship chips? Why is there no 6c/12t or 4c/8t option? Maybe continued pressure from AMD will finally force them to include unlocked multipliers and HT on all their chips. Arbitrarily disabling features that are already built into the silicon is just devious. It's like buying a car that has a radio, but it's disabled in the base model and they only enable it if you pay extra.
  7. Firefox is the only browser that has shown me they care about privacy and are not inherently incentivized to monetize my data. I've been using it for over a decade, and the only way I'd switch is if they violated that trust. I'll even take a small performance hit for that tradeoff; I've never actually used a browser and thought it was too slow for me. Any significant bottleneck is always the internet speed or the host server, not the browser.
  8. Try using Diskpart:
     - Run cmd as admin
     - Type "diskpart"
     - Type "list disk"
     - Find the disk number for the drive in question
     - Type "select disk {insert disk number here}" (example: "select disk 3")
     - Type "clean" (note: this erases all partitions and data on that drive)
     Now see if you can interact with it in Disk Management.
     Edit: Here's a visual guide to help: https://www.tenforums.com/tutorials/85819-erase-disk-using-diskpart-clean-command-windows-10-a.html
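     If you need to repeat this on several drives or machines, diskpart can also run commands from a script file via its /s switch. Here's a minimal Python sketch of that approach; the disk number 3 is a placeholder, it must run from an elevated prompt, and "clean" still erases the whole disk:

```python
import subprocess
import tempfile

DISK_NUMBER = 3  # placeholder: use the number shown by "list disk"

# diskpart executes commands from a script file when invoked with /s
script = f"select disk {DISK_NUMBER}\nclean\n"

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

# Requires an elevated (admin) prompt; "clean" erases the selected disk.
subprocess.run(["diskpart", "/s", script_path], check=True)
```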
  9. The Samsung CRG9 is an option: 49", 32:9, 5120x1440, 120Hz, FreeSync. Or go for a more traditional ultrawide; there are some at 3840x1600. I think those are only 75Hz right now, but 144Hz models are coming.
  10. I think the most likely and obvious reason is convention... that's just how everyone has been doing it for so long. It could also be that 3200MHz sounds bigger than 3.2GHz to tech-illiterate people, and that using GHz for both CPU and RAM might confuse those same people; just think of how often people already confuse RAM and drive space. But to be honest, I think the biggest reason is probably just convention. It's the same thing with GPU clocks: they don't use GHz even though they could at this point.
  11. I tend to agree that it is likely the CPU. Total CPU usage is not a good metric to go by. The real question is: do you have any single thread running at >95%? If so, you have a CPU bottleneck. For example, many games for me will only use 20-30% overall CPU (6c/12t) but will have one or two threads almost constantly at 100%; that is a CPU bottleneck. To check individual thread usage, either use a monitoring program (like AIDA64 or whatever you already use) or open Task Manager, right-click on the CPU graph, and click "Change graph to" -> "Logical processors", which will show you a separate graph for each thread.
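     If you'd rather log this than watch Task Manager, here's a minimal Python sketch using the third-party psutil library (an assumption: you'd need to pip install psutil first) that flags any logical core pegged above 95%. One caveat: a game thread can migrate between cores, so treat a consistently pegged core as the signal, not an exact thread mapping.

```python
import psutil  # third-party: pip install psutil

THRESHOLD = 95.0  # percent; a logical core pinned above this suggests a bottleneck

# Sample per-logical-core utilization once per second for 30 seconds
for _ in range(30):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    pegged = [i for i, load in enumerate(per_core) if load > THRESHOLD]
    overall = sum(per_core) / len(per_core)
    print(f"overall: {overall:5.1f}% | pegged cores: {pegged if pegged else 'none'}")
```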
  12. If you want to compare latency, the math is easy to do: CL / (Frequency / 2) = Latency. CL is the latency in # of cycles; Frequency is in MHz (divided by two since DDR transfers twice per clock cycle), giving million cycles per second. The equation gives latency in microseconds; multiply by 1000 to get nanoseconds. So... it's entirely possible that a RAM kit with a higher CL value still has lower absolute latency than a kit with a lower CL value. For example, a 4000MHz kit at CL16 has lower latency (16 / 2000MHz = 8ns) than a 3000MHz kit at CL14 (14 / 1500MHz = 9.3ns). Don't get too sucked into the CL values; they are relative, not absolute, metrics and need to be converted if you are comparing kits of different frequencies.
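     As a quick sanity check, here is the same conversion in Python (the function name is just for illustration):

```python
def true_latency_ns(cl: int, transfer_rate: int) -> float:
    # cl: CAS latency in clock cycles
    # transfer_rate: the advertised "MHz" (really MT/s), e.g. 4000 for DDR4-4000;
    # the actual clock is half that because DDR transfers twice per cycle.
    clock_mhz = transfer_rate / 2   # cycles per microsecond
    return cl / clock_mhz * 1000    # microseconds -> nanoseconds

print(true_latency_ns(16, 4000))  # 8.0 ns
print(true_latency_ns(14, 3000))  # ~9.33 ns
```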
  13. Give it a try; I suspect that will do it (or 60Hz, since it won't run at 120 if it's locked), or turn off any syncing (but then you have to deal with tearing).
  14. That's probably your issue then. If it's like FO4, it has a framerate lock at 60fps. 48 divides evenly into 144Hz (144/3), so if you are using vsync you will probably lock to 48fps (144 is not evenly divisible by 60). You might be able to fix this by switching your monitor to 120Hz or 60Hz when playing this game, or by turning off any syncing.
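     To make the divisor logic concrete, here's a throwaway Python sketch listing the framerates vsync can actually lock to at a given refresh rate:

```python
def vsync_rates(refresh_hz: int, min_fps: float = 24) -> list[float]:
    # With vsync, each frame is shown for a whole number of refresh cycles,
    # so the achievable framerates are refresh_hz / n for n = 1, 2, 3, ...
    rates = []
    n = 1
    while refresh_hz / n >= min_fps:
        rates.append(round(refresh_hz / n, 1))
        n += 1
    return rates

print(vsync_rates(144))  # [144.0, 72.0, 48.0, 36.0, 28.8, 24.0] -> no 60, so a 60fps-locked game drops to 48
print(vsync_rates(120))  # [120.0, 60.0, 40.0, 30.0, 24.0]       -> 60 is available
```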
  15. Try reinstalling the drivers from scratch? Also, what are your game settings? I remember in FO4 I would get 60fps (the game is locked at 60, so you can't get more than that) on most of the map but only around 35fps in the city. The issue was terrible optimization (and a really outdated engine). The fix was to lower the view distances in the ini file and tweak the godray settings.