Everything posted by pyrojoe34

  1. Haha, whoops. Started writing it like an hour ago then had to step away from my desk and just got around to posting it... Oh well....
  2. Summary
     So AMD has given the details on their new cards: the 7900 XTX ($999) with 24GB vRAM and the 7900 XT ($899) with 20GB vRAM. The XTX has 96 compute units and ray accelerators, 96MB Infinity Cache, and clocks at 2300MHz, compared to the XT's 84 compute units and ray accelerators, 80MB Infinity Cache, and 2000MHz clock speed. They are claiming the XTX has a 1.5x-1.7x performance uplift in games compared to the 6950 XT. Other notable details include support for DP 2.1, much lower power consumption than Nvidia, and a new version of their AI upscaler.
     My thoughts
     As expected, we will have to see 3rd party benchmarks to really know how well these perform compared to the 40-series. I doubt they will be able to compete with the 4090, but if the 7900 XTX can come close to 4080 performance at $200 less, then Nvidia may have to rethink their prices. I sure hope this helps spark some actual price competition. Unfortunately, now I have to wait for those benchmarks before I can choose between the 4080 and the 7900 XTX...
     Sources
     https://www.amd.com/en/graphics/radeon-rx-graphics
  3. From the very limited coverage I saw, it seems like you're about right on gaming, but it's honestly very close. The one big thing leaning me towards the 7700X is that when the X3D versions of Zen 4 release, I can easily do a drop-in upgrade. The power consumption and heat management are also a plus for Zen 4 for me. I want to reuse my old cooler (be quiet! Dark Rock Pro 4, 250W TDP), which is plenty for Zen 4, rather than also have to buy a 360mm AIO, which seems almost obligatory for Intel 13th gen. I was hoping for more detailed coverage that would help my decision, but so far there are very few reliable or detailed reviews.
  4. So many people are only covering the 13900k and 13600k in detail. Why is that? I'm currently having a hard time deciding between the 13700k, the 7700x or the 7900x for my new build (which will be paired with either a 4080 or 4090). It'll be for 75% 1440p gaming, 25% 4k video production. I'm currently in a bit of choice paralysis and for some reason most reviewers are kinda ignoring the 13700k.
  5. Don't bother upgrading AM3+. My FX8350 OCed severely bottlenecks even the GTX760 I have in that system. It's not worth the money to even consider an AM3+ system at this point. Just save up and make a modern upgrade when you can.
  6. Can Intel stop with this artificial weakening of their chips by disabling hyperthreading on anything but the flagship chips? Why is there not a 6c/12t or 4c/8t option?? Maybe continued pressure from AMD will finally force them to include unlocked multipliers and HT on all their chips. Arbitrarily disabling features that are already built into the chips is just devious. It's like buying a car that has a radio, but it's disabled in the base model and they only plug it in if you pay extra.
  7. Firefox is the only browser whose makers have shown me they care about privacy and are not inherently incentivized to monetize your data. I've been using it for over a decade and the only way I'd switch is if they violated that trust. I'll even take a small performance hit for that tradeoff; I've never once been using a browser and thought it was too slow for me. Any significant bottleneck is always due to the internet speed or the host server, not the browser.
  8. Try using Diskpart (a scripted sketch of the same steps follows below this post):
     - Run cmd as admin
     - Type "diskpart"
     - Type "list disk"
     - Find the disk number for the drive in question
     - Type "select disk {insert disk number here}" (example: "select disk 3")
     - Type "clean"
     Now see if you can interact with it in Disk Management. Edit: Here's a visual guide to help: https://www.tenforums.com/tutorials/85819-erase-disk-using-diskpart-clean-command-windows-10-a.html
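     A minimal sketch of the same steps in Python, in case you'd rather script them than type them interactively. It uses diskpart's documented /s (script file) switch; the disk number here is a placeholder you must replace after checking "list disk", and it has to run from an elevated prompt on Windows. Warning: "clean" wipes the partition table of the selected disk.

        # Hedged sketch: wipe a disk via diskpart's script mode (diskpart /s).
        # DISK_NUMBER is a placeholder; confirm it against "list disk" first.
        # Must be run as administrator on Windows.
        import os
        import subprocess
        import tempfile

        DISK_NUMBER = 3  # assumption for illustration only

        script = f"select disk {DISK_NUMBER}\nclean\n"
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            f.write(script)
            script_path = f.name

        try:
            subprocess.run(["diskpart", "/s", script_path], check=True)
        finally:
            os.remove(script_path)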
  9. The Samsung CRG9 is an option: 49", 32:9, 5120x1440, 120Hz, FreeSync. Or a more traditional ultrawide: there are some that are 3840x1600. I think they're only 75Hz right now, but 144Hz models are coming.
  10. I think the most likely and obvious reason is convention... because that's how everyone has been doing it for so long... It could also be that 3200MHz sounds bigger than 3.2GHz to tech-illiterate people, and that using GHz for both CPU and RAM may confuse those same people. Just think of how often people confuse RAM and drive space already. But to be honest, I think the biggest reason is probably just convention. It's the same thing with GPU clocks; they don't use GHz even though they could at this point.
  11. I tend to agree that it is likely the CPU. Total CPU usage is not a good metric to go by. The real question is: do you have any single thread that is running at >95%? If so, then you have a CPU bottleneck. For example, many games for me will only use 20-30% overall CPU (6c/12t) but will have one or two threads almost constantly at 100%; that is a CPU bottleneck. To check individual thread usage, either use a monitoring program (like AIDA64 or whatever you already use) or open Task Manager, right click on the CPU graph, and click "Change graph to" -> "Logical processors", which will show you a separate graph for each thread. (A small logging sketch follows below this post.)
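     A rough way to log the same thing from a script, assuming the third-party psutil package is installed; it samples per-logical-core usage, which is the same view as Task Manager's "Logical processors" graphs:

        # Hedged sketch: sample per-logical-core usage once a second for ~10 seconds
        # and flag the case where one core is pegged even though total usage is low.
        import psutil

        for _ in range(10):
            per_core = psutil.cpu_percent(interval=1, percpu=True)  # one value per logical core
            busiest = max(per_core)
            total = sum(per_core) / len(per_core)
            print(f"total {total:4.0f}% | busiest core {busiest:4.0f}%")
            if busiest > 95:
                print("-> at least one logical core is maxed out: likely CPU bottleneck")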
  12. If you want to compare the latency, the math is easy to do: Latency = CL / (Frequency / 2). CL is the latency in # of cycles; Frequency in MHz (divided by two since it is DDR) gives millions of cycles per second. This equation gives latency in microseconds; multiply by 1000 to get nanoseconds. So... it's entirely possible that a RAM kit with a higher CL value still has lower absolute latency than a kit with a lower CL value. For example: a 4000MHz kit with a CL of 16 has a lower latency (8ns) than a 3000MHz kit with a CL of 14 (~9.3ns). Don't get too sucked into the CL values; they are not absolute metrics but relative metrics and need to be converted if you are comparing kits of different frequencies. (A worked example follows below this post.)
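     A minimal worked version of the formula above, assuming the advertised "frequency" is really the transfer rate (so the true clock is half of it):

        # Hedged sketch of the true-latency calculation described in the post.
        def cas_latency_ns(cl_cycles: int, advertised_mhz: int) -> float:
            real_clock_mhz = advertised_mhz / 2      # DDR transfers twice per clock
            latency_us = cl_cycles / real_clock_mhz  # cycles / (million cycles per second)
            return latency_us * 1000                 # microseconds -> nanoseconds

        print(cas_latency_ns(16, 4000))  # "4000MHz" CL16 -> 8.0 ns
        print(cas_latency_ns(14, 3000))  # "3000MHz" CL14 -> ~9.33 ns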
  13. Give it a try; I suspect that will do it (or 60Hz, since it won't run at 120 if it's locked), or turn off any syncing (but then you have to deal with tearing).
  14. That's probably your issue then. If it's like FO4, then it has a framerate lock at 60fps. 48 divides evenly into 144 (144/3 = 48), so if you are using vsync you will probably lock to 48fps, since 144Hz does not divide evenly by 60. You might be able to fix this by switching your monitor to 120 or 60Hz when playing this game, or by turning off any syncing.
  15. Try reinstalling drivers from scratch? Also what are your game settings? I remember in FO4 I would get 60fps (game is locked at 60 so you can't get more than that) on most of the map but only like 35fps in the city. The issue was terrible optimization (and a really outdated engine). The fix was to lower the view distances in the ini file and tweak the godrays settings.
  16. What is the per-thread usage? Do you have one thread at over 90%? Overall CPU usage does not indicate much.
  17. There's a good chance BF1 might be bottlenecked by your CPU (it already uses like 60-70% of my 6800K [4.2GHz, 6-core with hyperthreading]). PUBG is probably also bottlenecked by your CPU at those framerates. Modern games seem to rely more and more on CPU performance (I feel like 10 years ago this was not nearly as true as it is now). If you're going above 144fps (btw, just lock the framerate in the settings; there's no reason to go above 144) you will see tearing/sync issues, which you might interpret as stutter. In BF4 I see a significant CPU bottleneck (one thread always at 100%) and see huge stutters/frame drops when I get above 120ish fps. I have to turn my settings to max and turn resolution scaling up (I use 125%) to keep the framerates low enough (100-120fps) that the CPU is not bottlenecked. You might have to do the same or limit your framerates.
  18. Why is this a problem? More free features are not a bad thing for consumers. If premium is no longer worth it for you, then move to the free version; it's a win for you.
  19. BF1 is very CPU intensive and likes many cores. Once you get above 100fps you will see a CPU bottleneck. My 6800K (6-core/12-thread, OCed to ~4.2GHz) gets me about 100fps at 1440p ultra, but I can tell it's (almost) hitting a CPU bottleneck (the GPU still usually gets to 100%, but I wouldn't expect improvements if I bought a faster GPU without a faster CPU). I get 6+ threads over 70% usage while playing, with at least one or two constantly above 90%. You are seeing a CPU bottleneck; I'm willing to bet you have at least one thread sitting at 100%. Remember that overall CPU usage is not usually the bottleneck; it's individual threads. At 120+ fps in BF4 my total CPU use is only like 20%, but one thread is always at 100%, which means I have a CPU bottleneck.
  20. This is confirmed for many people with Broadwell-E (likely also affects other generations but I can't confirm, comment if you can). The OC shows up in the BIOS but not in windows (everything stays stock). Turns out there was a recent bios update from MSI (my board is the x99a sli plus, CPU is 6800k) which fixed the issue. The problem was the result of update KB4100347 which was an Intel microcode update. The BIOS update did the trick for me and now I have my OC back. Having said that, I am seeing a ~4% decrease in Cinebench score after this update using the same OC so it seems to result in a performance hit. Can anyone else confirm this?
  21. Not a chance in games. The 2080 Ti does just a little over 60fps at 4K.
  22. They won't do this, since it's not worth making a new production line and fragmenting their consumer base, especially if they want ray tracing to become widely adopted in games. The only reason they would ever do this is if they have a ton of chips with intact traditional cores but faulty ray tracing cores. I'm not an expert in chip manufacturing, but I doubt the defect rate in those is any higher than in any other part of the chip, since it's all the same basic process. They will likely make their mid/low tier cards (in this generation) without RTX, but those cards will also be less powerful anyway and not aimed at the same consumer market. I imagine RTX cores will become like CUDA cores within a couple of generations: every single card will have them, since there isn't a reason to segment the production lines and make separate designs.
  23. We have a very long way to go before this is even a thing... The next step would be applying quantum computations to general computing, something we have no idea how to do yet (if it's possible at all) and which would require a complete rebuild of every piece of software and all the traditional computation knowledge we currently have. More and more you'll see 1. software and 2. circuit design/specificity being much more important to massive performance increases than just raw transistor power (just look at things like tensor and ray tracing cores, where application-specific computational power is more than the sum of its parts). Using neuron-based hardware design is something I think will make a big difference at some point. Being limited to a bit/transistor only containing two states, with only a single input and output, is a huge limit on how we currently compute problems.
  24. Can't wait until we start referring to transistor sizes in angstroms (or picometers)... but to be realistic, this tech is still very far away, and making something in a lab is very different from scaling it to commercial levels. The scaling issue is probably the biggest hurdle by far, and I wouldn't hold my breath for this to appear anytime soon.
  25. You can't SLI two different cards together.