Everything posted by pyrojoe34

  1. You might have crypto-mining malware. Try running Malwarebytes and see if it finds anything. That would also explain why it doesn't show up in Task Manager, since miners try to hide themselves. Be more careful about downloading sketchy things and don't install anything you can't trust.
  2. Don't bother upgrading AM3+. My overclocked FX-8350 severely bottlenecks even the GTX 760 I have in that system. It's not worth the money to even consider an AM3+ system at this point. Just save up and make a modern upgrade when you can.
  3. Can Intel stop artificially weakening their chips by disabling Hyper-Threading on anything but the flagship chips? Why is there no 6C/12T or 4C/8T option? Maybe continued pressure from AMD will finally force them to include unlocked multipliers and HT on all their chips. Arbitrarily disabling features that are already built into the chips is just devious. It’s like buying a car that has a radio, but it’s disabled in the base model and they only plug it in if you pay extra.
  4. Firefox is the only browser that has shown me they care about privacy and aren’t inherently incentivized to monetize your data. I’ve been using it for over a decade and the only way I’d switch is if they violated that trust. I’ll even take a small performance hit for the tradeoff; I’ve never actually used a browser and thought it was too slow for me. Any significant bottleneck is always the internet connection or the host server, not the browser.
  5. Try using Diskpart:
     - Run cmd as admin
     - Type "diskpart"
     - Type "list disk"
     - Find the disk number for the drive in question
     - Type "select disk {insert disk number here}" (example: "select disk 3")
     - Type "clean"
     Now see if you can interact with it in Disk Management. Edit: Here's a visual guide to help: https://www.tenforums.com/tutorials/85819-erase-disk-using-diskpart-clean-command-windows-10-a.html
  6. The Samsung CRG9 is an option: 49", 32:9, 5120x1440, 120 Hz, FreeSync. Or a more traditional ultrawide; there are some that are 3840x1600. I think they're only 75 Hz right now, but 144 Hz models are coming.
  7. I think the most likely and obvious reason is convention... that's how everyone has been doing it for so long. It could also be that 3200 MHz sounds bigger than 3.2 GHz to tech-illiterate people, and using GHz for both CPU and RAM might confuse those same people. Just think of how often people already confuse RAM and drive space. But honestly, the biggest reason is probably just convention. It's the same with GPU clocks: they don't use GHz even though they could at this point.
  8. I tend to agree that it is likely the CPU. Total CPU usage is not a good metric to go by. The real question is: do you have any single thread running at >95%? If so, you have a CPU bottleneck. For example, many games for me only use 20-30% of the CPU overall (6c/12t) but will have one or two threads almost constantly at 100%; that is a CPU bottleneck. To check individual thread usage, either use a monitoring program (like AIDA64 or whatever you already use) or open Task Manager, right-click on the CPU graph, and click "Change graph to" -> "Logical processors", which shows a separate graph for each thread.
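To make the rule above concrete, here's a minimal sketch (the function name and sample numbers are illustrative, not from any monitoring tool's API) of the same check applied to a list of per-logical-processor usage percentages:

```python
def cpu_bottlenecked(per_thread_usage, threshold=95.0):
    """Return True if any single logical processor is pegged.

    per_thread_usage: usage percentages sampled per logical processor
    (e.g. read off Task Manager's "Logical processors" graphs).
    The overall average can be low while one thread is maxed out.
    """
    return any(usage >= threshold for usage in per_thread_usage)

# 6c/12t example like the one in the post: overall use is only ~30%,
# but two threads are pegged, so the CPU is the bottleneck.
print(cpu_bottlenecked([100, 98, 35, 22, 18, 15, 12, 10, 8, 7, 5, 4]))   # True
# Evenly spread load with no pegged thread: not a CPU bottleneck.
print(cpu_bottlenecked([40, 38, 35, 33, 30, 28, 25, 22, 20, 18, 15, 12]))  # False
```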
  9. If you want to compare latency, the math is easy to do: Latency = CL / (Frequency / 2), where CL is the CAS latency in cycles and Frequency is the transfer rate in MHz (divided by two since DDR performs two transfers per clock). This gives latency in microseconds; multiply by 1000 to get nanoseconds. So it's entirely possible that a RAM kit with a higher CL value still has lower absolute latency than a kit with a lower CL value. For example: a 4000 MHz kit with a CL of 16 has lower latency (8 ns) than a 3000 MHz kit with a CL of 14 (9.3 ns). Don't get too sucked into CL values; they are relative, not absolute, metrics and need to be converted if you are comparing kits of different frequencies.
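The formula above is easy to check in a couple of lines (a quick sketch; `ram_latency_ns` is just an illustrative name):

```python
def ram_latency_ns(cl, freq_mhz):
    """First-word latency in nanoseconds.

    cl: CAS latency in clock cycles
    freq_mhz: advertised transfer rate (MT/s); the real clock is
    half of this, because DDR moves two transfers per clock.
    """
    clock_mhz = freq_mhz / 2
    return cl / clock_mhz * 1000  # cycles / (Mcycles/s) = us -> ns

print(ram_latency_ns(16, 4000))           # 8.0 -- the higher-CL kit...
print(round(ram_latency_ns(14, 3000), 1)) # 9.3 -- ...still wins on latency
```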
  10. Give it a try, I suspect that will do it (or 60 Hz, since it won't run at 120 if it's locked), or turn off any syncing (but then you have to deal with tearing).
  11. That's probably your issue then. If it's like FO4, the game has a framerate lock at 60 fps. 144 Hz does not divide evenly by 60, but it does divide evenly into 48 (144/3), so if you are using vsync you will probably lock to 48 fps. You might be able to fix this by switching your monitor to 120 or 60 Hz when playing this game, or by turning off any syncing.
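The 48 fps figure falls out of the arithmetic: with vsync, each frame is displayed for a whole number of refresh cycles, so a 60 fps-capped game on a 144 Hz panel rounds up to 3 cycles per frame. A simplified sketch of that model (hypothetical function name):

```python
import math

def vsync_fps(refresh_hz, fps_cap):
    """Effective framerate when vsync forces each frame to occupy a
    whole number of refresh cycles (simplified double-buffer model)."""
    cycles_per_frame = math.ceil(refresh_hz / fps_cap)
    return refresh_hz / cycles_per_frame

print(vsync_fps(144, 60))  # 48.0 -- 144 doesn't divide evenly by 60
print(vsync_fps(120, 60))  # 60.0 -- switching the monitor to 120 Hz fixes it
print(vsync_fps(60, 60))   # 60.0 -- so does 60 Hz
```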
  12. Try reinstalling the drivers from scratch? Also, what are your game settings? I remember in FO4 I would get 60 fps (the game is locked at 60, so you can't get more than that) on most of the map but only around 35 fps in the city. The issue was terrible optimization (and a really outdated engine). The fix was to lower the view distances in the ini file and tweak the godrays settings.
  13. What is the per-thread usage? Do you have one thread at over 90%? Overall CPU usage does not indicate much.
  14. There's a good chance BF1 is bottlenecked by your CPU (it already uses 60-70% of my 6800K [4.2 GHz, 6-core with Hyper-Threading]). PUBG is probably also CPU-bottlenecked at those framerates. Modern games seem to rely more and more on CPU performance (I feel like 10 years ago this was not nearly as true as it is now). If you're going above 144 fps (btw, just lock the framerate in the settings; there's no reason to go above 144), you will see tearing/sync issues which you might interpret as stutter. In BF4 I see a significant CPU bottleneck (one thread always at 100%) and huge stutters/frame drops when I get above ~120 fps. I have to turn my settings to max and turn resolution scaling up (I use 125%) to keep the framerates low enough (100-120 fps) that the CPU is not bottlenecked. You might have to do the same or limit your framerate.
  15. Why is this a problem? More free features are not a bad thing for consumers. If Premium is no longer worth it for you, then move to the free version; it’s a win for you.
  16. BF1 is very CPU intensive and likes many cores. Once you get above 100 fps you will see a CPU bottleneck. My 6800K (6-core/12-thread, OCed to ~4.2 GHz) gets me about 100 fps at 1440p ultra, but I can tell it's (almost) hitting a CPU bottleneck (the GPU still usually gets to 100%, but I wouldn't expect improvements from a faster GPU without a faster CPU). I get 6+ threads over 70% usage while playing, with at least one or two constantly above 90%. You are seeing a CPU bottleneck; I'm willing to bet you have at least one thread sitting at 100%. Remember that overall CPU usage is not usually the bottleneck; it's individual threads. At 120+ fps in BF4 my total CPU use is only about 20%, but one thread is always at 100%, which means I have a CPU bottleneck.
  17. This is confirmed for many people with Broadwell-E (it likely also affects other generations, but I can't confirm; comment if you can). The OC shows up in the BIOS but not in Windows (everything stays stock). It turns out there was a recent BIOS update from MSI (my board is the X99A SLI Plus, CPU is a 6800K) which fixed the issue. The problem was the result of update KB4100347, an Intel microcode update. The BIOS update did the trick for me and now I have my OC back. Having said that, I am seeing a ~4% decrease in Cinebench score after this update using the same OC, so it seems to come with a performance hit. Can anyone else confirm this?
  18. Not a chance in games. The 2080 Ti only does a little over 60 fps at 4K.
  19. They won't do this, since it's not worth creating a new production line and fragmenting their consumer base, especially if they want ray tracing to become widely adopted in games. The only reason they would ever do this is if they have a ton of chips with intact traditional cores but faulty ray-tracing cores. I'm not an expert in chip manufacturing, but I doubt the defect rate in those is any higher than in any other part of the chip, since it's all the same basic process. They will likely make their mid/low-tier cards (in this generation) without RTX, but those cards will also be less powerful anyway and not aimed at the same consumer market. I imagine RTX cores will become like CUDA cores within a couple of generations: every single card will have them, since there isn't a reason to segment the production lines and make separate designs.
  20. We have a very long way to go before this is even a thing. The next step would be applying quantum computation to general computing, something we have no idea how to do yet (if it's possible at all), and it would require a complete rebuild of every piece of software and of the traditional computation knowledge we currently have. More and more, you'll see (1) software and (2) circuit design/specificity being much more important to massive performance increases than raw transistor power (just look at things like tensor and ray-tracing cores, where application-specific computational power is more than the sum of its parts). Neuron-inspired hardware design is something I think will make a big difference at some point. Being limited to a bit/transistor that only holds two states, with a single input and output, is a huge constraint on how we currently compute problems.
  21. Can't wait until we start referring to transistor sizes in angstroms (or picometers)... but to be realistic, this tech is still very far away, and making something in a lab is very different from scaling it to commercial levels. The scaling issue is probably the biggest hurdle by far, and I wouldn't hold your breath for this to appear anytime soon.
  22. You can't SLI two different cards together.
  23. This would be strange... unless they plan on selling a blower version as well, but I'm not sure why they would do that to their AIB partners. Anyone who wants an open-air design would just buy an AIB card, so who would this be for? For people who put GPUs in server racks and/or run 4+ GPU systems (not gamers, but people like our computational biology lab), blower coolers are really the best option.
  24. The most common reasons for this are:
     - A framerate limit or vsync is turned on (check the in-game settings, the control panel settings, and GPU tools like PrecisionX)
     - Your CPU is running at 100% (either overall OR on a single thread)
  25. I had an issue like this with my laptop, where the trouble was a power limitation because I had used a 60W brick when it wanted a 90W brick. The problem was that even after switching to a 90W brick, clearing the CMOS, updating/downgrading/reinstalling the BIOS, reinstalling Windows, changing BIOS/OS power settings, reapplying thermal paste (even though there were no thermal issues), manually OCing, etc., it still refused to ever go back to the proper clock speed. I ended up fixing it with ThrottleStop, with the service set to autostart on boot, and I now get the expected speeds. Make sure there isn't another obvious issue first (go through all the possibilities); this is a last-ditch option if you need it. Be careful with it, though, and constantly monitor temps, since the CPU will no longer automatically throttle its clocks, and if it gets too hot you can destroy the PC.