
skaughtz

Member
  • Content Count: 24

Awards


This user doesn't have any awards

About skaughtz

  • Title: Newbie
  1. Can someone explain the difference between these data from the Shadow of the Tomb Raider benchmark? The game provides a min/max/average/95% for each. I have run the benchmark while comparing the performance of an i5-3570K (with an 8GB RX 580) and i7-3770K (with a GTX 1070). The former seems to produce better CPU Game results (76/151/112/86 vs 63/127/94/70) while the latter produces slightly better CPU Render results (48/238/79/50 vs 55/172/84/57). What exactly is the difference between those two data sets and why would a 4c/4t i5 outperform a 4c/8t i7 clocked the same in a newer game? Thanks.
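     For reference, benchmarks like this typically derive the min/max/average/95% figures from the raw per-frame times. Below is a minimal Python sketch of that reading, under the assumption that the "95%" figure is the frame rate sustained for 95% of frames (i.e. the 95th-percentile frame time); the frame-time trace is made up for illustration:

        import statistics as st

        # Minimal sketch: derive min/max/average/95% from per-frame times (ms).
        # Treating "95%" as the FPS exceeded by 95% of frames is an assumption.
        def summarize(frame_times_ms):
            fps = [1000.0 / t for t in frame_times_ms]
            avg = len(frame_times_ms) * 1000.0 / sum(frame_times_ms)  # frames / total time
            p95 = st.quantiles(fps, n=20, method="inclusive")[0]      # 5th percentile of FPS
            return {"min": round(min(fps)), "max": round(max(fps)),
                    "average": round(avg), "95%": round(p95)}

        # Hypothetical trace: mostly ~9 ms frames with a couple of slow ones.
        print(summarize([9.0, 8.8, 9.2, 14.0, 9.1, 9.0, 21.0, 8.9, 9.3, 9.0, 8.7, 9.4]))
        # -> {'min': 48, 'max': 115, 'average': 96, '95%': 61}

     Note that the average is total frames divided by total time, not the mean of per-frame FPS values, and a handful of slow frames drags the 95% figure well below the average, which is the same shape as the numbers quoted above.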
  2. Can someone explain the difference between these data from the benchmark? The game provides a min/max/average/95% for each. I have run the benchmark while comparing the performance of an i5-3570K and i7-3770K. The former seems to produce better CPU Game results while the latter produces better CPU Render results. What exactly is the difference between those two data sets? Thanks.
  3. So I decided to screw around a bit more with the voltages. I managed to get the ASRock board to 4.5GHz with a +0.08V offset (a +0.1V increase in total). With those settings the system stayed stable through 10 minutes or so of Prime95 before I ended the test. The same settings on the Asus board caused a hang and crash on the Windows welcome screen, and I didn't bother to try any further. That much more voltage certainly isn't worth the extra clock speed. I guess that means that the CPU on the Asus board is just a worse chip than the other. It still doesn't necessarily explain the boards providing what are essentially different stock voltages, if the BIOS readings are to be believed at least.
  4. I have two 3570K systems, one on an ASRock Z75 Pro4 and one on an Asus P8Z77-V LX. Both are running at 4.2GHz with a -0.02V voltage offset, but I just noticed that the two are reading quite different voltages. The ASRock board gives me readings of:
       • 0.960V VCore in the BIOS
       • 0.816V VCore / 0.856V VID idle
       • 1.072V VCore / 1.196V VID under 100% load
     The Asus board gives me readings of:
       • 1.048V VCore in the BIOS
       • 1.016V VCore / 1.076V VID idle
       • 1.192V VCore / 1.224V VID under 100% load
     Those are differences of:
       • 0.088V in the BIOS
       • 0.200V VCore / 0.220V VID idle
       • 0.120V VCore / 0.028V VID under 100% load
     In all cases the Asus board is recording the higher voltages (the idle and load readings were taken through HWMonitor, CPU-Z, and Core Temp). So before I go through the bother of testing each CPU on the opposite board, I thought I would check here. Is this simply a case of Asus engineers designing their board to pump out more voltage than the ASRock, or does the CPU on the Asus board require that much more juice to function at the same clock and same negative offset? Maybe both?
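     As a quick sanity check, the deltas above can be recomputed in a few lines; the dictionary keys here are just labels for the readings quoted in the post:

        # Sanity-check the board-to-board deltas using the quoted readings.
        asrock = {"BIOS": 0.960, "idle VCore": 0.816, "idle VID": 0.856,
                  "load VCore": 1.072, "load VID": 1.196}
        asus   = {"BIOS": 1.048, "idle VCore": 1.016, "idle VID": 1.076,
                  "load VCore": 1.192, "load VID": 1.224}

        for key in asrock:
            print(f"{key}: +{asus[key] - asrock[key]:.3f}V on the Asus board")
        # BIOS +0.088V, idle +0.200V/+0.220V, load +0.120V/+0.028V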
  5. That did the trick. It also required a firmware/software update for my TV to display it correctly, but that worked. I can also maintain the 4K resolution on the desktop and just select 1440p 60Hz in-game. V-Sync and the 60FPS cap work fine too. Posting this for posterity, should anyone have the same issue.
  6. I was not aware that I could do that, but I will definitely give it a try. Not that it would matter too much to me, but my native display resolution is set to 4K. Would this change it to 1440p, or would it only apply when I select a 1440p resolution? Thanks for the help, buddy.
  7. Need technical help.
       • TCL 55S405 4K 60Hz monitor (television)
       • EVGA GTX 1070 SC Gaming Black (08G-P4-5173-KR)
       • Windows 10 Pro
     Long story short: when playing Devil May Cry 5 in full screen at 1440p, I seem to be locked to 30FPS or less if I cap the frame rate at 60 and turn V-Sync on. 1080p and 4K don't exhibit the problem, and neither does windowed or borderless windowed mode at 1440p. Only full screen. Further, if I set the frame rate to variable and turn off V-Sync, it seems to run normally... but then I get screen tearing. My TV is 60Hz (and I have manually set it as both 60Hz and 59.93Hz in Windows), yet when I select the 1440p resolution I cannot select more than a 30Hz refresh rate. I can at all other resolutions. I have set the DMC 5 config file in Steam to both DX11 and DX12 and it happens with either. Is this just a driver issue or something known? It is annoying. Appreciate the help.
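     For what it's worth, the 30FPS lock is consistent with the 1440p full-screen mode only exposing a 30Hz refresh rate: with V-Sync on (and no triple buffering, an assumption here), the presented frame rate snaps to the refresh rate divided by a whole number of missed vblanks. A rough sketch of that quantization:

        import math

        def vsync_fps(refresh_hz, render_time_ms):
            # With V-Sync, each frame waits for the next vblank, so the
            # effective rate is refresh / ceil(render_time / vblank_interval).
            vblank_ms = 1000.0 / refresh_hz
            return refresh_hz / math.ceil(render_time_ms / vblank_ms)

        print(vsync_fps(60, 12.0))  # 60.0 -- a 12 ms frame makes every 60Hz vblank
        print(vsync_fps(30, 12.0))  # 30.0 -- the same frame capped by a 30Hz mode
        print(vsync_fps(30, 40.0))  # 15.0 -- miss one 30Hz vblank and it halves again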
  8. Those are good points, although I still won't be looking above the (current) 2060 price range when that happens. I have a tinkering itch that needs scratching though, so I guess I will drop some coin on a used 1070 deal and see what comes down the road in a few months. At the very least I would guess a 1070 should be fine to handle 1080p at 60fps for some time to come.
  9. So right off the bat: I want, don't need, a new GPU. I currently play what few games I do on my 4K 60Hz television with an RX 580. At 1080p it keeps things pretty and at 60FPS for the most part. I mainly play offline, single-player stuff like DMC 5, GTA V, Gears 5, and Dragonball Xenoverse 2, so there is no need for super-high-refresh-capable graphics. I have a computer in my basement whose GTX 960 I wanted to upgrade, so I am replacing that with my RX 580. The question I have is whether I should drop $300+ on the 2060 to future-proof with ray tracing, or just snag a good used deal on a GTX 1070 or 1080 if they drop low enough. Either way I know I can drop some settings and get respectable 1440p framerates, which is where I want to go. The 1070 can be had for about $175ish, while the 1080 seems to be about $100 more than that, and the 2060 another $50 on top of that. As I said, I don't need a new GPU, so I'm having a hard time justifying the money on more than a 1070 unless ray tracing is going to become a necessary feature soon. So is the "PS5 and Xbox ray tracing is going to make everything other than RTX cards obsolete" talk legitimate, or would you not bother?
  10. So I got my old man a 1080p IPS monitor to move him into the 21st century. He is running an i5-2400 and using the integrated HD 2000 graphics. All he does is browse the internet, watch YouTube and Facebook videos, and play some Flash slot machine games. Is that old iGPU still up to the task of putting out 1080p for the videos? I have an old 1GB GDDR5 Radeon HD 6670 and a 2GB RX 460 laying around that would no doubt do the trick, but I want to keep power consumption down and driver updates to a minimum if I can (he won't be updating drivers).
  11. Yep. I really have no complaints. I've been hobbying around with computers for the past 20 years but haven't built a rig with newer technology in quite some time (save for the RX 580... which is by no means "new" anymore). Feeling frisky. I'm just wondering if there is something I am missing as to why a 4-core/4-thread CPU like my 3570K would be approaching obsolescence sooner rather than later. Or, perhaps more relevant to my question: unless you need the processing power of a 16-thread CPU for a 64-player online game, why aren't more people singing the praises of how well something like the 3570K still performs in this day and age?
  12. This has probably been asked a bunch of times over, but I'm bored and curious, for those that would entertain me. Can anyone convince me of a reason to upgrade my current 3570K system to one of the new Ryzen platforms? I have the itch to build and play with the new technology, but can't justify spending the money for limited gains.
     Right now I have a 3570K overclocked to 4.2GHz on a -0.020V offset voltage. Solid and cool under load, and I'm sure there's room to push another few hundred MHz. I have it paired with a Sapphire Nitro RX 580 8GB SE at stock and 16GB of 1600 DDR3 RAM.
     I tend to play single-player, non-FPS games, and I tend to play those that were released some time in the past. I just finished GTA V a few months ago and am on my second run of DMC5 (which stays at 60FPS at 1440p other than cut scenes that will occasionally drop to 45FPS). I'm not sure what my next game is, but it won't involve me playing with 60 other people. I don't have time to get good enough to not have to hear smack from a 12-year-old who plays 10 hours a day (a job and 2-year-old children will do that to you).
     With the prevalence of 6- and 8-core CPUs, I feel bad for my 4-core/4-thread 3570K when I run Cinebench, but she still isn't letting me down in any noticeable way where it counts. Is that about to change? I'd hate to fire up my rig one day and have to play at a peasant 30FPS at 1080p (although I tend to play most things at 1080p... DMC5 just seems really light on its system requirements, so why not?). So, is there any reason that I'm not thinking of for not replacing my 6/7-year-old CPU/Mobo/RAM? I'm honestly amazed at how good Intel's platform is/was even after all of those years.
  13. Well okay then. So is it accurate to say that if I want to lower the power consumption and temperature on the Asus board at 4.2GHz, my only options would be to set the offset even lower (more negative) than -0.045V, or to switch to manual voltage and lose the Vcore throttling?
  14. That would make sense, but shouldn't the auto voltage be the same or at the very least VERY close? Isn't that essentially the stock voltage?
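     For what it's worth, one consistent explanation is that offset mode does not apply to a single fixed stock voltage at all: the offset is added to the CPU's factory-fused VID table, which is calibrated per chip at binning time, so two otherwise identical CPUs at the same offset can legitimately request different voltages. A hedged sketch of that model, plugging in the two load VIDs quoted in the next post:

        # Sketch (assumption): offset mode adds the offset to the chip's own
        # fused VID, so "auto" voltages legitimately differ chip to chip.
        def requested_vcore(fused_vid, offset):
            return fused_vid + offset

        # Two chips at the same -0.045V offset but different fused VIDs:
        print(requested_vcore(1.1909, -0.045))  # ~1.146V requested by one chip
        print(requested_vcore(1.3260, -0.045))  # ~1.281V requested by the other
        # The measured Vcore lands lower still once Vdroop under load is subtracted.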
  15. I have two nearly identical systems, both with an i5-3570K at 4.2GHz with a -0.045V offset and NO LLC (full Vdroop). Same case. Same cooler. Same fan setup. One is on an ASRock Z75 Pro3 and one is on an Asus P8Z77-V LX. For some reason that I cannot determine, the Asus CPU runs about 10C hotter than the other and, according to Core Temp and CPU-Z, draws more power at full load (running small FFTs under Prime95... not that it should matter).
       • ASRock Z75 Pro3: 1.048V Vcore / 1.1909V VID / 65.9W
       • Asus P8Z77-V LX: 1.176V Vcore / 1.3260V VID / 78.5W
     Is there a setting, possibly specific to Asus, that would cause that board to pour more voltage into the CPU? Other than switching to the -0.045V offset, I have not touched voltages.
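     One way to read those numbers: at a fixed clock, dynamic CPU power scales roughly with the square of the core voltage, so the measured power gap can be cross-checked against the Vcore gap. A rough back-of-the-envelope using the readings above:

        # Cross-check: does the Vcore gap roughly explain the power gap?
        # P ~ C * V^2 * f at a fixed clock (a standard approximation).
        asrock_v, asus_v = 1.048, 1.176   # load Vcore readings quoted above
        asrock_w, asus_w = 65.9, 78.5     # package power readings quoted above

        print(f"predicted from Vcore: {(asus_v / asrock_v) ** 2:.2f}x")  # ~1.26x
        print(f"measured power ratio: {asus_w / asrock_w:.2f}x")         # ~1.19x

     The two ratios land in the same ballpark, which is at least consistent with the extra voltage itself, rather than the case, cooler, or fan setup, accounting for the higher power draw and temperature.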