skaughtz

Member
  • Content Count: 35

Awards

This user doesn't have any awards

About skaughtz

  • Title: Member

  1. Thanks, fellas. The 1080 Ti is still a beast for my purposes, so that is how I was leaning. Now for the difficult question of whether to sell the 2070 or use it to replace the 1070 in my other rig.
  2. So, long story short: I recently RMA'd a 1070 Ti and am receiving back a 2070. I guess they were out of stock on 1070 Tis, but I'm certainly not complaining. I also have a 1080 Ti in my main machine. Has the 2070 surpassed it now in 2021 with driver improvements and optimizations, or should I stick with the 1080 Ti? With GPU prices the way they are now, I can certainly get a decent ROI on either card if I decide to sell one, but I would rather keep the better one in my main gaming rig (I typically play 1080p or 1440p at 60Hz on a television as I sit too far away on my couch to…
  3. Does Kombustor's artifact scanner actually work? The card is topping out at 71C under that program, so I suppose I could let it go for an hour and see if it catches anything once the GPU has been pushed for an extended period. Otherwise I am out of ideas.
  4. I recently bought a 1080 Ti from someone who said that it was artifacting. It is still under warranty, so I figured I would send it in for repair/replacement since they just wanted the cash; then hey, I get a cheap (relatively speaking) 1080 Ti. The problem is that I can't get the darn thing to display any problems. I have run (at 4K and stock settings) MSI Kombustor for 15 minutes, FurMark, Unigine Superposition, Time Spy, various game benchmarks... nada. I am happy that the card seems to be functioning properly, but I don't want any problems to show up later. Is th…
  5. I think something fun might be an "Improve my build" segment. This would not include replacing or upgrading parts, but simply improving the existing system that people send in. This could include replacing thermal paste on CPUs and GPUs, tweaking overclock settings for maximum performance, improving cable management or relocating parts to improve airflow, adding simple accessories like fan controllers to reduce noise, or even showing how altering in-game or system settings can increase performance, among others. The LTT team could then take a baseline of the performance of the system upon a…
  6. No dice. Swapped it over to an Asus P8Z77-V LX and the same thing happened. It just will not give the option to alter the turbo core frequency for each individual core. Asus MultiCore Enhancement does nothing. I ran a program called "ThrottleStop" in Windows 10 on both boards, and it also showed that the 4-core turbo multiplier could not be increased past 3.6GHz and showed +0 bins for overclocking. So, for anyone looking at this in the future: base clock is the only way you are increasing these Xeons. Either that, or it takes a very specific motherboard to alter each core…
  7. Thanks. I came across that too, but the guy who posted last has not put out anything on YouTube. I am wondering if it is the board, which is an ASRock Z75 Pro3 and doesn't seem to offer quite the fine-grained control that my Asus P8Z77-V LX board does. I'm going to swap them tomorrow to see if I can force it to behave the way it should. Unless I am completely wrong about how Ivy Bridge chips could boost, I should be able to get the Xeon to clock 4.0GHz on all cores (3600MHz four-core turbo + 400MHz OC headroom of that generation for locked CPUs; quick arithmetic after this list).
  8. I threw a Xeon E3-1240 v2 on an ASRock Z75 Pro3 board. The Z75 chipset is the same as a Z77 for all intents and purposes. Locked Sandy/Ivy CPUs can be set to run turbo on all cores on these chipsets, and run 400MHz above the standard all-core turbo (my understanding is that Intel did away with that with Haswell). So why is this stupid Xeon refusing to go above 3.6GHz on all cores? The max turbo boost on one core is 3.8GHz. Despite setting the CPU core ratio to "All core" and the all-core ratio to 38, it continues to operate under load as if it were on a non-overclocking chipset. Is this…
  9. I have an old gaming rig that I use occasionally, with an RX 480, a Z77 motherboard, and a 3570K in it. The 3570K is a good overclocker and I currently have it at 4.5GHz with 16GB of RAM. The other day I decided to fire up Devil May Cry 5 on it and noticed that I was getting a fair amount of microstutter. I happen to have a Xeon E3-1275 collecting dust (4 cores/8 threads) and I was wondering if the increased thread count would make it the better CPU to have in the system. Since the Xeon can't overclock, it is limited to its turbo boost (3.5GHz on 4 c…
  10. Thanks for the explanation. This GN/Buildzoid video explaining LLC is where the majority of my understanding of LLC comes from. It is something of a basic overview, but helpful nonetheless for anyone interested. Since the Cryorig M9 is basically on par with a 212 EVO, I can't go for an absurdly high overclock anyway. But I can't get a stable overclock above 4.5GHz with an offset voltage, so I thought maybe LLC could be of help without torching the chip.
  11. I have a 3770K I'm playing around with. Right now I have it at 4.5GHz on a +0.08V offset with no LLC. It is cooled by a Cryorig M9, so my temps are about as high as I want them to go right now. The motherboard VCore reads 1.112V, and the load VID/VCore under Prime95 is 1.271V/1.216V, respectively (droop worked out after this list). I understand how LLC works in general, and have no intention of applying anything over a mid (50%) setting. My understanding is that with high/extreme LLC settings, the reported voltage through CPU-Z and the like may be significantly lower than what is actually being supplied to the CPU (…
  12. Can someone explain the difference between these data sets from the Shadow of the Tomb Raider benchmark? The game provides min/max/average/95% figures for each (see the percentile sketch after this list). I have run the benchmark while comparing the performance of an i5-3570K (with an 8GB RX 580) and an i7-3770K (with a GTX 1070). The former seems to produce better CPU Game results (76/151/112/86 vs 63/127/94/70) while the latter produces slightly better CPU Render results (48/238/79/50 vs 55/172/84/57). What exactly is the difference between those two data sets, and why would a 4c/4t i5 outperform a 4c/8t i7 clocked the same in a newer…
  13. Can someone explain the difference between these data sets from the benchmark? The game provides min/max/average/95% figures for each. I have run the benchmark while comparing the performance of an i5-3570K and an i7-3770K. The former seems to produce better CPU Game results while the latter produces better CPU Render results. What exactly is the difference between those two data sets? Thanks.
  14. So I decided to screw around a bit more with the voltages. I managed to get the ASRock board to go to 4.5GHz with a +0.08V offset (+0.1V increase total). With those settings the system managed to stay stable through 10 minutes or so of Prime95 before I ended the test. The same settings on the Asus board caused a hang and crash on the Windows welcome screen, and I didn't bother to try any further. That much more voltage certainly isn't worth the extra clock speed. I guess that means the CPU on the Asus board is just a worse chip than the other. It still doesn't necessar…
  15. I have two 3570K systems, one on an ASRock Z75 Pro4 and one on an Asus P8Z77-V LX. Both are running at 4.2GHz with a -0.02V offset, but I just noticed that they are reading quite different voltages (deltas worked out after this list). The ASRock board gives me readings of: 0.960V VCore in the BIOS; 0.816V VCore / 0.856V VID at idle; 1.072V VCore / 1.196V VID under 100% load. The Asus board gives me readings of: 1.048V VCore in the BIOS; 1.016V VCore / 1.076V VID at idle; 1.192V VCore / 1.224V VID under 100% load. Those are differences of 0.088V in the BIOS, 0.2V VC…
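
A quick sanity check of the turbo arithmetic from item 7, assuming the stock 100MHz BCLK and the +4-bin headroom for locked Sandy/Ivy Bridge chips that the posts above describe (both figures are taken from the posts, not verified against Intel documentation):

$f_{\text{target}} = (36 + 4) \times 100\,\text{MHz} = 4000\,\text{MHz} = 4.0\,\text{GHz}$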
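
The droop in item 11 can be read straight off the quoted load numbers; this gap between the requested and delivered voltage is exactly what LLC compensates for:

$V_{\text{droop}} = V_{\text{ID}} - V_{\text{core}} = 1.271\,\text{V} - 1.216\,\text{V} = 0.055\,\text{V}$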
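
On the min/max/average/95% figures in items 12 and 13, here is a minimal Python sketch of how such summary statistics are commonly derived from per-frame data. This is a generic illustration, not Shadow of the Tomb Raider's actual implementation, and frame_times_ms is made-up sample data:

    # Hypothetical per-frame times in milliseconds from a benchmark run.
    frame_times_ms = [16.1, 17.0, 15.8, 22.4, 16.5, 31.2, 16.0, 15.9]

    # Convert each frame time to an instantaneous FPS value and sort.
    fps = sorted(1000.0 / t for t in frame_times_ms)

    minimum = fps[0]
    maximum = fps[-1]
    average = sum(fps) / len(fps)

    # A "95%" figure usually means the frame rate achieved or exceeded
    # 95% of the time, i.e. the 5th-percentile FPS (the slow tail).
    p95 = fps[int(0.05 * len(fps))]

    print(f"min={minimum:.0f}  max={maximum:.0f}  avg={average:.0f}  95%={p95:.0f}")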
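
And the board-to-board deltas from the readings in item 15, computed pairwise from the numbers quoted there (the post is cut off, but the idle VCore gap is presumably the 0.2V figure it trails off on):

$\Delta V_{\text{BIOS}} = 1.048\,\text{V} - 0.960\,\text{V} = 0.088\,\text{V}$
$\Delta V_{\text{core,idle}} = 1.016\,\text{V} - 0.816\,\text{V} = 0.200\,\text{V}$
$\Delta V_{\text{core,load}} = 1.192\,\text{V} - 1.072\,\text{V} = 0.120\,\text{V}$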