Search the Community
Showing results for tags 'timespy'.
-
Decided to upgrade from a 2080 Super to a 4070 Super. While it definitely performs much better than the 2080, I can't help but think I'm not getting full value here. The Time Spy GPU graphics score is bottom 4% every single time I run it, or about 3,000 points lower than average (it should be ~21,000). Also note the CPU is a 12700K. This isn't my first rodeo and I'm quite experienced with computers, but I also don't want to spend any more time on this piece of junk if that's what it really is. The temps are fine, the clocks are good, HWiNFO looks good, and it has plenty of power: two separate 8-pin cables from an HX1200i Platinum.

List of what I've tried that didn't help (what I can think of right now, so non-exhaustive):
- Above 4G Decoding + Resizable BAR (confirmed in NVIDIA Control Panel + Device Manager)
- With/without HDR
- With/without G-Sync
- All power plan modes
- Disabled Windows Defender
- Reinstalled NVIDIA drivers
- Updated all other drivers
- Changed resolution
- OC'd with Afterburner

Also note that before installing this card, I would regularly get above-average scores with my 2080 Super. It's a Gigabyte Gaming OC card. I really hate Gigabyte, but it was an open-box unit from Best Buy at a crazy good deal, and I'm not sure the previous owner ever plugged it in, as the PCIe contacts didn't have any scratches on them. At this point I'm guessing it's just a horribly binned card, so I'm going to end up returning it unless someone has some other idea of what's going on.
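A quick way to rule out a power or clock cap here is to log what the card actually does during a Time Spy run. Here's a minimal sketch using pynvml, the Python bindings for NVIDIA's NVML (`pip install nvidia-ml-py`); it assumes the 4070 Super is GPU index 0, and the two-minute sampling window is arbitrary:

```python
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first NVIDIA GPU

limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(gpu) / 1000  # NVML reports mW
print(f"enforced power limit: {limit_w:.0f} W")

# Sample once a second while the benchmark runs in another window.
for _ in range(120):
    power = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000  # W
    core = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)  # MHz
    util = pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu  # %
    temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
    print(f"{power:6.1f} W  {core:5d} MHz  {util:3d}%  {temp:3d} C")
    time.sleep(1)

pynvml.nvmlShutdown()
```

If power sits pinned at the limit while clocks sag, the card is power-capped; if utilization sags instead, something upstream (CPU, PCIe link, driver) is starving it.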
-
Hello, lovely tech enthusiasts! I come to you today seeking your invaluable expertise and guidance on a matter that has been occupying my tech-loving heart. Allow me to share my experiences and seek your insights regarding my Ryzen 7950X3D processor.

So, here's the scoop: I've proudly set a stable Curve Optimizer on my Ryzen 7950X3D, precisely at -15 for all cores (tested for a solid 9 hours with CoreCycler, though I must admit I got a tad lazy and didn't go any lower than -15 all-core or even attempt per-core tuning, hehe!). With PBO enabled, SOC voltage set at 1.30, and an EXPO 1 configuration (6000MHz, 30-38-38-96), I've achieved higher scores than ever before in the 3DMark Time Spy benchmark: both the CPU score (top 33% @ 16,701) and the graphics score (top 21% @ 36,965) reached unprecedented heights (overall score: top 25% @ 31,251)! Additionally, my multi-core score in Cinebench R23 hit a glorious 36,224, surpassing all previous records.

However, there are two aspects that have caught my attention:

1. The multi-core temperature during Cinebench R23 remains somewhat high, reaching around 89°C. Are there any tips or tricks you can share to help me lower the temps? My system boasts a trusty 360mm Fractal Design Lumen AIO in pull configuration (exhaust), complemented by the stock Hyte Y60 case with its three static-pressure fans (two bottom intake, one rear exhaust). The side panel is currently off, allowing for better airflow.

2. Despite the remarkable overall performance, my single-core score in Cinebench lingers around 1,926. I can't help but wonder if there's a way to boost this score. Would designating Cinebench as a "game" during the single-core test help with core parking and potential performance improvements? I can't think of much else, since the Time Spy CPU score and everything else seems great (top 33%), although my CPU Profile score in 3DMark (not Time Spy) was only top 42%. (It currently parks correctly in games and everything else, and I'm running Windows 11, btw.)

Now, my fellow enthusiasts, I turn to you for your wisdom and suggestions. How can I achieve cooler temps during multi-core workloads? Are there any ideas to enhance the single-core performance? Should I consider repasting or even reseating the CPU to ensure optimal thermals? Linus showcased this EXACT setup in his "500FPS PC" video, and it appeared slightly cooler (5-10°C cooler) in Halo Infinite @ 1080p 500Hz (I'm using my old 240Hz 1080p monitor for CPU benchmarking and in Halo Infinite to compare to his, so it's more CPU-intensive). While I initially attributed the variances to ambient temperatures and the silicon lottery, I'm open to exploring other possibilities.

Your assistance, recommendations, and shared experiences would be immensely appreciated! Let's join forces and navigate the realms of temperature management and single-core performance together. Your support means the world to me, and I can't wait to hear your brilliant ideas and suggestions. Thank you in advance for your kindness and expertise, my tech-loving friends! With utmost gratitude, Keiko-chan
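One way to sanity-check the core-parking question is to watch which cores light up during the single-core test. A minimal sketch with `psutil` (my suggestion, not a tool from the post); it assumes logical CPUs 0-15 map to CCD0, the V-cache die, which is the usual 7950X3D layout but worth verifying on your own system:

```python
import psutil

# Print the busiest logical CPU once a second while Cinebench single-core runs.
# Assumption: logical CPUs 0-15 = CCD0 (V-cache), 16-31 = CCD1. Verify locally.
for _ in range(60):
    loads = psutil.cpu_percent(interval=1, percpu=True)
    busiest = max(range(len(loads)), key=lambda i: loads[i])
    ccd = 0 if busiest < 16 else 1
    print(f"busiest logical CPU: {busiest:2d} ({loads[busiest]:5.1f}%) -> CCD{ccd}")
```

If the single-core load consistently lands on the non-V-cache die (which typically clocks higher and is what Cinebench single-core favors), the scheduler is doing its job; if it bounces between CCDs, the "game" designation trick may be worth testing.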
- 2 replies
- Tagged with: 7950x3d, single core performance (and 4 more)
-
Hello world! I'll do my best to keep it short and sweet.

The setup: Win11 + Ryzen 3900X + ROG Strix X570-E Gaming + 2x16GB G.Skill Trident Z 3600 CL16 RAM + RTX 2080 Ti + Corsair RM850x.

3DMark four years ago when I built the setup: ~14,500 (sure, you can't truly compare apples to apples with older versions of 3DMark), but I could see GPU load maxed out back then.

Sold a kidney and one eye and bought an RTX 4090 + MSI A1000G PCIE5. Ran 3DMark on the old setup first and noticed that my 2080 Ti's performance was being seriously capped: my score was ~8,500 versus a community average of ~14,000 for the same hardware. Looking at the pretty graphs from 3DMark: CPU performance was as expected (clock-wise and CPU score), and temps never went higher than 62°C (eyeballing the graph). Clocks of the 2080 Ti: consistent 1750MHz GPU memory and close to 2000MHz GPU core, GPU load ~50% (eyeballing the graph), temps always below 60°C. Thought to myself something was wrong, but who cares, I have a fancy new toy...

Off with the old PSU, off with the old GPU. New PSU, new GPU, sexy and problematic single PCIe 5.0 cable. Everything boots up, new NVIDIA drivers detected and installed. Ran 3DMark and I get... the same score range... ~8,500. The GPU boosted briefly to 2800MHz, but it seems to average 1200MHz, with a max core temp of 48°C. I think to myself, whatever... who cares about synthetic benchmarks? I boot up No Man's Sky, and I was getting around the same FPS I had with the 2080 Ti (at 4K, 70-ish FPS). Ah... poorly optimized game... sure... Cyberpunk... crank everything up (higher quality than I used to play), and I get around the same FPS I had before! (At least the ray-tracing eye candy didn't seem to impact performance.)

That's when I went down the rabbit hole:
- Was it G-Sync? No.
- Was it the computer's power profile? No.
- Was it the old BIOS? Updated to the latest: no.
- Is it the CPU virtualization I need to run Docker containers? No.
- Was it the overclocked RAM? No.
- High activity from some random program in the background? No.
- Swapped the RAM sticks (their positions between themselves)? No.
- Pending Windows updates? No.
- Should I have removed all the graphics drivers before installing the new one? No worries, nuked Windows and reinstalled it: no.
- Some power limit set? Installed MSI Afterburner, confirmed power limit set to 100%: no.
- Used MSI Afterburner to train the overclock curve, hoping it would work? No.
- Was my cat sitting on top of the computer? No.
- Am I leaving performance on the table by pairing a 4090 with an old system? Yes, but I need to wait for my kidney and eye to regenerate so I can sell them again (that's how it works, right?... right?).

What makes things more interesting: in the distribution graph that shows scores from people with the same core hardware, there's a little bump around my miserable score, indicating that a fair number of people are getting the same kind of results I am.

Attached are screenshots of the detailed 3DMark results, along with the 3DMark result files (if you want to open them in 3DMark to check all the details). My goal really isn't to use 3DMark as the source of all truth (although I did put most of my focus on it, because it's the only instrumentation I've got), but gaming also shows that something is not OK. My knowledge, limited experience debugging these issues, and whatever ability I have at googling things leave me thinking that maybe the motherboard is having a problem? But I don't know the best way to debug from here.

Is there any tool that can capture more detailed information on what's happening with the card? Perhaps power delivery? I have no idea where to pick this up from. I'm hoping this community can help me, and all the other 3DMarkers who seem to be having the same problem; I have a feeling there may be people on a budget who stretched themselves so they could enjoy their games and are facing the same issue. The truth is that over the last four years I went through a backlog of games, and I can't tell when the issue with the 2080 Ti started. Jumping from old games to AAA games, I believe I just didn't notice the performance drop and accepted the underutilized reality of my 2080 Ti, since before this build I hadn't been gaming for 10 years. Thanks for reading all the way here, thanks in advance if you choose to help, and now I'll go curl into a ball and cry myself a river.

2080ti - 3DMark-TimeSpy-8651-20230725214529.3dmark-result
4090 - 3DMark-TimeSpy-8675-20230726171831.3dmark-result
2080ti when first built in 2019 - 3dmark-autosave-20190929205015.3dmark-result
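Given a card that boosts to 2800MHz but averages 1200MHz at ~50% load and 48°C, one thing worth checking programmatically is the PCIe link and performance state during a run. A minimal sketch with pynvml (Python bindings for NVIDIA's NVML, `pip install nvidia-ml-py`), assuming the 4090 is device 0:

```python
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

# A 4090 under load should report PCIe gen 4 x16 and P0; a degraded link
# (gen 1, x4, etc.) or a stuck performance state can starve the card.
for _ in range(120):
    gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(gpu)
    width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(gpu)
    pstate = pynvml.nvmlDeviceGetPerformanceState(gpu)  # 0 = P0 = max perf
    clock = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
    print(f"PCIe gen{gen} x{width}  P{pstate}  {clock} MHz")
    time.sleep(1)

pynvml.nvmlShutdown()
```

If the link reports gen 4 x16 and P0 while clocks still idle along at 1200MHz, the card is waiting on something upstream, which would fit the CPU-bottleneck theory.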
-
Continuing from my previous thread (it got derailed by people arguing about the 7900 XTX and CPU scores... hope this thread will stay on track). I got a new PSU and did the testing again on my Aorus Master RTX 3080 Rev 2. Thankfully, OPP/OCP didn't trip on this 850W PSU (Corsair RMe850 ATX 3.0), unlike on my old one.

I'm wondering if the GPU temps are OK, especially since Gigabyte had bad thermal pads on their 3080s, though that should be fixed on Rev 2 cards. (I admit my ambient is too high, but I don't have AC, and I'm in Sweden, where temps usually don't get this high.) Here are the results. Note that the GPU temps seem to get hotter when running the FurMark stress tests. (All temperatures are in Celsius.)

About power connectors: the Corsair RMe850 ATX 3.0 comes with two PCIe cables, one of which has an extra daisy-chained PCIe connector, so it's 1+2. The Aorus Master card requires three PCIe connectors, and I noticed in CPUID HWMonitor that they carry separate wattages: one cable is taking 220+W while the other takes 110+W. I'm not sure how accurate that is, but 220+W is actually over the power rating of the cable. (I checked Cybenetics' report: the cable uses 16+18 AWG wire, and the thicker 16 AWG wire can carry 15 amps, which multiplied by 12 volts is 180W, less than the 220+W it's carrying.) I'm considering buying an individually sleeved cable kit like this one (which includes two PCIe cables) to replace the daisy-chained cable: Corsair Sleevade Kablar Type 4 Gen 4 Startkit - Vit - Inet.se
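For reference, here's the post's back-of-the-envelope math as a quick sketch (the 15 A figure for 16 AWG and the HWMonitor wattages are taken from the post; real cable ratings also depend on insulation temperature, connector, and conductor count, so treat this as a rough check only):

```python
# Rough capacity of a 16 AWG conductor at 12 V (rating quoted in the post).
rated_amps = 15   # A
rail_volts = 12   # V
capacity_w = rated_amps * rail_volts
print(f"16 AWG at 12 V: ~{capacity_w} W")  # ~180 W

# Wattages reported by HWMonitor for the two cables feeding the card.
daisy_chain_w = 220   # cable feeding two 8-pin connectors
single_w = 110        # dedicated cable
print(f"daisy-chained cable: {daisy_chain_w} W "
      f"({daisy_chain_w - capacity_w:+d} W vs the ~{capacity_w} W figure)")
```

By this rough math the daisy-chained run is ~40 W over the quoted conductor rating, which is the argument for moving to two dedicated cables.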
-
Hey guys, I have a really weird issue with Time Spy, but first the specs: Ryzen 7 5800X, PowerColor Red Dragon RX 6800 XT, Kingston Fury 32GB 3200MHz CL16 RAM, Asus TUF B550 motherboard, Asus TUF 750W 80+ Gold PSU, Windows 11, every driver up to date.

So I built this PC a few months ago, and back then I only cared about GPU OC and decided not to touch anything else in the BIOS except DOCP. Even then, Time Spy gave me very interesting results: to be precise, my graphics score was much lower than others got with the same settings on identical systems. But my games ran with the same performance as everyone else's; only 3DMark was fucking with me, so I didn't bother with it.

And so we arrive at last weekend: my friend built the same system and hit a 20k graphics score in Time Spy while my best was 18k, AND WE HAD THE SAME OC SETTINGS. So I started to mess with the tuning in Adrenalin, but nothing came close to even my old 18k score. My everyday OC is: min frequency 2500MHz, max 2600MHz, undervolt to 1050mV, VRAM fast timings at 2100MHz, power limit +15%. But the benchmark had been run with min 500MHz, and I simply changed max to 2650MHz. Now Time Spy crashed; then, after setting the voltage to the maximum of 1150mV, it eventually completed the benchmark with, you guessed it, NO IMPROVEMENT AT ALL!

After searching the internet, some people on forums said PBO can maybe affect it, and that auto PBO is like turning it off. So I set PBO to Enabled instead of Auto. When I went back to run Time Spy once again, it just crashed every time! So I reverted the changes I made, but it still keeps crashing; actually, it doesn't matter what changes I make in Adrenalin, because it crashes every time. I thought I'd killed Time Spy, but a reinstall didn't help, and other benchmark software (UserBenchmark, Cinebench, Heaven, Prime95 (ran it for 7 hours), FurMark, PCMark, Geekbench) actually runs fine.

After this I ran my PC for over 18 hours and tested performance in 15 games (ToF, RE4, FH5, Cyberpunk, Hogwarts Legacy, AC Valhalla, Marvel's Spider-Man, GI, The Last of Us, NFS Unbound, FF XV, Railway Empire 2, Forspoken, Knights of Honor 2, TW Warhammer 3) at max settings at 1440p. I played each game for over an hour and no crashes happened; temps were normal (even junction never went above 95°C), performance was excellent, and there were no problems with any other system parts either.

So basically I just want confirmation that my PC has no stability problems if the only problem is with 3DMark Time Spy. And a question: should I set PBO to Auto or Enabled? (I don't want to do any manual OC on my CPU atm.)
-
I don't know what the problem is with my GPU. All my games just recently started crashing on me, so I decided to run Time Spy, and it couldn't get through the second graphics test. I looked at the charts and saw that the card would hit a hard dip down to 300MHz and then shoot straight back up to 1965MHz. I'm not savvy with overclocking and only really mess around with fan speeds through Afterburner. Any suggestions would be really helpful, as I don't want to mess with and break anything in my PC.

Specs:
CPU: 3700X
GPU: 2070 Super
RAM: Trident RGB 3200MHz
Mobo: Asus B450-F Gaming
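Those hard dips to 300MHz usually come with a throttle reason the driver can report. Here's a minimal polling sketch using pynvml (Python bindings for NVIDIA's NVML, `pip install nvidia-ml-py`); the reason labels come from NVML's bitmask constants, and the two-minute window is arbitrary:

```python
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

# Human-readable labels for NVML's throttle-reason bitmask.
REASONS = {
    pynvml.nvmlClocksThrottleReasonSwPowerCap: "power cap",
    pynvml.nvmlClocksThrottleReasonHwSlowdown: "HW slowdown (power/thermal brake)",
    pynvml.nvmlClocksThrottleReasonSwThermalSlowdown: "SW thermal slowdown",
    pynvml.nvmlClocksThrottleReasonGpuIdle: "idle",
}

for _ in range(120):  # sample for ~2 minutes while the test runs
    mask = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(gpu)
    clock = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
    active = [label for bit, label in REASONS.items() if mask & bit] or ["none"]
    print(f"{clock:5d} MHz  throttle: {', '.join(active)}")
    time.sleep(1)

pynvml.nvmlShutdown()
```

A "HW slowdown" flag right at the dip points toward power delivery or thermals; if nothing is flagged and the clock still collapses while games crash, a clean driver reinstall is the usual next step.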
-
What am I missing here? Clocks are all higher, but I score in the bottom 1%... I checked the driver profile for Time Spy, and it's on max performance etc. There's no point taking my OC further until I know why the GPU score is so woefully low compared to other 3GB 1060 cards.
-
Summary

Over at Weibo, the leaker known as "Golden Pig Upgrade" (translated) posted a result from a 3DMark Time Spy benchmark run: a 10,138 Time Spy score (10,107 graphics score). This score was achieved with Intel's new Arc Alchemist mobile GPU known as the Arc A730M, which is not even the full ACM-G10-based model; this discrete GPU enables 24 of the 32 available Xe-cores. It scores between an RTX 3060 and RTX 3070 Laptop GPU, depending on where you look.

My thoughts

I'd say this is looking pretty darn good for Intel, IMHO. It's well in line with many of the previous predictions, so Intel definitely delivered in that regard. There is some disparity between 3060 and 3070 performance, but that's expected with different laptop models being tested and differing driver optimization. For instance, my RTX 3060 Laptop scores 9,802 (9,303 graphics) in Time Spy and 21,639 (23,288 graphics) in Fire Strike; so, according to the charts, I'm around a 3070 Laptop in Time Spy and around the A730M in Fire Strike. I'd say this definitely brings competition to the midrange mobile market, and it brings hope for the release of the Arc desktop GPUs. Of course, we will have to wait and see what sort of gaming performance Arc delivers, as Tom's Hardware points out that it's easier to optimize for 3DMark than for a wide gamut of games.

Sources

https://wccftech.com/intel-arc-a730m-12-gb-mobile-gpu-is-faster-than-nvidias-rtx-3070-mobility-3dmark-performance-benchmarks/
https://www.techpowerup.com/295592/intel-arc-a730m-3dmark-timespy-score-spied-in-league-of-rtx-3070-laptop-gpu
https://www.guru3d.com/news-story/in-3dmarkthe-intel-arc-a730m-outperformed-the-rtx-3070-mobile.html
https://videocardz.com/newz/intel-arc-a730m-is-faster-than-rtx-3070-laptop-gpu-in-3dmark-timespy-test
https://www.tomshardware.com/news/intel-arc-a730m-close-to-mobile-rtx-3060-3dmark-time-spy

Update to this story ~

Summary

Weibo user "Golden Pig Upgrade" tested the Intel Arc A730M discrete mobile graphics card in a number of games, such as Assassin's Creed: Odyssey, Metro Exodus, and F1 2020, at two resolutions. Performance lands around an RTX 3060M: faster than a desktop RTX 3050 but slower than a desktop RTX 3060, depending on the scenario.

Sources

https://www.guru3d.com/news-story/intel-arc-a730m-game-tests-gaming-performance-differs-from-synthetic-performance.html
https://videocardz.com/newz/intel-arc-a730m-has-been-tested-in-games-with-performance-between-rtx-3050-and-rtx-3060
https://www.tomshardware.com/news/intel-arc-a730m-gaming-benchmarks-show-rtx-3050-mobile-level-performance
https://www.techpowerup.com/295624/intel-arc-a730m-tested-in-games-gaming-performance-differs-from-synthetic
https://wccftech.com/intel-high-end-arc-a730m-gpu-is-barely-faster-than-an-nvidia-rtx-3050-in-gaming/
https://www.pcgamer.com/first-intel-arc-alchemist-benchmarks-are-a-bit-of-a-mixed-bag/
https://hothardware.com/news/intel-arc-a730m-benchmarks-mobile-geforce-rtx-gpus

My thoughts

I think there is definitely some driver maturation still to be done, of course, but performance is not too shabby. I know people were expecting more because of the synthetics, but I think this is still a great start, to be honest. I'm not as gloom-and-doom as some of these news outlets (not all of them are), because I think this is still early, and I also think it's unfair to judge the GPU based solely on three games. Between now and when the GPUs are widely available, they will definitely have time to do some serious work.
Second update to this story ~

Summary

There are some more gaming (and workstation) benchmarks available today, and it seems the Arc A730M loses to an RTX 3060M in almost all instances (except Metro Exodus and Elden Ring). The performance is all over the place, with many inconsistent results. There's a brief video from "Golden Pig Upgrade" comparing the Arc A730M and GeForce RTX 3060 Laptop, and also a full review from IT-Home. The IT-Home results are based on an unofficial driver (30.0.101.1726), so results may vary, but both reviews appear to show similar synthetic performance, so the numbers should be reasonably accurate.

Sources

https://videocardz.com/newz/first-review-of-intel-alchemist-acm-g10-gpu-is-out-arc-a730m-is-outperformed-by-rtx-3060m-in-gaming
https://wccftech.com/intel-arc-a730m-high-end-mobility-gpu-slower-than-rtx-3060m-despite-latest-drivers/
https://www.tomshardware.com/news/geforce-rtx-3060-mobile-kicks-intel-arc-a730m-around
https://www.bilibili.com/video/BV1US4y1i7ne
https://www.ithome.com/0/623/070.htm

My thoughts

I'm guessing this might be why the lineup has been limited to Chinese markets before a global release: it might simply be that the software is not even close to ready. Things were looking promising with the synthetics a few days ago, and even yesterday the first gaming benchmarks weren't looking too bad. But this is quite a different scenario altogether, as even the synthetics here show nothing beneficial for the A730M. This is supposed to be a relatively high-end GPU, and in the end it's not really competitive with NVIDIA's mainstream offering. If this is the final performance to expect when the product launches worldwide, the only saving grace is pricing, because the performance in this showing is pretty poor. Obviously it would be best to wait until Arc lands in the hands of respected reviewers instead of guesstimating performance from these early reviews. Nevertheless, it's still quite disappointing.
-
I've been having problems with my RTX 2080. I've tried multiple things in the NVIDIA Control Panel, but it didn't help, and I don't have any idea what to do anymore. Here's my Time Spy score: https://www.3dmark.com/spy/11122785 Hope someone can help.
-
Alright. So. MSI Afterburner blinks the screen when applying any overclock. Is this normal? Secondly, I've enabled extended overclocking, but not voltage increases; as such, the overclock is still small. Thirdly, said overclock does fine in FurMark, going for over an hour with no issues, and it also runs fine in the Heaven benchmark. But it glitches out horrendously in Time Spy, to the point that the main monitor goes to a bunch of pixelated lines, and even after force-quitting the program it refuses to recover: it just stays there, and I have to force a restart to fix it. Any advice?

System specs:
- R9 270 Gigabyte (was trying 1150 core / 1500 memory)
- 1090T @ 4GHz (6 hours stable in Prime95)
- Crosshair IV Formula motherboard
- Samsung 850 EVO SSD
- Corsair HX620 PSU

I'm getting annoyed with the small issues building up.
- 7 replies
- Tagged with: msi, afterburner (and 1 more)
-
Might I suggest allowing Builds.gg, UserBenchmark scores, and other benchmark leaderboard scores in signatures?

https://builds.gg/
Builds.gg is, in my opinion, one of the best ways to display a rig. It allows photos, detailed part descriptions, and regular updates. People can also easily show approval with a simple thumbs-up or by leaving a comment. Linus Media Group has been sponsored by them in the past, so allowing links wouldn't be much of a stretch. They even have the Copper Tubing Build as an entry!

https://www.userbenchmark.com/
UserBenchmark links would be useful for showing off brag-worthy overclocking scores right off the bat. It's also another way to provide a parts list, but one that's actually verifiable, in contrast to PCPartPicker, where anyone can make a list of whatever components are out there without any proof of actually having them in their possession. I believe Linus Media Group has used this site as an information source in their videos as well.

https://www.3dmark.com/hall-of-fame-2/
Having Time Spy or other benchmarking leaderboards would be neat too. I understand that deciding which specific sites to allow could get complex, so I'm not sure this would ever become a possibility. It's more of another idea to throw out there while I'm at it.
- 4 replies
- Tagged with: signatures, builds.gg (and 2 more)
-
So I just bought my system and am running some benchmarks to test it out. My brother has a 1700X running at 3.8GHz that beats my 1900X's ass. My brother, another buddy with some previous overclocking and general PC building and testing experience, and I are all baffled at what might cause my low scores. Does anyone have similar experience, or any idea what we might be overlooking?

1700X: 9,500 points in Time Spy and 1,680 in Cinebench @ 3.8GHz
1900X: 6,500 points in Time Spy and 1,810 in Cinebench @ 4.2GHz (scores 1,450 in CB at factory clocks with boost)

Why so inconsistent? One thing we noticed that seemed weird: the 1900X randomly downclocked some of its cores to half speed, both when idle (which seems reasonable) but also while benching. NB: the screenshot was taken after a benchmark, which is why the utilization shows idle. Any help here would be superb. Thx, SlickWizard
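A cheap way to pin down that half-speed-core behaviour is to log per-core frequencies during the benchmark. A minimal sketch with `psutil` (my suggestion, not a tool from the post); note the caveat in the comment about per-CPU support:

```python
import time
import psutil

# Log per-core frequency once a second while the benchmark runs.
# NOTE: percpu=True returns one entry per core on Linux; on Windows, psutil
# may report only a single aggregate frequency, in which case HWiNFO's
# logging is the fallback.
for _ in range(60):
    freqs = psutil.cpu_freq(percpu=True)
    print("MHz per core:", "  ".join(f"{f.current:4.0f}" for f in freqs))
    time.sleep(1)
```

If some cores really do sit at half speed mid-benchmark, that points at a Windows power plan or BIOS power-management setting rather than the silicon itself.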
- 11 replies
- Tagged with: timespy, thredripper (and 3 more)
-
Hi, just wanting to know if 3DMark has a DRM-free version, instead of me having to install Steam with my account on a friend's PC. I can only seem to find this one: http://www.guru3d.com/files-get/3dmark-download,1.htm Cheers.
- 2 replies
- Tagged with: steam, benchmarks (and 2 more)
-
Hello, I don't know what's really going on here. I have a stable OC on my i5-8600K at 5.0GHz right now. However, my Time Spy CPU score is only 5,400; comparing this to other i5s, it's pretty low, even lower than i5s not running an OC as high as mine. Any ideas what could be happening? What's even more confusing is the fact that my Cinebench R15 scores are 1,200 multi-core and 213 single-core, which appears to be very good. My max temps under heavy load never get above 65°C, so I don't think it's a thermal issue. My overall combined score in Time Spy was 6,674.

Setup:
- Intel Core i5-8600K 3.6GHz 6-core processor, OC'd to 5.0GHz
- Corsair H110i 113.0 CFM liquid CPU cooler
- Asus Prime Z370-A ATX LGA1151 motherboard
- Crucial Ballistix Sport LT 16GB (2 x 8GB) DDR4-2400 memory
- Samsung 850 EVO-Series 250GB 2.5" solid-state drive
- Western Digital Caviar Blue 1TB 3.5" 7200RPM internal hard drive
- Asus GeForce GTX 1070 Ti 8GB ROG STRIX video card
-
http://www.pcper.com/reviews/Graphics-Cards/3DMark-Time-Spy-Looking-DX12-Asynchronous-Compute-Performance

PCPer has published an article on the new 3DMark DX12 Time Spy benchmark and tested a number of graphics cards from both AMD and Nvidia. Based on the results, it confirms that Pascal does support async compute, unlike its predecessor Maxwell, which clearly shows no performance gains (ahem, still waiting for that async compute driver). While it is apparent that the Fury X gets as much as twice the relative performance gain of Pascal with async compute enabled, this should at least confirm that Nvidia has (in part) successfully circumvented the need for power-hungry componentry while still achieving noteworthy gains in async compute performance, with the 1080 showing gains similar to the R9 Nano and the 1070 just trailing the 480. How this translates to real-world performance mostly remains to be seen, as it depends on actual adoption of the new APIs and competent implementation by developers.

It should also be noted that large percentage gains with async compute should not necessarily be applauded, or compared between cards. As we can see just below, the 480 barely makes up any ground against the 1070 in this test. I personally feel perf/watt remains the best metric for determining which approach is best, and not just for environmental reasons.

Actual performance increase per card (highest to lowest):
Fury X: 652 points
Nano: 496 points
GTX 1080: 471 points
RX 480: 335 points
GTX 1070: 305 points
1080 SLI: 164 points
GTX 970: ?
GTX 980: ?
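To make the points-versus-percentage argument concrete, here's a quick sketch that works only from the deltas listed above (baseline scores aren't given in this excerpt, so no percentages are computed; the point is how little the absolute gap moves):

```python
# Async compute point gains as listed in the PCPer data above.
gains = {
    "Fury X": 652,
    "Nano": 496,
    "GTX 1080": 471,
    "RX 480": 335,
    "GTX 1070": 305,
    "1080 SLI": 164,
}

# A large percentage gain on a slower card can translate to very little
# movement in the absolute standings: the RX 480 gains only 30 more points
# than the GTX 1070, so the gap between them barely closes.
gap_closed = gains["RX 480"] - gains["GTX 1070"]
print(f"RX 480 closes {gap_closed} points on the GTX 1070")  # 30 points
```

Thirty points out of multi-thousand-point Time Spy scores is exactly why the article warns against comparing percentage gains across cards.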
- 171 replies
- Tagged with: async compute, pascal (and 2 more)