Search the Community
Showing results for tags 'benchmarks'.
-
UPDATE 4/7/21: I see this is still being used but it's fairly outdated. I will look to update this and create a new tracking system to organize the results.
I have decided to start an LTT 3DMark spreadsheet for anyone looking to keep track of their scores in relation to other users of the community. Much thanks to Jumper118 for maintaining the Valley thread found here. I will be maintaining 3 different benchmarks to start; I will add more if needed later on. You can download 3DMark from their website: http://www.futuremark.com/benchmarks/3dmark
Rules: You must use the template below to submit your score. Only valid scores will be added. If you are updating your score, please indicate you are doing so in your post.
Template:
Benchmark: (Fire Strike, Fire Strike Extreme, Sky Diver)
CPU:
GPU:
GPU Core:
GPU Memory:
Score:
3DMark Link:
PCPartPicker Link: (optional)
Current link (DXMember's May 17 2016 Update): https://onedrive.live.com/redir?resid=718A6487E3EBF0D2!175&authkey=!AFgMhhezFkbXg6Q&ithint=file%2Cxlsx
- 3,930 replies
-
- 3dmark
- benchmarks
-
(and 3 more)
-
Background
When benchmarking and playing Cyberpunk 2077 I am running into an issue where my minimum FPS drops as low as 6 and goes no higher than 12 FPS. I can physically feel the gameplay stuttering (a rare occurrence, but jarring nonetheless) when I play some Cyberpunk. I have a similar experience with the RDR2 benchmark. Could the issue be because I have a defective GPU? I also heard that AM5 struggles with 4 high-speed RAM modules, so it would be better if I swapped my 4x16GB RAM modules for 2x32GB ones.
System components
- CPU - AMD 7800X3D
- CPU cooler - Thermalright Peerless Assassin 120
- GPU - PowerColor Red Devil RX 7900 XTX
- Motherboard - MSI B650 Edge WiFi
- RAM - Corsair Vengeance 64GB (4x16GB) DDR5 6000 CL30
- SSD 1 - 2TB Samsung 980 Pro (OS)
- SSD 2 - 2TB Kingston NV2
- PSU - ASUS ROG Strix 1000W 80+ Gold
- Case - Fractal North ATX
- Fans - Lian Li UNI AL120 V2 (x4) and Lian Li UNI AL140 V2 (x2)
My current progress
I have since tried the following with no change in the results:
- Turned off EXPO
- Turned off PBO
- Removed 2 of the RAM sticks and ran the tests with a 32GB RAM setup
- Updated chipset drivers
- Updated AMD display drivers (latest Adrenalin version)
- Updated Windows
- Updated motherboard drivers (sound, WiFi, etc.)
- Turned off V-Sync
- Changed games to fullscreen mode
- Turned off ray tracing
- Turned off FSR 2.1
- Removed and reseated the GPU
What other troubleshooting options can I try, or am I out of luck and will have to RMA the GPU?
Additional question: is it advisable to switch my 4x16GB DDR5 RAM config to a 2x32GB setup?
- 8 replies
-
- rx7900xtx
- benchmarks
- (and 4 more)
-
Learn more about Intel Optane Memory: https://www.intel.ca/content/www/ca/en/architecture-and-technology/optane-memory.html In a desert of high RAM prices, Intel’s Optane accelerator modules are looking like a pretty good alternative. Does it have what it takes? Buy a 16GB Intel Optane Memory Module: On Amazon: http://geni.us/OBeho On Newegg: http://geni.us/aBREvG Buy a 32GB Intel Optane Memory module: On Amazon: http://geni.us/00ht On Newegg: http://geni.us/IePD6B
-
Upgrading the HP Pro 3500
I often see posts here on the forums where someone is asking if it is possible to upgrade their prebuilt desktop to make it capable of gaming. Most of the time, the answer is yes, and here I come with proof. I have upgraded an HP Pro 3500 with a fast, affordable used GPU and a more powerful PSU. Prefer b-roll over written text? Watch my video here:
Component Selection
I chose the GTX 770 for the GPU as it's still a very capable card for 1080p gaming by today's standards and it can be picked up quite inexpensively on the used market. The GTX 770 is based on the same fully unlocked 28nm GK104 Kepler chip as the one found in the GTX 680. The GK104 chip packs 1536 CUDA cores and a 230W TDP. The GTX 770 is available in both 2GB and 4GB variants, both featuring 7GHz GDDR5 on a 256-bit bus. The MSI GTX 770 Gaming OC variant I have is overclocked out of the box by 5%, resulting in close to 3.4 TFLOPS of compute performance. The MSI GTX 770 Gaming OC barely fits in the HP's compact mATX case, so a smaller GPU would make the upgrade process ever so slightly smoother.
For the PSU I chose the Cooler Master B600 for one reason: I already had one on my test bench. I would recommend going with a slightly better quality unit than the B600, so feel free to check out the PSU tier list by @STRMfrmXMN:
When it comes to the stock specs of the machine, the HP Pro 3500 is more than adequate. Its Ivy Bridge quad-core i5 3470 is no slouch, and with 8GB of DDR3 RAM you should (in theory) be able to watch a YouTube video and browse the LTT forums at the same time with no slowdowns. The 500GB mechanical drive is not the fastest thing in the world, but it is large enough to house a few games, and you can always add an SSD later down the line. The board is nothing fancy and lacks both USB 3.0 and SATA 6Gb/s. This however is not much of an issue, as it has three PCIe x1 slots for expansion cards that can add these features.
Specs
- Intel Core i5 3470 @ 3.2GHz
- HP Pro 3500 Motherboard (H61)
- 8GB (2x4GB) DDR3 1600MHz CL11
- MSI Nvidia GeForce GTX 770 Gaming OC
- Seagate Barracuda 7200.12 500GB
- Cooler Master B600 (600W)
Upgrade Process and Benchmarks
The upgrade process in video format can be found at 1:10 and benchmarks can be found at 4:28.
A quick note on GPU temps
Due to the compact nature of the HP case, the airflow situation for the GTX 770 was far from ideal. With the stock fan profile, the card ended up at 81°C and 1084MHz on the core. This however was easily fixed by applying a more aggressive fan curve in MSI Afterburner, letting GPU Boost kick in, although at the cost of a slightly louder system.
Conclusion
Overall, upgrading a prebuilt is definitely something to consider for the gamer on a budget. You could go the used route like I did here and buy a last-gen GPU and a more powerful PSU, or you could get a new card like the GTX 1050 Ti which does not require auxiliary power, and skip the PSU part altogether, making for an easier upgrade. As always, the choices are many with PC gaming, and this is just one of them.
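As a side note, the quoted TFLOPS figure follows directly from the core count and clock, since each CUDA core can do one fused multiply-add (two FLOPs) per cycle. A quick sanity check in Python, using the ~1084MHz boost clock observed in the temps section above (the exact boost clock of this particular card is an assumption):

```python
cuda_cores = 1536
boost_clock_hz = 1.084e9      # ~1084 MHz observed boost clock (assumption)
flops_per_core_per_cycle = 2  # one fused multiply-add = 2 floating-point operations

tflops = cuda_cores * flops_per_core_per_cycle * boost_clock_hz / 1e12
print(f"{tflops:.2f} TFLOPS")  # ~3.33 TFLOPS, in line with the ~3.4 TFLOPS quoted above
```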
-
Best all round benchmarking program. For single use...
- 1 reply
-
- benchmarking software
- benchmarks
-
(and 2 more)
-
If one were to set a high VRAM value in the control panel in Windows 11 - let's say using a formula where 7GB of 32GB is the RAM usage with no tabs open but Task Manager, so 32 - 7 = 25, then 25 x 3 = 75 VRAM - and also turned Resizable BAR on, would that have any effect on high-VRAM games that use 9-12GB of VRAM, such as the rumored Jedi Survivor game and other recently released games? Especially for 6GB cards like the RTX 2060 or the 8GB RTX 3070?
-
Hello everyone. Sorry for my bad English; I did not know where I could ask this question. I am a tiny YouTuber and not that good at editing. I use Filmora as my editing software. Since I usually build PCs and provide benchmarks at the end, I cannot make graphs easily, and when I do make them, they look bad and are not reusable for the next video. I know there are some very bad solutions like Excel, but I want to keep my production quality a bit higher. If anyone has any tips for making professional-looking graphs like LTT or JayzTwoCents that can be reused multiple times without much hassle, please tell me. I will really appreciate it.
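One low-effort way to get consistent, reusable charts is to script them once and only swap the data each video. Here is a minimal sketch using Python and matplotlib; the game names, FPS numbers, title, and colors are placeholders, not real results:

```python
import matplotlib.pyplot as plt

# Placeholder data -- replace with your own benchmark numbers each video.
results = {
    "Game A (1080p High)": {"Avg FPS": 144, "1% Low": 98},
    "Game B (1080p High)": {"Avg FPS": 112, "1% Low": 74},
    "Game C (1080p High)": {"Avg FPS": 89,  "1% Low": 61},
}

def benchmark_chart(results, title, outfile):
    """Draw a horizontal grouped bar chart and save it as a transparent PNG
    so it can be dropped onto any video background in the editor."""
    games = list(results.keys())
    metrics = list(next(iter(results.values())).keys())
    height = 0.35
    fig, ax = plt.subplots(figsize=(12.8, 7.2), dpi=150)  # 1920x1080 output
    for i, metric in enumerate(metrics):
        values = [results[g][metric] for g in games]
        offsets = [y + i * height for y in range(len(games))]
        bars = ax.barh(offsets, values, height=height, label=metric)
        ax.bar_label(bars, padding=4, fontsize=12)  # print the FPS value at the bar end
    ax.set_yticks([y + height / 2 for y in range(len(games))])
    ax.set_yticklabels(games, fontsize=12)
    ax.invert_yaxis()                       # first game on top
    ax.set_xlabel("Frames per second")
    ax.set_title(title, fontsize=16)
    ax.legend()
    fig.tight_layout()
    fig.savefig(outfile, transparent=True)  # transparent PNG for overlaying in the editor

benchmark_chart(results, "Ryzen 5 5600 + RTX 3060 (placeholder)", "benchmarks.png")
```

Because the layout lives in the script, every video gets the same look and you only edit the data dictionary; you can then drop the exported PNG straight onto your Filmora timeline.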
- 5 replies
-
- benchmarks
- graphs
-
(and 3 more)
-
So I recently tested my CPU and GPU using two or three benchmarking programs and found out that my CPU is underperforming severely. It's not overheating, so thermal throttling is out of the equation. I checked Task Manager to see if some other background program was causing issues, but found none. While trying to look around in the power plans, I came across a bug which causes my power plan to switch for some reason. Is there any way to counter this bug? Also, what can I do to boost my CPU performance back to average?
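If the problem really is Windows silently switching the active power plan, one stopgap (not a fix for the underlying bug) is to periodically re-assert the plan you want with the built-in powercfg tool. A rough sketch in Python; the alias SCHEME_MIN maps to the High performance plan on a stock install, but verify with `powercfg /list` on your machine first, since that mapping is an assumption here:

```python
import subprocess
import time

DESIRED_ALIAS = "SCHEME_MIN"   # High performance on a stock install; check `powercfg /list`
CHECK_INTERVAL_SECONDS = 60

def active_scheme() -> str:
    """Return the output of `powercfg /getactivescheme` (GUID plus plan name)."""
    out = subprocess.run(
        ["powercfg", "/getactivescheme"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def set_scheme(alias: str) -> None:
    """Force the given power plan to be the active one."""
    subprocess.run(["powercfg", "/setactive", alias], check=True)

if __name__ == "__main__":
    while True:
        print("Active plan:", active_scheme())
        # Re-assert the desired plan; harmless if it is already active.
        set_scheme(DESIRED_ALIAS)
        time.sleep(CHECK_INTERVAL_SECONDS)
```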
-
AMD made the curious decision to very quietly launch a graphics card today – The Radeon RX 6600. Why would they not announce it? Does AMD have any confidence in it whatsoever?
-
I built this PC for $780 and have really been enjoying it. I have an editing station I use at home with a 3700X, 1080 Ti, 64GB, and an X570 board. The experience I have had with the 5700G has been amazing so far. As for 4K editing, it has held up very well. I was able to overclock my Crucial Ballistix 3200MHz memory to 3600MHz (turned down in these benchmarks). When my AIO comes in I'll hopefully be able to push the graphics card and processor even harder! I have no real gaming benchmarks yet, but I have been playing Splitgate at 1080p, medium settings, at 180 FPS!
-
- benchmarks
- ryzen
-
(and 3 more)
-
Windows 11 is fast approaching, but while Microsoft promises big improvements, is it really going to be a boon to gamers, or is it just going to be a side-grade with a new skin?
- 34 replies
-
- windows 11
- gaming
-
(and 4 more)
-
Hi, I wonder if anyone knows of a website that does sub-1080p gaming benchmarks with ray tracing using the latest Nvidia 3000 and Radeon 6000 series graphics cards?
-
Apologies if this is the wrong forum category; I figured the results in here will be 'news' to some degree. Today my MacBook Air with the new M1 chip arrived. I've linked some images as proof > Image 1 - Image 2
This thread was spurred by various discussions elsewhere on the forums and aims to consolidate my initial benchmarks - more importantly, I wanted to open a channel for anyone to request benchmarks and/or answers to questions you may have. I have various hardware available, including a few Intel-based Macs and quite a few Windows machines, which I can go into more detail about later. Please let me know any benchmarks or questions you have and I'll update this post to consolidate them as soon as I can.
Technical Observations
Just gonna dump any extraneous ramblings here.
Power Draw
I let the laptop sit on charge until it was firmly at 100% and then measured wattage at the wall using my wattmeter with the included 30W USB-C PD charger. Take these measurements with a pinch of salt as the included power adapter will not be 100% efficient, and my meter is only supposedly rated for ±0.5W of precision. Where I've put variability (±) below, this is just a guesstimate from watching the numbers vary for a brief moment.
- At idle with panel brightness at 0% I measured 2.1W ± 0.2W.
- At idle with panel brightness at 100% I measured 7.1W ± 0.2W.
- Running CB23 multi-core for 10 minutes with panel brightness at 100% I measured 29.0W ± 2.0W - this dropped to 24.4W ± 1.0W after about 9 minutes.
- Running CB23 single-core for 10 minutes with panel brightness at 100% I measured 13.7W ± 0.4W consistently throughout the entire 10 minutes.
- Playing back 4K60 HDR content on YouTube at 1x speed with panel brightness at 100% I measured 9.1W ± 0.2W.
- Playing back 4K60 HDR content on YouTube at 2x speed with panel brightness at 100% I measured 9.7W ± 0.2W.
- Playing League of Legends at 2560x1600 High with anti-aliasing I measured 16W ± 2W.
Benchmark Results
Cinebench R23
After updating macOS, and possibly with less background stuff going on, the results seem to be a bit better.
- Multi 7724, Single 1517 (Minimum Duration: Off) - result of 1 run after installing macOS 11.0.1
- Multi 7261, Single 1508 (Minimum Duration: 10 Minutes) - result of 1 run after installing macOS 11.0.1
- Multi 7573, Single 1476 (Minimum Duration: Off) - result of 1 run before installing macOS 11.0.1
- Multi 7020, Single 1468 (Minimum Duration: 10 Minutes) - average of 3 runs before installing macOS 11.0.1
Shadow of the Tomb Raider (Installed via Steam)
All results running under emulation at 100% resolution scale with TAA enabled, HDR off and the 'High' graphics preset.
- 2560x1600: Min 12 / Max 25 / Avg 14 / 95%n 12 (See results)
- 1920x1200: Min 18 / Max 39 / Avg 23 / 95%n 19 (See results)
- 1440x900: Min 25 / Max 60 / Avg 33 / 95%n 28 (See results)
All results running as above but with AA turned off and the 'Medium' graphics preset.
- 2560x1600: Min 13 / Max 29 / Avg 16 / 95%n 14 (See results)
- 1920x1200: Min 20 / Max 45 / Avg 25 / 95%n 21 (See results)
- 1440x900: Min 28 / Max 67 / Avg 37 / 95%n 30 (See results)
All results running as above but with AA turned off and the 'Low' graphics preset.
- 2560x1600: Min 20 / Max 48 / Avg 26 / 95%n 22 (See results)
- 1920x1200: Min 29 / Max 71 / Avg 39 / 95%n 33 (See results)
- 1440x900: Min 41 / Max 102 / Avg 56 / 95%n 46 (See results)
League of Legends (Installed via Steam)
- 2560x1600 High with anti-aliasing: 60-90 FPS depending on busyness.
- 1440x900 High with anti-aliasing: 80-130 FPS depending on busyness.
This is running under Rosetta; Activity Monitor shows League of Legends as 'Intel' under the architecture column.
World of Warcraft
- 2560x1600 with settings at '10' (max) and 2xAA, running around Goldshire: 30-40 FPS.
The above is before the upcoming patch, so it's actually running under Rosetta. Tomorrow Blizzard supposedly adds support for a native binary, so we can compare results.
Speedometer (Running on Safari 14.0.1)
- MacBook Air (Late 2020, M1): 224 runs/minute ± 3.8
- MacBook Pro 15" (Late 2019, i9-9880H): 115 runs/minute ± 4.2
- MacBook Pro 13" (Mid 2018, i5-8259U): 117 runs/minute ± 1.7
- iPhone XS Max: 141 runs/minute ± 5.0
Questions & Answers
Q: Can it play back 4K60 videos on YouTube at 2x speed without skipping frames?
A: It plays back 4K60 HDR YouTube content at 2x playback speed smoothly. It claims to drop half the frames, likely because it is a 60Hz panel.
Notes
For reference I have access to the following machines for comparisons:
- 2018 MacBook Pro 13" (Core i5-8269U, 8GB, 256GB, 4-port model with 2 fans)
- 2019 MacBook Pro 15" (Core i9-9880H, 32GB, 1TB, Vega 20)
- 2020 MacBook Air 13" (M1, 8-core GPU, 8GB, 512GB)
I also use two Razer Core X Chroma enclosures with Vega 56s, which sadly do not work with the M1 MacBook. I have various Windows-based machines too, including server hardware.
- 28 replies
-
The following is going to be my new build:
- Case - Cooler Master MasterBox TD500 Mesh mid tower
- Processor - AMD Ryzen 7 5800X
- Motherboard - Asus ROG Strix B550-F Gaming
- GPU - Galax RTX 3070 SG
- Power supply - Cooler Master 850W 80+ Gold
- Storage - Crucial P1 1TB
- HDD - WD 2TB 5400RPM
- RAM - G.Skill Ripjaws V (2x16GB) 3200MHz
- Cooler - Noctua NH-D15
I want to know how to stress test the cooler temps, the GPU, and the CPU, and which software to use. Please give suggestions.
- 2 replies
-
- benchmarks
- 5000 series
-
(and 3 more)
-
I was looking at some benchmarks of all AM4 CPU generations at 1440p, even 4 vs 8+ cores. Quite honestly... IMO the difference is not worth that much; even when comparing Ryzen's 1st gen vs the 5000 series, the difference is minimal. The biggest change pretty much comes down to upgrading the GPU (or so it seems). So why does it matter to have the best CPU?
-
Apple claims the Mac Studio with M1 Ultra is the most powerful computer you can buy for $4,000. Does that put AMD, Intel, and Nvidia on notice – Or is Apple making claims they can’t back up? Buy an Apple Mac Studio M1 Max: https://geni.us/c9DL Buy an Apple Mac Studio M1 Ultra: https://geni.us/DFJvaa Buy an Apple Studio Display: https://geni.us/3ho7o Buy an Apple Magic Keyboard: https://geni.us/8ezY Buy an Apple Magic Mouse: https://geni.us/h04VhmR Buy an Apple MacBook Pro 14” M1 Pro: https://geni.us/S8vIu Buy an Intel Core i9-12900K: https://geni.us/516f8Q Buy a Nvidia GeForce RTX 3090: https://geni.us/eZkB
-
Summary
A purported Core i9-13900 engineering sample (ES) that's in the wild has been put through its paces over at Expreview against a Core i9-12900K. However, this particular engineering sample is a bit of a turtle when it comes to clock speeds. Thus, to get some meaningful data out of this hardware, Expreview compared the ES against an Intel Alder Lake Core i9-12900K at the same frequencies. According to Expreview, the 13th gen CPU is on average around 20 percent faster than the 12th gen CPU.
Quotes
My thoughts
A welcome showing by the Raptor Lake i9-13900 on an unoptimized platform. Further optimizations, more mature drivers, and putting the chip into a 700 series motherboard (or a BIOS update on a Z690 board) could all meld together to push the gap between it and the previous gen even further when Raptor Lake finally launches. It should also be noted that Raptor Lake is rumored to have much higher frequencies than Alder Lake. Therefore, despite a decent showing by the engineering sample here, the final product will look much more polished (these clock speeds are excruciatingly slow). It seems there needs to be some work done on the gaming side, but that is to be expected. Most of the disparities are within margin of error, and it's possible that the limited clock speeds are holding back the Raptor Lake CPU in this area.
Sources
https://videocardz.com/newz/intel-raptor-lake-es-cpu-tested-three-months-ahead-of-launch-20-faster-than-alder-lake-in-multi-threaded-tests
https://wccftech.com/intel-raptor-lake-core-i9-13900-es-cpu-benchmarks-leak-out-20-faster-than-core-i9-12900k-in-multi-threading/
https://www.guru3d.com/news_story/the_13th_generation_raptor_lake_es_cpu_from_intel_is_benchmarked.html
https://www.techpowerup.com/296157/intels-13th-gen-raptor-lake-es-cpu-gets-benchmarked
https://hothardware.com/news/intel-13th-gen-core-i9-13900-raptor-lake-cpu-breaks-cover
https://www.tomshardware.com/news/intel-raptor-lake-engineering-sample-benchmarks
https://www.expreview.com/83801.html
-
Code Paths, or The Problem With Trying to be Innovative
Mira Yurizaki posted a blog entry in Yurizaki's Tech Ramblings
Back in 2008, there was a controversy stirring in the neighborhood when VIA, the supreme underdog of the x86 world, was being reviewed. The one thing VIA did that AMD and Intel don't was leave its CPUID open. The CPUID vendor string is an identifier that tells programs what kind of processor it is and what features it has. The result was that when you changed VIA's CPUID from "CentaurHauls" (a carryover from when VIA bought Centaur Technology) to "GenuineIntel" or "AuthenticAMD", its benchmark results in PCMark 05 changed. The most noticeable one? Memory benchmarking. When VIA pretended to be an Intel CPU, its memory benchmark score went up 47%.
So was this a result of Futuremark, the creators of PCMark 05, playing favorites with Intel? No, not really. Futuremark became a victim of what are known as code paths. A code path is when you execute a different set of instructions based on what hardware the application detects the computer has. The one common point between benchmarking an Intel, AMD, and VIA product is that they're all x86 processors. So if they're all x86 processors, why would Futuremark execute a different set of instructions? At the time of PCMark 05's release (presumably 2005), Intel had processors with the SSE3 instruction set while AMD was still stuck on SSE2. VIA was still in the dumps back then. Maybe Intel also had other instructions specific to its architecture and platform that AMD lacked. Maybe Futuremark decided that to squeeze the most out of the hardware of the time, code paths should be used. But it ended up biting them in the rear.
So herein lies the problem. You want to be innovative in your hardware, so you create fun features to make your product stand out from the others, who are technically compatible with your hardware. Developers then have a choice: either take advantage of those features so your software runs better, or not.
This brings me to another point. Futuremark was recently accused, once again, of playing favorites. The problem? Their Time Spy benchmark. People noted that when asynchronous compute (stay tuned, I have a blog brewing about this...) was enabled, NVIDIA's GeForce 10 cards showed an increase in performance, if slight. People called Futuremark out on this because in supposedly every other test, the GeForce 10 cards either showed no improvement or even a regression, and suspected that NVIDIA was paying them off to make them look favorable. It also didn't help that AMD GPUs didn't improve as much as the other benchmarks supposedly showed. Futuremark said in a press statement that they consulted all PC GPU vendors, including Intel, for input. Futuremark asked them whether they should include vendor-specific code paths, and all of them disagreed. Because the moment you do so, fairness goes out the window. And Futuremark is a benchmark developer; they can't afford to throw fairness out the window.
But game developers who want to squeeze all the features they can out of their software may resort to using code paths. And they may resort to using only one for the sake of development time and effort. It may suck that they're playing "favorites", but when your audience is expecting you to do amazing things at mind-boggling frame rates, you kind of have to make these sacrifices. However, oftentimes they won't resort to a code path at all. If you look at both NVIDIA's and AMD/ATI's tech demos over the years, you'll find that both companies have had GPUs with a lot of advanced features that only became standard in GPUs of later generations (sometimes as early as two generations later). But I've never seen any of these used in games. Then again, I was able to run a lot of AMD/ATI's demos on NVIDIA hardware... The only one I couldn't run was the Radeon HD 4800 series Froblins demo.
Also, this may explain the accusations that some applications heavily favor Intel's processors. There was a period around 2000-2006 when AMD and Intel lacked feature parity, and if someone wanted to take advantage of Intel's new whizzbang features, well, AMD was kind of hosed there. But this usually only mattered for high-performance applications like CAD.
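To make the code path idea concrete, here is a minimal, purely illustrative sketch in Python: a program reads a processor identification string (the same kind of value PCMark 05 keyed off) and dispatches to a different routine depending on what it finds. The vendor strings are the real CPUID values mentioned above; the function names and "optimized" routines are made up for illustration only:

```python
import platform

# Real CPUID vendor strings; the dispatch logic below is purely illustrative.
VENDOR_INTEL = "GenuineIntel"
VENDOR_AMD = "AuthenticAMD"
VENDOR_VIA = "CentaurHauls"

def copy_generic(data: bytes) -> bytes:
    """Baseline path: a plain byte-for-byte copy that works on any x86 CPU."""
    return bytes(data)

def copy_sse3_style(data: bytes) -> bytes:
    """Stand-in for a vendor-tuned path (e.g. one using SSE3-era instructions).
    In real code this would be a different machine-code routine, not Python."""
    return bytes(bytearray(data))

def pick_code_path(vendor: str):
    """Choose an implementation based on the reported vendor string.

    This is exactly the pattern that bit PCMark 05: the fast path was only
    taken when the CPUID said 'GenuineIntel', so a VIA CPU masquerading as
    Intel suddenly scored 47% higher on the memory test."""
    if vendor == VENDOR_INTEL:
        return copy_sse3_style
    return copy_generic

if __name__ == "__main__":
    vendor = platform.processor() or "unknown"   # crude stand-in for reading CPUID
    print("Reported processor string:", vendor)
    memcpy = pick_code_path(vendor)
    print("Selected path:", memcpy.__name__)
```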
- programming
- benchmarks
-
(and 4 more)
-
Heya! I've recently benched my RX 570 4GB and my newly obtained GTX 1080, and put the results in a nice little spreadsheet that's somewhat easy to compare! https://docs.google.com/spreadsheets/d/1NIjFQUxIsiAUIUc0YsQWtXEAh1dNWcVyUJBOpR98R_4/edit#gid=1604616601
-
Hello, I have recently upgraded my PC with an RTX 3090 and I've gotta say I'm pretty let down with the performance. I'm pretty sure it's a bum card, but I want to make sure. I've gone up from an MSI Aero RTX 2070 to the Zotac Trinity RTX 3090 24GB and the improvement is noticeable but not significant. I thought this card would be an absolute monster and the difference would be night and day. At first I thought my Ryzen 5 3600X was much more of a bottleneck than I expected, so I upgraded to a Ryzen 9 5900X, but performance hasn't increased much. Before I ramble on for too long, here are the specs (I'll provide more exact model names if required):
- Windows 10 64-bit
- 1080p target resolution (bought the card mainly for VR)
- Zotac RTX 3090 Gaming Trinity OC 24GB GDDR6X
- Ryzen 9 5900X stock
- 16GB DDR4 3200 (XMP profile enabled)
- Asus TUF B550-Plus WiFi
- Aorus PCIe 4.0 boot drive
- Chieftec CTG-750C PSU - the main suspect, but if the performance now is the same as with the 3600X, I don't think it's actually THE problem.
Things I've tried: BIOS update; clean driver reinstall (DDU); upgraded tower cooling; setting PCIe mode to Gen4 in the BIOS; digging in Nvidia settings.
The performance is OK-ish; in Warzone 2.0 with settings maxed out and DLSS off I'm getting 100 FPS, but I think that's low for 1080p, no? The biggest offence in my book, the one that triggered me to look into the issue, is this video: https://www.youtube.com/watch?v=OqTtWGE12fY . I know his setup is overclocked, but at 1440p he's getting 200 FPS average while I'm getting 168 FPS average at 1080p (38 with RTX on). What I think may be the culprit is a faulty card, maybe a bad thermal paste application (I could redo it myself, but there's no point if it's under warranty). The card runs fine until the hotspot hits around 100 degrees, at which point the fan curve gets overridden and the fans ramp up and down to 100%, which on its own is starting to drive me nuts. Maybe thermal throttling? I'm not sure if I'm power limited or temperature limited. I've got a GPU-Z log from a 10-minute FurMark test attached which might tell you more (I also have a video recording, but I won't bother uploading it unless someone asks for it). Feel free to point me to what benchmark to run and I'll update with the results; I appreciate all the help I get. GPU-Z Sensor Log.txt
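One way to check whether the card is hitting a power or thermal limit before going the RMA route is to summarize the GPU-Z sensor log with a short script. A rough sketch using Python and pandas; the column names below are guesses at how the log is labelled (they vary by GPU-Z version and card), so open the .txt once and adjust them to match your file:

```python
import pandas as pd

# GPU-Z sensor logs are comma-separated; header names often carry stray spaces,
# so strip them. The exact column names below are assumptions -- adjust them to
# whatever headers your log actually uses.
log = pd.read_csv("GPU-Z Sensor Log.txt", engine="python", sep=r"\s*,\s*")
log.columns = log.columns.str.strip()

COLUMNS_OF_INTEREST = [
    "GPU Temperature [°C]",
    "Hot Spot [°C]",
    "Board Power Draw [W]",
    "GPU Clock [MHz]",
    "PerfCap Reason []",
]

for col in COLUMNS_OF_INTEREST:
    if col not in log.columns:
        print(f"(column not found: {col})")
        continue
    series = pd.to_numeric(log[col], errors="coerce")
    if series.notna().any():
        print(f"{col}: min={series.min():.0f}  max={series.max():.0f}  mean={series.mean():.0f}")
    else:
        # Non-numeric columns (e.g. a perf-cap reason) -- show how often each value appears.
        print(f"{col}:")
        print(log[col].value_counts().head())
```

If the clock collapses whenever the hotspot column peaks, that points at thermals (paste/pad contact); if power draw is pinned at a ceiling instead, it points at the power limit or the PSU.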
-
Hi there, I bought a Metabox (Clevo) P775TM1-G not long ago with a single stick of 16GB 3000MHz RAM; the laptop made it run at 2666MHz. Anyway, I wanted to put more RAM in it, so I bought more 2400MHz sticks on the advice of Metabox. After installing them, my FPS was lower in benchmarks. In games there's not a whole lot of difference, maybe still a bit lower. I took the new RAM out thinking it would run faster again, but it was exactly the same for some reason. Am I doing something wrong? Is it common for benchmark programs to have large inconsistencies in results? Before the RAM change I got 60+ FPS in the Valley benchmark and after I got 40+ FPS. System is an i7 8700K and a GTX 1080.
-
- ram speed fps
- benchmarks
-
(and 1 more)
-
Wow. Kudos to Andrei Frumusanu for his vigilance and investigative work here. For those who don't remember, the article touches upon the earlier discoveries and reporting of benchmark cheating by some very major makers of mobile devices. In this new case, the question is whether the cheating is being done by the chipmaker itself and included on the phones of various OEMs. The chipmaker in question is MediaTek. The tweaks cover a huge range of benchmarks and in some cases go so far as to target the copy of the benchmark used by editorial reviewers. Further, upon discovery and the opening of inquiries, it would appear said chipmaker simply moved their benchmark cheats elsewhere, but they remain. Their statement in response is along the lines of: everyone does it, it's normal, nothing to see here. Some will consider it shockingly blatant (the author seems to think so), while for others this might come as no surprise at all. https://www.anandtech.com/show/15703/mobile-benchmark-cheating-mediatek