
MarkBench Development and Feature Requests

AdamFromLTT

Don't know if it has already been suggested, but it would be nice if the tool could try to detect when a test has been limited by either the CPU or the GPU, or report what percentage of the test was limited by each, if that makes more sense.
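For what it's worth, a rough version of this can be computed from the kind of per-sample utilization log most benchmark harnesses already capture: call a sample GPU-bound when the GPU is pinned near 100%, and CPU-bound when its busiest core is. A minimal sketch, where the CSV column names and the 95% threshold are assumptions, not anything MarkBench actually does:

```python
# Rough bottleneck breakdown from a per-sample utilization log.
# Column names ("gpu_util", "max_core_util") and the threshold are assumptions.
import csv

THRESHOLD = 95.0  # % utilization above which a sample counts as "pinned"

def bound_breakdown(log_path):
    gpu_bound = cpu_bound = total = 0
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            if float(row["gpu_util"]) >= THRESHOLD:
                gpu_bound += 1
            elif float(row["max_core_util"]) >= THRESHOLD:
                cpu_bound += 1
    return {"gpu_bound_pct": 100.0 * gpu_bound / total,
            "cpu_bound_pct": 100.0 * cpu_bound / total}
```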

 

A highly customizable tag system would also be nice, to filter by specific hardware, resolutions, and similar settings when browsing already-posted benchmark results, if those are going to be available to the general public.

 

Also, I'm sure this is going to be a recurring request/suggestion, but please consider going open source so that the community can contribute to building a more complete, optimized, and robust tool, create their own forks, and so on.

 

Good luck with the project; this tool sounds very exciting, and I can't wait to try it.

Ryzen 3900x // Asus Strix B550-F Gaming // G.Skill Trident Z Neo 32GB (2x16 GB) // MSI RTX 3060 Ti


Feature Request:

 

Adding a deep learning benchmark could give another insight into GPU performance.


Apologies for not reading the whole thread, it's very long... 

 

I don't know if anyone else has suggested this, but what would be cool is if LTT worked with the Linux driver developers (who, notably, are generally not the GPU makers) to help test and qualify the performance of the Linux drivers across games, to help make the Linux gaming experience better. The folks at Red Hat are the largest group of developers working on the Linux graphics driver stack, and they're also working on rebuilding the open source NVIDIA driver for better performance.

 

If MarkBench were open source and worked on Linux, then LTT and the Red Hat graphics folks could collaborate on driving better gaming performance on Linux. @Matthew Miller (the Fedora Project Leader) might be able to help with establishing connections to make that happen.


7 hours ago, Conan Kudo said:

Apologies for not reading the whole thread, it's very long... 

 

I don't know if anyone else has suggested this, but what would be cool is if LTT worked with the Linux driver developers (who, notably, are generally not the GPU makers) to help test and qualify the performance of the Linux drivers across games, to help make the Linux gaming experience better. The folks at Red Hat are the largest group of developers working on the Linux graphics driver stack, and they're also working on rebuilding the open source NVIDIA driver for better performance.

 

If MarkBench were open source and worked on Linux, then LTT and the Red Hat graphics folks could collaborate on driving better gaming performance on Linux. @Matthew Miller (the Fedora Project Leader) might be able to help with establishing connections to make that happen.

Lots of good work from folks at Valve and Collabora in this space too, e.g. https://steamcommunity.com/games/221410/announcements/detail/1602634609636894200 and https://www.collabora.com/news-and-blog/news-and-events/introducing-nvk.html. I'm not sure LTT wants to become an engineering shop, but advocacy and coverage for such efforts is always great to see!


I would love to see LTT advocate for standardized game benchmarking interfaces. Navigating splash screens and menus is a massive problem for benchmark automation, and ISVs could do IHVs and their customers a huge service by simply providing a -benchmark flag that launches the game, runs the benchmark immediately, then exits to desktop.
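As a sketch of what that could look like from the harness side (the executable path and flag names below are hypothetical; no shipping game is assumed to support them):

```python
# Hypothetical standardized benchmark invocation; names are illustrative only.
import subprocess

result = subprocess.run(
    ["game.exe", "-benchmark", "-preset", "ultra", "-duration", "60"],
    timeout=300,  # fail the run instead of hanging on a stuck splash screen
)
print("benchmark exit code:", result.returncode)
```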

 

Quality settings are another challenge; every menu is different and the text file that backs the menu is not always stable or machine-readable. As an example, Wolfenstein Youngblood stores a display modeline index in its configuration file instead of the display resolution, so swapping monitors can silently change your screen resolution. At least on Linux, Shadow of the Tomb Raider frequently resets its quality settings when the driver is upgraded or the GPU is swapped.

 

Many bonus points if the game benchmark can dump out frametime data to some machine-readable path and format (userdata/game/benchmark/timestamped_log.csv works well).
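If games did dump frametimes like that, the post-processing side becomes almost trivial. A minimal sketch, assuming a one-column CSV of frame times in milliseconds:

```python
# Average FPS and 1% lows from a one-column CSV of frame times in ms
# (the single-column layout is an assumption).
import csv

def summarize(log_path):
    with open(log_path, newline="") as f:
        frametimes = sorted(float(row[0]) for row in csv.reader(f))
    avg_fps = 1000.0 * len(frametimes) / sum(frametimes)
    worst = frametimes[-max(1, len(frametimes) // 100):]  # slowest 1% of frames
    low_1pct_fps = 1000.0 * len(worst) / sum(worst)
    return avg_fps, low_1pct_fps
```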

 

The game updates Linus mentions are a bit of a double-edged sword if one isn't careful. If a game optimization drops (or worse, a change that causes a performance regression) in the middle of 80+ hours of testing, whole datasets can be invalidated. I'd love to see Valve give folks more control over when game updates are applied.


Probably already mentioned, but you could use a program like this to create standardized benchmarks for games that don't come with the functionality built in.

 

For example, Overwatch.

 

LTT sets up a benchmarking profile/scene, and then the software either tells you what to do or does it for you (i.e., create a scene on this map with that many scripted bots, run around it along a preset route, etc.; you could even script bot movements using the game's custom game scripting tool).

So end users can just install the game and MarkBench, and then get a result they can share as their "MarkBench Overwatch result".


Just reposting my suggestion from the other topic:


 

Quote

 

  • "esports settings" on popular esports games, serious players would enjoy to know how many thousands of FPS they could get in CSGO when playing in 800x600 resolution with a RTX4090 and a Ryzen 9 7950X . (also, is it even beneficial in any way to get thousands of FPS in a game? maybe you get lower frame time?)
  •  
  • Some kind of benchmark for music production, I don't know how this could be done but Kontakt with too many high quality virtual instruments can lag your whole PC. The devs basically recommend "buy the CPU with highest Passmark Score that you can afford". There's DAWbench but I think there's still a lot to learn on this field.

 

 


2 hours ago, cqcallaw said:

Lots of good work from folks at Valve and Collabora in this space too, e.g. https://steamcommunity.com/games/221410/announcements/detail/1602634609636894200 and https://www.collabora.com/news-and-blog/news-and-events/introducing-nvk.html. I'm not sure LTT wants to become an engineering shop, but advocacy and coverage for such efforts is always great to see!

It's true that Valve contracts Collabora to do a fair bit of work too.

 

That said, NVK in particular is a joint effort between Collabora and Red Hat folks. 

 

 


The program should log values like temperature, clock speed, power draw, and other information from sensors.
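A minimal sketch of what such a logger could look like with psutil (note that psutil exposes temperatures on Linux only, and GPU power/clocks would need a vendor library such as NVML, which is out of scope here):

```python
# Background sensor logger sketch; psutil exposes temperatures on Linux only.
import csv
import time

import psutil

with open("sensors.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_util_pct", "cpu_freq_mhz", "cpu_temp_c"])
    for _ in range(60):  # one sample per second for a minute
        temps = psutil.sensors_temperatures() if hasattr(psutil, "sensors_temperatures") else {}
        cpu_temp = next(iter(temps.values()))[0].current if temps else None
        writer.writerow([time.time(), psutil.cpu_percent(),
                         psutil.cpu_freq().current, cpu_temp])
        time.sleep(1)
```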


Will it be capable of working in the background while the user plays the game themselves?


I would love to see head-to-head comparisons of video cards of the same flavor, i.e., as many different manufacturers of, say, a 3090 as possible, so we can see whether spending tons more for some versions is worth the extra money. Usually they put Nvidia up against AMD, or maybe several different cards like a 3080 Ti versus a 6800 XT or a 3090, but rarely do you see anything that compares an $1,800+ EVGA GeForce RTX 3080 Ti against another card with the same specs that is $700 cheaper from a different manufacturer. I know that some manufacturers cut corners and some don't give you all the bells and whistles, but it would be nice to see real-world results that show whether the $1,800 card is worth $700 more than the similar card that appears to have the same specs.


On 10/16/2022 at 2:06 AM, matrucious said:

Sure, this could add a bit of complexity to the system, but I don't see any downside other than a little work for the devs?

 

I think you already answered your own question here. 😉 A bit of complexity and a bit of dev work often means extra maintenance down the line, and dev work that can't then be put into features that are currently outright missing. From what I have seen, it is still a fairly small team without unlimited resources. Not to mention that modern software development in general works on iterative principles, where you start out with a minimum viable product and build on top of that.

As long as they haven't made some bone-headed core design decisions, they might indeed cut out the middleman in the future.

 

In fact, it is entirely possible that the CSV step was the last step in the pipeline for the initial minimum viable product. That makes sense: to take that data and process it further, the most efficient route is to pull the CSV through libraries that need very little code to make it work, meaning they didn't have to refactor the data-generation bit itself.
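For illustration, a sketch of how little code that CSV-to-library hop takes with pandas (the file name and columns are assumptions, not MarkBench's actual schema):

```python
# Post-processing a results CSV with an off-the-shelf library; the file
# name and column names are assumptions, not MarkBench's actual schema.
import pandas as pd

df = pd.read_csv("run_results.csv")
print(df.groupby(["game", "gpu"])["fps"].agg(["mean", "min", "max"]))
```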

 

Just my 2ct on your question there. It is of course speculation from my side, though it is backed by over a decade in software development. 🙂

There aren't many subjects that benefit from binary takes on them in a discussion.

 

 


I got another idea: compare GPUs with every possible ray tracing / upscaling option available.
For example, if a game supports DLSS 3, a DLSS 3 frame generation toggle, AMD FSR, and ray tracing, then run the tests with the following combinations:
- Nvidia GPU without ray tracing and without upscaling
- AMD GPU without ray tracing and without upscaling
- Nvidia GPU with ray tracing but without upscaling
- AMD GPU with ray tracing but without upscaling
- Nvidia GPU with DLSS 3, no frame generation, without ray tracing
- Nvidia GPU with DLSS 3 and frame generation, without ray tracing
- Nvidia GPU with DLSS 3, no frame generation, with ray tracing
- Nvidia GPU with DLSS 3 and frame generation and ray tracing
- AMD GPU with FSR and without ray tracing
- AMD GPU with FSR and with ray tracing
- Nvidia GPU with FSR and ray tracing

That way you can measure the performance improvements of (a script could generate this matrix automatically; see the sketch below):
- FSR vs DLSS 2 vs DLSS 3 vs DLSS 3 with frame generation
- Ray tracing performance between generations of Nvidia and AMD cards
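A minimal sketch of generating that test matrix programmatically; the feature sets and validity rules below are illustrative assumptions, not a claim about what any card supports:

```python
# Generate valid GPU/feature test combinations; feature sets are assumptions.
from itertools import product

FEATURES = {"nvidia": {"dlss3", "frame_gen", "fsr", "raytracing"},
            "amd": {"fsr", "raytracing"}}

def test_matrix():
    for gpu, upscaler, frame_gen, rt in product(
            FEATURES, ["none", "dlss3", "fsr"], [False, True], [False, True]):
        if upscaler != "none" and upscaler not in FEATURES[gpu]:
            continue  # skip combos the card can't run (e.g. DLSS on AMD)
        if frame_gen and (upscaler != "dlss3" or "frame_gen" not in FEATURES[gpu]):
            continue  # frame generation is assumed here to require DLSS 3
        yield gpu, upscaler, frame_gen, rt

for combo in test_matrix():
    print(combo)
```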


You just dropped the video on undervolting the 4090. Would you be able to build into MarkBench a script to output those voltage/power/FPS graphs from that video?

CPU: Ryzen 5 5600X  | Motherboard: ASROCK B450 pro4 | RAM: 2x16GB  | GPU: MSI NVIDIA RTX 2060 | Cooler: Noctua NH-U9S | SSD: Samsung 980 Evo 1T 


Love the idea! As a beginner overclocker who previously based all my performance measurements on Geekbench 4 and 5 results, knowing the difference between the benchmarks, and how MarkBench can show a more real-world performance picture with extra automation, sounds awesome.

Can't wait to test the program out soon!

I told my date to meet me at the gym, but she never came. I guess we just aren't gonna work out

 

(All Specs in Profile)


Thank you. This inspired me to finally automate getting an achievement in Forza Horizon 5 that requires you to buy 400 cars from the autoshow. You think I'm going to buy them manually? No, because I have the Python power and I read Automate the Boring Stuff with Python. https://github.com/dmoneyballer/buy_cheap_forza_horizon_5_cars_from_autoshow/tree/main
video here: 

 


I would love to see more info on how well a given processor handles gaming while having lots of other stuff open. I use my gaming computer as my business computer as well, and I often game during the day when I don't have work that needs doing (the joys of working in IT). I need to keep all my other business applications open and running on another screen so that when a call comes in, I'm ready to pause my game and work on something for a client. I feel like nearly every CPU/GPU discussion is about how well it performs at one of those tasks exclusively, but not both at the same time. So far, I've always chosen processors with 6+ real cores because they seemed to have enough punch while staying in my price range; can't use a Ryzen 3 or an i3 with only 4 cores, now can I? A benchmark for such mixed use cases seems to be missing from reviews.


On 10/13/2022 at 11:06 PM, AdamFromLTT said:

Can I contribute?

In the future? Maybe! In the now? Development will be done in-house. BUT, you can contribute by providing feedback and feature suggestions in this thread!

 

Bit late to this post, but really glad to see this footnote!

As I mentioned before in one of my only posts here, I'd love to contribute to Labs OSS. If my current career weren't the way it is now, I'd be applying to Labs anyway: I have a lot of interest in most of what LMG does as a company, and giving some sort of support other than currency would be pretty cool. Not sure I'd be able to contribute to some of the driver-level stuff for this project in particular, but I'm willing to learn 🙂

 

Previously it seemed like Linus was not particularly interested in OSS development given his reply; has this changed at all? (Edit: I remembered this incorrectly.)

 


When Paul's Hardware does comparison charts, he's started using different colours to differentiate the brands. Could LMG do this when posting MarkBench results? I understand that colours are used to differentiate various datasets (average FPS vs 1% lows, for example), but colour gradients could be used for this instead.

For example, the latest video about AMD video cards could have used a Red-Orange-Yellow gradient for the various AMD data points, while the data points for Nvidia could have gone Forest Green-Emerald-Olive. Charts including Intel could go Navy Blue-Royal Blue-Baby Blue. As long as the colours within the charts are of comparable darkness/lightness for each respective data point, it should make it much easier to distinguish the relative positioning of each card within the stack.
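A quick sketch of the idea with matplotlib, using built-in sequential colormaps as stand-ins for the brand gradients (the cards and FPS numbers are placeholders, not real results):

```python
# Brand-gradient bar colors; card names and FPS values are placeholders.
import matplotlib.pyplot as plt
from matplotlib import cm

cards = ["RX 7900 XTX", "RX 7900 XT", "RTX 4090", "RTX 4080"]
fps = [95, 84, 120, 98]  # placeholder numbers, not measurements
# Reds for AMD, Greens for Nvidia; shade encodes position within the stack.
colors = [cm.Reds(0.8), cm.Reds(0.5), cm.Greens(0.8), cm.Greens(0.5)]

plt.barh(cards, fps, color=colors)
plt.xlabel("Average FPS")
plt.tight_layout()
plt.show()
```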


tl;dr

I assume it's going to include things like being able to run the tests with ray tracing and then without, back to back.

 

As a side project, since it'll be part of the full MarkBench anyway: create a little program that disables motion blur in all games.


On 10/20/2022 at 7:16 AM, Psychopath028 said:

Will it be capable of working in the background while the user plays the game themselves?

No, this is a benchmark suite, not a performance-monitoring utility. You wouldn't get consistent results based off real gameplay, and running a second instance of the game would defeat the point of trying to measure top performance.


Am I too late? I'd also like to suggest a 7-Zip benchmark for LZMA compression/decompression. I think you already used this for some videos, if I'm not mistaken.

 

https://sevenzip.osdn.jp/chm/cmdline/commands/bench.htm

 

Also y-cruncher for calculating Pi; it scales very well with more threads/cores, so it might be good to see how quickly a CPU can calculate X billion digits of Pi. It also stresses the RAM/memory bus quite a lot.

 

https://www.numberworld.org/y-cruncher/

 

These are all command-line based, so that probably makes them easier to integrate.
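Agreed that command-line tools are the easy case. A minimal sketch of wiring in the `7z b` benchmark command documented at the link above (the output parsing is deliberately left naive):

```python
# Run 7-Zip's built-in LZMA benchmark and capture its raw output; the
# combined MIPS rating appears near the end of the output (the "Tot:" line).
import subprocess

proc = subprocess.run(["7z", "b"], capture_output=True, text=True, check=True)
print(proc.stdout)
```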


On 10/13/2022 at 7:03 PM, RTX 3071 said:

There should be user-made scripts/configs for games that aren't officially in the app. That was the first thing that came to mind.

...

This is crucial!

