Red Dead Redemption 2 PC benchmarks - move over Crysis (UPDATED)

Recommended Posts

Been playing flat out since Sunday, 1440p Ultra Preset, not had a single hiccup, skip, stutter or anything else.

 

I have had some weird flickering in certain places and some pop-in during cutscenes. Nothing too jarring, and it didn't affect gameplay at all, but it's definitely there.

 

Overall it's a solid experience, much better than some of the other shit that gets shovelled out.

 

Does anyone know how to measure FPS on Vulkan? None of my normal tools work for FPS.
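
For what it's worth, the measurement itself needs nothing Vulkan-specific: an FPS counter just times the gap between presented frames, and the usual overlays presumably show nothing because they never hook Vulkan's present call. A minimal sketch of the idea, with renderFrame() as a hypothetical stand-in for the game's record/submit/present work:

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical stand-in for the game's per-frame work (record, submit, present).
static void renderFrame() {}

int main() {
    using clock = std::chrono::steady_clock;
    auto windowStart = clock::now();
    int frames = 0;

    for (;;) {                      // the game loop
        renderFrame();
        ++frames;

        std::chrono::duration<double> elapsed = clock::now() - windowStart;
        if (elapsed.count() >= 1.0) {   // report once per second
            std::printf("%.1f fps\n", frames / elapsed.count());
            frames = 0;
            windowStart = clock::now();
        }
    }
}
```

Tools built on present-time capture (OCAT, for example) reportedly do handle Vulkan.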

 


Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600MHz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144Hz HDR FreeSync 2 | Windows 10 Pro x64 |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)

7 minutes ago, Master Disaster said:

Does anyone know how to measure FPS on Vulkan? None of my normal tools work for FPS.

Isn't there the OSD feature in the AMD drivers?


Main system: Asus Maximus VIII Hero, i7-6700k stock, Noctua D14, G.Skill Ripjaws V 3200 2x8GB, Gigabyte GTX 1650, Corsair HX750i, In Win 303 NVIDIA, Samsung SM951 512GB, WD Blue 1TB, HP LP2475W 1200p wide gamut

Gaming system: Asrock Z370 Pro4, i7-8086k stock, Noctua D15, Corsair Vengeance Pro RGB 3200 4x16GB, Asus Strix 1080Ti, Fractal Edison 550W PSU, GameMax Silent, Optane 900p 280GB, Crucial MX200 1TB, Sandisk 960GB, Acer Predator XB241YU 1440p 144Hz G-sync

Gaming system 2: Asus X299 TUF mark 2, 7920X, Noctua D15, Corsair Vengeance RGB 8x8GB, Gigabyte RTX 2070, Corsair HX1000i, Gamemax Abyss, Samsung 960 Evo 512GB, LG OLED55B9PLA

VR system: Asus Z170I Pro Gaming, i7-6700T stock, Scythe Kozuti, Kingston Hyper-X 2666 2x8GB, Zotac 1070 FE, Corsair CX450M, Silverstone SG13, Samsung PM951 256GB, Crucial BX500 1TB, HTC Vive

Gaming laptop: Asus FX503VD, i5-7300HQ, 2x8GB DDR4, GTX 1050, Sandisk 256GB + 480GB SSD


For the most part I have everything on Ultra and High, nothing lower. DX12 was super choppy on my rig; even at high frame rates it still felt meh, but Vulkan for me is smooth as butter. I was playing at 60fps with very minor drops to about 55fps, and getting 75fps in some areas. Then I decided to goof around in settings and found that turning down not just the grass detail slider but also the water physics detail slider drastically improved my FPS while still looking the same. With everything else unchanged, I went from 55fps low, 60fps average, 75fps max to 80fps low, 86fps average, and 90fps max, just by setting the grass detail and water physics detail sliders to halfway instead of 100%.


Main Desktop: CPU - i9-9900K @ 5GHz | Mobo - Gigabyte Z390 Aorus Master | GPU - Asus ROG STRIX 2080 Ti OC | RAM - G.Skill Trident Z RGB 16GB 3200MHz | AIO - H100i Pro RGB | PSU - EVGA 850 GQ | Case - Fractal Design Meshify C White | Storage - Samsung 970 Pro M.2 NVMe SSD 512GB / Sabrent Rocket 1TB NVMe SSD / Samsung 860 Evo Pro 500GB / Seagate 2TB FireCuda Hybrid Drive |

 

TV Streaming PC: Intel NUC CPU - i7 8th Gen | RAM - 16GB DDR4 2666MHz | Storage - 256GB WD Black M.2 NVMe SSD |

 

Phone: Samsung Galaxy S10+ - Ceramic White 512GB |

 

If you ask for a mid-tower case recommendation, I will 90% of the time recommend the Fractal Design Meshify C or S2.

5 hours ago, porina said:

Isn't there the OSD feature in the AMD drivers?

Yep, doesn't work with Vulkan though. FPS shows as 0.




I'm curious whether, since this is a DX12 title, it supports Crossfire in some way...


AX1600i owner. My WIP Power Supply Guide: https://docs.google.com/spreadsheets/d/1_GMev0EwK37J3zZL98zIqF-OSBuHlFEHmrc_SPuYsjs/edit?usp=sharing

Posted · Original Poster
3 hours ago, PSUGuru said:

I'm curious whether, since this is a DX12 title, it supports Crossfire in some way...

Vulkan and DX12 have explicit multi-GPU support in the API. It's different from Crossfire and SLI: the game developer has to specifically program for it to spread the load across multiple graphics chips.
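
On the Vulkan side it looks roughly like this: since Vulkan 1.1, linked GPUs are exposed as "device groups" that the application has to enumerate and then drive itself. A minimal sketch, assuming a VkInstance created for Vulkan 1.1+ and omitting error handling:

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

// Lists the device groups visible to the instance. A group containing more
// than one physical device is a linked GPU set the engine can drive
// explicitly; how the work gets split across them is entirely up to the game.
void listDeviceGroups(VkInstance instance) {
    uint32_t count = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &count, nullptr);

    std::vector<VkPhysicalDeviceGroupProperties> groups(count);
    for (auto& g : groups) {
        g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
        g.pNext = nullptr;
    }
    vkEnumeratePhysicalDeviceGroups(instance, &count, groups.data());

    for (uint32_t i = 0; i < count; ++i)
        std::printf("group %u: %u GPU(s)\n", i, groups[i].physicalDeviceCount);
}
```

Unless a game does something like this and then splits its rendering across the group, a second card just sits idle, which is why explicit multi-GPU support is so rare.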


I assume people here are following the investigations by Gamers Nexus into the performance issues?

They released another video just yesterday. TL;DR: if you manage to get into a very specific situation where you have too few cores but they are very fast, the game starts to experience multiple-second-long "stutters" (personally I'd just call that freezing).

They also test async compute on Nvidia and AMD and find no benefit to either. As an aside, AMD seems to have some issues with frametime consistency.


Solve your own audio issues  |  First Steps with RPi 3  |  Humidity & Condensation  |  Sleep & Hibernation  |  Overclocking RAM  |  Making Backups  |  Displays  |  4K / 8K / 16K / etc.  |  Do I need 80+ Platinum?

If you can read this you're using the wrong theme.  You can change it at the bottom.


GTA 4 on PC before the patch a couple of years ago was so hard to run. With a 5820K and a 1080 I couldn't even do 4K, on a 12-year-old game. I think I was getting under 60fps at 1440p.

13 minutes ago, corsairian said:

GTA 4 on PC before the patch a couple of years ago was so hard to run. With a 5820K and a 1080 I couldn't even do 4K, on a 12-year-old game. I think I was getting under 60fps at 1440p.

Even with the patch it was never an efficient game. In fact, "the patch" depends on which one you're referring to, because contrary to what you'd hope and expect, the latest version wasn't always the best. I distinctly recall having the best performance with 1.0.4.0 (patch 4), even though they eventually went to 1.0.7.0 IIRC, and AFAIK I was not alone in this.



59 minutes ago, Ryan_Vickers said:

They also test async compute on Nvidia and AMD and find no benefit to either.

I'm starting to see the option to toggle async compute as something like the turbo button on old PCs. Turning async compute off, so far, will either do almost nothing or degrade performance. It should just be a thing that comes with using the DX12/Vulkan render path. Even on compatible GPUs that can't actually execute multiple command queues concurrently, the driver simply does some magic to make them act like a single command queue.

 

Plus, I find trying to poke at AMD or NVIDIA over "who has better async compute support" silly. The only question to answer with regard to async compute support is "does the GPU accept different command queues?"
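
To put that question in Vulkan terms, "accepting different command queues" more or less boils down to the GPU exposing a compute-capable queue family separate from the graphics one. A rough sketch of the check, assuming a valid VkPhysicalDevice:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

// Returns the index of a compute-only queue family (compute without graphics),
// or UINT32_MAX if the device only offers combined graphics+compute queues.
uint32_t findDedicatedComputeFamily(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i) {
        const VkQueueFlags flags = families[i].queueFlags;
        if ((flags & VK_QUEUE_COMPUTE_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT))
            return i;
    }
    return UINT32_MAX;  // no dedicated compute family exposed
}
```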

25 minutes ago, Mira Yurizaki said:

I'm starting to see the option to toggle async compute as something like the turbo button on old PCs. Turning async compute off, so far, will either do almost nothing or degrade performance. It should just be a thing that comes with using the DX12/Vulkan render path. Even on compatible GPUs that can't actually execute multiple command queues concurrently, the driver simply does some magic to make them act like a single command queue.

 

Plus, I find trying to poke at AMD or NVIDIA over "who has better async compute support" silly. The only question to answer with regard to async compute support is "does the GPU accept different command queues?"

I suppose there's potentially quite a lot to it.  Does that setting in the file actually change how the game works, or is it just a leftover with no function?  How do different cards handle async compute?  Perhaps the traditional memes of "nvidia can't do it, only AMD knows how" aren't necessarily accurate, but it's hard to imagine they just appeared out of nowhere with no basis in reality.  I suspect at some point someone tried using async compute on AMD and nvidia cards and drew conclusions from which one benefited more - nothing fancy, but information worth knowing at the time, assuming it was tested correctly.  Whether those results were even accurate then, and whether they still hold up today, is another story though.



30 minutes ago, Ryan_Vickers said:

I suppose there's potentially quite a lot to it.  Does that setting in the file actually change how the game works, or is it just a leftover with no function?

That is a good question, because asynchronous compute isn't really an application feature. It's something the hardware has to do to handle the multiple command queues the application makes. So if anything, if the application is really doing something to influence what the hardware does, then all it's doing is shoving everything down a single queue. Or, to use Windows' terms, a single engine on the GPU, likely the Graphics engine.
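
As a rough sketch of that in Vulkan (the queue and command buffer handles are hypothetical, and synchronization is left out), the only application-side difference is which queue the compute work is submitted to:

```cpp
#include <vulkan/vulkan.h>

// With async compute, graphics and compute command buffers go to separate
// queues so the GPU may overlap them; with it "off", everything is pushed
// down the single graphics queue and runs serialized behind the graphics work.
void submitFrame(VkQueue graphicsQueue, VkQueue computeQueue,
                 VkCommandBuffer gfxCmd, VkCommandBuffer compCmd,
                 bool asyncCompute) {
    VkSubmitInfo gfx{VK_STRUCTURE_TYPE_SUBMIT_INFO};
    gfx.commandBufferCount = 1;
    gfx.pCommandBuffers = &gfxCmd;

    VkSubmitInfo comp{VK_STRUCTURE_TYPE_SUBMIT_INFO};
    comp.commandBufferCount = 1;
    comp.pCommandBuffers = &compCmd;

    if (asyncCompute) {
        vkQueueSubmit(graphicsQueue, 1, &gfx, VK_NULL_HANDLE);
        vkQueueSubmit(computeQueue, 1, &comp, VK_NULL_HANDLE);
    } else {
        // Same work, one queue. (In a real engine the compute work would be
        // recorded into a graphics-family command buffer for this path.)
        vkQueueSubmit(graphicsQueue, 1, &gfx, VK_NULL_HANDLE);
        vkQueueSubmit(graphicsQueue, 1, &comp, VK_NULL_HANDLE);
    }
}
```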

 

EDIT: On that note, I should find a game in my library that has an async compute toggle and see if this really happens.

 

Quote

How do different cards handle async compute?

While I feel like it's interesting to go over this, this is an implementation detail. It'd be like asking how Intel and AMD handle x86-64 instructions.

 

Quote

Perhaps the traditional memes of "nvidia can't do it, only AMD knows how" aren't necessarily accurate, but it's hard to imagine they just appeared out of nowhere with no basis in reality.  I suspect at some point someone tried using async compute on AMD and nvidia cards and drew conclusions from which one benefited more - nothing fancy, but information worth knowing at the time, assuming it was tested correctly.  Whether those results were even accurate then, and whether they still hold up today, is another story though.

I think this all stemmed from the beginning, when the developers of Ashes of the Singularity called foul on NVIDIA for telling them to use a specific code path or something. This is on top of a handful of examples showing that AMD gained a far bigger performance boost from async compute than NVIDIA did.

 

However, I think we're attributing performance gains to a single attribute when DirectX 12 and Vulkan introduced a lot of things that were meant to help with performance. For example, how does one explain this? (from https://www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/)

 

[Chart from the linked AnandTech article: Star Swarm performance under DX11, DX12, and Mantle]

 

As far as I can tell, Star Swarm doesn't use async compute, and yet we're seeing massive performance gains simply by using DX12/Mantle.  And while I was digging around for information on this topic, I came across this slide deck from a GDC presentation. Slides 9 and 12 interest me the most:

 

From Slide 9:

Quote

Hardware Details [AMD]

  • 4 SIMD per CU
  • Up to 10 Wavefronts scheduled per SIMD
    • Accomplish latency hiding
    • Graphics and Compute can execute simultaneously on same CU
  • Graphics workloads usually have priority over Compute

 

From Slide 12:

Quote

Hardware Details [NVIDIA]

  • Compute scheduled breadth first over SMs
  • Compute workloads have priority over graphics
  • Driver heuristic controls SM distribution

If what this slide deck is saying is true, then it makes sense to me why NVIDIA doesn't see as much of a boost from async compute as AMD. If you're already prioritizing compute workloads, then for a feature that's described as squeezing compute tasks in alongside graphics ones, I don't see how you'll get much of a boost in performance.

 

On another note, while looking around for information on async compute, I embarrassingly found out that my idea of how NVIDIA's Kepler schedules tasks was wrong. I thought the drivers scheduled workloads from beginning to end and the GPU simply distributed the work as best as possible, i.e., that it's just a dumb instruction dispatcher. But it's not: it has a typical scheduler, though an aspect of Fermi's scheduler was chopped off because it could be done in software. So how NVIDIA and AMD schedule tasks on their GPUs is mostly a moot point.



