
First DirectX 12 game benchmarked *Update 2: More benchmarks*

PC Perspective has tested a new alpha build of the first DirectX 12 game, Ashes of the Singularity.

 

[Image: Ashes of the Singularity logo]

 


 

DirectX 12 is the newest gaming API, exclusive to Windows 10, offering less CPU overhead, better multithreaded performance on CPUs, and lower-level access to GPU hardware.
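To illustrate what "better threaded performance" means in practice, here is a minimal sketch of the DX12 pattern: each worker thread records draw commands into its own command list, and submission is a single cheap call on the main thread, instead of every call funneling through one driver-owned immediate context as in DX11. This is my own illustrative sketch, not code from Ashes or its Nitrous engine; the thread count and structure are assumptions.

```cpp
// Minimal D3D12 multithreaded command recording sketch (illustrative only).
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    // nullptr adapter = default hardware adapter.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    D3D12_COMMAND_QUEUE_DESC queueDesc = {};  // defaults to a direct queue
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    const int kThreads = 4;                   // assumed worker count
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(kThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kThreads);
    std::vector<std::thread> workers;

    for (int i = 0; i < kThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        // Each thread records into its own list; no global driver lock
        // as with a DX11 immediate context.
        workers.emplace_back([&lists, i] {
            // ... record state changes and draw calls here ...
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // Submission is one inexpensive call on the main thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    return 0;
}
```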

 

Ashes of the Singularity, made by the same team behind the Mantle-based Star Swarm demo, is set to be the first DirectX 12 game to reach the market later this year. PC Perspective has gotten hold of an alpha build with built-in benchmarks to test how DX12 can benefit gaming; in this case, a real-time strategy game.

 

But there's nothing new without a little drama:


First, NVIDIA claims that the MSAA implementation in the game engine currently has an application-side bug that the developer is working to address, and thus any testing done with AA enabled was invalid. (I happened to get wind of this complaint early and did all testing without AA to avoid the complaints.) Oxide and Stardock dispute this claim of a “game bug” and instead chalk it up to early drivers and a new API.

 

Secondly, and much more importantly, NVIDIA makes the claim that Ashes of the Singularity, in its current form, “is [not] a good indicator of overall DirectX 12 gaming performance.”

 

This is not a surprise when we get into the actual benchmarks comparing the AMD R9 390X and NVIDIA's GTX 980:

 

NVIDIA GTX 980:

 

[Image: Ashes of the Singularity heavy benchmark, GTX 980]

 

AMD R9 390X:

 

[Image: Ashes of the Singularity heavy benchmark, R9 390X]

 

It’s undoubtedly clear from our data that NVIDIA has vastly superior DX11 drivers when compared to AMD. DX11 is an API that requires a lot of optimization for games and for multi-threading and it would appear that NVIDIA’s engineering team has spent a lot of time and resources making sure theirs is the fastest and best performing platform. This makes sense – DX11 has been around for a long time and will likely be here for quite a while to come. But with the move to DX12, AMD’s less expensive GPU was able to match performance with the GTX 980, a card that was as much as 90% faster in the older API. How? Did AMD suddenly become API coding geniuses and tweak its driver for vastly different DX12 behavior than for DX11? I doubt it. Instead, we are seeing a combination of the work AMD did on Mantle / Vulkan APIs and the generalization of game engines that is bound to happen when an engine has more direct access to the GPU hardware than it has ever had. And it doesn’t hurt that AMD has been working with Oxide and the Nitrous engine for many years as it was one of the few game engines to ever really implement Mantle.

 


 

Update 1:

ExtremeTech has also published their own benchmarks, using an AMD Fury X and an NVIDIA 980 Ti. They have also tested with MSAA, which NVIDIA actively called a "game bug" in its reviewer's guide, sent out to multiple review sites.

 

The article includes an official retort from Oxide Games, developers of Ashes of the Singularity, stating that their AA implementation is not bugged:

 

There are incorrect statements regarding issues with MSAA. Specifically, that the application has a bug in it which precludes the validity of the test. We assure everyone that is absolutely not the case. Our code has been reviewed by Nvidia, Microsoft, AMD and Intel. It has passed the very thorough D3D12 validation system provided by Microsoft specifically designed to validate against incorrect usages. All IHVs have had access to our source code for over a year, and we can confirm that both Nvidia and AMD compile our very latest changes on a daily basis and have been running our application in their labs for months. Fundamentally, the MSAA path is essentially unchanged in DX11 and DX12. Any statement which says there is a bug in the application should be disregarded as inaccurate information.
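For context on what the disputed "MSAA path" touches: in D3D12 the application, not the driver, decides how MSAA is configured, and it is expected to query the device for supported sample counts first. The sketch below is only my illustration of that capability check, not Oxide's code:

```cpp
// Query 4x MSAA support on the default adapter (illustrative sketch).
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    D3D12_FEATURE_DATA_MULTISAMPLE_QUALITY_LEVELS msaa = {};
    msaa.Format = DXGI_FORMAT_R8G8B8A8_UNORM;  // assumed render target format
    msaa.SampleCount = 4;                      // 4x MSAA
    device->CheckFeatureSupport(D3D12_FEATURE_MULTISAMPLE_QUALITY_LEVELS,
                                &msaa, sizeof(msaa));
    // NumQualityLevels == 0 would mean 4x MSAA is unsupported here.
    printf("4x MSAA quality levels: %u\n", msaa.NumQualityLevels);
    return 0;
}
```

The same sample count then has to be set on both the render target and the pipeline state, which is why an MSAA problem can plausibly sit on either side of the application/driver boundary.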

 

And the conclusion from ExtremeTech:

 

Nvidia’s strong performance in DX11, however, is overshadowed by negative scaling in DirectX 12 and the complete non-existence of any MSAA bug. Given this, it’s hard not to think that Nvidia’s strenuous objections to Ashes had more to do with its decision to focus on DX11 performance over DX12 or its hardware’s lackluster performance when running in that API.

 


 

Update 2:

Ars Technica has made a comparison between an NVIDIA GTX 980 Ti and an AMD R9 290X from 2013, based on GCN 1.1:

[Image: Ars Technica heavy benchmark charts]

There have been some accusations that there might be vendor bias in the game, so I have quoted the official response from the Oxide blog:

 

Often we get asked about fairness, that is, usually in regards to treating Nvidia and AMD equally. Are we working closer with one vendor than another? The answer is that we have an open access policy. Our goal is to make our game run as fast as possible on everyone’s machine, regardless of what hardware our players have.

To this end, we have made our source code available to Microsoft, Nvidia, AMD and Intel for over a year. We have received a huge amount of feedback. For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware, they offered an optimized shader that made things faster which we integrated into our code.

We only have two requirements for implementing vendor optimizations: We require that it not be a loss for other hardware implementations, and we require that it doesn’t move the engine architecture backward (that is, we are not jeopardizing the future for the present).

 

NVIDIA has also launched its Ashes of the Singularity-optimized driver, which was used in all of the benchmarks in this post (it came with the reviewer's guide):

http://www.geforce.com/whats-new/articles/geforce-355-60-whql-driver-released

 

Please bear in mind that this is one game, so it is not representative of all upcoming DX12 games, nor of the DX12 API itself; it is merely one RTS, a genre that tends to be heavy on the CPU, and in this case on draw calls.
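To put the draw call point in numbers, here is a back-of-envelope calculation; the unit count and draws per unit are purely my assumptions, but they show how quickly an RTS scene reaches the tens of thousands of draw calls per frame cited in these benchmarks:

```cpp
// Back-of-envelope draw call estimate for an RTS scene (assumed figures).
#include <cstdio>

int main() {
    const long units = 5000;        // hypothetical on-screen unit count
    const long drawsPerUnit = 4;    // e.g. hull, turret, effect, shadow
    const long fps = 60;

    const long perFrame = units * drawsPerUnit;      // 20,000 per frame
    printf("%ld draw calls/frame, %ld/s at %ld fps\n",
           perFrame, perFrame * fps, fps);           // 1,200,000 per second
    return 0;
}
```

At that rate, the per-call CPU cost of the API and driver dominates, which is exactly where DX12's lower overhead shows up.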

 

Sources:

PCPer article: http://www.pcper.com/reviews/Graphics-Cards/DX12-GPU-and-CPU-Performance-Tested-Ashes-Singularity-Benchmark

Official Oxide Games retort: http://oxidegames.com/2015/08/16/the-birth-of-a-new-api/

Extremetech article: http://www.extremetech.com/gaming/212314-directx-12-arrives-at-last-with-ashes-of-the-singularity-amd-and-nvidia-go-head-to-head

Ars Technica article: http://arstechnica.co.uk/gaming/2015/08/directx-12-tested-an-early-win-for-amd-and-disappointment-for-nvidia/

 

 


My personal take:

 

This is great news and an interesting introduction to the new API we have all been looking forward to. RTS games could end up benefiting the most from the much higher draw call count, at least to begin with.

 

We also knew NVIDIA's DX11 drivers are better optimized and, more importantly, properly multithreaded, giving NVIDIA hardware a leg up that is not hardware based. DX12 might just be the API to level the playing field, or even give AMD a huge head start, or at least an advantage that could earn their cards king-of-the-hill status. It would make things even more interesting if PCPer did a 980 Ti versus Fury X benchmark battle.

 

DX12 (and maybe Vulkan, for that matter) might shake up the GPU market quite a bit; if so, AMD stands to gain a great deal from it.

Update 1:

Wow, Oxide Games has officially debunked NVIDIA's claim that the AA is bugged. As I said, we knew NVIDIA's DX11 drivers are multithreaded, giving better performance than AMD's, but seeing the playing field leveled so much in DX12, and NVIDIA being outright untruthful about it, is pretty shocking. Are they that afraid?

 

Update 2:

Officially, NVIDIA has had access to the source code of this game for over a year. They've had access to DX12 for a long time too. They've even had their own shader optimizations implemented into the game. And they've released an official Ashes of the Singularity driver specifically for these benchmarks. Either NVIDIA hardware just isn't powerful enough to run this game much faster, and/or NVIDIA's own DX12 drivers simply aren't working as expected yet.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Wow looks good

Laptop: Thinkpad W520 i7 2720QM 24GB RAM 1920x1080 2x SSDs Main Rig: 4790k 12GB Hyperx Beast Zotac 980ti AMP! Fractal Define S (window) RM850 Noctua NH-D15 EVGA Z97 FTW with 3 1080P 144hz monitors from Asus Secondary: i5 6600K, R9 390 STRIX, 16GB DDR4, Acer Predator 144Hz 1440P

As Centos 7 SU once said: With great power comes great responsibility.


O.o Those 390X benchmarks

Setup

CPU: i3 4160|Motherboard: MSI Z97 PC MATE|RAM: Kingston HyperX Blue 8GB(2x4GB)|GPU: Sapphire Nitro R9 380 4GB|PSU: Seasonic M12II EVO 620W Modular|Storage: 1TB WD Blue|Case: NZXT S340 Black|PCIe devices: TP-Link WDN4800| Montior: ASUS VE247H| Others: PS3/PS4


So as I've been saying for a couple of months now... AMD's DX11 drivers have more CPU overhead.

 

I'm not sure why people think this is bad for Nvidia. This is GOOD for Nvidia. Their DX11 drivers are extremely efficient already.

The only one that loses here is the 8370. It got rekt.


So basically what these benchmarks say is that Ashes has complete garbage CPU utilization. Not that these benchmarks make much sense at all.

Ryzen 3700x -Evga RTX 2080 Super- Msi x570 Gaming Edge - G.Skill Ripjaws 3600Mhz RAM - EVGA SuperNova G3 750W -500gb 970 Evo - 250Gb Samsung 850 Evo - 250Gb Samsung 840 Evo  - 4Tb WD Blue- NZXT h500 - ROG Swift PG348Q


Wow, if this is true, the improvement on the 390X is really, really impressive.

CPU: i9-13900k MOBO: Asus Strix Z790-E RAM: 64GB GSkill  CPU Cooler: Corsair H170i

GPU: Asus Strix RTX-4090 Case: Fractal Torrent PSU: Corsair HX-1000i Storage: 2TB Samsung 990 Pro

 


How did you arrive at this conclusion?

For a long time we have known DX12 is really only going to help with the CPU. These performance gains are pretty huge, therefore the CPU utilization is clearly a problem with this game. But on the other hand, the low-end chips that should be getting the biggest boosts seem to be limited by the GPU, because the frame rates are the same across the board between high and low settings. These numbers don't really make any sense. Do you not see all the problems in these graphs?



Ballin'

ROG X570-F Strix AMD R9 5900X | EK Elite 360 | EVGA 3080 FTW3 Ultra | G.Skill Trident Z Neo 64gb | Samsung 980 PRO 
ROG Strix XG349C Corsair 4000 | Bose C5 | ROG Swift PG279Q

Logitech G810 Orion Sennheiser HD 518 |  Logitech 502 Hero

 


For a long time we have known DX12 is really only going to help with the CPU. These performance gains are pretty huge, therefore the CPU utilization is clearly a problem with this game.

 

Only on AMD. And that's because their DX11 drivers had a large CPU overhead. Their DX11 drivers were bad, therefore they scale well, since the number of draw calls on DX12 will be significantly higher. CPU utilization is fine.

 

I see no problem, because I know what it is I'm looking at.


So as I've been saying for a couple of months now... AMD's DX11 drivers have more CPU overhead.

I'm not sure why people think this is bad for Nvidia. This is GOOD for Nvidia. Their DX11 drivers are extremely efficient already.

The only one that loses here is the 8370. It got rekt.

Low IPC is low IPC. But then again, it does depend on how many cores the game was using/allocating resources to.

 

It is an alpha build; my guess is it will do slightly better with more optimization (barely equal to or better than the i3 at the very best).

 

But... imagine the Fury X now....

mein gott


I mean really... The raw computing performance of all of AMD's offerings massively outdoes that of Nvidia at the same fps, so if somehow the field were leveled again for both, I would expect AMD to see way more improvement than Nvidia.

 

@SeanBond what that means is the CPU is still a bottleneck in that game.

 

Seriously though, that game must be really bad at multithreaded workloads, because that is one hell of a CPU bottleneck they've got there.

 

If you ignore the delta and just look at the pure numbers, this is basically saying that for this game a 390X = 980. I would be interested to see how much VRAM the game is using at this resolution.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


But... imagine the Fury X now....

mein gott

 

Yep, AMD graphics cards were held back by drivers. I'm fairly sure the Fury X + Intel CPU is going to be ballin'

 

 

Seriously though, that game must be really bad at multithreaded workloads, because that is one hell of a CPU bottleneck they've got there.

 
There is no CPU bottleneck. I fear you people are just interpreting the data incorrectly. I'd rather say there is a GPU bottleneck, seeing as the difference between the 5960X and 6700K is very small. Switch the 980 for a 980 Ti and the 390X for a Fury X, and you'll know I'm right.

Fuck yea, AMD GPUs are hauling ass.

And the FX-8370 is getting its ass beat by an i3 even when "games will use more than 4 cores".

And a 6700K is obliterating a 5960X. This game has some of the worst multicore scaling I've seen haha.



And a 6700K is obliterating a 5960X. This game has some of the worst multicore scaling I've seen haha.

 

Obliterating? Sensationalizing as an art form. It's a 4.2GHz 4C/8T CPU vs. a 3.0GHz 8C/16T CPU. Of course they will vary in response. But I'm not seeing numbers that put the 6700K significantly ahead. It's also only two cards from different vendors. To see whether the CPUs are the bottleneck, you need more graphics cards in the test. If you see no significant increase with a 980 Ti or Fury X, there's reason to believe the scaling is bad.


So as I've been saying for a couple of months now... AMD's DX11 drivers have more CPU overhead.

I'm not sure why people think this is bad for Nvidia. This is GOOD for Nvidia. Their DX11 drivers are extremely efficient already.

The only one that loses here is the 8370. It got rekt.

 

I was expecting to see some nonsensical rationalizing to justify preliminary and meaningless results regardless. Nice to see I was not disappointed.



Obliterating? Sensationalizing as an art form. It's a 4.2GHz 4C/8T CPU vs. a 3.0GHz 8C/16T CPU. Of course they will vary in response. But I'm not seeing numbers that put the 6700K significantly ahead...

Look at the 1080p and 1600p low benchmarks (where CPU utilization is highest): there is a 5-15% difference in performance, always in favor of the 6700K. That's HUGE in the world of CPU impact on gaming performance.

 

 

Also @Notional, you should put up the average and heavy % difference charts (they are very informative, imho).



For a game that is supposed to have a shit ton of units and projectiles and stuff flying across the screen, it's looking poorly designed, since these graph numbers make it look single-threaded af.

CPU: Intel i7 - 5820k @ 4.5GHz, Cooler: Corsair H80i, Motherboard: MSI X99S Gaming 7, RAM: Corsair Vengeance LPX 32GB DDR4 2666MHz CL16,

GPU: ASUS GTX 980 Strix, Case: Corsair 900D, PSU: Corsair AX860i 860W, Keyboard: Logitech G19, Mouse: Corsair M95, Storage: Intel 730 Series 480GB SSD, WD 1.5TB Black

Display: BenQ XL2730Z 2560x1440 144Hz


 

Look at the 1080p and 1600p low benchmarks (where CPU utilization is highest): there is a 5-15% difference in performance, always in favor of the 6700K. That's HUGE in the world of CPU impact on gaming performance.

 

Are you looking at the heavy results? The one that is actually CPU-heavy (>20,000 draw calls)?

http://www.pcper.com/files/imagecache/article_max_width/review/2015-08-16/ashesheavy-gtx980.png

No significant difference.

 

 

I was expecting to see some nonsensical rationalizing to justify preliminary and meaningless results regardless. Nice to see I was not disappointed.

 

Are you going to actually retort, or just make some Wikipedia-level psych evaluation that shows you projecting more than anything else?

 

 

For a game that is supposed to have a shit ton of units and projectiles and stuff flying across the screen, it's looking poorly designed, since these graph numbers make it look single-threaded af.

 
If it were single-threaded, the 5960X, 6700K and 4330 would've been much closer. The gap between the 4330 and the 6700 says it's not single-threaded.
Link to comment
Share on other sites

Link to post
Share on other sites

Do GTX 670 and R9 200 series tests too.

One thing is for certain from these results: the FX CPUs are absolute trash, a waste of money for gaming.

