First DirectX 12 game benchmarked (Update 2: more benchmarks)

I sincerely hope these are real and AMD kicks Nvidia's ass with their current GPUs!
It would put some life back into them and make me consider returning the GTX 970 I just purchased and getting a 390 or similar.

Really exciting news!


LOL, AMD benefits a lot more from what we can see.

I hope similar results will apply in benchmarks done by YouTubers, especially LMG :)

If so, this could be a huge selling point for AMD GPUs, but that's several years out, until we see all new titles and even old ones support DX12.

Connection200mbps / 12mbps 5Ghz wifi

My baby: CPU - i7-4790, MB - Z97-A, RAM - Corsair Veng. LP 16gb, GPU - MSI GTX 1060, PSU - CXM 600, Storage - Evo 840 120gb, MX100 256gb, WD Blue 1TB, Cooler - Hyper Evo 212, Case - Corsair Carbide 200R, Monitor - Benq  XL2430T 144Hz, Mouse - FinalMouse, Keyboard -K70 RGB, OS - Win 10, Audio - DT990 Pro, Phone - iPhone SE


Since this game uses the Oxide engine, these benchmark results are exactly what was expected.

We saw similar results with the Star Swarm demo, which uses the Oxide engine and DX12, about five months ago. So it really isn't anything new.

 

Probably all games using this engine will see no real benefit from having more than 4 cores.

And DX11 is already well optimized for Nvidia cards. To gain much higher performance, DX12 needs that faster VRAM.

 

As I've said so many times now.

CPU: Ryzen 7 5800x3D | MoBo: MSI MAG B550 Tomahawk | RAM: G.Skill F4-3600C15D-16GTZ @3800CL16 | GPU: RTX 2080Ti | PSU: Corsair HX1200 | 

Case: Lian Li 011D XL | Storage: Samsung 970 EVO M.2 NVMe 500GB, Crucial MX500 500GB | Soundcard: Soundblaster ZXR | Mouse: Razer Viper Mini | Keyboard: Razer Huntsman TE Monitor: DELL AW2521H @360Hz |

 


LOL, AMD benefits a lot more from what we can see.

I hope similar results will apply in benchmarks done by YouTubers, especially LMG :)

If so, this could be a huge selling point for AMD GPUs, but that's several years out, until we see all new titles and even old ones support DX12.

Not really... all this says is that their DX11 drivers weren't up to snuff at handling newer games. And DX11 is still VERY relevant.

.


Not really... all this says is that their DX11 drivers weren't up to snuff at handling newer games. And DX11 is still VERY relevant.

This doesn't clearly say anything... That is one possibility. The other would be that AMD is the only one with a driver that uses the potential of DX12. I'd like to think both are a little true, since the benefits of DX12 seem exaggerated. Nvidia may or may not be using the full potential of DX12 currently, and AMD definitely sounds like it's having problems on the DX11 side. I just hope Nvidia doesn't get lazy and finds some improvement in its DX12 drivers, unless it's a hardware problem. It would be truly lame to see them just reusing the same DX11 driver with minor improvements, when we can clearly see AMD trying to do something.

Hard to pinpoint who has the problem in this one scenario, so I'll just blame both of them: Nvidia for the lack of big benefits in DX12 and AMD for the lack of performance in DX11.


This doesn't clearly say anything... That is one possibility. The other would be that AMD is the only one with a driver that uses the potential of DX12. I'd like to think both are a little true, since the benefits of DX12 seem exaggerated. Nvidia may or may not be using the full potential of DX12 currently, and AMD definitely sounds like it's having problems on the DX11 side. I just hope Nvidia doesn't get lazy and finds some improvement in its DX12 drivers, unless it's a hardware problem. It would be truly lame to see them just reusing the same DX11 driver with minor improvements, when we can clearly see AMD trying to do something.

Hard to pinpoint who has the problem in this one scenario, so I'll just blame both of them: Nvidia for the lack of big benefits in DX12 and AMD for the lack of performance in DX11.

This doesn't show that AMD has more potential with DX12, because both have unoptimized DX12 drivers and perform similarly. All this shows is that, one, FX processors are a joke, and two, Nvidia has better DX11 drivers out of the box.

.


Are you guys looking at a different set of graphs to me? If so, could you share?

 

The only reason Nvidia looks worse than AMD here is that AMD's DirectX 11 performance is so abysmal. If you look at just the DirectX 12 numbers, they're basically identical.


Only theoretically/on paper. AMD's GPUs still can't keep up with Quadros or Teslas at all. The engineers behind the top 500 supercomputers still can't implement an OpenCL version of an algorithm and have it outperform the CUDA-based one in either single or double precision (Maxwell Quadro/Tesla excluded on DP obviously). K40 vs. S9170 has the K40 eating AMD's lunch. And when the best compute scientists can't make it work, AMD or the OpenCL runtime has a huge issue, possibly both.

This is an Nvidia driver optimization and game optimization problem.

Yes, this is clearly an Nvidia driver issue. But it is also a display of the raw compute power that can be unleashed on AMD cards when nothing is holding them back.

Yes, compute is a bit misleading, but so is describing the increase in gigatexels... it's sort of both being alleviated.

 

Also, FirePro, Quadro and Tesla are a bit different.

Quadro is aimed more at rendering and the like.

Tesla is pure compute.

FirePro is a mix of both, but not as strong in either aspect as Quadro or Tesla.

 

However, what the FirePro lacks in raw power against either card, it makes up for by doing both in a single card.

Quadro is rather weak at raw compute, and Tesla is rather weak at raw rendering. This is why you combine them, say a K6000 Quadro plus a K20 or K40 Tesla, to get raw compute AND raw rendering power. However, this solution is rather pricey, and AMD's FirePro can often cost less than either one of the Nvidia cards while being close in performance.

 

AMD's downfall in the enterprise market is the lack of widespread OpenCL support. It is getting there, from what I can gather, but it has only really "caught on" in the last 12 months or so, from what some tech news sites report. This means that outside of giants like Autodesk (CAD, Maya, 3ds Max) and Adobe (CC, Photoshop, Premiere), it's only partially optimized for or supported by other vendors.

Once OpenCL rendering and compute become more mainstream, AMD will be a much greater threat, because their products DO perform well.

 

 

 

Also, from what I know, it's the "W" series that is the newer line on the AMD side, unless they released something more recently (since the W9100 in 2014).


This doesn't show that AMD has more potential with DX12, because both have unoptimized DX12 drivers and perform similarly. All this shows is that, one, FX processors are a joke, and two, Nvidia has better DX11 drivers out of the box.

 

The AMD GPUs are cheaper than the Nvidia equivalents, so this does put AMD in a better position than Nvidia, as they can deliver a cheaper GPU at the same performance. The only reason not to go for AMD (based on this result, which no one should) would be vendor lock-in.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


The AMD GPUs are cheaper than the Nvidia equivalents, so this does put AMD in a better position than Nvidia, as they can deliver a cheaper GPU at the same performance. The only reason not to go for AMD (based on this result, which no one should) would be vendor lock-in.

You don't know that until you see matured performance from both companies. <_<

.


You don't know that until you see matured performance from both companies. <_<

 

And when will that be? When is it mature enough? Both AMD and Nvidia have had access to the newest builds of this game for a year, and both have full access to everything DX12.

The entire point is that a low-level API renders driver optimizations less important, which is Nvidia's prime advantage in DX11, due to their heavily multithreaded drivers. Both AMD and Nvidia should be able to optimize their drivers for DX12 and this game, but maybe not to the extent we are used to.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


The AMD GPUs are cheaper than the Nvidia equivalents

 

Are they actually? Let's see what PCPartPicker actually says.

960 - £147.43
380 - £156.98

970 - £238.99
390 - £246.59

980 - £377.99
390X - £323.99

980 Ti - £509.59
Fury X - £503.04

So, uh, no. Two Nvidia GPUs are cheaper and two AMD GPUs are cheaper.
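Just to quantify the gaps in that list, here's a quick sketch; the prices are copied from the quote above, and the tier pairings are my own assumption of the usual match-ups:

```python
# Price gaps from the GBP list above (PCPartPicker, as quoted).
# The tier pairings are an assumption, not part of the original post.
pairs = {
    "960 vs 380":       (147.43, 156.98),
    "970 vs 390":       (238.99, 246.59),
    "980 vs 390X":      (377.99, 323.99),
    "980 Ti vs Fury X": (509.59, 503.04),
}

for name, (nvidia, amd) in pairs.items():
    gap = amd - nvidia        # positive: the AMD card costs more
    pct = 100 * gap / nvidia  # gap relative to the Nvidia card's price
    print(f"{name}: AMD {gap:+.2f} GBP ({pct:+.1f}%)")
```

Which works out to AMD being about 6.5% and 3.2% dearer at the low end, and about 14.3% and 1.3% cheaper at the top, so the only pair with a meaningful spread is the 980 vs 390X.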


And when will that be? When is it mature enough? Both AMD and Nvidia have had access to the newest builds of this game for a year, and both have full access to everything DX12.

The entire point is that a low-level API renders driver optimizations less important, which is Nvidia's prime advantage in DX11, due to their heavily multithreaded drivers. Both AMD and Nvidia should be able to optimize their drivers for DX12 and this game, but maybe not to the extent we are used to.

You want me to give you a date? No.

 

Wait until games are released using DX11 and DX12 that do not have problems with sloppy coding on either API, and see how the performance scales. Even then, just wait until there's a substantial number of games using DX12 and compare performance across CPUs then.

.


You don't know that until you see matured performance from both companies. <_<

Which is what I'm somewhat stating. Yeah, they perform similarly, but I thought the Nvidia card was supposed to be a bunch better than the AMD one? Plus, I have a hard time squaring all the rebrand hate with a rebrand being on par with a new card... There's definitely a problem with AMD's DX11 driver, but there's gotta be more to it in the Nvidia DX12 driver.

 

980 - £377.99

390X - £323.99

You should pay attention to the context of the conversation: the 390X price < the 980 price (directly quoted from you), and yet they have pretty much the same performance.


You should pay attention to the context of the conversation: the 390X price < the 980 price (directly quoted from you), and yet they have pretty much the same performance.

 

And there was me thinking that DirectX 12 was going to be used on more than those two GPUs. My mistake.

 

All of those GPUs have pretty much the same performance as their equivalents, and yet no one brand can be claimed to have better value across the board.


Which is what I'm somewhat stating. Yeah, they perform similarly, but I thought the Nvidia card was supposed to be a bunch better than the AMD one? Plus, I have a hard time squaring all the rebrand hate with a rebrand being on par with a new card... There's definitely a problem with AMD's DX11 driver, but there's gotta be more to it in the Nvidia DX12 driver.

Don't forget, 900-series performance is only a few percent better than 700-series performance. I'm just going to ignore all this speculation until there's actual solid evidence from mature drivers showing performance as it should be.

.


Are they actually? Let's see what pcpartspicker actually says.

 

960 - £147.43

380 -  £156.98

 

970 - £238.99

390 - £246.59

 

980 - £377.99

390X - £323.99

 

980 Ti - £509.59

Fury X - £503.04

 

So, uh, no. Two Nvidia GPUs are cheaper and two AMD GPUs are cheaper.

 

Man... this 970?

[image: the GTX 970 listing in question]

It's really trash...

The 390, on the other hand...

[image: the R9 390 listing in question]

I'll take that 390 any day...

i5 2400 | ASUS RTX 4090 TUF OC | Seasonic 1200W Prime Gold | WD Green 120gb | WD Blue 1tb | some ram | a random case

 


Are they actually? Let's see what PCPartPicker actually says.

960 - £147.43
380 - £156.98

970 - £238.99
390 - £246.59

980 - £377.99
390X - £323.99

980 Ti - £509.59
Fury X - £503.04

So, uh, no. Two Nvidia GPUs are cheaper and two AMD GPUs are cheaper.

In the UK...

Remember that some countries impose import taxes on one brand and not the other (like India, where AMD is nearly twice as expensive, for no clear reason other than import taxes).

 

EDIT:

in norway

 

380 - 2199 NOK

960 - 1999 NOK

 

390 - 3499 NOK

970 - 3299 NOK

 

390X - 4499 NOK

980 - 5199 NOK

 

Fury X - 6895 NOK

980TI - 7190 NOK

 

A 200 NOK difference might as well be the same price here; 200 NOK isn't a whole lot.


Man... this 970?

It's really trash...

The 390, on the other hand...

I'll take that 390 any day...

 

If you're determined to cherry-pick to skew it in favour of one or the other, sure, you can spin whatever petty narrative you like. I was being objective. Objectively, the factoid of AMD being cheap and Nvidia being expensive is not true often enough to be repeated as often as it is.


If you're determined to cherry-pick to skew it in favour of one or the other, sure, you can spin whatever petty narrative you like. I was being objective. Objectively, the factoid of AMD being cheap and Nvidia being expensive is not true often enough to be repeated as often as it is.

 

Huh? Cherry-picked? Those were the cards you picked...

[images: the listings for the cards in question]

i5 2400 | ASUS RTX 4090 TUF OC | Seasonic 1200W Prime Gold | WD Green 120gb | WD Blue 1tb | some ram | a random case

 


Are they actually? Let's see what PCPartPicker actually says.

960 - £147.43
380 - £156.98

970 - £238.99
390 - £246.59

980 - £377.99
390X - £323.99

980 Ti - £509.59
Fury X - £503.04

So, uh, no. Two Nvidia GPUs are cheaper and two AMD GPUs are cheaper.

 

Like someone said, stick to the context. A 380 generally outperforms a 960 (off the top of my head), but they haven't been benchmarked in this game, AFAIK. Of the 4 cards that have been, AMD is cheaper and either performs similarly or better (albeit not by much).

I'm not sure where you got the prices, but if @Pohernori's post represents the cards you priced, surely it's not a fair comparison, as that 970 is crap compared to that 390.

 

You want me to give you a date? No.

 

Wait until games are released using DX11 and DX12 that do not have problems with sloppy coding on either API, and see how the performance scales. Even then, just wait until there's a substantial number of games using DX12 and compare performance across CPUs then.

 

I want you to define when they are optimized enough for a fair comparison.

Are you claiming that Ashes of the Singularity is sloppily coded? Care to explain that? Oxide Games outright talked about DX12 validation of this code straight from Microsoft, yet it's sloppy? We have a dev and 2 objective sources claiming there is no issue with the AA, and 1 company (Nvidia) stating there is, with no proof given. The same company that gets disappointing results in DX12.

I do agree that this one game (an RTS at that) is not representative of every DX12 game or the API as such, but by that train of thought, no game ever will be.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Yes, this is clearly an Nvidia driver issue. But it is also a display of the raw compute power that can be unleashed on AMD cards when nothing is holding them back.

Yes, compute is a bit misleading, but so is describing the increase in gigatexels... it's sort of both being alleviated.

Also, FirePro, Quadro and Tesla are a bit different.

Quadro is aimed more at rendering and the like.

Tesla is pure compute.

FirePro is a mix of both, but not as strong in either aspect as Quadro or Tesla.

However, what the FirePro lacks in raw power against either card, it makes up for by doing both in a single card.

Quadro is rather weak at raw compute, and Tesla is rather weak at raw rendering. This is why you combine them, say a K6000 Quadro plus a K20 or K40 Tesla, to get raw compute AND raw rendering power. However, this solution is rather pricey, and AMD's FirePro can often cost less than either one of the Nvidia cards while being close in performance.

AMD's downfall in the enterprise market is the lack of widespread OpenCL support. It is getting there, from what I can gather, but it has only really "caught on" in the last 12 months or so, from what some tech news sites report. This means that outside of giants like Autodesk (CAD, Maya, 3ds Max) and Adobe (CC, Photoshop, Premiere), it's only partially optimized for or supported by other vendors.

Once OpenCL rendering and compute become more mainstream, AMD will be a much greater threat, because their products DO perform well.

Also, from what I know, it's the "W" series that is the newer line on the AMD side, unless they released something more recently (since the W9100 in 2014).

The W series is for workstations (it has video outputs). The S series is for servers (no video outputs). And actually, Quadro and Tesla, as well as the FirePro W and S, are equally good at both; the form factors/cooling solutions and outputs are the only differences. Teslas get used in render farms all over the place.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


If you're determined to cherry-pick to skew it in favour of one or the other, sure, you can spin whatever petty narrative you like. I was being objective. Objectively, the factoid of AMD being cheap and Nvidia being expensive is not true often enough to be repeated as often as it is.

 

A factoid is not a fact, but rather something disguised as one. Calling it objective is an oxymoron.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Huh? Cherry-picked? Those were the cards you picked...

[images: the listings for the cards in question]

 

The point is that you can spend as much as you like on "better" versions of any of these cards for relatively small differences between them. You can complain about Zotac for whatever reason all you like; I was looking at the cheapest entry price of every GPU and showing that AMD was not cheaper than Nvidia with anything remotely resembling ubiquity. There are three other 970s available for less than the cheapest 390.

You are free to spend more on a better cooler, but you are missing the point entirely by focusing on everything but the cost of the GPU.

 

 

A factoid is not a fact, but rather something disguised as one. Calling it objective is an oxymoron.

I know what a factoid is; that's why I described your point as one.

I want you to define when they are optimized enough for a fair comparison.

Are you claiming that Ashes of the Singularity is sloppily coded? Care to explain that? Oxide Games outright talked about DX12 validation of this code straight from Microsoft, yet it's sloppy? We have a dev and 2 objective sources claiming there is no issue with the AA, and 1 company (Nvidia) stating there is, with no proof given. The same company that gets disappointing results in DX12.

I do agree that this one game (an RTS at that) is not representative of every DX12 game or the API as such, but by that train of thought, no game ever will be.

Here we go...

 

No, I'm not playing along with your BS. Wait until there are more games that use DX12 and Nvidia and AMD have matured drivers; aside from that, you're just speculating and whining about meaningless tests.

 

Done. Bye.

.

