
First DirectX 12 game benchmarked *Update 2: More benchmarks*

Not really. Nvidia surely had some magic sauce in their DX11 driver, but nothing to make up this difference.

We are talking about 3-year-old architectures seeing their first shred of daylight, as software is slowly being transformed to utilize more features of the architecture.

 

Nothing could have been done 3 years ago to get the same results. AMD has also been working on its own low-level API, which hopefully had its influence on DX12 and other graphics APIs.

Don't think this was a simple "update" to their driver.

 

I may not have worded it correctly, but it's still bullshit that they need to address.


I don't think you're following along here. 

 

AMD's software has been shit. They really need to fix that. If a, what, 3-year-old GPU can compete with an almost top-of-the-line card through simple driver fixes, then AMD really deserves to go out of business. Someone like Samsung or Valve should buy them and make Nvidia sweat.

 

A completely new low-level API is not a "simple driver fix". AMD hardware is generally better than NVidia's, but AMD just doesn't have the funding to rewrite their entire DX11 driver to be fully multithreaded like NVidia did. Remember that DX11's support for multithreading is very limited, so NVidia probably had to tinker quite a bit.

 

NVidia doesn't make all that great hardware; they make great software. They make proprietary shaders that replace the shaders in new games, especially OpenGL games, but also DX11 games. All of this is nice for the gamer, sure, but it also means a game's performance is at the mercy of NVidia's driver team, and not the developers.

 

DX12 and Vulkan will/should circumvent all of these driver-specific shaders and tinkering, meaning that gaming will now be more focused on hardware and good developers. NVidia could continue to implement their proprietary code and shaders in games if they can land the contract with the dev, but in the case of Ashes of the Singularity, they were only allowed to if those shaders didn't sabotage AMD performance. If we see NVidia making massive performance gains compared to AMD in certain games, we can almost guarantee that it's because of proprietary shaders sabotaging AMD cards. You know, similar to GameWorks.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Not really. Nvidia surely had some magic sauce in their DX11 driver, but nothing to make up this difference.

We are talking about 3-year-old architectures seeing their first shred of daylight, as software is slowly being transformed to utilize more features of the architecture.

 

Nothing could have been done 3 years ago to get the same results. AMD has also been working on its own low-level API, which hopefully had its influence on DX12 and other graphics APIs.

Don't think this was a simple "update" to their driver.

 

Part of that magic sauce in Nvidia's drivers is recompiling shaders into simpler versions that work better for the architecture in use, using free CPU threads for the recompiling work. People crap on AMD all the time for optimizing tessellation where it's needed, but fail to mention that Nvidia has been doing similar things with shaders for years.
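A toy sketch of what that background recompilation might look like. This is purely illustrative: real drivers do this in native code against their own internal shader IR, and the `optimize` "recompiler" and shader strings here are made-up stand-ins.

```python
# Toy model of driver-side background shader recompilation (illustrative only;
# real drivers work on their own internal shader IR, not source strings).
from concurrent.futures import ThreadPoolExecutor

def optimize(source):
    """Stand-in for the driver's recompiler: produce a cheaper variant."""
    # Toy "strength reduction": pow(x, 2.0) -> x * x
    return source.replace("pow(x, 2.0)", "x * x")

class ShaderCache:
    def __init__(self, workers=2):
        # Spare CPU threads do the recompiling, off the render thread.
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.jobs = {}

    def submit(self, name, source):
        # Kick off recompilation without stalling rendering.
        self.jobs[name] = self.pool.submit(optimize, source)

    def get(self, name, fallback):
        # Use the optimized shader once ready; until then, run the original.
        job = self.jobs.get(name)
        return job.result() if job and job.done() else fallback

cache = ShaderCache()
cache.submit("lighting", "color = pow(x, 2.0);")
cache.pool.shutdown(wait=True)  # demo only: wait for the worker to finish
print(cache.get("lighting", "color = pow(x, 2.0);"))  # color = x * x;
```

The key point is the `get` fallback: the game keeps running the original shader until the cheaper variant is ready, so the optimization costs the render thread nothing.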

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


I may not have worded it correctly, but it's still bullshit that they need to address.

I'm sure you haven't.

I'm saying that this fix wasn't possible until now.

Are you complaining that your product is getting better over time?

I'm not sure what you're complaining about; it's not like they mis-advertised the performance at the time it was sold.

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO


Meh. Need more games to compare to. Having just one game to compare isn't very helpful.

When's the next DX12 game getting released? Is it still Gears of War?


Meh. Need more games to compare to. Having just one game to compare isn't very helpful.

When's the next DX12 game getting released? Is it still Gears of War?

 

Not sure when Gears of War is being released, but Ark: Survival Evolved is getting a DX12 patch sometime in the near future.



Not if the initial volume request is 140 million units, which is exactly what it is, because Nvidia is planning to 1-up all of AMD's HPC offerings and win on price, and the entire Pascal lineup is using HBM2.

 

You would if you were an AMD investor and did your due diligence, like I did before I jumped off at $2. AMD is royally screwed.

 

And Nvidia is a big enough client to jumpstart volume production and get pricing data aired; which, if it beats Hynix, brings the other customers rolling in. It's Samsung's smartest move: stick it to Hynix and Micron in a single stroke and help saturate its 14nm foundry, which is currently taking a loss due to too-low production volume.

 

HBM is too expensive for the consoles and you know it. Currently Hynix's price per stack is $30 for orders of 1 to 9 million units, or $25 if you order in batches of 10 million. If you intend to use HBM as the only system RAM, the memory cost of the SoC for 8GB is already between $200 and $240.
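The arithmetic behind that $200-$240 range can be checked directly, assuming 1GB per stack (an HBM1-era figure; the per-stack prices are the numbers quoted in the post above, not confirmed pricing):

```python
# Cost check for HBM as sole system RAM, using the per-stack prices quoted
# above (assumed figures, not official Hynix pricing). Assumes 1GB stacks.
STACK_GB = 1

def hbm_cost(total_gb, price_per_stack):
    """Total cost of enough HBM stacks to provide total_gb of RAM."""
    stacks = total_gb // STACK_GB
    return stacks * price_per_stack

# 8GB of system RAM = 8 stacks.
print(hbm_cost(8, 25))  # 200 (batches of 10 million)
print(hbm_cost(8, 30))  # 240 (orders of 1 to 9 million)
```

Eight stacks at $25-$30 each gives exactly the $200-$240 spread, before adding the interposer and packaging costs, which would push a console SoC even further out of budget.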

 

HBM is too expensive for cell phones too. There's no way Qualcomm and Mediatek are buying, unless Qualcomm has a new ARM SoC for HPC planned that they haven't announced yet.

 

So you pull 140 million units out of a plan. Right... that argument is enough for Samsung to ramp up production... I don't think so.

 

Jumping off at $2 could be a terrible decision, lol... you either lost a shit-ton of money, or you made almost none, since the minimum you could have bought AMD shares at was around, what, $1.60? So yeah... not a very smart decision, in my opinion, on a company with such IP. But I'm not here to talk about investments.

 

Well, that's your opinion about Samsung producing only for NVIDIA and the justification for them to produce it - I doubt it. If the volume is right, HBM will be cheaper than GDDR5. It's unlikely to end up in a console, but we don't know what Nintendo's plans are - we just know they could be more successful, and they have an opportunity to one-up all the other consoles with this late launch.

 

I think there's a lot of interest in the market, and Samsung is going for it because they can. That's all.

 

There's a lot going on that we don't know about, at least officially.


  • 5 months later...

I didn't see any sense in creating a new thread, since the info below is an update from ExtremeTech on the latest version of AotS and performance of both the 980 Ti and Fury X:

 

AMD clobbers Nvidia in updated Ashes of the Singularity DirectX 12 benchmark


http://www.extremetech.com/gaming/223567-amd-clobbers-nvidia-in-updated-ashes-of-the-singularity-directx-12-benchmark

 

 

[Benchmark charts: Ashes of the Singularity at 1080p and 4K, with and without async compute]

Rock On!

