
So they said "ASHES IS ONLY BETA, THERE WIL BE A DRIVER ONCE THE GAME IS OUT" THE GAME IS OUTT !!!!!!!

El Diablo

I didn't even know AotS had been released. Goes to show how little anyone cares about that game.

 

Wake me up when AAA titles are utilizing the almighty DX12.

 

AMD fangirls can cling to a small "victory." AMD victories are about as scarce as food in a shortage.

CPU: Intel Core i7 7820X Cooling: Corsair Hydro Series H110i GTX Mobo: MSI X299 Gaming Pro Carbon AC RAM: Corsair Vengeance LPX DDR4 (3000MHz/16GB 2x8) SSD: 2x Samsung 850 Evo (250/250GB) + Samsung 850 Pro (512GB) GPU: NVidia GeForce GTX 1080 Ti FE (W/ EVGA Hybrid Kit) Case: Corsair Graphite Series 760T (Black) PSU: SeaSonic Platinum Series (860W) Monitor: Acer Predator XB241YU (165Hz / G-Sync) Fan Controller: NZXT Sentry Mix 2 Case Fans: Intake - 2x Noctua NF-A14 iPPC-3000 PWM / Radiator - 2x Noctua NF-A14 iPPC-3000 PWM / Rear Exhaust - 1x Noctua NF-F12 iPPC-3000 PWM


1 hour ago, Lays said:

 

Six months to do what? Optimize for something that is BARELY being used? There are like five "games" using it right now, and they aren't even all that popular.

 

I guarantee you they have people working on it, and they're just waiting for things to get more serious before releasing a proper driver for it. They hold a MASSIVE share of the GPU market right now; they wouldn't just throw that away, and they have much more money to spend on R&D for things like this.

Rise of the Tomb Raider uses async compute for its Nvidia-designed lighting system.

 

Not a very popular title, I know. Just thought I'd remind people.


1 hour ago, manikyath said:

I'm actually blaming the Ashes devs for this, because the only other example I've seen had literally no change in the average-FPS difference between AMD and Nvidia going from DX11 to DX12.

You should actually be blaming the other dev for being lazy, then; DX12 has a lot of advantages. If there is no performance change in their game from DX11 to DX12, then they aren't utilizing DX12 at all.

i7-5820k  |  MSI X99S SLI-Plus  |  4x4GB HyperX 2400 DDR4  |  Sapphire Radeon R9 295X2  |  Samsung 840 EVO 1TB x2  |  Corsair AX1200i  |  Corsair H100i  |  NZXT H440 Razer


1 minute ago, Orblivion said:

You should actually be blaming the other dev for being lazy, then; DX12 has a lot of advantages. If there is no performance change in their game from DX11 to DX12, then they aren't utilizing DX12 at all.

I didn't say there are no performance gains; I said there's no difference in performance gains between Nvidia and AMD, specifically when talking about average FPS.


4 minutes ago, manikyath said:

I didn't say there are no performance gains; I said there's no difference in performance gains between Nvidia and AMD, specifically when talking about average FPS.

You mean the least important metric we've got?


Just now, Prysin said:

You mean the least important metric we've got?

Well... I'd consider average FPS pretty dang important (more so than the metric of "one second of this benchmark had only 10 frames, and another second had a staggering 50 bajillion").

 

All they've been boasting about with Ashes is that DX12 would make AMD superior to Nvidia in every way possible, which to me feels more like we have an "AMDWorks" title here than anything tangible...

 

And as for the other metrics we like: I mentioned interesting results in Tomb Raider. Go look at them and put some graphs side by side; there are some pretty nice things lining up there.


9 minutes ago, manikyath said:

I didn't say there are no performance gains; I said there's no difference in performance gains between Nvidia and AMD, specifically when talking about average FPS.

My bad, I misunderstood your post.

i7-5820k  |  MSI X99S SLI-Plus  |  4x4GB HyperX 2400 DDR4  |  Sapphire Radeon R9 295X2  |  Samsung 840 EVO 1TB x2  |  Corsair AX1200i  |  Corsair H100i  |  NZXT H440 Razer


1 minute ago, Orblivion said:

My bad, I misunderstood your post.

Shit happens; at least you didn't hurt yourself with a roll of tape out of sheer stupidity.


1 hour ago, Kobathor said:

 Sure, DX12 isn't going to be widely popular for a little while, but they should still care about it. 

 

 

Do you work at NVIDIA with their R&D / software dev team? You have no idea whether they care about it or not; none of us do. For all we know they may indeed be waiting for the Pascal launch to care, or maybe they already have something in the works to improve DX12 performance on current-gen cards.

 

We have absolutely no room to talk about whether or not they're optimizing, or whether or not they care about it, because all we can do is speculate about what we THINK is going on inside NVIDIA's R&D / software development team.

Stuff:  i7 7700k | ASRock Z170M OC Formula | G.Skill TridentZ 3600 c16 | EKWB 1080 @ 2100 mhz  |  Acer X34 Predator | R4 | EVGA 1000 P2 | 1080mm Radiator Custom Loop | HD800 + Audio-GD NFB-11 | 850 Evo 1TB | 840 Pro 256GB | 3TB WD Blue | 2TB Barracuda

Hwbot: http://hwbot.org/user/lays/ 

FireStrike 980 ti @ 1800 Mhz http://hwbot.org/submission/3183338 http://www.3dmark.com/3dm/11574089


8 hours ago, manikyath said:

Well... I'd consider average FPS pretty dang important (more so than the metric of "one second of this benchmark had only 10 frames, and another second had a staggering 50 bajillion").

 

All they've been boasting about with Ashes is that DX12 would make AMD superior to Nvidia in every way possible, which to me feels more like we have an "AMDWorks" title here than anything tangible...

 

And as for the other metrics we like: I mentioned interesting results in Tomb Raider. Go look at them and put some graphs side by side; there are some pretty nice things lining up there.

So a title that uses features AMD happens to handle better than Nvidia's Maxwell must be an "AMDWorks" title?

You know, if Nvidia bothered to, they could just optimize the shit out of Kepler, and with Kepler's compute it would actually keep up with AMD far better than Maxwell does, since Kepler DOES have async compute in hardware.

But no, Nvidia keeps up its bullshit planned obsolescence, which is great for sales and terrible for consumers.

 

I mean, a 780 Ti features 5 TFLOPS of compute, compared to a 390X's 5.9 TFLOPS... a 980 has 4.6 TFLOPS but no async scheduler... If Nvidia wanted to, they could have had a GeForce GTX card represented among the top cards in AotS DX12. But they are not optimizing for Kepler anymore, and thus they cannot for the life of them beat AMD at AMD's own game.
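A quick back-of-the-envelope check of those figures (a sketch only: FP32 throughput ≈ shader count × clock × 2 FLOPs per fused multiply-add, using the reference clocks):

```python
# Rough FP32 throughput: shaders x clock (Hz) x 2 (one FMA = 2 FLOPs).
# Shader counts and reference clocks as publicly listed for each card.
cards = {
    "GTX 780 Ti": (2880, 875e6),
    "R9 390X":    (2816, 1050e6),
    "GTX 980":    (2048, 1126e6),
}

for name, (shaders, clock_hz) in cards.items():
    print(f"{name}: {shaders * clock_hz * 2 / 1e12:.2f} TFLOPS")

# GTX 780 Ti: 5.04 TFLOPS
# R9 390X:    5.91 TFLOPS
# GTX 980:    4.61 TFLOPS
```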

 

You must realize that when you buy an AMD Radeon Rx card, you buy an all-rounder, not a specialized tool. The exception is the Fiji-based Fury lineup, which is built on the premise of "best gaming graphics"... Unlike an R9 390X or 780 Ti, you cannot use those cards for heavy FP64 workloads, not that most people would anyway.

 

In fact, I am going to quote Patrickjp93 here. He has an excellent point.

On 25/03/2016 at 6:39 PM, patrickjp93 said:

And how long have consoles and Mantle been out, giving Nvidia a perfect roadmap of where AMD's features would be headed? There's been plenty of time to modify the designs since the launch of the 750/750 Ti.

And you know, he is right.

Nvidia will have known since 2011 what AMD was up to with regard to async compute. Nvidia has had plenty of warning; I think they just ignored it. They ignored it because they knew they couldn't beat AMD at it and still clearly win on efficiency. They decided that since DX12 wasn't happening yet, it didn't matter; they could just build a new generation of cards once DX12 arrived, in order to sell more cards.

Which, if you think about it, is perfectly in line with how Nvidia conducts business. They build their hardware for the features THEY need today, not tomorrow. Because they can only ask so much for a product, it is better to build specifically for today, forcing users to upgrade if they want tomorrow's features. AMD, on the other hand, does it the other way around, which really does hurt their profits, as their cards simply "last" longer feature-wise. A 7970 will still do fine under DX12, even though it's four years old by now. Which kind of sucks for AMD's finances, you know.

 

Anyway, as for your statement that avg FPS matters:

Why? It doesn't tell you anything worthwhile.

You can have drops to 5 FPS every 5 seconds and spikes to 4000 FPS every 10 seconds. The average looks fine; the game itself will be utterly and totally unplayable, as the sketch below illustrates.
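A minimal sketch with made-up frame times shows how the average hides that:

```python
# Two made-up traces of per-frame render times (milliseconds).
smooth  = [16.7] * 60               # steady ~60 FPS, every frame identical
stutter = [8.0] * 54 + [120.0] * 6  # mostly fast frames plus six huge hitches

def avg_fps(frame_times_ms):
    # frames rendered divided by total wall-clock time in seconds
    return len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)

print(avg_fps(smooth))   # ~59.9 FPS: silky
print(avg_fps(stutter))  # ~52.1 FPS: looks close on paper, feels broken in play
```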

 

What ARE the important metrics, then?

Minimum FPS, that is important. Nothing is as important as minimums. It is the most important metric we have that does not require investing in fancy measurement gear.

 

Frame timing. You seem to be under the misconception that it is irrelevant, but it is not. The ability to dissect a single second of data is HUGE. Why? Because we humans are extremely perceptive; even the dullest of us will notice variation within a second. Frame timing lets us measure where an issue is. It enables us to say, "This game may look fine from the benchmarks, but the micro-stuttering is unbelievable to the point of making the game unplayable."

Avg FPS cannot tell you this. Avg FPS is such a rough, inaccurate form of measurement that it simply cannot convey how the game feels to the end user. Frame timing can.
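For the curious, a minimal sketch of how frame-time data becomes the numbers worth quoting (the input trace is hypothetical, and the 99th-percentile convention for "1% lows" is an assumption; review sites differ in exactly how they compute it):

```python
# Turn a list of per-frame render times (ms) into pacing metrics.
def pacing_report(frame_times_ms):
    times = sorted(frame_times_ms)
    n = len(times)
    avg_ms = sum(times) / n
    p99_ms = times[min(n - 1, int(n * 0.99))]  # 99th-percentile frame time
    return {
        "avg FPS": round(1000.0 / avg_ms, 1),
        "1% low FPS": round(1000.0 / p99_ms, 1),  # the minimums that matter
        "worst frame (ms)": times[-1],
    }

# Hypothetical trace: 95 smooth frames and 5 bad hitches.
print(pacing_report([16.7] * 95 + [60.0] * 5))
# {'avg FPS': 53.0, '1% low FPS': 16.7, 'worst frame (ms)': 60.0}
```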

 

Saying avg FPS is a good metric is about as accurate as all those television food programs where the host claims everything "tastes like chicken"... It does make you wonder, though: what the fuck does their chicken taste like?

Avg FPS is sort of the same thing. It tells you something, but nothing about how the game feels, how it is perceived. Avg FPS doesn't tell you whether there is crippling driver overhead or whether the game is smooth as silk.


no

 

Async compute is for performance.

 

GameWorks is there to degrade performance by pumping unneeded tessellation into hair strands.

 

AMD fixed it anyway; you can lower the tessellation and it looks the same.

 

You guys and your excuses.

 

First it was:

 

"Ohh, it's a beta... it's not a full game."

 

Now it's "THE GAME IS BORING," blah blah.

 

 


Lol, my posts about the Nvidia driver that got released a few days ago, the one that blows up your PC, got deleted.

 

WOW, now I know the Nvidia bias here is real.

 

Shit, I thought it was a joke.

 

WOW


Wow, all my posts here got heavily edited.

 

Talk about no freedom of speech.

 

hmmm

 

I won't be coming here much anymore.


7 minutes ago, El Diablo said:

no

 

Async compute is for performance.

 

GameWorks is there to degrade performance by pumping unneeded tessellation into hair strands.

 

AMD fixed it anyway; you can lower the tessellation and it looks the same.

 

You guys and your excuses.

 

First it was:

 

"Ohh, it's a beta... it's not a full game."

 

Now it's "THE GAME IS BORING," blah blah.

 

 

AotS is only one game... and it's a weird one. If you think every DX12 game will leverage async compute as much as that game does, and that every game will have hundreds upon hundreds of units on screen like this tech demo has, I think you're wrong. That's all! :)

I think we know that, once again, most games will be Nvidia-sponsored titles, and most of them will run better on Nvidia hardware from day one, as has been the case for as long as I can remember, honestly! It's a small win for AMD here, and it's the reason the game now ships for free with Radeon cards. They also have Hitman... and then that's it until they push out the next DiRT game or something, which will run equally well on both sides.

Async is just ONE aspect of DX12... remember, Nvidia is DX12.1 compliant, so there are still features we haven't seen that will most likely turn the tables... VXGI, for example.
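For intuition, a toy model of what async compute can buy when graphics and compute work are independent (the per-frame numbers are made up; real gains depend entirely on how much overlappable compute a game actually has):

```python
# Toy model: without async, graphics and compute passes run back to back;
# with ideal async overlap, the frame costs only the longer of the two.
graphics_ms = 12.0  # hypothetical graphics work per frame
compute_ms  = 4.0   # hypothetical compute work per frame

serial_ms = graphics_ms + compute_ms       # no overlap
overlap_ms = max(graphics_ms, compute_ms)  # perfect overlap on idle units

print(f"{serial_ms:.1f} ms -> {overlap_ms:.1f} ms per frame "
      f"({serial_ms / overlap_ms - 1:.0%} ideal uplift)")
# 16.0 ms -> 12.0 ms per frame (33% ideal uplift)
```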

 

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


Apparently this game hasn't used it a lot.

 

AMD didn't even ask them to use it.


3 minutes ago, El Diablo said:

Fact is:

 

all games are made to run on AMD GCN hardware.

 

Now the consoles have finally got low-level access to async compute.

 

Like, I posted this video in this thread earlier, but for some weird reason it got removed.

 

I'm starting to think Nvidia actually owns Linus Tech.

 

With two posts I made here removed that showed Nvidia's big faults...

 

 

It's probably because nothing in this video is true... it's all irrelevant, arrogant bullshit; that's probably why it got removed, IMHO. Still funny, though; I even gave you the funny thumbs-up for it... it made me laugh.

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


4 hours ago, i_build_nanosuits said:

It's probably because nothing in this video is true... it's all irrelevant, arrogant bullshit; that's probably why it got removed, IMHO. Still funny, though; I even gave you the funny thumbs-up for it... it made me laugh.

It gets worse for Nvidia.

 

Pascal won't have async compute.

 

And Pascal is 16nm; Polaris is 14nm.

 

Nvidia has no relationship with Samsung, who are making the best HBM2 chips...

 

They tried to sue them.

 

I can see Nvidia going bankrupt by 2020.


DX12 is AMD's Mantle rebranded...

 

AMD has their hands in everything... this is why they've been making a big loss... and their market share is growing, from 7% to 24% in Q4 2015... 24%, or 21%, can't remember, but yeah.

 

AMD is back.

 

And also, the Titan X killer is out soon...


1 minute ago, El Diablo said:

And also, the Titan X killer is out soon...

Nvidia could have delivered the Titan X killer two or three years ago if they had needed to... but Fiji is slow and crap, so they just sit on it... waiting.

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


I can see you haven't been looking at the latest benchmarks...

 

A stock Fury X beats overclocked 980s in Ashes.

 

And the Fury X has been severely held back; it's nowhere near its thermal limit, capping out at 50°C.


Just now, El Diablo said:

I can see you haven't been looking at the latest benchmarks...

 

A stock Fury X beats overclocked 980s in Ashes.

 

And the Fury X has been severely held back; it's nowhere near its thermal limit, capping out at 50°C.

http://investor.nvidia.com/financials-statements.cfm

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


6 minutes ago, El Diablo said:

I can see you haven't been looking at the latest benchmarks...

 

A stock Fury X beats overclocked 980s in Ashes.

 

And the Fury X has been severely held back; it's nowhere near its thermal limit, capping out at 50°C.

Why do you even put an "s" on "benchmarks"? This is only ONE game... and a crappy one, made with one purpose in mind: making AMD look good in DX12.

And who cares how cool it runs? 50°C... there are watercooled 980 Tis and Titan Xs out there that also run at 50°C. Fiji can't overclock for shit, so who cares how cool it runs?

"Overclocker's dream," they said at the conference... faster than Titan X... yeah, sure. :P

Maybe in two or three years from now, when AMD is finally done optimizing for it, it will be faster than the Nvidia cards... but who cares? We will all have moved on to faster, more powerful, more efficient Nvidia chips by then. :)

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


On 4/3/2016 at 6:32 PM, SurvivorNVL said:

I've been sitting in the camp for AMD to dissolve.  I want to see Intel take over Radeon, and pull all their engineers / driver teams and begin to scare nVidia in that field.  And then for nVidia to gobble up AMD64 and the CPU division of AMD, and then negotiate for an x86 license from Intel, in exchange for AMD64 - and then nVidia's wealth and expertise can be used against Intel and vice-versa.  I want to see some insane two-way competition where the only one who wins is us.

^^^^This^^^^


On 4/3/2016 at 3:48 AM, i_build_nanosuits said:

-Snip-

Original post at page 1.

If ever there was an unfair comparison, this is it. It's not only unfair towards AMD, but towards Nvidia as well. It would be extremely unfair to expect either team to have any optimization whatsoever for a game that is not only unreleased but, as you pointed out, was even canceled.

 

Though I also agree that judging DX12 by a select few titles is very unfair, especially if those titles are "ported," for lack of a better word, from DX11 to DX12, as was the case with Tomb Raider and Hitman 2016. Currently we only have two games truly developed for DX12: Ashes of the Singularity and Quantum Break.

 

Both of which Nvidia has released "Game Ready" drivers for:

Ashes of the Singularity = 364.51

Quantum Break = 354.72

AMD driver support:

Ashes of the Singularity = Crimson 16.2

Quantum Break = Crimson 16.4.1

 

Currently AMD is undeniably performing better in Ashes of the Singularity, while the jury is still out on Quantum Break. So what does this tell us? It tells us exactly that much: AMD is better in Ashes of the Singularity. We have to wait a bit longer before we can conclusively say either company is better at DX12; I personally won't make up my mind before Polaris and Pascal are on sale.

Motherboard: Asus X570-E
CPU: 3900x 4.3GHZ

Memory: G.skill Trident GTZR 3200mhz cl14

GPU: AMD RX 570

SSD1: Corsair MP510 1TB

SSD2: Samsung MX500 500GB

PSU: Corsair AX860i Platinum


On 4/3/2016 at 6:45 PM, El Diablo said:

And no driver...

 

The Fury X is beating superclocked GTX 980 Tis.

 

And the Titan X is nowhere close...

What's Ashes' market cap? I have a feeling Nvidia doesn't care about an indie RTS game that runs a few frames slower. I get the feeling the only reason we're seeing a major difference is that the dev team front-loaded command lists to exceed 92. That is really the only case where the R9 will start to perform better with async compute, and to pretend it's any more than a few frames per second is disingenuous.

 

Besides, I hope they don't fix it, so I can watch people continue to say that Maxwell doesn't support async compute, even though it's been a feature of CUDA since 2008.

An avid PC Enthusiast for 20 years.

Current Rig: i7-5820k - GTX 980TI SC+ - 32GB Corsair DDR4-3200 - 500GB 850 EVO M.2 - GA-X99-SLI

Kids' Rigs: FX-6300 - GTX 960 - 8GB Corsair DDR3-1600 - 1TB 7200RPM - GA-78LMT-USB3

Wife's Rig: FX-9590 - R9 Fury X - 32GB Corsair DDR3-2400 - 512GB 950 EVO M.2 - Asus 970 PRO Gaming/Aura

