
(Updated) Star Wars Battlefront Benchmarked On Nvidia And AMD, Runs Best On Radeon – Exceptionally Well Optimized Across The Board

Mr_Troll

Must have been a faulty clock then, or maybe I just over-OC'd my 295X2. She was doing a 10% OC fine but had microstuttering issues when launching apps; I'm guessing it was power draw making the OC unstable.

Yeah, my 295X2 can be temperamental when OC'd... stock is fine... OC'd, it stutters in the Heaven and Valley benchmarks and some games (FC4, Tomb Raider 2013 reboot).


How are they getting such low performance?
I was playing the beta on my 670 with the old drivers all day yesterday and was getting more fps than the 960...
(Also, stock Maxwell benchmarks are useless; those cards do 1,400-1,500 MHz, which is another 10+ fps on top.)
 

RTX2070OC 


Haha, Nvidia, you've been slacking way too much for far too long; that's what you get. Now take that, *competition*.

 

It's more a question of games finally being made to take advantage of AMD hardware, like async compute and optimizations we never see in GameWorks games.

 


As for the benchmark and its validity, DICE games are usually done by the time of the beta. All optimizations and such tend to be finished; the purpose of a beta seems to be discovering bugs and testing netcode, as well as acting as a marketing stunt/demo.

 

As such, it is not unreasonable to believe the performance is representative of the finished product, especially considering both AMD and NVidia have released their optimized drivers for the game.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


So... apart from the benefits of the extra VRAM, some games initially appear to run better on the 390X. Selling my GTX 970 for it is the right choice (4 GB of VRAM isn't enough for me; at 1080p alone I use over 2.5 GB).

 

Edit: BTW, I'm actually going to need to use CrossFire since my GTX 970 is struggling so badly ATM. In modded Skyrim outdoors I'm lucky to hit higher than 45 fps; indoors the load drops to 60% and the FPS is fine at 60.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


It's more a question of games finally being made to take advantage of AMD hardware, like async compute and optimizations we never see in GameWorks games.

 


As for the benchmark and its validity, DICE games are usually done by the time of the beta. All optimizations and such tend to be finished; the purpose of a beta seems to be discovering bugs and testing netcode, as well as acting as a marketing stunt/demo.

 

As such, it is not unreasonable to believe the performance is representative of the finished product, especially considering both AMD and NVidia have released their optimized drivers for the game.

From what we know, DX12 support is supposed to come at some point, so performance should change (probably even more in favor of AMD).

RTX2070OC 


The 780 Ti and 970 losing to a 290. Ouch. I'll do my own benchmarks when the beta comes out. Still got my 290X. I'm really interested to see if it can really beat a 970, and even a heavily OC'd 970 running at 980 levels.

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


The 780 Ti and 970 losing to a 290. Ouch. I'll do my own benchmarks when the beta comes out. Still got my 290X. I'm really interested to see if it can really beat a 970, and even a heavily OC'd 970 running at 980 levels.

 

That's NVidia for you with planned obsolescence and weak architecture.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


That's NVidia for you with planned obsolescence and weak architecture.

TBH, Nvidia has only been thinking about now, not the future, and it's biting them in the ass: SLI is still limited by the bridge, and high-res performance by memory bandwidth. (AMD, however, is being bitten in the ass by rebadging; there was a far better way to launch the 300 series that wouldn't have pissed people off.)

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


That's NVidia for you with planned obsolescence and weak architecture.

 

I don't think it's due to planned obsolescence. AMD's cards were always superior in brute performance but were often crippled by AMD's drivers. Prior to 15.7, AMD's driver overhead was insanely high. 3DMark's API Overhead test showed a 200k boost in draw calls with Catalyst 15.7, from 700k to 900k on my PC, though Nvidia was still superior at 1.3m draw calls with the GTX 970.

I don't know if they reduced the overhead even more with 15.8 and 15.9, but I expected AMD's benchmark scores to be higher, though I didn't expect the 290X to beat the 980. Something is wrong there. And only 1 fps slower than the 390X?

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


I wonder if my R9 280X will be good enough? Are those benchmarks at ultra settings? I'm at 1050 MHz core and 1450 MHz mem.

My AMD Build:

Spoiler

FX 6300 @ 4.8GHz, Zalman CNPS14X, MSI 970 Gaming, 16gb 1866MHz AData Ram, 3D Club R9 280X, Corsair 600M Psu, Thermaltake V3 AMD Edition Case, D-link 1200AC WiFi, 240gb Mushkin SSD, 2tb WD HDD, 140gb WD HDD (recording gameplay), 5x CoolerMaster SickleFlow 120mm fans, Windows 10 64Bit

Sisters Intel Build:

Spoiler

I7 4790k @ 4.4GHz, CoolerMaster 212 Evo, Gigabyte Gaming 5, 16gb 1866MHz Corsair Ram, 3D Club R9 390, EVGA 650GS Psu, NZXT S340 Case, D-Link 1200AC WiFi Card, HyperX 240gb SSD, 2tb WD HDD, Windows 10 64 Bit

 


TBH, Nvidia has only been thinking about now, not the future, and it's biting them in the ass: SLI is still limited by the bridge, and high-res performance by memory bandwidth. (AMD, however, is being bitten in the ass by rebadging; there was a far better way to launch the 300 series that wouldn't have pissed people off.)

 

I don't think it's due to planned obsolescence. AMD's cards were always superior in brute performance but were often crippled by AMD's drivers. Prior to 15.7, AMD's driver overhead was insanely high. 3DMark's API Overhead test showed a 200k boost in draw calls with Catalyst 15.7, from 700k to 900k on my PC, though Nvidia was still superior at 1.3m draw calls with the GTX 970.

I don't know if they reduced the overhead even more with 15.8 and 15.9, but I expected AMD's benchmark scores to be higher, though I didn't expect the 290X to beat the 980. Something is wrong there. And only 1 fps slower than the 390X?

My answer ties to both your posts:

 

Indeed, they have focused on driver optimization instead of powerful architecture. That meant they could pull off the entire "efficiency" marketing stunt. You've got to give it to NVidia, they pulled it off very nicely.

I don't think the rebadging has been a problem, as AMD is now gaining market share. The problem was that the Hawaii chips and other architectures were not being taken advantage of by devs, while NVidia was being very aggressive with their TWIMTBP titles, often including GameWorks, which penalized AMD a lot.

 

Well, now DX12 is coming fast (23.99% of all Steam users have W10 plus a DX12 GPU five weeks after launch), and NVidia is getting wrecked. A 290 beating a 780 Ti, the same card that was 5-10% better than a 290X back then. Too little VRAM might be part of the explanation, but now that NVidia is no longer focusing on driver updates for that specific card in these specific games, you can see how weak the architecture really is in Kepler (and Maxwell too). I mean, a 290, ffs. And we are seeing this repeated in many other new games, especially DX12-based games.

 

No games will even get close to using 1 million draw calls, though. At least not anything we've seen so far. DX12 will focus more on async compute. Raising the draw call limit from about 50k in DX11 should give enough headroom at maybe 200k max; the performance increase should be found in other places. And with NVidia gimping Maxwell even more than Kepler, that does not bode well. NVidia is forced to update games specifically for their individual cards (like the dual VRAM setup of the 970), so as new cards come out, the older ones will be phased out and will drop in performance compared to AMD, which is more hardware focused than driver focused. This is NVidia's planned obsolescence strategy, along with maxing GameWorks out to get people to buy newer, more expensive cards, and shipping too little VRAM from the 680 to the 780 Ti (which is now being beaten by a 290 in a non-DX12 game).

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


I don't think it's due to planned obsolescence. AMD's cards were always superior in brute performance but were often crippled by AMD's drivers. Prior to 15.7, AMD's driver overhead was insanely high. 3DMark's API Overhead test showed a 200k boost in draw calls with Catalyst 15.7, from 700k to 900k on my PC, though Nvidia was still superior at 1.3m draw calls with the GTX 970.

I don't know if they reduced the overhead even more with 15.8 and 15.9, but I expected AMD's benchmark scores to be higher, though I didn't expect the 290X to beat the 980. Something is wrong there. And only 1 fps slower than the 390X?

This is the truth right here. Historically speaking, AMD has always had superior hardware; Nvidia just blew them away with their heavily optimized drivers. Now that AMD has finally started to get their drivers together (I think that recent news article about them cutting some of their software engineers was them trimming the fat, and may have helped) and with DX12 removing driver overhead, AMD's superior hardware is starting to really shine. People often complain about rebadged hardware, but let's be honest here: the architecture that these "rebadged" GPUs are based on was really solid, and is still solid to this very day. I know people still using 7970s on AAA titles, and they still stand the test of time.

 

Architecturally speaking, Nvidia has been all over the place. Fermi was strong, but it was practically a fire hazard. Kepler was a big deal at the time (still is, when you talk Teslas and Titans compared to Maxwell) and was a huge step in the right direction for power efficiency. However, Maxwell sacrificed a lot of staying power for that greater power efficiency. This put Nvidia in a pretty terrible position. Nvidia's solution? Fine-tune software to be less taxing on the hardware. MFAA is a great example of this. They also used some of the game developers they work with to specifically optimize games for their hardware. This helped make Maxwell look stronger than it actually was.

 

Now, I am not bashing Maxwell, because for what it's worth, it is an impressive feat of engineering from a performance-per-watt perspective, but Maxwell will not stand the same test of time that AMD's last couple of generations have. In fact, when Pascal comes out, I think Maxwell will be forgotten entirely. This is just my opinion, by the way; do not mistake it for solid fact.

 

TL;DR? AMD's hardware is great. Nvidia's software is great. When the requirement for great software is removed from the equation, superior hardware wins.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


This is the truth right here. Historically speaking, AMD has always had superior hardware; Nvidia just blew them away with their heavily optimized drivers. Now that AMD has finally started to get their drivers together (I think that recent news article about them cutting some of their software engineers was them trimming the fat, and may have helped) and with DX12 removing driver overhead, AMD's superior hardware is starting to really shine. People often complain about rebadged hardware, but let's be honest here: the architecture that these "rebadged" GPUs are based on was really solid, and is still solid to this very day. I know people still using 7970s on AAA titles, and they still stand the test of time.

 

Architecturally speaking, Nvidia has been all over the place. Fermi was strong, but it was practically a fire hazard. Kepler was a big deal at the time (still is, when you talk Teslas and Titans compared to Maxwell) and was a huge step in the right direction for power efficiency. However, Maxwell sacrificed a lot of staying power for that greater power efficiency. This put Nvidia in a pretty terrible position. Nvidia's solution? Fine-tune software to be less taxing on the hardware. MFAA is a great example of this. They also used some of the game developers they work with to specifically optimize games for their hardware. This helped make Maxwell look stronger than it actually was.

 

Now, I am not bashing Maxwell, because for what it's worth, it is an impressive feat of engineering from a performance-per-watt perspective, but Maxwell will not stand the same test of time that AMD's last couple of generations have. In fact, when Pascal comes out, I think Maxwell will be forgotten entirely. This is just my opinion, by the way; do not mistake it for solid fact.

 

TL;DR? AMD's hardware is great. Nvidia's software is great. When the requirement for great software is removed from the equation, superior hardware wins.

People seem to forget that there was a reason all those graphics cards got bought up during the mining craze: raw computational power. And GCN has that in spades.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


No games will even get close to using 1 million draw calls, though. At least not anything we've seen so far.

 

Source? Because that's information you wouldn't know unless you heard it from developers who worked on the games in question. The only times I've seen that information disclosed were when a dev who worked on Project CARS posted on the Guru3D forums that their game uses ~11k draw calls per frame on ultra settings, and when AMD said AC:U makes 50k draw calls per frame. So for Project CARS, to get 60 fps you'd need 660k draw calls per second; that's almost 1.3m at 120 fps. Now, the way the API Overhead test counts draw calls is that it calculates the number of draw calls your CPU can make without dropping below 30 fps. So even though AMD gets 900k draw calls, that's at 30 fps; at 60 and above it's going to make much fewer than that. This is why AMD cards performed so poorly in Project CARS, and everyone was blaming PhysX and Nvidia.
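A quick sketch of that per-frame to per-second conversion, using the figures quoted above (these are the rough numbers from those posts, not measured values):

```python
def draw_calls_per_second(calls_per_frame: int, fps: int) -> int:
    """Convert a per-frame draw call count into a per-second figure."""
    return calls_per_frame * fps

# Rough per-frame figures quoted above (dev/AMD estimates, not measurements)
project_cars_ultra = 11_000   # ~11k draw calls per frame on ultra
acu                = 50_000   # ~50k draw calls per frame (per AMD)

print(draw_calls_per_second(project_cars_ultra, 60))    # 660,000 per second at 60 fps
print(draw_calls_per_second(project_cars_ultra, 120))   # 1,320,000 per second at 120 fps
print(draw_calls_per_second(acu, 60))                   # 3,000,000 per second at 60 fps
```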

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


lool fanboy much

 

Those benchmarks are probably pretty close to the truth... I bet Nvidia's "game ready" drivers just weren't ready at all this time.

 

358.50 is "battlefront game ready"

http://anandtech.com/show/9698/nvidia-releases-35850-game-ready-drivers-for-star-wars-battlefront

 

I don't think he's fanboying, really; the 980 is faster than the 290X. The 290X is closer to the 970 than the 980. It's possible that the game engine prefers AMD architectures, though, but anyone who knows how to read benchmarks should know the 980 is quite a bit faster than the 290X.

Stuff:  i7 7700k @ (dat nibba succ) | ASRock Z170M OC Formula | G.Skill TridentZ 3600 c16 | EKWB 1080 @ 2100 mhz  |  Acer X34 Predator | R4 | EVGA 1000 P2 | 1080mm Radiator Custom Loop | HD800 + Audio-GD NFB-11 | 850 Evo 1TB | 840 Pro 256GB | 3TB WD Blue | 2TB Barracuda

Hwbot: http://hwbot.org/user/lays/ 

FireStrike 980 ti @ 1800 Mhz http://hwbot.org/submission/3183338 http://www.3dmark.com/3dm/11574089


Frostbite is the rare engine that was developed and tested using Radeon-exclusive hardware APIs or open APIs that work much better on them than on Nvidia. It is the only AAA engine that is biased toward AMD, which is why AMD performs better than normal under it.

 

Also, AMD is EA's hardware partner for graphics.

Yep. This is basically AMD's performance equivalent of GameWorks.

CPU: Intel Core i7 7820X Cooling: Corsair Hydro Series H110i GTX Mobo: MSI X299 Gaming Pro Carbon AC RAM: Corsair Vengeance LPX DDR4 (3000MHz/16GB 2x8) SSD: 2x Samsung 850 Evo (250/250GB) + Samsung 850 Pro (512GB) GPU: NVidia GeForce GTX 1080 Ti FE (W/ EVGA Hybrid Kit) Case: Corsair Graphite Series 760T (Black) PSU: SeaSonic Platinum Series (860W) Monitor: Acer Predator XB241YU (165Hz / G-Sync) Fan Controller: NZXT Sentry Mix 2 Case Fans: Intake - 2x Noctua NF-A14 iPPC-3000 PWM / Radiator - 2x Noctua NF-A14 iPPC-3000 PWM / Rear Exhaust - 1x Noctua NF-F12 iPPC-3000 PWM


Source? Because that's information you wouldn't know unless you heard it from developers who worked on the games in question. The only times I've seen that information disclosed were when a dev who worked on Project CARS posted on the Guru3D forums that their game uses ~11k draw calls per frame on ultra settings, and when AMD said AC:U makes 50k draw calls per frame. So for Project CARS, to get 60 fps you'd need 660k draw calls per second; that's almost 1.3m at 120 fps. Now, the way the API Overhead test counts draw calls is that it calculates the number of draw calls your CPU can make without dropping below 30 fps. So even though AMD gets 900k draw calls, that's at 30 fps; at 60 and above it's going to make much fewer than that. This is why AMD cards performed so poorly in Project CARS, and everyone was blaming PhysX and Nvidia.

 

Ashes of the Singularity, which is one of the first DX12 games (an RTS at that) that would benefit a lot from more draw calls, "Only" uses about 20k:

 

http://www.pcper.com/reviews/Graphics-Cards/DX12-GPU-and-CPU-Performance-Tested-Ashes-Singularity-Benchmark/Benchmark-det

It does this by using batches:

http://docs.unity3d.com/Manual/DrawCallBatching.html
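For anyone unfamiliar with the term, here is a minimal Python sketch of the idea behind draw call batching. It is only a conceptual illustration (the object fields and the `draw` callback are made up for the example), not Unity's or Frostbite's actual implementation:

```python
from collections import defaultdict

def draw_unbatched(objects, draw):
    # One draw call per object: len(objects) calls in total.
    for obj in objects:
        draw(obj["mesh"], obj["material"], [obj["transform"]])

def draw_batched(objects, draw):
    # Group objects that share a mesh and material, then issue a single
    # call per group: the call count drops to the number of groups.
    batches = defaultdict(list)
    for obj in objects:
        batches[(obj["mesh"], obj["material"])].append(obj["transform"])
    for (mesh, material), transforms in batches.items():
        draw(mesh, material, transforms)
```

With 10,000 rocks sharing one mesh and material, the first version issues 10,000 draw calls while the second issues just one.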

 

I'm not saying draw calls won't increase, I'm sure they will, but I doubt we are talking a 30-50x increase compared to now.

 

However, draw calls are measured per frame, not per second (only 3DMark's test does it that way), so 660k draw calls at 60 fps in Project CARS is a pretty useless number. DX11 could not handle the 50k draw calls of AC:U very well; that sort of thing will be solved by DX12.

 

Note that the draw call test in 3DMark has nothing to do with performance between different architectures or systems, so you can't compare AMD numbers to NVidia numbers, for instance. The test (NOT a benchmark) uses up to 393,216 draw calls per frame at most.

 

 

The API Overhead feature test is not a general-purpose GPU benchmark, and it should not be used to compare graphics cards from different vendors.

(...)

As a result, you should be careful making conclusions about GPU performance when comparing API Overhead test results from different systems. For instance, we would advise against comparing the Mantle score from an AMD GPU with the DirectX 12 score from an NVIDIA GPU. Likewise, it could be misleading to credit the GPU for any difference in DirectX 12 performance between an AMD GPU and an NVIDIA GPU. 

Another scenario, for example, would be to test DirectX 12 performance with a range of CPUs in a system with a fixed GPU. Or, you could test a vendor's range of GPUs, from budget to high-end, and keep the CPU fixed. But in both cases, the nature of the test means it will not show you the extent to which the performance differences are due to the hardware and how much is down to the driver. 

The proper use of the test is to compare the relative performance of each API on a single system, rather than the absolute performance of different systems.

 

The focus on single-system testing is one reason why the API Overhead test is called a feature test  rather than a benchmark. 

 

Source: http://s3.amazonaws.com/download-aws.futuremark.com/3DMark_Technical_Guide.pdf

 

Sorry, but those numbers mean nothing for any game, cannot be compared between AMD and Nvidia, and the draw-calls-per-second figure is irrelevant for calculating draw calls in games too.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


I don't think he's fanboying, really; the 980 is faster than the 290X. The 290X is closer to the 970 than the 980. It's possible that the game engine prefers AMD architectures, though, but anyone who knows how to read benchmarks should know the 980 is quite a bit faster than the 290X.

Uhh, no it's not.

 

290=390~780~970

290x=390x~780 ti~980

 

The 390X, which is just a refreshed 290X, trades blows with a 980; it actually beats the 980 in some games like Shadow of Mordor and is even with it in Far Cry 4. The 290X and 970 are not in the same performance bracket. Even in the charts in the OP, the 290X and 390X are within 1 fps of each other. You need to ignore benchmarks from when Maxwell first came out, when AMD still had garbage drivers.

CPU i7 6700 Cooling Cryorig H7 Motherboard MSI H110i Pro AC RAM Kingston HyperX Fury 16GB DDR4 2133 GPU Pulse RX 5700 XT Case Fractal Design Define Mini C Storage Trascend SSD370S 256GB + WD Black 320GB + Sandisk Ultra II 480GB + WD Blue 1TB PSU EVGA GS 550 Display Nixeus Vue24B FreeSync 144 Hz Monitor (VESA mounted) Keyboard Aorus K3 Mechanical Keyboard Mouse Logitech G402 OS Windows 10 Home 64 bit


Uhh, no it's not.

 

290=390~780~970

290x=390x~780 ti~980

 

The 390X, which is just a refreshed 290X, trades blows with a 980; it actually beats the 980 in some games like Shadow of Mordor and is even with it in Far Cry 4. The 290X and 970 are not in the same performance bracket. Even in the charts in the OP, the 290X and 390X are within 1 fps of each other. You need to ignore benchmarks from when Maxwell first came out, when AMD still had garbage drivers.

 

 

I've owned two 290Xs and two 970s, as well as a 980.

From my own testing and from looking at benchmarks, the 290X is slightly faster than the 970 by a few fps, and the 980 is slightly above the 290X.

Looking at reviews and benchmarks confirms this: the 980 is ahead of the 290X 90% of the time, and it either goes even with the 390X or trades blows with it.

 

When it comes to overclocking, the 980 should really stretch its legs; Maxwell chips seem to have loads of OC headroom, and so does the VRAM. I see loads of 980s hitting at least 7700 MHz effective on the memory, with above-average Samsung chips doing around 8200-8400.

Stuff:  i7 7700k @ (dat nibba succ) | ASRock Z170M OC Formula | G.Skill TridentZ 3600 c16 | EKWB 1080 @ 2100 mhz  |  Acer X34 Predator | R4 | EVGA 1000 P2 | 1080mm Radiator Custom Loop | HD800 + Audio-GD NFB-11 | 850 Evo 1TB | 840 Pro 256GB | 3TB WD Blue | 2TB Barracuda

Hwbot: http://hwbot.org/user/lays/ 

FireStrike 980 ti @ 1800 Mhz http://hwbot.org/submission/3183338 http://www.3dmark.com/3dm/11574089


The 960 is getting wrecked so hard, and the R9 290 beats a 970.

These results are kind of questionable, even though Guru3D is a reliable source. We'll see if things change after Nvidia rolls out their "game optimised" driver.

 

They were using their Game Ready driver, and it has already been released for the beta.

 

EDIT: My bad, I saw that Prysin replied to you. Ignore this.

Spoiler

Cpu: Ryzen 9 3900X – Motherboard: Gigabyte X570 Aorus Pro Wifi  – RAM: 4 x 16 GB G. Skill Trident Z @ 3200mhz- GPU: ASUS  Strix Geforce GTX 1080ti– Case: Phankteks Enthoo Pro M – Storage: 500GB Samsung 960 Evo, 1TB Intel 800p, Samsung 850 Evo 500GB & WD Blue 1 TB PSU: EVGA 1000P2– Display(s): ASUS PB238Q, AOC 4k, Korean 1440p 144hz Monitor - Cooling: NH-U12S, 2 gentle typhoons and 3 noiseblocker eloops – Keyboard: Corsair K95 Platinum RGB Mouse: G502 Rgb & G Pro Wireless– Sound: Logitech z623 & AKG K240


Ashes of the Singularity, which is one of the first DX12 games (an RTS at that) that would benefit a lot from more draw calls, "Only" uses about 20k:

 

http://www.pcper.com/reviews/Graphics-Cards/DX12-GPU-and-CPU-Performance-Tested-Ashes-Singularity-Benchmark/Benchmark-det

It does this by using batches:

http://docs.unity3d.com/Manual/DrawCallBatching.html

 

I'm not saying draw calls won't increase, I'm sure they will, but I doubt we are talking a 30-50x increase compared to now.

 

However, draw calls are measured per frame, not per second (only 3DMark's test does it that way), so 660k draw calls at 60 fps in Project CARS is a pretty useless number. DX11 could not handle the 50k draw calls of AC:U very well; that sort of thing will be solved by DX12.

 

Note that the draw call test in 3DMark has nothing to do with performance between different architectures or systems, so you can't compare AMD numbers to NVidia numbers, for instance. The test (NOT a benchmark) uses up to 393,216 draw calls per frame at most.

 

 

Source: http://s3.amazonaws.com/download-aws.futuremark.com/3DMark_Technical_Guide.pdf

 

Sorry, but those numbers mean nothing for any game, cannot be compared between AMD and Nvidia, and the draw-calls-per-second figure is irrelevant for calculating draw calls in games too.

 

Here's how the API Overhead test works, from PC Perspective:

 

At a high level, here is how the test works: starting with a small number of draw calls per frame, the test increases the number of calls in steps every 20 frames until the frame rate drops below 30 FPS. Once that occurs, it keeps that draw call count and measures frame rates for 3 seconds. It then computes the draw calls per second (frame rate multiplied by draw calls per frame) and the result is displayed for the user.

 

http://www.pcper.com/reviews/Graphics-Cards/3DMark-API-Overhead-Feature-Test-Early-DX12-Performance
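If it is easier to follow as code, here is a rough sketch of that procedure. The step size and timings are placeholders rather than Futuremark's actual values, and `render_frame` is a stand-in for the real renderer (it takes a draw call count and returns the frame time in seconds):

```python
import time

def api_overhead_test(render_frame, start_calls=1_000, step=1_000):
    """Sketch of the 3DMark API Overhead procedure described above."""
    calls_per_frame = start_calls

    # Increase the per-frame draw call count every 20 frames
    # until the frame rate drops below 30 fps.
    while True:
        frame_times = [render_frame(calls_per_frame) for _ in range(20)]
        fps = 20 / sum(frame_times)
        if fps < 30:
            break
        calls_per_frame += step

    # Hold that draw call count and measure the frame rate for ~3 seconds.
    frames, start = 0, time.perf_counter()
    while time.perf_counter() - start < 3.0:
        render_frame(calls_per_frame)
        frames += 1
    measured_fps = frames / (time.perf_counter() - start)

    # Reported result: draw calls per second = frame rate * draw calls per frame.
    return measured_fps * calls_per_frame
```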

 

So framerate * draw calls per frame. Isn't that exactly what I did? 20k per frame in AoS means 20,000 * 30 = 600k at 30 fps, and 1.2m at 60 fps. You can take the number of draw calls per frame and multiply it by the framerate, and you're doing the same thing the API Overhead test does, so the numbers aren't meaningless anymore, right?

 

Anyway, all that matters is the difference between AMD's and Nvidia's drivers. I am not comparing graphics performance or graphics cards, and I'm not comparing AMD vs. Nvidia GPUs, but rather their driver overhead. However the test counts the draw calls, it counts them the same way whichever graphics card you use, AMD's or Nvidia's. And if using Nvidia's card and driver gives you a higher draw call count than using AMD's GPU and driver, then obviously Nvidia's driver has less overhead.

There's proof of this: just look at the DX11 benchmarks in AoS. Nvidia completely destroys AMD, and when you look at the draw calls, 900k vs. 1.3m, it makes sense. If I had a faster CPU, both numbers would be higher, but the difference would still be there.

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


I've owned two 290Xs and two 970s, as well as a 980.

From my own testing and from looking at benchmarks, the 290X is slightly faster than the 970 by a few fps, and the 980 is slightly above the 290X.

Looking at reviews and benchmarks confirms this: the 980 is ahead of the 290X 90% of the time, and it either goes even with the 390X or trades blows with it.

 

When it comes to overclocking, the 980 should really stretch its legs; Maxwell chips seem to have loads of OC headroom, and so does the VRAM. I see loads of 980s hitting at least 7700 MHz effective on the memory, with above-average Samsung chips doing around 8200-8400.

Sorry (not sorry), but your personal benchmarks mean absolutely nothing to me. That said, I addressed this in my post:

 

Uhh, no it's not.

 

290=390~780~970

290x=390x~780 ti~980

 

The 390X, which is just a refreshed 290X, trades blows with a 980; it actually beats the 980 in some games like Shadow of Mordor and is even with it in Far Cry 4. The 290X and 970 are not in the same performance bracket. Even in the charts in the OP, the 290X and 390X are within 1 fps of each other. You need to ignore benchmarks from when Maxwell first came out, when AMD still had garbage drivers.

 

Even just going by the charts in the OP, the 290 is 1 fps behind the 390, so the same will apply to the 290X vs. the 390X. The difference between a 290X and a 390X is 50 MHz. If you still own those cards, run them with Catalyst 15.7 or later.

 

OC is irrelevant since not all coolers are created equal, but if you want to go down that route, Maxwell's massive overclocking headroom doesn't actually translate to that much better performance. Just as you wouldn't compare AMD and Intel CPUs clock for clock (even back when AMD CPUs were competitive) since their architectures are different, you also can't compare Nvidia GPUs to AMD's. You can rewatch JayzTwoCents' review where he pitted a 390 OC'd by a measly 160 MHz against a 970 OC'd to 1442 MHz, and the 970 still lost.

CPU i7 6700 Cooling Cryorig H7 Motherboard MSI H110i Pro AC RAM Kingston HyperX Fury 16GB DDR4 2133 GPU Pulse RX 5700 XT Case Fractal Design Define Mini C Storage Trascend SSD370S 256GB + WD Black 320GB + Sandisk Ultra II 480GB + WD Blue 1TB PSU EVGA GS 550 Display Nixeus Vue24B FreeSync 144 Hz Monitor (VESA mounted) Keyboard Aorus K3 Mechanical Keyboard Mouse Logitech G402 OS Windows 10 Home 64 bit


How are they getting such low performance?

I was playing the beta on my 670 with the old drivers all day yesterday and was getting more fps than the 960...

(Also, stock Maxwell benchmarks are useless; those cards do 1,400-1,500 MHz, which is another 10+ fps on top.)

 

 

Yes, the small group of hardware enthusiasts like us OC our GPUs, but you'd be surprised how many people don't overclock or give a fuck about overclocking, even when I offered to do it for them.

Spoiler

Cpu: Ryzen 9 3900X – Motherboard: Gigabyte X570 Aorus Pro Wifi  – RAM: 4 x 16 GB G. Skill Trident Z @ 3200mhz- GPU: ASUS  Strix Geforce GTX 1080ti– Case: Phankteks Enthoo Pro M – Storage: 500GB Samsung 960 Evo, 1TB Intel 800p, Samsung 850 Evo 500GB & WD Blue 1 TB PSU: EVGA 1000P2– Display(s): ASUS PB238Q, AOC 4k, Korean 1440p 144hz Monitor - Cooling: NH-U12S, 2 gentle typhoons and 3 noiseblocker eloops – Keyboard: Corsair K95 Platinum RGB Mouse: G502 Rgb & G Pro Wireless– Sound: Logitech z623 & AKG K240


How are they getting such low performance?

I was playing the beta on my 670 with the old drivers all day yesterday and was getting more fps than the 960...

(Also, stock Maxwell benchmarks are useless; those cards do 1,400-1,500 MHz, which is another 10+ fps on top.)

 

Remember that benchmarks turn on all the eye candy that normal people don't use. I'm pretty sure most settings are on ultra or very high and AA is at 16x.

CPU i7 6700 Cooling Cryorig H7 Motherboard MSI H110i Pro AC RAM Kingston HyperX Fury 16GB DDR4 2133 GPU Pulse RX 5700 XT Case Fractal Design Define Mini C Storage Trascend SSD370S 256GB + WD Black 320GB + Sandisk Ultra II 480GB + WD Blue 1TB PSU EVGA GS 550 Display Nixeus Vue24B FreeSync 144 Hz Monitor (VESA mounted) Keyboard Aorus K3 Mechanical Keyboard Mouse Logitech G402 OS Windows 10 Home 64 bit


Sorry (not sorry), but your personal benchmarks mean absolutely nothing to me. That said, I addressed this in my post:

 

 

Even just going by the charts in the OP, the 290 is 1 fps behind the 390, so the same will apply to the 290X vs. the 390X. The difference between a 290X and a 390X is 50 MHz. If you still own those cards, run them with Catalyst 15.7 or later.

 

OC is irrelevant since not all coolers are created equal, but if you want to go down that route, Maxwell's massive overclocking headroom doesn't actually translate to that much better performance. Just as you wouldn't compare AMD and Intel CPUs clock for clock (even back when AMD CPUs were competitive) since their architectures are different, you also can't compare Nvidia GPUs to AMD's. You can rewatch JayzTwoCents' review where he pitted a 390 OC'd by a measly 160 MHz against a 970 OC'd to 1442 MHz, and the 970 still lost.

Umm, not exactly true. There are plenty of videos showing the difference between a stock-clocked 980 Ti and an overclocked 980 Ti. You are correct that not all coolers allow cards to be OC'd equally, but I can attest that the difference between the stock Nvidia cooler (best cooler for SLI, not great for OCing) and the best air-cooled GPUs is within spitting distance.

 

 

http://www.anandtech.com/show/9306/the-nvidia-geforce-gtx-980-ti-review/17

 

http://www.maximumpc.com/geforce-gtx-980-ti-overclocked/#page-2

 

http://www.legitreviews.com/nvidia-geforce-gtx-980-ti-6gb-video-card-review_165406/5 (go through the rest of the pages as well)

 

http://techreport.com/review/28685/geforce-gtx-980-ti-cards-compared/6

 

Maxwell is no joke when it comes to OCing. To say it won't give you "much performance" is not being objective here. It only spreads misinformation.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 

