(Updated) Star Wars Battlefront Benchmarked On Nvidia And AMD, Runs Best On Radeon – Exceptionally Well Optimized Across The Board

Mr_Troll

Umm, not exactly true. There are plenty of videos showing the difference between a stock-clocked 980 Ti and an overclocked 980 Ti. You are correct that not all coolers allow all cards to be OC'd the same, but I can attest that the difference between the stock Nvidia cooler (best cooler for SLI, not great for OCing) and the best air-cooled GPUs is within spitting distance.

 

 

http://www.anandtech.com/show/9306/the-nvidia-geforce-gtx-980-ti-review/17

 

http://www.maximumpc.com/geforce-gtx-980-ti-overclocked/#page-2

 

http://www.legitreviews.com/nvidia-geforce-gtx-980-ti-6gb-video-card-review_165406/5 (go through the rest of the pages as well)

 

http://techreport.com/review/28685/geforce-gtx-980-ti-cards-compared/6

 

Maxwell is no joke when it comes to OCing. To say it won't give you "much performance" is not being objective here. It only spreads misinformation.

My bad, as I had never seen these benchmarks. However, the 980 Ti still seems to be the outlier in the Maxwell line. I assume you've seen the video I was referencing in my response to him, but in case you haven't, here it is.

 

 

The core of the 970 is OC'ed to the same levels as the 980 Ti, but the improvement in performance is nowhere close, and like I said, it still lost to a 390 that was OC'ed by only 160 MHz.

CPU i7 6700 Cooling Cryorig H7 Motherboard MSI H110i Pro AC RAM Kingston HyperX Fury 16GB DDR4 2133 GPU Pulse RX 5700 XT Case Fractal Design Define Mini C Storage Transcend SSD370S 256GB + WD Black 320GB + Sandisk Ultra II 480GB + WD Blue 1TB PSU EVGA GS 550 Display Nixeus Vue24B FreeSync 144 Hz Monitor (VESA mounted) Keyboard Aorus K3 Mechanical Keyboard Mouse Logitech G402 OS Windows 10 Home 64 bit


Here's how the API Overhead test works, from PCPer:

 

 

http://www.pcper.com/reviews/Graphics-Cards/3DMark-API-Overhead-Feature-Test-Early-DX12-Performance

 

So framerate × draw calls per frame. Isn't that exactly what I did? 20k draw calls per frame in AoS means 20,000 × 30 = 600k at 30 fps, and 1.2M at 60 fps. You can take the number of draw calls per frame and multiply it by the framerate and you're doing the same thing the API Overhead test does, and then the numbers aren't meaningless anymore, right?
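To make the arithmetic concrete, here's a quick sketch of that conversion (the 20k-per-frame figure is the AoS number quoted above):

```python
# Convert draw calls per frame into draw calls per second,
# which is the unit the API Overhead feature test reports.
def draw_calls_per_second(calls_per_frame: int, fps: int) -> int:
    return calls_per_frame * fps

# 20k draw calls per frame in AoS:
print(draw_calls_per_second(20_000, 30))  # 600000 at 30 fps
print(draw_calls_per_second(20_000, 60))  # 1200000 at 60 fps
```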

 

Anyway, all that matters is the difference between AMD's and Nvidia's drivers. I am not comparing graphics performance or graphics cards, I'm not comparing AMD vs Nvidia GPUs, but rather their driver overhead. However the test counts draw calls, it counts them the same way whichever graphics card you use, AMD's or Nvidia's. And if using Nvidia's card and driver gives you a higher draw call count than using AMD's GPU and driver, then obviously Nvidia's driver has less overhead.

There's proof of this. Just look at the DX11 benchmarks in AoS: Nvidia completely destroys AMD. And then when you look at the draw calls, 900k vs 1.3M, it makes sense. If I had a faster CPU, both numbers would be higher, but the difference would still be there.

 

I literally linked the official technical guide to 3DMark, and even quoted the important parts. It clearly states it's not a benchmark, and that you cannot compare results between vendors whatsoever, or even between different GPU architectures from the same vendor. So your 900k versus 1.3 million draw call comparison cannot be made, according to the makers of 3DMark themselves.

It only compares APIs on that particular system, and in a way that is completely unrealistic for a game (the guide states this too).

 

AoS is not draw call limited on AMD on DX11 when it only goes to 20k draw calls. The problem is not having proper multithreading, thus spamming core 0 on the CPU, as well as not taking advantage of the GCN architecture, which they do on DX12 by using async compute. That is GCN's strength via its ACEs, which are made for async compute. That has nothing to do with draw calls, making your conclusion incorrect.

In fact, considering Nvidia might have to emulate async compute in their drivers, using the CPU, we might see Nvidia having a lot higher CPU overhead in DX12 than AMD. But we will see once Nvidia activates async compute in new drivers.

 

DX12 can handle 600k draw calls per frame.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


-snip-

 

The number of MHz a GPU OCs by is largely irrelevant when comparing two completely different architectures.

 

AMD cards don't OC to crazy numbers, but you usually get decent performance out of what you do get, at least from what I have noticed from the 7000 series onwards.

 

Maxwell, on the other hand, gets crazy 300 MHz+ overclocks, and still you have situations where a Maxwell chip with a crazy OC is neck and neck with a GCN chip running what would be considered a modest OC by Maxwell standards.

 

People get too caught up in numbers. It just all depends on how the chip scales with a faster core clock. These have just been my observations since building my first PC.
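One way to keep the numbers honest: compare the fps gain you actually measured against the clock gain you applied. A minimal sketch; the clocks and framerates here are made up for illustration, not benchmark data:

```python
# How much of a core overclock actually shows up as performance?
# scaling = relative fps gain / relative clock gain (1.0 = perfect scaling)
def oc_scaling(base_clock_mhz, oc_clock_mhz, base_fps, oc_fps):
    clock_gain = oc_clock_mhz / base_clock_mhz - 1
    fps_gain = oc_fps / base_fps - 1
    return fps_gain / clock_gain

# Hypothetical: a +300 MHz OC (1000 -> 1300 MHz) that only lifts 60 fps to 66
print(round(oc_scaling(1000, 1300, 60, 66), 2))  # 0.33: only a third of the clock gain shows up
```

On that measure, a big MHz number with poor scaling can easily lose to a modest OC that scales well.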


Cpu: Ryzen 9 3900X – Motherboard: Gigabyte X570 Aorus Pro Wifi  – RAM: 4 x 16 GB G. Skill Trident Z @ 3200mhz- GPU: ASUS  Strix Geforce GTX 1080ti– Case: Phankteks Enthoo Pro M – Storage: 500GB Samsung 960 Evo, 1TB Intel 800p, Samsung 850 Evo 500GB & WD Blue 1 TB PSU: EVGA 1000P2– Display(s): ASUS PB238Q, AOC 4k, Korean 1440p 144hz Monitor - Cooling: NH-U12S, 2 gentle typhoons and 3 noiseblocker eloops – Keyboard: Corsair K95 Platinum RGB Mouse: G502 Rgb & G Pro Wireless– Sound: Logitech z623 & AKG K240


The number of MHz a GPU OCs by is largely irrelevant when comparing two completely different architectures.

 

AMD cards don't OC to crazy numbers, but you usually get decent performance out of what you do get, at least from what I have noticed from the 7000 series onwards.

 

Maxwell, on the other hand, gets crazy 300 MHz+ overclocks, and still you have situations where a Maxwell chip with a crazy OC is neck and neck with a GCN chip running what would be considered a modest OC by Maxwell standards.

 

People get too caught up in numbers. It just all depends on how the chip scales with a faster core clock. These have just been my observations since building my first PC.

If you followed the conversation from the beginning, that was my point. I also showed that those crazy overclocks don't translate to much for anything that isn't a 980 Ti.



My bad, as I had never seen these benchmarks. However, the 980 Ti still seems to be the outlier in the Maxwell line. I assume you've seen the video I was referencing in my response to him, but in case you haven't, here it is.

 

The core of the 970 is OC'ed to the same levels as the 980 Ti, but the improvement in performance is nowhere close, and like I said, it still lost to a 390 that was OC'ed by only 160 MHz.

Yeah, I saw that video before. There is no doubting that the 390 is the obvious winner against the 970; stock vs stock, OC vs OC, it's just an all-around better card. There is not a single reason I could think of for someone to take a 970 over a 390, unless you absolutely needed the Nvidia software (GeForce Experience, Shield streaming, etc.). It is also easy to argue that the GTX 980 is no longer worth getting over a 390X. In many benches, the 980 outperforms the 390X at 1080p, but as soon as you hit 1440p or 4K, the 390X comes out on top, at a much lower price point. AMD priced their hardware to be super competitive. People just need to get over the old horror stories regarding AMD's drivers.

 

Source on my 390x vs GTX 980 claim: http://www.eurogamer.net/articles/digitalfoundry-2015-radeon-r9-390x-review

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


Yeah, I saw that video before. There is no doubting that the 390 is the obvious winner against the 970; stock vs stock, OC vs OC, it's just an all-around better card. There is not a single reason I could think of for someone to take a 970 over a 390, unless you absolutely needed the Nvidia software (GeForce Experience, Shield streaming, etc.). It is also easy to argue that the GTX 980 is no longer worth getting over a 390X. In many benches, the 980 outperforms the 390X at 1080p, but as soon as you hit 1440p or 4K, the 390X comes out on top, at a much lower price point. AMD priced their hardware to be super competitive. People just need to get over the old horror stories regarding AMD's drivers.

 

Source on my 390x vs GTX 980 claim: http://www.eurogamer.net/articles/digitalfoundry-2015-radeon-r9-390x-review

Don't worry, you don't have to give me a source for your claim, because I've known it to be fact for a while. When GPUs trade blows like the 390X and 980 do, it all comes down to price. The price of the 980 never made sense at launch, and it makes even less sense now that the 390X exists, even with the price drop on the 980.



Don't worry, you don't have to give me a source for your claim, because I've known it to be fact for a while. When GPUs trade blows like the 390X and 980 do, it all comes down to price. The price of the 980 never made sense at launch, and it makes even less sense now that the 390X exists, even with the price drop on the 980.

The price of the 980 has always bothered me. It performs 20% faster than a 970 on average, but costs 57% more? Where is the sense in that? It's part of the reason why I absolutely despise the existence of the Titan X. It is a Titan that can't do Titan things. Titans were known for their insane DP performance, and the Titan X just ruins that entirely. Not to mention its price tag, when the 980 Ti outperforms it once OC'd (unless you slap a waterblock on the Titan X, which only adds even more cost to an already expensive card). I have friends on Skype that are huge Nvidia fanboys who question the existence of the Titan X. That's when you know something is wrong.
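That 20%-faster / 57%-pricier complaint is really a performance-per-dollar calculation. A quick sketch using only the percentages above (actual street prices vary):

```python
# Relative value of card B vs card A, given a performance ratio and a price ratio.
# A result below 1.0 means B delivers less performance per dollar than A.
def relative_value(perf_ratio: float, price_ratio: float) -> float:
    return perf_ratio / price_ratio

# 980 vs 970: 20% faster, 57% more expensive
v = relative_value(1.20, 1.57)
print(round(v, 2))  # 0.76: roughly 24% less performance per dollar
```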



I literally linked the official technical guide to 3DMark, and even quoted the important parts. It clearly states it's not a benchmark, and that you cannot compare results between vendors whatsoever, or even between different GPU architectures from the same vendor. So your 900k versus 1.3 million draw call comparison cannot be made, according to the makers of 3DMark themselves.

It only compares APIs on that particular system, and in a way that is completely unrealistic for a game (the guide states this too).

 

AoS is not draw call limited on AMD on DX11 when it only goes to 20k draw calls. The problem is not having proper multithreading, thus spamming core 0 on the CPU, as well as not taking advantage of the GCN architecture, which they do on DX12 by using async compute. That is GCN's strength via its ACEs, which are made for async compute. That has nothing to do with draw calls, making your conclusion incorrect.

In fact, considering Nvidia might have to emulate async compute in their drivers, using the CPU, we might see Nvidia having a lot higher CPU overhead in DX12 than AMD. But we will see once Nvidia activates async compute in new drivers.

 

DX12 can handle 600k draw calls per frame.

 

You misinterpreted 3DMark's documentation.

 

 

The API Overhead feature test is not a general-purpose GPU benchmark, and it should not be used to compare graphics cards from different vendors.

 

I didn't. I compared driver overhead.

 

 

you should be careful making conclusions about GPU performance when comparing API Overhead test results from different systems.

 

I didn't make any conclusions about GPU performance, but about CPU performance due to driver overhead.

 

 

Likewise, it could be misleading to credit the GPU for any difference in DirectX 12 performance between an AMD GPU and an NVIDIA GPU. 

 

I didn't credit any GPUs for performance difference, but rather drivers.

 

 

The proper use of the test is to compare the relative performance of each API on a single system, rather than the absolute performance of different systems.

 

And the reason they say this is because:

 

 

Or, you could test a vendor's range of GPUs, from budget to high-end, and keep the CPU fixed. But in both cases, the nature of the test means it will not show you the extent to which the performance differences are due to the hardware and how much is down to the driver.

 

However, they also say that the test is not GPU intensive at all. Sure, theoretically, if you had a crap GPU it could interfere with your testing, because it would be so slow it couldn't even render as many draw calls as your CPU is able to issue, in which case your test would be flawed. In my case, I was CPU-bound, so this is not an issue.

 

 

AoS is not draw call limited on AMD on DX11, when it only goes to 20k draw calls. The problem is not having proper multithreading, thus spamming core 0 on the CPU

 

Lol, what? Everybody knows AoS is draw call limited on AMD's DX11. Why do you think they've advertised DX12 so much? Why do you think they developed Mantle? Why is AoS even interested in DX12? Of course it's limited by draw calls; 20k draw calls per frame is a large number of draw calls. And regarding multithreading, that's exactly how DX12 works and is able to make more draw calls: it uses more cores.

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


The price of the 980 has always bothered me. It performs 20% faster than a 970 on average, but costs 57% more? Where is the sense in that? It's part of the reason why I absolutely despise the existence of the Titan X. It is a Titan that can't do Titan things. Titans were known for their insane DP performance, and the Titan X just ruins that entirely. Not to mention its price tag, when the 980 Ti outperforms it once OC'd (unless you slap a waterblock on the Titan X, which only adds even more cost to an already expensive card). I have friends on Skype that are huge Nvidia fanboys who question the existence of the Titan X. That's when you know something is wrong.

I guess you could think of it as a 1%er product. Even if it's less capable than its predecessors, there are still people who will buy them for the bragging rights, and I don't really mind Nvidia making a product for those people. Prices of 1%er products have never made sense, no matter what the product is. At that point, it's just a luxury item.

 

On the 980: it's always been a fact that the higher up you go, the less bang you get for your buck, aka performance per dollar (this kind of ties into my first paragraph). However, it's never been this bad, and that should be scary to even the most die-hard of Nvidia fans. Graphics cards are getting way more expensive for what they're actually offering, on a level that I don't think we've ever seen (the 980, 960 and Fury X are the biggest current culprits). I understand the reasoning behind AMD saying they don't want to be the budget option anymore, but this scares me going forward.



I have a 970, and honestly I don't give a damn about it getting rekt anymore.


My current OC on my Sapphire 7950. Max temps are 69-73°C.

 

[screenshot: overclock settings]

 

Nice, you'll probably be able to run Battlefront at beyond-380 performance and maybe even higher than 280X levels. Sadly my 7950 can't overclock as well, although that's because I'm unwilling to overvolt it to the 7950B/280 voltage levels even though I know it's safe. I keep mine at 1 GHz and max power level at the stock voltage of 1.09 V for the much lower power consumption lol. I'm surprised your memory is able to clock that high despite it being the less overclockable Elpida memory; mine doesn't like to be overclocked at all.


I don't think he's fanboying, really; the 980 is faster than the 290X. The 290X is closer to the 970 than the 980. It's possible that the game engine prefers AMD architectures, but anyone who knows how to read benchmarks should know the 980 is quite a bit faster than the 290X.

techpowerup.com

Go look under "Database", then "GPU Database".

Find a 980 and a 290X... you will see that the 290X is breathing down the neck of the 980 in every way, hardware-wise.

Actually... here, click the links:

http://www.techpowerup.com/gpudb/2460/radeon-r9-290x.html

http://www.techpowerup.com/gpudb/2621/geforce-gtx-980.html

this is the 970

http://www.techpowerup.com/gpudb/2620/geforce-gtx-970.html

What you are looking for is:

GPixel/s, GTexel/s, floating point...

Look how MASSIVE the difference in texels and compute is.

Yes, in pixel fill rate AMD does lose out; the GCN architecture is ROP-limited to some degree. However, the performance of the Hawaii core is through the roof.

Compute-wise it is on the level of a 980 Ti (reference vs reference, stock vs stock)... what does that tell you about "unused potential"?

If AMD reached Nvidia's level of driver efficiency, a 390 would directly compete with a 980, and a Fury would beat out even custom 980 Tis.
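For what it's worth, those TechPowerUp numbers fall straight out of the spec sheet: pixel rate = ROPs × clock, texture rate = TMUs × clock, FP32 throughput = 2 × shaders × clock. A sketch at reference clocks (boost clocks and partner cards will differ):

```python
def theoretical_rates(shaders, tmus, rops, clock_mhz):
    """Peak rates at a given core clock: (GPixel/s, GTexel/s, GFLOPS FP32)."""
    ghz = clock_mhz / 1000
    return rops * ghz, tmus * ghz, 2 * shaders * ghz

# R9 290X reference: 2816 shaders, 176 TMUs, 64 ROPs @ 1000 MHz
print(theoretical_rates(2816, 176, 64, 1000))   # (64.0, 176.0, 5632.0)

# GTX 980 reference: 2048 shaders, 128 TMUs, 64 ROPs @ 1126 MHz base
px, tx, flops = theoretical_rates(2048, 128, 64, 1126)
print(round(px), round(tx), round(flops))       # 72 144 4612
```

So on paper the 290X leads in texture rate (176 vs ~144 GTexel/s) and compute (~5.6 vs ~4.6 TFLOPS) while trailing slightly in pixel fill, which is exactly the pattern described above.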


Here's hoping the multi-GPU optimization is good.

I've never had any issues with the Frostbite engine before, so I'm hopeful.

Higher frame rate over higher resolution.

CPU-i5 4690k -GPU-MSI 970 sli -Mobo-MSI g45 gaming -Memory-16gb crucial ballistix -PSU- EVGA 80+ gold g2 850w -Case- corsair 200r

Monitors- Acer XB240H, Asus ROG Swift, Dell P2815Q 2160p  -Keyboard- Corsair k70 RGB -Mouse- Corsair M65 -Mouse Pad- Glorious Extended Pad -Headphone- BeyerDynamic DT990 250ohm, Senheiser HD 518, Fiio E10k


So... apart from the benefits of the extra VRAM, some games are initially appearing to run better on the 390X. Selling my GTX 970 for it is the right choice (4GB of VRAM isn't enough for me; at 1080p alone I use over 2.5GB).

 

Edit: BTW, I'm actually going to need to use CrossFire, since my GTX 970 is struggling so badly ATM: in modded Skyrim outdoors I'm lucky to hit higher than 45 fps; indoors the load drops to 60% and the FPS is just fine at 60.

The 390X is approximately 12-15% faster than a 970, reference vs reference... the custom versions are 50 MHz above stock speeds...

That being said, if you can afford 2x 390X... and your power supply can handle it...

You will "finally" be able to push that i7 of yours... trust me.. you CAN hit CPU limits with dual 390Xs...

That being said... it will be nice to see someone else use CF who actually has the balls to speak out about his experiences... it's been a bit tedious being a one-man crusade against the bullshit about massive CF stuttering all over the place... It is not gone, but it is not even a shadow of what it used to be in the 7000 series...


Damn. I'm swapping my GTX 970 for a 290X.

I don't think my 1580 MHz OC can fill the huge gap...

AMD is winning...


>MFW bought a 780

The engine roars but then it gives, but never dies

We don't live we just survive

On the scraps that you throw awaaaaaay


I would like to see the same benchmark tests done with different CPUs as well. A 5960X has great multicore performance, but not as strong single-core as something like a 5820K.


The 390X is approximately 12-15% faster than a 970, reference vs reference... the custom versions are 50 MHz above stock speeds...

That being said, if you can afford 2x 390X... and your power supply can handle it...

You will "finally" be able to push that i7 of yours... trust me.. you CAN hit CPU limits with dual 390Xs...

That being said... it will be nice to see someone else use CF who actually has the balls to speak out about his experiences... it's been a bit tedious being a one-man crusade against the bullshit about massive CF stuttering all over the place... It is not gone, but it is not even a shadow of what it used to be in the 7000 series...

So instead of being RIP graphics cards, it would be RIP CPU? If that's the case, I'm glad I didn't stick with the i5 4440.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


780 Ti and 970 losing to 290. Ouch. I'll do my own benchmarks when the beta comes out.

Relative performance may be lower than the R9 290, but absolute performance is higher than in games like Witcher 3 and GTA V. All these cards are maxing out the game at ultra. Based on the beta feedback, I think Nvidia users too will be happy with their silky-smooth gameplay experience, unless they start digging into benchmarks and obsessing over AMD numbers.

DICE games are always going to favor AMD. Good to see it's well optimised; I'll be playing at 9.


Relative performance may be lower than the R9 290, but absolute performance is higher than in games like Witcher 3 and GTA V. All these cards are maxing out the game at ultra. Based on the beta feedback, I think Nvidia users too will be happy with their silky-smooth gameplay experience, unless they start digging into benchmarks and obsessing over AMD numbers.

Different engine, different performance. Frostbite's always better on GCN


Different engine, different performance. Frostbite's always better on GCN

Battlefield 4 runs slightly better on NV hardware... so idk what you're talking about...

Intel Core i7 7800x @ 5.0 Ghz with 1.305 volts (really good chip), Mesh OC @ 3.3 Ghz, Fractal Design Celsius S36, Asrock X299 Killer SLI/ac, 16 GB Adata XPG Z1 OCed to  3600 Mhz , Aorus  RX 580 XTR 8G, Samsung 950 evo, Win 10 Home - loving it :D

Had a Ryzen before ... but  a bad bios flash killed it :(

MSI GT72S Dominator Pro G - i7 6820HK, 980m SLI, Gsync, 1080p, 16 GB RAM, 2x128 GB SSD + 1TB HDD, Win 10 home

 


Battlefield 4 runs slightly better on NV hardware... so idk what you're talking about...

The denial is real :D

 

Different engine, different performance. Frostbite's always better on GCN

Mantle* runs better on GCN

Frostbite runs well on both

 

Relative performance may be lower than the R9 290, but absolute performance is higher than in games like Witcher 3 and GTA V. All these cards are maxing out the game at ultra. Based on the beta feedback, I think Nvidia users too will be happy with their silky-smooth gameplay experience, unless they start digging into benchmarks and obsessing over AMD numbers.

*cough* 960 40 fps peasantry *cough*

Archangel (Desktop) CPU: i5 4590 GPU:Asus R9 280  3GB RAM:HyperX Beast 2x4GBPSU:SeaSonic S12G 750W Mobo:GA-H97m-HD3 Case:CM Silencio 650 Storage:1 TB WD Red
Celestial (Laptop 1) CPU:i7 4720HQ GPU:GTX 860M 4GB RAM:2x4GB SK Hynix DDR3Storage: 250GB 850 EVO Model:Lenovo Y50-70
Seraph (Laptop 2) CPU:i7 6700HQ GPU:GTX 970M 3GB RAM:2x8GB DDR4Storage: 256GB Samsung 951 + 1TB Toshiba HDD Model:Asus GL502VT

Windows 10 is now MSX! - http://linustechtips.com/main/topic/440190-can-we-start-calling-windows-10/page-6


*cough* 960 40 fps peasantry *cough*

46 fps with absolutely maxed-out ultra settings at 1080p, with a budget/midrange GPU.

I am not denying that AMD is faster. They are. Just saying that NVIDIA users will have a very good experience too.

