
The GTX 970 vs. R9 390 thread to (theoretically) end them all

5 minutes ago, Majestic said:

No, I'm saying that an engine which puts significantly more stress on the CPU has to affect graphics hardware that suffers from driver overhead more. Just as Digital Foundry and PCLAB.pl found games where driver overhead is an issue and ones in which it is not. The point was that because there was no significant difference between the 970/390 delta in such vastly different engines, the CPU wasn't a factor in either scenario. Which means that the 6700K @ 4.5 GHz obscures some of the more nuanced results you'd get running a lower-end CPU. Which was the whole point of both DF and PCLAB doing the tests.

 

Are you saying the FX-8350 isn't worse than a Skylake CPU in Far Cry 4? In that case, you're incredibly disingenuous and we're done here.

Again, semantics. The point was that FC4 scaled heavily with the CPU and SOM didn't, and yet the deltas are the same.

Yes, I know the 390 is faster hardware-wise. On a fast. Fucking. CPU.

On a fast. C. P. U.

But then again, define a FAST CPU?

In a title that uses a MAX of 4 cores, a 5820K will do about as well as an i3 or a locked i5 (Haswell), but worse than a locked Skylake i5.

Why?

The Skylake chip's IPC is higher, but the clock speeds are about the same.

Equally, a Core i7-4790K, with its 4 GHz stock and 4.4 GHz Turbo, will beat the Skylake i5 in the same game, because the clock speed counteracts the IPC deficit.

However, here comes the fallacy of some of these tests (regardless of who tests). Many benchmarkers do not have all these CPUs; instead they SIMULATE them by downclocking and disabling cores. For the most part that is valid, except you end up with an "i3" that has a gigantic L3 cache. And to this day, no one has bothered to check how much that cache can or cannot affect CPU performance. It is a wildcard that no one seems interested in looking into, yet they keep benchmarking as if it couldn't possibly have any effect. (A rough way to probe it is sketched below.)
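For what it's worth, the cache wildcard is testable. Here is a minimal sketch of the kind of probe that would do it: a randomized pointer chase timed at growing working-set sizes. The sizes and the expected latency cliff are my assumptions, not data from DF or PCLAB.pl; the idea is that on a real i3 (3 MB L3) latency should jump once the working set passes a few MB, while a "simulated i3" cut down from a chip with 15-20 MB of L3 stays fast much longer.

```cpp
// Minimal sketch of an L3-size probe: time a randomized pointer chase at
// growing working-set sizes. All buffer sizes are assumptions for illustration.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

// Nanoseconds per hop for a pointer chase over `n` 8-byte slots.
static double chase_ns_per_hop(std::size_t n) {
    std::vector<std::size_t> next(n);
    std::iota(next.begin(), next.end(), std::size_t{0});

    // Sattolo's algorithm builds a single random cycle, so the chase really
    // touches every slot instead of spinning in a small, cache-hot loop.
    std::mt19937_64 rng(42);
    for (std::size_t k = n - 1; k > 0; --k) {
        std::uniform_int_distribution<std::size_t> pick(0, k - 1);
        std::swap(next[k], next[pick(rng)]);
    }

    const std::size_t hops = 5'000'000;
    std::size_t i = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t h = 0; h < hops; ++h) i = next[i];
    auto t1 = std::chrono::steady_clock::now();

    volatile std::size_t sink = i;  // keep the loop from being optimized out
    (void)sink;
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / hops;
}

int main() {
    // 1 MB fits any L3 discussed here; 32 MB fits none of them.
    for (std::size_t mb : {1, 2, 4, 8, 16, 32}) {
        std::size_t slots = mb * 1024 * 1024 / sizeof(std::size_t);
        std::printf("%2zu MB working set: %6.1f ns/access\n", mb, chase_ns_per_hop(slots));
    }
}
```

If game working sets routinely sit in that 4-20 MB gap, a simulated i3 would overstate real i3 performance. That's exactly the unchecked variable.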

 

I know Digital Foundry has simulated i3s a few times; generally they are very tidy about it and actually say straight out that they simulate their i3s. In the case of the 6100, they own one of those (you can see the box on a shelf in some videos). So that is fine, no simulation going on there.

Again, we return to the semantics I keep arguing about when it comes to PCLAB.pl: do they simulate, or do they OWN the i3 or i5 used for testing? Are you, Majestic, fluent enough in Polish to translate this?

Because when it comes to nitpicky grammar, I do not trust Google Translate AT ALL. I've seen how "accurate" Google Translate is with my own language. If you write simple text, fine. But add any technical jargon or particular way of wording things, and its accuracy sinks like a rock in water.

It all comes down to semantics, a few simple words. But in this case, those words can mean the difference between an accurate measurement and an approximated one.


2 minutes ago, Prysin said:

But then again, define a FAST CPU? […]

I can tell you from my own testing that the overhead drops significantly when switching from AMD to Nvidia. I ran a 4690K OC'd to 4.6 and a 4790K OC'd to 4.6 in Fallout 4 with AMD, so I knew my CPU load; when I switched to Nvidia the change was substantial. So much so that I bet I don't even need my 4790K. Whereas my CPU load would spike to 70% and hover around 55% with the 290 and my 4790K, with the 980 Ti I see rare spikes to 50% and I'm usually in the 30s. Sometimes the CPU is just idling.


1 minute ago, Prysin said:

But then again, define a FAST CPU? […]

Fair points.

 

A CPU that removes most of the bottlenecking for a 390 is typically an overclocked 4690K (4.7 GHz) or better, which isn't always what's in mainstream builds. That's roughly 50% faster in clock speed (4.7 / 3.1 ≈ 1.52) than your run-of-the-mill 3.1 GHz i5-4460, which many here believe will not bottleneck any GPU to any significant degree. Both DF and PCLAB.pl beg to differ. And usually the bad AMD performance just gets blamed on "Gimpworks".

 

And yes, I know the problem with simulating the i3 and i5. As for PCLAB.pl, they do seem to actually own the i3-6100, since it was provided to them by a sponsor: http://pclab.pl/art66945.html (bottom of the page). JayzTwoCents has made this error, though, as has nl.hardware.info. I've mentioned to them on several occasions that a 2C/4T CPU with 20 MB of cache...isn't quite the same as a 2C/4T chip with 3 MB of cache.

I'll see if I can translate some of it with morgan_mlgman, because I'm Dutch, not Polish.


5 minutes ago, Majestic said:

Fair points. […]

My 4690K overclocked to 4.6 GHz had no such luck. I lost massive amounts of frames and had stuttering issues, more with my 390 than my 290 though.


5 minutes ago, Majestic said:

Fair points. […]

Except, even with a 6700K @ 4.7 GHz, according to PCLAB.pl a 960 is faster than a 380X, when every other benchmark shows a 960 being slower than even a 380. You've been sidestepping a lot of things I've called you out on, and you keep posting things that don't even agree with you.


42 minutes ago, App4that said:

My 4690K overclocked to 4.6 GHz had no such luck. I lost massive amounts of frames and had stuttering issues, more with my 390 than my 290 though.

Your case isn't a good comparison. As for your 290 + 390, the issues were very likely of your own making rather than AMD's or whoever made your GPUs.

And since you refused to take advice and dismissed the idea that you caused the issue, blaming others before yourself proves nothing.


Just now, Prysin said:

Your case isn't a good comparison. As for your 290 + 390, the issues were very likely of your own making rather than AMD's or whoever made your GPUs.

And since you refused to take advice and dismissed the idea that you caused the issue, blaming others before yourself proves nothing.

What? I made several topics and spent weeks trying to resolve the issue. I spent days just trying different CrossFire profiles.

If you have any concern for karma, think before your next post. I did everything in my power and tried everything suggested. I even made a topic before buying the 390. Sadly, buying a second Vapor-X wasn't an option for me, though that was my intent.


2 minutes ago, App4that said:

What? I made several topics and spent weeks trying to resolve the issue. […]

And you were told, what, 200 times that mixing those two was a bad idea...


3 minutes ago, Prysin said:

And you were told, what, 200 times that mixing those two was a bad idea...

You were there? No, you weren't. The choice was a 290X Tri-X and a Nitro 390 OC. AMD lists both as CrossFire-compatible with the Vapor-X 290. And they are, in synthetic benchmarks.

 

Here's proof: [3DMark result screenshot]


57 minutes ago, ivan134 said:

Except, even with a 6700K @ 4.7 GHz, according to PCLAB.pl a 960 is faster than a 380X […]

You already made that point, and I've already pointed out where you were wrong.


1 hour ago, App4that said:

I can tell you from my own testing that the overhead drops significantly when switching from AMD to Nvidia. […]

With drivers from 2010 on my HD 5650 or HD 4250, I have this laptop's Phenom II N970 idling lower than my 4790K with the same programs and my GTX 970 on 355.98.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

3 minutes ago, Majestic said:

You already made that point, and I've already pointed out where you were wrong.

I was wrong about what? The 2nd benchmark you posted also showed a 960 faster than both the 380 and 380x.


1 hour ago, ivan134 said:

I was wrong about what? The 2nd benchmark you posted also showed a 960 faster than both the 380 and 380x.

A 6700K at 4.7 GHz paired with an Nvidia GPU still has higher draw-call throughput than the same CPU paired with an AMD GPU. A fast CPU doesn't necessarily get rid of the driver overhead; it only does so if the number of draw calls the scene needs is no more than what the CPU can issue while still fully feeding the 380 and 380X. If the scene needs more than that, the 960 will still win because of its higher draw-call throughput. So it depends on how many draw calls are needed to render each frame (see the sketch below).
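This argument reduces to a two-stage pipeline, and a toy model makes it concrete: effective fps is the minimum of the GPU-limited rate and the CPU/driver draw-call-limited rate. Every number in this sketch is a made-up placeholder (the fps caps, the draw-call rates, the crossover point), chosen only to illustrate how the winner can flip with scene complexity, not to reproduce anyone's benchmark.

```cpp
// Toy model of the draw-call argument: effective fps is the minimum of the
// GPU-limited rate and the CPU/driver-limited rate. All throughput numbers
// below are invented placeholders, not measurements.
#include <algorithm>
#include <cstdio>

struct Setup {
    const char* name;
    double gpu_fps_cap;    // fps the GPU could render if never starved
    double draws_per_sec;  // draw calls the CPU + driver can issue per second
};

static double fps(const Setup& s, double draws_per_frame) {
    // Frame rate is capped by whichever stage runs out first.
    return std::min(s.gpu_fps_cap, s.draws_per_sec / draws_per_frame);
}

int main() {
    // Hypothetical: the 380X has more raw GPU throughput, but the DX11
    // driver path sustains fewer draw calls per second on the same CPU.
    const Setup s960  = {"GTX 960", 55.0, 1'400'000.0};
    const Setup s380x = {"R9 380X", 70.0,   900'000.0};

    for (double draws : {5'000.0, 15'000.0, 30'000.0}) {
        std::printf("%5.0f draws/frame: %s %5.1f fps | %s %5.1f fps\n",
                    draws, s960.name, fps(s960, draws),
                    s380x.name, fps(s380x, draws));
    }
}
```

With these invented numbers the 380X wins the light scenes and the 960 pulls ahead around 30,000 draws per frame. The flip comes from the scene's draw-call count, not from the CPU getting any slower.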


30 minutes ago, Monarch said:

A 6700K at 4.7 GHz paired with an Nvidia GPU still has higher draw-call throughput than the same CPU paired with an AMD GPU. […]

Oh cool, the other mental gymnastics gold medalist is here. Go back through the thread and compare PCLab's 960 vs. 380 and 380X results, in their 380X review and in the study where they make the CPU argument, with the GTA 5 benchmarks of every other outlet out there. I'm not sure if you people are trolls or have the reasoning skills of an ostrich when you accuse people of cherry-picking while there is literally only one site that corroborates your arguments. Please, show who else has the 960 faster than a 380 and 380X in GTA 5, unless your new argument is going to be that a 6700K at 4.7 GHz is a weak CPU?



4 hours ago, ivan134 said:

Oh cool, the other mental gymnastics gold medalist is here. […]

Apparently others have been instructed to avoid benchmarking draw-call-heavy scenes, and to compare AMD cards only to the reference versions of their Nvidia counterparts, to make AMD cards look better.

I've seen those PCLab benchmarks, and they clearly show a CPU bottleneck.

 

[PCLab GTA V 1920×1080 benchmark chart]

 

Even the 390 is a bit slower than the 960, which indicates the CPU is bottlenecking it. That's how bad AMD's driver is. And with an i5, the bottlenecking would be even worse.


3 minutes ago, Monarch said:

Apparently others have been instructed to avoid benchmarking draw-call-heavy scenes […]

Something else looks really wrong here. I know AMD's drivers aren't the best, but I genuinely doubt the CPU is being hammered so hard that the 2GB Asus 960 beats a 390.


15 minutes ago, Dan Castellaneta said:

Something else looks really wrong here. I know AMD's drivers aren't the best, but I genuinely doubt the CPU is being hammered so hard that the 2GB Asus 960 beats a 390.

Here's what Mahigan said on overclock.net:

Quote

 

People are wondering why Nvidia is doing a bit better in DX11 than DX12. That's because Nvidia optimized their DX11 path in their drivers for Ashes of the Singularity. With DX12 there are no tangible driver optimizations, because the game engine speaks almost directly to the graphics hardware, so none were made. Nvidia is at the mercy of the programmers' talents as well as their own Maxwell architecture's thread-parallelism performance under DX12. The developers programmed for thread parallelism in Ashes of the Singularity in order to better draw all those objects on the screen. Therefore what we're seeing with the Nvidia numbers is the Nvidia draw-call bottleneck showing up under DX12. Nvidia works around this with its own optimizations in DX11 by prioritizing workloads and replacing shaders. Yes, the Nvidia driver contains a compiler which re-compiles and replaces shaders that are not fine-tuned to their architecture, on a per-game basis. Nvidia's driver is also multi-threaded, making use of idling CPU cores to recompile/replace shaders. The work Nvidia does in software under DX11 is the work AMD does in hardware under DX12, with their Asynchronous Compute Engines.

But what about poor AMD DX11 performance? Simple. AMD's GCN 1.1/1.2 architecture is suited to parallelism. It requires the CPU to feed the graphics card work. This creates a CPU bottleneck on AMD hardware under DX11 at low resolutions (say 1080p, and even 1600p for the Fury X), as DX11 is limited to 1-2 cores for the graphics pipeline (which also needs to take care of AI, physics etc.). Replacing or re-compiling shaders is not a solution for GCN 1.1/1.2, because AMD's Asynchronous Compute Engines are built to break complex workloads down into smaller, easier-to-execute ones. The only way around this issue, if you want to maximize the use of all available compute resources under GCN 1.1/1.2, is to feed the GPU in parallel... in come Mantle, Vulkan and DirectX 12.

 

http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/400
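To make the submission model Mahigan describes concrete, here is a stand-in sketch using plain C++ threads; CommandList, record(), and the draw counts are all invented for illustration, and no real D3D11/D3D12 API is used. It contrasts recording every draw on one thread (roughly the DX11 driver situation) with recording per-core command lists that are then submitted together (roughly the DX12 model).

```cpp
// Conceptual stand-in only (plain std::thread, no real graphics API): DX11
// funnels command recording through one thread, while DX12 lets the engine
// record command lists on all cores and submit them together.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct CommandList { std::vector<int> draws; };  // stand-in for an API command list

// Pretend "recording" a range of draw calls costs some CPU work per draw.
static void record(CommandList& cl, int first, int count) {
    cl.draws.reserve(cl.draws.size() + count);
    for (int d = first; d < first + count; ++d) cl.draws.push_back(d);
}

int main() {
    const int total_draws = 8'000'000;
    using clock = std::chrono::steady_clock;

    // "DX11": one thread records everything (the driver serializes it).
    CommandList serial;
    auto t0 = clock::now();
    record(serial, 0, total_draws);
    auto t1 = clock::now();

    // "DX12": N threads each record their own command list, then the lists
    // are submitted as a batch; nothing serializes the recording.
    const int n = static_cast<int>(std::max(1u, std::thread::hardware_concurrency()));
    std::vector<CommandList> lists(n);
    std::vector<std::thread> workers;
    const int chunk = total_draws / n;
    auto t2 = clock::now();
    for (int t = 0; t < n; ++t)
        workers.emplace_back(record, std::ref(lists[t]), t * chunk, chunk);
    for (auto& w : workers) w.join();
    auto t3 = clock::now();

    auto ms = [](auto d) { return std::chrono::duration<double, std::milli>(d).count(); };
    std::printf("1 thread : %.1f ms for %d draws\n", ms(t1 - t0), total_draws);
    std::printf("%d threads: %.1f ms (%d lists of %d draws)\n", n, ms(t3 - t2), n, chunk);
}
```

On a multi-core machine the parallel phase finishes several times faster. That headroom is what Mantle, Vulkan and DX12 expose, and what the DX11 path leaves on the table for GCN.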


1 hour ago, Dan Castellaneta said:

Something else looks really wrong here. I know AMD's drivers aren't the best, but I genuinely doubt the CPU is being hammered so hard that the 2GB Asus 960 beats a 390.

Lol, I'm done with them. I don't think I've ever seen fanboying or shilling of this level, and it's a colossal waste of my time to keep engaging them. Next time someone asks 970 or 390, I'll tell them 960.


2 minutes ago, ivan134 said:

Lol, I'm done with them. I don't think I've ever seen fanboying or shilling of this level, and it's a colossal waste of my time to keep engaging them. Next time someone asks 970 or 390, I'll tell them 960.

I recommend an R7 240; it beats out the 970 and 390 even in SLI and CFX respectively. I have quad-SLI 980 Tis, and when I switched all of those out for my single R7 240, not only did my framerates skyrocket, but I could now use four 4K monitors while playing GTA V on ultra at a solid 250+ fps! I love this card. Best thing is, it only cost me $67.99.


Just now, Festive said:

I recommend an R7 240; it beats out the 970 and 390 even in SLI and CFX respectively. […]

Please, be constructive, or I might have to be a bit more brash with my language.


Just now, Dan Castellaneta said:

Probably on par, depending on the game. What games?

The Witcher 3, Rise of the Tomb Raider, PlanetSide 2, Frostbite 3 games (probably the new Mass Effect).


Just now, gilang01 said:

The Witcher 3, Rise of the Tomb Raider, PlanetSide 2, Frostbite 3 games (probably the new Mass Effect).

Just go AMD in this case. Only one of the games you listed (The Witcher 3) is heavy on tessellation, and that can be adjusted if necessary.

