
How does the 290X stack up against the 970 at 1080p as of now?

dfg666

Oh, I'm sorry. You're right. People should say whatever they want and not be questioned about its validity. This place should be an echo chamber and only confirmation bias should be allowed. You should probably go back to those forums where no one ever questions anything you say.

And your responsibility is where exactly?

Oh, wait. You're an AMD fan. That's all that's required here.

If anyone asks you never saw me.


I get your point; however, when you supply an equivalent number of samples (25) showing consistently higher frametimes, that's a fair comparison, don't you think?

 

I understand that the time scale is different.

 

You actually missed the point again. Just because most frames are rendered faster by the 290X, which gives it a higher average, doesn't mean you get better performance and a better gaming experience. Once you come across an area with a lot of objects, the 290X (and other AMD cards) drops in performance, unlike the 970. I'd rather have a lower, consistent framerate of, say, 50-60 fps with the 970 than a high framerate of 70+ fps with the 290X in a GPU-bound scenario that then drops to 40 or micro-stutters in a CPU-bound scenario.

 

That's why DF said the 970 performs better in open areas. More objects to render = AMD cards perform slower. How much slower depends on the number of draw calls.

I said this already, but I'll say it again. The 290X and 390 are a bit faster than the 970 when they're not bottlenecked by AMD's driver and when there's very little to no tessellation. That's why they pull ahead in such GPU-bound scenarios. But the thing is, a lot of games do make a lot of draw calls and use a good amount of tessellation, which cripples AMD performance. And in a lot of games, this happens only on some levels and in some areas.

You asked me if I expect reviewers to play x amount of titles from start to finish to get the full picture. Of course it'd be insane to do that, but technically that's what you'd need to do. How else can you get the full picture?

 

 

What games do you get stuttering in?

 

I swear we already went over this in another thread. I remember even giving you DigitalFoundry's videos to show you the lower framerates and frametime spikes on the AMD side. I've also posted 3DMark's API overhead test results with both of my cards, which showed ~30% more draw calls with the Nvidia card. Games that have lower performance/microstuttering on my 290X are BF4, TW3, Fallout 4, Just Cause 3 and GTA V. Just search for DF's videos and you'll find they had the same issues.

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


 

You actually missed the point again. Just because most frames are rendered faster by the 290X, which gives it a higher average, doesn't mean you get better performance and a better gaming experience. Once you come across an area with a lot of objects, the 290X (and other AMD cards) drops in performance, unlike the 970. I'd rather have a lower, consistent framerate of, say, 50-60 fps with the 970 than a high framerate of 70+ fps with the 290X in a GPU-bound scenario that then drops to 40 or micro-stutters in a CPU-bound scenario.

 

That's why DF said the 970 performs better in open areas. More objects to render = AMD cards perform slower. How much slower depends on the number of draw calls.

I said this already, but I'll say it again. The 290X and 390 are a bit faster than the 970 when they're not bottlenecked by AMD's driver and when there's very little to no tessellation. That's why they pull ahead in such GPU-bound scenarios. But the thing is, a lot of games do make a lot of draw calls and use a good amount of tessellation, which cripples AMD performance. And in a lot of games, this happens only on some levels and in some areas.

You asked me if I expect reviewers to play x amount of titles from start to finish to get the full picture. Of course it'd be insane to do that, but technically that's what you'd need to do. How else can you get the full picture?

 

 

I swear we already went over this in another thread. I remember even giving you DigitalFoundry's videos to show you the lower framerates and frametime spikes on the AMD side. I've also posted 3DMark's API overhead test results with both of my cards, which showed ~30% more draw calls with the Nvidia card. Games that have lower performance/microstuttering on my 290X are BF4, TW3, Fallout 4, Just Cause 3 and GTA V. Just search for DF's videos and you'll find they had the same issues.

A plausible explanation... but I'm not sold. I rely on statistics, and I'm tough to convince without them. Suggesting that AMD cards give a poor gaming experience (70+ fps dropping to 40 fps) sounds like you've had a poor experience with your 290X (unlucky in the silicon lottery, perhaps). I've just watched a DF video on the latest Tomb Raider title comparing the 390 and the 970, and your claim doesn't seem to hold true for "all AMD" cards (granted, it's only one video).

 

You seem to be moving the goalposts: it went from frametimes to tessellation/draw calls/drivers/bottlenecks. The variables seem to be increasing (which is reality; there are a lot of variables). But please don't suggest a poorer gaming experience, because both cards have their flaws. I know because my partner has a PC with SLI Titan Blacks and it hasn't been singing and dancing from day one till now.


Depends on the 970, I'd say; the Gigabyte G1 overclocked version has a massive overclock. But in general, the info I've seen, and even the benchmarks posted in this thread, all seem to indicate the 970 pulling ahead. Am I missing something?

Don't hate the game, stab the player.


An issue that affects both cards... are you going to get to the point or what?

my point is that average framerate is far from the whole story of "gaming experience"

i never said anything about one card or the other

i just said that frame time is more important

NEW PC build: Blank Heaven   minimalist white and black PC     Old S340 build log "White Heaven"        The "LIGHTCANON" flashlight build log        Project AntiRoll (prototype)        Custom speaker project


Ryzen 3950X | AMD Vega Frontier Edition | ASUS X570 Pro WS | Corsair Vengeance LPX 64GB | NZXT H500 | Seasonic Prime Fanless TX-700 | Custom loop | Coolermaster SK630 White | Logitech MX Master 2S | Samsung 980 Pro 1TB + 970 Pro 512GB | Samsung 58" 4k TV | Scarlett 2i4 | 2x AT2020

 


A plausible explanation... but I'm not sold. I rely on statistics, and I'm tough to convince without them. Suggesting that AMD cards give a poor gaming experience (70+ fps dropping to 40 fps) sounds like you've had a poor experience with your 290X (unlucky in the silicon lottery, perhaps). I've just watched a DF video on the latest Tomb Raider title comparing the 390 and the 970, and your claim doesn't seem to hold true for "all AMD" cards (granted, it's only one video).

 

You seem to be moving the goalposts: it went from frametimes to tessellation/draw calls/drivers/bottlenecks. The variables seem to be increasing (which is reality; there are a lot of variables). But please don't suggest a poorer gaming experience, because both cards have their flaws. I know because my partner has a PC with SLI Titan Blacks and it hasn't been singing and dancing from day one till now.

 

There's nothing wrong with relying on statistics per se, but no game benchmark does it properly. You'd need an analysis of performance throughout the game, in every single situation in the game. If you only pick a certain map or level or part of one, it might use tessellation, for example, in which case it would look like Nvidia cards are superior, but that's just that one level/map. You can't understand how graphics cards stack up unless you analyse how they perform in different situations.

 

Also, I'm not changing the goal of my posts. Frametime spikes and low framerates are the consequences of tessellation and driver overhead. Draw calls, drivers and bottlenecks are not separate issues: AMD's driver has high CPU overhead, which means lower draw call throughput, which becomes the bottleneck when a game makes a lot of draw calls.

 

Take a look at this: https://youtu.be/15wOp7_dD8E?t=45s

See how the 390's performance drops significantly? In this case we're not talking about just frametimes, but framerate as well. Now, this happens only in town areas. If the benchmark didn't include this area in its test, the 390 would probably win and you'd think it's a great graphics card. Well, it isn't, because of AMD's driver.

 

If you want a microstuttering example, take a look at this: https://youtu.be/vSDQzlKDYq4?t=40s

Notice the horrible frame time spikes that only occur with AMD cards. 

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


my point is that average framerate is far from the whole story of "gaming experience"

i never said anything about one card or the other

i just said that frame time is more important

And frame time has a relationship with framerate, which I've shown (nothing about cards either). The scales are just different.


And frame time has a relationship with framerate, which I've shown (nothing about cards either). The scales are just different.

you clearly didn't read the article...

 

 

 

here is a TL;DR version since you're too lazy to read:

 

framerate is an average over a short period of time, that's why it's measured in frames per second

minimum framerate is still an average, just over a fraction of a second rather than the entire benchmark period

 

frame time is the exact number of milliseconds it took the GPU to produce a frame, and whether that was an entire frame or a fraction of a frame

half an image that causes screen tearing would theoretically count as one frame when measuring fps, but not when measuring frame time, because it never was a full image

 

also the way the GPU pushes frames to a monitor is much more complicated than 60fps in games = 60fps to your eyes

that's why frame time measurements actually take the image from the GPU output, what actually gets displayed on the monitor, and not what your computer thinks it's doing in fps
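
To make the difference concrete, here is a minimal sketch (my own illustration with made-up numbers, not taken from the article) of how an fps average can hide a single long frame:

    # Hypothetical frame times, in milliseconds, for roughly one second of gameplay.
    frame_times_ms = [16.7] * 55 + [80.0]   # 55 smooth frames plus one long stall

    total_ms = sum(frame_times_ms)
    avg_fps = len(frame_times_ms) / (total_ms / 1000.0)

    print(f"average fps: {avg_fps:.1f}")              # ~56 fps, looks fine on a chart
    print(f"worst frame: {max(frame_times_ms)} ms")   # 80 ms, a hitch you would feel

The average looks healthy, but the 80 ms frame is exactly the kind of spike a frame time plot exposes and an fps counter smooths over.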

NEW PC build: Blank Heaven   minimalist white and black PC     Old S340 build log "White Heaven"        The "LIGHTCANON" flashlight build log        Project AntiRoll (prototype)        Custom speaker project


Ryzen 3950X | AMD Vega Frontier Edition | ASUS X570 Pro WS | Corsair Vengeance LPX 64GB | NZXT H500 | Seasonic Prime Fanless TX-700 | Custom loop | Coolermaster SK630 White | Logitech MX Master 2S | Samsung 980 Pro 1TB + 970 Pro 512GB | Samsung 58" 4k TV | Scarlett 2i4 | 2x AT2020

 


Also, I'm not changing the goal of my posts. Frametime spikes and low framerates are the consequences of tessellation and driver overhead. Draw calls, drivers and bottlenecks are not separate issues: AMD's driver has high CPU overhead, which means lower draw call throughput, which becomes the bottleneck when a game makes a lot of draw calls.

 

Take a look at this: https://youtu.be/15wOp7_dD8E?t=45s

See how the 390's performance drops significantly? In this case we're not talking about just frametimes, but framerate as well. Now, this happens only in town areas. If the benchmark didn't include this area in its test, the 390 would probably win and you'd think it's a great graphics card. Well, it isn't, because of AMD's driver.

 

If you want a microstuttering example, take a look at this: https://youtu.be/vSDQzlKDYq4?t=40s

Notice the horrible frame time spikes that only occur with AMD cards. 

 

I don't fully understand what "driver overhead" is. Can somebody explain it in simple English?


I don't fully understand what "driver overhead" is. Can somebody explain it in simple English?

 

It's basically how efficient the driver is and how much CPU processing power is needed to complete a task. The GPU gets instructions from the CPU on what to render. It needs instructions for each object; this is called a draw call. Each draw call creates overhead on the CPU. Games typically make thousands of draw calls per frame. The lower the driver overhead, the more draw calls can be made.
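
As a toy model of that idea (purely illustrative; the per-call costs are invented and real drivers and engines are far more complicated), a fixed CPU cost per draw call limits how many objects can be submitted inside a frame budget:

    # Toy model of driver overhead: each draw call costs the CPU a fixed time.
    # The per-call costs below are invented, just to show the relationship.
    def max_draw_calls(frame_budget_ms: float, cost_per_call_us: float) -> int:
        return int(frame_budget_ms * 1000 / cost_per_call_us)

    budget_ms = 16.7  # time available per frame at 60 fps
    print(max_draw_calls(budget_ms, 8))   # lower-overhead driver:  ~2087 calls per frame
    print(max_draw_calls(budget_ms, 12))  # higher-overhead driver: ~1391 calls per frame

Same GPU, same scene: the driver with the higher per-call cost runs out of CPU budget sooner, which is why draw-call-heavy scenes become CPU-bound.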

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


And AMD is not offering good enough drivers? Why? 

 

P.s.

I don't mean in general, just this about driver overhead. 


And AMD is not offering good enough drivers? Why?

P.s.

I don't mean in general, just this about driver overhead.

Because they rely on hardware over software. That's the reason I went from AMD to Nvidia.

If anyone asks you never saw me.


They could change it if they wanted to, but they don't, if I understand that correctly.


Why should they? AMD sells graphics cards to customers who only look at fps; stability takes a back seat to raw performance. I'm playing Rise of the Tomb Raider right now, on the first day it's been out, and it plays beautifully. I pushed the settings until it was at 20 fps and it was still butter smooth. Nvidia might not have the impressive specs some AMD cards carry, but I'm sold on their features, like day-one drivers.

Oh, 1440p and this game. Holy fucking balls is it beautiful.

If anyone asks you never saw me.


you clearly didn't read the article...

 

 

 

here is a TL;DR version since you're too lazy to read:

 

framerate is an average over a short period of time, that's why it's measured in frames per second

minimum framerate is still an average, just over a fraction of a second rather than the entire benchmark period

 

frame time is the exact number of milliseconds it took the GPU to produce a frame, and whether that was an entire frame or a fraction of a frame

half an image that causes screen tearing would theoretically count as one frame when measuring fps, but not when measuring frame time, because it never was a full image

 

also the way the GPU pushes frames to a monitor is much more complicated than 60fps in games = 60fps to your eyes

that's why frame time measurements actually take the image from the GPU output, what actually gets displayed on the monitor, and not what your computer thinks it's doing in fps

I discard useless information; it's not being lazy.

I explained it mathematically earlier on, and now you've put it in words. There's no need to be cocky.

They are both functions of time: one a function of time in seconds, the other a function of time in milliseconds. Each frame (partial or full) takes a finite amount of time to execute, which differs from frame to frame. It's not rocket science.

Cards aside... if frame 1 executes in t1 = 15 ms, frame 2 in t2 = 35 ms, frame 3 in t3 = 20 ms, and so on, then when t1 + t2 + t3 + ... + tx = 1000 ms, it should have executed Y frames in one second, right?
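
As a worked version of that arithmetic (hypothetical numbers, just to make the relationship concrete):

    frame_times_ms = [15, 35, 20, 30, 25]   # ... and so on, until they sum to ~1000 ms

    elapsed_ms = sum(frame_times_ms)
    fps = len(frame_times_ms) / (elapsed_ms / 1000.0)
    print(f"{len(frame_times_ms)} frames in {elapsed_ms} ms -> {fps:.1f} fps")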


I discard useless information; it's not being lazy.

I explained it mathematically earlier on, and now you've put it in words. There's no need to be cocky.

They are both functions of time: one a function of time in seconds, the other a function of time in milliseconds. Each frame (partial or full) takes a finite amount of time to execute, which differs from frame to frame. It's not rocket science.

Cards aside... if frame 1 executes in t1 = 15 ms, frame 2 in t2 = 35 ms, frame 3 in t3 = 20 ms, and so on, then when t1 + t2 + t3 + ... + tx = 1000 ms, it should have executed Y frames in one second, right?

 

no, i don't think you get it at all

 

they are not the same thing

 

what you're doing is taking a framerate (fps) and dividing time by the number of frames, to get the AVERAGE time for one frame

 

what frame time is, is actually the opposite

it's time per one frame, not frames divided by time

 

and as i said, it's not as simple as "t1 = 15 ms, frame 2 in t2 = 35 ms, frame 3 in t3 = 20 ms"

if t1 is a single line of an image it would still count as one frame, which is not true, because you cannot see a single line of one image

frame time takes into account if the entire image was actually displayed or not (which is not an issue with gsync/freesync)

frames per second counts all image refreshes, even if they are less than a frame

 

frame time also includes the delay between the computer creating the frame and the frame being output by the GPU

this is something not measured at all by fps, since fps is measured internally
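
To illustrate the distinction with a sketch (invented numbers, my own illustration): converting fps into "time per frame" yields one averaged number for the whole period, while recording each frame's own time preserves the variation:

    frame_times_ms = [10, 12, 11, 90, 11, 12]   # invented per-frame measurements

    # Average time per frame, derived the "fps way": total time / frame count.
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)

    print(f"average time per frame: {avg_ms:.1f} ms")   # 24.3 ms -- hides the spike
    print(f"actual frame times:     {frame_times_ms}")  # the 90 ms frame is visible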

 

 

you seem to discard ALL information, not just useless information...

you really need to work on your text analysis skills...

 

maybe you're not so good at reading but more of a visual/aural learner?

try watching this and learn something instead

NEW PC build: Blank Heaven   minimalist white and black PC     Old S340 build log "White Heaven"        The "LIGHTCANON" flashlight build log        Project AntiRoll (prototype)        Custom speaker project


Ryzen 3950X | AMD Vega Frontier Edition | ASUS X570 Pro WS | Corsair Vengeance LPX 64GB | NZXT H500 | Seasonic Prime Fanless TX-700 | Custom loop | Coolermaster SK630 White | Logitech MX Master 2S | Samsung 980 Pro 1TB + 970 Pro 512GB | Samsung 58" 4k TV | Scarlett 2i4 | 2x AT2020

 


I assume it outperforms the 970 in most titles at 1080p by no more than 5% (since it's similar to the 390), and obviously at higher resolutions (4K) due to the massive bandwidth?

It definitely outperforms the 970. With a little overclocking, the 290X is at 980 level.


There's nothing wrong with relying on statistics per se, but no game benchmark does it properly. You'd need an analysis of performance throughout the game, in every single situation in the game. If you only pick a certain map or level or part of one, it might use tessellation, for example, in which case it would look like Nvidia cards are superior, but that's just that one level/map. You can't understand how graphics cards stack up unless you analyse how they perform in different situations.

 

Also, I'm not changing the goal of my posts. Frametime spikes and low framerates are the consequences of tessellation and driver overhead. Draw calls, drivers and bottlenecks are not separate issues: AMD's driver has high CPU overhead, which means lower draw call throughput, which becomes the bottleneck when a game makes a lot of draw calls.

 

Take a look at this: https://youtu.be/15wOp7_dD8E?t=45s

See how the 390's performance drops significantly? In this case we're not talking about just frametimes, but framerate as well. Now, this happens only in town areas. If the benchmark didn't include this area in its test, the 390 would probably win and you'd think it's a great graphics card. Well, it isn't, because of AMD's driver.

 

If you want a microstuttering example, take a look at this: https://youtu.be/vSDQzlKDYq4?t=40s

Notice the horrible frame time spikes that only occur with AMD cards.

Microstuttering... lol. If you can notice a stutter, it's not micro anymore. Milli is a 1000th, micro is a 1000000th. All this jargon is meaningless.

Hand-picked games to prove your point. I see spikes, but are these spikes sustained for long enough to have an impact on gameplay? Not that I can see. Is frametime an accurate measure? I think so. But FPS is king because it's tangible: you will notice a second sooner than a 1000th of a second.


It definitely outperforms the 970. With a little overclocking, the 290X is at 980 level.

Yep. My overclocked 290X scored 105% of a reference GTX 980's score in UserBenchmark.

CPU: AMD Ryzen 7 5800X3D GPU: AMD Radeon RX 6900 XT 16GB GDDR6 Motherboard: MSI PRESTIGE X570 CREATION
AIO: Corsair H150i Pro RAM: Corsair Dominator Platinum RGB 32GB 3600MHz DDR4 Case: Lian Li PC-O11 Dynamic PSU: Corsair RM850x White


Yay frametime argument.

 

Taken from TechReport's R9 Fury review, the same website that started frametime testing alongside PCPer.

 

[TechReport frame time charts, time spent beyond 16.7 ms: Far Cry 4, Alien: Isolation, Civilization: Beyond Earth, Crysis 3, GTA V, The Witcher 3]

 

 

[TechReport frame time chart, time spent beyond 16.7 ms: Battlefield 4]

 

AMD didn't optimize DX11 for BF4; the frametime is much better when you use Mantle.

 

 

[Shadow of Mordor frame time plot, 2560x1440]

 

 

 

Inb4 "they need to overclock that 970".

 

Look at how awesome the 295X2's frametimes are in most of the games; @Prysin must be proud.
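
For context, the "time spent beyond 16.7 ms" figures in charts like those above, as I understand the metric, add up only the portion of each slow frame that exceeds the threshold. A rough sketch of how such a number could be computed from a frame time log (my own reconstruction with invented sample data, not TechReport's tooling):

    def time_beyond_threshold_ms(frame_times_ms, threshold_ms=16.7):
        # Sum only the portion of each frame time that exceeds the threshold.
        return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

    sample = [14.0, 16.0, 40.0, 15.5, 25.0]   # invented frame times in ms
    print(f"{time_beyond_threshold_ms(sample):.1f} ms spent beyond 16.7 ms")  # 31.6 ms

The lower that number, the less time a card spends on frames slow enough to dip under 60 fps.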

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |


AMD didn't optimize DX11 for BF4; the frametime is much better when you use Mantle.

Yeah, that is reasonable, as they most likely assumed that every AMD GPU user would use it, or at least a vast majority of them.

BTW. You put Crysis 3 twice in there.

CPU: AMD Ryzen 7 5800X3D GPU: AMD Radeon RX 6900 XT 16GB GDDR6 Motherboard: MSI PRESTIGE X570 CREATION
AIO: Corsair H150i Pro RAM: Corsair Dominator Platinum RGB 32GB 3600MHz DDR4 Case: Lian Li PC-O11 Dynamic PSU: Corsair RM850x White


Yeah, that is reasonable, as they most likely assumed that every AMD GPU user would use it, or at least a vast majority of them.

BTW. You put Crysis 3 twice in there.

 

Yeah, but Mantle sucks on 2GB cards, so they shouldn't have neglected DX11 for that game imo.

 

Thanks, fixed that.

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |


Yeah, but Mantle sucks on 2GB cards, so they shouldn't have neglected DX11 for that game imo.

 

Thanks, fixed that.

Why does it suck on 2GB cards?

Now that I think of it, there aren't many fairly recent 2GB AMD cards so it's not that bad I guess

CPU: AMD Ryzen 7 5800X3D GPU: AMD Radeon RX 6900 XT 16GB GDDR6 Motherboard: MSI PRESTIGE X570 CREATION
AIO: Corsair H150i Pro RAM: Corsair Dominator Platinum RGB 32GB 3600MHz DDR4 Case: Lian Li PC-O11 Dynamic PSU: Corsair RM850x White


no, i don't think you get it at all

they are not the same thing

what you're doing is taking a framerate (fps) and dividing time by the number of frames, to get the AVERAGE time for one frame

what frame time is, is actually the opposite

it's time per one frame, not frames divided by time

and as i said, it's not as simple as "t1 = 15 ms, frame 2 in t2 = 35 ms, frame 3 in t3 = 20 ms"

if t1 is a single line of an image it would still count as one frame, which is not true, because you cannot see a single line of one image

frame time takes into account if the entire image was actually displayed or not (which is not an issue with gsync/freesync)

frames per second counts all image refreshes, even if they are less than a frame

frame time also includes the delay between the computer creating the frame and the frame being output by the GPU

this is something not measured at all by fps, since fps is measured internally

you seem to discard ALL information, not just useless information...

you really need to work on your text analysis skills...

maybe you're not so good at reading but more of a visual/aural learner?

try watching this and learn something instead

https://www.youtube.com/watch?v=2cH_ozvn0gA

https://www.youtube.com/watch?v=CsHuPxX8ZzQ

Right, let's see.

Maybe you should learn your units and conversions first. 60 frames per second is about 16.7 ms per frame (as mentioned in a previous post, ye who know how to read). Or maybe a quick maths lesson:

60 FPS = 60 frames per 1000 ms (frames per time)

In one millisecond it will execute 0.06 frames.

One frame will execute in 16.67 milliseconds (time per frame).
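
The same conversion as a trivial snippet, just to keep the units straight (nothing here beyond the arithmetic above):

    fps = 60
    ms_per_frame = 1000.0 / fps    # 16.67 ms per frame (time per frame)
    frames_per_ms = fps / 1000.0   # 0.06 frames per millisecond (frames per time)
    print(f"{ms_per_frame:.2f} ms/frame, {frames_per_ms:.2f} frames/ms")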

You fill your head with other people's ideas and try to sell them to others. Nonsense! Agreeing with videos talking about spikes and "microstuttering" in open areas: what a load of garbage. Both cards displayed microstuttering in all the games. Bollocks.

I didn't carry out any division in my previous post, so it's not an average calculation. The question mark in my previous message wasn't to seek your verification. You're more confused than I thought; you run to videos to do your explaining.


Why does it suck on 2GB cards?

Now that I think of it, there aren't many fairly recent 2GB AMD cards so it's not that bad I guess

Because Mantle uses more VRAM, mostly >2GB, in BF4.

 

Pitcairn and Tonga 2GB cards, mostly.

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |

