HardOCP blind test of Vega vs 1080 Ti

1 hour ago, MageTank said:

I keep seeing this, but I never really hear what disabled features could possibly aid in Vega's (consumer) performance. What features were disabled, and what impact did you expect it to have?

From what I can tell, it's something people on /r/AMD have repeated over and over in an attempt to keep the hype up.

Some clueless person said it while clutching at straws, some equally clueless person saw that post and parroted it, then that post was seen by more clueless people who parroted it, and so on and so forth.

I am willing to bet that 95% of the people who have posted things like "just wait for tile based rasterization" don't even know what it is.

 

It's so annoying because it's like watching the blind not only leading the blind, but actually encouraging other blind people to join them in some kind of bizarre conga line, and they are walking right towards a cliff.

 

It's the same with drivers. I have seriously seen several people say that we will see a 50+% performance increase from new drivers. I have seen at least 1 person on this forum say it, and 4 people on Reddit.

 

 

1 hour ago, Humbug said:

We really have no idea what the difference is between Vega FE and RX Vega.

Because AMD refuses to tell us until the launch of RX Vega.

 

From Raja's AMA

RA2lover: What things does the RX Vega have over the Radeon Vega FE that would make it worth the extra wait?

Raja Koduri: RX will be fully optimized gaming drivers, as well as a few other goodies that I can’t tell you about just yet….

Calling it now:

The game drivers might give a decent performance increase but not anywhere near what AMD would actually need to be competitive in terms of performance (read: not price).

 

The "few other goodies" will be a water cooler, RGB lights, that special edition with a metallic finish and/or some other useless stuff which doesn't really affect performance.

It might also be a slightly higher base clock.


5 minutes ago, LAwLz said:

From what I can tell, it's something people on /r/AMD have repeated over and over in an attempt to keep the hype up.

Some clueless person said it while clutching at straws, some equally clueless person saw that post and parroted it, then that post was seen by more clueless people who parroted it, and so on and so forth.

I am willing to bet that 95% of the people who have posted things like "just wait for tile based rasterization" don't even know what it is.

 

It's so annoying because it's like watching the blind not only leading the blind, but actually encouraging other blind people to join them in some kind of bizarre conga line, and they are walking right towards a cliff.

 

It's the same with drivers. I have seriously seen several people say that we will see a 50+% performance increase from new drivers. I have seen at least 1 person on this forum say it, and 4 people on Reddit.

I feel like I've heard this exact thing before though. Like, not even in regards to Vega. Still, they can't honestly expect that big of an improvement, given it's the exact same architecture. We didn't see improvements on that level comparing the Nvidia workhorse cards to the gaming cards (even when going as far back as Kepler's original Titans, which had a mix of both). 

 

The driver issue also seems odd, given AMD released the Vega FE with a dev and gaming driver, both of which did the exact same thing. If such were the case, it means AMD rushed the Vega FE with zero driver optimizations, and didn't take time pre-launch to sort those issues out. Not only that, but it was already proven to be a power limit and thermal issue. When GN put the card on water, and undervolted it, they were able to squeeze more clocks out of it while stabilizing the core. Unless AMD did something on these cards that give it a much better thermal profile, and improved power limit, I just don't see them clocking much higher than the Vega FE, let alone high enough to make a dramatic difference to justify the rumored costs.

 

At this point, I'll have to follow the old advice of "wait until it launches" while hoping that I am wrong. 


7 minutes ago, LAwLz said:

It's so annoying because it's like watching the blind not only leading the blind, but actually encouraging other blind people to join them in some kind of bizarre conga line, and they are walking right towards a cliff.

The only realistic rumor ive seen from r/amd is that vega is starved for bandwidth, as fury was. so, if true, amd fucked up majorly on the memory controller for hbm or the drivers might actually fix the bandwidth problem, with compression or something. Its honestly still hard to believe that amd would fail so much


10 minutes ago, LAwLz said:

The "few other goodies" will be a water cooler, RGB lights, that special edition with a metallic finish and/or some other useless stuff which doesn't really affect performance.

LOL maybe. Let's see. Hope not.

Would be pretty lame if after launching a radical new architecture Raja was just dangling that cosmetic stuff in front of us.


7 minutes ago, hobobobo said:

The only realistic rumor ive seen from r/amd is that vega is starved for bandwidth, as fury was. so, if true, amd fucked up majorly on the memory controller for hbm or the drivers might actually fix the bandwidth problem, with compression or something. Its honestly still hard to believe that amd would fail so much

Wasn't the fury lineup also ROP limited as well? I can't remember where I read it, but I am certain I saw it somewhere. 


1 hour ago, MageTank said:

I keep seeing this, but I never really hear what disabled features could possibly aid in Vega's (consumer) performance. What features were disabled, and what impact did you expect it to have?

The big ticket item is tile-based rasterization, and it's still being debated by a small few whether that will affect performance or power draw.

Someone somewhere mentioned another thing that wasn't working quite right on FE (the geometry pipeline, I think), but so far it seems like FE is either allocating resources elsewhere that could otherwise boost gaming performance, or VEGA is true to its GCN nature, with great compute performance but lackluster gaming performance due to architectural differences that require more work than it's worth.

1 hour ago, AnonymousGuy said:

How bad is this fucking thing that they don't bother with benchmarks but instead go straight to some bullshit infomercial tactic of blind testing.

I'm sitting back. VEGA can be another Bulldozer, or it can be another supposed Threadripper.

I personally lean towards it being the latter, and that AMD is doing everything they can to keep RX VEGA under wraps because Nvidia would have started working to move up Volta to stomp out VEGA.


8 minutes ago, MageTank said:

Wasn't the fury lineup also ROP limited as well? I can't remember where I read it, but I am certain I saw it somewhere. 

ye, fury x had 64 rops while 980ti had 96, theres a thread on old amd reddit about it

Thats basically why finewine worked so well, their arch is good, just not good enough in gaming out of the box

 

 

but then again, what do i know, i just read shit online))


1 minute ago, hobobobo said:

ye, fury x had 64 rops while 980ti had 96, theres a thread on old amd reddit about it

 

Thats basically why finewine worked so well, their arch is good, just not good enough in gaming out of the box

 

 

but then again, what do i know, i just read shit online))

That was 2 years ago, and we still don't see the scaling in DX12 titles that they expected. At least, I've yet to see it. It's hard to find sources that still test the Fury X in modern benches, but I did find this with a quick google search: https://www.extremetech.com/gaming/246377-new-directx-11-vs-directx-12-comparison-shows-uneven-results-limited-improvements

 

From what I see, ROPs are still very important, and the Fury X lineup still looks bottlenecked in that department, even in modern DX12 applications.


Just now, MageTank said:

That was 2 years ago, and we still don't see the scaling in DX12 titles that they expected. At least, I've yet to see it. It's hard to find sources that still test the Fury X in modern benches, but I did find this with a quick google search: https://www.extremetech.com/gaming/246377-new-directx-11-vs-directx-12-comparison-shows-uneven-results-limited-improvements

 

From what I see, ROPs are still very important, and the Fury X lineup still looks bottlenecked in that department, even in modern DX12 applications.

yes, rops will remain very important in consumer graphics for the next couple years for sure, but the point of that reddit thread, as i see it, is that rops are highly bandwidth dependent and fury failed to utilize them properly for the first year, maybe a bit more. I remember seeing Raja talk somewhere about improving bandwidth with hardware compression, so just maybe that will actually work.

 

I would not take anything i wrote more seriously than just some random interwebs rambling, as i have only a basic idea of how semiconductor stuff functions


1 minute ago, MageTank said:

I feel like I've heard this exact thing before though. Like, not even in regards to Vega. Still, they can't honestly expect that big of an improvement, given it's the exact same architecture. We didn't see improvements on that level comparing the Nvidia workhorse cards to the gaming cards (even when going as far back as Kepler's original Titans, which had a mix of both). 

If you say that, a lot of people will just go "it's a workstation card! It's not meant for gaming! AMD have said that the gaming card will be better for gaming!".

It really doesn't help when AMD themselves fuel the hype.

 

 

12 minutes ago, MageTank said:

The driver issue also seems odd, given AMD released the Vega FE with a dev and gaming driver, both of which did the exact same thing. If such were the case, it means AMD rushed the Vega FE with zero driver optimizations, and didn't take time pre-launch to sort those issues out. Not only that, but it was already proven to be a power limit and thermal issue. When GN put the card on water, and undervolted it, they were able to squeeze more clocks out of it while stabilizing the core. Unless AMD did something on these cards that give it a much better thermal profile, and improved power limit, I just don't see them clocking much higher than the Vega FE, let alone high enough to make a dramatic difference to justify the rumored costs.

Not only that, but even if we assume that it's true that the drivers are terrible for gaming, what do people expect the launch drivers to be like? AMD spent like a year developing the drivers for Vega (which is funny because people say it's just a slightly modified Fiji driver, despite having no evidence for it, and it doesn't even make sense really) and now they expect them to fix everything in these ~2 extra months?

 

 

17 minutes ago, hobobobo said:

The only realistic rumor ive seen from r/amd is that vega is starved for bandwidth, as fury was. so, if true, amd fucked up majorly on the memory controller for hbm or the drivers might actually fix the bandwidth problem, with compression or something. Its honestly still hard to believe that amd would fail so much

I don't get how it can be bandwidth starved. I mean, it has the same memory bandwidth as the 1080 Ti (which I think in and of itself is weird since one of the benefits of HBM should be higher bandwidth).

Is it in some special scenario? Because (I am speaking from ignorance here, so please forgive me if I am wrong) I don't see how a card with that much bandwidth could struggle to feed the core adequately, except if we're talking like triple 4K setups. It has about 50% higher memory bandwidth than the 980 Ti.

I don't think Nvidia's memory compression in Maxwell is so much better than AMD's that it would make up for that raw difference either.
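For reference, here's a quick back-of-the-envelope check of those raw numbers. The per-pin data rates below are approximate public spec figures (an assumption on my part), so treat the exact GB/s values loosely:

```python
# Rough bandwidth sanity check for the cards mentioned above.
# bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
# Per-pin rates are approximate public spec figures, not measured values.
cards = {
    "GTX 980 Ti  (384-bit GDDR5  @ ~7 Gbps)":   (384, 7.0),
    "GTX 1080 Ti (352-bit GDDR5X @ ~11 Gbps)":  (352, 11.0),
    "Fury X      (4096-bit HBM   @ ~1 Gbps)":   (4096, 1.0),
    "Vega FE     (2048-bit HBM2  @ ~1.9 Gbps)": (2048, 1.89),
}

for name, (bus_bits, gbps_per_pin) in cards.items():
    gb_per_s = bus_bits / 8 * gbps_per_pin
    print(f"{name}: ~{gb_per_s:.0f} GB/s")

# Vega FE and the 1080 Ti both land around ~480 GB/s,
# roughly 45-50% above the 980 Ti's ~336 GB/s.
```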

 

22 minutes ago, MageTank said:

Wasn't the fury lineup also ROP limited as well? I can't remember where I read it, but I am certain I saw it somewhere. 

And the driver was bad so the scheduler ended up not utilizing resources properly either, from what I've heard. Not sure if that's true or not though.


5 minutes ago, LAwLz said:

I don't get how it can be bandwidth starved. I mean, it has the same memory bandwidth as the 1080 Ti (which I think in and of itself is weird since one of the benefits of HBM should be higher bandwidth).

Is it in some special scenario? Because (I am speaking from ignorance here, so please forgive me if I am wrong) I don't see how a card with that much bandwidth could struggle to feed the core adequately, except if we're talking like triple 4K setups. It has about 50% higher memory bandwidth than the 980 Ti.

I don't think Nvidia's memory compression in Maxwell is so much better than AMD's that it would make up for that raw difference either.

 

As far as i understand, raw bandwidth doesnt mean much unless you can utilize it properly, as in raw bandwidth is not an indicator of the bandwidth to each engine, rops and stuff. Nvidia is notorious for going straight to gamedevs to optimize games for their hardware, so i can see how the same bandwidth can be much better utilized. I remember reading raja/someone else from rtg complain that game devs suck at using api's, as in they wont put basic commands into their code and amd has to deal with it inhouse post launch, basically finishing the game for game devs. Coupled with nvidia policy regarding gamedevs, doesnt seem too far out of reach that the card's resources are not used properly for gaming at all. How much of an uplift proper memory implementation will give is a mystery to me, but "prepare for the impossible at siggraph" seems a bit less of a fantasy now


13 minutes ago, hobobobo said:

I remember reading raja/someone else from rtg complain that game devs suck at using api's, as in they wont put basic commands into their code and amd has to deal with it inhouse post launch, basically finishing the game for game devs. Coupled with nvidia policy regarding gamedevs, doesnt seem too far out of reach that the card's resources are not used properly for gaming at all.

Just a small correction, that was an ex-Nvidia developer (actually a brief internship) complaining, not AMD.

So the argument that Nvidia hunt down developers and help them fix game code in order to utilize their memory bandwidth does not exactly hold up that well.

 

 

25 minutes ago, hobobobo said:

As far as i understand, raw bandwidth doesnt mean much unless you can utilize it properly, as in raw bandwidth is not an indicator of the bandwidth to each engine, rops and stuff. Nvidia is notorious for going straight to gamedevs to optimize games for their hardware, so i can see how the same bandwidth can be much better utilized.

That's not really the case, and if you read Promit's post (link earlier) you can see that it's not really about "utilizing the same bandwidth better". A GPU is way more than just the memory bandwidth, and things like fixing proper BeginFrame and EndFrame calls, or poorly written shaders (the two examples he gave) are probably not memory starvation issues (the first one certainly isn't, although that was just an extreme example of poor code).

 

 

34 minutes ago, hobobobo said:

How much of an uplift proper memory implementation will give is a mystery to me, but "prepare for the impossible at siggraph" seems a bit less of a fantasy now

Well I mean... That's all assuming there is even a bandwidth starvation issue to begin with. Could you perhaps try and find the Reddit post with that rumor? I would like to see their rationale for it.


12 hours ago, tom_w141 said:

Blind testing is bad news. When a company blind tests instead of revealing impressive numbers, it means the product is inferior. They are basically saying it's not as good but hey, it feels the same! This is because at higher frame rates, for example, we can't notice the difference between 120 and 150 fps.

 

Also, the fact that they keep stressing Vega + FreeSync is cheaper than Nvidia + G-Sync just says to me that they know they aren't competitive on price for the GPUs and are relying on the large premium carried by G-Sync panels. Well, that's crap logic, because for a start someone upgrading their GPU might already have a decent FreeSync monitor, and therefore it would be better for them to get a 1080 Ti and just not use the FreeSync functionality.

That HBM2 was really worth the wait. Nvidia was spot on not adopting that memory too early like AMD did.


53 minutes ago, LAwLz said:

Well I mean... That's all assuming there is even a bandwidth starvation issue to begin with. Could you perhaps try and find the Reddit post with that rumor? I would like to see their rationale for it.

 

basically this one. While i do not take it as gospel, there is some sense to his points


13 hours ago, KOMTechAndGaming said:

neither do i, but assuming they are mostly correct then amd fudged this launch up, especially with nvidia volta or whatever the next lineup of cards that's coming

they say those rumors came from a Spanish reseller, and stuff is more expensive here in Europe because the conversion is often applied 1:1. So it's possible the Spanish guy said "600€" and the writer of the article just converted it to $700, whereas it will probably be cheaper in the US.


7 minutes ago, hobobobo said:

basically this one. While i do not take it as gospel, there is some sense to his points

I'm sorry but that guy must be on some heavy crack... a 4096-bit bus and he's saying fiji was memory bandwidth limited? And he uses an LN2 test with massive core and memory overclocks to make up statistics that somehow, in his mind, prove his point? Of course when you massively increase core speeds the memory bandwidth will be much more likely to be saturated, but in normal operation 500 GB/s is more than enough bandwidth, and nvidia's current offerings aren't even CLOSE.


3 hours ago, GatioH said:

Yeah it is. You find text somewhere and stand a couple feet away from it, as you’re walking left and right it’s a night and day difference (even without an FPS counter)

PC enthusiasts and tech reviewers talking about what the eye/brain can interpret is as funny as watching tea ladies discuss CT slide composition. Except CT composition is piss easy next to understanding the visual processing system.


3 hours ago, LAwLz said:

From what I can tell, it's something people on /r/AMD have repeated over and over in an attempt to keep the hype up.

Some clueless person said it while clutching at straws, some equally clueless person saw that post and parroted it, then that post was seen by more clueless people who parroted it, and so on and so forth.

I am willing to bet that 95% of the people who have posted things like "just wait for tile based rasterization" don't even know what it is.

 

It's so annoying because it's like watching the blind not only leading the blind, but actually encouraging other blind people to join them in some kind of bizarre conga line, and they are walking right towards a cliff.

 

It's the same with drivers. I have seriously seen several people say that we will see a 50+% performance increase from new drivers. I have seen at least 1 person on this forum say it, and 4 people on Reddit.

 

 

Calling it now:

The game drivers might give a decent performance increase but not anywhere near what AMD would actually need to be competitive in terms of performance (read: not price).

 

The "few other goodies" will be a water cooler, RGB lights, that special edition with a metallic finish and/or some other useless stuff which doesn't really affect performance.

It might also be a slightly higher base clock.

On the FE card, it seemed to me like the card was somewhat broken when deciding what it should draw. It destroyed the Titan in software where you basically have to render everything anyway (like in CAD), but struggled when it shouldn't draw everything. That suggests there may be some gains to be had, maybe enough to put Vega halfway between the 1080 and 1080 Ti, which is certainly better than just slightly over a 1080, to be fair.


4 hours ago, GatioH said:

Yeah it is. You find text somewhere and stand a couple feet away from it, as you’re walking left and right it’s a night and day difference (even without an FPS counter)

that may have less to do with the framerate and more to do with the monitor's response times and how sensitive to ghosting the panel is...


21 minutes ago, Sauron said:

I'm sorry but that guy must be on some heavy crack... a 4096-bit bus and he's saying fiji was memory bandwidth limited? And he uses an LN2 test with massive core and memory overclocks to make up statistics that somehow, in his mind, prove his point? Of course when you massively increase core speeds the memory bandwidth will be much more likely to be saturated, but in normal operation 500 GB/s is more than enough bandwidth, and nvidia's current offerings aren't even CLOSE.

A 4096-bit bus is not really good for games, its good for compute. Even with my limited understanding im pretty sure games dont saturate that bus well enough, gddr5 is 256-bit and 5x is 352-bit and all games are made for that. Im pretty sure the game engine doesnt just go "oh, i can pack the last 16 commands into one now" just by itself. So the 4096-bit bus width is underused. Thats, as i see it, part of the reason why fury x aged so well

 

But then again, thats just my own conclusion for my own entertainment. All speculation is well and good, but we will see in 2 days

 

edit: and i think im terribly wrong on how memory bus operates

Edited by hobobobo

49 minutes ago, hobobobo said:

basically this one. While i do not take it as gospel, there is some sense to his points

But that post provides no evidence for his claims either. That post just says "it's clear that it's heavily memory bandwidth limited" and then provides no evidence or arguments for why it is memory bottlenecked.

The only thing that is kind of evidence is that he goes "they overclocked the memory and core on this Fury card, and the performance increase was a higher % than the core clock".

But that's completely flawed logic. If I overclock my RAM a bit then some benchmarks will show higher scores too. Does that mean that I am bottlenecked by my RAM? Of course not. In some scenarios I might be, but certainly not always.

Increased performance from overclocking one part != that part was a bottleneck (and it especially doesn't mean it was a major bottleneck). You can get some performance increases by just overclocking the memory on all cards, and the same is true for only overclocking the core.
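To illustrate with a deliberately oversimplified toy model (the split and the numbers are made up, purely to show why a small gain from a memory overclock doesn't prove a memory bottleneck):

```python
# Toy model: frame time split into a compute-limited part and a
# memory-limited part that don't overlap. Numbers are made up.
compute_ms = 10.0  # portion limited by core/shader throughput
memory_ms = 2.0    # portion limited by memory bandwidth

def fps(core_oc=1.0, mem_oc=1.0):
    return 1000.0 / (compute_ms / core_oc + memory_ms / mem_oc)

base = fps()
mem_only = fps(mem_oc=1.10)  # +10% memory clock
print(f"baseline:     {base:.1f} fps")
print(f"+10% memory:  {mem_only:.1f} fps ({(mem_only / base - 1) * 100:+.1f}%)")
# ~+1.5% from a 10% memory overclock even though memory is only a sixth
# of the frame time, so a measurable gain != a memory bottleneck.
```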

 

But even IF that part made sense (which it doesn't), he still can't just look at a Fury card and go "this was memory bottlenecked" (which also completely ignores the fact that different workloads can have different bottlenecks on a single GPU) because Fury and Vega are two different cards. It is completely ignoring things like potential improvements to memory compression, or other techniques for reducing the reliance on fetching and sending things to VRAM.

He might as well be saying "since Fury got an increase in performance by overclocking the memory, the 980 Ti is memory bottlenecked because it has even lower memory bandwidth!"

 

 

And again it brings up "AMD just needs to enable these new features" without specifying what these features are.

 

Judging by his other posts, that guy seems like a diehard AMD fanboy who knows fairly little about the things he is talking about.

I wouldn't trust him with my nephew's math homework, let alone speculating about GPUs.

 

 

13 minutes ago, hobobobo said:

A 4096-bit bus is not really good for games, its good for compute. Even with my limited understanding im pretty sure games dont saturate that bus well enough, gddr5 is 256-bit and 5x is 352-bit and all games are made for that.

That's... Not at all how it works.

13 minutes ago, hobobobo said:

edit: and i think im terribly wrong on how memory bus operates

Yes you are.


7 hours ago, LAwLz said:

From what I can tell, it's something people on /r/AMD have repeated over and over in an attempt to keep the hype up.

Some clueless person said it while clutching at straws, some equally clueless person saw that post and parroted it, then that post was seen by more clueless people who parroted it, and so on and so forth.

I am willing to bet that 95% of the people who have posted things like "just wait for tile based rasterization" don't even know what it is.

Well, AMD clearly stated Vega has tile based rasterization technology, but triangle tests on FE showed no difference compared to Fiji


7 hours ago, LAwLz said:

I don't get how it can be bandwidth starved. I mean, it has the same memory bandwidth as the 1080 Ti (which I think in and of itself is weird since one of the benefits of HBM should be higher bandwidth).

Nvidia has really good compression, so it has much higher effective bandwidth than the raw bandwidth figures suggest as well. Latency plays a part too, and HBM might have high latency for smaller calls or something; I dunno, not in the mood to look it up right now, being lazy. I do know that compression can help with effective latency as well. AMD could be trying to solve a problem in a cheaper but fundamentally wrong way, i.e. HBM. Not to say HBM plus decent compression is wrong, though, but both are probably required.


5 hours ago, Agost said:

Well, AMD clearly stated Vega has tile based rasterization technology, but triangle tests on FE showed no difference compared to Fiji

I never said people were wrong in saying it was not enabled. What I was saying is that people talk about it without understanding what it is. 

 

4 hours ago, leadeater said:

Nvidia has really good compression, so it has much higher effective bandwidth than the raw bandwidth figures suggest as well. Latency plays a part too, and HBM might have high latency for smaller calls or something; I dunno, not in the mood to look it up right now, being lazy. I do know that compression can help with effective latency as well. AMD could be trying to solve a problem in a cheaper but fundamentally wrong way, i.e. HBM. Not to say HBM plus decent compression is wrong, though, but both are probably required.

Yes, but that assumption relies on several things.

1) That Fiji was actually memory starved. I have so far not seen any evidence for this whatsoever. And no, overclocking the memory and getting more performance does not imply that the card has a memory bottleneck. 

2) That Nvidia's color compression is miles ahead of AMD's. A few generations ago I think AMD said their compression on average saved 40% bandwidth. I really doubt Nvidia's saves like 90%, or however much it would need, to make the 980 match the Fury in terms of effective memory bandwidth (rough numbers sketched below).

3) That AMD has made no improvements at all to the memory compression or controller.

 

I strongly doubt all three of those things are true, and if they aren't then you can't conclude that Vega is memory starved. 
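On point 2, here's the rough arithmetic. This assumes the ~40% figure above and a very crude "effective bandwidth = raw bandwidth divided by the fraction of traffic actually sent" model, so it's a sketch, not a measurement:

```python
# Rough arithmetic for point 2 above, using approximate public specs.
# effective bandwidth ~= raw bandwidth / (1 - fraction of traffic saved)
fury_raw = 512.0    # GB/s, Fury X: 4096-bit HBM @ ~1 Gbps
gtx980_raw = 224.0  # GB/s, GTX 980: 256-bit GDDR5 @ ~7 Gbps
amd_savings = 0.40  # the ~40% average figure quoted above

fury_effective = fury_raw / (1 - amd_savings)      # ~853 GB/s
needed_savings = 1 - gtx980_raw / fury_effective   # ~74%
print(f"Fury X effective bandwidth: ~{fury_effective:.0f} GB/s")
print(f"Savings the 980 would need to match it: ~{needed_savings:.0%}")
# Nvidia's compression would have to remove roughly three quarters of all
# memory traffic for the 980 to catch up, which seems implausible.
```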


8 minutes ago, LAwLz said:

I never said people were wrong in saying it was not enabled. What I was saying is that people talk about it without understanding what it is.

sorry, but I don't think that's how it works - flip a switch and the raster changes

the Raster Engines need to be physically capable of executing the new algorithms in the 1st place

