
6800 XT or RTX 3080

raven123
7 minutes ago, Hymenopus_Coronatus said:

They do not show incorrect usage. They show allocation. 

 

The sim's developer tools show actual usage; the simulator allocates more than it actually uses.

Well, I already mentioned that in my initial response to you, in this part: 

 

49 minutes ago, papajo said:

But OK, to be fair, most of the VRAM used is for elements that are buffered, meaning you won't necessarily need them right now (e.g. stuff in the background, or stuff the game engine predicts you're going to see if you keep going in a certain direction, e.g. a city, buildings, or whatnot). 

 

It's always better to have more VRAM though, for future-proofing: upcoming titles are surely going to use more VRAM because they are going to have better textures or more elements (e.g. NPCs), etc. 

But I think you are misunderstanding the situation by saying "more than it actually uses" 

 

So the closest you get to being correct is that this tool shows the amount of data that is rendered and visible to you in a given frame. That doesn't mean the game "doesn't use" the rest of the allocated data; why allocate it in the first place, then? The rest of the data is used to render things you are not CURRENTLY seeing in real time, but the game still uses it. An extreme scenario: you make an abrupt U-turn (moving quickly from what you see toward the direction that was behind you) and notice things popping up on the horizon. That would be the result of the game not having allocated enough data and only loading it as you make the U-turn, which introduces latency; hence the stuff pops in suddenly instead of being there instantly. 
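To picture the mechanism, here's a toy sketch in C of how an asset-streaming loop typically handles this. It's completely hypothetical (the Asset struct, fields, and messages are made up for illustration, not MSFS's actual logic):

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    const char *name;
    bool resident;      /* already uploaded to VRAM? */
    bool load_pending;  /* async upload already in flight? */
} Asset;

static void draw_asset(Asset *a)
{
    if (a->resident) {
        /* the prefetcher guessed right: data was in VRAM before we needed it */
        printf("%s: drawn at full detail\n", a->name);
    } else {
        if (!a->load_pending) {
            /* guessed wrong (e.g. an abrupt U-turn): start loading NOW */
            a->load_pending = true;
            printf("%s: upload requested, will pop in a moment later\n", a->name);
        }
        /* until the upload finishes you see a stand-in; that's the pop-in */
        printf("%s: drawn as low-detail stand-in\n", a->name);
    }
}

int main(void)
{
    Asset ahead  = { "buildings ahead",  true,  false }; /* predicted, prefetched */
    Asset behind = { "buildings behind", false, false }; /* not predicted */
    draw_asset(&ahead);
    draw_asset(&behind);
    return 0;
}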


9 minutes ago, papajo said:

But I think you are misunderstanding the situation by saying "more than it actually uses" 

 

Allocated basically means successfully requested. 

 

"None of the GPU tools on the market report memory usage correctly, whether it’s GPU-Z, Afterburner, Precision, etc. They all report the amount of memory requested by the GPU, not the actual memory usage. Cards will (sic) larger memory will request more memory, but that doesn’t mean that they actually use it. They simply request it because the memory is available." ( https://www.extremetech.com/gaming/...y-x-faces-off-with-nvidias-gtx-980-ti-titan-x )

 

This is something that Brandon Bell from Nvidia said on this topic a while ago (980 Ti era). Allocated does not mean it is in use.
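For what it's worth, you can query that driver-side number yourself. Here's a minimal C sketch against NVIDIA's NVML library (build with -lnvidia-ml); the "used" value NVML returns is allocated framebuffer memory, the same kind of figure the monitoring tools show, not how much of it the game actively touches each frame:

#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlDevice_t dev;
    nvmlMemory_t mem;

    if (nvmlInit() != NVML_SUCCESS)
        return 1;
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS &&
        nvmlDeviceGetMemoryInfo(dev, &mem) == NVML_SUCCESS) {
        /* "used" = allocated, i.e. successfully requested, not "hot" data */
        printf("total: %llu MiB, allocated: %llu MiB, free: %llu MiB\n",
               mem.total >> 20, mem.used >> 20, mem.free >> 20);
    }
    nvmlShutdown();
    return 0;
}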

Current System: Ryzen 7 3700X, Noctua NH L12 Ghost S1 Edition, 32GB DDR4 @ 3200MHz, MAG B550i Gaming Edge, 1TB WD SN550 NVME, SF750, RTX 3080 Founders Edition, Louqe Ghost S1


20 minutes ago, Hymenopus_Coronatus said:

Allocated basically means successfully requested. 

 

"None of the GPU tools on the market report memory usage correctly, whether it’s GPU-Z, Afterburner, Precision, etc. They all report the amount of memory requested by the GPU, not the actual memory usage. Cards will (sic) larger memory will request more memory, but that doesn’t mean that they actually use it. They simply request it because the memory is available." ( https://www.extremetech.com/gaming/...y-x-faces-off-with-nvidias-gtx-980-ti-titan-x )

 

This is something that Brandon Bell from Nvidia said on this topic a while ago (980 Ti era). Allocated does not mean it is in use.

That's the opinion of one guy, Joel Hruska (a journalist), based on what this guy, https://developer.nvidia.com/blog/author/branbell/, told them (he mentions him while saying that), and he is a Senior Technical Marketing Manager. Obviously he would twist things by generalizing and using certain language, since Nvidia has traditionally shipped less VRAM in its products than the competition (and in fact your link revolves around why).

 

 

https://www.cs.uah.edu/~rcoleman/Common/C_Reference/MemoryAlloc.html

 

 

Quote

Memory allocation is the process of setting aside sections of memory in a program to be used to store variables, and instances of structures and classes. There are two basic types of memory allocation:
 

We'd need to get very technical to make complete sense of it. Not that I wouldn't like to, but on the other hand I'm kind of hesitant, since it would take a wall of a post lol 
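Not to write that wall of a post, but the cut-off list in the quote is, presumably (going by the usual textbook split), static vs. dynamic allocation. In plain C terms:

#include <stdio.h>
#include <stdlib.h>

static int static_buffer[1024];          /* static: size fixed at compile time */

int main(void)
{
    int houses = 4096;                   /* known only at runtime */
    size_t bytes = houses * 64u;         /* say, 64 bytes of data per house */

    int *dynamic_buffer = malloc(bytes); /* dynamic: requested on demand,
                                            sized by what the program computed
                                            it will need */
    if (!dynamic_buffer)
        return 1;

    printf("static:  %zu bytes\n", sizeof static_buffer);
    printf("dynamic: %zu bytes for %d houses\n", bytes, houses);

    free(dynamic_buffer);
    return 0;
}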

 

But to simply satisfy your doubts, just think of it this way: the game engine doesn't do (most of the time, and depending on the game engine 😛) random stupid things. 

 

It allocates memory for a reason (it has decided that, given the available resources, it would be optimal to have X memory available). It wouldn't fill up all of your VRAM just because it can; for example, in the video I showed you, the allocation was 12.5 GB, not 15 or 16 GB, despite that being physically available if the game had wanted to make use of it. 

 

In other words it happens for a reason and not at random.
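Here's a toy example of the kind of sizing logic I mean, with completely made-up reserve numbers (no engine publishes its actual heuristic):

#include <stdio.h>

/* Hypothetical streaming-budget heuristic: target less than the full
   card so the OS, driver, and other apps keep some headroom. */
static unsigned long long stream_budget_mib(unsigned long long total_mib)
{
    unsigned long long reserve = 1024 + total_mib / 8; /* made-up margin */
    return total_mib - reserve;
}

int main(void)
{
    unsigned long long cards[] = { 10240, 16384 };     /* 3080, 6800 XT */
    for (int i = 0; i < 2; i++)
        printf("%llu MiB card -> ~%llu MiB budget\n",
               cards[i], stream_budget_mib(cards[i]));
    return 0;
}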

 

 


29 minutes ago, papajo said:

It allocates memory for a reason (it has decided that, given the available resources, it would be optimal to have X memory available). It wouldn't fill up all of your VRAM just because it can; for example, in the video I showed you, the allocation was 12.5 GB, not 15 or 16 GB, despite that being physically available if the game had wanted to make use of it. 

 

In other words it happens for a reason and not at random.

 

Yeah, you are 100% right.

 

I was just saying that MSFS does not use more than 9 GB in my experience, and VRAM is not a limiting factor.

Current System: Ryzen 7 3700X, Noctua NH L12 Ghost S1 Edition, 32GB DDR4 @ 3200MHz, MAG B550i Gaming Edge, 1TB WD SN550 NVME, SF750, RTX 3080 Founders Edition, Louqe Ghost S1


13 minutes ago, Hymenopus_Coronatus said:

Yeah, you are 100% right.

 

I was just saying that MSFS does not use more than 9 GB in my experience, and VRAM is not a limiting factor.

And I'm trying to convey to you that if it allocates that amount of memory, it means it has calculated the texture IDs, their sizes, and everything else you could possibly need while playing the game; hence it does need this amount, otherwise it would not allocate it. 

 

The fact that you don't notice any difference in this particular game could be due to a gazillion factors, the most prominent (I would assume) being that the VRAM and GPU are fast enough that you don't notice the delay of loading extra stuff that wasn't accounted for in the determined allocation size. 

 

It is mainly (but not solely) a matter of settings (which determine e.g. the texture size, etc.) and GPU/VRAM speed. 

 

But it could simply mean that your playstyle doesn't "stress" the game. For example (since I don't play this game, I couldn't possibly know better), I assume that if you are not flying fast and don't make many maneuvers, the elements on the screen are more or less the same as the elements on the screen a few milliseconds (or even seconds) ago, hence nothing needs to load up. 

 

In general, though, more VRAM is always better; it is historically evident that new titles consume more of it, and that trend is likely to continue. 


14 minutes ago, papajo said:

I assume that if you are not flying fast and don't make many maneuvers, the elements on the screen are more or less the same as the elements on the screen a few milliseconds (or even seconds) ago, hence nothing needs to load up

You are right about nothing needing to load up. Due to the sim's "play style", it usually only loads a tiny bit of new data each frame; each frame looks quite similar to the one before it.

Current System: Ryzen 7 3700X, Noctua NH L12 Ghost S1 Edition, 32GB DDR4 @ 3200MHz, MAG B550i Gaming Edge, 1TB WD SN550 NVME, SF750, RTX 3080 Founders Edition, Louqe Ghost S1


10 minutes ago, Hymenopus_Coronatus said:

You are right about nothing needing to load up. Due to the sim's "play style", it usually only loads a tiny bit of new data each frame; each frame looks quite similar to the one before it.

Well, I wouldn't be so sure about that. Again, if it allocates 12.5 GB (given an optimal scenario of 16 GB), it means that it needs it.

 

Here, for example, is a guy with an RTX 2080 Ti (8 GB of VRAM); I know that because he mentions his rig in the description. 

 

I timestamped a particular part (just push the play button on the video below): notice the white dots on the side of the taxiway; they become poles when the plane gets closer to each one. 

 

 

 

This is a perfect example of a game needing more VRAM (or GPU speed, but since we are talking about an RTX 2080 Ti, and the framerate is smooth, I doubt that) yet not being able to allocate enough.

 

EDIT: also notice at 7:26 how some structures pop in and out of existence.


4 minutes ago, papajo said:

notice the white dots on the side of the taxiway; they become poles when the plane gets closer to each one. 

This is actually part of the sim's LOD; it happens regardless of VRAM. The sim has very aggressive LOD, like most flight sims in the past. Loading objects hammers the CPU quite hard.
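The dots-becoming-poles transition is classic distance-based LOD selection, something like the sketch below. The thresholds and names are made up, and note that this decision runs on the CPU every frame regardless of how much VRAM the card has:

#include <stdio.h>

typedef enum { LOD_DOT, LOD_BILLBOARD, LOD_FULL_MESH } Lod;

/* Pick a level of detail purely from distance to the camera. */
static Lod pick_lod(float distance_m)
{
    if (distance_m > 2000.0f) return LOD_DOT;       /* far: a white dot */
    if (distance_m > 500.0f)  return LOD_BILLBOARD; /* mid: flat sprite */
    return LOD_FULL_MESH;                           /* near: actual pole */
}

int main(void)
{
    const char *names[] = { "dot", "billboard", "full mesh" };
    float samples[] = { 5000.0f, 1200.0f, 100.0f };
    for (int i = 0; i < 3; i++)
        printf("pole at %6.0f m -> %s\n",
               samples[i], names[pick_lod(samples[i])]);
    return 0;
}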

4 minutes ago, papajo said:

RTX 2080 Ti (8 GB of VRAM)

The 2080 Ti has 11 GB.

Current System: Ryzen 7 3700X, Noctua NH L12 Ghost S1 Edition, 32GB DDR4 @ 3200MHz, MAG B550i Gaming Edge, 1TB WD SN550 NVME, SF750, RTX 3080 Founders Edition, Louqe Ghost S1


Just now, Hymenopus_Coronatus said:

 

The 2080 Ti has 11 GB.

My bad, I confused it with the Super. But this serves the point even better: it needs more than just the 5 GB you were thinking of, or the 10 GB of the 3080.


Just now, papajo said:

5 GB you were thinking of

Oh, that was only in one situation with very little going on. It usually sits at around 7.5 to 8 GB.

Current System: Ryzen 7 3700X, Noctua NH L12 Ghost S1 Edition, 32GB DDR4 @ 3200MHz, MAG B550i Gaming Edge, 1TB WD SN550 NVME, SF750, RTX 3080 Founders Edition, Louqe Ghost S1


8 minutes ago, Hymenopus_Coronatus said:

Oh, that was only in one situation with very little going on. It usually sits at around 7.5 to 8 GB.

Fair enough, but still: 11 GB of VRAM and you see things popping into existence at the very last moment (= they needed to be loaded). With more VRAM you would not see such a thing. And note that the YouTube video goes up to 4K, so I assume he plays at 4K. 

 

On the contrary, here with an RX 6800 XT I don't see anything popping in suddenly, and the scenery is much more detailed (denser buildings, etc.). Also, at the end of the video, where the plane is landing (the stupid video recommendations don't help, but you can still see if you focus on what's visible), all poles are actually poles as far as the eye can see.

 

https://youtu.be/1-OZQfV_Xag


On 1/21/2021 at 9:12 PM, GamerDude said:

Plus, as someone at guru3D had said, DLSS is planned obsolescence; there'll come a time when GPUs are powerful enough that DLSS isn't needed.

IDK, I kind of wonder if it'll be the opposite: that upscaling technologies will grow in importance, not even factoring in RT. I recall reading that Microsoft isn't expecting much, if any, drop in price per transistor on 5nm and 3nm processes when explaining why they put out a $300 console now in the Series S instead of just the $500 Series X. If it's true that we can't get performance gains just from miniaturization, it sounds like we could see price-to-performance plateau on GPUs, in which case hacks like DLSS would start becoming critical.


33 minutes ago, SteveGrabowski0 said:

IDK, I kind of wonder if it'll be the opposite: that upscaling technologies will grow in importance, not even factoring in RT. I recall reading that Microsoft isn't expecting much, if any, drop in price per transistor on 5nm and 3nm processes when explaining why they put out a $300 console now in the Series S instead of just the $500 Series X. If it's true that we can't get performance gains just from miniaturization, it sounds like we could see price-to-performance plateau on GPUs, in which case hacks like DLSS would start becoming critical.

That's what I am afraid of; this practically defeats the purpose of PC gaming. Consoles sacrifice graphical fidelity (checkerboard rendering, etc.) in order to maintain framerate; if PCs start doing the same, then what's the point? 

 

I remember an Nvidia response back in the Pascal days comparing console 4K (with either a Titan or a GTX 1080 Ti, can't recall) on Forza (or another car game, can't remember), where they were saying yes, consoles are cheaper and play at 4K smoothly, but the quality is lower, zooming in and showing the differences, etc.

 

That was way before DLSS, when it suited their marketing. Now that they have become greedy, harming image quality in an "unnoticeable" way in favor of maintaining a passable framerate is suddenly not that big of a deal; on top of that, they push it as a kick-ass feature that differentiates them from the competition... tragic. 


I keep asking for a scenario where 10 GB of VRAM becomes a limitation for gaming, in such a way that an RX 6800 provides a smoother experience. I have yet to get a scenario where this is true.

 

Doom Eternal: 4K max settings - 8.5 GB VRAM (approx.) - I cap the framerate at 115 for G-SYNC. Smooth as butter.

Cyberpunk 2077: 4K max (no DLSS) - 9.3 GB VRAM (avg 7 FPS... completely unplayable)
GTA V: 4K max settings, 3/2 frame scaling - 9.7 GB VRAM (avg 30 FPS - barely acceptable)

 

I want to continue exploring this issue. If I discover some scenario where 10 GB of VRAM is a limitation at 4K, I will gladly point it out. Seriously. I'm a gamer. I'm not an Nvidia shill. I want a pristine gaming experience on my LG CX 55 OLED TV that I paid way too much money for. So far, the RTX 3080 has accomplished this task easily now that LG has fixed the issues with HDMI 2.1 on the CX.

 

I will ask it again: please provide an example where 10 GB of VRAM is a limitation to smooth gameplay. I will wait.

CPU: Ryzen 7 5800x3D || GPU: Gigabyte Windforce RTX 4090 || Memory: 32GB Corsair 3200mhz DDR4 || Motherboard: MSI B450 Tomahawk || SSD1: 500 GB Samsung 850 EVO M.2 (OS drive) || SSD2: 500 GB Samsung 860 EVO SATA (Cache Drive via PrimoCache) || Spinning Disks: 3 x 4TB Western Digital Blue HDD (RAID 0) || Monitor: LG CX 55" OLED TV || Sound: Schiit Stack (Modi 2/Magni 3) - Sennheiser HD 598, HiFiMan HE 400i || Keyboard: Logitech G915 TKL || Mouse: Logitech G502 Lightspeed || PSU: EVGA 1300-watt G+ PSU || Case: Fractal Design Pop XL Air
 


12 hours ago, papajo said:

 

On the contrary, here with an RX 6800 XT I don't see anything popping in suddenly, and the scenery is much more detailed (denser buildings, etc.). Also, at the end of the video, where the plane is landing (the stupid video recommendations don't help, but you can still see if you focus on what's visible), all poles are actually poles as far as the eye can see.

That's the CPU being the issue. The guy with the 2080 Ti has a 7700K. And yes, CPU bottlenecks in this sim exist at 4K; my 3700X limits the 3080 ~70% of the time. 

 

The scenery being much more detailed is because the 6800 XT video takes place in a photogrammetry area that was very recently updated. Most of the world is actually autogen, which is usually much less dense. 

 

The poles thing is weird, since for me the taxi lights/runway lights always appear as poles. 

 

The hard part about this sim is that the CPU causes most of the issues with "smoothness" and pop-in, so it's really hard to judge without having the same CPU behind both the 3080 and the 6800 XT.

 

This video uses the same CPU and doesn't seem to have a bottlenecking issue, so it's probably a better point of comparison.

 

Current System: Ryzen 7 3700X, Noctua NH L12 Ghost S1 Edition, 32GB DDR4 @ 3200MHz, MAG B550i Gaming Edge, 1TB WD SN550 NVME, SF750, RTX 3080 Founders Edition, Louqe Ghost S1


9 minutes ago, Hymenopus_Coronatus said:

That's the CPU being the issue. The guy with the 2080 Ti has a 7700K. And yes, CPU bottlenecks in this sim exist at 4K; my 3700X limits the 3080 ~70% of the time. 

 

The scenery being much more detailed is because the 6800 XT video takes place in a photogrammetry area that was very recently updated. Most of the world is actually autogen, which is usually much less dense. 

 

The poles thing is weird, since for me the taxi lights/runway lights always appear as poles. 

 

The hard part about this sim is that the CPU causes most of the issues with "smoothness" and pop-in, so it's really hard to judge without having the same CPU behind both the 3080 and the 6800 XT.

 

This video uses the same CPU and doesn't seem to have a bottlenecking issue, so it's probably a better point of comparison.

 

I think you didn't understand what I meant... 

 

I did not mean more detailed scenery, I meant more dense; and I meant that not as GPU workload (pixel resolution, etc.) but as VRAM workload (each house occupies some data in VRAM; more houses = more VRAM needed). 

 

Last but not least, what I described doesn't have much to do with CPU/GPU speed. Things popping into and out of existence is about them not being loaded into VRAM in time; if it were a CPU/GPU issue, then the framerate would be affected. 

 

As for your video, has it occurred to you that it may be the same clip on both sides? 

 

People make fake videos all the time. This guy has 132 subs... this video has 10K views, and he has 6 videos in total... 

 

And on top of that, in these videos he has benchmarks of both the RTX 3090 and the 3080 a few days after release 😛, not to mention that he has an RX 6800 XT.

 

Plus the fact that, as you say, the videos are identical; if it is not a benchmark run, zero variation in movement, scenery, etc. also suggests the video is just faked... 

 

 


1 minute ago, papajo said:

I did not mean more detailed scenery, I meant more dense; and I meant that not as GPU workload (pixel resolution, etc.) but as VRAM workload (each house occupies some data in VRAM; more houses = more VRAM needed). 

 

I know. What I mean is that the scenery that is autogen is always less dense. 

 

2 minutes ago, papajo said:

As for your video, has it occurred to you that it may be the same clip on both sides? 

 

People make fake videos all the time. This guy has 132 subs... this video has 10K views, and he has 6 videos in total... 

 

And on top of that, in these videos he has benchmarks of both the RTX 3090 and the 3080 a few days after release 😛, not to mention that he has an RX 6800 XT.

 

Plus the fact that, as you say, the videos are identical; if it is not a benchmark run, zero variation in movement, scenery, etc. also suggests the video is just faked... 

 

 

I just took a look at it again; you might be right. Unless he's using Approach Mode on both aircraft and it did the same thing both times.

 

4 minutes ago, papajo said:

Last but not least, what I described doesn't have much to do with CPU/GPU speed. Things popping into and out of existence is about them not being loaded into VRAM in time; if it were a CPU/GPU issue, then the framerate would be affected. 

 

What the guy with the 7700K may have done is change a few of the terrain/building settings, since those affect the CPU heavily.

Current System: Ryzen 7 3700X, Noctua NH L12 Ghost S1 Edition, 32GB DDR4 @ 3200MHz, MAG B550i Gaming Edge, 1TB WD SN550 NVME, SF750, RTX 3080 Founders Edition, Louqe Ghost S1


44 minutes ago, Hymenopus_Coronatus said:

terrain/building settings,

Such as? Because most such settings are more VRAM-heavy than CPU-heavy. 

 

46 minutes ago, Hymenopus_Coronatus said:

I just took a look at it again; you might be right. Unless he's using Approach Mode on both aircraft and it did the same thing both times.

 

The guy has 6 videos (most of them under 10K views) and about 100 subscribers, yet had an RTX 3080 and a 3090 at launch, plus an RX 6800 XT.

 

Either that guy is super rich (which would also be strange, since then he'd have a bigger channel; if you're swimming in money you can very cheaply buy yourself bot subs, Facebook likes, etc.), or he fakes the videos 😛

 

I highly doubt both AMD AND Nvidia gave him test samples for a channel of 135 subs or so... 

 

He may have rich friends... yeah... OK, maybe. 

 

It's not 100% impossible, but if you consider all the facts (and that the clips are identical)... meh, I wouldn't be so sure about that. 


Just now, papajo said:

Such as? Because most such settings are more VRAM-heavy than CPU-heavy. 

Terrain Detail, Buildings, etc. 

 

Those are CPU-heavy in my own testing; if I turn them down (especially Terrain Detail), the "Limited by MainThread" message appears less frequently, and VRAM usage doesn't change either. The CPU part of it could just be a coincidence, though. 

 

That's the weird thing with this sim: it doesn't really care what GPU you use (after a certain point), it's super CPU-bound at almost all resolutions, and it's super inconsistent. 

 

The RX 580 I used to run handled the sim at 1440p high settings at a solid 25 fps (quite a good result for a flight sim), yet the exact same card in a 5800X rig barely manages 1080p (or even 720p). Granted, that is at ultra, but 720p ultra is probably easier to run than 1440p high. 

Current System: Ryzen 7 3700X, Noctua NH L12 Ghost S1 Edition, 32GB DDR4 @ 3200MHz, MAG B550i Gaming Edge, 1TB WD SN550 NVME, SF750, RTX 3080 Founders Edition, Louqe Ghost S1


I had the choice between a 6800 XT and an RTX 3080. I went with the 3080 for several reasons.

DLSS - This is a huge and pretty much free performance boost. It often produces much better image quality than other upscaling options, and does so while boosting performance.

Raytracing - New titles are implementing it, and with the new generation of consoles supporting it, we will see much more of it going forward. It makes a huge difference in games that utilize it, and the 3080 has MUCH better RT performance.

GDDR6X - It is MUCH faster. Yes, you have less VRAM on the 3080, but when you have enough speed, getting stuff into memory isn't a problem. This should improve further in a month or so when resizable BAR support arrives for Nvidia.

Drivers - I have encountered driver issues with almost every single AMD card I have ever owned. I can count on one hand the number of times Nvidia cards have given me issues, and normally it was something quickly fixed via beta drivers or a hotfix release.

When it comes to rasterization, sure, the 6800 XT appears to have a slight edge, but at the framerates these cards push in pure raster content, it isn't as big a plus as it used to be. You have to look at the whole picture, and for me you get a lot of great stuff for the 50-dollar difference between the two cards.


3 minutes ago, Hymenopus_Coronatus said:

Terrain Detail, Buildings, etc. 

 

Those are CPU-heavy in my own testing; if I turn them down (especially Terrain Detail), the "Limited by MainThread" message appears less frequently, and VRAM usage doesn't change either. The CPU part of it could just be a coincidence, though. 

 

That's the weird thing with this sim: it doesn't really care what GPU you use (after a certain point), it's super CPU-bound at almost all resolutions, and it's super inconsistent. 

 

The RX 580 I used to run handled the sim at 1440p high settings at a solid 25 fps (quite a good result for a flight sim), yet the exact same card in a 5800X rig barely manages 1080p (or even 720p). Granted, that is at ultra, but 720p ultra is probably easier to run than 1440p high. 

I think we could keep this going for ages, because it is virtually impossible to show significant data without both of us being on the two systems, judging them side by side at the same time. 

 

But here, I found a video comparing an RTX 3080 vs. an RX 6800 XT.

 

 

Could he have faked this video too? Well, in his video we can see the actual physical cards (at the beginning of it), so I doubt it is fake. 

 

Notice the stadium in front of the left wing; see the shadow that pops out of nothing? Well, that's VRAM for you (it pops up a few seconds after my timestamp, not immediately). 😛 

 

I also took some side-by-side pictures from nearly the same point of view (I used those in-game pathway markers to help me, and it seems to be almost the same angle, although the plane doesn't have the same tilt, or whatever that's called, and a slightly different approach angle; either way, they are close enough). 

 

https://i.imgur.com/4BZwBys.png

 

Click on it and zoom in, otherwise you can't possibly make sense of it 😛 

 

 

Notice how much more "detailed" (as in chunky) the scenery is far back on the horizon, and how flat and blurry the shadows are, on the RTX 3080; that means that in order to compensate for less VRAM, it loaded less data where it would be "unnoticeable". 

 

 

OK, granted, that is nitpicking, but given the specific game and the capabilities of both GPUs (and the hardware used in general), both are bound to look good, and you couldn't possibly find a very glaring case of VRAM limitation. 

 


7 minutes ago, papajo said:

OK, granted, that is nitpicking, but given the specific game and the capabilities of both GPUs (and the hardware used in general), both are bound to look good, and you couldn't possibly find a very glaring case of VRAM limitation. 

 

Yeah. The two look so similar that when using the sim, you won't notice. I didn't even notice the difference at first, but you are right, there is a slight difference. 

 

Honestly, for this sim, grab the best GPU that still leaves you enough budget to buy the best CPU possible (in terms of single-core performance).

Current System: Ryzen 7 3700X, Noctua NH L12 Ghost S1 Edition, 32GB DDR4 @ 3200MHz, MAG B550i Gaming Edge, 1TB WD SN550 NVME, SF750, RTX 3080 Founders Edition, Louqe Ghost S1


20 minutes ago, AngryBeaver said:

DLSS - This is a huge and pretty much free performance boost. It often produces much better image quality than other upscaling options, and does so while boosting performance.

It gives a performance boost because the card can't keep up with the actual RTX gimmick/feature otherwise, so it needs to degrade the image quality to an "unnoticeable" degree. 

 

And it also only works with a limited list of titles. 

 

At this point, what's the difference compared to, e.g., consoles using checkerboard rendering to maintain framerate? You are supposed to buy a PC in order to get better image quality. 

 

26 minutes ago, AngryBeaver said:

GDDR6X - It is MUCH faster. Yes, you have less VRAM on the 3080, but when you have enough speed, getting stuff into memory isn't a problem. This should improve further in a month or so when resizable BAR support arrives for Nvidia.

It is faster, not MUCH faster, and AMD did not use it because they implemented a bigger, much faster GPU cache instead. 

 

 

For me it's just about which is cheaper. The GPUs are about the same, with the AMD one being a notch faster in most games, but they trade blows more or less. 

 

So I would get the cheaper one; if the money were about the same, I would get the AMD one. 


Just now, papajo said:

For me it's just about which is cheaper. The GPUs are about the same, with the AMD one being a notch faster in most games, but they trade blows more or less. 

 

At 1080p and 1440p, yes; at 4K the 3080 is usually faster. Overall they basically match each other.

 

I'd get the AMD GPU if you don't care about RT, otherwise get the 3080.

Current System: Ryzen 7 3700X, Noctua NH L12 Ghost S1 Edition, 32GB DDR4 @ 3200MHz, MAG B550i Gaming Edge, 1TB WD SN550 NVME, SF750, RTX 3080 Founders Edition, Louqe Ghost S1


5 minutes ago, Hymenopus_Coronatus said:

Yeah. The two look so similar that when using the sim, you won't notice. I didn't even notice the difference at first, but you are right, there is a slight difference. 

 

Honestly, for this sim, grab the best GPU that still leaves you enough budget to buy the best CPU possible (in terms of single-core performance).

That may have been partially my fault; click on the link, because apparently, when I checked it here in the post, it doesn't zoom in much further: https://i.imgur.com/4BZwBys.png

