
AMD Radeon RX 7700 XT & 7800 XT Review

AdamFromLTT
2 hours ago, My poodle is not French said:

7900 XT really is not similar to a 6950 XT at all.

 

 

So you're saying the 7800 XT would become faster with driver updates? Again, it was made not to touch the 7900 XT.

Made In Brazil 🇧🇷


I like the blinking dots, but since I'm constantly looking left and right, I think highlighting the entire bar could help. Otherwise, in my opinion, great job.


I will say that the apparent overclocking potential on the 7800 XT, at least on the primo models, is somewhat impressive: https://www.techpowerup.com/review/sapphire-radeon-rx-7800-xt-nitro/ They have this Sapphire one roughly matching the performance of a 6950 XT or 3070 Ti, about a 10-15% gain, at least in a synthetic benchmark. Not huge, but most modern cards have virtually zero OC headroom.

 

Of course, this Sapphire model is a $550 card (if it were even in stock, which it's already not), and 6950 XTs (at least the XFX model) are in the $600 ballpark now.

Corps aren't your friends. "Bottleneck calculators" are BS. Only suckers buy based on brand. It's your PC, do what makes you happy.  If your build meets your needs, you don't need anyone else to "rate" it for you. And talking about being part of a "master race" is cringe. Watch this space for further truths people need to hear.

 

Ryzen 7 5800X3D | ASRock X570 PG Velocita | PowerColor Red Devil RX 6900 XT | 4x8GB Crucial Ballistix 3600mt/s CL16


Great video, but I noticed the use of the geometric mean when comparing FPS across all the games within the benchmark suite. FPS is a rate metric, and thus would (in my opinion) be better represented by the harmonic mean, because the geometric mean is neither directly nor inversely proportional to the actual execution time of the benchmarks.

 

For more discussion, see Lilja's Measuring Computer Performance: A Practitioner's Guide, pp. 29-34 (or Chapter 3 in general) and Jacob and Mudge's "Notes on Calculating Computer Performance" (https://tnm.engin.umich.edu/wp-content/uploads/sites/353/2021/06/1995_Notes_on_calculating_computer_performance.pdf). 

 

The geometric vs. harmonic debate was a fairly contentious topic in academia, but my understanding is that harmonic is simply the better metric here. Would be interested to hear the rationale behind the use of geomean here. 
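As a rough sketch of the difference, with made-up per-game numbers (not LTT's data) and Python's standard statistics module: if you assume each game's benchmark pass renders the same number of frames, the harmonic mean is exactly total frames divided by total time, while the arithmetic and geometric means land higher.

```python
# Hypothetical per-game average FPS, purely for illustration.
from statistics import fmean, geometric_mean, harmonic_mean

fps = [142.0, 97.0, 61.0, 188.0]   # made-up per-game averages
frames_per_run = 10_000            # assume every game's pass renders this many frames

total_frames = frames_per_run * len(fps)
total_time = sum(frames_per_run / f for f in fps)   # seconds spent in each game, summed

print("arithmetic :", fmean(fps))                   # ~122.0
print("geometric  :", geometric_mean(fps))          # ~112.1
print("harmonic   :", harmonic_mean(fps))           # ~102.4
print("frames/time:", total_frames / total_time)    # ~102.4, same as the harmonic mean
```

If the runs are fixed in time rather than in frame count, the plain arithmetic mean is the one that matches total frames over total time, which is part of why this debate never fully settled.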


50 minutes ago, robotater said:

Great video, but I noticed the use of the geometric mean when comparing FPS across all the games within the benchmark suite. FPS is a rate metric, and thus would (in my opinion) be better represented by the harmonic mean, because the geometric mean is neither directly nor inversely proportional to the actual execution time of the benchmarks.

 

For more discussion, see Lilja's Measuring Computer Performance: A Practitioner's Guide, pp. 29-34 (or Chapter 3 in general) and Jacob and Mudge's "Notes on Calculating Computer Performance" (https://tnm.engin.umich.edu/wp-content/uploads/sites/353/2021/06/1995_Notes_on_calculating_computer_performance.pdf). 

 

The geometric vs. harmonic debate was a fairly contentious topic in academia, but my understanding is that harmonic is simply the better metric here. Would be interested to hear the rationale behind the use of geomean here. 

Hey, our data vis person will look into this.


26 minutes ago, robotater said:

Great video, but I noticed the use of the geometric mean when comparing FPS across all the games within the benchmark suite. FPS is a rate metric, and thus would (in my opinion) be better represented by the harmonic mean, because the geometric mean is neither directly nor inversely proportional to the actual execution time of the benchmarks.

 

For more discussion, see Lilja's Measuring Computer Performance: A Practitioner's Guide, pp. 29-34 (or Chapter 3 in general) and Jacob and Mudge's "Notes on Calculating Computer Performance" (https://tnm.engin.umich.edu/wp-content/uploads/sites/353/2021/06/1995_Notes_on_calculating_computer_performance.pdf). 

 

The geometric vs. harmonic debate was a fairly contentious topic in academia, but my understanding is that harmonic is simply the better metric here. Would be interested to hear the rationale behind the use of geomean here. 

I also stumbled across the geometric mean and I would be really interested to know why they used it. Maybe @LMGcommunity can enlighten us?

 

However, the differences among the geometric, arithmetic, and harmonic means should be negligible when comparing run-to-run differences of the same benchmark on the same hardware. The variance is a better metric to show those differences - which is basically what the 1% or 0.1% lows show, but within the same benchmark.

Since calculating the variance of a harmonic mean is awful, maybe the good ol' arithmetic mean is the way to go. Simply average all benchmark runs and tell us the variance (if there are outliers).
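For what it's worth, a minimal sketch of that suggestion (hypothetical repeat-run numbers, standard-library Python only): average the passes of one benchmark on one card, report the spread, and flag anything unusually far out.

```python
from statistics import fmean, stdev

runs_fps = [96.3, 97.1, 94.8, 96.9, 95.5]   # made-up repeated passes, same game, same card

mean = fmean(runs_fps)
spread = stdev(runs_fps)    # sample standard deviation across the passes

print(f"average: {mean:.1f} FPS, run-to-run spread: +/-{spread:.1f}")

# Flag passes more than two standard deviations from the mean as possible outliers.
outliers = [r for r in runs_fps if abs(r - mean) > 2 * spread]
print("possible outliers:", outliers)
```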


It's prolly been said before but I wanna say it too... those colour-coordinated dots that moved along with him as he spoke about each card were a massive help for someone like me with severe dyslexia, as typically my eyes just glaze over whenever info like that is presented to me. THANK YOU for those dots, please keep them as much as possible. I'm sure in editing they might suck to move around for different types of charts, but they were so freaking nice to see.


I don't think there's been anything since the 20 series and the 5000 series that has been remotely compelling. It's all just meh. Ray tracing still isn't that big, and neither AMD nor NVIDIA has delivered anything with compelling value. It's like comparing the iPhone 13 to the 14, but over a 4-5 year period.


2 hours ago, HenrySalayne said:

I also stumbled across the geometric mean and I would be really interested to know why they used it. Maybe @LMGcommunity can enlighten us?

 

However, the differences among the geometric, arithmetic, and harmonic means should be negligible when comparing run-to-run differences of the same benchmark on the same hardware. The variance is a better metric to show those differences - which is basically what the 1% or 0.1% lows show, but within the same benchmark.

Since calculating the variance of a harmonic mean is awful, maybe the good ol' arithmetic mean is the way to go. Simply average all benchmark runs and tell us the variance (if there are outliers).

Not so sure arithmetic would be better. That isn't great for rates, and the potential differences in workload and/or length would make even a weighted arithmetic mean questionable. Harmonic mean is equivalent to total number of frames over total execution time, which is exactly what you'd want. 

 

The issue here is that the geometric and arithmetic means will always overstate the true average framerate over the entire benchmark suite, since for any positive dataset containing at least two different values, the geometric and arithmetic means are always greater than the harmonic mean.

 

So, likely not that big of a deal, but it could be potentially misleading by overstating average capabilities (if not by much) and not being a true expression of the average framerate over the whole suite.

 

ETA: Variance of a harmonic mean is a bit of a pain to set up, yes, but a jackknifing script, once written, should be generalizable so long as the data is always in a consistent format.
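Roughly what I have in mind, as a sketch with made-up FPS values: a leave-one-out jackknife written against an arbitrary estimator, so the same function works whether you feed it the harmonic or the geometric mean.

```python
from statistics import harmonic_mean

def jackknife_variance(samples, estimator):
    """Leave-one-out jackknife variance estimate for an arbitrary estimator."""
    n = len(samples)
    leave_one_out = [estimator(samples[:i] + samples[i + 1:]) for i in range(n)]
    loo_mean = sum(leave_one_out) / n
    return (n - 1) / n * sum((x - loo_mean) ** 2 for x in leave_one_out)

fps = [142.0, 97.0, 61.0, 188.0]   # hypothetical per-game averages
hm = harmonic_mean(fps)
se = jackknife_variance(fps, harmonic_mean) ** 0.5

# The standard error is large here only because these made-up numbers are few and spread out.
print(f"harmonic mean: {hm:.1f} FPS, jackknife standard error: {se:.1f}")
```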

Something I should point out is that two samples with the same arithmetic mean but different variances will have different harmonic means. That is, the higher the variance, the lower the harmonic mean falls relative to the geometric and arithmetic means.

Variance is obviously important, which is why the 1% low scores are included. But this means that a higher-variance card could be incorrectly reported as having a higher average framerate than a card with lower variance, a higher harmonic mean, and a lower geometric mean. The chances of that happening are pretty slim, and an eagle-eyed viewer might pick up the variance difference, but a layperson might just look at the averages and think "Oh, that one has a higher average, I'll go with that one" even if it's not truly better.
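A tiny made-up illustration of that point: two cards with the same arithmetic mean, where the higher-variance one ends up with noticeably lower geometric and harmonic means.

```python
from statistics import fmean, geometric_mean, harmonic_mean

steady = [58.0, 60.0, 62.0]   # low variance
spiky = [30.0, 60.0, 90.0]    # high variance, same arithmetic mean

for name, fps in (("steady", steady), ("spiky", spiky)):
    print(name, fmean(fps), round(geometric_mean(fps), 1), round(harmonic_mean(fps), 1))
# steady: 60.0, 60.0, 60.0
# spiky : 60.0, 54.5, 49.1
```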

Edited by robotater
just making an additional point

46 minutes ago, AndreiArgeanu said:

I don't think there's been anything since the 20 series and the 5000 series that has been remotely compelling. It's all just meh. Ray tracing still isn't that big, and neither AMD nor NVIDIA has delivered anything with compelling value. It's like comparing the iPhone 13 to the 14, but over a 4-5 year period.

Ray tracing performance still kind of doesn't matter. It's in the "nice to have" category, and the name of the game is still raster. RT performance in the >$1,000 USD range just tells us that it's still not quite ready for prime time. GPUs went through this with AA technology: it was in GPUs for almost a decade before you could just leave it on in most games. (A lot of it is still on the software and game engine side.)

 

That said, the next gen is, I think, the point where it'll matter a lot, mostly because the next console cycle should be on that Zen 5 + RDNA 4 tech base. Ray tracing takes dedicated units, which requires die space. If we see another, say, 50% performance increase in RT at the mid-range next cycle, that should be where it makes sense to just leave it on in high-end testing rather than breaking it out.


3 minutes ago, Taf the Ghost said:

Ray tracing performance still kind of doesn't matter. It's in the "nice to have" category, and the name of the game is still raster. RT performance in the >$1,000 USD range just tells us that it's still not quite ready for prime time. GPUs went through this with AA technology: it was in GPUs for almost a decade before you could just leave it on in most games. (A lot of it is still on the software and game engine side.)

 

That said, the next gen is, I think, the point where it'll matter a lot, mostly because the next console cycle should be on that Zen 5 + RDNA 4 tech base. Ray tracing takes dedicated units, which requires die space. If we see another, say, 50% performance increase in RT at the mid-range next cycle, that should be where it makes sense to just leave it on in high-end testing rather than breaking it out.

 

I will say I thought the full switch on the console side to the "next" (now current) gen would bring a drastic uptick in the number of games that include ray tracing, since both the SeriesBox and the PS5 support it, but that has not happened yet.

Corps aren't your friends. "Bottleneck calculators" are BS. Only suckers buy based on brand. It's your PC, do what makes you happy.  If your build meets your needs, you don't need anyone else to "rate" it for you. And talking about being part of a "master race" is cringe. Watch this space for further truths people need to hear.

 

Ryzen 7 5800X3D | ASRock X570 PG Velocita | PowerColor Red Devil RX 6900 XT | 4x8GB Crucial Ballistix 3600mt/s CL16


58 minutes ago, Middcore said:

 

I will say I thought the full switch on the console side to the "next" (now current) gen would bring a drastic uptick in the number of games that include ray tracing, since both the SeriesBox and the PS5 support it, but that has not happened yet.

While the tech itself is actually very old, the hardware to run it is very new and requires dedicated die space, which means learning to program for it while adjusting to exactly how much you can run alongside it. Realistically, console devs decided to just push more textures and geometry over ray tracing. The truth of the matter is that until the hardware sees another 200-300% improvement in actual throughput, it's not really worth using full-time global illumination. Pre-baked lighting is just so much more effective.

 

Still, it's going to be a wonky switchover. In a decade, everything except maybe 2D and retro games will make the full switch to global illumination, but getting from here to there means you have to build both. Somewhere around the UE6 era we'll probably see the switch.

 

For the next console cycle, I'd expect full-time RT reflections, given both engine development and the much larger number of units dedicated to it.


Too bad about FSR 3.
But the video feels nice and reminds me of the older card reviews.
Of course this one is a bit longer since it covers two cards at once plus some added issues, but it's a nice change from some previous videos.

Not sure if one could group the names so you can see the new and old cards and which "team" they're from - like red, blue, green, with a different shade for the new cards - but there might not be any good solution to that, or it might be fine as is. I guess one way would be to show how each card compares to its own lineup first, and then to the other teams, instead of showing everything at once, so you know where the new and old cards are placed.


19 hours ago, DesolationGod said:

I'm glad AMD priced this where it did; the comparisons to last gen don't really mean much to me... just ordered one yesterday, should be a great upgrade to my aging 1060 6GB.

It is good for a new purchase, but the GPU market has definitely shifted from products that made people on the previous gen envious and eager to upgrade, to "hey, this is a nice upgrade for someone with a 5-year-old GPU". Most people upgrade their GPUs every 3-4 gens, so I understand why this is happening, but it kinda kills off a lot of the excitement around GPU launches for enthusiasts. I am so thoroughly disappointed and utterly bored with the 7xxx series and the 40 series cards.


4 hours ago, WolframaticAlpha said:

It is good for a new purchase, but the GPU market has definitely shifted from products that made people on the previous gen envious and eager to upgrade, to "hey, this is a nice upgrade for someone with a 5-year-old GPU". Most people upgrade their GPUs every 3-4 gens, so I understand why this is happening, but it kinda kills off a lot of the excitement around GPU launches for enthusiasts. I am so thoroughly disappointed and utterly bored with the 7xxx series and the 40 series cards.

 

It would be less galling if the pricing hadn't risen as though there were actual major steps in performance from generation to generation.

 

Remember, it was only five years ago that the MSRP of Nvidia's flagship GPU was $700. 

Corps aren't your friends. "Bottleneck calculators" are BS. Only suckers buy based on brand. It's your PC, do what makes you happy.  If your build meets your needs, you don't need anyone else to "rate" it for you. And talking about being part of a "master race" is cringe. Watch this space for further truths people need to hear.

 

Ryzen 7 5800X3D | ASRock X570 PG Velocita | PowerColor Red Devil RX 6900 XT | 4x8GB Crucial Ballistix 3600mt/s CL16


21 hours ago, LMGcommunity said:

We'll be getting a fix out.

Could someone confirm the base clock of the 7800 XT as well, please? There seems to be information online that conflicts with what is posted in the video. In any case, 1295 MHz does not seem right compared to all the other cards on screen.


1 hour ago, littlegreen said:

Could someone confirm the base clock of the 7800 XT as well, please? There seems to be information online that conflicts with what is posted in the video. In any case, 1295 MHz does not seem right compared to all the other cards on screen.

Hey, here's confirmation of the base clock on our reference card. Additionally, it's confirmed here in the specs of one of the third-party cards. [screenshot attached]


This seems a lot like the RX 6800 vs 6800 XT argument, but with 4GB of VRAM on the line. If the RX 7700 XT were closer to $400, it would make more sense in my opinion. Otherwise, it's likely to come down to supply and demand whether anyone buys the 7700 XT over the 7800 XT.

 

I wonder if the MCM architecture just had better yields with the way the binning scheme ended up, so the 7800 XT has a much higher yield than normal - i.e., proportionally more 7800 XTs than 7700 XTs compared to previous generations, which disincentivizes AMD from gapping the price as much.

Ryzen 7950x3D PBO +200MHz / -15mV curve CPPC in 'prefer cache'

RTX 4090 @133%/+230/+1000

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


29 minutes ago, Agall said:

This seems a lot like the RX 6800 vs 6800 XT argument, but with 4GB of VRAM on the line. If the RX 7700 XT were closer to $400, it would make more sense in my opinion.

 

The 7700 XT is clearly positioned to get people to pick the 7800 XT instead, the way the 7900 XT was for the 7900 XTX at launch.

 

The 7700 XT's price will drop to the point it becomes a more viable option in itself,  just like the 7900 XT. 

Corps aren't your friends. "Bottleneck calculators" are BS. Only suckers buy based on brand. It's your PC, do what makes you happy.  If your build meets your needs, you don't need anyone else to "rate" it for you. And talking about being part of a "master race" is cringe. Watch this space for further truths people need to hear.

 

Ryzen 7 5800X3D | ASRock X570 PG Velocita | PowerColor Red Devil RX 6900 XT | 4x8GB Crucial Ballistix 3600mt/s CL16


Starting from 6:10 in the video, several graphs are shown, and the labels on these graphs don't make sense to me. The X-axis is labeled "Frame", but which of the graphs does that actually apply to? I'm pretty sure it actually displays time, because with framerates differing between runs, the benchmark would reach the same scene at different frame counts, yet the graphs all show the same general trend - so they seem to be synced by time, not by frame count. The title of the graph even says "FPS over time", which is consistent with the content of the graphs, but not with the label on the X-axis.


On 9/9/2023 at 11:32 AM, Alvin853 said:

Starting from 6:10 in the video, several graphs are shown, and the labels on these graphs don't make sense to me. The X-axis is labeled "Frame", but which of the graphs does that actually apply to? I'm pretty sure it actually displays time, because with framerates differing between runs, the benchmark would reach the same scene at different frame counts, yet the graphs all show the same general trend - so they seem to be synced by time, not by frame count. The title of the graph even says "FPS over time", which is consistent with the content of the graphs, but not with the label on the X-axis.

The graphs were included as a visual aid to show inconsistency between runs. The actual graph is something internal that we've been working with to verify the runs line up as we trim the data to remove menus and loading screens from the full recording, and they probably should have been watermarked to make that clear. The Y-axis is actually FPS, and the X-axis should be elapsed time (seconds). This is something we will update internally, but the hope is that we'll be able to create properly themed graphs that match the other graphs we include in videos in the near future.


Something that would be nice is keeping the color codes consistent when you use the power draw graphs. I noticed I had to pause, go back, and re-adjust my brain to track which card was which color. Overall, well done to the LMG team on the first GPU review since the hiatus!

[Attached screenshots: ex 1.PNG, ex2.PNG]

