
RTX 3080 relative performance data from Digital Foundry's video

JZStudios
4 hours ago, Valentyn said:


Considering the RTX Titan is heavily capped in FP64, it saves a lot of money if you don't want a Quadro.

Fun fact: Nvidia has limited FP64-capable cards to their x100 chips for some time now, such as the V100 (Titan V, Quadro and Tesla V100) and the P100 (Quadro and Tesla P100).

 

The last generation with wide options for FP64 was Kepler (not in consumer cards, of course). Even modern Quadros and Teslas often don't have proper FP64.
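If you want to gauge how capped FP64 is on your own card, a minimal sketch along these lines works, assuming PyTorch with CUDA support is installed (matrix size and iteration count are arbitrary choices):

```python
# Rough FP64-vs-FP32 throughput check on an Nvidia GPU (sketch, assumes PyTorch + CUDA).
# Consumer GeForce/Titan cards typically show a large gap; x100-class chips much less so.
import time
import torch

def matmul_tflops(dtype, n=4096, iters=10):
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    a @ b                              # warm-up so library init isn't timed
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    flops = 2 * n**3 * iters           # n^3 multiply-adds per matmul, 2 FLOPs each
    return flops / elapsed / 1e12

fp32 = matmul_tflops(torch.float32)
fp64 = matmul_tflops(torch.float64)
print(f"FP32: {fp32:.2f} TFLOPS, FP64: {fp64:.2f} TFLOPS, ratio ~1:{fp32 / fp64:.0f}")
```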

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


6 hours ago, leadeater said:

Well, if that ever existed then it wouldn't be, but no card has ever done that. On AIB cards of a similar design, the fans intake at the front of the card (facing down) and push air out the sides into the case, where it mixes and equalizes, so you are raising the case ambient but you are not blowing hot air directly into the CPU heatsink.

 

This is no different from an electric fan heater: you get hotter sitting in front of it than sitting away from it. Same energy, just not applied directly to you.

 

It's really not going to be that big of a problem; for AMD CPUs only slightly, as an extra 5°C is about 25 MHz less boost, if you could even notice that.

On 3-fan cards there are gaps on the sides of the card, letting the hot air rise right into the CPU, while the new FE design is less of an issue than any other "normal" AIB design because it exhausts half of the air outside the case.


You would be surprised what people pay online for used parts; the price generally lags the downward trend only a bit. But even if I got $400 for my 2080 after using it for two years, it cost me about $8/mo, and that isn't all that bad. I'd feel bad if I had just bought a 2080 Ti, though!
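For anyone who wants to sanity-check that kind of math, cost per month is just (purchase price minus resale price) divided by months owned; the purchase price below is a placeholder, not what was actually paid:

```python
# Cost-of-ownership sketch: (what you paid - what you sell it for) / months owned.
# Numbers here are illustrative placeholders, not the poster's actual figures.
def monthly_cost(purchase_price, resale_price, months_owned):
    return (purchase_price - resale_price) / months_owned

print(monthly_cost(purchase_price=600, resale_price=400, months_owned=24))  # ~8.3 per month
```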

AMD 7950x / Asus Strix B650E / 64GB @ 6000c30 / 2TB Samsung 980 Pro Heatsink 4.0x4 / 7.68TB Samsung PM9A3 / 3.84TB Samsung PM983 / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


1 hour ago, Bombastinator said:

95 W. You are referring to the 3950. I remember saying once here that a 3950 could be cooled by big air coolers of sizes similar to the D14 and having people jump down my throat saying only water was possible, and how dare I comment on workstation coolers. They were quite angry.

 

A 3700X lists at 65 W. A 3950 lists at 95 W. A D14 is rated somewhere between 200 and 300 W.


So you're saying 95 W must then be the hottest CPU AMD ever produces, since according to you it's been a steady 20-year progression?

 

I don't have AMD numbers handy, just Intel.

They’re here:

https://en.m.wikipedia.org/wiki/List_of_CPU_power_dissipation_figures

 

There's a breakdown of Intel and AMD microcomputer CPUs in there if you want to dig for it.

Looking at the charts is odd. AMD has dates but the Intel stuff doesn't. The Intel stuff has wattages but AMD doesn't. Was this why you chose AMD for your comparison?

 

I've got a Core 2 Quad Q6600 in my basement with a Hyper 212 on it. It was rated for something near or over 100 W stock. It's less than 20 years old, though. I bought it after I bought my house, and I did that in 2000. The Hyper 212 was newish when I built that machine. Not sure how much less than 20 years; I want to say 2003, but I'm not sure. I recall I overclocked the chip, don't remember by how much. It may not have been 20 years, but it's been a while.

 

So perhaps 20 years is not sufficiently exact. This, according to you, apparently isn't about CPU heat progression; it's about the number 20 years and specificity. And you base a claim of constant heat progression on that. You want more specificity? I can give you granularity.
 

When did the Hyper 212 come out? I was thinking something like 20 years. Perhaps that is too long; Cooler Master had only been going about 8 years at that time. Maybe it should have been 17 or something. 20 was a round number. When did single-core CPUs first start overclocking to the 5 GHz range? Pentium D era? It was before the Core 2 Duo came out. I remember that because the Core 2 Duos could beat single-core Celerons overclocked to 5 GHz. The Core 2 Duos of the time were something like 65 W, IIRC. The big Pentium Ds ran much, much hotter.

Have CPU sizes and temps gone up over the course of microcomputing in general? Sure. There was a time when CPUs didn't have coolers at all. Look at a Z80; I messed about with those on TRS-80s and such when I was a kid. CPUs got hotter and hotter and clocked higher and higher until 5 GHz was reached, whereupon they went multicore and the process repeated. They did two cores the same way, then four cores. All this time nodes were shrinking and transistor counts were growing. As the nodes went down, so did heat, but as transistor counts went up, so did heat. It went up for a while but became more of a pulsing after multicore hit, a pulsing that in general trended slightly down. When the upper range of transistor count was reached but there hadn't been a die shrink yet, chips got hot. Intel's 14 nm CPUs are getting pretty hot these days; they're rather famously having trouble with the node shrink. Hotter and hotter; they're well over 100 W these days.

They've made higher-wattage chips before, though. The interesting bit is they cooled them with smaller coolers and got lower numbers doing it. SOI really didn't like going over 75°C, yet big water radiators weren't needed. What's the difference? The chips themselves are physically smaller now. Node shrink. Radiator size has gone up fairly consistently. Wattage hasn't, though, which was the point I was making. Old radiators still work fine, so they're still used. They were engineered for a slightly different situation, though.

 

So yeah. “What.”

 

The Athlon XP isn't even 20 years ago. The Core Duos didn't launch till 2006, 14 years ago (and a couple of months, if you want to be precise). If I'd gone back the full 20 years we'd be looking at early-generation Pentium 4s or late-generation Pentium IIIs.

 

Also, no, I was intending to refer to the 3800X, but a check shows that's actually a 105 W part (my memory got it wrong), and it's actually the 3600X that's the 95 W part.

 

I chose AMD for my comparison because I actually owned an Athlon XP back in the day; I built that system myself and spent quite some time looking at good cooler options, so I'm very familiar with what was around back then. If you want the individual TDPs, go to the dedicated wiki pages for those architectures; I grabbed my info from there for the 3200+.

 

Also, I'm not saying that 95 W is in any way the hottest that's ever happened, nor am I claiming it's a clean, neat progression, certainly not. What I am saying is that the idea that CPUs of yesteryear were hotter, and that CPU coolers from then are overdesigned for modern CPUs, simply doesn't hold up. There have been hot and cool generations in between. But even using your own table, since the 3k series Intel's top-of-the-line desktop products have seen a jump in TDP (small but present) every generation except the leap from the 6k to the 7k series, and notably the jumps from 4 to 6 and from 6 to 8 cores both came with TDP increases (and again, we know Intel has been understating its TDPs for a while).
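To put rough numbers on that, here are the official TDPs I'm aware of for Intel's mainstream desktop flagships from the 3k series onward (paper specs only; worth double-checking against the wiki tables if it matters):

```python
# Official Intel TDPs for the mainstream desktop flagships, 3rd gen onward (paper specs;
# actual power draw, especially from Coffee Lake on, is known to run well above TDP).
flagship_tdp_w = {
    "i7-3770K": 77,
    "i7-4770K": 84,
    "i7-6700K": 91,
    "i7-7700K": 91,
    "i7-8700K": 95,    # 4 -> 6 cores
    "i9-9900K": 95,    # 6 -> 8 cores; flat on paper, but real draw rose sharply
    "i9-10900K": 125,  # 8 -> 10 cores
}

prev = None
for chip, tdp in flagship_tdp_w.items():
    delta = "" if prev is None else f" ({tdp - prev:+d} W vs previous)"
    print(f"{chip}: {tdp} W{delta}")
    prev = tdp
```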

 

Now, your comments seem focused on the Core Duo series, and I'll admit my knowledge there is nigh non-existent. I was well out of the tech scene from around the end of the Athlon 64 era (they'd just launched their first multi-cores and everyone was wondering what Intel would do) through till the 4k series, and I didn't get back in seriously till the 8k series. I've back-filled my knowledge since then a fair way, but the Core Duo days are a bit further back than I've gone. So it's possible, from your statements and what little I can make of the tables (I'm not sure which chips on there were positioned where and when), that those were on the hot side compared to later chips. But that's one small era in an over-decade-long progression since then, and a decade-plus progression before then. That one era aside, there's been a tendency towards rising power and heat. It hasn't always been perfectly consistent, and it's definitely slowed down as time has gone on, but it's still been the overall trend.

 

The reason many of the early heat-pipe coolers, and for that matter water-cooling setups, still do well is that even with most of the modern hot CPUs you don't actually need a super cooler to keep them cool enough to not thermal throttle. Big coolers have never been about making the CPU operate (hence the modest AMD reference cooler today), but about achieving either lower temps, quieter operation, or more OC headroom than the bare minimum required. The difference is that those old setups were, in their heyday, the top-of-the-line, best-of-the-best cooling options available.

 

As an aside, an amusing image for you: the cooler on one model of my first-ever dGPU (it may even be the same model, I really can't remember):

 

[image: SmwABoQ.png]

 

 


1 hour ago, Valentyn said:

NVIDIA showing actual numbers between the 2080 Ti and the 3080

 

 

 

Odd that they are using two completely different driver versions in this comparison: 455.77 for the RTX 3080, 451.67 for the RTX 2080 Ti. Even weirder that they chose to use a driver from July 9th instead of the one they just released on August 17th.

 

Maybe I am reading a bit too far into this, but most people want to rule out as many variables as possible when making direct comparisons between two things. I test and qualify computer hardware for a living, and any time we compare products, we keep as much of the test bench identical as physically possible with the exception being the component under test. If we change a driver, all previous scores are invalid when compared against the new hardware. Only hardware qualified on the same driver/platform can be compared. If Nvidia's older drivers didn't support the 3080 (which we know they don't), then they should perform the testing on 455.77 across all cards. 
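A minimal sketch of that rule, with made-up record fields and numbers, would be to refuse any comparison where something other than the device under test differs:

```python
# Sketch of a "same bench, same driver" guard for comparing GPU results.
# Field names and example records are illustrative, not any real lab's format.
from dataclasses import dataclass

@dataclass
class BenchResult:
    gpu: str
    driver: str
    cpu: str
    avg_fps: float

def compare(a: BenchResult, b: BenchResult) -> float:
    # Only the GPU is allowed to differ; anything else invalidates the comparison.
    if (a.driver, a.cpu) != (b.driver, b.cpu):
        raise ValueError("Test benches differ beyond the device under test; scores not comparable")
    return b.avg_fps / a.avg_fps

r2080ti = BenchResult("RTX 2080 Ti", "455.77", "Core i9-10900K", 104.0)
r3080   = BenchResult("RTX 3080",   "455.77", "Core i9-10900K", 150.0)
print(f"Uplift: {compare(r2080ti, r3080):.2f}x")
```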

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


There is no way I'd go from 11 GB of VRAM to a more capable card with less VRAM. I already hit that cap in a few games at 3440x1440 and 4K (monitor and TV connected to my PC). The only worthy upgrade is the 3090 right now, but I'd be paying for a lot of VRAM that would go completely unused in the lifespan the card would have in my hands. I guess it's waiting time until the 3080 Ti comes out for USD 999. Back to trashing money on my audiophile paranoia.


2 hours ago, CarlBar said:

 

The Athlon XP isn't even 20 years ago. The Core Duos didn't launch till 2006, 14 years ago (and a couple of months, if you want to be precise). If I'd gone back the full 20 years we'd be looking at early-generation Pentium 4s or late-generation Pentium IIIs.

 

Also, no, I was intending to refer to the 3800X, but a check shows that's actually a 105 W part (my memory got it wrong), and it's actually the 3600X that's the 95 W part.

 

I chose AMD for my comparison because I actually owned an Athlon XP back in the day; I built that system myself and spent quite some time looking at good cooler options, so I'm very familiar with what was around back then. If you want the individual TDPs, go to the dedicated wiki pages for those architectures; I grabbed my info from there for the 3200+.

 

Also, I'm not saying that 95 W is in any way the hottest that's ever happened, nor am I claiming it's a clean, neat progression, certainly not. What I am saying is that the idea that CPUs of yesteryear were hotter, and that CPU coolers from then are overdesigned for modern CPUs, simply doesn't hold up. There have been hot and cool generations in between. But even using your own table, since the 3k series Intel's top-of-the-line desktop products have seen a jump in TDP (small but present) every generation except the leap from the 6k to the 7k series, and notably the jumps from 4 to 6 and from 6 to 8 cores both came with TDP increases (and again, we know Intel has been understating its TDPs for a while).

 

Now, your comments seem focused on the Core Duo series, and I'll admit my knowledge there is nigh non-existent. I was well out of the tech scene from around the end of the Athlon 64 era (they'd just launched their first multi-cores and everyone was wondering what Intel would do) through till the 4k series, and I didn't get back in seriously till the 8k series. I've back-filled my knowledge since then a fair way, but the Core Duo days are a bit further back than I've gone. So it's possible, from your statements and what little I can make of the tables (I'm not sure which chips on there were positioned where and when), that those were on the hot side compared to later chips. But that's one small era in an over-decade-long progression since then, and a decade-plus progression before then. That one era aside, there's been a tendency towards rising power and heat. It hasn't always been perfectly consistent, and it's definitely slowed down as time has gone on, but it's still been the overall trend.

 

The reason many of the early heat-pipe coolers, and for that matter water-cooling setups, still do well is that even with most of the modern hot CPUs you don't actually need a super cooler to keep them cool enough to not thermal throttle. Big coolers have never been about making the CPU operate (hence the modest AMD reference cooler today), but about achieving either lower temps, quieter operation, or more OC headroom than the bare minimum required. The difference is that those old setups were, in their heyday, the top-of-the-line, best-of-the-best cooling options available.

 

As an aside, an amusing image for you: the cooler on one model of my first-ever dGPU (it may even be the same model, I really can't remember):

 

[image: SmwABoQ.png]

 

 

Re: Dates 

Not gonna bother looking them up; they're not the point.

 

Re: specific CPUs 

There's a lot of really wrong data there.

The 3800 and the 3950 are different chips, and the 3600 is 65 W, not 95 W. Let's assume typos, though, since it's still not the point.

 

Re: the point
“What I am saying is that the idea that CPUs of yesteryear were hotter, and that CPU coolers from then are overdesigned for modern CPUs, simply doesn't hold up.”

 

Let’s divide this in two:

Claim 1: CPUs of yesteryear were not hotter.

 

CPUs of yesteryear were often hotter than they are now. Not always, but often enough.
Your statement about the 3800X (which is more or less an overclocked 3700X) is entertaining. Are we including overclocking, then? They could have binned even harder, I suppose, and made a 3850XT or something that took 140 W and couldn't be overclocked at all because it was already running at the maximum possible for that chip design. There are a few 3700Xs that can take 140 W. Not many, but it's the same chip. That's on air; on LN2 I'm sure it could draw even more wattage.
It's just not a usable example. What's the highest wattage a CPU ever overclocked to without exotic cooling? Betting it's more than 140 W, and it wasn't a chip that was Coffee Lake or newer. You can play games with overclock voltages if you want to, but it doesn't change the situation.

 
Claim 2: that I claimed coolers from yesteryear are overdesigned for modern CPUs.

 

That I did not do.  I claimed they were designed for CPUs that were different, though they still function well enough with modern ones that there is no reason to alter the design. 
 

What was different about earlier chips, and what matters in this discussion, is the node.
The die underneath the heat spreader was larger, which made it easier to transfer heat TO the heat spreader, which in turn transferred heat to the cooler. The result is that you need more cooler to cool less CPU wattage, because the problem is between the die and the spreader.
 

Your entire thing seems to be "BUT THEY NEED LARGER COOLERS THAN THEY USED TO!" Yep. They do. But it's NOT because the chips produce more heat; in general they produce less. It's because it's harder to get the heat out, and the only way to do it is with a higher delta (the difference between hot and cold; which term to use has been debated).
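Roughly, the physics is ΔT = P × R_th: heat flow across a thermal resistance needs a temperature difference, and a smaller die means a higher die-to-heatspreader resistance, so the same wattage needs a bigger delta (or a better cooler) to hold the same die temperature. A back-of-the-envelope sketch with made-up resistance values:

```python
# Back-of-the-envelope: die temperature = ambient + power * (R_die_to_IHS + R_cooler).
# Resistance values below are illustrative guesses, not measured figures.
def die_temp(power_w, r_die_to_ihs, r_cooler, ambient_c=25.0):
    return ambient_c + power_w * (r_die_to_ihs + r_cooler)

# Same 95 W load, same cooler; only the die-to-heatspreader resistance changes.
big_old_die   = die_temp(95, r_die_to_ihs=0.10, r_cooler=0.20)  # larger die, easier heat transfer
small_new_die = die_temp(95, r_die_to_ihs=0.30, r_cooler=0.20)  # smaller die, concentrated hotspot
print(f"Big die: {big_old_die:.1f} C, small die: {small_new_die:.1f} C")
```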

 

If you recall, this is all about CPU coolers in systems that also have 3080/3090 GPUs (the 3070 has a more standard design). The GPU doesn't blow its hot air at the CPU; it blows it at the CPU cooler. The CPU coolers have more raw cooling capacity than they really need right now because they have to rely on a bigger delta than they used to. A trickle of warmer air to one part of the stack (as opposed to actually raising case ambient) isn't going to do a lot.
Now, a 350 W GPU is still a 350 W GPU; case ambient is going to go up. The statement made, though, was that the flow-through behavior of the 3080/3090 cooler would make it even worse than what a mere regular 350 W GPU would do. I'm saying I don't think it will, at least not for tower-style aftermarket air coolers. They're built to handle more than that.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


1 hour ago, valdyrgramr said:

...

Your FPS seems to be higher than in benchmarks from popular sources (is "Nightmare Quality" a thing, or is that an abbreviation of "Ultra Nightmare"?):


[image: performance-3840-2160.png]

 

Edit: Nvidia's results are kinda similar to the DF results. The driver choice is weird, but at the same time it shows that the 2080 Ti wasn't weaker because of the new driver lowering its performance.


22 minutes ago, Metrize said:

Does anyone know when Linus will do benchmarks?

The reviews will probably be out by launch day.

 

ComputerBase benchmarks also show the 2080 Ti with a 104 FPS average and 89 FPS lows at 4K Ultra Nightmare (Edit: looks like it's on High, I'm not sure, but they show that the performance difference between Ultra Nightmare and High isn't that big; it's ~6% when using the 2070 Super), so Nvidia's numbers don't seem that far off, at least when looking at the few sections of the video showing the 2080 Ti. Still, DOOM is probably a best case for the new GPUs, otherwise Nvidia wouldn't use it.


4 hours ago, MageTank said:

455.77 for the RTX 3080, 451.67 for the RTX 2080 Ti

Is it possible the drivers have split?

It's my understanding that the 30-series cards have at least two hardware capabilities that the 20-series cards do not have:

  • Whatever they call that "direct decompression" thing more than likely requires special hardware support.
  • Microsoft's "Hardware GPU Scheduling" thing is probably better with dedicated hardware on the GPU, instead of just a software update.

 

4 hours ago, MageTank said:

Even weirder that they chose to use a driver from July 9th instead of the one they just released on August 17th. 

This is the concerning part to me.

 

Any reasonable and experienced person is certain that Nvidia is exaggerating; all hardware manufacturers do at every new product release. That isn't even a question, IMO. The only question is how much they are exaggerating. Are they exaggerating to the point of lying, or just to the point of marketing?

ENCRYPTION IS NOT A CRIME


5 minutes ago, straight_stewie said:

Is it possible the drivers have split?

It's my understanding that the 30-series cards have at least two hardware capabilities that the 20-series cards do not have:

  • Whatever they call that "direct decompression" thing more than likely requires special hardware support.
  • Microsoft's "Hardware GPU Scheduling" thing is probably better with dedicated hardware on the GPU, instead of just a software update.

 

This is the concerning part to me.

 

Any reasonable and experienced person is certain that Nvidia is exaggerating; all hardware manufacturers do at every new product release. That isn't even a question, IMO. The only question is how much they are exaggerating. Are they exaggerating to the point of lying, or just to the point of marketing?

It's definitely possible; however, I am not certain why they would require an entirely new, exclusive driver for that particular graphics card instead of just lumping them all together and using the device IDs of the hardware itself to differentiate between them. You can have architecture-exclusive features across several generations of products within a driver; we see this with the current drivers going back as far as Kepler already.
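If anyone wants to see what driver version and device ID their own card reports, something like this should do it (assumes the NVIDIA driver is installed and nvidia-smi is on the PATH):

```python
# Query the GPU name, driver version and PCI device ID via nvidia-smi.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,driver_version,pci.device_id",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in out.stdout.strip().splitlines():
    print(line)  # e.g. "GeForce RTX 2080 Ti, 451.67, 0x1E0710DE"
```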

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


6 minutes ago, valdyrgramr said:

Nvidia did this exact exaggeration with Turing vs. Pascal. The 2x BS was RT, but they were acting like it applied to everything. The 3080 is just a 2080 Ti with improved RT performance.

In that video you posted I'm really only seeing 30-40 FPS increases in most scenarios. If that is really the case, they've missed the 1.7-2.7 times mark they claim. Of course, if you look at the graphs, they don't claim 1.7-2 times performance, they claim performance per watt. Which I guess means that they might not be lying. Let's find out:

In testing, the RTX 2080 Ti draws ~280 watts at full load. They're claiming the new chip is somewhere around 320 watts, which comes out to roughly a 14% increase.

 

Interestingly enough, the jump from 110 FPS to 140 FPS (what I saw in the video for the most part) is about a 27% increase. So a large part of that is accounted for just by the wattage increase; factor in the driver differences and that's most of the kit right there.

 

Unfortunately, that leaves very little room for their alleged performance-per-watt increase.
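Spelling that arithmetic out with the figures above (110 vs 140 FPS, ~280 W vs ~320 W):

```python
# Back-of-the-envelope check of the perf-per-watt claim using the figures above.
old_fps, new_fps = 110, 140      # rough moment-to-moment numbers from the DF video
old_w,   new_w   = 280, 320      # measured 2080 Ti load power vs the 3080's rated power

fps_gain  = new_fps / old_fps - 1                      # ~27% more frames
watt_gain = new_w / old_w - 1                          # ~14% more power
ppw_gain  = (new_fps / new_w) / (old_fps / old_w) - 1  # ~11% better perf/watt

print(f"FPS: +{fps_gain:.0%}, power: +{watt_gain:.0%}, perf/watt: +{ppw_gain:.0%}")
```

That lands nowhere near the headline multiplier on Nvidia's slides, which is the point being made here.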

So without seeing real, third party, unbiased benchmarks, I'd have to say that you are more than likely correct.

As an aside, Nvidia isn't acting like it's everything; they are downright saying that it is. They had that slide that claimed somewhere between a 1.7x and 2.7x increase in all three major areas they care about: shader, RT, and Tensor. Proof below.

Spoiler:
[image: Capture.thumb.PNG.e342d5d35bb0bc08541ee0079a91bf66.PNG]



 

ENCRYPTION IS NOT A CRIME


There's a follow-up Q&A after the launch, written up at https://videocardz.com/newz/nvidia-provides-further-details-on-geforce-rtx-30-series

 

On the performance question:

 

Q: tldrdoto – Please clarify if the slide saying RTX 3070 is equal or faster than 2080 Ti is referring to traditional rasterization or DLSS/RT workloads?

Very important if you could clear it up, since no traditional rasterization benchmarks were shown, only RT/DLSS supporting games.

 

A: [Justin Walker] We’re talking about both. Games that only support traditional rasterization and games that support RTX (RT+DLSS). You can see this in our launch article here https://www.nvidia.com/en-us/geforce/news/introducing-rtx-30-series-graphics-cards/

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


1 minute ago, valdyrgramr said:

Well, I also tested my 2080 Ti at the highest possible settings and shared pics of me getting higher frame rates than both their 2080 Ti and 3080 while using the latest driver: 144-170 FPS, 4K, Ultra Nightmare. So I have doubts about their claims. Doom Eternal lacks RT too.

I can test this when I get off work; I have a 2080 Ti and a 4K TV to keep things native. I'll mimic the settings posted above, though I'd also like to know more details about the rest of the system (CPU/memory configuration, GPU core/memory clocks, etc.).

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


16 minutes ago, valdyrgramr said:

Nvidia did this exact exaggeration with Turing vs. Pascal. The 2x BS was RT, but they were acting like it applied to everything. The 3080 is just a 2080 Ti with improved RT performance.

Nvidia themselves kinda said that the RT performance increase isn't much different from the rasterization one.


You can see here that the improvements in non-RT games aren't much different from the ones with RT, and the DF results kinda show the same.


1 minute ago, valdyrgramr said:

Well, I also tested my 2080 Ti at the highest possible settings and shared pics of me getting higher frame rates than both their 2080 Ti and 3080 while using the latest driver: 144-170 FPS, 4K, Ultra Nightmare. So I have doubts about their claims. Doom Eternal lacks RT too.

id Software is known to be working on a ray-tracing build of the game. The rumor mill proposes that they are targeting a holiday season 2020 release for it, which means it's probably ready to start being shown off (it's actually a little late to start hype for the holiday season, even leaving Cyberpunk aside).

Considering it's Nvidia, they could be using a dev build of the ray-tracing version. I would assume they aren't lying about that part; that's just a dumb lie that buys them nothing. Plus, we know that the software does exist at some level, and I'd bet dollars to donuts that Nvidia could get access to a build of the in-development ray-tracing version.

 

The old driver issue is still concerning.

 

 

15 minutes ago, MageTank said:

It's definitely possible; however, I am not certain why they would require an entirely new, exclusive driver for that particular graphics card instead of just lumping them all together and using the device IDs of the hardware itself to differentiate between them. You can have architecture-exclusive features across several generations of products within a driver; we see this with the current drivers going back as far as Kepler already.

At some point that becomes a software maintenance issue. I would fully expect them to branch the driver families every so often, and I wouldn't assume anything else is going on until it's proven otherwise.

ENCRYPTION IS NOT A CRIME


22 hours ago, Mark Kaine said:

It looks good, but if you like your CPU, it's maybe not the best choice.

 

 

[image: am2nvg.png]

I'm using liquid cooling; I doubt it'll change much.

System Specs:

CPU: Ryzen 7 5800X

GPU: Radeon RX 7900 XT 

RAM: 32GB 3600MHz

HDD: 1TB Sabrent NVMe -  WD 1TB Black - WD 2TB Green -  WD 4TB Blue

MB: Gigabyte  B550 Gaming X- RGB Disabled

PSU: Corsair RM850x 80 Plus Gold

Case: BeQuiet! Silent Base 801 Black

Cooler: Noctua NH-D15

 

 

 


2 hours ago, KaitouX said:

Nvidia themselves kinda said that the RT performance increase isn't much different from the rasterization one.


You can see here that the improvements in non-RT games aren't much different from the ones with RT, and the DF results kinda show the same.

The DF video showed moment-to-moment RTX 3080 improvements of 60 - 85+% in non-RT performance.

 

The non-RT average game performance increases they mentioned are:

 

Borderlands 3: 81.6%

Shadow of the Tomb Raider: 69.8%

Battlefield 5: 68%

Control: 77.6%

DOOM Eternal: ~85%
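As a rough aggregate of those listed uplifts, the geometric mean works out to roughly +76% (my calculation, not DF's):

```python
# Rough aggregate of the non-RT uplifts DF reported (geometric mean of the ratios).
from math import prod

uplifts = {
    "Borderlands 3": 1.816,
    "Shadow of the Tomb Raider": 1.698,
    "Battlefield 5": 1.68,
    "Control": 1.776,
    "DOOM Eternal": 1.85,   # DF gave roughly 85% here
}

geo_mean = prod(uplifts.values()) ** (1 / len(uplifts))
print(f"Geometric mean uplift: ~{geo_mean - 1:.0%}")   # about +76%
```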

You own the software that you purchase - Understanding software licenses and EULAs

 

"We’ll know our disinformation program is complete when everything the american public believes is false" - William Casey, CIA Director 1981-1987


7 hours ago, Exty said:

On 3-fan cards there are gaps on the sides of the card, letting the hot air rise right into the CPU, while the new FE design is less of an issue than any other "normal" AIB design because it exhausts half of the air outside the case.

Yes, I know, that is what I explained, but that isn't directly blowing hot air straight into a CPU cooler; it has time to mix while rising in the case. That's the difference here: you have a direct air current blowing right into where the CPU cooler is, and AIB cards don't do this.

 

It's a very simple concept, as explained before: get a fan heater or hair dryer. Turn it on and hold it close to your face; hot, right? Now move it further away; not as hot, right? It's heating the CPU versus heating the case, which does raise heatsink temperature, but not to the same degree as blowing hot air directly onto it.

 

There is a difference; the question is whether it matters, and probably not. That's my opinion, though; it's going to be tested, and then I'll know for sure.
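To put a rough number on the case-ambient side of this: the steady-state temperature rise of air flowing through the case is roughly power divided by (mass flow × specific heat of air). The airflow figure and the 50/50 exhaust split below are assumptions, not measurements:

```python
# Rough steady-state rise of case-air temperature from GPU heat dumped into the case.
# delta_T = P / (mass_flow * c_p); the 100 CFM case airflow is an assumed figure.
AIR_DENSITY = 1.2      # kg/m^3 at roughly room temperature
AIR_CP = 1005.0        # J/(kg*K)

def case_air_rise_c(gpu_watts_into_case, case_airflow_cfm):
    airflow_m3s = case_airflow_cfm * 0.000471947   # CFM -> m^3/s
    mass_flow = AIR_DENSITY * airflow_m3s          # kg/s
    return gpu_watts_into_case / (mass_flow * AIR_CP)

# If the FE design exhausts roughly half outside the case, only ~175 W of a ~350 W card
# ends up heating case air (the split is an approximation).
print(f"{case_air_rise_c(175, 100):.1f} C rise at 100 CFM")   # ~3 C
print(f"{case_air_rise_c(350, 100):.1f} C rise at 100 CFM")   # ~6 C
```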


I had a hard time finding out for sure - can anyone clarify if RTX/DLSS was ON for each of these tests?

We all knew the 3080 would be much better than the 2080 at RTX/DLSS, but what I want to know is straight-up conventional rasterization performance.

 

If the 3080 is only 1.8x faster than the 2080 with RTX/DLSS, but only slightly faster or not faster at all than the 2080 in conventional rasterization games, it would be very disappointing.

Before you reply to my post, REFRESH. 99.99% chance I edited my post. 

 

My System: i7-13700KF // Corsair iCUE H150i Elite Capellix // MSI MPG Z690 Edge Wifi // 32GB DDR5 G. SKILL RIPJAWS S5 6000 CL32 // Nvidia RTX 4070 Super FE // Corsair 5000D Airflow // Corsair SP120 RGB Pro x7 // Seasonic Focus Plus Gold 850w //1TB ADATA XPG SX8200 Pro/1TB Teamgroup MP33/2TB Seagate 7200RPM Hard Drive // Displays: LG Ultragear 32GP83B x2 // Royal Kludge RK100 // Logitech G Pro X Superlight // Sennheiser DROP PC38x


48 minutes ago, Mister Woof said:

I had a hard time finding out for sure - can anyone clarify if RTX/DLSS was ON for each of these tests?

We all knew the 3080 would be much better than the 2080 at RTX/DLSS, but what I want to know is straight-up conventional rasterization performance.

 

If the 3080 is only 1.8x faster than the 2080 with RTX/DLSS, but only slightly faster or not faster at all than the 2080 in conventional rasterization games, it would be very disappointing.

The DF video has sections comparing rasterization performance and RT performance. For the rasterization performance, this is what DF has reported:

 

1 hour ago, Delicieuxz said:

The DF video showed moment-to-moment RTX 3080 improvements of 60 - 85+% in non-RT performance.

 

The non-RT average game performance increases they mentioned are:

 

Borderlands 3: 81.6%

Shadow of the Tomb Raider: 69.8%

Battlefield 5: 68%

Control: 77.6%

DOOM Eternal: ~85%

 

There's also this Nvidia chart showing performance differences in some games without RT, and in some games with RT.

2 hours ago, KaitouX said:

Nvidia themselves kinda said that the RT performance increase isn't much different from the rasterization one.


You can see here that the improvements in non-RT games aren't much different from the ones with RT, and the DF results kinda show the same.

You own the software that you purchase - Understanding software licenses and EULAs

 

"We’ll know our disinformation program is complete when everything the american public believes is false" - William Casey, CIA Director 1981-1987


Man, I'm stuck trying to decide which one I want.

 

I'm on a 5700 XT right now, and I want an upgrade to also get me into RTX.

 

None of them is necessary since the 5700 XT is still capable at 1440p, but I'll be shifting GPUs down the chain and retiring the 1060 3GB.

Before you reply to my post, REFRESH. 99.99% chance I edited my post. 

 

My System: i7-13700KF // Corsair iCUE H150i Elite Capellix // MSI MPG Z690 Edge Wifi // 32GB DDR5 G. SKILL RIPJAWS S5 6000 CL32 // Nvidia RTX 4070 Super FE // Corsair 5000D Airflow // Corsair SP120 RGB Pro x7 // Seasonic Focus Plus Gold 850w //1TB ADATA XPG SX8200 Pro/1TB Teamgroup MP33/2TB Seagate 7200RPM Hard Drive // Displays: LG Ultragear 32GP83B x2 // Royal Kludge RK100 // Logitech G Pro X Superlight // Sennheiser DROP PC38x


9 minutes ago, valdyrgramr said:

Wait for 3rd party benches, tbh.

It's not like you can realistically get it before the reviews are out, so that should be the standard for any GPU purchase. Buying without knowing the value and possible issues of the product doesn't really make sense when talking about CPUs and GPUs.

