
Battlefield V with DXR on, tested by TechSpot (Hardware Unboxed)

kiska3

Source:

https://www.techspot.com/review/1749-battlefield-ray-tracing-benchmarks/

Quote

So how does DXR actually look in Battlefield V? Above is a direct comparison of the four presets, and the first thing you should notice is there’s a large visual difference between Ultra and Low, but there’s not as much of a difference between Ultra and Medium. In fact, there is basically no difference between the Ultra, High and Medium settings for DXR reflections, and it’s not just in this scene. We tested several other sections of the game and couldn’t spot any difference between the three higher settings.

This leads me to conclude that there are only two DXR modes: Ultra, which applies the full complement of ray tracing reflections in the game, and a cut down Low mode for better performance. Both the low and ultra modes apply reflections to surfaces like water, puddles and shiny objects, but it’s only the ultra mode that also applies these reflections to more matte surfaces like mud and your gun.

You’ll spot in this scene for example that the ultra mode reflects the fiery vehicle wreck off the muddy surface in the front, as well as on the barrel of your weapon at some distance away. Switch down to low, and the reflections disappear on your gun and from the mud, leaving just reflections from the water’s surface.

From what we can tell, this is the main difference between the two modes. The quality of reflections in terms of resolution, accuracy and draw distance is unaffected, so you’ll get the same sort of experience looking at those shiny surfaces whether you’re playing on Ultra or Low. Switching it down a notch only affects the amount of materials and surfaces that RTX reflections apply to. Ultra, in my opinion, gives a far better and more realistic presentation and really shows off the quality of ray tracing.

[Screenshots from the TechSpot article comparing the DXR presets]

So cut performance by about 60% or more to get more realistic reflections... I think not, at least for an FPS. And this certainly doesn't reflect the price.
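For a rough sense of what a cut that size means in practice, here's a quick back-of-envelope sketch (the baseline frame rates are just assumed examples, not figures from the article):

```python
# Back-of-envelope: what a ~60% performance cut leaves you with,
# for a few assumed rasterization-only baseline frame rates.
def fps_with_dxr(baseline_fps: float, perf_cut: float = 0.60) -> float:
    """Frame rate remaining after losing `perf_cut` of baseline performance."""
    return baseline_fps * (1.0 - perf_cut)

for fps in (144, 120, 100, 80):  # hypothetical baselines
    print(f"{fps:>3} fps baseline -> ~{fps_with_dxr(fps):.0f} fps with DXR on")
```

Even a comfortable 120 fps baseline lands around 48 fps, which is why the trade-off feels so steep for a multiplayer shooter.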



But what's the noise level like?



I guess people will skip the DXR option in-game entirely. Maybe if it had a story mode you could enjoy with DXR on, then maybe yes, but in fast-paced multiplayer with a 60% FPS cut... I doubt it. Battlefield once again looks good even on lower settings, so it's a no-brainer for me - if only they could present the game properly without the stupid stuff.


Oh, yeah. Below 50 fps is starting to become kinda "meh" in terms of framerate.

 

I mean, hopefully optimizations help in the future, but I doubt we will ever see more than 70 fps on the 2080 Ti, and at that point I'd call it pretty good.

 

Atm it's good, but not quite there yet.


Like I said in another topic, I think Nvidia's plan to avoid this is to implement good SLI support with DXR. And that's why SLI is now a thing again.



Spoiler alert: ray tracing is really really really REALLY fucking hard to render in real time.

The fact it's doable at all is amazing.



2 minutes ago, VegetableStu said:

people don't get this though ._. they see 60 fps evaporating and people go OMG RAYTRACING SUX

Frankly, I was expecting 30ish at 1920x1080; the fact it's getting about 20 over that is blowing my fucking mind lol



5 minutes ago, westport17 said:

Just hope the next 7nm nVidia GPUs can handle it well enough..

It would be nice, with a new architecture and node shrink, to have something kick-arse and compelling to hope for. It's starting to get rather droll with "Nvidia did this" and "Nvidia did that". Having something to compare and talk about, rather than simply addressing pointless hate, would make for a nice change on these forums.



2 minutes ago, huilun02 said:

But it only works with some reflections and even then it's not done properly

If the scene has no reflections, it is identical to having ray tracing disabled

The hit to performance is unjustifiable for what little it gives

 

It is really fucking annoying during livestreams of the game with everyone constantly asking if RTX is turned on

Because if it does so little that it isn't visually identifiable and you have to ask if it's on, then it's pointless to care about the feature at all

You don't know how shitty it feels when you cut your framerate to a third and still nobody can tell if it's enabled

Being that it's physics-based, it's not unreasonable to assume it's still working behind the scenes even if nothing is utilizing it at that moment, since you just don't know when it's going to need to kick in, for example in MP.

And I haven't watched streams, but the demos frankly blew my mind. I suspect a metric ton of the people asking "is RTX on?" are trying to be funny and reciting memes, and/or frankly don't know what they're looking for.

I wish I were getting BFV to try RTX on my own, but sadly I'm not buying BF games anymore. I play them a ton day one and never touch them again, so I'm off that treadmill now.



I'm just surprised that a shooter - where people care a lot more about FPS - is the first to implement it. I would've expected Nvidia to throw money at e.g. Tomb Raider and have a cinematic game show off the feature. 1080p Ultra @ 60 fps is what I expected ray tracing to be able to do for us now, simply because it is the "sweet spot" for many gamers.


For early, most likely somewhat rushed software (being that this is added in a post-launch patch rather than being integrated as part of the base game) running on first generation hardware, this is actually better than I would have expected. Sure, it's not practical right now. I doubt any implementation of RTX coming out in the near future is going to have an FPS hit that most would consider practical (outside of stuff like puzzle or horror games that don't need quick reactions). But as a proof of concept it shows that there are tangible benefits to the tech that are immediately noticeable for the performance cost (lest we forget the trials and tribulations of tessellation, where we paid a huge performance tax for bumpier rocks and the occasional underground ocean).

 

I'm frankly excited for what comes out of this in 2-3 GPU generations. Node shrinks (more than likely just 1-1.5, though), combined with refinements to the RT cores to boost efficiency and throughput, and more optimized software, could easily bring us to a point where this becomes a standard feature (assuming AMD comes up with something comparable that doesn't end up needing an entirely separate implementation; I'd expect them to have something working by at least Navi gen 2).


41 minutes ago, mr moose said:

It would be nice, with a new architecture and node shrink, to have something kick-arse and compelling to hope for. It's starting to get rather droll with "Nvidia did this" and "Nvidia did that". Having something to compare and talk about, rather than simply addressing pointless hate, would make for a nice change on these forums.

This is the way technology progresses. Inventions always come first, and hardware improvements follow.

Just be positive about what the future will bring us.


51 minutes ago, Nexxus said:

Frankly, I was expecting 30ish at 1920x1080; the fact it's getting about 20 over that is blowing my fucking mind lol

Well, it's hybrid rendering and it's only handling some reflections. It's amazing they've gotten that far, but this is more "Can it run Crysis?" territory for at least 2 more generations of GPUs. It's going to take either the 5nm generation of GPUs or some sort of super vector math unit built into the GPUs.


To make the performance hit acceptable for the current limited implementation, it looks like at least 100 RT cores are going to be necessary. To make it really viable, it's probably closer to 200 RT cores.
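For a rough sense of where numbers like 100-200 RT cores might come from, here's a naive linear-scaling sketch (assumptions: the 2080 Ti's 68 RT cores, the ~50 fps 1080p DXR Ultra figure discussed in this thread, and that frame rate scales linearly with RT core count - which real hardware almost certainly won't):

```python
# Naive estimate of RT cores needed for a target frame rate,
# assuming perfectly linear scaling from the 2080 Ti's result.
RT_CORES_2080TI = 68   # RT core count of the RTX 2080 Ti
MEASURED_FPS = 50      # rough 1080p DXR Ultra figure from this thread

def cores_needed(target_fps: float) -> int:
    return round(RT_CORES_2080TI * target_fps / MEASURED_FPS)

for target in (60, 90, 120, 144):
    print(f"{target:>3} fps target -> ~{cores_needed(target)} RT cores")
```

That lands in the same ballpark as the figures above, but it ignores everything that doesn't scale with RT cores (denoising, memory bandwidth, the raster workload).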


Not going to lie, those scenes with RTX on Ultra looked pretty sick, so in a few years' time I'm really looking forward to playing ultra-realistic AAA titles.



I am hoping that I can use a combination of technologies to arrive at 1440p/60 with my i7-8700K/RTX 2080:

 

1080p, ray tracing enabled on Medium or High, and DLSS to upsample the resolution back to 1440p.

 

That'll do me for a first-gen product.
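For what it's worth, the pixel math behind that plan (a quick sketch; it only counts shaded pixels and says nothing about how well DLSS reconstructs the image or how the ray tracing cost actually scales):

```python
# Work saved by rendering internally at 1080p and letting DLSS
# upscale to 1440p, counted purely in shaded pixels.
native_1440p = 2560 * 1440
internal_1080p = 1920 * 1080

ratio = native_1440p / internal_1080p
print(f"1440p has {ratio:.2f}x the pixels of 1080p")                 # ~1.78x
print(f"Rendering at 1080p shades ~{1 - 1/ratio:.0%} fewer pixels")  # ~44%
```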


Oh man, this is gonna look so sick on my PC Classic! Everything shining and glistening... RTX rubbing baby oil on everything... you ain't gaming unless you shoot a man with a glistening, baby-oiled gun, fool...

 

EDIT: I meant shoot a person... I did not mean it to come out that way, I believe in equality. Please let me keep my job...



8 hours ago, Nexxus said:

Frankly, I was expecting 30ish at 1920x1080; the fact it's getting about 20 over that is blowing my fucking mind lol

I remember trying ray tracing in Blender back in 2012-ish. It brought a render down from 2-3 minutes to 20 seconds... 20 seconds for one frame.


Question for anyone here who knows enough about programming with the new RTX in game engines: how hard is it to implement, in time terms, compared to the traditional ways of doing the related parts? Honestly, the way it sounds, it comes across as if it uses all of the existing art asset textures, geometry and basic lighting sources to do it all automatically, bypassing the need to do more than build an on switch into the engine, at which point NVIDIA's hardware and drivers take over and build the ray trace, de-noise it, and integrate it. But I've no idea if that's accurate.

 

Also, for anyone familiar with the hardware computational costs, how accurate do you think NVIDIA's 10 Giga Rays per second number for the 2080 Ti actually is in a pure "has the processing power" sense? Is it pure marketing of an utterly perfect scenario, or just a modest overestimation of the capabilities (the way AMD's claim about EPYC 2's 4x floating point performance is accurate, and we'll see a real benefit close to that, but not a number you'll hit 100% of the time)? Because I've been doing some math, and 1920x1080 @ 60 FPS with 3 rays per pixel, and just the reflection rays on bounce, only comes to around 760 million rays per second. It's making me wonder if maybe something else isn't bottlenecking the Tensor cores real hard (it's obviously not the rasterization, but it could be PCI-E, could be memory, could be denoising - hard to tell).
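To make the arithmetic in that last estimate explicit, here's the same back-of-envelope calculation spelled out (a sketch; the rays-per-pixel budgets are just illustrative, and NVIDIA's 10 Gigarays/s headline number is taken at face value):

```python
# Rays per second required for 1080p at 60 fps, for a few rays-per-pixel budgets.
WIDTH, HEIGHT, FPS = 1920, 1080, 60
pixels_per_second = WIDTH * HEIGHT * FPS  # ~124 million

for rays_per_pixel in (1, 3, 6):  # 6 ~= 3 reflection rays plus one bounce each (assumption)
    rays_per_second = pixels_per_second * rays_per_pixel
    print(f"{rays_per_pixel} rays/pixel -> {rays_per_second / 1e6:.0f} Mrays/s "
          f"({rays_per_second / 10e9:.1%} of a nominal 10 Gigarays/s)")
```

Three rays per pixel works out to roughly 370 Mrays/s, and doubling that for a bounce lands in the same ballpark as the figure above - either way well under the headline 10 Gigarays/s, which is what makes the bottleneck question interesting.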


20 minutes ago, CarlBar said:

Question for anyone here who knows enough about programming with the new RTX in game engines: how hard is it to implement, in time terms, compared to the traditional ways of doing the related parts? Honestly, the way it sounds, it comes across as if it uses all of the existing art asset textures, geometry and basic lighting sources to do it all automatically, bypassing the need to do more than build an on switch into the engine, at which point NVIDIA's hardware and drivers take over and build the ray trace, de-noise it, and integrate it. But I've no idea if that's accurate.

 

Also, for anyone familiar with the hardware computational costs, how accurate do you think NVIDIA's 10 Giga Rays per second number for the 2080 Ti actually is in a pure "has the processing power" sense? Is it pure marketing of an utterly perfect scenario, or just a modest overestimation of the capabilities (the way AMD's claim about EPYC 2's 4x floating point performance is accurate, and we'll see a real benefit close to that, but not a number you'll hit 100% of the time)? Because I've been doing some math, and 1920x1080 @ 60 FPS with 3 rays per pixel, and just the reflection rays on bounce, only comes to around 760 million rays per second. It's making me wonder if maybe something else isn't bottlenecking the Tensor cores real hard (it's obviously not the rasterization, but it could be PCI-E, could be memory, could be denoising - hard to tell).

 

I'm still waiting for someone to release some information on the power consumption when the RT and Tensor cores are turned on, because the PCBs were already hitting max wattage OC'd without those cores active. Maybe that's creating the bottleneck, as it's not performing on the Turing processor like it should.



6 minutes ago, Tristerin said:

 

I'm still waiting for someone to release some information on the power consumption when the RT and Tensor cores are turned on, because the PCBs were already hitting max wattage OC'd without those cores active. Maybe that's creating the bottleneck, as it's not performing on the Turing processor like it should.

 

It's lower with them on, give me a bit to dig info up.


7 minutes ago, CarlBar said:

 

It's lower with them on, give me a bit to dig info up.

If that's the case, then to me that sounds like intentional gimping of the GPU so that the next generation has an easy fix: more power OR a smaller node (first easy, second in the works), or a combination of both.



12 hours ago, VegetableStu said:

people don't get this though ._. they see 60 fps evaporating and people go OMG RAYTRACING SUX

It doesn't really matter if it's hard to render or not. When paying that much for a GPU, people expect more than ~50 frames, and no amount of special shadows is going to change that. The fact is it's an exciting new development, but it hasn't been refined enough.

