
Nvidia Ray Tracing

Billy Pilgrim

Just wondering if anyone else has had the thought that Nvidia could have created a separate ray-tracing card that would calculate rays, kind of like with PhysX, and then merged the two at a future date. This would mean that people uninterested in ray tracing could buy a 20-series card that was more powerful, with a greater focus on raw performance rather than on a feature with extremely low support that radically decreases frame rates when in use. This would be a win for Nvidia because they could sell an extra card to enthusiasts and large companies, as well as offer a more compelling reason to upgrade.


Maybe they can't do that due to the bandwidth requirements for the RT cores to communicate with the rest of the GPU. We just don't know at the moment. PhysX calculations can be done on CUDA cores instead, so it doesn't have this problem.
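
To illustrate the PhysX side of that: physics updates are ordinary arithmetic that maps straight onto the general-purpose CUDA cores, so they never needed their own tightly coupled hardware block. Below is a minimal sketch of that idea, with a hypothetical particle buffer and a semi-implicit Euler step; it is not the actual PhysX library, just a stand-in for the kind of work CUDA cores handle on their own.

```cuda
// Minimal sketch (not the actual PhysX library): a physics-style update
// expressed as an ordinary CUDA kernel, i.e. work that runs on the same
// general-purpose CUDA cores used for shading, with no dedicated unit.
#include <cuda_runtime.h>

struct Particle {
    float3 pos;
    float3 vel;
};

__global__ void integrate(Particle* p, int n, float dt, float3 gravity) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Semi-implicit Euler step: update velocity, then position.
    p[i].vel.x += gravity.x * dt;
    p[i].vel.y += gravity.y * dt;
    p[i].vel.z += gravity.z * dt;
    p[i].pos.x += p[i].vel.x * dt;
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}

int main() {
    const int n = 1 << 20;                       // 1M particles, arbitrary
    Particle* d_particles = nullptr;
    cudaMalloc(&d_particles, n * sizeof(Particle));
    cudaMemset(d_particles, 0, n * sizeof(Particle));

    const float3 gravity = make_float3(0.f, -9.81f, 0.f);
    integrate<<<(n + 255) / 256, 256>>>(d_particles, n, 1.f / 60.f, gravity);
    cudaDeviceSynchronize();

    cudaFree(d_particles);
    return 0;
}
```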

CPU: i7-2600K 4751MHz 1.44V (software) --> 1.47V at the back of the socket Motherboard: Asrock Z77 Extreme4 (BCLK: 103.3MHz) CPU Cooler: Noctua NH-D15 RAM: Adata XPG 2x8GB DDR3 (XMP: 2133MHz 10-11-11-30 CR2, custom: 2203MHz 10-11-10-26 CR1 tRFC:230 tREFI:14000) GPU: Asus GTX 1070 Dual (Super Jetstream vbios, +70(2025-2088MHz)/+400(8.8Gbps)) SSD: Samsung 840 Pro 256GB (main boot drive), Transcend SSD370 128GB PSU: Seasonic X-660 80+ Gold Case: Antec P110 Silent, 5 intakes 1 exhaust Monitor: AOC G2460PF 1080p 144Hz (150Hz max w/ DP, 121Hz max w/ HDMI) TN panel Keyboard: Logitech G610 Orion (Cherry MX Blue) with SteelSeries Apex M260 keycaps Mouse: BenQ Zowie FK1

 

Model: HP Omen 17 17-an110ca CPU: i7-8750H (0.125V core & cache, 50mV SA undervolt) GPU: GTX 1060 6GB Mobile (+80/+450, 1650MHz~1750MHz 0.78V~0.85V) RAM: 8+8GB DDR4-2400 18-17-17-39 2T Storage: HP EX920 1TB PCIe x4 M.2 SSD + Crucial MX500 1TB 2.5" SATA SSD, 128GB Toshiba PCIe x2 M.2 SSD (KBG30ZMV128G) gone cooking externally, 1TB Seagate 7200RPM 2.5" HDD (ST1000LM049-2GH172) left outside Monitor: 1080p 126Hz IPS G-sync

 

Desktop benching:

Cinebench R15 Single thread:168 Multi-thread: 833 

SuperPi (v1.5 from Techpowerup, PI value output) 16K: 0.100s 1M: 8.255s 32M: 7m 45.93s


The problem with that is that no one might buy the RT card. If they put it in the GPU it will become more common, therefore making games look better in the long run.


1 minute ago, thedonheath said:

The problem with that is that no one might buy the RT card. If they put it in the GPU it will become more common, therefore making games look better in the long run.

It's kind of a catch-22. You need consumers with ray-tracing hardware before devs will update games for it, but you need devs to update games for RTX before people will buy it.


You have to respect Nvidia for taking the risk, though. No one may like it now, but it's better for everyone in the long run. I'm all for new tech.


Just now, thedonheath said:

You have to respect Nvidia for taking the risk, though. No one may like it now, but it's better for everyone in the long run. I'm all for new tech.

Yeah, I guess that's true, though maybe they introduced RTX too early. Maybe it was partly to justify the 20-series naming?


1 minute ago, Billy Pilgrim said:

Yeah, I guess that's true, though maybe they introduced RTX too early. Maybe it was partly to justify the 20-series naming?

Not too early; the hardware needs to be there before the software. They just released it at too high a price.


1 minute ago, Billy Pilgrim said:

Yeah, I guess that's true, though maybe they introduced RTX too early. Maybe it was partly to justify the 20-series naming?

People moan but still buy them, haha. And of course the 3000 series will do it better, and so on. Plus there's other cool stuff that RTX brings with it.

Remember how long tessellation took; now nearly every game uses it.


3 minutes ago, Arika S said:

Not too early; the hardware needs to be there before the software. They just released it at too high a price.

This is true, they took the piss with the price.


16 minutes ago, Jurrunio said:

Maybe they can't do that due to the bandwidth requirements for the RT cores to communicate with the rest of the GPU. We just don't know at the moment. PhysX calculations can be done on CUDA cores instead, so it doesn't have this problem.

I doubt it very much; NVLink can do 100 GB/s.

 

I think the real reason is that if they don't push it out this aggressively, it won't get the market penetration, and then the adoption, that it needs.


2 minutes ago, schwellmo92 said:

I doubt it very much; NVLink can do 100 GB/s.

 

I think the real reason is that if they don't push it out this aggressively, it won't get the market penetration, and then the adoption, that it needs.

100 GB/s is nothing compared to how fast data can travel within the same GPU die. There's also latency: losing even a few milliseconds there can mean everything for the frame-time budget.
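
A quick back-of-envelope to put that 100 GB/s figure against a frame budget (the per-frame buffer size below is an assumed, illustrative number, not a measurement): the raw copy time for a round trip is only a few percent of a 60 fps frame, which is exactly why the latency and synchronisation stalls, rather than throughput, are the part of the argument that bites.

```cuda
// Back-of-envelope only: the buffer size below is an assumed figure for
// illustration, not a measurement from any real game or driver.
#include <cstdio>

int main() {
    const double frame_budget_ms = 1000.0 / 60.0; // 16.7 ms per frame at 60 fps
    const double link_gb_per_s   = 100.0;         // the quoted NVLink figure
    const double gbuffer_mb      = 33.0;          // assumed per-frame G-buffer data

    // One-way transfer time over the link, then a full round trip
    // (scene data out, traced results back), ignoring all latency and sync.
    const double one_way_ms    = gbuffer_mb / 1024.0 / link_gb_per_s * 1000.0;
    const double round_trip_ms = 2.0 * one_way_ms;

    std::printf("frame budget: %.2f ms\n", frame_budget_ms);
    std::printf("round trip over the link: %.2f ms (%.1f%% of the budget)\n",
                round_trip_ms, 100.0 * round_trip_ms / frame_budget_ms);
    return 0;
}
```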


2 minutes ago, Jurrunio said:

100 GB/s is nothing compared to how fast data can travel within the same GPU die. There's also latency: losing even a few milliseconds there can mean everything for the frame-time budget.

People who professionally render CGI (including ray tracing) normally use 2-4 GPUs and it scales almost perfectly, so I think it would've been fine to offload ray tracing to an add-in card.
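
For context on why offline rendering scales that well: tiled renderers give each GPU its own tile and its own copy of the scene, so the cards never have to talk to each other during the render. Here is a rough CUDA sketch of that structure; the tile split, buffer layout, and the placeholder kernel standing in for a real path tracer are all assumptions for illustration.

```cuda
#include <cuda_runtime.h>
#include <vector>

__global__ void shade_tile(float4* out, int tile_w, int tile_h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= tile_w || y >= tile_h) return;
    // Placeholder "work" standing in for tracing this pixel's rays.
    out[y * tile_w + x] = make_float4(x / float(tile_w), y / float(tile_h), 0.f, 1.f);
}

int main() {
    int device_count = 0;
    cudaGetDeviceCount(&device_count);
    if (device_count < 1) return 0;

    const int width = 1920, height = 1080;
    const int tile_h = height / device_count;        // one horizontal strip per GPU

    std::vector<float4*> tiles(device_count, nullptr);
    for (int d = 0; d < device_count; ++d) {
        cudaSetDevice(d);                            // each card is fully independent
        cudaMalloc(&tiles[d], size_t(width) * tile_h * sizeof(float4));
        dim3 block(16, 16), grid((width + 15) / 16, (tile_h + 15) / 16);
        shade_tile<<<grid, block>>>(tiles[d], width, tile_h);
    }

    // A single gather at the end of the render -- not per frame, not per bounce.
    std::vector<float4> image(size_t(width) * height);
    for (int d = 0; d < device_count; ++d) {
        cudaSetDevice(d);
        cudaMemcpy(image.data() + size_t(d) * width * tile_h, tiles[d],
                   size_t(width) * tile_h * sizeof(float4), cudaMemcpyDeviceToHost);
        cudaFree(tiles[d]);
    }
    return 0;
}
```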


6 minutes ago, Billy Pilgrim said:

Just wondering if anyone else has had the thought that Nvidia could have created a separate ray-tracing card that would calculate rays, kind of like with PhysX, and then merged the two at a future date. This would mean that people uninterested in ray tracing could buy a 20-series card that was more powerful, with a greater focus on raw performance rather than on a feature with extremely low support that radically decreases frame rates when in use. This would be a win for Nvidia because they could sell an extra card to enthusiasts and large companies, as well as offer a more compelling reason to upgrade.

Besides being a horrible business decision, it would be a massive pain for developers to have another card to deal with. Remember that the scene is only partially ray traced: the scene is rendered first, then essentially post-processed by the RT hardware, and then your traditional post-processing runs after that. Because the scene data does not already exist on the RT card, it would have to be rendered, sent to the RT card, and then sent back to the GPU every frame. The memory allocations and copies involved in moving that data around would slow everything down badly; allocations and copies are hands down the most expensive operations you can perform in a real-time application (see the sketch at the end of this post).

 

1 minute ago, schwellmo92 said:

People who professionally render CGI (including ray tracing) normally use 2-4 GPUs and it scales almost perfectly, so I think it would've been fine to offload ray tracing to an add-in card.

 

5 minutes ago, schwellmo92 said:

I doubt it very much; NVLink can do 100 GB/s.

 

I think the real reason is that if they don't push it out this aggressively, it won't get the market penetration, and then the adoption, that it needs.

This isn't a problem for SLI because there really isn't much card-to-card communication: each card has its own copy of the data and works independently of the other, so the only time they transfer data is when memory is allocated, which is typically what your loading screen is for. Alternatively you stream data to the GPU slowly each frame, but the memory block was still allocated beforehand.

 

As for ray tracing in software such as Blender or Maya, those renderers tile the scene so each card gets a tile at a time. The data set can be split among the cards, and the cards are completely independent of each other; they don't need to know about one another. A standalone RTX card, by contrast, would have a massive circular dependency with the GPU, which means lots of communication and synchronisation, which means frame-time budgets shrink by a lot.
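
To make that round-trip cost concrete, here is a hedged sketch of a per-frame hand-off between the main GPU and an imagined stand-alone RT card, using a second CUDA device to play the RT card's part. The buffer sizes are assumptions and the tracing itself is omitted; the point is just that every frame would need a copy out, a wait, and a copy back before post-processing can even start.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int device_count = 0;
    cudaGetDeviceCount(&device_count);
    if (device_count < 2) {
        std::printf("needs two GPUs to stand in for GPU + RT card\n");
        return 0;
    }

    const size_t gbuffer_bytes = 33u << 20;   // assumed ~33 MB of G-buffer data per frame
    const size_t result_bytes  = 8u << 20;    // assumed ~8 MB of traced results

    void *gbuf_gpu = nullptr, *gbuf_rt = nullptr, *res_rt = nullptr, *res_gpu = nullptr;
    cudaSetDevice(0);                          // "main GPU"
    cudaMalloc(&gbuf_gpu, gbuffer_bytes);
    cudaMalloc(&res_gpu, result_bytes);
    cudaDeviceEnablePeerAccess(1, 0);          // allow direct copies over the link
    cudaSetDevice(1);                          // pretend this is the RT add-in card
    cudaMalloc(&gbuf_rt, gbuffer_bytes);
    cudaMalloc(&res_rt, result_bytes);
    cudaDeviceEnablePeerAccess(0, 0);

    cudaSetDevice(0);
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    // 1. Ship the freshly rendered G-buffer to the "RT card".
    cudaMemcpyPeer(gbuf_rt, 1, gbuf_gpu, 0, gbuffer_bytes);
    // 2. (Tracing would run on device 1 here.)
    // 3. Ship the traced results back; post-processing stalls until this lands.
    cudaMemcpyPeer(res_gpu, 0, res_rt, 1, result_bytes);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    std::printf("round trip: %.3f ms out of a ~16.7 ms (60 fps) frame budget\n", ms);
    return 0;
}
```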

CPU: Intel i7 - 5820k @ 4.5GHz, Cooler: Corsair H80i, Motherboard: MSI X99S Gaming 7, RAM: Corsair Vengeance LPX 32GB DDR4 2666MHz CL16,

GPU: ASUS GTX 980 Strix, Case: Corsair 900D, PSU: Corsair AX860i 860W, Keyboard: Logitech G19, Mouse: Corsair M95, Storage: Intel 730 Series 480GB SSD, WD 1.5TB Black

Display: BenQ XL2730Z 2560x1440 144Hz


11 minutes ago, schwellmo92 said:

People who professionally render CGI (including ray tracing) normally use 2-4 GPUs and it scales almost perfectly, so I think it would've been fine to offload ray tracing to an add-in card.

Just because it scales without external RT cores involved doesn't mean it works...


It would not surprise me at all if something like this happened in the future. But for the time being, it needs to be adopted and optimised on the software side before Nvidia/AMD can or will look at dedicated cards for ray tracing alone.


1 hour ago, N1NJ4W4RR10R said:

It would not surprise me at all if something like this happened in the future. But for the time being, it needs to be adopted and optimised on the software side before Nvidia/AMD can or will look at dedicated cards for ray tracing alone.

I would think that if this were to happen, they would start off as two separate cards and then merge, as the industry tends towards integration. For example, VRAM was at one point upgradable, and CPU cache used to be external.


3 hours ago, schwellmo92 said:

People who professionally render CGI (including ray tracing) normally use 2-4 GPUs and it scales almost perfectly, so I think it would've been fine to offload ray tracing to an add-in card.

That's different.

There is a huge difference between rendering a scene and waiting until it finishes, and rendering a scene in real time that the player interacts with the whole time.


4 hours ago, Billy Pilgrim said:

I would think that if this were to happen, they would start off as two separate cards and then merge, as the industry tends towards integration. For example, VRAM was at one point upgradable, and CPU cache used to be external.

It's a relatively niche-sounding thing though, and likely wouldn't be picked up (at least not as much) if sold as a separate unit while the software side is as unoptimised as it is.

 

Add to that that I'm sure Nvidia wanted to guarantee their profits; I'd bet they'll only develop dedicated hardware once they know it's worthwhile.


6 hours ago, Billy Pilgrim said:

Just wondering if anyone else has had the thought that Nvidia could have created a separate ray-tracing card that would calculate rays, kind of like with PhysX, and then merged the two at a future date. This would mean that people uninterested in ray tracing could buy a 20-series card that was more powerful, with a greater focus on raw performance rather than on a feature with extremely low support that radically decreases frame rates when in use. This would be a win for Nvidia because they could sell an extra card to enthusiasts and large companies, as well as offer a more compelling reason to upgrade.

Imagine if we applied this to all new tech; we would not have SSDs, USB, etc. (to name some newer tech), because people would instead go for purely performance-focused hardware. Personally I do not have a problem with the new RTX cards; people have the choice to buy GTX cards if they do not want to invest in the new tech. However, I am certain that once ray tracing is used properly by developers and has been on the market a bit longer it will perform a lot better. But the really exciting thing for me is DLSS, not ray tracing.

 

People also expect too much of a performance increase, unrealistically so in my opinion. I see people demand a 50% improvement at the same price point every year; in my honest opinion that is not going to happen, and one might argue it has never really happened unless you upgrade your hardware constantly. It would be counterproductive for Nvidia to just bump performance a bit more without new tech, even more so when we think about what AMD might do in half a year or so. I personally think AMD will dominate the budget and average-gamer GPU market next year. Would I use ray tracing in BFV? Most likely not once I get my new rig built, because I want FPS. Will I put all settings on ultra? No, why would I? The gain from that is almost nothing. Once a few of the non-FPS games that support ray tracing come out next year, I think we will see much better performance and a real difference. That might actually make people want to upgrade to an RTX card.

 

 


7 hours ago, Sahl said:

But the really exciting thing for me is DLSS, not ray tracing.

What exactly is DLSS?


1 hour ago, Billy Pilgrim said:

What exactly is DLSS?

Deep Learning Super Sampling.

 

Or do you actually want me to make a technical post about it? If so, I'd rather link to the various articles that describe it much better than I ever could.


1 hour ago, Sahl said:

Or do you actually want me to make a technical post about it? If so, I'd rather link to the various articles that describe it much better than I ever could.

That would be useful. Thanks
