Is AMD copying NVIDIA for ray tracing hardware (tensor cores)?

redbread123

So AMD said they'd have ray tracing hardware by next year. NVIDIA said at launch that they'd been developing their ray tracing hardware for 10 years, so if AMD suddenly comes out of nowhere with ray tracing hardware after one year, are they just copying NVIDIA? How does AMD suddenly know how to make ray tracing hardware out of thin air?


It's not like it's some magic hardware that NVIDIA was developing; there's nothing fundamentally new there. A tensor core is just a piece of hardware that does matrix multiplications. It isn't very useful in the general 3D rendering pipeline, which is why it wasn't introduced earlier: it's simply useless without software support and demand. What NVIDIA did was take a leap of faith, put it in their hardware, and try to encourage its usage. That seems to have worked to some degree, even though for now it only really performs well on high-end GPUs. But we can expect both the hardware and the software to keep getting optimized now that the starting point has passed.
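To make "just matrix multiplications" concrete: the core operation a tensor core accelerates is a small fused matrix multiply-accumulate, D = A×B + C, on fixed-size tiles (4×4 with FP16 inputs and FP32 accumulation in the first-generation cores, per NVIDIA's published description). A rough NumPy sketch of the per-tile math; everything beyond the tile size and precisions is illustrative:

```python
import numpy as np

# One tensor-core-style operation: D = A @ B + C on a small fixed-size tile.
# First-generation cores use 4x4 tiles, FP16 inputs, FP32 accumulation.
def tensor_core_fma(A, B, C):
    A16 = A.astype(np.float16)   # inputs are stored at half precision
    B16 = B.astype(np.float16)
    # multiply, then accumulate into an FP32 result
    return A16.astype(np.float32) @ B16.astype(np.float32) + C.astype(np.float32)

A = np.ones((4, 4))
B = np.eye(4)
C = np.zeros((4, 4))
D = tensor_core_fma(A, B, C)
print(D.dtype, D[0, 0])  # float32 1.0
```

Larger matrix products are built by tiling many of these small operations, which is why the same hardware is useful for both deep-learning inference and denoising filters.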

Tag or quote me so I see your reply


10 minutes ago, Juular said:

It's not like it's some magic hardware that NVIDIA was developing; there's nothing fundamentally new there. A tensor core is just a piece of hardware that does matrix multiplications. It isn't very useful in the general 3D rendering pipeline, which is why it wasn't introduced earlier: it's simply useless without software support and demand. What NVIDIA did was take a leap of faith, put it in their hardware, and try to encourage its usage. That seems to have worked to some degree, even though for now it only really performs well on high-end GPUs. But we can expect both the hardware and the software to keep getting optimized now that the starting point has passed.

ah yes


Ray tracing is not an NVIDIA technology; games use DXR, which is part of Microsoft's DirectX portfolio.

 

All NVIDIA did was create dedicated hardware to improve performance on it. If AMD can figure out how to do it just as well at the software level, that could give them the upper hand.

Personal Desktop:

CPU: Intel Core i7 10700K @ 5GHz |~| Cooling: bq! Dark Rock Pro 4 |~| MOBO: Gigabyte Z490UD ATX |~| RAM: 16GB DDR4 3333MHz CL16 G.Skill Trident Z |~| GPU: RX 6900XT Sapphire Nitro+ |~| PSU: Corsair TX650M 80Plus Gold |~| Boot: SSD WD Green M.2 2280 240GB |~| Storage: 1x 3TB HDD 7200RPM Seagate Barracuda + SanDisk Ultra 3D 1TB |~| Case: Fractal Design Meshify C Mini |~| Display: Toshiba UL7A 4K/60Hz |~| OS: Windows 10 Pro.

Luna, the temporary Desktop:

CPU: AMD R9 7950XT  |~| Cooling: bq! Dark Rock 4 Pro |~| MOBO: Gigabyte Aorus Master |~| RAM: 32GB Kingston HyperX |~| GPU: AMD Radeon RX 7900XTX (Reference) |~| PSU: Corsair HX1000 80+ Platinum |~| Windows Boot Drive: 2x 512GB (1TB total) Plextor SATA SSD (RAID0 volume) |~| Linux Boot Drive: 500GB Kingston A2000 |~| Storage: 4TB WD Black HDD |~| Case: Cooler Master Silencio S600 |~| Display 1 (leftmost): Eizo (unknown model) 1920x1080 IPS @ 60Hz |~| Display 2 (center): BenQ ZOWIE XL2540 1920x1080 TN @ 240Hz |~| Display 3 (rightmost): Wacom Cintiq Pro 24 3840x2160 IPS @ 60Hz 10-bit |~| OS: Windows 10 Pro (games / art) + Linux (distro: NixOS; programming and daily driver)

The specific techniques NVIDIA uses to implement ray tracing have been around for over a decade, developed by researchers not affiliated with any graphics company, as far as I know.

 

So AMD could make hardware that accelerates ray tracing because they're basing it off of knowledge in more or less the public domain. What AMD can't do is directly copy NVIDIA's hardware.

 


NVIDIA's approach is only one route of many. They get somewhat usable RT performance by casting a limited number of rays, then using the tensor cores to denoise the result. If you try Quake II RTX and turn off the noise reduction, the image is really heavy in noise from the limited ray count. A brute-force approach would need far more rays than that.

 

AMD will face a similar trade-off. How much hardware do they throw at casting rays? With their chosen ray count, will they need other post-processing to make it look good? That is what we're waiting to find out. They may go for a similar number of rays but implement the denoise step in a different way, for example.
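The ray-count/noise trade-off above is just Monte Carlo statistics: the error of an N-sample average shrinks roughly as 1/√N, so quadrupling the rays only halves the noise. A toy sketch (pure NumPy, no actual rendering; the "brightness" numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def pixel_error(rays_per_pixel, trials=500):
    """Average absolute error when estimating a pixel's true brightness (0.5)
    by averaging `rays_per_pixel` random samples, over many trials."""
    samples = rng.uniform(0.0, 1.0, size=(trials, rays_per_pixel))
    estimates = samples.mean(axis=1)  # one noisy pixel value per trial
    return np.abs(estimates - 0.5).mean()

for n in (4, 64, 1024):
    print(n, round(pixel_error(n), 4))
# Error drops roughly 2x for every 4x more rays --
# hence denoising a cheap low-ray image instead of brute-forcing more rays.
```

This is why every real-time RT vendor pairs a modest ray budget with a denoiser rather than throwing raw ray count at the problem.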

 

While we may still be some way from Intel joining the GPU club, they already offer an open RT denoise library. It's CPU-based for now, but it shows they're not inactive in this area of software.

https://openimagedenoise.github.io/
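To see why a denoise pass is so effective, here's a deliberately crude stand-in for a denoiser: a 3×3 box blur over a noisy flat image. Real denoisers like Open Image Denoise use trained, edge-preserving filters; this toy only shows the basic idea that averaging neighbouring samples cancels per-pixel noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# A flat grey "render" corrupted by per-pixel Monte Carlo noise.
clean = np.full((64, 64), 0.5)
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

def box_denoise(img):
    """Crude denoiser: replace each pixel with its 3x3 neighbourhood average."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

denoised = box_denoise(noisy)
print(round(noisy.std(), 3), round(denoised.std(), 3))  # noise roughly 3x lower
```

Averaging 9 independent samples cuts the noise standard deviation by about √9 = 3, the same budget-saving effect a real denoiser buys, without the blurring of actual scene detail.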

 

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


  • 4 years later...

They are copying NVIDIA's DLSS technology, though. Look at FSR; now they've even given the source to modders 🤦‍♂️

