PhysX cards, but raytracing?

DANK_AS_gay

So, I have noticed that ray tracing takes up a significant amount of die area on an Nvidia RTX chip, and I was wondering how feasible it would be for them to create a dedicated "ray tracing card" that accelerates ray tracing, so that the GPU could focus on everything else. That way a 30-series card could have had a 2-slot cooler, with the massive die focused entirely on rasterizing. There could be one card for after-effects and ray tracing, and another for geometry and textures. Older games would still be supported, because they would talk to the base GPU as usual, and the base GPU would send the ray tracing work (maybe anti-aliasing acceleration as well?) back over a massive SLI-style link with huge bandwidth. How feasible is this, and how would it affect gamers? To clarify: all the non-essential, nice-to-have tasks would be computed on the second card, and because it is its own card it could perform significantly better, while also getting the chiplet benefit of cheaper manufacturing (two smaller dies).

Link to comment
Share on other sites

Link to post
Share on other sites

Possible? Yeah probably.

 

Going to happen? No.

 

It's just not in Nvidia's interest to do so. Then you could just buy one ray tracing card for your needs and only upgrade the GPU, or vice versa. Instead of having multiple reasons to upgrade a single card, you'd have one reason to upgrade each card.

 

That, and others (Intel and AMD) could then take advantage of it too.

 

Also keep in mind space. This is extra space used up, and compared to the olden days of PhysX cards, people are moving to smaller devices, often even laptops, as their main system.

 

Could they make a separate SKU with ray tracing on board? Yes, but then you're back where you started, asking why they would give consumers a reason not to buy their stuff.


I wonder what kind of latency would be added by doing all that over the PCIe bus (no idea, that's why I ask), communicating between the devices, etc.
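For a rough sense of scale, here's a back-of-envelope sketch of just the bus-transfer cost of shipping a ray-traced lighting buffer between two cards each frame. All the figures (buffer format, achievable PCIe efficiency) are assumptions for illustration, not measurements:

```python
# Back-of-envelope: time to move a ray-traced lighting buffer over
# PCIe 4.0 x16 each frame. Assumed figures, not measured ones.

PCIE4_X16_BPS = 32e9   # ~32 GB/s theoretical one-way bandwidth
EFFICIENCY = 0.7       # assume ~70% achievable after protocol overhead

def transfer_ms(width, height, bytes_per_pixel):
    """Milliseconds to move one buffer of the given size one way."""
    payload_bytes = width * height * bytes_per_pixel
    return payload_bytes / (PCIE4_X16_BPS * EFFICIENCY) * 1e3

# Assume a 1080p RGBA16F lighting buffer (8 bytes/pixel), sent to the
# ray tracing card and the result sent back.
one_way = transfer_ms(1920, 1080, 8)
round_trip = 2 * one_way
print(f"one way: {one_way:.2f} ms, round trip: {round_trip:.2f} ms")
```

Under these assumptions the round trip comes out around 1.5 ms, before any synchronization overhead or the ray tracing itself. Against a 16.7 ms frame budget at 60 fps, that's a noticeable slice, which is part of why a much fatter SLI-style link gets proposed below.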


 


3 hours ago, dilpickle said:

The goal in technology is to consolidate not separate.

Sure, but when you separate, you also make yourself room. Silicon manufacturing is at the edge of how finely we can pattern silicon using light; to go much smaller we have to come up with a new way of manufacturing processors. Plus I am sure that yields on the Ampere GPU dies are horrendous. By separating the parts, like AMD has done on their desktop processors, you can increase yields dramatically. It also gives you more room to improve performance: since there isn't realistically a smaller process node to go to right now, fitting more transistors on one die isn't possible, so we need more silicon area for more performance. Or we can just have Fermi in 2021.
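The yield argument above can be sketched with the standard Poisson yield model, Y = exp(-A·D), where A is die area and D is defect density. The defect density and die sizes below are illustrative assumptions (the big die is roughly GA102-sized), not actual foundry figures:

```python
import math

# Poisson yield model: Y = exp(-A * D), with die area A in cm^2 and
# defect density D in defects/cm^2. Illustrative numbers only.

def yield_rate(area_cm2, defect_density):
    """Fraction of dies with zero defects under a Poisson model."""
    return math.exp(-area_cm2 * defect_density)

D = 0.2                 # assumed defects per cm^2
big_die = 6.28          # ~628 mm^2 monolithic die (roughly GA102-sized)
half_die = big_die / 2  # one of two chiplets

y_mono = yield_rate(big_die, D)
y_half = yield_rate(half_die, D)
print(f"monolithic: {y_mono:.1%}, per half-size die: {y_half:.1%}")
```

With these assumed numbers, the monolithic die yields about 28% while each half-size die yields about 53%, which is exactly the effect chiplets exploit; the catch is the bandwidth and latency of whatever link rejoins them.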


5 hours ago, Tristerin said:

I wonder what kind of latency would be added by doing all that over the PCIe bus (no idea, that's why I ask), communicating between the devices, etc.

I was suggesting SLI connectors, but reworked so that they don't suck in terms of bandwidth. I guess you could do it over PCIe, but why?


5 hours ago, jaslion said:

Possible? Yeah probably.

 

Going to happen? No.

 

It's just not in Nvidia's interest to do so. Then you could just buy one ray tracing card for your needs and only upgrade the GPU, or vice versa. Instead of having multiple reasons to upgrade a single card, you'd have one reason to upgrade each card.

 

That, and others (Intel and AMD) could then take advantage of it too.

 

Also keep in mind space. This is extra space used up, and compared to the olden days of PhysX cards, people are moving to smaller devices, often even laptops, as their main system.

 

Could they make a separate SKU with ray tracing on board? Yes, but then you're back where you started, asking why they would give consumers a reason not to buy their stuff.

Sure, but you could still feasibly do this in a laptop, especially since if you're getting discrete graphics in a laptop, thin-and-light isn't exactly what you're looking for. Also, in the here and now, AMD is separating the cores of their processors in order to increase yields, and splitting the functions of the GPU could help the same way. Especially since, if there is a defect in the RTX portion of an Ampere die, the whole thing goes in the bin. Separating the two would increase yields, decrease costs, and increase profits, since customers would have to buy two cards in order to get the best experience.


Well guys, I found out that this is actually a thing, it's just with AMD and Apple. Using the proprietary Infinity Fabric Link in the Mac Pro, you can use one GPU for physics and simulation, and the second for rendering. Or you can use the GPUs (two Duo cards) and the CPU together for rendering-related workflows with triple-buffered rendering.

Cool idea; too bad about the cost, and the inferior performance in some workflows compared to a single dedicated GPU (like the 3090).


But aren't there single-slot RTX cards? Or at least 2-slot ones?

 

PS: why yes, yes there are... (several)

[screenshot: listings of single- and dual-slot RTX 3080 cards]

 

So doesn't this kinda counter your argument of "not enough space"?

It's up to you to buy a single, double, triple, or quadruple slot card...

 

 

 



18 hours ago, DANK_AS_gay said:

Well guys, I found out that this is actually a thing, it's just with AMD and Apple. Using the proprietary Infinity Fabric Link in the Mac Pro, you can use one GPU for physics and simulation, and the second for rendering. Or you can use the GPUs (two Duo cards) and the CPU together for rendering-related workflows with triple-buffered rendering.

Cool idea; too bad about the cost, and the inferior performance in some workflows compared to a single dedicated GPU (like the 3090).

This is not for gaming.


My memory is a little rusty, but I don't think the add-on PhysX cards ever sold well.


On 12/11/2021 at 2:57 PM, ADarkBard said:

My memory is a little rusty, but I don't think the add-on PhysX cards ever sold well.

From what I have seen, they did not sell well. But my proposal has a few benefits the PhysX cards did not. Modern GPUs are already on very small process nodes, leaving little room for additional ray tracing performance without making the die bigger and increasing power draw. A separate card would leave more room on the GPU die for graphics, and more room on its own die for ray tracing.

