What do you think will be the next big graphics-enhancing GPU technology?

So I remember when HDR came out for GPUs, I really couldn't imagine any new tech topping it. Now I see they're releasing ray tracing, which looks even better. So after HDR and ray tracing, what could be next? Any thoughts?


I'm not sure if it's going to be the "next big thing", but it will be implemented at some point: real-time physics in engines, with GPUs able to process it easily. For example, destruction of ground and buildings in much more detail. Smoke and air, electricity and water. Everything we have now, but in a much more detailed way.


12 minutes ago, Firewrath9 said:

Actually true SLI: 2x SLI = 2x perf.

Probably not; two-way SLI will go obsolete just like 3-way and 4-way did.

You don't see people gaming on dual-CPU systems that often anymore; the same thing will happen to GPUs.

Multi-GPU will become mostly enterprise/server focused.

NEW PC build: Blank Heaven   minimalist white and black PC     Old S340 build log "White Heaven"        The "LIGHTCANON" flashlight build log        Project AntiRoll (prototype)        Custom speaker project

Spoiler

Ryzen 3950X | AMD Vega Frontier Edition | ASUS X570 Pro WS | Corsair Vengeance LPX 64GB | NZXT H500 | Seasonic Prime Fanless TX-700 | Custom loop | Coolermaster SK630 White | Logitech MX Master 2S | Samsung 980 Pro 1TB + 970 Pro 512GB | Samsung 58" 4k TV | Scarlett 2i4 | 2x AT2020

 


33 minutes ago, saif96 said:

So I remember when HDR came out for GPUs, I really couldn't imagine any new tech topping it. Now I see they're releasing ray tracing, which looks even better. So after HDR and ray tracing, what could be next? Any thoughts?

Considering ray tracing was the holy grail, at least for real-time lighting, I'm having a hard time imagining what comes next for image quality. There are givens like more detailed models and textures, but nothing like a whole new way of doing things.


11 minutes ago, Enderman said:

Probably not; two-way SLI will go obsolete just like 3-way and 4-way did.

You don't see people gaming on dual-CPU systems that often anymore; the same thing will happen to GPUs.

Multi-GPU will become mostly enterprise/server focused.

Let's see where NVLink gets us with the impending 20 series. OK, it doesn't scale beyond 2, but if it allows better scaling with 2, that's a starting point. It's been reported to have about 50 GB/s bidirectional bandwidth, which is comparable to high-speed dual-channel RAM: a massive upgrade from the 1 or 2 GB/s of existing SLI bridges, although still short of the hundreds of GB/s of on-card VRAM.
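To put those link speeds in perspective, here's a quick back-of-envelope sketch in Python. The 2 GB/s and 50 GB/s figures are the ones quoted above; the 4K RGBA framebuffer size and everything else are my own assumptions, purely for scale:

```python
# Back-of-envelope: time to move one 4K framebuffer over each link.

def transfer_ms(size_bytes: float, bandwidth_gb_s: float) -> float:
    """Milliseconds to move size_bytes over a link of bandwidth_gb_s GB/s."""
    return size_bytes / (bandwidth_gb_s * 1e9) * 1e3

FRAME_4K = 3840 * 2160 * 4  # assumed RGBA8 framebuffer, ~33 MB

sli_ms = transfer_ms(FRAME_4K, 2)      # classic HB bridge figure from the post
nvlink_ms = transfer_ms(FRAME_4K, 50)  # reported NVLink figure from the post

print(f"SLI bridge: {sli_ms:.1f} ms/frame, NVLink: {nvlink_ms:.2f} ms/frame")
# At 60 fps the whole frame budget is only 16.7 ms, so the old bridge alone
# nearly eats it, while the reported NVLink figure leaves plenty of headroom.
```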

 

The holy grail of multi-GPU is making it scale without dumping a ton of work on game developers. If that could be solved in hardware somehow, there could be a resurgence of multi-GPU, especially if it scaled well beyond 2.

 

Another parallel is AMD's Ryzen strategy: can you make smaller units and combine them to scale better? Not without its own problems, but it's a starting point.
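To illustrate why dumping work on developers matters so much, here's a minimal Amdahl's-law sketch of multi-GPU scaling. The 10% serial (unsplittable) fraction is an arbitrary assumption, not a measured figure:

```python
def speedup(n_gpus: int, serial_fraction: float) -> float:
    """Amdahl's law: ideal speedup with n_gpus when serial_fraction of
    the per-frame work cannot be split across GPUs."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_gpus)

for n in (1, 2, 4, 8):
    print(n, round(speedup(n, 0.10), 2))
# With even 10% unsplittable work, 2 GPUs give ~1.8x, 4 give ~3.1x and
# 8 only ~4.7x, which is roughly the wall SLI-style scaling always hit.
```

The better the hardware/driver layer hides the serial coordination work, the smaller that fraction gets, and the more GPUs it becomes worth adding.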

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Alienware AW3225QF (32" 240 Hz OLED)
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, iiyama ProLite XU2793QSU-B6 (27" 1440p 100 Hz)
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


18 minutes ago, porina said:

Let's see where NVLink gets us with the impending 20 series. OK, it doesn't scale beyond 2, but if it allows better scaling with 2, that's a starting point. It's been reported to have about 50 GB/s bidirectional bandwidth, which is comparable to high-speed dual-channel RAM: a massive upgrade from the 1 or 2 GB/s of existing SLI bridges, although still short of the hundreds of GB/s of on-card VRAM.

 

The holy grail of multi-GPU is making it scale without dumping a ton of work on game developers. If that could be solved in hardware somehow, there could be a resurgence of multi-GPU, especially if it scaled well beyond 2.

 

Another parallel is AMD's Ryzen strategy: can you make smaller units and combine them to scale better? Not without its own problems, but it's a starting point.

The thing is that having two linked units of something will never perform as well as a single unit.

That's what happens with multi-CPU.

That's just how electricity works; you can't magically make physics work in your favour.

 

GPUs are already reaching the point where a single one is more powerful than most people need.

For example, the Titan V.

It's not affordable yet, but eventually it will be.

And as you probably already know, you can't even run two Titan Vs together, only one.

Crossfire, SLI, whatever: they're all being phased out.

Outside of triple-A games, most games and programs don't even support multi-GPU.



12 minutes ago, Enderman said:

The thing is that having two linked units of something will never perform as well as a single unit.

That's what happens with multi-CPU.

That's just how electricity works; you can't magically make physics work in your favour.

Electricity does work that way at a basic physics level. Electronics are a different matter, at a higher level, and when things get complicated, as they are here, it's not so simple. I'm not saying it's easy, but I will say it isn't impossible to do things better in hardware than they're done now. GPUs are inherently massively parallel structures; the trick is making them more scalable beyond a single die. Usually that comes down to bandwidth in some way. Maybe something like EMIB, or a future version of it, will allow more separate dies to be combined in a useful manner.

 

IMO there's no such thing as too much power. Many might accept "good enough" at some price point, but I think it's almost open-ended if scaling worked. In a parallel universe where I could buy 4x 1080 Ti and get the performance of 4x 1080 Ti in games, I would, and I'm sure I wouldn't be alone. In this universe, I do agree that SLI/Crossfire today is a pain point; there are more problems with it than benefits. But still, I wouldn't rule out a future solution. We shouldn't let ourselves be held back by the limits of today indefinitely.



13 minutes ago, porina said:

Electricity does work that way at a basic physics level.

Nope.

When you have to transfer a signal from one processor to another, there is far more delay than within the processor itself.

Even within a single processor, it's necessary to engineer a special structure so that everything runs in sync, because at high frequencies one part of the CPU might receive the clock pulse sooner than another part.

Remember a while back when AMD engineered a new clock mesh to get 4 GHz on their CPUs?
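For a rough sense of the scales involved in that delay argument, here's a small sketch. The ~0.5c signal speed and both distances are assumptions on my part, just to show orders of magnitude:

```python
C = 3.0e8            # speed of light, m/s
V_SIGNAL = 0.5 * C   # assumed rough signal speed through interconnect

def delay_ns(distance_m: float) -> float:
    """One-way propagation delay in nanoseconds over distance_m."""
    return distance_m / V_SIGNAL * 1e9

on_die = delay_ns(0.02)        # ~2 cm across a large die
card_to_card = delay_ns(0.10)  # ~10 cm over a bridge between two cards

print(f"on-die: {on_die:.2f} ns, card-to-card: {card_to_card:.2f} ns")
# A 4 GHz clock period is 0.25 ns, so the ~0.13 ns on-die run already
# costs half a cycle, and the ~0.67 ns card-to-card hop costs several;
# that's why clock distribution needs such careful engineering.
```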

 

13 minutes ago, porina said:

I'm not saying it's easy, but I will say it isn't impossible to do things better in hardware than they're done now. GPUs are inherently massively parallel structures; the trick is making them more scalable beyond a single die. Usually that comes down to bandwidth in some way. Maybe something like EMIB, or a future version of it, will allow more separate dies to be combined in a useful manner.

Of course it's possible to get better scaling.

The problem is that nobody is going to put in that work for the handful of games that support it, because it's already being slowly phased out.

 

13 minutes ago, porina said:

IMO there's no such thing as too much power. Many might accept "good enough" at some price point, but I think it's almost open-ended if scaling worked. In a parallel universe where I could buy 4x 1080 Ti and get the performance of 4x 1080 Ti in games, I would, and I'm sure I wouldn't be alone. In this universe, I do agree that SLI/Crossfire today is a pain point; there are more problems with it than benefits. But still, I wouldn't rule out a future solution. We shouldn't let ourselves be held back by the limits of today indefinitely.

Most people have no trouble accepting "good enough"; that's why you don't see everyone with four $10,000 28-core Xeons.

Even with perfect scaling, why would people buy four X GPUs if one Y GPU can do the same job?

Also, most game developers aren't going to waste time coding in multi-GPU support for the seven people who want to spend ten grand on four Y GPUs.



52 minutes ago, Enderman said:

When you have to transfer a signal from one processor to another, there is far more delay than within the processor itself.

Propagation delay can be accounted for.

 

52 minutes ago, Enderman said:

Even within a single processor, it's necessary to engineer a special structure so that everything runs in sync, because at high frequencies one part of the CPU might receive the clock pulse sooner than another part.

Remember a while back when AMD engineered a new clock mesh to get 4 GHz on their CPUs?

There are various ways of solving this. In modern devices, parts can easily run asynchronously where it's advantageous. These aren't ancient times where everything has to be tied to the same clock, or even the same phase of the same clock.

 

52 minutes ago, Enderman said:

Of course it's possible to get better scaling.

The problem is that nobody is going to put in that work for the handful of games that support it, because it's already being slowly phased out.

The point was: if this could be implemented in a largely seamless way at the hardware/driver level, games and other software wouldn't need to know or care about it.

 

52 minutes ago, Enderman said:

Most people have no trouble accepting "good enough"; that's why you don't see everyone with four $10,000 28-core Xeons.

But there are people with $10,000 Xeons. Not everyone needs to buy one for it to be a viable product.

 

52 minutes ago, Enderman said:

Even with perfect scaling, why would people buy four X GPUs if one Y GPU can do the same job?

If we assume my scenario could happen, economics would generally still make mass-market products more affordable per unit of performance than niche ones. Datacenter users may pay a premium for density, but if density isn't a concern, multiple consumer-level devices would probably be more bang for the buck. There may be secondary factors too, like easier cooling by spreading the devices out, or even the old SLI argument of upgrading performance over time.

 

To reiterate, I'm not suggesting any of this will happen, or any time soon. It's a speculative possible future, which is the point of this thread.



37 minutes ago, porina said:

Propagation delay can be accounted for.

 

There are various ways of solving this. In modern devices, parts can easily run asynchronously where it's advantageous. These aren't ancient times where everything has to be tied to the same clock, or even the same phase of the same clock.

Why do you think you need a special sync card for NVIDIA Mosaic when using multiple GPUs?

 

37 minutes ago, porina said:

But there are people with $10,000 Xeons. Not everyone needs to buy one for it to be a viable product.

 

The handful of computer enthusiasts buying those CPUs is nowhere near enough to sustain the product.

It exists because enterprises buy tens of thousands at a time for supercomputers and other professional applications.

And that's exactly my point: multi-GPU is becoming like multi-CPU, server- and enterprise-focused rather than consumer- and gaming-focused.

 

37 minutes ago, porina said:

If we assume my scenario could happen, economics would generally still make mass-market products more affordable per unit of performance than niche ones. Datacenter users may pay a premium for density, but if density isn't a concern, multiple consumer-level devices would probably be more bang for the buck. There may be secondary factors too, like easier cooling by spreading the devices out, or even the old SLI argument of upgrading performance over time.

To reiterate, I'm not suggesting any of this will happen, or any time soon. It's a speculative possible future, which is the point of this thread.

Multi-GPU makes cooling harder, not easier: you need a bigger case, more PCIe spacing on the motherboard, more waterblocks, and so on.

The SLI argument of adding more cards over time is flawed, because you can get a similar or better performance increase by simply selling the old card and buying a newer, more powerful one.

As things are currently going, in the future there will be no need for multi-GPU.

I'm not saying your "speculative future" isn't possible, but it certainly isn't likely.

Multi-GPU will end up being something only rich enthusiasts pay for, as companies focus it on professional applications rather than consumers and gaming.



4 minutes ago, Enderman said:

I'm not saying your "speculative future" isn't possible, but it certainly isn't likely.

It may well never happen, likely or not. I feel your thinking is anchored in what happens today, not in what I'm trying to imagine as a future. Time will tell.

 

Just now, YouSirAreADudeSir said:

Point cloud

I forgot about that... I wonder whatever happened to the company that was making a lot of noise about it a year or so ago.



3 minutes ago, porina said:

I forgot about that... I wonder whatever happened to the company that was making a lot of noise about it a year or so ago.

Probably got bought by one of the bigger companies.

 With all the Trolls, Try Hards, Noobs and Weirdos around here you'd think i'd find SOMEWHERE to fit in!


8 minutes ago, porina said:

It may well never happen, likely or not. I feel your thinking is anchored in what happens today, not in what I'm trying to imagine as a future. Time will tell.

It's anchored in what has happened in the past (multi-CPU) and what is happening right now (3-way and 4-way SLI getting cut, 2-way SLI being cut for the Titan V, and game developers not bothering to implement multi-GPU).

Linus thinks cloud gaming is going to take over, and that people eventually won't need powerful PCs at home, thanks to improvements in internet speed and latency and to how GeForce Now is working so far.

What you think doesn't seem to be based on anything other than "I wish this would happen".



4 hours ago, Enderman said:

It's anchored in what has happened in the past (multi-CPU) and what is happening right now (3-way and 4-way SLI getting cut, 2-way SLI being cut for the Titan V, and game developers not bothering to implement multi-GPU).

The need for multi-CPU shrank as single-socket core counts grew, but there is still a place for multi-socket. Even core counts per die are reaching limits, so AMD has gone the multi-die route, which is just like multi-socket without the actual socket. It hasn't gone away; it's changing form.

 

GPUs are by nature more parallel. The consumer case for more than one is challenging right now, for sure. If a Zen-like approach could be made to work, that could shake things up.

 

4 hours ago, Enderman said:

Linus thinks cloud gaming is going to take over, and that people eventually won't need powerful PCs at home, thanks to improvements in internet speed and latency and to how GeForce Now is working so far.

You were talking about latency earlier; at this scale it really adds up. Talk of server/terminal versus powerful local computers has gone on for decades and will likely continue. Like so many things, it could take off if it's cheap and easy enough, but it will never be able to compete physically with local resources.

 

4 hours ago, Enderman said:

What you think doesn't seem to be based on anything other than "I wish this would happen".

I'm not claiming to see the future; it's one possible future. Could you have predicted the Zen strategy before its release? We don't know where things will go. Scaling has always been a problem in HPC-like uses, and I don't think anyone is going to stop looking at it just because consumers don't need it today.



14 minutes ago, porina said:

The need for multi-CPU shrank as single-socket core counts grew, but there is still a place for multi-socket. Even core counts per die are reaching limits, so AMD has gone the multi-die route, which is just like multi-socket without the actual socket. It hasn't gone away; it's changing form.

That's just an AMD problem; Intel has 64+ core Xeon Phis.

Single-socket core counts grew, which is exactly what's happening with GPUs:

more CUDA cores in a single GPU.

 

14 minutes ago, porina said:

GPUs are by nature more parallel. The consumer case for more than one is challenging right now, for sure. If a Zen-like approach could be made to work, that could shake things up.

 

??

They are just another type of processor.

 

14 minutes ago, porina said:

You were talking about latency earlier; at this scale it really adds up. Talk of server/terminal versus powerful local computers has gone on for decades and will likely continue. Like so many things, it could take off if it's cheap and easy enough, but it will never be able to compete physically with local resources.

The latency I mentioned earlier is the latency of electrical communication between two processors, which needs to be extremely low (nanoseconds) for clock syncing.

That has nothing to do with internet latency.

Linus has played on GeForce Now and said the latency wasn't bad for casual gaming.

As internet connections switch to fiber, latency to a game server will drop.

No human can distinguish a few milliseconds, so if it gets that low, cloud gaming will be perfectly usable even for competitive play.
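As a rough sanity check on those numbers, here's a small sketch. The distance to the server, the fiber signal speed, and the 10 ms processing overhead are all assumed figures, just for scale:

```python
FIBER_SPEED = 2.0e8  # m/s, roughly 2/3 the speed of light in glass

def round_trip_ms(distance_km: float, processing_ms: float = 0.0) -> float:
    """Round trip: light through fiber there and back, plus fixed processing."""
    return 2 * (distance_km * 1e3) / FIBER_SPEED * 1e3 + processing_ms

# Assume a datacenter 100 km away, plus 10 ms for capture, encode,
# decode and display on top of the wire time.
print(round_trip_ms(100, processing_ms=10))
# Propagation alone is only ~1 ms at 100 km; the encode/decode pipeline,
# not the fiber, is what dominates cloud-gaming latency.
```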

 

14 minutes ago, porina said:

I'm not claiming to see the future; it's one possible future. Could you have predicted the Zen strategy before its release? We don't know where things will go. Scaling has always been a problem in HPC-like uses, and I don't think anyone is going to stop looking at it just because consumers don't need it today.

What "Zen strategy" are you talking about?

AMD released better CPUs at a reasonable price point; wow, who couldn't have guessed.

Your "future" still has no basis in anything that has happened or is happening, so it's a guess rather than a hypothesis or an educated prediction.



I'm not really sure, but I'd like to see either Nvidia or AMD come up with a driver/tech that lets people with integrated graphics make use of it. For example, it could work like PhysX cards used to, so you'd get a few extra fps from your dedicated card not having to run PhysX anymore.


I think we need better character models and more convincing textures. In most video games the trees and foliage look super good, but the fine details of humans aren't well done. We need more texture and more true-to-life depictions. I honestly never notice my shadows not being shadowy enough, although ray tracing is neato.


10 hours ago, Enderman said:

And as you probably already know, you can't even run two Titan Vs together, only one.


That's because Nvidia disabled NVLink on the card to push enterprise buyers toward the Quadro GV100. NVLink on those cards was actually designed so multiple GPUs could drive more screen outputs and the like in professional environments. The card wasn't even made for gaming. It was a case of Nvidia being Nvidia, nothing else.


1 minute ago, Motifator said:

That's because Nvidia disabled NVLink on the card to push enterprise buyers toward the Quadro GV100. NVLink on those cards was actually designed so multiple GPUs could drive more screen outputs and the like in professional environments. The card wasn't even made for gaming. It was a case of Nvidia being Nvidia, nothing else.

Yes, I know why Nvidia disabled it; why are you explaining that to me?

Just like all the other Titans, it's basically the Quadro card with consumer drivers and reduced features.



Just now, Enderman said:

Yes, I know why Nvidia disabled it; why are you explaining that to me?

Just like all the other Titans, it's basically the Quadro card with consumer drivers and reduced features.


You talked like the thing was made for SLI gaming; that's why.


Just now, Motifator said:


You talked like the thing was made for SLI gaming; that's why.

Obviously it's not, which is why it doesn't support SLI.

My point is that Nvidia is slowly phasing out SLI, starting with 3-way and 4-way.


