The SLI information guide

D2ultima
5 hours ago, AhmedElsisy said:

Hello, I am new here, and I was asking: is it possible to do 4-way SLI using GTX 1080 Tis?

If you read the guide you'd see that it's not possible

I have finally moved to a desktop. Also my guides are outdated as hell.

 

THE INFORMATION GUIDES: SLI INFORMATION || vRAM INFORMATION || MOBILE i7 CPU INFORMATION || Maybe more someday


Just now, D2ultima said:

If you read the guide you'd see that it's not possible

Unless you use Nvidia Inspector. ThirtyIR has done it already with 4 Pascal GPUs, but he said that it was a royal pain in the ass.

 

4-way SLI is certainly doable, but I wouldn't do it anyways.

RIGZ

Spoiler

Starlight (Current): AMD Ryzen 9 3900X 12-core CPU | EVGA GeForce RTX 2080 Ti Black Edition | Gigabyte X570 Aorus Ultra | Full Custom Loop | 32GB (4x8GB) Dominator Platinum SE Blackout #338/500 | 1TB + 2TB M.2 NVMe PCIe 4.0 SSDs, 480GB SATA 2.5" SSD, 8TB 7200 RPM NAS HDD | EVGA NU Audio | Corsair 900D | Corsair AX1200i | Corsair ML120 2-pack 5x + ML140 2-pack

 

The Storm (Retired): Intel Core i7-5930K | Asus ROG STRIX GeForce GTX 1080 Ti | Asus ROG RAMPAGE V EDITION 10 | EKWB EK-KIT P360 with Hardware Labs Black Ice SR2 Multiport 480 | 32GB (4x8GB) Dominator Platinum SE Blackout #338/500 | 480GB SATA 2.5" SSD + 3TB 5400 RPM NAS HDD + 8TB 7200 RPM NAS HDD | Corsair 900D | Corsair AX1200i + Black/Blue CableMod cables | Corsair ML120 2-pack 2x + NB-BlackSilentPro PL-2 x3

STRONK COOLZ 9000

Spoiler

EK-Quantum Momentum X570 Aorus Master monoblock | EK-FC RTX 2080 + Ti Classic RGB Waterblock and Backplate | EK-XRES 140 D5 PWM Pump/Res Combo | 2x Hardware Labs Black Ice SR2 480 MP and 1x SR2 240 MP | 10X Corsair ML120 PWM fans | A mixture of EK-KIT fittings and EK-Torque STC fittings and adapters | Mayhems 10/13mm clear tubing | Mayhems X1 Eco UV Blue coolant | Bitspower G1/4 Temperature Probe Fitting

DESK TOIS

Spoiler

Glorious Modular Mechanical Keyboard | Glorious Model D Featherweight Mouse | 2x BenQ PD3200Q 32" 1440p IPS displays + BenQ BL3200PT 32" 1440p VA display | Mackie ProFX10v3 USB Mixer + Marantz MPM-1000 Mic | Sennheiser HD 598 SE Headphones | 2x ADAM Audio T5V 5" Powered Studio Monitors + ADAM Audio T10S Powered Studio Subwoofer | Logitech G920 Driving Force Steering Wheel and Pedal Kit + Driving Force Shifter | Logitech C922x 720p 60FPS Webcam | Xbox One Wireless Controller

QUOTES

Spoiler

"So because they didn't give you the results you want, they're biased? You realize that makes you biased, right?" - @App4that

"Brand loyalty/fanboyism is stupid." - Unknown person on these forums

"Assuming kills" - @Moondrelor

"That's not to say that Nvidia is always better, or that AMD isn't worth owning. But the fact remains that this forum is AMD biased." - @App4that

"I'd imagine there's exceptions to this trend - but just going on mine and my acquaintances' purchase history, we've found that budget cards often require you to turn off certain features to get slick performance, even though those technologies are previous gen and should be having a negligible impact" - ace42

"2K" is not 2560 x 1440 


Just now, JurunceNK said:

Unless you use Nvidia Inspector. ThirtyIR has done it already with 4 Pascal GPUs, but he said that it was a royal pain in the ass.

 

4-way SLI is certainly doable, but I wouldn't do it anyways.

It's not that you can't get it working, as you say, but it's pointless. The latency is through the roof, and unlike on Kepler/Maxwell, where using more cards actually worked, forcing more than two Pascal cards in games usually makes the games outright crash.

 

Nvidia is pretty much stabbing SLI in the back while still saying it's a good thing.



  • 3 weeks later...
On 5/30/2017 at 2:18 AM, D2ultima said:

If you read the guide you'd see that it's not possible

But why was it available on the 980 Ti?

Intel Core i7-7700K @ 5.0GHz // NZXT Kraken X62 AIO CPU Cooler // GTX 1080Ti ICX FTW3 in SLI // MSI Z270 Gaming M5 // G.Skill Trident Z RGB @ 4200MHz // 2TB WD Black, Samsung 850 EVO 500GB, Samsung 960 EVO 1TB // EVGA SuperNova G2 850W 80+ Gold Fully Modular // NZXT HUE+ // NZXT AER RGB 140mm // LG 27UD58-B 27.0" 3840x2160 60Hz Monitor


Just now, AhmedElsisy said:

But why was it available on the 980 Ti?

Because Nvidia allowed it then; now they don't. They don't want people using Pascal GPUs beyond 2-way, because their driver team is focused on adding crap we don't need instead of fixing and improving their existing stuff.



Just now, D2ultima said:

Because Nvidia allowed it then; now they don't. They don't want people using Pascal GPUs beyond 2-way, because their driver team is focused on adding crap we don't need instead of fixing and improving their existing stuff.

oh....... 

ok thanks



  • 1 month later...

@D2ultima Hi! I got 2 different GTX 970s

 

1 Asus GTX 970 strix

       Serial no.: EBC0YZ265487

1 EVGA GTX 970 SC

       Serial no.: 04G-P4-2974-KR


19 hours ago, NeedSomePractice said:

@D2ultima Hi! I got 2 different GTX 970s

 

1 Asus GTX 970 strix

       Serial no.: EBC0YZ265487

1 EVGA GTX 970 SC

       Serial no.: 04G-P4-2974-KR

Can't say. I only know the compatibility list that's in the guide. But my suggestion is to get rid of them and buy a 1080 or 1080 Ti or something if you can afford it. It'll serve you better.



  • 1 month later...

You do explain what SLI is, but I have yet to see someone explain, through screenshots or otherwise, how we actually ACTIVATE it in games that insist on not recognizing more than one graphics card, like GTA 5. From what I've seen so far, it seems to be kept secret for some awkward, unknown reason I don't understand. Nvidia would only profit on card sales, so I don't understand why it's hidden at all. Can you make a guide?

 

Or if you would like to make it private you can talk with me here: http://steamcommunity.com/id/pulselicious

 

 

 


47 minutes ago, Pulselicious said:

You do explain what SLI is, but I have yet to see someone explain, through screenshots or otherwise, how we actually ACTIVATE it in games that insist on not recognizing more than one graphics card, like GTA 5. From what I've seen so far, it seems to be kept secret for some awkward, unknown reason I don't understand. Nvidia would only profit on card sales, so I don't understand why it's hidden at all. Can you make a guide?

 

Or if you would like to make it private you can talk with me here: http://steamcommunity.com/id/pulselicious

GTA V uses SLI by default, even on pre-launch drivers.

 

Otherwise, you can open the Nvidia Control Panel and change the SLI setting there. I did not include pictures because it is foolproof:

Screenshot3566.png

 

For anything beyond that, you use Nvidia Profile Inspector and input the values corresponding to the game. I linked a page explaining that and how to do it, but here is how it looks for your convenience:

Screenshot3567.png

 



Overall you said SLI is getting worse and worse, so should I really go for an SLI build? You probably wouldn't recommend it, right?


8 hours ago, Pulselicious said:

Overall you said SLI is getting worse and worse, so should I really go for an SLI build? You probably wouldn't recommend it, right?

If a GTX 1080Ti or Titan Xp is not enough for your needs, get a second card.

 

If you must have a laptop, and a GTX 1080 is not enough for your needs, get a second card.

 

Otherwise, I just see it as too much headache for too little reward, especially since I consider it mostly worthless at 3.0 x8/x8, which is what the majority of people will run it at. The need for a $500 CPU (last-gen Intel, current-gen AMD) or $1000 CPU (current-gen Intel, which needs a delid to be worth anything) and a $200+ board just to get 3.0 x16/x16, with cores that aren't necessary for gaming on the AMD or current-gen Intel side of things, is a lot for someone to invest in a system just for gaming.

 

Not that it's bad to invest that much; most people just don't consider PCs and gaming important enough to justify it.

 

Even if a single GPU is a lot weaker than two GPUs a step or two lower, the fact that the single card has its full power 100% of the time, while the multi-GPU setup beats it only about 40% of the time (default Nvidia profiles, no fiddling) to 70% of the time (fiddling with NVPI), is something pretty heavy to consider... and I can guarantee you that most people will not be the fiddly kind.
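That trade-off can be put into rough numbers. A back-of-envelope sketch in Python: the 40%/70% engagement rates are from the post above, but the relative-performance figures (`1.35x` when SLI scales, `0.80x` when it falls back to one lesser card) are hypothetical stand-ins, not benchmarks:

```python
# Back-of-envelope expected-performance comparison for the argument above.
# The 40%/70% engagement rates come from the post; the relative-performance
# figures below are hypothetical stand-ins, not measured data.
single = 1.00            # baseline: the single faster card, at full power 100% of the time
sli_when_scaling = 1.35  # hypothetical: two lesser cards when SLI engages well
sli_when_not = 0.80      # hypothetical: fallback to one lesser card when SLI doesn't engage

def expected_sli(engage_rate: float) -> float:
    """Average relative performance of the two-card setup across a game library."""
    return engage_rate * sli_when_scaling + (1 - engage_rate) * sli_when_not

print(f"default profiles (engages 40%): {expected_sli(0.40):.2f}x vs {single:.2f}x single")
print(f"with NVPI fiddling (engages 70%): {expected_sli(0.70):.2f}x vs {single:.2f}x single")
```

Under these assumed numbers, the two-card setup only barely edges out the single card on average unless you do the NVPI fiddling, which is exactly the point being made.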



On 9/20/2017 at 4:26 PM, Pulselicious said:

Overall you said SLI is getting worse and worse, so should I really go for an SLI build? You probably wouldn't recommend it, right?

SLI is good if the performance of two cards is higher than that of the current fastest single-GPU card out there. Also, if you want to make the most of your investment, you'll need to be willing to tinker with driver settings and do some basic troubleshooting if a game doesn't scale well.

 

I've been using multi-GPU systems since 2008, and not once have I regretted it, but that's only because I was willing to do the legwork to get things running smoothly. If you don't want to spend time tweaking settings and want a plug-and-play experience, then SLI is not for you.

 

Gaming Rig
Spoiler

CPU: Intel i7-6850k @ 4.2GHz

GPU: 2x FE GTX 1080Ti

Memory: 16GB PNY Anarchy DDR4 3200MHz

Motherboard: ASRock X99 Extreme 4

 

Encoding Rig
Spoiler

CPU: Ryzen 7 1700 @ 3.7GHz

GPU: GTX 1050

Memory: 8GB Crucial Ballistix DDR4 2133MHz

Motherboard: Gigabyte AB350M-DS3H

 


  • 2 weeks later...
On 9/21/2017 at 5:26 AM, Pulselicious said:

Overall you said SLI is getting worse and worse, so should I really go for an SLI build? You probably wouldn't recommend it, right?

 

It's not worse but there are diminishing returns with more cards.

 

These aren't exact numbers, but the concept is something like this:

 

1 card = 100% performance

2 cards = 170% performance

3 cards = 210% performance

4 cards = 230% performance

 

This is because there is computational overhead in splitting work across four cards, which at some point causes performance gains to degrade.
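The diminishing returns in those rough figures become obvious if you look at the marginal gain each extra card adds. A quick sketch using the post's own (approximate) totals:

```python
# Rough total-scaling figures from the post (multiples of one-card performance).
scaling = {1: 1.00, 2: 1.70, 3: 2.10, 4: 2.30}

def marginal_gain(n: int) -> float:
    """Extra performance the Nth card contributes, relative to a single card."""
    return scaling[n] - scaling[n - 1]

for n in (2, 3, 4):
    print(f"card {n} adds {marginal_gain(n):.0%}")
```

Each additional card contributes less than the one before it (roughly 70%, then 40%, then 20% under these figures), which is why a fourth card is rarely worth its cost.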

 


3 hours ago, xentropa said:

 

It's not worse but there are diminishing returns with more cards.

 

These aren't exact numbers, but the concept is something like this:

 

1 card = 100% performance

2 cards = 170% performance

3 cards = 210% performance

4 cards = 230% performance

 

This is because there is computational overhead in splitting work across four cards, which at some point causes performance gains to degrade.

 

Something like that, though it used to be better. SLI apparently scaled 70 to 95% in a lot of titles in the past. I remember when it was near double for me as well. Firestrike still shows a 95% scaling rate.

 

Games now are all in the 20% range, though.



On 10/8/2017 at 11:11 PM, D2ultima said:

Games now are all in the 20% range though.

Oftentimes, the cause of poor scaling is using TAA or a bad implementation of HBAO+. Case in point: Fallout 4. FO4 with HBAO+ and TAA enabled at 5K drops each of my 1080 Tis down to 70% usage and the frame rate down to about 40-50 FPS. Witcher 3 also experiences similar issues with in-game AA enabled. In both cases, though, there are ways to substitute whatever setting is breaking SLI.

 

I can't tell you how many times I've seen "reputable" hardware reviewers benchmarking with TAA enabled, even though it's been documented that it breaks SLI. IIRC, many developers are taking a second look at SFR, since on paper that would fix the problem at hand.

 


 


1 minute ago, Frankenburger said:

Oftentimes, the cause of poor scaling is using TAA or a bad implementation of HBAO+. Case in point: Fallout 4. FO4 with HBAO+ and TAA enabled at 5K drops each of my 1080 Tis down to 70% usage and the frame rate down to about 40-50 FPS. Witcher 3 also experiences similar issues with in-game AA enabled. In both cases, though, there's often a way to substitute whatever setting is breaking SLI.

 

I can't tell you how many times I've seen "reputable" hardware reviewers benchmarking with TAA enabled, even though it's been documented that it breaks SLI. IIRC, many developers are taking a second look at SFR, since on paper that would fix the problem at hand.

Dropping to 70% utilization might very well be a broken implementation of something, but utilization percentage means nothing for scaling.

 

If you look at my bandwidth-issue section, you'll see in particular a shot comparing GPU utilization between x16/x8 and x16/x16 in R6 Siege, where both cards show extremely high utilization but scaling is off the charts on x16/x16 versus x16/x8. There are all sorts of similar issues like that all over. TAA adds a lot of CPU and GPU load when implemented well, and kills scaling. It needs a LOT of bandwidth between cards, more than a simple HB bridge will provide. Not "can" provide, mind; WILL provide, because Nvidia killed the excess bandwidth provided by the HB bridge and diverted it into frame-pacing improvements. So your games don't scale any better at 4K and below than with a simple LED bridge, but they're a little smoother due to more consistent frame times. The need for x16/x16 still exists.

 

Since you don't have x16/x16 for your 1080 Ti SLI due to the 5820K, it's possible that your bad performance is a result of that. Maybe get a cheap 6850K for around $275 (they go for about that price a lot lately) and see if and how much of a difference it makes. Are you using an LED or HB bridge at 5K? I hope it's not a flex bridge.

 

Also, "reputable" hardware reviewers don't even get the bandwidth issue with SLI. I've seen Digital Foundry try 8K gaming with a couple of 1080 Tis... and a 6700K. Like, no. Please delete the video.



On 21.09.2017 at 8:26 AM, D2ultima said:

Otherwise, I just see it as too much headache for too little reward, especially since I consider it mostly worthless at 3.0 x8/x8, which is what the majority of people will run it at. The need for a $500 CPU (last-gen Intel, current-gen AMD) or $1000 CPU (current-gen Intel, which needs a delid to be worth anything) and a $200+ board just to get 3.0 x16/x16, with cores that aren't necessary for gaming on the AMD or current-gen Intel side of things, is a lot for someone to invest in a system just for gaming.

TBH I am surprised you managed to sneak that kind of info onto this forum without much discussion, when everywhere else you go people tend to shout that x16/x16 isn't giving you anything, despite proof otherwise.

 

I guess next time I'm arguing about PCIe lanes, I will just link this thread instead of some videos I have seen, which were dubbed "unreliable" because they were done by non-English channels.

CPU: i7 6950X  |  Motherboard: Asus Rampage V ed. 10  |  RAM: 32 GB Corsair Dominator Platinum Special Edition 3200 MHz (CL14)  |  GPUs: 2x Asus GTX 1080ti SLI 

Storage: Samsung 960 EVO 1 TB M.2 NVME  |  PSU: In Win SIV 1065W 

Cooling: Custom LC 2 x 360mm EK Radiators | EK D5 Pump | EK 250 Reservoir | EK RVE10 Monoblock | EK GPU Blocks & Backplates | Alphacool Fittings & Connectors | Alphacool Glass Tubing

Case: In Win Tou 2.0  |  Display: Alienware AW3418DW  |  Sound: Woo Audio WA8 Eclipse + Focal Utopia Headphones


17 minutes ago, D2ultima said:

Dropping to 70% utilization might very well be a broken implementation of something, but utilization percentage means nothing for scaling.

On the contrary, not having ~95% GPU usage usually means there's a bottleneck, an incompatible rendering technique in use, or a bad SLI profile, all of which affect scaling. That's not to say SLI can't scale poorly even when both GPUs are in the mid-to-high 90s, but low GPU usage is generally the most common indicator of poor scaling.

 

17 minutes ago, D2ultima said:

Since you don't have x16/x16 for your 1080Ti SLI due to the 5820K, it's possible that your bad performance is as a result of that.

I think you misunderstood my previous post. I'm not complaining about poor performance. All I'm saying is that 20% scaling is usually caused by faulty settings, like using a rendering effect that's incompatible with SLI. 20% scaling isn't going to be caused solely by an x16/x8 SLI configuration.

 


 


11 hours ago, Lathlaer said:

TBH I am surprised you managed to sneak that kind of info onto this forum without much discussion, when everywhere else you go people tend to shout that x16/x16 isn't giving you anything, despite proof otherwise.

 

I guess next time I'm arguing about PCIe lanes, I will just link this thread instead of some videos I have seen, which were dubbed "unreliable" because they were done by non-English channels.

People still fight it, don't worry, haha. I even have people telling me they think it's all G-Sync issues or other random things. Someone looked at the proof I sent and said they didn't believe it, because Gamers Nexus did that stupid video testing synthetics and games whose development started before PCIe 3.0 was available, which would have meant most were built around 2.0 x8/x8 on Intel mainstream... which, as you can imagine, means 3.0 x8/x8 is more than enough for software from that era. More bandwidth matters when scaling is bad, not when scaling is good, because bad scaling is usually a bandwidth problem, unless the engine is directly anti-multi-GPU and the devs need to fix something (Dark Souls 3 scaled negatively with multi-GPU until the first DLC's launch; the patch that carried DLC1 fixed SLI scaling, and Nvidia's drivers weren't updated when this happened on my old system).

 

10 hours ago, Frankenburger said:

On the contrary, not having ~95% GPU usage usually means there's a bottleneck, an incompatible rendering technique in use, or a bad SLI profile, all of which affect scaling. That's not to say SLI can't scale poorly even when both GPUs are in the mid-to-high 90s, but low GPU usage is generally the most common indicator of poor scaling.

Oh it certainly means that there is a bottleneck somewhere; incompatible SLI profile or not. But what I meant was, 99% util on each card constantly doesn't mean good scaling either, which you just understood anyway. However,

 

10 hours ago, Frankenburger said:

I think you misunderstood my previous post. I'm not complaining about poor performance. All I'm saying is that 20% scaling is usually caused by faulty settings, like using a rendering effect that's incompatible with SLI. 20% scaling isn't going to be caused solely by an x16/x8 SLI configuration.

That's... the entire problem, though. SLI was not, and should not be right now, something you have to tip-toe around settings for. TAA is anti-SLI in nature because it requires data from previous frames, and this kills scaling because you need bandwidth. Once again, look at my "The Bandwidth Issue" section. There's a user who bumped scaling significantly in Witcher 3 at 4K by using a PLX chip on a mainstream (Haswell) motherboard. There's another user who jumped from 41fps to 78fps in R6 Siege at 4K by going from x16/x8 to x16/x16, with TAA on.

 

So yes, 20% scaling is pretty much solely a bandwidth problem for the large majority of titles. Use an LED bridge and x16/x16 for 4K and under; for 5K+ I'd recommend an HB bridge, because it can provide more bandwidth and it's possible Nvidia has coded it so that the extra bandwidth kicks in above 4K (since HB is recommended for 5K+ per their official charts).

 

The problem is that a lot of game tech these days is basically lazy/cheap implementations that are easy to use but need too much bandwidth, or are just outright incompatible with AFR. There is little reason (except TAA) to use tech that is anti-AFR... it's just easier to implement, so devs do it that way and let users salt. It wouldn't be a problem if optimization were a thing; console games are sometimes optimized even five times better than PC titles. And those are usually the titles that need multi-GPU the most, too, due to how badly they can run on PC.



2 hours ago, D2ultima said:

The problem is that a lot of game tech these days is basically lazy/cheap implementations that are easy to use but need too much bandwidth, or are just outright incompatible with AFR. There is little reason (except TAA) to use tech that is anti-AFR... it's just easier to implement, so devs do it that way and let users salt. It wouldn't be a problem if optimization were a thing; console games are sometimes optimized even five times better than PC titles. And those are usually the titles that need multi-GPU the most, too, due to how badly they can run on PC.

There's been talk that developers are taking a second look at SFR, which should in theory fix a lot of the issues AFR has with engines that rely on previously drawn frames. In the past, the work required to get SFR working right wasn't worth it, since AFR worked much better out of the gate. I think once either Epic or Unity Technologies finds a feasible way to implement SFR, multi-GPU will become a lot more valid. Epic has already made some huge strides with UE4 when it comes to multi-GPU systems, and I have no reason to believe they'll stop progressing.

 

2 hours ago, D2ultima said:

That's.. the entire problem, though. SLI was not, and should not be right now, something you have to tip-toe around settings for. TAA is anti-SLI in nature because it requires data from the previous frames, and this kills scaling because you need bandwidth.

It also kills SLI because of how AFR works (which I'm sure you know, though I'm also sure some don't know this yet). Some implementations of TAA are better than others, though. For the time being, TAA has more growing pains than simply being incompatible with AFR; many people don't like how much it blurs. I personally prefer supersampling with FXAA, and perhaps moderate amounts of MSAA, over TAA when possible.

 


 


7 hours ago, Frankenburger said:

There's been talk that developers are taking a second look at SFR, which should in theory fix a lot of the issues AFR has with engines that rely on previously drawn frames. In the past, the work required to get SFR working right wasn't worth it, since AFR worked much better out of the gate. I think once either Epic or Unity Technologies finds a feasible way to implement SFR, multi-GPU will become a lot more valid. Epic has already made some huge strides with UE4 when it comes to multi-GPU systems, and I have no reason to believe they'll stop progressing.

Well, see, that's the OTHER issue. SFR can show a line depicting where one card is rendering and where the other is, and has something akin to screen tearing when something on one portion ends up on the other; think about spinning 90 degrees up/down or left/right. Further, it requires even MORE bandwidth than AFR does. In fact, the entire reason four-way SLI was so bad was that it was SFR of AFR: two cards ran half the screen in AFR mode, the other two ran the other half in AFR mode, and then the first pair would run "SFR" with the second pair, as if they were two larger GPUs instead of four. Scaling was actually so bad that three-way SLI was better in a lot of games, despite two-way SLI having the best scaling. Theoretically, without a bandwidth bottleneck, four-way scaling should equal two-way, since it's two two-way AFR pairs (up to 95% scaling) rather than four cards in straight AFR.

 

People were pushing for SFR with DirectX 12, but we didn't need DirectX 12 to do it. It was possible the whole time; it was just totally inferior because it needed too much bandwidth. Bandwidth is the primary issue in everything right now: PCIe SSDs; the maximum lane count the chipset can provide (Thunderbolt 3 can't even use its rated 40Gb/s unless connected directly to the CPU on an x8 bus; both AMD and Intel have a maximum chipset-to-CPU link of PCIe 3.0 x4, which is only about 32Gb/s, plus multi-PCIe-SSD setups, etc.); the inter-GPU bandwidth (which NVLink could have fixed... just saying); AMD's entire CPU crutch of needing more inter-CCX bandwidth to reduce latency; and so on.
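The roughly 32Gb/s figure for that PCIe 3.0 x4 chipset link checks out once line encoding is accounted for. A small sketch (the per-generation rates and encoding overheads are from the PCIe specs; the function is just an illustration):

```python
# Usable PCIe bandwidth per link: transfer rate (GT/s) x encoding efficiency x lanes.
# Gen 1/2 use 8b/10b encoding (80% efficient); Gen 3+ use 128b/130b (~98.5%).
def pcie_gbps(gen: int, lanes: int) -> float:
    rate = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}[gen]  # GT/s per lane
    efficiency = 8 / 10 if gen <= 2 else 128 / 130
    return rate * efficiency * lanes

print(f"PCIe 3.0 x4  ~ {pcie_gbps(3, 4):.1f} Gb/s  (the chipset<->CPU link above)")
print(f"PCIe 3.0 x16 ~ {pcie_gbps(3, 16):.1f} Gb/s")
```

The x4 link works out to about 31.5Gb/s of usable bandwidth, which is why Thunderbolt 3's rated 40Gb/s can't fit through the chipset.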

 

Tech is pushing forward and bandwidth is holding everything back. This is why I'm an advocate of XDMA and a hater of AMD's older CrossfireX solution: XDMA is such a brilliant design that even a PCIe 2.0 x8/x8 setup provides more inter-GPU bandwidth than PCIe 3.0 x8/x8 plus an HB bridge on Nvidia cards. There is a latency cost, but AMD seems to have dealt with it well enough, and Nvidia could surely handle it better than AMD can, so I don't know why they bothered with bridges. They could have done bridgeless x8/x8/x8 on PCIe 3.0, and tri-SLI would probably have blown past the competition entirely, maybe even pushing three-way scaling up to two-way levels. That's just speculation, of course, but I'm simply extrapolating from what I already know to make it make sense in my head.

 

8 hours ago, Frankenburger said:

It also kills SLI because of how AFR works (which I'm sure you know, though I'm also sure some don't know this yet). Some implementations of TAA are better than others, though. For the time being, TAA has more growing pains than simply being incompatible with AFR; many people don't like how much it blurs. I personally prefer supersampling with FXAA, and perhaps moderate amounts of MSAA, over TAA when possible.

Yeah, TAA can be done superbly or terribly, but that's true of everything built on Unreal Engine 4. There are UE4 games that run like the god of IT itself blessed them, and then there are games that run like garbage. Dead by Daylight and The Culling are good examples of games that run terribly (or they used to; I don't know if optimization happened recently). On the other hand, Tekken 7 can pretty much run on a toaster with a screen attached. PUBG is laughably optimized, and Unreal Tournament 4 you could max out at 120fps on a midrange Kepler card at 1080p, last I checked.

 

CryEngine 3 can also be heavily optimized, as Prey proved. It's just a question of whether devs put in the work... either they don't care, the publishers don't care, or both.



11 minutes ago, D2ultima said:

Well, see, that's the OTHER issue. SFR can show a line depicting where one card is rendering and where the other is, and has something akin to screen tearing when something on one portion ends up on the other. Further, it requires even MORE bandwidth than AFR does.

I know. That's the reason why it was shelved. There was no reason for Nvidia to improve on SFR because AFR was superior in every way. But in recent times, there's a legitimate reason to start looking into SFR once again. It needs to be improved upon, not just performance and tearing, but compatibility as well. Even though we can switch to SFR, and force "SFR Friendly" bits with Inspector, it simply doesn't work in most games. If developers can lower the bandwidth and fix the tear down the middle of the screen, then SFR will be a good alternative to AFR in games that rely on previously drawn frame data. Having each GPU process one half of the frame makes sense on paper. It just needs to be properly implemented.

 


 


6 hours ago, Frankenburger said:

I know. That's the reason it was shelved: AFR was superior in every way, so Nvidia had no reason to improve SFR. But these days there's a legitimate reason to start looking into SFR again. It needs improvement, not just in performance and tearing but in compatibility as well. Even though we can switch to SFR and force "SFR Friendly" bits with Inspector, it simply doesn't work in most games. If developers can lower the bandwidth requirement and fix the tear down the middle of the screen, SFR will be a good alternative to AFR in games that rely on previously drawn frame data. Having each GPU render one half of the frame makes sense on paper; it just needs a proper implementation.

But you're asking for a paradox. You want devs to lower bandwidth requirements so SFR can be used, but heavy bandwidth requirements are exactly the problem with AFR, and SFR needs even more.

 

SFR is unnecessary if we have the bandwidth, or if AFR-friendly tech is used. SFR requires more bandwidth than AFR even if it works better with TAA or allows SMAA T2x, so it's a lose-lose situation. What we need is for devs to optimize, and to stop using AFR-unfriendly tech just because it's easier/cheaper to implement; in other words, make games for PC first and then port them to consoles. Consoles never get the short end of the optimization stick, yet if you ran the PC versions of these titles on console-spec hardware, you'd be at sub-low settings for "1080p" and "4K", barely holding 30fps. That imbalance makes no logical sense; it's all on devs to fix things.

 

That, or we get significantly more bandwidth. If we could skip the extremely late PCIe 4.0 spec and jump straight to 5.0, a PCIe 5.0 x4 link would match today's PCIe 3.0 x16, which means x8/x8 for SLI would be more than enough, and a 4-lane link from chipset to CPU would no longer be saturated by Thunderbolt 3, a single NVMe drive, an eGPU, etc. Bandwidth is key right now: the tech of the last 3 years has exploded, and the interfaces we have are either poorly designed (the HB bridge, NVMe on the M.2 NGFF interface, etc) or simply lack the bandwidth to keep up (the 4-lane link between chipset and CPU, no readily available 10-gigabit Ethernet, no gigabit Wi-Fi, etc). Optimization in games will help greatly, but for the rest of these services we just need more bandwidth.
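The arithmetic behind that claim checks out: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, and each later generation roughly doubles the per-lane rate, so four 5.0 lanes carry the same payload as sixteen 3.0 lanes. A quick back-of-the-envelope sketch (per-direction figures, ignoring protocol overhead beyond line encoding):

```python
# Back-of-the-envelope PCIe bandwidth per direction, in GB/s.
# Gens 3-5 all use 128b/130b line encoding; the transfer rate
# doubles each generation (8, 16, 32 GT/s per lane).
GT_PER_S = {3: 8.0, 4: 16.0, 5: 32.0}
ENCODING = 128 / 130  # payload fraction after line encoding

def pcie_gbs(gen, lanes):
    # GT/s * encoding efficiency gives Gb/s per lane; divide by 8 for GB/s
    return GT_PER_S[gen] * ENCODING * lanes / 8

print(round(pcie_gbs(3, 16), 2))  # 15.75 GB/s
print(round(pcie_gbs(5, 4), 2))   # 15.75 GB/s, same as 3.0 x16
```

Real throughput is a bit lower once packet and protocol overhead are counted, but the x16-at-gen-3 equals x4-at-gen-5 equivalence holds.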



  • 3 months later...
On 12/10/2017 at 12:53 PM, D2ultima said:

But you're asking for a paradox. You want devs to lower bandwidth requirements so SFR can be used, but heavy bandwidth requirements are exactly the problem with AFR, and SFR needs even more.

 

SFR is unnecessary if we have the bandwidth, or if AFR-friendly tech is used. SFR requires more bandwidth than AFR even if it works better with TAA or allows SMAA T2x, so it's a lose-lose situation. What we need is for devs to optimize, and to stop using AFR-unfriendly tech just because it's easier/cheaper to implement; in other words, make games for PC first and then port them to consoles. Consoles never get the short end of the optimization stick, yet if you ran the PC versions of these titles on console-spec hardware, you'd be at sub-low settings for "1080p" and "4K", barely holding 30fps. That imbalance makes no logical sense; it's all on devs to fix things.

 

That, or we get significantly more bandwidth. If we could skip the extremely late PCIe 4.0 spec and jump straight to 5.0, a PCIe 5.0 x4 link would match today's PCIe 3.0 x16, which means x8/x8 for SLI would be more than enough, and a 4-lane link from chipset to CPU would no longer be saturated by Thunderbolt 3, a single NVMe drive, an eGPU, etc. Bandwidth is key right now: the tech of the last 3 years has exploded, and the interfaces we have are either poorly designed (the HB bridge, NVMe on the M.2 NGFF interface, etc) or simply lack the bandwidth to keep up (the 4-lane link between chipset and CPU, no readily available 10-gigabit Ethernet, no gigabit Wi-Fi, etc). Optimization in games will help greatly, but for the rest of these services we just need more bandwidth.

Four years later and you're still a legend for making this guide. I hope one day you can make a Crossfire guide.

