
The SLI information guide

D2ultima

 

 

Oh hey, you mentioned it!



It's been there for a considerable amount of time.

 

Maybe, maybe it wasn't and you just added it. We'll never know.




There is a "last edited" in the main page. It's prior to your comment today, so it must have been there before.



  • 1 month later...

There is a "last edited" in the main page. It's prior to your comment today, so it must have been there before.

I know that #16 has mentioned it in passing, but CPU driver overhead is really beginning to be a huge issue for SLI, both as graphics card improvements outstrip CPU improvements and as games themselves become more CPU-intensive.

If you'd like, I can link quite a few comparison videos, but the gist of it is that even for 970 SLI, going from an i5 to an i7 gives serious performance gains (and quite large reductions in stutter).

For flagship-class GPUs, I cannot recommend anything less than Haswell-E (buying new, anyway; X79 should still be better than Z97, and possibly Z170, for SLI).

(GM200 SLI at 4K high/ultra settings shows huge improvements when comparing same-clocked Haswell-E hex-cores and Haswell quad-core i7s.)


 

 

 



I know games are becoming much more CPU-intensive. Too many people still feel an i5-4460 can run every game in the universe.

I would need to know which games these are. If it's a game that already devours CPU, like GTA V or Dying Light, then it would make more sense that doubling GPU power would bite the CPU. What you would need to check, however, is mid-range SLI vs a heavily OC'd top-end single GPU. If you're checking two 970s in SLI, then check them at stock vs a 980Ti at 1500MHz or so.

Current flagship-class GPUs like 980Tis or Fury X cards move quite a lot of data. In this instance, X79 and X99 might have an advantage in sheer PCI-e lane count. To make sure the lane count is not the reason, one should try disabling two cores (and their threads) on the Haswell-E chip, running it quad-core + HT vs quad-core + HT, and see if the enthusiast platform in itself still gives better performance. I know that 4K resolutions benefit from more bandwidth being available, up to a point. While it isn't a vRAM issue so to speak, the cards DO render a lot of pixels. I've also heard that people who've hacked SLI into Unreal Engine 4 games have noticed that TAA is a MASSIVE performance drop in multi-GPU as opposed to single GPU, and that users on PCI-e 3.0 slots had a huge advantage... though I didn't ask for verification of this from the user who told me, so I can't state it as official here or anything.
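(If anyone wants to check the bandwidth angle themselves rather than take my word for it: NVIDIA's NVML library, which ships with the driver, can sample per-GPU PCI-e throughput while a game runs. A minimal sketch, assuming an NVIDIA card, a recent enough driver, and the NVML header from the CUDA toolkit; error handling trimmed:)

```cpp
// Minimal NVML sketch: sample PCI-e TX/RX throughput per GPU.
// Link against nvml.lib (ships with the NVIDIA driver / CUDA toolkit).
#include <nvml.h>
#include <cstdio>

int main() {
    if (nvmlInit() != NVML_SUCCESS) return 1;

    unsigned int count = 0;
    nvmlDeviceGetCount(&count);

    for (unsigned int i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        if (nvmlDeviceGetHandleByIndex(i, &dev) != NVML_SUCCESS) continue;

        unsigned int tx = 0, rx = 0; // reported in KB/s over a ~20ms window
        nvmlDeviceGetPcieThroughput(dev, NVML_PCIE_UTIL_TX_BYTES, &tx);
        nvmlDeviceGetPcieThroughput(dev, NVML_PCIE_UTIL_RX_BYTES, &rx);

        printf("GPU %u: PCI-e TX %u KB/s, RX %u KB/s\n", i, tx, rx);
    }

    nvmlShutdown();
    return 0;
}
```

(Run that in a loop during gameplay on each card and you'd see whether the 4K case actually pushes more over the bus.)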




I have no problem recommending a 4460 for a budget single-GPU build, but SLI is just too much CPU overhead.

Also, the PCI-e lane difference has been tested by lots of people, and 8x vs 16x is within margin of error. So that isn't a factor.

Here are some sample games at 4K ultra. I don't have a 4790k to test with myself anymore, so I can't help here other than linking some sources.

https://www.youtube.com/watch?v=ZABt8bHgDHo

Copied from an earlier post I made:

970 SLI comparison; the benchmarks are inconveniently hidden in here, but every game shows significant improvements from the i7. The link starts with GTA V, and the other timestamps are listed below.

https://youtu.be/ew5MIgXfHL0?t=299

At 6:48 you can see Witcher 3, 8:25 is BF4, and 10:02 is Crysis 3.

Here is a great example of classic CPU bottlenecking of SLI systems (note the low GPU load, and the large differences in load compared to the original video I linked by the same person). (Near-matching-frequency OC'd i5/i7 and OC'd 980 SLI.)

https://www.youtube.com/watch?v=pDmuv8Q45iY

So really, I guess the statement is: even if you claim it's really a CPU limitation and not "indicative of SLI", the fact that I can show significant performance reductions even with 970 SLI when using a 4690k instead of a 4790k should explain why, as I said from the beginning, "I 100% would never ever recommend SLI on an i5."


 

 

 



Okay, I've got a lot to explain now xD. Firstly, an i5-4460 for a budget build? Yes. For a 980Ti? No. For SLI? No. For anything beyond maybe an R9 380 or GTX 960? No. Squeeze out some more cash, especially if new AAA titles are what you're after. 2015's AAA-title CPU usage was off the charts, and as you say, we don't have strong enough CPUs to compensate.

The PCI-e lane testing I was considering was purely for 4K.

See, this post covers games that are known to use more than 4 cores/8 threads, and in the case of Crysis, a specific scenario where CPU utilization is high (the place with the cannon and all the grass in the field). What I'm trying to get at is that even single-GPU setups in games not at a heavy GPU bottleneck are likely to benefit from the CPU; it's not just SLI. Hence why I was suggesting two stock 970s versus an overclocked 980Ti. If possible, test them with 3DMark to confirm their GPU scores are rather close together, and then run them through the same scenarios. The i5 should still produce fewer frames on average with the single 980Ti than the i7 in those games tested.

 

On the flip side, look at Tomb Raider, which had basically the same FPS.

 

When you're at a GPU bottleneck, you notice the CPU affecting your frames a lot less. And what people fail to realize is that just because a GPU is at "99%" doesn't mean it's at maximum load. That figure is an estimation, and the reason why some games at 100% heat up a video card less than others at 100%: it's how the GPU is being used. Also, 4K ultra is a rather GPU-bottlenecked situation. If it were at 1080p you might notice more of a gap.
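(To illustrate the point: the "GPU %" monitoring tools show is, per NVML's own definition, just the percentage of the last sample period during which any work at all was executing; it says nothing about how fully the GPU's units were occupied by that work. A minimal sketch of reading it, assuming an NVIDIA card and NVML; error handling trimmed:)

```cpp
// Minimal NVML sketch: read the "GPU %" figure monitoring tools show.
// utilization.gpu = percent of the last sample period in which ANY work
// was executing -- not how fully the GPU's units were actually occupied.
#include <nvml.h>
#include <cstdio>

int main() {
    if (nvmlInit() != NVML_SUCCESS) return 1;

    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
        nvmlUtilization_t util;
        if (nvmlDeviceGetUtilizationRates(dev, &util) == NVML_SUCCESS) {
            printf("GPU: %u%%  memory controller: %u%%\n", util.gpu, util.memory);
        }
    }

    nvmlShutdown();
    return 0;
}
```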

 

But I always tell people to get an i7 for gaming if they want to play new games, especially at high FPS. People tend to go on the whole "omg, i5 is king, nothing more than an i5-4460 is ever needed, omg, stfu" etc. and not listen. Even if your i7 isn't being fully utilized by the game, the rest of your PC's programs have the hyperthreads to use, so you essentially get a bit more performance for the game anyway, as they aren't really sharing performance with the game... kind of. Hyperthreading is a bit difficult to explain, but I think you understand what I'm trying to convey. The reason I didn't leave it as a recommendation or "my thoughts" in the guide is because the guide is more focused on what SLI itself does. It's rare that SLI will, say... double your CPU utilization (though Killing Floor 2 is single-thread-bound, and SLI has it use an entire second thread for the second GPU, so it does in effect nearly double utilization there).
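(For anyone who wants to see the physical-vs-logical split hyperthreading creates, Windows will tell you directly. A minimal Win32 sketch, error handling trimmed; on a 4790K it should report 4 cores / 8 logical processors, while a 4690K reports 4/4:)

```cpp
// Minimal Win32 sketch: count physical cores vs logical processors.
// A hyperthreaded chip (e.g. a 4790K) reports two logical processors per core.
#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    DWORD len = 0;
    GetLogicalProcessorInformation(nullptr, &len); // ask for required size

    std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(
        len / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
    if (!GetLogicalProcessorInformation(info.data(), &len)) return 1;

    int cores = 0, logical = 0;
    for (const auto& e : info) {
        if (e.Relationship == RelationProcessorCore) {
            ++cores;
            // each set bit in the mask is one logical processor on this core
            for (ULONG_PTR m = e.ProcessorMask; m; m >>= 1) logical += m & 1;
        }
    }
    printf("%d physical cores, %d logical processors\n", cores, logical);
    return 0;
}
```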




I am well aware of all that you said. I don't have 970 SLI, or a non-X99 rig, to test with currently.

I listed the first video because, if you take a look at it, even in basically the most GPU-intensive situation on the market today, an X99 CPU shows huge FPS increases over a 4790k at the same very high clocks. In other words: don't use flagship-class GPUs in SLI without the -E 6+ core platform, and you'd better OC it, lol.


 

 

 



Yeah. The love for the mainstream platform is pretty strong, though. I don't know why people pick up mainstream equipment and then shove two 980Tis or something at it.



Is it possible to run 4-way SLI with four dual-GPU graphics cards?




No. The maximum in any situation is 4 GPUs. You can use more for F@H or other computation if you so desire, but SLI will NOT work.


 

 

 


  • 2 weeks later...

I'm currently on a GTX 970, and getting a 980Ti has proven difficult since I can barely sell the 970 for a decent amount. Those with SLI experience: do the problems outweigh the benefits for you? Is it worth the hassle? I'm on a 1080p, 144Hz monitor, and I play a bunch of games and stream as well.


3 hours ago, Aulowry said:

-snip-

I would not suggest SLI for you if you're a streamer with a 970. If you play a lot of new games, you're also going to find you're often not using your second card. It might even be a better idea to save up and jump to Pascal or Polaris rather than jumping at a 980Ti.



So, apparently the SLI guide is completely broken, and the edit button doesn't work (I just see a blank page). I'm waiting a while to see if it gets fixed; if not, I might have to simply copy/paste the entire page and re-do it.

If a staff member could help (maybe restore it from a cache so it's not broken), that'd be amazing. @Godlygamer23 maybe?



  • 1 month later...

Okay, the guide has been re-done and is now fixed. Nested spoilers do not work, and they broke the guide multiple times while I was trying to fix it.



  • 4 weeks later...
On 26/11/2015 at 9:51 PM, D2ultima said:

I'd suggest the R9 380 4GB wholeheartedly. SLI nothing under a 980Ti if you can help it.

As of right now, that is good advice. In general, however, it isn't. Historically, the second-from-top tier, which was based on the same GPU as the absolute top, offered the best cost/performance ratio, and the most sensible high-performance purchase was two of those. So 290 Crossfire made much more sense than the much more expensive but only marginally better 290X Crossfire. Same with 780s in SLI vs 780Tis in SLI. And before the 980Ti/Titan X were released, 970 SLI was barely worse than 980 SLI.

The difference right now is that the 390X and 980 never saw enough of a price drop to make any kind of sense. It's kind of unfortunate that the 390X is still £70 more expensive than the price the 290X -- the exact same GPU -- reached in response to the GTX 970's aggressive price point. Of course, you could still make the case that two 980Tis or two Furies make more sense than two Titan Xs or Fury Xs (or even one of either), so I guess the landscape hasn't changed all that much.

 

On 19/01/2016 at 9:11 PM, D2ultima said:

-snip-

 

I agree with getting an i7 for gaming. There's a difference between what is needed and what is optimal. Do people ever suggest putting 4GB of RAM with that i5? It's very rare to find a game that uses more than 4GB, let alone requires it. Screw the fact that you need your OS and other background applications running concurrently. And god help you if your AV decides to do a system scan while you're gaming, if you figured an i3 would be enough.

For me it's not so much about what you need as what your budget allows for. Do I need a 5960X and two 980Tis? Probably not. But if my budget allows for it, then why the hell not?


Advantage: Multi GPU looks baller. 

 

Disadvantage: Cabling can be a PITA.

 

xD



23 minutes ago, othertomperson said:

-snip-

I keep updating the guide as things go along. I initially had a listing where I said that if your next upgrade is only one tier higher, get a second GPU instead of an upgrade, and only jump two or more tiers (i.e. 770 to 780Ti, 7970 to R9 290X, etc.) if you wanted to upgrade without multi-GPU... but this is no longer the case. Put simply, the devs, both nVidia and game devs, do not care about SLI at the present time. Maxwell was DESIGNED to be anti-SLI, no matter what anybody can tell me. They removed AA features like CSAA from the cards (including the high CSAA modes that SLI could use) and added MFAA... but MFAA doesn't work in SLI. Then they made the voltage behaviour that is unstable in its own right, but even worse in SLI, and then had the cheek to add notes to their drivers' release notes stating that trying to match the voltages in SLI does not benefit the consumer at all... which is a lie, since custom vBIOSes for Maxwell cards that hold voltage constant are RIDICULOUSLY beneficial, both in stability and in overclockability. Then the forcing of GPUs to maximum clocks, which doesn't always work in a multi-GPU setup with Maxwell and Maxwell alone? Recipe for disaster. In addition, DSR came to Maxwell first, but DSR + SLI + G-Sync doesn't work, which means not only can you not force higher AA in the drivers, and you lack MFAA for performance, you can't even use DSR if you want to take advantage of G-Sync.

And then you have their drivers. Their drivers since GTA V's Game Ready driver have been awful. The TDRs have *mostly* stopped, but crashes and visual corruption have plagued the drivers for quite some time, and the last three drivers from them have been BSOD-happy depending on what you're doing. It's garbage, and they honestly don't care anymore. If they did, they'd take their time and fix things instead of pumping out broken drivers left and right.

And on top of that, the popularity of Unity and Unreal Engine 4 means many games are coming out that won't touch a second card except with quite a bit of hacking. Ark: Survival Evolved and Survival of the Fittest can't use it without hacking in support (and even then it doesn't work perfectly); Street Fighter 5 is unoptimized for its visuals (especially for the fact that it's on UE4), and you can't use multi-GPU if you've got an older setup like mine; Unreal Tournament 4 is the same way, etc. When your card maker isn't trying, your cards have the functionality mostly as an afterthought ("yeah, people still use that"), and many games refuse to use it, or use it awfully if they do (Assassin's Creed Syndicate at launch sometimes even had negative scaling in SLI)? Then why would I ever suggest getting two mid-range cards over one powerful one? Even if the two mid-range cards have more raw grunt in a good scenario, most of the time the single stronger card will be the better experience. And I haven't even touched on games like Killing Floor 2, with its ridiculous stutter in windowed modes as long as SLI is turned on, or the stutter it got when I turned on PhysX while SLI was running. It's made me more than once wish I could trade my two 780Ms for a single 980M and be done with it (which I could do if I had the money); I'm not afraid to admit that.

The 390X in USD is just over $400, which honestly probably isn't worth it over the 390 at about $100 less, give or take, but it's definitely somewhat closer to the 980 in performance for about $130 less (the 980 in USD is still $550 without being on sale). The R9 Fury is the SAME PRICE as the 980, so there's no reason to ever buy a 980 at its proper price... in the US. I have yet to figure out why AMD is so much more expensive in non-US countries (it's not that both teams aren't more expensive; just that nVidia's cards are somewhat comparable... the price in pounds for a 980 would translate to about $600 USD, but the price in pounds for an R9 Fury is closer to a 980Ti's, which is ridiculous). In the UK I could completely understand your statements about the price, unfortunately. I really wish it weren't so.

You understand where I come from with the i7. Most games won't max it, but y'know... you eventually will find one that does, and you'll probably be glad for it. And for the rest, well, you just have extra power. That's always a good thing. I do somewhat agree with going by what your budget allows, but I still think a system should be maximized to the person's needs within a budget. If their budget is $10,000, for example, but they want single-GPU gaming and they're not going to overclock at all and they want the machine fast, there's little reason to go over an i7-4790K and a single 980Ti with some good RAM and some SSDs. On the other hand, if somebody tells me they want a minimum of 144fps in all games and they want to livestream and their budget is $10,000... well, they're getting a 5960X with something like a Kraken X61 or a custom loop, a 980Ti or two or three, all with custom vBIOSes, and 32GB of DDR4-3000 14-14-14-31 RAM in quad channel, with the CPU OC'd to a minimum of 4.5GHz. Leh we see ah game dare to be under 144fps then nah. Leh we see.

I always get the best possible choice out of a budget, but I never overspend for anyone. No need to get them something they're not going to use.



7 hours ago, D2ultima said:

[lots of text]

One of the things I'm looking forward to with DX12 is the possibility of API-based multi-GPU support. Of course it relies on support from the devs, but not only could you link up your old 680 (for instance) with your shiny new Pascal (or Polaris!) GPU, but if you did decide to go with dual GTX 1070s or R9 490s, which will probably make the most sense at launch before the Big Boys are released, there is a very real possibility (though not a given) that not needing specific driver support, plus full utilisation of your GPUs as independent compute devices, could have this massively out-performing SLI or Crossfire.

I'm also looking forward to people benchmarking the R9 480 vs the GTX 480 for shits and giggles, but that's neither here nor there :P


2 hours ago, othertomperson said:

-snip-

Mmm. I've been watching DirectX 12 titles for quite some time. What I've gathered is that while it indeed has great potential, nobody cares to put in the time to make use of it. I've seen plenty of benchmarks where DX12 comes out ahead, but every single DX12-only title that has come through the UWP so far has had absolutely abysmal performance, with a GTX 970 on average basically performing like an Xbox One in Quantum Break specifically. Rise of the Tomb Raider got benefits on PCs with lower-end CPUs, but nVidia/i7 gamers in particular actually saw anywhere from no improvement to a downgrade, and it released without an SLI profile too. I am not sure if it ever got one. DX12 can also use multiple GPUs in two ways. Implicit multi-adapter is basically SLI and CrossfireX, letting the driver handle the load distribution. Explicit multi-adapter is where you could use whatever cards you want, and the game itself is simply coded to use more than one. I may have them switched; if I do, let me know. Either way, the one I've listed as explicit seems to perform better than implicit on the whole. But since it needs the devs to actually care and optimize, it's a crapshoot.
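(For reference, the explicit path starts with the game enumerating every adapter itself instead of letting the driver hide them behind one device. A bare-bones sketch of that first step, assuming the standard DXGI/D3D12 headers and linking against dxgi.lib and d3d12.lib; everything after this -- splitting the frame, copying between cards -- is on the developer, which is exactly why it needs the devs to care:)

```cpp
// Minimal sketch of the first step of DX12 explicit multi-adapter:
// the game enumerates every GPU itself and creates a device per adapter.
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue; // skip WARP

        // Any D3D12-capable card can join in -- the adapters need not match,
        // which is what would let an old 680 work alongside a newer GPU.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"Adapter %u usable: %s\n", i, desc.Description);
        }
    }
    return 0;
}
```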

 

Honestly, right now, as far as I can see, Vulkan and DX12 are straight downgrades for all but maybe 5% of the titles they're announced for, and if titles don't have a DX11 counterpart, they simply perform like garbage in general. It all boils down to the same problem we've had the entire time: if people would take the time to optimize for PC (or even build for PC and port to consoles after), we would have games running maybe four, five times better than they are right now. DX12 might make things better, with higher draw call ceilings and less CPU usage, but honestly... DX12 with the poor PC support we have now is just a disaster in performance. Not to mention the artificial locking to Windows 10.

I would love to see an R9 480 compared to a GTX 480 for shits and giggles too =D.



32 minutes ago, D2ultima said:

-snip-

I think some of it does come from the fact that DX12 is incredibly new. Whenever we have a new console release (apart from the current gen, which has basically been DX11 x86 PCs so far), it always takes some time for performance and optimisations to peak. Looking at the quality of ports now compared to 2011, it is much better than it has been. I too have heard stories of DX12 being particularly difficult to optimise for -- in specific reference to async compute, too, which leaves me doubtful of its usefulness, at least in the short term. However, I think after a while we will see greater utilisation and optimisation for DX12.

I'm baffled as to how the UWP support, and particularly Quantum Break, has been so bad. Microsoft has locked this down because they're trying to sell an OS upgrade on the back of it to people who really don't want one, and they're apparently going out of their way to make it even less appealing?


1 hour ago, othertomperson said:

-snip-

The newness doesn't matter much; the coding is how people have coded for consoles from the get-go, in general. And I'd actually beg to differ: while games in 2011 didn't have, say, borderless windowed mode or all the PC options we like, they ran fairly decently. I hated unoptimization then, but I couldn't call it out nearly as much as I can now. Since the current-gen consoles launched, I've noticed it's like devs are going all sorts of crazy. First was vRAM usage: nearly every non-Japanese game from about 2014 onwards, looking about as good as a decent 2012 title did on PC, suddenly requires 3GB+ of vRAM to turn up its textures. The texture resolution has skyrocketed, but the quality of the textures has generally remained the same. I keep harking back to Prototype 2's PC release as to what's possible without just tossing tessellation at everything until it looks decent, murdering GPUs needlessly, and what good textures look like. Next was CPU usage for most 2015 titles: devouring i7 chips whole with games like Dying Light, GTA V, Witcher 3, and anything on the Frostbite 3 engine in general. It's stupid, because any Haswell-or-later Core i3 is better than what the X1 and PS4 allot to devs (only about 6 of their 8 low-IPC, low-power, low-frequency CPU cores per game), but sometimes they won't even give a PC the same experience, especially for 60fps console titles. 2016, I don't even know what this year is going to be. But yeah, I think PC ports started getting much better in 2012 and 2013, and then they just went on a landslide. We have lots of graphics options now, and even some decent PC settings like FoV sliders and borderless windowed, but they really mean little if the game runs like poo or is buggy as hell.

 

Async compute in itself isn't really much of an issue. Maxwell cards from nVidia can handle 31 compute queues + 1 graphics render queue. AMD cards, Hawaii and Fiji in particular, can handle 64 queues. Their lower-end cards, as far as I know, can't handle even as much as Maxwell can. The Ashes of the Singularity demo/benchmark for DX12 simply requested more than 31 queues at once, and Maxwell cards just rolled over and died. I don't see it as anything special, but I'm glad nVidia's "current-gen graphics rendering first, everything else can suck it" approach to GPUs, while touting preparedness for the future, got a solid blow to the ego. It's the same thing as when a game uses GameWorks and literally EVERYTHING is tessellated (fur, hair, god rays, the environment, etc.) and then AMD cards (and Kepler and Fermi nVidia cards) chug along, because AMD (and older nVidia) can't tessellate as well as Maxwell can. Don't worry about async compute being any sort of issue for the near future at least, whichever vendor you choose.
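(For the curious, DX12 exposes this directly: the game creates a compute queue alongside its graphics queue and submits work to both, and it's then down to the hardware scheduler how much of that it can actually run concurrently -- the 31+1 vs 64 figures above. A minimal sketch, assuming a valid ID3D12Device from earlier; error handling trimmed:)

```cpp
// Minimal sketch: the graphics queue plus one async compute queue in D3D12.
// How many such queues the hardware can actually keep in flight is the
// Maxwell 31+1 vs Hawaii/Fiji 64 distinction discussed above.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Assumes `device` was created earlier (see the adapter-enumeration sketch).
bool CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue) {
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics render queue

    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // async compute queue

    return SUCCEEDED(device->CreateCommandQueue(&gfxDesc,
                         IID_PPV_ARGS(&gfxQueue)))
        && SUCCEEDED(device->CreateCommandQueue(&cmpDesc,
                         IID_PPV_ARGS(&computeQueue)));
}
```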

 

As for UWP support, it's because Micro$haft is basically following a certain path. They want to get everybody onto their platform so they can have control over PCs and make money by gathering and selling data and/or other products. Instead of making their platform good, they are putting their time and money into advertising and into manufacturing as many "reasons to upgrade" as possible: games you want, updates you want, support, etc. They're cutting support for Windows 7 and 8.1 short to push people onto 10. Businesses NEED support for the OSes they run; when it fully runs out, they start upgrading as a matter of necessity rather than choice. What happens when extended support gets cut short a few years? Gotta upgrade. They've refused to add support for CPUs newer than Skylake to any OS other than 10, which means if you buy newer machines from Dell or HP etc. for a business, and they use Kaby Lake or later, or AMD's Zen, then you need to be on Windows 10 for proper support. Same for people buying new machines for gaming. What next? Let's get games on the platform that people want but don't want to buy Xbox Ones for. Could they work on Win 7 and 8.1? Yup! All you'd need to do is backport WDDM 2.0 into those OSes. Even if the OS kernel for 7 can't handle WDDM 2.0, the kernel for 8.1 most definitely can. What's stopping them? They want people on 10, so they refuse.

This is the issue with it all. It doesn't matter if everything runs like crap and the platform is locked down; their design is that you won't have a choice if you want certain things, and thus you're eventually (and rather soon) going to have to upgrade. The better method, for the consumer, would be to improve the platform in such a way that it becomes so attractive that people want to upgrade. That got many businesses and users onto Windows 7 real fast compared to other OS launches. But they don't care about their consumers, as they've proven. It's the same thing I say about the UWP: if you are so eager to make it "better" for gamers, and are "working hard" to fix such issues as disabling vsync, supporting exclusive fullscreen, allowing applications to hook into it, etc... why did you design it without those features in the first place? Did you have absolutely no idea what gamers wanted or needed from the platform? And if so, what business do you have even trying to sell that platform?

 

Sorry, I write books.



  • 3 weeks later...

Great Guide! :D

\/ heuehuehe

On 14/08/2014 at 4:37 AM, D2ultima said:

Hi everyone. I originally wrote this guide over at the kbmod forums, but as it turns out that forum is as dead as Aeris in FF7.

