
Extra Monitors DO Hurt Your Gaming Performance

Plouffe

As many here have already mentioned, this video feels incomplete without a testing phase covering different refresh rates. What also makes it incomplete is that only high-end hardware was used for the testing. Sure, Linus did mention the impact would be even bigger on lesser hardware, but just how much bigger?

 

Having a 1440p 144Hz main monitor with a 1080p 144Hz on the side, driven by an RTX 3060 Ti/RTX 3070 class of GPU, should be a pretty common config.

 

Why not test those and find out exactly how much performance is affected? I sometimes get tired of findings like these being limited to high-end hardware that most people don't have.


16 hours ago, Nystemy said:

However, here comes the good old "what about if one has two GPUs?"

Like one GPU for the primary monitor, and a second one for the ancillary ones.

^ This. I used to have my second monitor connected to the onboard GPU of my i5, which felt noticeably better back when I was still running a GTX 760.

Now that Ryzen 7000 has onboard graphics across the board, this would have been an interesting comparison.

 

 

 

 

 

 


15 hours ago, themrsbusta said:

If you have your iGPU enabled and plug your second monitor into it, the performance drop is nonexistent.

 

Also, this is the video description: [screenshot of the video description]

Edit: Never mind, I mixed two things together; I was thinking of USB-driven monitors.

But then you run the monitor off the CPU, which can be very harsh on performance with slower CPUs, and you can get serious microstutters on monitors driven that way; I've seen it myself. Much, much better to use the dedicated GPU if possible.

PC Setup: 

HYTE Y60 White/Black + Custom ColdZero ventilation sidepanel

Intel Core i7-10700K + Corsair Hydro Series H100x

G.SKILL TridentZ RGB 32GB (F4-3600C16Q-32GTZR)

ASUS ROG STRIX RTX 3080Ti OC LC

ASUS ROG STRIX Z490-G GAMING (Wi-Fi)

Samsung EVO Plus 1TB

Samsung EVO Plus 1TB

Crucial MX500 2TB

Crucial MX300 1TB

Corsair HX1200i

 

Peripherals: 

Samsung Odyssey Neo G9 G95NC 57"

Samsung Odyssey Neo G7 32"

ASUS ROG Harpe Ace Aim Lab Edition Wireless

ASUS ROG Claymore II Wireless

ASUS ROG Sheath BLK LTD

Corsair SP2500

Beyerdynamic TYGR 300R + FiiO K7 DAC/AMP

RØDE VideoMic II + Elgato WAVE Mic Arm

 

Racing SIM Setup: 

Sim-Lab GT1 EVO Sim Racing Cockpit + Sim-Lab GT1 EVO Single Screen holder

Svive Racing D1 Seat

Samsung Odyssey G9 49"

Simagic Alpha Mini

Simagic GT4 (Dual Clutch)

CSL Elite Pedals V2

Logitech K400 Plus


What'd also be cool to cover is the advantages/disadvantages of DisplayPort daisy chaining. I love using it, but I do notice additional screen tearing and occasional flickering with it.


11 hours ago, Zodiark1593 said:

Not necessarily, depending on the setup. When doing this on my own desktop, for example, the iGPU only acts as a pass through for the dGPU. You get full 3D performance regardless of monitor, but the dGPU is still handling both screens. 
 

Though my desktop is going on 8 years old, so I’m unsure if this has changed on recent platforms. 
 

The additional load for multi-display is most likely due to fill rate, and memory bandwidth. Redrawing multiple layers of windows at 4K is probably relatively expensive too, owing to overdraw.

 

I personally use a 10th-gen i7 with a 6700 XT; my main monitor is a 1440p ultrawide and the second is a 1080p in portrait mode, which runs off the iGPU. According to Task Manager, the iGPU is the one being used by the apps on that second monitor. I also see a "glitch" when I drag a video between monitors, since the load gets handed off between GPUs, so I do believe the hardware generation and the OS make a difference.

 

I read on Reddit about someone who dedicated their iGPU to their web browser, so regardless of which monitor it was on, they could be sure the dGPU wasn't doing the video decoding. They were able to assign the load that way once they had their second monitor plugged into the motherboard.

Record holder for Firestrike, Firestrike Extreme and Firestrike Ultra for his hardware

Top 100 for TimeSpy and Top 25 for Timespy Extreme

 

Intel i7 10700 || 64GB Kingston Predator RGB || Asus H470i Strix || MSI RX 6700XT Merc X2 OC || Corsair MP600 500GB ||  WD Blue SN550 1TB || 500GB Samsung 860 EVO || EVGA 550 GM || EK-Classic 115X aRGB CPU block - Corsair XR5 240mm RAD - Alphacool GPU Block - DarkSide 240mm external rad || Lian Li Q58 || 2x Cooler Master ARGB 120MM + 2x Noctua  Redux 1700RPM 120MM 


Something I haven't seen addressed on this one... Nvidia video upscaling:

https://blogs.nvidia.com/blog/2023/02/28/rtx-video-super-resolution/

Any chance this was left enabled for these tests? Also, I very much like the community suggestion of re-testing with the 2nd/3rd/4th monitor(s) on the iGPU. That's how my setup is configured, and I think it would be an interesting data point.


13 hours ago, Kikorusan said:

I read somewhere that plugging your 2nd monitor into the MoBo will use your processor’s integrated graphics instead of the dedicated GPU, which would then not affect the GPU's performance. Is this true? And if so, is there a setting you have to change to enable it? Would the downside be higher temps and a possible CPU slowdown?

I'm also interested in this. It has a very practical application (second monitor goes WHERE?)

I wouldn't worry about the extra power draw to the CPU if you're gaming. 

Intel's CPUs, for example, might draw around 70W while gaming, which is well below their peak (250W+) power draw. 
As far as temps are concerned... lower is better but we've had examples of CPUs running at 100C in laptops for 10 years now. 
People definitely freaked out about CPUs going to 50 or 60C back in the early 2000s but as far as I'm aware temps don't really matter much anymore outside of edge cases where you're REALLY pushing things. 1-3W more on a different part of the CPU won't really hurt the overall performance. 

3900x | 32GB RAM | RTX 2080

1.5TB Optane P4800X | 2TB Micron 1100 SSD | 16TB NAS w/ 10Gbe
QN90A | Polk R200, ELAC OW4.2, PB12-NSD, SB1000, HD800
 


57 minutes ago, cmndr said:

I'm also interested in this. It has a very practical application (second monitor goes WHERE?)

I wouldn't worry about the extra power draw to the CPU if you're gaming. 

Intel's CPUs, for example, might draw around 70W while gaming, which is well below their peak (250W+) power draw. 
As far as temps are concerned... lower is better but we've had examples of CPUs running at 100C in laptops for 10 years now. 
People definitely freaked out about CPUs going to 50 or 60C back in the early 2000s but as far as I'm aware temps don't really matter much anymore outside of edge cases where you're REALLY pushing things. 1-3W more on a different part of the CPU won't really hurt the overall performance. 

The temperature sensor locations back then were also somewhat different, and they tended to be slower to measure temperature increases, design software wasn't yet able to estimate where the hotspots would likely be, etc.

 

Also, two physical PCIe GPUs in the same system would also be interesting to see.


17 hours ago, Zodiark1593 said:

Not necessarily, depending on the setup. When doing this on my own desktop, for example, the iGPU only acts as a pass through for the dGPU. You get full 3D performance regardless of monitor, but the dGPU is still handling both screens. 
 

Though my desktop is going on 8 years old, so I’m unsure if this has changed on recent platforms. 
 

The additional load for multi-display is most likely due to fill rate, and memory bandwidth. Redrawing multiple layers of windows at 4K is probably relatively expensive too, owing to overdraw. 
 

I wonder what the performance difference would be with 4 1080P screens, vs 1 4K screen split 4 ways. 

Change Chrome on Windows from "Performance" to "Power Saving"; that's how I run a Vega 8 for video playback and an RX 6400 for gaming.

https://www.reddit.com/r/AyyMD/comments/14pqs94/tutorial_hybrid_gpus_for_people_that_has/
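
If you'd rather script that per-app preference than click through Windows' Graphics settings, it appears to end up as a string value under HKCU\Software\Microsoft\DirectX\UserGpuPreferences. Here's a minimal Python sketch; the Chrome path and the exact value format are assumptions from my own machine, so double-check before relying on it:

# Sketch: set a per-app GPU preference on Windows.
# Assumes the UserGpuPreferences key that Settings > Graphics appears to write to;
# verify the key name and value format on your own build first.
import winreg

APP = r"C:\Program Files\Google\Chrome\Application\chrome.exe"  # assumed install path
PREF = "GpuPreference=1;"  # 1 = power saving (iGPU), 2 = high performance (dGPU)

key = winreg.CreateKey(winreg.HKEY_CURRENT_USER,
                       r"Software\Microsoft\DirectX\UserGpuPreferences")
winreg.SetValueEx(key, APP, 0, winreg.REG_SZ, PREF)
winreg.CloseKey(key)
print(f"Set {APP} -> {PREF}")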

Made In Brazil 🇧🇷


1 hour ago, ImorallySourcedElectrons said:

The temperature sensor locations back then were also somewhat different, and they tended to be slower to measure temperature increases, design software wasn't yet able to estimate where the hotspots would likely be, etc.

 

Also, two physical PCIe GPUs in the same system would also be interesting to see.

Temperature sensor location isn't going to create a 50C difference.
The main thing going on here is that we're pumping 2-4x the power through a chip the same size AND that chips are designed to ramp up to max temp quickly and stay there.
 

Interesting fact - my uncle actually DESIGNED temperature sensors while working at Intel around that time period. 
He takes his NDA seriously and I don't know anything juicy or non-publicly known. 

 
 Here's an interesting video on the matter (he says MUCH more than my uncle ever did to me):

 

 

3900x | 32GB RAM | RTX 2080

1.5TB Optane P4800X | 2TB Micron 1100 SSD | 16TB NAS w/ 10Gbe
QN90A | Polk R200, ELAC OW4.2, PB12-NSD, SB1000, HD800
 


1 hour ago, cmndr said:

Temperature sensor location isn't going to create a 50C difference.
The main thing going on here is that we're pumping 2-4x the power through a chip the same size AND that chips are designed to ramp up to max temp quickly and stay there.
 

Interesting fact - my uncle actually DESIGNED temperature sensors while working at Intel around that time period. 
He takes his NDA seriously and I don't know anything juicy or non-publicly known. 

 
 Here's an interesting video on the matter (he says MUCH more than my uncle ever did to me):

 

 

You'd be quite surprised! Back in the 90s and early 2000s CPUs sometimes didn't even have an on-die temperature sensor; it was actually part of the socket, if they had one at all. To avoid getting into too many details, this is the sort of progression we've been seeing in terms of temperature sensor positioning (red circle indicates the position of the sensor):

[image: diagram of temperature sensor positioning across generations, with a red circle marking the sensor]

The in-between arrangement mostly came from that weird period when we sometimes had CPUs as plug-in cards, and from some of the budget CPUs where they'd put a diode or thermistor on the fanout interposer (though I'm not sure if AMD or Intel ever did that themselves; they weren't the only ones in town back then).

 

But to get back to the temperature measurements: package temperature measured on the fanout interposer can be 50°C below the junction temperature, because quite often the cooling solution on flip-chip style devices is really inefficient at transferring heat to the substrate (the fanout interposer in this case). The die doesn't quite sit against the interposer, and the spacing, combined with the relatively poor area coverage of the pillars/balls/..., means it kind of acts as a heat break. More thermally conductive underfills have been developed to help transfer more heat into the substrate, but there's only so much you can do without introducing metal (and hence electrically conductive) particles. You're also facing a significant amount of thermal lag due to this heat break. So there genuinely is a large temperature difference between different parts of the CPU, and these also have to be kept within sane limits to prevent excessive thermal expansion from damaging the interconnects or the die itself. That's why you have so many different temperature measurements: TPackage, TJunction, ... It's also part of the reason why they don't make the dies much thinner: silicon becomes really fragile once you approach 100 to 150 micron thickness, and stays that way until you've thinned it down past about 60-70 micron. And anything below half a millimetre is in fact quite nasty to handle, especially for a larger device, even though thinning further would make thermal management easier.
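
As a rough lumped-element illustration of how big that junction-to-substrate gap can get, here's some toy arithmetic (both numbers are invented purely to show the delta-T = heat flow × thermal resistance relation, not measured values):

# Toy numbers only: the temperature drop across a thermal path equals the heat
# flowing through that path times its thermal resistance.
power_into_substrate_w = 10.0       # assume only a small share of the power leaks "downward"
r_die_to_substrate_c_per_w = 5.0    # assume poor coupling through the bumps + underfill
delta_t_c = power_into_substrate_w * r_die_to_substrate_c_per_w
print(f"Junction sits roughly {delta_t_c:.0f} °C above the substrate-side reading")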


But to get back on topic: another part of the problem is the temperature sensor design itself. A larger sensor volume takes longer to heat up and can't be as close to the source of the heat (the junction), so you have to account for some "lag" if you have a big sensor that's further away. Part of the reason they moved the sensor on-die (and then put multiple sensors in various spots) is so they can run those higher powers safely. Because no matter what you do, temperature measurements are relatively slow and quite noisy, your CPU's performance is limited not just by the thermal solution, but also by how good the temperature sensors are and how good your device modelling actually is. If you know the amount of power you're dumping into a part of the chip, and you know how much your temperature sensor lags behind, you can get a lot closer to the danger zone than if you just mindlessly read out the sensor value.
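
To make that lag compensation idea concrete, here's a tiny sketch (the time constant and readings are invented, purely illustrative; a real controller does this with proper filtering and per-sensor calibration). The sensor is modelled as a first-order lag, so you can add back tau times the slope of the reading:

def estimate_junction_temp(readings_c, sample_period_s=0.01, tau_s=0.05):
    """Lag-compensated temperature estimate from slow sensor samples."""
    estimates = []
    for prev, curr in zip(readings_c, readings_c[1:]):
        slope = (curr - prev) / sample_period_s   # dTs/dt in °C per second
        estimates.append(curr + tau_s * slope)    # Tj ≈ Ts + tau * dTs/dt
    return estimates

# During a sudden load step the sensor only shows 70 -> 78 °C, but the estimate runs ahead of it:
print(estimate_junction_temp([70.0, 72.0, 74.5, 76.5, 78.0]))  # [82.0, 87.0, 86.5, 85.5]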

 

Another challenge was that in earlier NMOS, PMOS, and CMOS technologies it was really difficult to include analog circuits on-die due to the limited capabilities of the design tools. Some folks later figured out you could intentionally use "parasitic diodes" as temperature sensors, bring them out on a pin, and put the analog circuitry on the PCB to turn that into a temperature value. In the 90s, simulation and design tools started to catch up, and we started seeing fully integrated temperature sensors (diodes/PTATs/diffusion resistors/...) with accompanying readout hardware. But around the mid 2000s we ran into an interesting issue: the scaling of CMOS nodes didn't shrink the analog circuitry (in fact, it sometimes even grew). It wasn't uncommon around 2010 for the temperature sensors plus accompanying analog circuitry to take up a couple percent of the die area. A lot of work has been done over the last couple of years to shrink these down again, and you can find quite a few papers from Intel's engineers on that topic. Some also feature really exotic designs, and I'm kind of curious which are actually being used: PTAT-based oscillators, intentionally skewed ring oscillators, intentionally "crappy" bandgap voltage references that drift along with temperature, and many other solutions have been proposed.
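
For anyone wondering how a "parasitic diode" gives you a temperature at all: bias the same junction at two different currents and the Vbe difference is proportional to absolute temperature (PTAT), dVbe = (n·k·T/q)·ln(I2/I1). A quick sketch with textbook constants; the ideality factor of 1 is an assumption, real sensors calibrate it out:

import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def temperature_from_delta_vbe(delta_vbe_volts, current_ratio=8.0, ideality=1.0):
    """Junction temperature in kelvin from the Vbe difference at two bias currents."""
    return delta_vbe_volts * Q_E / (ideality * K_B * math.log(current_ratio))

# ~53.8 mV of delta-Vbe at an 8:1 current ratio corresponds to roughly 300 K (27 °C)
print(temperature_from_delta_vbe(53.8e-3))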

 

Edit: Forgot to add one more thing: another part is the improvement in design software; chip design software has gotten better at estimating where the power dissipation on your chip will be. (Well, it's more a combination of multiple software tools in the places where I've worked, but let's not dive into that insanity.) So you can kind of predict what sort of temperature distribution you're going to see. You can then tie this into the layout process, which is usually more of a guided automation: you design the critical parts, you provide constraints, you indicate where you'd like to see particular things, etc., and then the software tries to get as close as possible.

 

So yeah, there's a massive difference between temperature sensing in the early 2000s and now. Back then a package temperature sensor was often really a sensor that directly read the package temperature; now it's either some reference value or sometimes even a model-derived temperature. To get an idea of what the latter implies: if you have three temperature sensors, know what material everything is made out of, and make a couple of assumptions, you can usually get a pretty good idea of the temperature at every point in between those three sensors. You can also do this with more variables and go for full-fledged modelling, which leads to the current trend of "virtual sensors" derived from other sensor data.
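
As a toy illustration of that "three sensors, estimate everything in between" idea (nothing like a production thermal model; the sensor positions and readings below are made up): treat the three sensors as corners of a triangle and interpolate linearly with barycentric weights.

def barycentric_temperature(p, a, b, c, t_a, t_b, t_c):
    """Linearly interpolate the temperature at point p inside triangle (a, b, c)."""
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, a, b, c
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * t_a + w2 * t_b + w3 * t_c

# Sensors at three corners of a 10 mm x 10 mm die, readings in °C:
print(barycentric_temperature((4.0, 6.0), (0, 0), (10, 0), (5, 10), 62.0, 85.0, 74.0))  # 71.5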

 

And even trying to explain these things tends to cause kneejerk reactions, which is why we usually hide behind NDAs IRL. Luckily, if someone is annoying online you can just ignore them. 😄 


I have been waiting for years for a video on this topic! I drive 5 monitors (all 1080p) on two dGPUs with my i9-9900K: a primary GTX 1650 and a second GTX 1060 6GB to run everything. I use the 1650 for the two main displays I stream content on, and the other three run off the 1060 for music playback or anything else I want to see in the moment. I micro-solder and look at various schematics a lot, so the extra screen real estate is so handy when it comes to finding traces on board views or needing to look at multiple pictures at once.

Only one of my monitors is in portrait; it sits on the far right. The middle two monitors (23" each) are side by side next to that, and on the far left two are stacked on top of each other. Cable management is my biggest pain, as I actually do try to keep everything neat :). Besides that, I have never seen anyone else in real life use, or even make use of, 5 monitors, but I enjoy my setup and only do graphics-intensive things on the primary two displays. I know my GPUs are outdated hardware and I'm planning on upgrading them soon. I am very curious whether anybody else has an extreme setup or if I'm the only one.


I think there are 2 misconceptions in the video, and they may be why the case study is flawed:

  1. NVENC and NVDEC offload encoding and decoding tasks to dedicated parts of the GPU, but my guess is that this only applies to certain codecs, which is not going to be the one your browser is running. Nvidia will do it for H.264/AVC and H.265/HEVC. Those codecs carry patents/licensing, so Google won't use them for online media distribution, since license fees and hardware restrictions would apply forever. In fact THIS video (stats for nerds) shows Codecs vp09.00.51.08.01.01.01.01.00 (313) / opus (251). So Nvidia is basically hampering distribution of content in its own supported formats. You don't want all your videos lying around in such a codec, lest you be unable to play them in the future due to tighter enforcement.
  2. The reason playing the videos on separate monitors causes higher load is not that the videos are playing, but that they are being drawn on screen, so the video actually gets decoded. If you have the videos "playing" on 1 monitor in the BACKGROUND, only the audio actually gets decoded, so those FPS are gained because you skip the video decoding; while you hear the audio you think the load is the same, but it isn't.

The test you can do to rule out the 2nd point is to use only 1 monitor and split it down the middle (like 2 displays) if possible. In one test, play the game in one half of the screen with the browser visibly playing the video in the other half. In the second test, just minimize the browser to see whether not drawing it on screen makes the difference.


On 8/9/2023 at 7:20 PM, Spotty said:

Can you please do something about Linus pointing to something on a screen and the camera operator zooming in just to show a blurry unfocused mess before zooming back out as they try to keep up with Linus?

To be clear I'm not blaming the camera operators, I understand it isn't easy trying to focus on to a screen or to keep up with Linus's fast pace as he moves around. I'm not sure what the solution would be whether it would be to plan the shots out in advance so the camera operators know what they're going to be focusing on so they can be better prepared, telling Linus to slow down with his presentation a few seconds when he points to things on screen to give his team the time they need for shots like that (which can be cut in editing), cutting in a separate shot from B roll/screen capture if the primary footage ends up being unusable, cutting those sections from the video if they're too blurry to be usable, or having the camera operator on the spot saying "Sorry Linus I didn't quite catch that can we run it back again so I can focus".

 

 

Linus: "And then look what happens"

 

 

Audience: ?????

 

this.

 

*please* have a workflow where the editors can put in a screen capture overlay of the "this" linus is pointing at.


2 hours ago, kripton said:

I think there are 2 misconceptions in the video, and they may be why the case study is flawed:

  1. NVENC and NVDEC offload encoding and decoding tasks to dedicated parts of the GPU, but my guess is that this only applies to certain codecs, which is not going to be the one your browser is running. Nvidia will do it for H.264/AVC and H.265/HEVC. Those codecs carry patents/licensing, so Google won't use them for online media distribution, since license fees and hardware restrictions would apply forever. In fact THIS video (stats for nerds) shows Codecs vp09.00.51.08.01.01.01.01.00 (313) / opus (251). So Nvidia is basically hampering distribution of content in its own supported formats. You don't want all your videos lying around in such a codec, lest you be unable to play them in the future due to tighter enforcement.
  2. The reason playing the videos on separate monitors causes higher load is not that the videos are playing, but that they are being drawn on screen, so the video actually gets decoded. If you have the videos "playing" on 1 monitor in the BACKGROUND, only the audio actually gets decoded, so those FPS are gained because you skip the video decoding; while you hear the audio you think the load is the same, but it isn't.

The test you can do to rule out the 2nd point is to use only 1 monitor and split it down the middle (like 2 displays) if possible. In one test, play the game in one half of the screen with the browser visibly playing the video in the other half. In the second test, just minimize the browser to see whether not drawing it on screen makes the difference.

1. YouTube uses H.264, VP9, and AV1 as far as I know. Most videos (and especially older ones) are in H.264; for higher resolution videos it seems to use VP9 and AV1, but I've yet to find the logic to it, they probably run some sort of calculation on when it's worth transcoding. But anyway, NVDEC supports all three if you've got a new enough card: H.264 has been supported for a while now, VP9 seems to have appeared around the time of the GTX 1080, and AV1 is supported since the RTX 3000 series. Most modern browsers will use hardware acceleration for video playback when available.
2. This depends on the browser; good question how each one behaves.
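
If you want to check for yourself whether the browser is actually hitting NVDEC during playback, NVML exposes a decoder utilization counter. A rough sketch using the pynvml bindings; I believe the function below is what pynvml exposes, but treat it as an assumption and check it against your installed version:

# Poll the NVDEC utilization counter while a video is playing in the browser.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
try:
    for _ in range(10):
        util, _sampling_period_us = pynvml.nvmlDeviceGetDecoderUtilization(handle)
        print(f"NVDEC utilization: {util}%")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()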


On 8/11/2023 at 10:52 PM, ImorallySourcedElectrons said:

1. YouTube uses H.264, VP9, and AV1 as far as I know. Most videos (and especially older ones) are in H.264; for higher resolution videos it seems to use VP9 and AV1, but I've yet to find the logic to it, they probably run some sort of calculation on when it's worth transcoding. But anyway, NVDEC supports all three if you've got a new enough card: H.264 has been supported for a while now, VP9 seems to have appeared around the time of the GTX 1080, and AV1 is supported since the RTX 3000 series. Most modern browsers will use hardware acceleration for video playback when available.

Please provide examples you've tried personally. The hardware support depends on the vendor; each one tries to lock the system to their advantage. For example, Apple Silicon M1/M2 don't provide AV1 hardware encoding/decoding, while NVIDIA RTX 40 cards do. If you use codecs with legal issues, they will never be hardware accelerated on all systems, so everybody gets screwed somehow. Using open standards is the way to go and basically sets the bare minimum for everybody.


13 hours ago, kripton said:

Please provide examples you've tried personally. The hardware support depends on the vendor; each one tries to lock the system to their advantage. For example, Apple Silicon M1/M2 don't provide AV1 hardware encoding/decoding, while NVIDIA RTX 40 cards do. If you use codecs with legal issues, they will never be hardware accelerated on all systems, so everybody gets screwed somehow. Using open standards is the way to go and basically sets the bare minimum for everybody.

What the hell are you going on about? NVENC/NVDEC work, you can find plenty of documentation on how they work, and you can use them in your own products. As for licensing, look who's on the H.264 licensee list: https://www.via-la.com/licensing/avc-h-264/avc-h-264-licensees/ For added fun, Microsoft includes an H.264 codec with Windows.

 

As to the "open standards" VP9 and AV1: both of those are in fact a complete mess regarding licensing; AOM was embroiled in pre-legal hijinks versus a patent troll. But the reality is that if you're up against the combination of Google, Intel, Facebook, Microsoft, Amazon, Apple, NVIDIA, and Samsung in a patent war, you're going to lose badly. Not only do they have patent portfolios big enough to attack your own products; if they decide to work around your technology, your patent could just as well be toilet paper.


  • 8 months later...

I have a problem. I have a gaming PC with a 13th-gen i7 CPU, a 4070 Ti GPU, and 32GB of DDR4 RAM, with an LG 180Hz 32-inch screen. I was playing, for example, MW3 at around 220 FPS. I decided to buy a second screen, an ASUS 165Hz, for streaming and chatting. At first I plugged it in and everything was normal: I was getting 220 FPS without streaming and 190 while streaming, no problem, until I tried to change the setting in the NVIDIA control panel to make my second screen run at 165Hz instead of 60Hz. Then everything went as bad as possible. Now I am playing at 150 FPS without streaming and 90 FPS while streaming, even after restoring default settings and unplugging the second screen. Please help me.


23 hours ago, DragonWarrior99 said:

I have a problem. I have a gaming PC with a 13th-gen i7 CPU, a 4070 Ti GPU, and 32GB of DDR4 RAM, with an LG 180Hz 32-inch screen. I was playing, for example, MW3 at around 220 FPS. I decided to buy a second screen, an ASUS 165Hz, for streaming and chatting. At first I plugged it in and everything was normal: I was getting 220 FPS without streaming and 190 while streaming, no problem, until I tried to change the setting in the NVIDIA control panel to make my second screen run at 165Hz instead of 60Hz. Then everything went as bad as possible. Now I am playing at 150 FPS without streaming and 90 FPS while streaming, even after restoring default settings and unplugging the second screen. Please help me.

I was literally about to ask if the refresh rate makes a difference on the additional screen(s). Do you get the performance back if you completely turn off or disconnect the other monitor?

 

As a new monitor polygamist myself, I keep my second monitor on 60hz lest it F my PS more than intended.

