
Live gameplay encoding with secondary Graphics Card?


I was wondering: while running a game on a main GPU, can a second graphics card be used to handle the live video encoding?

In my mind, this is not possible, since the video frames are processed on the main GPU, so you must either use that card’s built-in encoder or capture the display signal with a capture card.

I figure, though, that I don’t understand how live encoding works and am just plain wrong. The fact that the CPU could do the encoding instead really makes me think I am incorrect.

I would be quite happy to be wrong and learn that I could have a secondary graphics card just to handle the encoding, specifically in AV1. Of course I’d have to run my main graphics card at only x8 instead of x16, but maybe that is a reasonable trade-off depending on the specific circumstances?


You can copy the display output to a different GPU (or to the CPU; this is how CPU encoding works), so this would work.

 

But I'd probably wait until AV1 becomes more common (I'm pretty sure Twitch doesn't support it yet), and you can also do CPU encoding. I'd be tempted to get a higher-end GPU too, as the RTX 40 series, Arc, and AMD's new generation will likely support AV1 encoding.


I figure that if this is possible, it means I could use a cheap graphics card instead of a capture card. More expensive, but it wouldn’t be a “one-trick pony”.

 

I’d love confirmation from someone who has done this.

 

AV1 isn’t currently supported by Adobe (?), but it would save disk space before using ffmpeg to convert to H.264. It would also be neat to not need the CPU when converting to AV1.
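That later conversion pass would just be an ffmpeg call. A minimal sketch, assuming placeholder file names and a commonly used near-transparent x264 quality target (CRF 18):

```python
import subprocess

# Re-encode an AV1 recording to H.264 so it can be imported into editors
# that don't read AV1 yet. File names are placeholders.
subprocess.run([
    "ffmpeg",
    "-i", "recording_av1.mkv",   # the AV1 capture
    "-c:v", "libx264",           # software H.264 encoder
    "-crf", "18",                # near-transparent quality target
    "-preset", "slow",           # slower preset = better compression at a given quality
    "-c:a", "copy",              # leave the audio track untouched
    "recording_h264.mp4",
], check=True)
```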


22 minutes ago, CoolJosh3k said:

I was wondering: while running a game on a main GPU, can a second graphics card be used to handle the live video encoding?

In my mind, this is not possible, since the video frames are processed on the main GPU, so you must either use that card’s built-in encoder or capture the display signal with a capture card.

I figure, though, that I don’t understand how live encoding works and am just plain wrong. The fact that the CPU could do the encoding instead really makes me think I am incorrect.

I would be quite happy to be wrong and learn that I could have a secondary graphics card just to handle the encoding, specifically in AV1. Of course I’d have to run my main graphics card at only x8 instead of x16, but maybe that is a reasonable trade-off depending on the specific circumstances?

 

A program like OBS can "upload" uncompressed video frames to a video card, tell the video card to apply some processing to the frames (resizing, sharpening, other things), then compress them to video and "download" the compressed video back from the video card.

So the original video card could be used to capture the game, and OBS would then "download" the raw frames from that card (instead of "copying" them to the hardware encoder located on the same card) and upload them to the second card for encoding.

 

If you want to use a second video card just for hardware encoding, that video card could be plugged into even a PCIe x1 slot (for 1080p/1440p recording), because it's just the raw video upload that needs bandwidth. A lot of motherboards have a bottom PCIe x16 slot connected to the chipset that usually has only 4 or 2 actual PCIe lanes wired to it; such a slot would work fine for a second video card dedicated to video encoding only.

Uncompressed 4K at 60 fps is basically 3840 x 2160 pixels x 2-3 bytes per pixel x 60 frames = 995,328,000 to 1,492,992,000 bytes per second, i.e. roughly 1 to 1.5 GB/s ... and a PCIe 3.0 x4 link has almost 4 GB/s of bandwidth.
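The same back-of-the-envelope math in a few lines (the 2 and 3 bytes per pixel roughly correspond to 4:2:2 YUV and 24-bit RGB frames):

```python
# Rough PCIe bandwidth needed to push uncompressed 4K60 frames to a second card.
width, height, fps = 3840, 2160, 60

for bytes_per_pixel in (2, 3):                       # ~4:2:2 YUV vs. 24-bit RGB
    rate = width * height * bytes_per_pixel * fps    # bytes per second
    print(f"{bytes_per_pixel} B/px: {rate / 1e9:.2f} GB/s")

# Prints roughly 1.00 and 1.49 GB/s - well under the ~3.9 GB/s of a PCIe 3.0 x4 link.
```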

 

I agree with the others: no need to rush. Wait until AMD and nVidia both have AV1 encoders, and then you'll see support added for it on Twitch and other platforms.


5 minutes ago, CoolJosh3k said:

 

AV1 isn’t currently supported by Adobe (?), but it would save disk space before using ffmpeg to convert to H.264. It would also be neat to not need the CPU when converting to AV1.

CPU encoding of AV1 seems to be a good amount better, and it will save you even more disk space if that's the concern.

 

But this configuration is already somewhat common with iGPUs, and it will work here.


6 minutes ago, CoolJosh3k said:

 

 

AV1 isn’t currently supported by Adobe (?), but it would save disk space before using ffmpeg to convert to H.264. It would also be neat to not need the CPU when converting to AV1.

You wouldn't want to do that. AV1 is lossy just like H.264 is lossy ... it's like saving to JPEG 2000 so that you can later save to JPEG: two different kinds of loss stacked, resulting in a final product that's worse.

 


58 minutes ago, Electronics Wizardy said:

CPU encoding of AV1 seems to be a good amount better, and it will save you even more disk space if that's the concern.

But this configuration is already somewhat common with iGPUs, and it will work here.

I didn’t even think of that. I don’t have an iGPU, but would a request for the CPU to do the encoding default to having an iGPU do it? That would be the x264 setting in OBS, right?


1 hour ago, mariushm said:

You wouldn't want to do that. AV1 is lossy just like H.264 is lossy ... it's like saving to JPEG 2000 so that you can later save to JPEG: two different kinds of loss stacked, resulting in a final product that's worse.

 

I figured that I could have the same quality take up less space, until it is later converted to a less efficient compression scheme with the same minimum quality target.

 

In my mind, a quality factor of x remains the same; the bitrate would simply be higher for H.264.

 

Even if I didn’t use AV1 now, it’d sure be nice to have for the future when platforms start making use of it. I think YouTube is already trialling it on select videos?

 

The RTX 40 series has AV1 encoding, and so do the new Intel Arc cards.


1 hour ago, mariushm said:

 

A program like OBS can "upload" uncompressed video frames to a video card, tell the video card to apply some processing to the frames (resizing, sharpening, other things), then compress them to video and "download" the compressed video back from the video card.

So the original video card could be used to capture the game, and OBS would then "download" the raw frames from that card (instead of "copying" them to the hardware encoder located on the same card) and upload them to the second card for encoding.

If you want to use a second video card just for hardware encoding, that video card could be plugged into even a PCIe x1 slot (for 1080p/1440p recording), because it's just the raw video upload that needs bandwidth. A lot of motherboards have a bottom PCIe x16 slot connected to the chipset that usually has only 4 or 2 actual PCIe lanes wired to it; such a slot would work fine for a second video card dedicated to video encoding only.

Uncompressed 4K at 60 fps is basically 3840 x 2160 pixels x 2-3 bytes per pixel x 60 frames = 995,328,000 to 1,492,992,000 bytes per second, i.e. roughly 1 to 1.5 GB/s ... and a PCIe 3.0 x4 link has almost 4 GB/s of bandwidth.

I agree with the others: no need to rush. Wait until AMD and nVidia both have AV1 encoders, and then you'll see support added for it on Twitch and other platforms.

Would the CPU be involved much in this process of getting a second card to handle the encoding?


I would use one of the new Intel Arc A380 (or A40?) cards to do the encoding, so long as this does not impact CPU performance.

 

I really am quite new to understanding how encoding is done on a system. I’ve noticed some issues with 4K60 H.264 on AMD’s newest encoder (VCN 3.0).


1 hour ago, CoolJosh3k said:

I figured that I could have the same quality take up less space, until it is later converted to a less efficient compression scheme with the same minimum quality target.

In my mind, a quality factor of x remains the same; the bitrate would simply be higher for H.264.

Even if I didn’t use AV1 now, it’d sure be nice to have for the future when platforms start making use of it. I think YouTube is already trialling it on select videos?

The RTX 40 series has AV1 encoding, and so do the new Intel Arc cards.

If you care about quality, get a 10-20 TB hard drive and use a lossless video codec like MagicYUV, or use H.264 in lossless or near-lossless mode, to dump the video to that hard drive. Afterwards, recompress it with the highest quality settings; you can leave it overnight to compress as well as possible.

The x264 software encoder has a lossless mode and a near-lossless (practically lossless, you wouldn't be able to tell) mode. The lossless mode is not well supported by some video editors, and some older cards can't hardware-decode it; the near-lossless mode is well supported...

With the right configuration, you can capture 1080p/1440p 60 fps near-losslessly at around 80-100 MB/s, or roughly 5-6 GB per minute, while using just one or two cores of a Ryzen 5xxx processor. You can use a hardware encoder in parallel to stream, and later, for example, edit the raw high-quality capture into a nice video for YouTube - keeping in mind YouTube will recompress your content either way.
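As a rough sketch of what those two x264 modes look like outside of OBS (file names are placeholders; in libx264, CRF 0 selects the true lossless mode, and a very low CRF such as 10 is the "practically lossless" option):

```python
import subprocess

SOURCE = "capture_raw.avi"  # placeholder for a raw/lossless capture file

# True lossless: bit-exact, but large files and patchier editor/decoder support.
lossless = ["ffmpeg", "-i", SOURCE, "-c:v", "libx264", "-crf", "0",
            "-preset", "veryfast", "capture_lossless.mkv"]

# Near-lossless: visually indistinguishable, much better compatibility.
near_lossless = ["ffmpeg", "-i", SOURCE, "-c:v", "libx264", "-crf", "10",
                 "-preset", "veryfast", "capture_near_lossless.mkv"]

for cmd in (lossless, near_lossless):
    subprocess.run(cmd, check=True)
```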

 

1 hour ago, CoolJosh3k said:

Would the CPU be involved much in this process of getting a second card to handle the encoding?

The CPU will be involved either way ... the video is retrieved from one video card and pushed into the other card, the driver of the other card uses some CPU to set up the encoder and start encoding the frames, and the compressed content is then retrieved from that video card.

The CPU is also used to handle the audio (you capture your game sound and mix it with the microphone), the mixed audio gets compressed to AAC or Opus or MP3, and then the CPU muxes (combines) the video track with the audio track before the resulting stream is uploaded to Twitch or YouTube or whatever.
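The muxing step at the end is cheap. A minimal sketch of that final combine, with placeholder file names and both streams copied rather than re-encoded:

```python
import subprocess

# Combine an already-encoded video track with an already-encoded audio track
# into one container. With "-c copy" nothing is re-encoded, so the CPU cost
# is just the muxing itself.
subprocess.run([
    "ffmpeg",
    "-i", "gameplay_video.h264",   # compressed video pulled back from the encoder card
    "-i", "gameplay_audio.aac",    # game sound + mic mix, encoded on the CPU
    "-c", "copy",
    "gameplay_final.mp4",
], check=True)
```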

 


13 minutes ago, mariushm said:

If you care about quality, get a 10-20 TB hard drive and use a lossless video codec like MagicYUV, or use H.264 in lossless or near-lossless mode, to dump the video to that hard drive. Afterwards, recompress it with the highest quality settings; you can leave it overnight to compress as well as possible.

The x264 software encoder has a lossless mode and a near-lossless (practically lossless, you wouldn't be able to tell) mode. The lossless mode is not well supported by some video editors, and some older cards can't hardware-decode it; the near-lossless mode is well supported...

With the right configuration, you can capture 1080p/1440p 60 fps near-losslessly at around 80-100 MB/s, or roughly 5-6 GB per minute, while using just one or two cores of a Ryzen 5xxx processor. You can use a hardware encoder in parallel to stream, and later, for example, edit the raw high-quality capture into a nice video for YouTube - keeping in mind YouTube will recompress your content either way.

The CPU will be involved either way ... the video is retrieved from one video card and pushed into the other card, the driver of the other card uses some CPU to set up the encoder and start encoding the frames, and the compressed content is then retrieved from that video card.

The CPU is also used to handle the audio (you capture your game sound and mix it with the microphone), the mixed audio gets compressed to AAC or Opus or MP3, and then the CPU muxes (combines) the video track with the audio track before the resulting stream is uploaded to Twitch or YouTube or whatever.

 

For now, I have it record 100 Mbps (CBR) H.264. I have Adobe spit out my edited product at the same settings and send that to YouTube. YouTube can then re-encode at whatever it wants, from a high-quality original.
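For scale, that bitrate works out as follows (just arithmetic, not tied to any particular encoder):

```python
# Disk usage of a constant 100 Mbps recording.
bitrate_bits_per_second = 100_000_000                    # 100 Mbps CBR
bytes_per_minute = bitrate_bits_per_second / 8 * 60      # 750,000,000 bytes
print(f"{bytes_per_minute / 1e9:.2f} GB per minute")     # ~0.75 GB/min, ~45 GB/hour
```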


I figure that since a second video card would be involved, it’d take more CPU usage, as the system has to move things between cards. Or would it be almost the same as what it already has to do with a single video card doing the encoding?

