Search the Community
Showing results for tags 'nvenc'.
-
I know a lot of home servers get used for Plex, and I'm sure many of you have tested the hardware transcoding of PMS. When I see people talking about Intel Quick Sync vs Nvidia NVENC, the topic always revolves around the number of encodes the GPU can keep up with. What I am more concerned with, yet have never really seen discussed, is the quality of the encoders. What's the consensus on Quick Sync vs NVENC encoding? In the use case of PMS, does one look noticeably better than the other? I'm still running an i7-2600 with the first generation of Quick Sync that supported H.264 hardware encoding. Do the newer generations of hardware encoders supported in PMS look visibly better than what I'm getting with my system?
-
I highly doubt this is the correct location for this, but no one has been able to help me anywhere I asked, so this is a last resort. I've been getting a lot of errors in many programs lately, and I believe they all relate to my GPU. The OBS encoder has been crashing with this log:

16:16:51.736: ==== Streaming Start ===============================================
16:16:51.741: obs-browser[2]: API: performing call to 'getSystemCPUUsageTimes', callback id 41
16:16:51.741: obs-browser[2]: API: completed call to 'getSystemCPUUsageTimes', callback id 41
16:16:51.741: obs-browser[2]: API: performing call to 'getSystemMemoryUsage', callback id 42
16:16:51.741: obs-browser[2]: API: completed call to 'getSystemMemoryUsage', callback id 42
16:16:51.741: obs-browser[2]: API: performing call to 'getSystemHardwareProperties', callback id 43
16:16:51.742: obs-browser[2]: API: completed call to 'getSystemHardwareProperties', callback id 43
16:16:51.769: [jim-nvenc] get_encoded_packet: nv.nvEncLockBitstream(s, &lock) failed: 8 (NV_ENC_ERR_INVALID_PARAM)
16:16:51.769: Error encoding with encoder 'streaming_h264'
16:16:51.769: [rtmp stream: 'adv_stream'] Encoder error, disconnecting
16:16:51.769: Output 'adv_stream': stopping
16:16:51.769: Output 'adv_stream': Total frames output: 0
16:16:51.769: Output 'adv_stream': Total drawn frames: 53
16:16:51.769: warning: 2 frames left in the queue on closing
16:16:51.770: obs-browser[2]: API: posting call to 'getSystemCPUUsageTimes', callback id 44, args: []
16:16:51.770: obs-browser[4]: CONSOLE: Stream is not started (source: https://obs.streamelements.com/yoink/static/mediaRequest.7b28b375074e78075877.bundle.js?e3bc7bce8ab71e0a53a8:2)
16:16:51.770: obs-browser[2]: API: posting call to 'getSystemMemoryUsage', callback id 45, args: []
16:16:51.770: obs-browser[2]: API: posting call to 'getSystemHardwareProperties', callback id 46, args: []
16:16:51.772: ==== Streaming Stop ================================================

I also play VALORANT, and it has been crashing as well. And finally, I was able to open an issue in Visual Studio (not sure if any of these are related), which gives me this error: "Exception thrown at 0x00007FFC4AEE2F06 (WindowManagementAPI.dll) in : 0xC0000005: Access violation reading location 0x0000000000000130." I've tried running things as an admin; this didn't fix it either. List of things I've tried:
- Updating graphics drivers
- Reinstalling graphics drivers
- Restarting the computer
- Reseating the graphics card
- Overclocking, underclocking, factory resetting and undervolting the graphics card
- Scanning for viruses/malware
- Booting into safe mode
I feel like I'm out of options at this point. Can anyone help me, or is my graphics card just broken?
-
- access violation
- crashes
-
(and 2 more)
Tagged with:
-
I am trying to decide between buying a 3060 or a 3070 (when they are in stock) and I wanted to know if there were any differences between the encoding performance on these cards. I know that they both have a dedicated chip but is it the same one and do they perform the same?
-
Hi, I occasionally use HandBrake for NVENC encoding and I noticed my GPU (1080 Ti) usage was pretty low. Looking at Task Manager, I can see that the GPU's Video Encode graph is pinned at 50% and my CPU (3900X) is hovering around 30-40% usage. Is there a way to derestrict or adjust the software to allow its encoding to use 100% of the NVENC chip on my GPU? It should be able to do it; it's not bottlenecked by any of the other components.
-
So I have 2 PCs; one is a streaming/recording PC, and I want to use the NVENC encoder for OBS. I want to know what would be better for my streaming PC: a Leadtek Quadro P400 or a 1050/Ti? The Quadro is cheaper than the 1050s.
-
Not sure if this is the right place or not, but yeah... So I stream on Twitch using a PC + capture card. I also use OBS to record to local storage, for obvious reasons.
PC SPECS:
- Ryzen 5 3600
- Gigabyte Aorus X570 Elite w/ WiFi [I am hardwired; WiFi is just for when I need to fix the hardwire/router]
- GTX 1660 Super
- MSI MAG CoreLiquid 240R
My issue is that when I'm streaming, everything is perfectly fine with the stream, no dropped frames, nothing. But when I watch the gameplay recording for my video edits, everything is smooth as silk, then 1 frame skips, it runs smooth again for 5-10 minutes, then another frame skips. Most of the YT videos and other sources I've checked to solve this issue all say to lower my game settings, but I play on console, so those tips don't help. Thanks in advance!
-
Typically there is a topic posted for every video in the “LTT Official” portion of the forum, but for this video specifically there was never a post / discussion thread created… Not sure if this was by design because they didn’t want everyone and their mom telling them how they think they could do it better, or if they just forgot to make a thread for it. Assuming it’s the latter, I’m really not sure where else to put this. Love the video, never seen anyone try to capture more than 3 4k sources on one PC. I understand that you already have a working solution and probably don’t want to change it, but here’s a few things I thought of while watching the video: You can bypass Nvidia’s artificial encode limit via this patch, not sure if that’s against your guys’ partnership with them or something but it’s super easy to apply and works perfectly. You can avoid high 3D GPU usage by using FFmpeg to stream and record instead of OBS. Additionally, you would only need that first Quadro with this method as the GP100 is perfectly capable of encoding 6 streams of 1080p (along with basically any other Nvidia card released in the last 5 years assuming you're using the patch), which seemed to be your final output resolution despite the 4K cameras. Using my patched GTX 1080 I am capable of encoding up to 4 streams of 4K60 using FFmpeg, and the GP100 has 3 NVENC encoding chips while the GTX 1080 has 2 (same architecture). Thus, in theory the GP100 could do 6 encodes of 4K60 by itself, which would be cool to see. Though at that point your limitation is more likely to be your CPU, or even software which starts to hiccup as multiple instances of recordings inadvertently hit the same threads and such. 
I would suggest giving FFmpeg a try with a command / parameters like this:

-thread_queue_size 9999 -indexmem 9999 -f dshow -rtbufsize 2147.48M -video_size 1920x1080 -framerate 60 -i video="[video device]":audio="[audio device]" -map 0 -c:v h264_nvenc -preset: hp -r 60 -rc-lookahead 120 -pix_fmt nv12 -b:v 6M -minrate 6M -maxrate 6M -bufsize 6M -c:a aac -ar 44100 -b:a 320k -vsync 1 -max_muxing_queue_size 9999 -f mpegts C:\Users\[user]\Videos\FFmpeg\Output.ts

You can also use the tee pseudo-muxer in FFmpeg to send the same encode to 2 places, like a stream to Twitch and a local file. What makes this particularly convenient is that you don't have to do the same encode work twice like you have to in OBS:

-f tee "[f=mpegts]C:\Users\[user]\Videos\FFmpeg\Output.ts|[f=mpegts]udp://10.0.1.255:1234/"

Lastly, if desired, FFmpeg also has a segment muxer, which would allow you to record 24/7 without over-filling your hard drives; the recording will automatically overwrite the first part when it hits the maximum part number you specified:

-f segment -segment_time 1800 -segment_wrap 48 -reset_timestamps 1 -segment_format_options max_delay=0 C:\Users\[user]\Videos\FFmpeg\Output%02d.ts

You can even combine the tee and segment logic to stream / record 24/7 without ever overfilling the drives:

-f tee "[f=segment:segment_time=1800:segment_wrap=48:reset_timestamps=1:segment_format_options=max_delay=0]C:\Users\[user]\Videos\Output%02d.ts|[f=mpegts]udp://10.0.1.255:1234/"

Since FFmpeg is run from the console, you can call these 6 encodes programmatically in something like PowerShell using Start-Process, allowing you to easily launch everything from one place. I've attached a zip containing the file structure for doing something like this. Linus Example.zip
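As a sketch of that launch-everything-from-one-place idea, the six captures could also be driven from Python instead of PowerShell. Everything here is a placeholder (device names, output paths, bitrates) and it assumes ffmpeg is on the PATH; the tee/segment string mirrors the combined command above.

```python
import subprocess

def tee_spec(record_path, stream_url):
    """Build a tee pseudo-muxer output: one encode feeding both a
    segmented 24/7 recording and a UDP stream."""
    seg = ("[f=segment:segment_time=1800:segment_wrap=48:"
           "reset_timestamps=1:segment_format_options=max_delay=0]")
    return f"{seg}{record_path}|[f=mpegts]{stream_url}"

def capture_cmd(video_dev, audio_dev, record_path, stream_url):
    # Placeholder DirectShow capture -> NVENC encode -> tee output.
    return ["ffmpeg", "-f", "dshow",
            "-i", f"video={video_dev}:audio={audio_dev}",
            "-c:v", "h264_nvenc", "-b:v", "6M",
            "-c:a", "aac", "-b:a", "320k",
            "-f", "tee", tee_spec(record_path, stream_url)]

if __name__ == "__main__":
    # One process per camera, equivalent to six Start-Process calls.
    cams = [("Cam %d" % i, "Mic %d" % i) for i in range(1, 7)]
    procs = [subprocess.Popen(capture_cmd(v, a, f"out{i}_%02d.ts",
                                          "udp://10.0.1.255:1234/"))
             for i, (v, a) in enumerate(cams, 1)]
    for p in procs:
        p.wait()
```

Launching from one script also makes it easy to stagger the starts or restart a single failed capture without touching the other five.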
-
I just wanted to know whether the RX 550 can support streaming, like whether it has an encoder for it the way Nvidia cards have NVENC. Please give me an answer ASAP.
-
What kind of percentage should I expect on an RTX 3070 or 2080 Ti?
-
Hi everyone, I am putting together a new build for gaming and streaming. On eBay, the prices of the 2070 and 1080 Ti are comparable. I know that the performance of the 1080 Ti is greater, at least in Unigine Superposition, but I'm not sure how to check streaming performance. I know that the newer NVENC probably has a huge effect, but I don't really know much about NVENC anyway. So, what would you guys recommend? Should I buy a 2070 or a 1080 Ti for gaming and streaming?
-
Don't know if this will be useful, but go overkill on cores; just get as many cores as you can afford. I encode in x264 now with an AMD 3970X, since NVENC on the 3090 FE can't handle a 4K stream on almost anything (single-rig setup). NVENC is a great idea, but it's far from the answer to pushing out the best possible quality, though that will change down the road as more of the tech is enabled. I can do this and run CPU-heavy games while still only using 18%-25% of my CPU... this setup can run an 8K stream, but you will struggle to find a platform that can host it outside of your own website's thresholds.
-
Getting NVENC or similar on thunderbolt, without a whole GPU
RedyAu posted a topic in Graphics Cards
Hi! This may be a stupid question, but I hope you can point me in the right direction. I have a ThinkPad T15 laptop with a 10th-gen i7 in it. It's great, compact, fast, I love it. It has no discrete graphics, however. I've been thinking of buying a Thunderbolt eGPU case and some low-power GPU (for example a GTX 1650), but I was shocked to find out how much eGPU cases cost. More than a GPU, even in today's world! I really only need NVENC, or some other hardware-accelerated video encode/decode, and processing power. I'm not planning on gaming (and certainly not recent AAA titles), but occasionally I need to edit video or make a livestream of an event. This CPU alone can't handle that, as OBS uses 30-40% of it just for encode/decode. Does a device like this exist? Or can you recommend some extremely simple eGPU case/cable/anything that doesn't cost a fortune?
-
I was wondering about how much the new generation of encoders could manage while recording/streaming? I’d love to see streaming benchmarks or just a pass/fail rating of different settings, if such a thing exists. I’m thinking of the Nvidia RTX 40 series, Intel Arc and AMD RX 7000 series specifically. My current solution seems to do 4K60 at best (70% utilisation), but I’d love if one of the new cards could do 4K120 in either H264 or AV1. I presume they can, but where are the limits? It would be really neat if anyone knows the answer or can link me toward some data.
-
Hi everybody, I'll explain my problem briefly: due to surging power bills here in Europe and the fact that my current one is broken, I want to switch my Jellyfin/Emby media server to a Raspberry Pi. Unfortunately, new RPis are sold out everywhere, so I'm stuck with an old one I have laying around (a B+ from 2014/2015). It's nice from a reduce/reuse perspective, but that board's SoC is of course old and slow, and won't support any kind of live transcoding of video content. So I set myself up to convert my entire video library to formats which wouldn't need to be transcoded by the server but could be played directly by the clients. The clients in question are a webOS 5 TV (which, thanks to LG's amazing OS fragmentation, will probably never get the Jellyfin app, which is why I'm also running Emby), various iOS and Android devices, and several PCs (using the apps; I'm not interested in browser streaming). And here are my doubts (I'm kind of new to this whole video transcoding world). I would answer them by live-testing the whole thing, but unfortunately my current media server is down for other issues, and I would like to be certain this strategy will actually work before ripping the Raspberry out of its current use and setting up the new media server on it. MP4 H.264 AAC 8-bit seems like the safest bet; every sort of device seems to support it, but it's not as disk-space and bandwidth efficient as H.265, and furthermore the latest version of Handbrake doesn't let you take advantage of Nvidia's NVENC GPU encoder for it (I don't know why; after googling, it seems that it used to be supported but that's no longer the case), meaning much longer transcoding times (4 or 5 times as much with my machine: R5 3600, Quadro P4000).
These factors are tempting me to go for H.265, but while Jellyfin's forum has a table suggesting that every device I use is compatible with H.265, such information is nowhere to be found on Emby's blog/forum. My TV supports H.265/HEVC, but I came to understand (again, not sure if true) that the app itself needs to be compatible with the format; it's not a given of the TV's hardware. Also, I've ended up in a Google rabbit hole of people saying that NVENC is garbage, that it destroys video quality and shouldn't be used to re-encode stuff, and others saying it's the greatest thing since sliced bread... I'm kind of confused, honestly. The main question then is: can I get away with H.265 given the compatibility doubts, and should NVENC be seriously considered (both for H.264 and H.265)? If you could help me, that would be much appreciated. Thanks, everybody.
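If it helps, the "safest bet" conversion can be scripted with plain ffmpeg instead of Handbrake. This is only a sketch: the library path, bitrate, and source extension are placeholders, and it assumes an ffmpeg build with the h264_nvenc encoder (which would use the Quadro P4000's NVENC block).

```python
import pathlib
import subprocess

LIBRARY = pathlib.Path("/mnt/media/library")  # placeholder library root

def convert_cmd(src):
    """ffmpeg command for the 'safe bet' target: MP4 container,
    8-bit H.264 video via NVENC, AAC audio."""
    dst = src.with_suffix(".mp4")
    return ["ffmpeg", "-i", str(src),
            "-c:v", "h264_nvenc", "-pix_fmt", "yuv420p",  # force 8-bit 4:2:0
            "-c:a", "aac", "-b:a", "192k",
            str(dst)]

if __name__ == "__main__":
    # Convert every MKV in the library, one file at a time.
    for f in sorted(LIBRARY.glob("**/*.mkv")):
        subprocess.run(convert_cmd(f), check=True)
```

Swapping "h264_nvenc" for "hevc_nvenc" would give the H.265 variant of the same pipeline, if the compatibility question gets resolved in its favour.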
-
From doing research, we now know that NVIDIA GPUs give better performance for streaming. For a start, I was wondering how the GTX 1070 or 10xx series (Pascal) compares to the 16xx series (Turing), like the 1650 Super/1660. From looking at benchmarks it is clear that the 1070 gives much better FPS in games than a 1650, for example, because of course they are two items at very different prices; the 1070 is way more expensive. But look at this video: he tested and said that his GTX 1080 performs worse in streaming than a 1650 Super. He said he can only stream up to 900p60 with the GTX 1080, compared to 1080p60 using the 1650 Super, and it looks like the game dropped a lot of FPS when he streamed with the GTX 1080 (I include the video link down below). From this, I heard that the 1080 does worse when streaming than the 1650 because the 16xx series, and in this case the 1650, supposedly has a dedicated chip to encode the stream and the 1080 doesn't, even though both have NVENC. This chip supposedly lets the 16xx series save its VRAM for the game, instead of having some of it taken for encoding the stream as happens on the 1080. So I am asking anyone who has streaming experience with both cards, 1070/1080 vs 1650 Super/1660, or 10xx series vs 16xx series: is it true that even a 1650 Super will perform better in streaming than a 1070/1080? The video: https://youtu.be/FeCm10Xdkno?t=33
-
I was about to buy a new 6K editing workstation (this is my first option: Parts List), but then I saw the impact of rendering with hardware encoding in Premiere and Media Encoder using NVIDIA GPUs. I saw that if your GPU is faster you get faster results, so now I don't know whether to spend that much money on a Ryzen Threadripper 3960X, or change it for a Ryzen 3900X and put more money toward the GPU and RAM; I know I'm a little low on RAM. I'm going to be recording BMPCC 6K raw. What do you guys think about the new hardware encoding? And what advice can you give me on building the 6K editing workstation? My budget is 3000 USD. Thank you guys, this community rocks
-
Budget (including currency): ≈600,000 CLP, hopefully less (≈390,000 CLP used, 210k left)
Country: Chile (all pricing in CLP)
Games, programs or workloads that it will be used for: Playing and/or streaming light-ish games like osu!, Etterna, Terraria, CS:GO, Minecraft, etc. Both at 720p60 (Hz/fps). Single monitor. I expect medium-ish settings at the worst of times.
Other details: I decided to build a computer recently and have been getting 1-2 parts per month. Long story short, I decided to go with a 3200G, since it was priced exactly the same as a 2200G and was significantly less expensive (with this budget and purpose, anyway) than a 1600AF (100k vs 135k, respectively), plus a B450 motherboard, taking upgradeability into account. The thing is, I've done a bit of research, and apparently the GTX 1650 Super (which is the cheapest graphics card with Nvidia's latest NVENC encoder) would be a good choice, since its hardware encoder would (in theory) take the encoding load off the CPU. But I'm not really sure what kind of gaming/streaming performance I'd be getting, and I've been thinking that maybe there's a better way to approach my situation. The graphics card is the last part of the build and I'll be getting it around August. The parts in question (everything in bold is definitive):
CPU: Ryzen 3 3200G (w/ Vega 8 graphics) [$100k]
MB: MSI B450 Pro M2-V2 [$77k]
RAM: 2x4GB HyperX Fury 3000MHz CL15 [$52k]
SSD: Adata SU630 240GB (SATA) [$36k]
HDD: Seagate Barracuda Compute 1TB [$40k]
PSU: EVGA 600W 80+ [$50k]
Case: Deepcool Matrexx 30 [$32k]
Cooler: Stock (Wraith Stealth) [Included]
GPU: Cheapest 1650 Super available in August [≈$210k expected]
I thought I might be fine with an RX 570 4GB (for way cheaper) or a 5500 XT (a bit cheaper), but maybe the streaming performance wouldn't be so good, given what I've heard about AMD's hardware encoder(s).
This is my first build ever and I don't really know what to choose, because maybe the games are light enough to not load the CPU a whole lot. I'll gladly clarify anything; excuse any mistakes/typos/bad English.
-
use second GPU for NVENC encoding possible?
K0nvikT001 posted a topic in Programs, Apps and Websites
I would like to know whether I can set up OBS to use a second GPU (GTX 960 2G) for the video encoding/compression thing. I am not sure myself, but it seems as though NVENC uses GPU resources to do whatever it does. I tried it while using my 1080, but GTA V stuttered so badly and video playback was the worst ever... I just realised that if I can set OBS to use NVENC on a second GPU, then I won't have to use x264 CPU encoding, as that will also kill my resources. Can I set up OBS to use the second GPU?
- 7 replies
-
- obs
- content production
- (and 4 more)
-
I've been trying to fix this for months, so I'm just kind of reaching out in whatever random places I can think of now to get it fixed. I make desktop tutorials, and am supposed to be making them in Premiere Pro for my YouTube channel, colleagues, and clients. For this, I use OBS Studio and the NVENC encoder. Currently, I'm not really willing to change this, as XSplit does not live up to what I need, and the x264 codec (over NVENC) is too slow while also editing and such. NVENC allows me to do full 4K60 4:4:4 lossless recordings to an NVMe SSD from 3 monitors without hiccup. It just stopped working with Premiere earlier this year. I keep getting this stupid, nonsensical error that doesn't help anyone, nor is it factually correct:

20:14:51.847: error: OpenEncodeSessionEx failed: out of memory (10)
20:14:51.866: fatal: No NVENC capable devices found
20:14:51.866: [NVENC encoder: 'recording_h264'] Failed to open NVENC codec: Generic error in an external library

EDIT: It is not a multi-card issue. I bought the Quadro to try and resolve this issue last week; the issue has been present on a single card for going on 6-7 months now. I have a GTX 1080 as my main GPU and have recently added a Quadro P400 to handle NVENC. The idea was that Premiere would use up CUDA on the 1080, and NVENC running in OBS Studio would be allowed to proceed. I even restricted CUDA usage for all Adobe-related EXEs to just the GTX 1080, but OBS still can't initialize NVENC with Premiere open. I have also just tried a GPUFoundation.dll mod that fixes an only-tangentially-related plugin, and it didn't help me. Anyone have ANY CLUE what I can do to get this working again? It worked perfectly from ~2014 until earlier this year, and then poof.
For more information and context, I have: an OBS forum thread, an Adobe support thread, and an entire video detailing my journey thus far.
Specs: i7-6900K, 32GB DDR4 RAM, Win10 64-bit, GTX 1080, Quadro P400, latest OBS Studio, Nvidia stable drivers, Adobe CC 2017 suite
The OBS team isn't likely to be able to help, and the Adobe people will respond eventually, but probably not with anything helpful. So I'm just looking for anyone who may have dealt with this.
- 3 replies
-
- premiere nvenc
- nvenc
-
(and 2 more)
Tagged with:
-
Does NVENC tank the quality of game footage when recording locally? I know it can be worse than x264 in quality terms when streaming, but I'm only doing local recordings, and my CPU is taking a lot of stress with x264, giving my game FPS hiccups. Thanks for the input. Here is my system: https://pcpartpicker.com/user/Toysoldiers35/saved/spbf7P
-
Hi there, I wanted to try to move away a little from YouTube and start live streaming on Twitch; however, I don't know what the best option would be. Here are the options I have available:
Option 1: Two-PC setup
- One for gaming and the other for encoding
- Has a Ryzen 5 1600X CPU for encoding
- Would have to use NDI to connect my gaming PC to the streaming PC (Is NDI worth it? Or is the quality rubbish?)
Option 2: One-PC setup
- Using the NVENC encoder through an RTX 2070S
My target resolution for streaming is 720p at 60fps or 900p at 60fps. I play quite fast-paced action games as well. Any solution or suggestions would be appreciated.
Gaming PC specs: i7-6800K, RTX 2070S, GTX 1060 6GB, 16GB 3000MHz DDR4 RAM
Secondary PC specs: Ryzen 5 1600X, GTX 760, 16GB 3000MHz DDR4 RAM
Cheers!
-
I mostly play Apex Legends, Rainbow Six: Siege, and Warframe. I'm looking to buy a new Nvidia GPU that will run all of those at 1080p 120fps. I would like to also use NVENC to stream the gameplay at the same time (I know that NVENC on my current machine with a 1660 takes ~15% of the GPU's power, so that leaves 85% for the games). Which GPU should I get?
-
I am working on a project that requires transcoding a bunch of H.264 videos into JPEGs. I did a test on my own 1050 Ti GPU; the test video is 1080p at about 2.5 Mbit/s.

ffmpeg -i video.mp4 -c:v h264_nvenc output\%05d.jpg

The result is about 90 to 100 fps. Now I need to build a server with an AMD GPU (considering an RX 470, 480, or 580). The videos will be uploaded by users and are no longer than 15 seconds (450 frames). I did a little searching; I think h264_amf works with AMD GPUs? But I don't have an AMD GPU on hand, so let me know how to enable it and how fast an RX 480 can be. Thanks
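One thing worth noting: h264_nvenc (like h264_amf) is an H.264 *encoder*, so it isn't what produces the JPEGs; those come from ffmpeg's mjpeg encoder, and the part that can be hardware-accelerated on either vendor's card is the H.264 *decode*. A rough sketch, with placeholder file names and assuming an ffmpeg build with the relevant hwaccel support:

```python
import subprocess

def extract_cmd(src, out_pattern, hwaccel="cuda"):
    """Decode src in hardware and write one JPEG per frame.
    hwaccel: 'cuda' for NVDEC on Nvidia cards, 'd3d11va' (Windows)
    or 'vaapi' (Linux) for AMD cards. Decoded frames are copied back
    to system memory and encoded as JPEG on the CPU."""
    return ["ffmpeg", "-hwaccel", hwaccel, "-i", src, out_pattern]

if __name__ == "__main__":
    # Placeholder input/output; %05d numbers the extracted frames.
    subprocess.run(extract_cmd("video.mp4", "output/%05d.jpg"), check=True)
```

Since the JPEG encode stays on the CPU either way, throughput on the RX 480 box would likely depend as much on CPU cores as on the GPU's decoder.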
-
Just came across this bit of information, thought I'd give everyone here a heads up. Yes, it's not in the news subforums, because I don't want to follow all the formats and stuff. https://www.coolaler.com/threads/gtx-1650-nvenc-turing.355272/?fbclid=IwAR22wI7r8R2hQODAgDtY5Ar97LQjYyIk_zPeTp8VxPSpXgV_Si_y7hMSglY So yes, just because you paid more for a GTX card doesn't mean you're getting the full up-to-date GeForce Experience feature set. And here I thought this bullshit would end once you leave the GT cards... Pascal's NVENC, imo, is only good for giving examples of how to get things done, not for recording enjoyable content. The quality just isn't good enough, even if you can forego compression.
-
Hey, I was wondering if anyone could confirm (preferably someone who actually has one) that the GTX 1660 Ti / 1660 has the same or more encoding bandwidth compared to the GTX 1080. When I say bandwidth I mean bandwidth, not performance; I am aware that the Turing NVENC chip produces a higher quality image. What I would like to know is whether it can handle the same load. For example, my GTX 1080 can encode 2 4K streams at a 288M bit-rate simultaneously, no sweat, while the GTX 1070 and below do not have this capability. The GTX 1080 actually has 2 NVENC engines, giving it 2x the bandwidth of its like-series brethren; see NVIDIA's Encode Matrix. You'll also see that the GTX 1070 is listed as having 2 NVENC chips, but for reasons unknown to me only one of them is active, so it still remains true that the GTX 1080 has higher encoding bandwidth. It's for reasons like these that I ask and don't assume; NVENC chips have in the past been very tricky to judge on paper. Basically, if I could get someone to encode a high bit-rate video in something like OBS and screenshot the GPU section of Task Manager while doing so, I would be highly appreciative. Preferably a 4K60 video at a 288M VBR (variable bit-rate), but a 1080p60 video at a 288M VBR will still theoretically work and give me something to reference against my GTX 1080. Also worth mentioning that something has to be going on on screen to get an accurate measure; the more fast-paced the game, the more stress it puts on the encoder, so something like Rocket League or Apex Legends works great. Really just looking to see what percentage the "Video Encode" section of Task Manager is at while encoding one of the aforementioned streams. I consistently encode 2 4K streams simultaneously, so it's important for me to know that a 1660 Ti / 1660 can do the same before picking one up.
If it can, however, they're cheaper, produce a higher quality image, and consume less power, so there would be no reason not to make the swap. PS: this is for a dedicated encoding rig, so the 3D performance of the card is inconsequential.
- 4 replies
-
- nvenc
- gtx 1660ti
- (and 4 more)