
AMD could be developing a 24-core/48-thread Ryzen 9 7950X CPU w/ a TDP of 170 W & up to 5.4GHz CPU clocks (Updated)

12 minutes ago, leadeater said:

The RDNA2 encoder is a lot better though; not quite as good as Nvidia's (or Intel's), but it's not actually garbage anymore. Perfectly usable. The weakest use case is stream encoding; for something like Adobe Premiere it's closer again.

 

But yeah, it's also highly useful for office computers. I'm sure HP and Dell have been bitching at AMD about that, but the Ryzen Pro APUs have been fine for this.

Source? Last time I checked, which I believe was RDNA 2, AMD's HEVC encode couldn't even keep up with Nvidia's AVC encode at the same bitrate. 

 

I am not being antagonistic, by the way. I think proper comparisons between the three are quite rare, so the more I can find and read, the better.


19 minutes ago, LAwLz said:

Source? Last time I checked, which I believe was RDNA 2, AMD's HEVC encode couldn't even keep up with Nvidia's AVC encode at the same bitrate. 

[image: encoder quality vs. bitrate chart]

 

It is close-ish at higher resolutions.

[image: encoder quality comparison at higher resolutions]

 

Note that the 6900 XT is worse than the 2060 mobile (in this regard).


43 minutes ago, DANK_AS_gay said:

[image: encoder quality vs. bitrate chart]

 

It is close-ish at higher resolutions.

[image: encoder quality comparison at higher resolutions]

 

Note that the 6900 XT is worse than the 2060 mobile (in this regard).

Those results are kind of awful for AMD though. 

The first graph is not really that relevant since it's using such a high bit rate.

For comparison, Twitch does not allow anything above 6Mbps.

Netflix ultra HD is around 15Mbps.

It's only once we get close to 20Mbps that AMD catches up. 

 

At the lower bitrates we see that for, let's say a VMAF of 75, Nvidia needs about 5.2Mbps and AMD needs about 6.6Mbps.

AMD needs about 27% more data for the same quality. 
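If anyone wants to reproduce numbers like these rather than eyeballing charts, the usual approach is to encode the same source clip at a fixed bitrate with each hardware encoder and score the result against the original with VMAF. A rough sketch of that workflow, assuming an ffmpeg build with libvmaf plus the h264_nvenc and h264_amf encoders (the file names and bitrate are just placeholders):

```python
import subprocess

REFERENCE = "reference_1080p60.mp4"  # placeholder: a high-quality source clip

def encode(encoder: str, bitrate: str, out: str) -> None:
    # Constant-bitrate encode with the given hardware encoder.
    subprocess.run([
        "ffmpeg", "-y", "-i", REFERENCE,
        "-c:v", encoder, "-b:v", bitrate, "-maxrate", bitrate, "-bufsize", bitrate,
        "-an", out,
    ], check=True)

def score(distorted: str) -> None:
    # VMAF of the encode against the reference; the score appears at the end of ffmpeg's log.
    subprocess.run([
        "ffmpeg", "-i", distorted, "-i", REFERENCE,
        "-lavfi", "libvmaf", "-f", "null", "-",
    ], check=True)

for encoder, out in [("h264_nvenc", "nvenc_6m.mp4"), ("h264_amf", "amf_6m.mp4")]:
    encode(encoder, "6M", out)
    score(out)
```

Sweeping the bitrate and plotting VMAF against it gives exactly the kind of quality-vs-bitrate curves shown in the charts above.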

 

And that's without mentioning AV1 which Intel might have by the time this gets released.


54 minutes ago, LAwLz said:

Those results are kind of awful for AMD though. 

The first graph is not really that relevant since it's using such a high bit rate.

For comparison, Twitch does not allow anything above 6Mbps.

Netflix ultra HD is around 15Mbps.

It's only once we get close to 20Mbps that AMD catches up. 

 

At the lower bitrates we see that for, let's say a VMAF of 75, Nvidia needs about 5.2Mbps and AMD needs about 6.6Mbps.

AMD needs about 27% more data for the same quality. 

 

And that's without mentioning AV1 which Intel might have by the time this gets released.

Oh I know, their flagship (not anymore, but whatever) card is easily beaten by a mobile chip that draws 80-90w. Given the performance of the 1080, if I were streaming, I would get a Pascal Quadro from eBay for $30, and use that for encoding. Then I would get a 6600/6600XT for the gaming/video editing part.

To clarify, I was agreeing with you, not arguing. 


16 minutes ago, DANK_AS_gay said:

Oh I know, their flagship (not anymore, but whatever) card is easily beaten by a mobile chip that draws 80-90w. Given the performance of the 1080, if I were streaming, I would get a Pascal Quadro from eBay for $30, and use that for encoding. Then I would get a 6600/6600XT for the gaming/video editing part.

To clarify, I was agreeing with you, not arguing. 

That has no relation to price or SKU range tho, since the media engines are usually shared across an entire generation, meaning that even a 1650 Super would beat a 6900 XT when it comes to streaming/transcoding.



Just now, igormp said:

That has no relation to price or SKU range tho, since the media engines are usually shared across an entire generation, meaning that even a 1650 Super would beat a 6900 XT when it comes to streaming/transcoding.

Why would they not try to brute-force it using the rest of the GPU, or the raytracing acceleration hardware built into RDNA2? Also, why do they keep the media engine the same? I don't want that capability in my cheap low-end card; I would rather have the price be lower. Take the 3050: I wouldn't want the full encode/decode capabilities of the 3090 Ti, especially if it costs me more money.


2 minutes ago, DANK_AS_gay said:

Why would they not try to brute-force it using the rest of the GPU, or the raytracing acceleration hardware built into RDNA2? Also, why do they keep the media engine the same? I don't want that capability in my cheap low-end card; I would rather have the price be lower. Take the 3050: I wouldn't want the full encode/decode capabilities of the 3090 Ti, especially if it costs me more money.

Because it doesn't use the GPU's shader cores; the media engine is a separate fixed-function block, not a general-purpose processor.

Same goes for Intel and AMD.



2 minutes ago, igormp said:

Because it doesn't use the GPU's shader cores; the media engine is a separate fixed-function block, not a general-purpose processor.

Same goes for Intel and AMD.

Fair enough. I guess it still looks bad that a $30 Quadro can crush a 6900XT, regardless of whether or not SKUs matter.


5 hours ago, LAwLz said:

Source? Last time I checked, which I believe was RDNA 2, AMD's HEVC encode couldn't even keep up with Nvidia's AVC encode at the same bitrate. 

 

I am not being antagonistic, by the way. I think proper comparisons between the three are quite rare, so the more I can find and read, the better.

As per the above posts, but you can also watch some really good comparison videos on YouTube, though those are for game streaming. When it comes to viewer experience, actually seeing differences is very difficult unless you're at the lower end of bit rates and resolutions. That's why I said it's perfectly usable. At much higher bitrates, i.e. not for streaming, the RDNA2 encoder does a much better job, which would be the sort of situation used for video editing acceleration.

 

Not saying it's as good as Nvidia's or Intel's but it's quite usable now and not having it is a problem. Just look at massive Threadripper builds getting matched or beaten by Intel CPUs with ~8 cores using Quicksync. It's a bit of a hard sell for those on a budget to start talking about the limitations of Quicksync when it realistically doesn't impact them much, not when you're talking many times higher system price and what Quicksync achieves is excellent image quality.

 

4 hours ago, LAwLz said:

The first graph is not really that relevant since it's using such a high bit rate.

It's actually the only one relevant to what I mentioned for video editing; you aren't concerned about Twitch streaming limits in Adobe Premiere at all.

 

Anyway if you are going to click off a stream because "it's unwatchable" then I highly doubt anyone really could actually notice

 

"It's fine", if game streaming is more important to you get an Nvidia card. Having a media encoder in a Ryzen CPU now is still important for things that will use it, note if you have a dGPU then this isn't really applicable which is why I'm motioning Quicksync and video editing because it's applicable with the dGPU.

 

Remember, we are talking about Ryzen CPUs now having an iGPU and encoders, not just RDNA2 encoder performance and quality.


1 hour ago, leadeater said:

Remember, we are talking about Ryzen CPUs now having an iGPU and encoders, not just RDNA2 encoder performance and quality.

Would be fun to see recreation done with the decode/encode hardware. It would likely need the same version + verification etc., so more issues I guess too.

Or using ML/DL to know what to remove and then recreate or upscale, if that ever gets used or is even possible.


On 5/21/2022 at 5:58 PM, leadeater said:

I have a 1960s-built house and it took a few hours; no, it's very simple if you need it. Just pay for it. Into an existing room, next to existing sockets, I ran new wire and installed new high-capacity 15A sockets.

 

"Just do it"

"When there is a will there is a way"

Not if you live in a condo, apartment, duplex, or any other form of attached housing you're not. HOAs and stratas will object to any renovations that don't go through them. Likewise, if you don't even own the property, the landlord will probably sue you for it.

 

On 5/21/2022 at 5:58 PM, leadeater said:

Your point is just wrong, your personal unwillingness to pay for an electrician to come in and do work just doesn't make what you said true.

Please talk to someone about how much it costs to actually get building permits for renovations in a major city. I know those stupid reality shows about house flipping like to make it look cheap and quick, but adding a dedicated 20A outlet to a bedroom on the opposite side of the house from the service panel is something that involves ripping up the walls or ceiling in every room. It's not like a commercial building where you can just run new cables on top of the fiberglass panel ceiling.

On 5/21/2022 at 5:58 PM, leadeater said:

But I did and it didn't cost much at all....

 

I wanted it, I needed it, I paid for it, I have it.

 

Nonsense. "You" can do whatever you want to your own house, but your insurance will be void if you don't get it properly installed, permits, etc. I'm not saying you can't. I'm saying that it's simply untrue for where most of the population exists in North America. Those that live in condos, or rent apartments do not have this option, and that's why I said you have to ultimately buy a property that has this feature already.

 

On 5/21/2022 at 5:58 PM, leadeater said:

Yes you are correct going above 900W in NA will cause problems, but it's not impossible to work with.

Here's the thing: those 3000VA UPSes are "special order". The 1500VA UPSes are $350. The 3000VA units are server-room units and cost $3400 and require the L5-30P. That's something you can arrange for in a commercial building as part of your lease. You aren't going to get that flexibility from a residential condo building. Condos are basically concrete boxes stacked on top of each other. It's hard enough to get fiber internet in one of these buildings if they were not pre-wired for it. Hell, I live in a wooden MDU and can't get fiber despite the tower across the street having it, and the MDU behind mine having it. If I wanted a 20A or 30A outlet in this unit, they would need to do an obnoxious amount of work, and that's just not going to happen. I doubt I would find a new condo or apartment anywhere in the metro that has 20A outlets in the unit outside the kitchen.

 

It's just not a thing, and Intel, NVIDIA, or AMD are not going to convince nerds to buy a new house just for a gamer PC.


1 hour ago, Kisai said:

Not if you live in a condo, apartment, duplex, or any other form of attached housing you're not. HOAs and stratas will object to any renovations that don't go through them. Likewise, if you don't even own the property, the landlord will probably sue you for it.

 

Please talk to someone about how much it costs to actually get building permits for renovations in a major city. I know those stupid reality shows about house flipping like to make it look cheap and quick, but adding a dedicated 20A outlet to a bedroom on the opposite side of the house from the service panel is something that involves ripping up the walls or ceiling in every room. It's not like a commercial building where you can just run new cables on top of the fiberglass panel ceiling.

Nonsense. "You" can do whatever you want to your own house, but your insurance will be void if you don't get it properly installed, permits, etc. I'm not saying you can't. I'm saying that it's simply untrue for where most of the population exists in North America. Those that live in condos, or rent apartments do not have this option, and that's why I said you have to ultimately buy a property that has this feature already.

 

Here's the thing: those 3000VA UPSes are "special order". The 1500VA UPSes are $350. The 3000VA units are server-room units and cost $3400 and require the L5-30P. That's something you can arrange for in a commercial building as part of your lease. You aren't going to get that flexibility from a residential condo building. Condos are basically concrete boxes stacked on top of each other. It's hard enough to get fiber internet in one of these buildings if they were not pre-wired for it. Hell, I live in a wooden MDU and can't get fiber despite the tower across the street having it, and the MDU behind mine having it. If I wanted a 20A or 30A outlet in this unit, they would need to do an obnoxious amount of work, and that's just not going to happen. I doubt I would find a new condo or apartment anywhere in the metro that has 20A outlets in the unit outside the kitchen.

 

It's just not a thing, and Intel, NVIDIA, or AMD are not going to convince nerds to buy a new house just for a gamer PC.

So, basically the problem is that you live in the US and want to use a UPS with a really high-power setup in a place that you can't easily adjust to 220V.

 

My bet is that this is a minority among AMD/Intel/Nvidia's customers, and they won't care about you, nor will it make a dent in their sales.



3 hours ago, leadeater said:

Not saying it's as good as Nvidia's or Intel's but it's quite usable now and not having it is a problem. Just look at massive Threadripper builds getting matched or beaten by Intel CPUs with ~8 cores using Quicksync. It's a bit of a hard sell for those on a budget to start talking about the limitations of Quicksync when it realistically doesn't impact them much, not when you're talking many times higher system price and what Quicksync achieves is excellent image quality.

3 hours ago, leadeater said:

It's actually the only one relevant to what I mentioned for video editing; you aren't concerned about Twitch streaming limits in Adobe Premiere at all.

That comparison is kinda moot since hw encoding offers worse quality than sw encoding, which is fine for quick previews on your projects since it's way faster, but for your final product you'll likely use sw encoding anyway.



45 minutes ago, igormp said:

That comparison is kinda moot since hw encoding offers worse quality than sw encoding, which is fine for quick previews on your projects since it's way faster, but for your final product you'll likely use sw encoding anyway.

There are plenty of YouTube creators that have used QuickSync even for final video; it's actually quite good. As you say, even if you don't use it for the final render, the workflow improvement you can get from having it is very nice.

 

[image: benchmark chart showing a ~32% improvement with QuickSync enabled]

 

Even when not used for export, that ~32% improvement probably isn't something you'd want to ignore.

 

 

 

Anyway, there's always this weird notion that if one competing solution is better than the other, that therefore makes the other unusable; that simply isn't how things always work. Something can be "worse" and still "usable".


3 hours ago, Kisai said:

Please talk to someone about how much it costs to actually get building permits for renovations in a major city

You do not require a building permit for this, so why bring it up at all? You're not building, nor are you renovating; there is zero construction required. I suggest you actually get in contact with an electrician, get a quote for such work and ask how it is done, because it seems you don't know, which is fine.

 

There are two ways. One is via the ceiling space and a long drill bit that extends the entire length of the wall: you drill down through the framing and then use fish tape to pull the new wire through, which is how it was done for my property. The other way is under the floor/house if you have a raised house. Neither of these jobs requires permits, only consent from the property owner if you do not own it.

 

If you do not have timber framing, or it's a multi-story dwelling, then you'll be using capping.

 

3 hours ago, Kisai said:

Nonsense. "You" can do whatever you want to your own house, but your insurance will be void if you don't get it properly installed, permits, etc

What on earth have I been saying? Did you not read the part about getting an electrician in to do the work and paying for it? Come on, please, this is just getting stupid now.

 

Point is, because of your situation you made absolute statements about two things that are not true:

  1. Residential and small office NA UPS models above 1500VA do not exist and can't be easily purchased (note: literally being on store shelves is not the definition of a consumer product)
  2. NA doesn't have outlets that can handle more than 900W and/or they cannot be installed into existing properties; only newer properties would have them

 

I was just trying to point out that you can in fact purchase such UPSs and the other point has a solution that is readily available, maybe just not to everyone.

 

3 hours ago, Kisai said:

The 3000VA units are server-room units and cost $3400 and require the L5-30P.

Incorrect, this is just bad logic. So a UPS of the same model, part of the same product family, suddenly becomes non-consumer once you want to purchase the above-1500VA configuration? That's not how things work. Neither does anything prevent you or anyone from buying a tower (or rackmount) server/telecommunications UPS and using it at home; it's just product marketing.

 

Quote

The Eaton 5E is an essential line interactive UPS that provides cost effective and reliable power protection against power outages and bad power quality. Thanks to its small size the 5E can be installed easily either in a business environment or at home.

Also you can get a 2000VA UPS for less than $400 NZD, not USD, NZD.

https://www.pbtech.co.nz/product/UPSPWR64264/Eaton-5E-Tower-UPS-2000VA-1200W-3-X-NZ-Power-Socke

 

or if you want an actual US option

https://www.cyberpowersystems.com/product/ups/smart-app-sinewave/pr2200lcdsl/ ($1000 USD)

 

I did not look hard or for long for a US option, there could well be cheaper options for above 1500VA.
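As a side note for anyone puzzled by the VA vs. watt numbers being thrown around: a UPS's real-power output is its VA rating multiplied by its power factor, which is why a typical 1500VA consumer unit tops out around 900W while the Eaton 2000VA unit linked above is rated at 1200W. A back-of-the-envelope sketch (the power-factor values are just typical examples, not spec-sheet figures for any particular model):

```python
def usable_watts(va: float, power_factor: float) -> float:
    # Real power a UPS can deliver = apparent power (VA) x power factor.
    return va * power_factor

print(usable_watts(1500, 0.6))  # 900.0  -> the ~900 W ceiling mentioned earlier
print(usable_watts(2000, 0.6))  # 1200.0 -> matches the Eaton 5E 2000VA / 1200W listing
print(usable_watts(3000, 0.9))  # 2700.0 -> server-room class unit

# For reference, a standard NA 15 A / 120 V circuit is 1800 VA before any derating.
print(120 * 15)
```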

 

 

Anyone with the rights or permissions to get the electrical work done can do so; that's a "your situation may vary" problem, not an absolute "nobody can do this". Is it dumb that this might become a requirement if you want to be an x090/x090 Ti + i9 owner? Yes, if that actually eventuates. Thing is, anyone buying those sorts of products can certainly afford the electrical work required to use them if that happens.


3 hours ago, igormp said:

So, basically the problem is that you live in the US and want to use a UPS with a really high-power setup in a place that you can't easily adjust to 220V.

 

My bet is that this is a minority among AMD/Intel/Nvidia's customers, and they won't care about you, nor will it make a dent in their sales.

The point is that the "enthusiast" tier has a power cap before people simply can't use those parts because they won't be able to operate them. It's not like buying a sports car, and keeping it in the garage because it's not street legal, at least you can look at it and drive it around your property. I'd actually say we passed that power cap when both Intel and AMD started having Dell/HP require liquid-cooling for the i9/R9 parts. Your average PC does not have liquid cooling, and if you go look at Steve's latest video on trying to save the Dell Alienware unit, you'll also see how Dell basically turned the i9 into an i7 through that incompetent cooling design, because they don't want to change from their 10 year old PC chassis standard.

 

And because I know leadeater is going to try to call BS on this...

[image: Dell configurator screenshot]

No, Dell requires you to select the liquid cooling option when you select the i9/R9.

 

2 hours ago, igormp said:

That comparison is kinda moot since hw encoding offers worse quality than sw encoding, which is fine for quick previews on your projects since it's way faster, but for your final product you'll likely use sw encoding anyway.

Software encoding in DaVinci Resolve is fine, just slow. The hardware encoder is generally used in two situations: real-time compositing, and video scrubbing.

 

The quality of the hardware encoder is just as configurable as the software one, and has the same tradeoffs. Suffice it to say Quicksync "sucks" because if your CPU paths or quality knobs don't invoke the hardware logic, then it's just a software encoder.

 

If you want to see how this plays out, try encoding an H.265 4K stream with QuickSync. It will fall over. I've only ever had the QuickSync encoder work under very tight tolerances, both in DaVinci Resolve Studio and OBS Studio.

 

Nvidia's NVENC has a completely different failure mode when it doesn't like the tuning knobs: it will usually just spit out the key frames, or just the B-frames.

 

When you're playing around with DaVinci, you really don't care about the quality of the scrub, only the picture it lands on. But if you're trying to configure a stream for Twitch (6Mbit max) or YouTube (51Mbit, https://support.google.com/youtube/answer/2853702 ), you have to consider the entire composition tree on the client end.

 

=== Viewer sees ===
6Mbit 1080p60 stream

=== Streamer setup ===
Inputs:
720p60 to 2160p60 camera, or vtuber avatar
+
Streamlabs/StreamElements overlay (web browser compositor)
+
Twitch redeems (web browser compositor)
+
Video game or whatever media the streamer is broadcasting
or
1080p30-240 to 2160p240 stream on another PC via a capture tool or NDI

 

Every video input on the computer consumes one "hardware acceleration" engine, and this isn't measured as "oh, you can only have 32 streams" or something; it's really about the amount of media acceleration resources, which is why, if you have some obtuse tunables set, you might take the capability from "32" down to 1.

https://developer.nvidia.com/nvidia-video-codec-sdk

 

Note that 4:2:2 is not supported on the decoder side.

 

So let's say, in total, a streamer has to real-time decode three 2160p60 streams (because nobody supports streaming 120fps), and the game input has to be sub-sampled every nth frame to bring it to 60fps for the stream. Then there are several composite layers that are just pre-multiplied alpha "browser canvas"; each layer consumes CPU resources as if you had a browser window constantly animating, and then there's the final encode stage.

 

I've seen streamers who have the "highest-end" systems you can buy, like R9s and GeForce 3090s, have their computer dragged into low framerates because their CPU or GPU is being fought over by something. Like, literally, you'll watch their stream and suddenly it freezes for everyone while the audio keeps going.

 

Ultimately, the Ryzen 3xxx parts were a game changer, but a 24-core CPU doesn't automatically mean you can run software encoders any faster. People have been asking this question with x264 ever since we got to 8-core processors, and the answer has been "rapidly diminishing returns":

https://blog.otterbro.com/x264-threads-we-gotta-talk/

 

It depends on the input resolution, frame rate and other knobs, just like with the hardware encoders. 

 

Anyway, most people are just going to run things at defaults unless they are working on animation (for which certain tunables should be adjusted to avoid banding in gradients).
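For what it's worth, these are the sorts of knobs being talked about; a hedged example of a plain x264 software encode with an explicit thread count and the animation tune (file names and values are purely illustrative, not recommendations for any particular workload):

```python
import subprocess

# Illustrative only: preset, tune, CRF and thread count are the tunables
# discussed above, not recommended values.
subprocess.run([
    "ffmpeg", "-y", "-i", "input.mkv",
    "-c:v", "libx264", "-preset", "slow", "-tune", "animation",
    "-crf", "18", "-threads", "16",
    "-c:a", "copy",
    "output.mkv",
], check=True)
```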


14 minutes ago, Kisai said:

The point is that the "enthusiast" tier has a power cap before people simply can't use those parts because they won't be able to operate them.

It has not. As long as a 1000W PSU can handle such a setup, enthusiasts won't care.

And no, the majority of enthusiasts (and even PC users in general) don't even use a UPS, so that's not a limitation.

 

16 minutes ago, Kisai said:

Slow however.

Well, that's a given since it's done through... software instead of a dedicated, fixed function piece of hardware.

Running your encoding at the fast/faster presets should allow faster sw encoding at the cost of quality, though.

 

17 minutes ago, Kisai said:

The quality of the hardware encoder is just as configurable as the software one

It is not, it's way more limited to what the underlying hardware actually supports, unlike sw where you can do whatever you want (at the cost of speed, ofc).

 

20 minutes ago, Kisai said:

Every video input on the computer consumes one "hardware acceleration" engine, and this isn't measured as "oh, you can only have 32 streams" or something; it's really about the amount of media acceleration resources,

That's a software limit on nvidia's part, which was disabled iirc. You're likely to first hit the actual bandwidth limit of the encoder before you run into that tho.

 

26 minutes ago, Kisai said:

So let's say, in total, a streamer has to real-time decode three 2160p60 streams

Did you mean encode? Otherwise it makes no sense; you totally misunderstood how the decoder works, and there's no such thing as a "media acceleration resources" limit. Much like with the encoder, your only limit here is the bandwidth of the decoder, which is pretty capable and no streamer will hit it. Also, the only thing that uses up the decoder is actual encoded video files/streams, not a browser window or game.

 

Just as an example, here's me overloading the NVDEC of my previous 2060 Super with some HEVC 4K 10-bit @ 400Mbps videos:

[image: GPU monitoring screenshot showing NVDEC utilization]

 

So a bandwidth of ~1.6Gbps is the limit of the decoder before it starts to drop frames, and that's a LOT.
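If anyone wants to repeat that kind of saturation test, launching several hardware-accelerated decodes of a very high-bitrate clip in parallel and watching the video-engine load is enough. A rough sketch, assuming an ffmpeg build with NVDEC support (the file name and stream count are placeholders):

```python
import subprocess

VIDEO = "hevc_4k_10bit_400mbps.mkv"  # placeholder: some very high-bitrate test clip
N_STREAMS = 4                        # add parallel decodes until frames start dropping

# -hwaccel cuda decodes on the GPU's video engine; -f null discards the output
# so only the decode path is being exercised.
procs = [
    subprocess.Popen(["ffmpeg", "-hwaccel", "cuda", "-i", VIDEO,
                      "-benchmark", "-f", "null", "-"])
    for _ in range(N_STREAMS)
]
for p in procs:
    p.wait()
```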

 

32 minutes ago, Kisai said:

People have been asking this question with x264 ever since we got to 8 core processors, and the answer has been "rapidly diminishing returns"

https://blog.otterbro.com/x264-threads-we-gotta-talk/

That link is not about diminishing returns, but about people misusing a parameter.

 

I'd say that scaling for x264 is pretty good, even though it's not linear: https://openbenchmarking.org/test/pts/x264&eval=af4eb8b8591e755e660e5a3da27a902920bfab34

 

Going from a 5800X to a 5950X nets you a ~70% boost, and past the 32-thread mark is where we actually see no improvement whatsoever.

 

So yeah, more cores will surely help (up to a point), with the major benefit being that you can isolate your main activity from the encoding through core isolation so they don't interfere with each other under full load.
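If you want to experiment with that kind of core isolation without touching BIOS or scheduler settings, pinning the processes to disjoint core sets is one way to do it; a sketch using psutil (the process names and core ranges are made up for illustration):

```python
import psutil

GAME_CORES = list(range(0, 8))      # hypothetical: first 8 logical CPUs for the game
ENCODER_CORES = list(range(8, 16))  # hypothetical: next 8 for OBS / the x264 encode

for proc in psutil.process_iter(["name"]):
    name = (proc.info["name"] or "").lower()
    try:
        if name.startswith("obs"):
            proc.cpu_affinity(ENCODER_CORES)   # pin the encoder away from the game
        elif name.startswith("yourgame"):      # placeholder process name
            proc.cpu_affinity(GAME_CORES)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass
```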



1 hour ago, igormp said:

It has not. As long as a 1000W PSU can handle such a setup, enthusiasts won't care.

And no, the majority of enthusiasts (and even PC users in general) don't even use a UPS, so that's not a limitation.

Sure, fry your expensive toy.

1 hour ago, igormp said:

Well, that's a given since it's done through... software instead of a dedicated, fixed function piece of hardware.

Running your encoding at the fast/faster presets should allow faster sw encoding at the cost of quality, though.

Experience has shown that all the same parameters you can pass to x264/x265 can also be passed to QuickSync and NVENC, but the consequences tend to be dramatically different, with QuickSync failing to encode 4K at anything other than what amounts to "fastest" CBR settings, which isn't great for quality. Nvidia's I've not been able to break recently, but I also switched to a 3070 Ti. The 1080 would break at 4K because the GPU was pretty much maxed out already running something at 4K.

https://developer.nvidia.com/blog/nvidia-turing-architecture-in-depth/

[image: NVENC/NVDEC chart from Nvidia's Turing architecture post]

 

1 hour ago, igormp said:

It is not, it's way more limited to what the underlying hardware actually supports, unlike sw where you can do whatever you want (at the cost of speed, ofc).

Again, I'm talking about tuning the compression quality, which you can do, but those fixed-function encoders resort to software paths when you tell them anything they don't actually support, which you see with QuickSync more than NVENC.

 

Showing me 400Mbps streams doesn't prove anything when the problem is the complexity of the stream compression, e.g. "this may not work on older smartphones, smart TVs, Roku devices, iPods, etc." because those devices don't have the necessary buffer space to decode it, or may just outright drop the P-frames from the stream.

 

1 hour ago, igormp said:

That's a software limit on nvidia's part, which was disabled iirc. You're likely to first hit the actual bandwidth limit of the encoder before you run into that tho.

That was what I was referring to. The "number of streams" is a software driver limitation, but the underlying hardware resources are not. If you're driving a game that's already maxing out the GPU performance and the video memory, the encoder suffers.

 

1 hour ago, igormp said:

Did you mean encode? Otherwise it makes no sense; you totally misunderstood how the decoder works, and there's no such thing as a "media acceleration resources" limit. Much like with the encoder, your only limit here is the bandwidth of the decoder, which is pretty capable and no streamer will hit it. Also, the only thing that uses up the decoder is actual encoded video files/streams, not a browser window or game.

 

No no. You don't understand the issue at all.

 

When you run OBS, every layer in the scene goes through multiple passes. So if you have an H.264-encoded camera feed at 4K and an NDI HX video feed at 4K, those are being decoded somewhere at the same time as the encoder is running. OBS isn't doing the compositing entirely on the CPU before sending it back to the GPU. It can't tell the capture source to "wait" until it's done drawing the frame to start the next layer, so every compositing layer adds up. If you have a multi-core CPU this comes in handy, to a point, but you can't decode videos in software and suddenly expect no penalty for it.

 

On Windows, whichever GPU drives the monitor OBS starts up on is the one that ends up doing the decoding. If you drag OBS to another monitor, that doesn't change; it still runs on the original GPU it started up on, but now it's also invoking a memory copy from one GPU to the other through the CPU.

 

So if you want to minmax whatever resources you have available, you start OBS up on the iGPU's monitor but play your game on the Nvidia card's. This is in fact how I get around having games compete for the GPU in the first place.

 

On Linux you also have another problem

https://github.com/obsproject/obs-studio/pull/1758

https://www.gamingonlinux.com/2022/04/a-developer-made-a-shadowplay-like-high-performance-recording-tool-for-linux/

 

The problem here is that the average person finds OBS too difficult to set up a stream with efficiently, and many people just stream the desktop because they don't know how to layer things so as not to waste system resources.


9 hours ago, leadeater said:

As per the above posts, but you can also watch some really good comparison videos on YouTube, though those are for game streaming. When it comes to viewer experience, actually seeing differences is very difficult unless you're at the lower end of bit rates and resolutions. That's why I said it's perfectly usable. At much higher bitrates, i.e. not for streaming, the RDNA2 encoder does a much better job, which would be the sort of situation used for video editing acceleration.

Comparisons on YouTube are bad and cannot be trusted, because YouTube applies additional compression.

For an accurate comparison we should look for things like VMAF scores.

 

Benchmarking video encodes by looking at a YouTube video is like trying to benchmark two GPUs against each other by watching a YouTube video with no FPS counter in it and going "well, both look kinda smooth, I think".

 

 

9 hours ago, leadeater said:

Not saying it's as good as Nvidia's or Intel's but it's quite usable now and not having it is a problem. Just look at massive Threadripper builds getting matched or beaten by Intel CPUs with ~8 cores using Quicksync. It's a bit of a hard sell for those on a budget to start talking about the limitations of Quicksync when it realistically doesn't impact them much, not when you're talking many times higher system price and what Quicksync achieves is excellent image quality.

Is that even a relevant market? I doubt people are buying massive Threadripper builds for video editing and then not putting any dGPUs in them.

Actually, do we even know if Ryzen 7000's iGPU will have hardware-accelerated encoding on it? AMD has cut that feature from their lower-end GPUs before, like the RX 6500 XT, so maybe they have done so here too.

 

 

9 hours ago, leadeater said:

It's actually the only one relevant to what I mentioned for video editing; you aren't concerned about Twitch streaming limits in Adobe Premiere at all.

 

Anyway if you are going to click off a stream because "it's unwatchable" then I highly doubt anyone really could actually notice

 

"It's fine", if game streaming is more important to you get an Nvidia card. Having a media encoder in a Ryzen CPU now is still important for things that will use it, note if you have a dGPU then this isn't really applicable which is why I'm motioning Quicksync and video editing because it's applicable with the dGPU.

That's an awful video. It is very clear that the person who made it has no experience when it comes to comparing videos. 

He does not even use the same video footage for the comparisons. He uses one video file for the AMD stuff, and then a completely different video for the Nvidia stuff.

For the "recording" section he couldn't even be bothered to change the default settings, so he ended up using HEVC for the AMD encode, and AVC for the Nvidia encode. He doesn't even mention which quality setting he used. 

 

He does not mention what bit rate he used for the "streaming" section either. But he later in the video mentions that he used "almost streaming bitrate", which to me indicates that he did use higher-than-allowed bit rates for the streaming section as well, probably to make AMD look better than they really are (since the higher bit rate you use, the less difference there is).

 

This video is the equivalent of comparing the 6900 XT vs the 3090 Ti by playing two different games, with different quality settings, without an FPS counter, and then going "well, both seem smooth, so I guess they both perform about the same".

 

He does not even seem to realize that Nvidia supports HEVC (H.265) since he says this:

Quote

If you're recording your footage, especially at higher bit rates so that you can edit some clips later to put on Youtube, the AMD implementation takes the win. The fact that it supports H.265 at all vs Nvidia flat out not, is actually a really nice step up and offers you better quality and smaller file sizes.

 

Nvidia have supported H.265 encoding for longer than AMD... This guy is completely clueless.

This is possibly the worst comparison I have ever seen. Flawed testing methodology, the guy is clearly clueless and he doesn't even seem to test what he claims to test.

 

The Chips and Cheese comparison that was posted earlier is way better.

 

 

 

The reason I picked VMAF 75 as the comparison point earlier is that it's where I would consider the quality fairly decent. That is in line with Netflix's guidelines. AMD's encoder is not good enough to reach that score within the bit rate that Twitch allows. Nvidia's can, and with a decent margin as well.

 

 

AMD is not even able to match Nvidia's 10 series GPUs when it comes to encoding, much less the 20 series and up.

 

AMD's encoding solution is "usable" in the same sense a 2500K is usable in 2022. It is, but it is significantly behind what the competition offers. You are most likely better off using any other encoder. AMD's should be your last option.

 

 

9 hours ago, leadeater said:

Remember, we are talking about Ryzen CPUs now having an iGPU and encoders, not just RDNA2 encoder performance and quality.

I am not sure I follow.

The iGPU is RDNA2, so RDNA2 encoder performance and quality is highly relevant.


6 hours ago, igormp said:

That comparison is kinda moot since hw encoding offers worse quality than sw encoding, which is fine for quick previews on your projects since it's way faster, but for your final product you'll likely use sw encoding anyway.

Not if you've got an Nvidia GPU.

Nvidia's encoder outperforms x265 at the "slow" preset if you keep the bit rate the same between the two. I doubt many people want to use anything slower than "slow" anyway.

 

[image: NVENC vs. x265 "slow" preset quality comparison chart]

 

 

 

  

6 hours ago, leadeater said:

Anyway, there's always this weird notion that if one competing solution is better than the other, that therefore makes the other unusable; that simply isn't how things always work. Something can be "worse" and still "usable".

Yes, but when the competing solutions are a lot better and very widely available then it does not really make much difference. It's an "also-ran".

This will only be a selling point for people that:

1) Want to encode video.

2) Use the fairly limited selection of programs that can use AMD's encoder.

3) Do not have a dGPU, or have a very old one (over 6 years old).

4) Got a brand new Ryzen 7000 series CPU.

 

And this all assumes AMD even implemented the encoding engine on this iGPU, which is not something they did for their lower end dGPUs. 

 

I think that market is minuscule and not at all the reason why AMD did this. My guess is that they did it so that people can use their PCs without a dGPU, plain and simple. Probably the biggest market for that is office PCs, or people who do it temporarily, such as if their dGPU breaks or while they wait for a new dGPU to arrive.


@LAwLz I'd fully expect the iGPU in the 7000 series to be capable of encoding - from the HD7000 series to the end of VCE (replaced by VCN with Raven and Picasso), only their Oland GPU lacked an encoder. Even the shittiest APU of theirs has had the ability to encode video.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


2 hours ago, LAwLz said:

Yes, but when the competing solutions are a lot better and very widely available then it does not really make much difference. It's an "also-ran".

This will only be a selling point for people that:

1) Want to encode video.

2) Use the fairly limited selection of programs that can use AMD's encoder.

3) Do not have a dGPU, or have a very old one (over 6 years old).

4) Got a brand new Ryzen 7000 series CPU.

As with Intel iGPUs and QuickSync, you could have both an Nvidia GPU and still benefit from the iGPU, as is the case with Adobe Premiere. That was literally the point: AMD needs a competing solution to QuickSync for their CPUs, not a competing solution to NVENC from their CPUs 🤦‍♂️

 

That's a GPU discussion more than a CPU one. If you do want a much more powerful encoder then a good and economical place for that is the GPU, and that's most likely where AMD will put their latest-generation encoders first, but that's just my assumption based on history.

 

2 hours ago, LAwLz said:

And this all assumes AMD even implemented the encoding engine on this iGPU, which is not something they did for their lower end dGPUs. 

It's a bigger assumption to say it won't have it, considering that not having it is the exception; only a small number of AMD products lack it, and that's directly because they are repurposed products that were originally paired with an AMD APU that has the encoder, i.e. mobile GPUs slapped onto AIB cards.

 

2 hours ago, LAwLz said:

That's an awful video. It is very clear that the person who made it has no experience when it comes to comparing videos. 

Ok, then I'll set you a challenge: find a video comparing RDNA2 vs NVENC game streaming footage that is so vastly different I could tell in less than a second, and that would also cause me to not want to watch. If you can find that, you'll have proven your point.

 

The problem you'll find, as did I, was actually finding RDNA2 comparisons. Sure, EposVox has an RDNA1 comparison, but not an RDNA2 one.

 

However, this entire discussion section is entirely irrelevant. When I said streaming is the worst case for this I meant it, and I thought I was quite clear that if you are game streaming you have a dGPU, so tell me how a Ryzen iGPU with an encoder is relevant here when you'd be using the dGPU?

 

Like I keep saying, it's part of an answer to QuickSync, not NVENC. AMD needs an answer to this just as much as they need more options with an iGPU.

 

2 hours ago, LAwLz said:

For an accurate comparison we should look for things like VMAF scores.

No, we should not. When making an argument about whether something is "usable" or not, use your eyes. If you want to make a measured quality assessment, then use VMAF. I really wish you wouldn't keep trying to make this into a comparison argument about which is "better"; that's already settled and not being debated.

 

RDNA2, as far as I know and have seen, is perfectly usable; usable NOT meaning the best or better than others, just usable.

 

Like, please stop making this into a dGPU vs dGPU comparison, which is what's actually applicable to game streaming; sideline that, it's not part of this. MEGA Sighhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh

 

The point is two towns over by now, ya missed the highway exit.

 

2 hours ago, LAwLz said:

I am not sure I follow.

The iGPU is RDNA2, so RDNA2 encoder performance and quality is highly relevant.

As above, reframe your thinking to a CPU, not a GPU, and not game streaming, which WILL have a GPU, most likely an Nvidia GPU. So now have a think about where QuickSync is applicable, even with a dGPU; there you go, follow that.

 

We aren't JUST talking about RDNA2 encoders, we are talking about CPUs with an iGPU; there is a clear, and what should be obvious, difference here. Last I checked you'll not find NVENC in an x86 CPU, so be careful what you are actually comparing and why you are comparing it.


39 minutes ago, Dabombinable said:

@LAwLz I'd fully expect the iGPU in the 7000 series to be capable of encoding - from the HD7000 series to the end of VCE (replaced by VCN with Raven and Picasso), only their Oland GPU lacked an encoder. Even the shittiest APU of theirs has had the ability to encode video.

I think it will have it too, but since we don't actually know yet I think it's a bit premature to go "they added the iGPU so that they can compete with QuickSync".

 

 

 

4 minutes ago, leadeater said:

As with Intel iGPUs and QuickSync, you could have both an Nvidia GPU and still benefit from the iGPU, as is the case with Adobe Premiere. That was literally the point: AMD needs a competing solution to QuickSync for their CPUs, not a competing solution to NVENC from their CPUs 🤦‍♂️

I don't understand what you mean. Adobe Premiere also supports NVENC. If you've got an Nvidia GPU then you have no reason to use QuickSync, assuming the quality is similar.

 

 

8 minutes ago, leadeater said:

It's a bigger assumption to say it won't have it, considering that not having it is the exception; only a small number of AMD products lack it, and that's directly because they are repurposed products that were originally paired with an AMD APU that has the encoder, i.e. mobile GPUs slapped onto AIB cards.

I am not making any assumptions. I never said it won't have it. You, on the other hand, said it will. I think we should wait for more details before making that kind of statement.

 

 

12 minutes ago, leadeater said:

Ok, then I'll set you a challenge: find a video comparing RDNA2 vs NVENC game streaming footage that is so vastly different I could tell in less than a second, and that would also cause me to not want to watch. If you can find that, you'll have proven your point.

 

The problem you'll find, as did I, was actually finding RDNA2 comparisons. Sure, EposVox has an RDNA1 comparison, but not an RDNA2 one.

Yep, I know, which is why I said:

18 hours ago, LAwLz said:

I think proper comparisons between the three are quite rare so the more I can find and read the better. 

 

 

 

13 minutes ago, leadeater said:

However, this entire discussion section is entirely irrelevant. When I said streaming is the worst case for this I meant it, and I thought I was quite clear that if you are game streaming you have a dGPU, so tell me how a Ryzen iGPU with an encoder is relevant here when you'd be using the dGPU?

 

Like I keep saying, it's part of an answer to QuickSync, not NVENC.

I feel like we are talking past each other right now because we have different use cases and definitions of words in mind.

 

When you say this iGPU in Ryzen 7000 is an answer to QuickSync and not NVENC, what exactly do you mean by that?

Are you perhaps using QuickSync to refer to video decoding? Because I and everyone else so far have only talked about video encoding.


4 minutes ago, LAwLz said:

When you say this iGPU in Ryzen 7000 is an answer to QuickSync and not NVENC, what exactly do you mean by that?

Literally that. Time for you to figure it out at this point. And it's not THE answer, it's an answer as to why AMD would want it in their CPUs more broadly.

 

4 minutes ago, LAwLz said:

Are you perhaps using QuickSync to refer to video decoding? Because I and everyone else so far have only talked about video encoding.

Well, hang on now, since it was my point about QuickSync then let's, I don't know, actually talk about what I said and not game stream encoding? Seems like a fair ask, right?

 

18 hours ago, leadeater said:

It'll be so AMD has an answer for Intel Quicksync because that actually does make a huge difference when utilized, even with a dGPU in the system.


3 minutes ago, leadeater said:

 

 

However, this entire discussion section is entirely irrelevant. When I said streaming is the worst case for this I meant it, and I thought I was quite clear that if you are game streaming you have a dGPU, so tell me how a Ryzen iGPU with an encoder is relevant here when you'd be using the dGPU?

 

Like I keep saying it's a part of an answer to QuickSync, not NVENC.

 

Practical experience-wise, it's possible to use two encoders on the same system, and you'd use Quicksync (iGPU) when you want to spare your dGPU from the overhead. That said, the iGPU only works in two scenarios:

1. You're encoding one video and don't care how long it takes

2. You're streaming/scrubbing a video and can settle for "Good enough"

 

I watch a fair number of streamers on Twitch, and let me tell you, "FPS" style games are the worst to watch (DBD, Fortnite, Valorant, Apex) when the streamer uses settings that result in blurry motion or slide-shows of frame rates.

 

You probably would not use QuickSync to produce a UHD video intended for Netflix/Amazon/Apple/Disney+ VOD, and you probably wouldn't use x264/x265 to produce a Twitter/YouTube/TikTok video that you want done the same day. We're not yet at a point where everyone has access to 4K encoders (be it H.264, H.265, or AV1) and the upstream bandwidth to use them.

 

When we get to AV1 encoding in hardware, things will change, but for now, Nvidia couldn't be bothered to put AV1 encoding in their 30-series hardware. Intel Arc will support AV1, which means/suggests that if Ryzen 7xxx and Intel 13th gen don't have AV1, we're probably going to see another year or two go by with neither YouTube nor Twitch supporting it, and subsequently Netflix, Apple, Disney+, etc. not supporting it.

 

