
Floatplane DOES NOT provide users with better audio quality

AHPanda
Solved by Divritenis:

Seems that the audio does indeed differ between FP and YT. From YT I can download a 48 kHz stream that essentially has no low-pass filter (LPF). The 44.1 kHz stream has an LPF at around 16 kHz. FP's is around 14.4 kHz, same as in the Reddit post.

 

FP: [spectrogram screenshot]

YT: [spectrogram screenshot]

 

Luke and his team should look into this. I'm not going to brand this as lying; more likely something has changed since the initial comments about it on WAN Show.

19 hours ago, LAwLz said:

140 MB = 140 × 8 Mb = 1120 Mb

1120 Mb / 938 s ≈ 1.2 Mbps

The bitrate of the entire video is roughly 1.2 Mbps, or 1200 kbps. That includes the video, audio, metadata, container, and so on.

It's very simple math that even a middle schooler should be able to do with ease: just divide size by time.

It's MiB, not MB. Also, depending on how the bitrate is calculated, lossless compression may be applied to the stream before the calculation, so it's not exactly true that you can just subtract out the audio and metadata.

 

Also, @japers posted that his FP download of apparently the same video produced a different result: the average bitrate is actually 1.24 Mbps according to the program.
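To make the MB-vs-MiB point concrete, here's a quick sketch. The 140 figure and the 938 s duration are taken from the math quoted above; treat this as a back-of-the-envelope check, not an exact measurement:

```python
def avg_bitrate_mbps(size_bytes, duration_s):
    """Average bitrate of the whole container in megabits per second."""
    return size_bytes * 8 / duration_s / 1e6

duration_s = 938                                         # seconds, per the quote
decimal = avg_bitrate_mbps(140 * 1000**2, duration_s)    # 140 MB (decimal units)
binary = avg_bitrate_mbps(140 * 1024**2, duration_s)     # 140 MiB (what Windows labels "MB")
print(f"{decimal:.2f} vs {binary:.2f} Mbps")             # ~1.19 vs ~1.25
```

Notably, the MiB reading lands near the 1.24 Mbps figure @japers' tool reported, so part of the discrepancy may be nothing more than unit labeling.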

 

Overall, though, strictly comparing bitrates isn't something that can be done either. It's actually possible for a video with a lower bitrate to look better than one with a higher bitrate, even on the same codec. It depends on how much compute you are willing to put into the motion estimation of each macroblock. I can't remember offhand the maximum vector size each macroblock can be encoded as moving, but in general the search is much more limited; that's why you can effectively trade encoding time for a lower bitrate at higher quality.

3735928559 - Beware of the dead beef


1 hour ago, wanderingfool2 said:

It's MiB, not MB. Also, depending on how the bitrate is calculated, lossless compression may be applied to the stream before the calculation, so it's not exactly true that you can just subtract out the audio and metadata.

 

Also, @japers posted that his FP download of apparently the same video produced a different result: the average bitrate is actually 1.24 Mbps according to the program.

It's the same in Windows: Windows incorrectly labels MiB as MB all the time, so as long as you stick to one unit it works out in the end anyway. It's needlessly nitpicky to bring that up. I was just explaining how to get a rough estimate quickly in case someone wanted to double-check it for themselves.

 

My post was meant as a "here is a quick way to do a sanity check for yourself". That's why I was deliberately a bit vague in some cases, like how much the metadata takes up, because it wasn't the point of the post.

 

 

1 hour ago, wanderingfool2 said:

Overall, though, strictly comparing bitrates isn't something that can be done either. It's actually possible for a video with a lower bitrate to look better than one with a higher bitrate, even on the same codec. It depends on how much compute you are willing to put into the motion estimation of each macroblock. I can't remember offhand the maximum vector size each macroblock can be encoded as moving, but in general the search is much more limited; that's why you can effectively trade encoding time for a lower bitrate at higher quality.

I know. I linked to a thread where I explained this precise thing several months ago, when someone else brought it up on the forum regarding Floatplane quality.

 

I am not really sure what you mean by "max vector size each macroblock can be encoded as moving". Those are all terms used in x264 encoding, but the way you use them doesn't make much sense to me. Are you talking about the size of the mbtree? Because as far as I know, there is no limit to the number of frames a motion vector prediction can be made from; it looks frame by frame until it no longer finds a better motion prediction vector, and this is done using a range of pixels, not frame numbers.

Or are you talking about when B-frames should be used compared to P-frames? 

 

 

There are also way more parameters you can tweak than just motion estimates. x264 has 23 different settings just for how the analysis of the video should be handled. It has another 25 for rate control. It also has 22 different settings for how various frame types should be used. All of these settings can alter the compression ratio and visual quality. 


11 hours ago, LAwLz said:

I am not really sure what you mean by "max vector size each macroblock can be encoded as moving". Those are all terms used in x264 encoding, but the way you use them doesn't make much sense to me. Are you talking about the size of the mbtree? Because as far as I know, there is no limit to the number of frames a motion vector prediction can be made from; it looks frame by frame until it no longer finds a better motion prediction vector, and this is done using a range of pixels, not frame numbers.

Or are you talking about when B-frames should be used compared to P-frames? 

Well, I haven't read the full H.264 standard, but it's a carry-over from the MPEG-2 years as well (back in school, the final project was essentially writing a basic encoder for a simplified version of MPEG-2).

 

For each macroblock you run a search for the best match in a given reference image, but an exhaustive search takes an absurd amount of time, so a local or heuristic search is usually done instead. Each of these estimates is stored as a vector. I couldn't remember what datatype the vectors are stored in, so I wasn't sure what the maximum vector size could be. In my school project we were instructed to use a signed byte, which limits the search to roughly ±128 pixels in each direction, or about 65,536 candidate positions per macroblock. With a limit like that, you might be able to do an exhaustive search; if it's a 16-bit value, an exhaustive search is unlikely.
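To illustrate the kind of bounded search being described, here's a toy block-matching sketch: plain Python, 8×8 blocks, sum-of-absolute-differences cost. This is a deliberately naive exhaustive search over a small window, nothing like x264's real predictors, and all the names are my own:

```python
def sad(ref, cur, bx, by, dx, dy, bs=8):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by (dx, dy)."""
    total = 0
    for y in range(bs):
        for x in range(bs):
            total += abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
    return total

def best_vector(ref, cur, bx, by, search=16, bs=8):
    """Exhaustive search in a +-`search` pixel window -- the bounded
    window discussed above; a signed-byte vector would cap it at +-128."""
    h, w = len(ref), len(ref[0])
    best = (0, 0, sad(ref, cur, bx, by, 0, 0, bs))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Only consider displacements that stay inside the reference frame.
            if 0 <= by + dy and by + dy + bs <= h and 0 <= bx + dx and bx + dx + bs <= w:
                cost = sad(ref, cur, bx, by, dx, dy, bs)
                if cost < best[2]:
                    best = (dx, dy, cost)
    return best
```

Widening `search` multiplies the number of candidate blocks quadratically, which is exactly the encode-time-for-bitrate trade-off being described.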

 

The general gist of what I was getting at is that things like the search window for each macroblock can make up a large chunk of the encoding effort, and that's really where you can get drastically smaller bitrates at the same quality.


