
Qualcomm Announces Snapdragon 8 Gen 1: Flagship SoC for 2022 Devices

Lightwreather

Apple's A15 Bionic doesn't seem to support AV1 either. I'd say Apple and Qualcomm, being the two largest high-end chip makers, kind of define what gets used by platforms like YouTube. MediaTek may be the largest mobile chip maker, but their chips are mostly found in rubbish low-end phones that are generally too weak to do anything demanding anyway. Decoding AV1 is very demanding, and I don't think even dedicated decoder units in those chips can really be up to the task without dedicating almost the entire chip's resources to it.


21 hours ago, RejZoR said:

I'd say Apple and Qualcomm, being the two largest high-end chip makers, kind of define what gets used by platforms like YouTube.

Except they don't, because AV1 is being pushed heavily by YouTube and other platforms. Google even requires hardware-accelerated decoding for things like Android TV devices to be certified.

Netflix is even pushing AV1 to devices that don't support hardware-accelerated decoding. They are doing it in software for some shows on some phones, as well as on consoles like the PlayStation.

 

 

21 hours ago, RejZoR said:

Decoding AV1 is very demanding, and I don't think even dedicated decoder units in those chips can really be up to the task without dedicating almost the entire chip's resources to it.

This is absolute rubbish. Not sure where you're getting these ideas from.

1) Decoding AV1 is not that demanding. Again, Netflix is already doing it in software on phones. It might get demanding once we start talking about 4K or something really high-end, but other than that it's fine.

2) You do not need to use other resources on the chip if you have dedicated decoding units, which, like I said, pretty much everyone except Qualcomm (and Apple) has. If you have hardware-accelerated decoding for AV1, then your CPU and GPU usage stays at roughly 0% even if you play, say, 8K AV1 (assuming your media engine supports it). Dedicated decoding units are not hybrid decoding; we moved past hybrid decoding for AV1 a long time ago.

AV1 was specifically made to be easy to implement in hardware decoding.
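To make the "dedicated decoding unit" point concrete: on Android you can simply ask the OS whether the SoC exposes a hardware AV1 decoder or only a software implementation. A minimal sketch, assuming API 29+ for MIMETYPE_VIDEO_AV1 and isHardwareAccelerated:

```kotlin
import android.media.MediaCodecList
import android.media.MediaFormat

// Enumerate every AV1 decoder the device exposes and report whether each one is a
// dedicated hardware block or a software implementation (e.g. the platform's
// built-in decoder). Requires API 29+ for isHardwareAccelerated.
fun listAv1Decoders(): List<Pair<String, Boolean>> =
    MediaCodecList(MediaCodecList.REGULAR_CODECS).codecInfos
        .filter { !it.isEncoder && it.supportedTypes.contains(MediaFormat.MIMETYPE_VIDEO_AV1) }
        .map { it.name to it.isHardwareAccelerated }

// True when at least one AV1 decoder is backed by fixed-function hardware,
// i.e. playback won't lean on the CPU or GPU.
fun hasHardwareAv1Decoder(): Boolean =
    listAv1Decoders().any { (_, hw) -> hw }
```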

 

An i3-9100F gets 65 FPS when decoding 4K AV1 footage, in software.

That's with a 22.7 Mbps file, by the way, so considerably higher than what companies like Netflix push (which I think tops out at 16 Mbps).
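For anyone who wants to sanity-check numbers like that themselves, ffmpeg's dav1d-based software decoder plus the -benchmark flag is enough. A rough sketch in plain JVM Kotlin; the input file name is just a placeholder and the ffmpeg build is assumed to include dav1d:

```kotlin
fun main() {
    // Decode-only benchmark: force ffmpeg's dav1d software decoder and discard the
    // decoded frames. The "fps=" / "speed=" figures and the final "bench:" line show
    // how far above real time the CPU manages entirely in software.
    val cmd = listOf(
        "ffmpeg", "-hide_banner", "-benchmark",
        "-c:v", "libdav1d",          // software decoder; assumes ffmpeg was built with dav1d
        "-i", "sample_4k_av1.mkv",   // placeholder file name
        "-f", "null", "-"            // null muxer: decode everything, write nothing
    )
    val exit = ProcessBuilder(cmd)
        .redirectErrorStream(true)                        // ffmpeg logs to stderr; merge it
        .redirectOutput(ProcessBuilder.Redirect.INHERIT)  // stream output straight to this console
        .start()
        .waitFor()
    println("ffmpeg exited with code $exit")
}
```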


Who cares if YouTube is pushing it if other companies don't support it? The only way to "push" it would be to absolutely demand it, but I can already tell you how that would go: it would cut off 80% of the user base.

 

Dedicated decoders are not "free" performance that just magically keeps CPU and GPU at 0% usage. You still need to dedicate a portion of actual silicon to it, regardless of how small that is.

 

Your comparison of smartphone chips to a desktop CPU is nothing short of hilarious. One is what, a sub-1W part, and the other is a 65W part?


8 hours ago, RejZoR said:

Who cares if YouTube is pushing it if other companies don't support it?

YouTube, Netflix, Facebook, Amazon... yeah, everyone is pushing it and supporting it. The only two big companies that don't support it are Apple and Qualcomm. Apple will most likely support it next year, since they have joined AOMedia, and Qualcomm... well, they will have to support it sooner or later, but they are resisting as much as possible since they want everyone to use the proprietary VVC codec (which Qualcomm collects royalties on when it is used).

 

8 hours ago, RejZoR said:

The only way to "push" it would be to absolutely demand it, but I can already tell you how that would go: it would cut off 80% of the user base.

Google is already demanding it in some categories.

They have also built software decoding support into Android itself. Not sure what else you would demand.
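That built-in software decoder is also what an app gets handed when it just asks Android for "an AV1 decoder" on an SoC without a dedicated block. A minimal sketch, assuming API 29+ for the AV1 MIME type constant:

```kotlin
import android.media.MediaCodec
import android.media.MediaFormat

// Ask the framework for any available AV1 decoder. On SoCs without a dedicated AV1
// block this is expected to return the platform's bundled software decoder rather
// than fail, which is why playback support doesn't strictly depend on new silicon.
fun createAv1Decoder(): MediaCodec =
    MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AV1)
```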

 

 

8 hours ago, RejZoR said:

Dedicated decoders are not "free" performance that just magically keeps CPU and GPU at 0% usage. 

But it is... When decoding is done in discrete hardware it puts next to zero stress on the CPU or GPU. That's the whole point...

 

8 hours ago, RejZoR said:

You still need to dedicate a portion of actual silicon to it, regardless of how small that is.

I don't get your point. All the other companies think it is worth it, and Qualcomm clearly thinks so too, since they will do it. The amount of silicon needed is minuscule, and a lot of it can be shared with other codecs anyway. Hell, AV1 was specifically designed to be easy to implement. AMD, Intel, Nvidia and Arm all worked on it together and had direct influence on the codec to make sure it was easy to implement in hardware.

 

8 hours ago, RejZoR said:

Your comparison of smartphone chips to a desktop CPU is nothing short of hilarious. One is what, a sub-1W part, and the other is a 65W part?

I don't understand why you think it's hilarious. I was pointing out that a several-generations-old, low-end CPU can manage to play a really high-bit-rate 4K video in software at twice the needed frame rate. It's not like I said "this is the minimum you need". I was saying that even one of the lowest-end pieces of hardware in their high-end test did it easily.

AV1 is not hard to decode, not even in software. It's already being done in software on a wide range of devices. But we should still strive to do it in hardware.

 

But since you seem to have a problem with that test, I decided to play it myself on my Galaxy S10 (Exynos 9820).

High-bit-rate 4K AV1 video, decoded completely in software, on a pretty weak CPU (about half the performance of the Apple A12), and I am getting the full 25 FPS with zero dropped or delayed frames. My phone doesn't even get hot.

[Screenshot: mpv playback statistics on the Galaxy S10]

Sadly, it seems like Google has removed the option to monitor CPU usage without rooting, so I couldn't get that. Also, since this is just a petty Internet argument, I couldn't be bothered to figure out my phone's peak decode frame rate. Needless to say, it is more than enough for this clip, and hence good enough for something like 99% of all videos out there.

 

 

 

Not sure what your problem is with AV1. Is it because it's partially from Google so you automatically have an irrational hatred for it? Is it because you don't understand it? Almost everything you have said about it is wrong on some level.


"But it is... When decoding is done in discrete hardware it puts next to zero stress on the CPU or GPU. That's the whole point... "

 

At least quote the whole fucking thing instead of splitting it in two to make yourself look smarter. You intentionally cut out the part where I said you need to dedicate a chunk of actual physical silicon to it.


What a stupid name, honestly.

 

Just reinforces my point that Silicon Valley has an infatuation with bad naming.

 

Realistically, this might be the way to go given the question of what to call the successor to the 898, but "Snapdragon 8 Gen 1" kind of reeks of "USB 3.2 Gen 2".



1 hour ago, RejZoR said:

"But it is... When decoding is done in discrete hardware it puts next to zero stress on the CPU or GPU. That's the whole point... "

 

At least quote the whole fucking thing instead of splitting it in two to make yourself look smarter. You intentionally cut out the part where I said you need to dedicate a chunk of actual physical silicon to it.

I did reply to that part as well. I specifically split it up because, a couple of posts ago, it seemed to me like you were implying that the hardware acceleration block would still require CPU and GPU resources during playback. I just felt I needed to dismiss that idea, if that's what you were arguing.

As for the part about requiring transistors in hardware, I replied to that right below that paragraph so I'm not sure why you're mad.

 

Yes, you need to dedicate some silicon to it. A minuscule amount, a lot of which is shared with other codecs, and every company (except Apple) has come out and said "yes, it is worth spending that silicon to support AV1". So I am not sure what your argument is. If you are trying to say it isn't worth spending a few transistors to support AV1, then the entire industry (except Apple) has told you that you're wrong. Everyone (except Apple) is already doing it or planning to do it. It's just that Qualcomm is behind everyone else in the implementation stage.

