
[UPDATE] Wonderlust – Apple September event; new iPhones and watches

Lightwreather

So, here are all the announcements and everything that happened:

Apple Watch Series 9

-Largely the same design; the most significant changes are to the internals.

-The new S9 chip promises improved performance and, more crucially, longer battery life. Apple claims 60% more transistors than the Series 8's chip, with a GPU that has 30% more. The biggest change may be the Neural Engine's improvements to on-device processing for Siri requests, including 25% faster voice dictation.

-The display now goes up to 2,000 nits, double the brightness of the Series 8, making it easier to use outdoors. It also goes down to a single nit in dark conditions.

-Addition of an Ultra Wideband chip that shows you the distance and direction to your phone, rather than simply having your phone make a loud noise.

-Apple also promised a new gesture, "Double Tap", which it claims Watch users will be using "every day". It supposedly works via the Neural Engine's detection of "the unique signature of tiny wrist movements and changes in blood flow when the index finger and thumb perform a double tap." I'm a bit skeptical this will actually catch on, though.

-FineWoven is a new watch strap material that replaces leather, which Apple is phasing out across its whole product line. FineWoven is a "microtwill made of 68 percent post-consumer recycled content that has significantly lower carbon emissions compared to leather," according to Apple. 82 percent of the yarn used to make the new Sport Loop is recycled.

-Supposedly, the company's first carbon-neutral device.

The Apple Watch SE remains available for $249, while the Series 9 starts at $399. They're both available for preorder today and should be released on September 22.


 

Apple Watch Ultra 2

-The Ultra 2 has the same new S9 chip as the Series 9, alongside the same "Double Tap" feature.

-There's a new display that hits 3,000 nits, even brighter than the Series 9. (Sidenote: how bright would be too bright?)

-There's also a new "Modular Ultra" watch face that uses the edge of the display "to present real-time data, including seconds, altitude, or depth".

-There's also now support for Bluetooth cycling accessories, as well as ANT+.

-The battery is the same, hitting 36 hours on a single charge and 72 hours in low-power mode.

 

iPhone 15 and 15 Plus

Why, it's the star of the show and literally what everyone was waiting for, sitting through agonising minutes after the reveal to see if a highly anticipated feature would be released. It was.

 

-Apple ditched its proprietary Lightning port in favour of USB-C (albeit on the USB 2.0 standard; some claim it's because of SoC limitations) on this iteration of the iPhone, very likely because of the EU.

-The edge of the aluminium enclosure has a new contoured design that looks a bit different from the iPhone 14 and gives me a Pixel 4-esque vibe.

-Apple also claims the iPhone 15 devices are the first phones to have a "color-infused back glass." Apple's announcement said that it strengthened the phones' back glass with a "dual-ion exchange process" and then polished it with nano-crystalline particles and etched it for a "textured matte finish."

-The Dynamic Island was introduced to the non-Pro models.

I take minor issue with Apple calling it an "all-new design" when it really isn't that different from the iPhone 14 and 14 Pro, but I suppose that's a me thing.

 

-Spec bump to the A16, which was in last year's Pro line-up.

-The main camera system was upgraded to a 48 MP sensor that takes 24 MP photos by default, using a computational photography process. This is the same system found on the iPhone 14 Pro. Alternatively, you can opt for the "2x Telephoto" option, giving three "optical-quality" zoom levels (0.5x, 1x, or 2x).

-Additionally, for all of the iPhone 15's cameras, machine learning will automatically switch the main camera into portrait mode, with richer colour and better low-light performance, when appropriate. Night Mode ("sharper details and more vivid colors") and Smart HDR (brighter highlights, and improved mid-tones, shadows, and rendering of skin tones) are reportedly upgraded, too.

 

-The cameras will also introduce focus and depth control, which lets you switch the focus of an image from one subject to another after the photo has been taken.

I will admit, that is pretty cool.

 

-Also includes the second-gen Ultra Wideband chip (the same as in the Apple Watch). Apple said it enables connectivity with other devices from up to three times farther away.

-The iPhone 15 is supposed to have better audio quality during calls, thanks to a new machine learning model that automatically prioritises your voice and can filter out more background noise, if you select "voice isolation" mode during a call.

This is also pretty cool, and will absolutely be very helpful when you're trying to listen to someone talking in a very windy place. THANK YOU, APPLE.

-Apple is also adding Roadside Assistance via Satellite with the new devices. Users will be able to text roadside assistance and then select what they need assistance with, with options such as "flat tire" and "locked out" appearing via a menu that comes up in response. The feature will debut in the US with AAA.

-The iPhone 15 starts at $799 (128GB) and the iPhone 15 Plus at $899 (128GB); both are available on September 22, with preorders starting this Friday.

-Apple also launched new case options for the iPhone 15 series, including the new FineWoven material.

 

[Image: Apple (via ArsTechnica)]

 

iPhone 15 Pro and Pro Max

-Apple announced that the new iPhone 15 Pro line-up will switch from a stainless steel frame to one made of brushed "grade 5 titanium," which Apple says makes the phones more durable and lighter.

I honestly expected them to bump the price just because of this.

-The phone is also a little smaller than past models thanks to slimmer display bezels. The screen sizes stay the same: 6.1 inches for the base model and 6.7 inches for the larger iPhone 15 Pro Max. Apple didn't announce any changes to the phones' actual screens, so expect the same resolution, ProMotion refresh rate, and brightness as before.

-The iPhone's traditional mute switch was replaced by an "Action Button". By default, it still serves as a mute switch, but users can change it to launch apps, the camera, or custom Shortcuts workflows, which I see as a neat inclusion. There will be a different haptic response depending on whether the action mutes or unmutes.

-The Pro line-up will once again be a generation ahead of the non-Pros with the introduction of the A17 Pro, which is apparently Apple's first chip built on TSMC's 3nm node. It continues to use two large high-performance cores and four high-efficiency cores; Apple says the performance cores are 10 percent faster than before, a relatively mild improvement, while the efficiency cores are more efficient rather than faster.
   -The A17 Pro's six-core GPU is 20 percent faster than the A16's, and Apple has also added hardware-accelerated ray tracing. This is something I don't see as being particularly useful on a phone, but I did call out that this would be implemented by Genshin Impact and Honkai: Star Rail. Speaking of which, Resident Evil 4 will also be coming to the iPhone.

   -The chip also includes hardware acceleration for the AV1 video codec, which is becoming more popular on streaming services.

-Apple has also added a new USB controller to power that USB-C port, allowing the iPhone 15 Pro (Max) to use USB 3 transfer speeds. 
"technically, this would make it either a USB 3.1 gen 2 or 3.2 gen 2 controller, if you can keep the USB-IF's naming straight" –ArsTechnica

-Apple says the Pro's 48-megapixel main sensor is larger than the one in the regular iPhone 15. Like the iPhone 15, by default, it will shrink the finished product down to 24 MP to save storage space, but if you're shooting in ProRAW mode you can get the full 48 MP image for cropping and editing. The camera defaults to a 24 mm focal length, but 28 mm and 35 mm options are also made possible by the large sensor, and you can set any of the three focal lengths as your default.

 

-Apple says that after an iOS update "later this year," the phone will be able to shoot spatial video that can be viewed in three dimensions in a Vision Pro headset.

-The iPhone 15 Pro starts at $999 (128GB) and the iPhone 15 Pro Max at $1,199 (256GB), available from September 22 with preorders starting this Friday. FineWoven cases will also be available for the Pro line-up.

 

iOS and macOS

-iOS 17 hits supported devices on September 18

-macOS 14 Sonoma will be available on September 26, just over a week after iOS 17

 

Sources

Apple - [1] [2]

ArsTechnica – [1] [2] [3] [4] [5] [6] [7]

1 hour ago, dalekphalm said:

I don't agree with you here. In practice, very very few people are going to use Live Photos to "capture the essence of the moment". That might be what you specifically use it for, but Apple advertises it as a way to "pick the key photo and make edits", just like I stated.

I doubt most users would even notice or care, to be honest. It sounds like you use Live Photos in a particularly niche way.

I suspect people use it for just that all the time, because the iOS-generated Memories often use the video portion of them.

 

I feel like Apple's marketing of them is pretty in line with what I was saying:

https://developer.apple.com/design/human-interface-guidelines/live-photos

 

Quote

Live Photos lets people capture favorite memories in a sound- and motion-rich interactive experience that adds vitality to traditional still photos.


I've also never heard of Live Photos being used to choose the best frame. How would that even work, given that any video format will be severely lower quality than a still image?

 

Also, I'm not sure the live video is actually stored inside the HEIC. Maybe it's simply linked in from an external file?


59 minutes ago, Dracarris said:

I've also never heard of Live Photos being used to choose the best frame. How would that even work, given that any video format will be severely lower quality than a still image?

"Severely lower quality",  maybe - but the quality is still pretty high - I've personally never noticed a major difference when I've edited a live photo and chosen a different key frame. It's a choice you get to make as a user.

59 minutes ago, Dracarris said:

Also, I'm not sure the live video is actually stored inside the HEIC. Maybe it's simply linked in from an external file?

I couldn't tell you how it works from a technical perspective, but everything is in the one file, since if I share that file to another iPhone user, they can see the Live Photo in full. Same if I download a backed-up version of the file.

 

According to this very old article from 2015, the iPhone 6s (where Live Photos were introduced) takes a series of still images, animates them together, and records audio separately, then stitches the whole thing together.

https://9to5mac.com/2015/09/11/how-live-photos-works/

 

If this is accurate, then there's literally no downside to this, as all the frames will be the same quality as the original key frame.

 

Given that's how it's being described, I see zero issues with Apple using the same method to create a Live Photo using JPEG XL or any other newer format.


17 minutes ago, dalekphalm said:

"Severely lower quality",  maybe - but the quality is still pretty high - I've personally never noticed a major difference when I've edited a live photo and chosen a different key frame. It's a choice you get to make as a user.

I couldn't tell you how it works from a technical perspective, but everything is in the one file, since if I share that file to another iPhone user, they can see the Live Photo in full. Same if I download a backed-up version of the file.

 

According to this very old article from 2015, the iPhone 6s (where Live Photos were introduced) takes a series of still images, animates them together, and records audio separately, then stitches the whole thing together.

https://9to5mac.com/2015/09/11/how-live-photos-works/

 

If this is accurate, then there's literally no downside to this, as all the frames will be the same quality as the original key frame.

 

Given that's how it's being described, I see zero issues with Apple using the same method to create a Live Photo using JPEG XL or any other newer format.

That's not accurate; that's not how it works. There is one picture file, and a much lower-resolution video.

 

If it were a series of full-quality photos animated smoothly, you'd have MASSIVE files every time you took a picture.
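If you want to sanity-check the resolution gap yourself, here's a minimal sketch (assuming ffprobe is installed and the Live Photo has been exported to its separate still and video files, as described later in the thread; the filename is hypothetical):

```python
import subprocess

def video_resolution(path: str) -> str:
    """Ask ffprobe for the width/height of the first video stream."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# Hypothetical filename for the clip half of an exported Live Photo.
# Expect something well below the still's resolution, next to a
# 4032x3024 HEIC still.
print(video_resolution("IMG_0001.MOV"))
```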


11 hours ago, LAwLz said:

It does support depth layers though. In fact, it supports 4099 channels, and a typical RGB image will only use 3. So you have 4096 channels remaining to fill with whatever data you want. Alpha channel for transparency, depth channel for depth data, maybe a thermal channel if you got a thermal camera. In that way, the format is very flexible. 

So no video side channel (I use this to store other ... non-video vector data). From looking at the APIs I can find, it looks like the other channels also need to be the same x,y size as each other; you can't give each one its own size/offset/metadata. Having the flexibility is nice: as an app developer you can use it to store some of the original source data for an image within it, so if users re-open the image in an app that edited it, you can undo/alter old edits.


1 hour ago, dalekphalm said:

If this is accurate, then there's literally no downside to this, as all the frames will be the same quality as the original key frame.

 

Not all frames are the same quality at all: there are a few key frames that are high quality, and the rest are deltas applied to those using HEVC video encoding. If you were to just do 60 fps for 5 seconds at 24 MP (10-bit) in JPEG-XL, you would be looking at MASSIVE files (that's 300 frames; at even a few MB per frame you're into gigabyte territory), as the standard does not do any frame-to-frame compression like a video format does.


1 hour ago, Obioban said:

That's not accurate; that's not how it works. There is one picture file, and a much lower-resolution video.

 

If it were a series of full-quality photos animated smoothly, you'd have MASSIVE files every time you took a picture.

Feel free to provide counter-evidence of how it works.

 

My point being that in practice, I've yet to notice any serious quality degradation when choosing a new key frame.


16 minutes ago, dalekphalm said:

Feel free to provide counter-evidence of how it works.

 

My point being that in practice, I've yet to notice any serious quality degradation when choosing a new key frame.

In Photos for Mac, select a Live Photo, then File > Export > Export Unmodified Original. Choose a destination and click Export. What you get at that destination is a video file and an image file.
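For anyone curious how the exported pair stays linked: as I understand it, the still and the clip carry a shared ContentIdentifier UUID in their metadata, which is how Photos knows they belong together. A minimal sketch for checking that, assuming exiftool is installed (filenames are hypothetical):

```python
import subprocess

def content_identifier(path: str) -> str:
    """Read the ContentIdentifier tag exiftool reports for Live Photo assets."""
    out = subprocess.run(
        ["exiftool", "-s3", "-ContentIdentifier", path],  # -s3: print the value only
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# Both halves of the same Live Photo should print the same UUID.
print(content_identifier("IMG_0001.HEIC"))
print(content_identifier("IMG_0001.MOV"))
```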

 

Original photo:

[image attachment]

Changed key frame:

[image attachment]


1 minute ago, Obioban said:

In Photos for Mac, select a Live Photo, then File > Export > Export Unmodified Original. Choose a destination and click Export. What you get at that destination is a video file and an image file.

Thank you for explaining how you got your result; that is useful information. Granted, I'm not a Mac user, so this is of limited use to me personally, but I will concede the point that it's an image file with a video file.

 

That doesn't change my stance, mind you.


1 hour ago, hishnash said:

So no video side channel (I use this to store other ... non-video vector data). From looking at the APIs I can find, it looks like the other channels also need to be the same x,y size as each other; you can't give each one its own size/offset/metadata. Having the flexibility is nice: as an app developer you can use it to store some of the original source data for an image within it, so if users re-open the image in an app that edited it, you can undo/alter old edits.

If you just use the "video channel" to store other data, then you should be able to do that in the other channels of the JPEG XL container as well.

You can also save multiple images with varying dimensions inside the same JPEG XL file if that's what you're after. You can read a bit more about it on page 6 in this whitepaper.

 

 

Apparently, I was wrong about the inter-frame compression. JPEG XL can do that by marking a frame as a "reference frame" and then using it when encoding subsequent frames, although I doubt it's as good as the video formats.

 

 

Anyway, HEIC will have the edge when it comes to things involving video (since it is a video format after all), but for photos/pictures, JPEG XL is unmatched.

 

 

It's also worth noting that Google is currently heavily against JPEG XL, going as far as removing support for it from their software. Apple, on the other hand, is the company with the best support for JPEG XL, supporting it in pretty much all their software. I never thought I'd experience the day when I was saying Apple was at the forefront of format support and Google was the one acting like a child who wants everyone to like their toy and nothing else.


1 hour ago, dalekphalm said:

Feel free to provide counter-evidence of how it works.

Evidence: a Live Photo is not 100 MB+, as it would have to be if there were several other full-quality frames to choose from. A single still image is 3-5 MB in HEIC; a few seconds' worth of full-quality frames at that size would easily blow past 100 MB.


4 minutes ago, Dracarris said:

Evidence: a Live Photo is not 100 MB+, as it would have to be if there were several other full-quality frames to choose from. A single still image is 3-5 MB in HEIC; a few seconds' worth of full-quality frames at that size would easily blow past 100 MB.

To be fair, that's an explanation, not evidence, but @Obioban already provided a good explanation with photo evidence.


2 hours ago, LAwLz said:

I never thought I'd experience the day when I was saying Apple was at the forefront of format support and Google was the one acting like a child who wants everyone to like their toy and nothing else.

I think this comes down to HW encoder/decoder support. There appears to be rather good JPEG-XL support in modern Apple chips, and Apple has been pushing hard for an HDR image standard (which JPEG-XL supports).

 

2 hours ago, LAwLz said:

You can also save multiple images with varying dimensions inside the same JPEG XL

I'm still looking for an IO library that abstracts this to a level that does not require me to implement my own.

 


4 hours ago, LAwLz said:

It's also worth noting that Google is currently heavily against JPEG XL, going as far as removing support for it from their software. Apple, on the other hand, is the company with the best support for JPEG XL, supporting it in pretty much all their software. I never thought I'd experience the day when I was saying Apple was at the forefront of format support and Google was the one acting like a child who wants everyone to like their toy and nothing else.

 

It's amazing how Google's solutions are always garbage, and the only reason they can push them at all is by taking away the choice not to use them.

 

JPEG XL should be the de facto replacement for JPEG. They only gave it 2 years. Did you know it took more than 5 years for PNG to replace GIF? And we still do not have a lossless animated alpha-channel format?

 

Hell, we don't even have a lossy alpha-channel format. Not that there would ever be much desire for one as a still image, but there is for video, and the main format that supports an alpha channel is ProRes 4444 (ffmpeg's prores_ks encoder), which runs around 100MB/sec. Not megabits, megabytes. Another format that supports an "alpha channel" is SpeedHQ SHQ7/SHQ9 (NDI 4).
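For reference, here's a minimal sketch of getting an alpha-capable master out of ffmpeg today with the prores_ks encoder (the 4444 profile is the one that carries alpha; the input filenames and frame rate are hypothetical):

```python
import subprocess

# Wrap an RGBA PNG sequence into ProRes 4444 with an alpha channel.
subprocess.run([
    "ffmpeg",
    "-framerate", "30", "-i", "frame_%04d.png",  # hypothetical RGBA frames
    "-c:v", "prores_ks", "-profile:v", "4444",   # the 4444 profile carries alpha
    "-pix_fmt", "yuva444p10le",                  # 10-bit 4:4:4 YUV plus alpha plane
    "out.mov",
], check=True)
```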

 

So what is one to do when they need to composite alpha-channels? Well, sucks to be you if you're an animator on Windows. 

 

It's kinda funny in the grand scheme of things how all the photography and film (live-action) tools pay lip service to alpha-channel needs. HEIC supports storing "metadata" for alpha, but that doesn't mean it will survive a pass through a tool that modifies the image.

 

H.265 "can" support alpha channel, in a hugely hacky way, which is why HEIC is still a hacky way to do it.

 

AVIF has the same problem as HEIC and WebP in this regard. These are not "image" formats; they are video formats that require video decoder paths to render a still image. WebP, HEIC, and AVIF should never have been "image formats", because what they really are is containers for video data whose handling is at the discretion of the runtime, in which everything is optional. So one browser might render an image in software and be slow, and another might use a hardware decoder that doesn't support alpha.

 

So yes, JPEG XL makes sense as a still-image format, because it loses the baggage of being a video format first. That makes it much more suitable for portable applications and still-image software: standalone tools like Photoshop don't need to have video decoder licenses, and likewise all other photo and drawing tools.

 

But we mustn't forget the reason why PNG exists: there was political pushback against patented file formats. There is no reason to adopt a format that is encumbered by patents if the existing stuff we have already works. PNG replaced GIF for still images, but we didn't get "MNG", the accompanying animated format, in the browser; Mozilla subsequently removed it for "browser bloat" reasons. So they developed APNG, which is just PNG animated like GIF (a sequence of frames), instead of the more complex MNG, which had features that made it able to replace Flash.

https://bugzilla.mozilla.org/show_bug.cgi?id=18574#c72

 

Mozilla killed MNG over it being "300KB"; meanwhile, both Firefox and Chrome ship with avcodec, a library that supports JPEG XL and takes up several megabytes when compiled.

 

Killing MNG in Mozilla also killed support for JNG, which was a PNG-like container for JPEG data.

WebP is terrible and should never have come into existence. With the death of Flash, there is still a void left.

 

Just to point out the elephant in the room: Chrome's attempt to remove JPEG XL is like Mozilla's removal of MNG. Removing the support caused the tools that make the files to cease development, because nobody could then view the files. Chrome removing JPEG XL is extremely flimsy because it's part of avcodec; they would have to make an actual choice to remove it from their fork of avcodec.

https://chromium.googlesource.com/chromium/third_party/ffmpeg/+/refs/heads/master/configure

 

It's not like Chrome is independently linking all these libraries directly into Chrome any more than Firefox is.

 

But what drives support for an image format is people being able to share it; nerf that in the browser, and people won't use it.

 


6 hours ago, hishnash said:

I think this comes down to HW encoder/decoder support. There appears to be rather good JPEG-XL support in modern Apple chips, and Apple has been pushing hard for an HDR image standard (which JPEG-XL supports).

I don't really buy that argument for why Apple supports it and Google doesn't, because as far as I know there is no hardware acceleration going on with JPEG XL (at least not at this time), and in terms of encoding and decoding, JPEG XL is way faster than AVIF, which is what Google is pushing.

 

If it were a performance thing that kept Google from implementing it, they would definitely side with JPEG XL over AVIF.

 

 

6 hours ago, hishnash said:

I'm still looking for an IO library that abstracts this to a level that does not require me to implement my own.

It's a very new format, and just as before, we're seeing that replacing an image format is a very difficult task.

If it takes off, you will most likely see software implementations that do what you want. But we will see if that happens.


4 hours ago, Kisai said:

JPEG XL should be the de facto replacement for JPEG. They only gave it 2 years.

If you're talking about Google, then it's even worse than that, because they didn't give it a fair chance during those two years.

They implemented it behind a flag but kept it turned off by default. Then they neglected it (it took them about 2 years to fix a bug that caused a 3x performance drop because it treated static images as animated), and then they discontinued it, citing "lack of usage". No shit people won't use it if you don't even enable it by default and instead require users to go into a hidden menu to turn it on.

 

 

5 hours ago, Kisai said:

And we still do not have a lossless animated alpha-channel format?

JPEG XL can do that, but if you want something today you can use APNG.
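For what it's worth, here's a minimal sketch of a lossless animated image with an alpha channel using Pillow's APNG writer (the solid-colour frames are just hypothetical placeholders):

```python
from PIL import Image

# Three RGBA frames fading in; real frames would come from your renderer.
frames = [Image.new("RGBA", (64, 64), (255, 0, 0, a)) for a in (64, 128, 255)]

# save_all + append_images makes Pillow write an animated PNG (APNG),
# which stays lossless and keeps the alpha channel.
frames[0].save(
    "fade.png", save_all=True, append_images=frames[1:],
    duration=100, loop=0,  # 100 ms per frame, loop forever
)
```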


3 hours ago, LAwLz said:

I don't really buy that argument for why Apple supports it and Google doesn't, because as far as I know there is no hardware acceleration going on with JPEG XL (at least not at this time), and in terms of encoding and decoding, JPEG XL is way faster than AVIF, which is what Google is pushing.

 

I get the feeling there is something more than CPU-only going on, since if I try to encode a JPEG-XL in a widget extension (intent), I get an encoding error related to not having access to the HW units from the extension runtime, just the same as when I encode a HEIC or regular JPEG, but I can encode PNGs.

 

Typically, intents can't access HW units when triggered while the app process is not running (you can't use the Neural Engine, GPU, etc.), so while there might not be any dedicated fixed-function unit for JPEG-XL, Apple might be using some of the existing HW units (other than the pure CPU) to do some encoding; otherwise I would not expect this exception to be raised.


1 hour ago, hishnash said:

I get the feeling there is something more than CPU-only going on, since if I try to encode a JPEG-XL in a widget extension (intent), I get an encoding error related to not having access to the HW units from the extension runtime, just the same as when I encode a HEIC or regular JPEG, but I can encode PNGs.

 

Typically, intents can't access HW units when triggered while the app process is not running (you can't use the Neural Engine, GPU, etc.), so while there might not be any dedicated fixed-function unit for JPEG-XL, Apple might be using some of the existing HW units (other than the pure CPU) to do some encoding; otherwise I would not expect this exception to be raised.

It shouldn't be.

I don't know what Apple does, but at least in Chrome there is no hardware acceleration for JPEG encoding or decoding.

Chrome uses libjpeg-turbo, which is a purely CPU-based encoder and decoder. It does, however, use SIMD instructions.

 

In any case, even if Apple does something in hardware for JPEG XL, the format is lightweight enough to handle on the CPU, just like JPEG. In fact, some tests show JPEG XL being just as easy to decode as regular JPEG (when using the backward-compatible recompression option).
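That recompression option is the path where an existing JPEG is transcoded losslessly; as I understand it, the libjxl reference tools do this by default for JPEG input and can reconstruct the original file bit for bit. A minimal sketch, assuming cjxl/djxl are installed (filenames are hypothetical):

```python
import subprocess

# cjxl stores the JPEG's compressed data losslessly by default for JPEG
# input, typically shaving roughly 20% off the file size.
subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)

# djxl can reconstruct the original JPEG bytes from that .jxl file.
subprocess.run(["djxl", "photo.jxl", "restored.jpg"], check=True)
```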

There is no hardware reason why Google dropped support for it and Apple didn't. It's just that Google wants to push its own formats instead. Well, some at Google do.

 

Their AVIF team released a comparison to justify their decision about a year ago, and it was awful. They used severely outdated software for JPEG XL (an 18-month-old version, versus the newest for AVIF); they measured quality using MS-SSIM, which is a really outdated way of measuring quality; they used an MS-SSIM-tuned version of AVIF but not of JPEG XL; and they even misunderstood their own charts, because the charts show JPEG XL being superior on most metrics, yet they concluded it was worse.

That comparison was one of the reasons they gave for removing JPEG XL support, along with stuff like "there is not enough interest from the entire ecosystem", which is funny because the only ones not interested in JPEG XL seem to be Google themselves. But things might be changing, because not too long ago someone from the Chromium team requested that they look into adding support again.

