
The LTS Linux Kernel 5.10 To Be Maintained For Only 2 Years If Companies Don’t Help Support It

Lightwreather

Summary

 Kernel 5.10 is the latest LTS release, and it will be supported only until 2022 unless some companies decide to help support it.

 

Quotes

Quote

As of now, Linux Kernel 5.10 LTS will be supported until 2022. Even though some believe that 2 years is a very short time for a Long-Term Support release, Greg Kroah-Hartman disagrees, saying:

Not true at all, a “normal” stable kernel is dropped after the next release happens, making their lifespan about 4 months long. 2 years is much longer than 4 months, so it still is a “long term supported” kernel in contrast, correct?

Makes sense, right? But what was different about Linux Kernel 5.4 that it is supported until 2025, compared to Linux Kernel 5.10?

Linux Kernel 5.10 LTS is also set to be the default kernel for Debian 11 “Bullseye”, and Google’s next Android release is going to use it as well.

So, why can’t it be maintained for the next 6 years?

Well, it looks like not enough companies have committed to helping with the maintenance and testing of Linux Kernel 5.10, compared to 5.4.

So, without the proper resources to support its maintenance over the years across various devices and systems, how can we expect Greg to keep working on it?

Here’s what he had to say about it:

Because, 5.4 almost did not become “6 years” of support from me. That was because in the beginning, no one said they were going to use it in their devices and offer me help in testing and backporting. Only when I knew for sure that we had people helping this out did I change the date on kernel.org. So far the jury is still out for 5.10, are you willing to help with this? If not, why are you willing to hope that others are going to do your work for you? I am talking to some companies, but am not willing to commit to anything in public just yet, because no one has committed to me yet. What would you do if you were in my situation?

Greg also hints that Linux Kernel 5.10 can be supported for more than 2 years only if enough companies commit their resources to help achieve that.

 

My thoughts

Well, I do hope that companies will step forward and support this kernel, especially if Android 12 and beyond are going to be based on it.

Sources

https://news.itsfoss.com/linux-kernel-5-10-support/

"A high ideal missed by a little, is far better than low ideal that is achievable, yet far less effective"

 

If you think I'm wrong, correct me. If I've offended you in some way tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; Being wrong helps you learn what's right.


I thought Greg gets paid by the foundation though. 
 

If he gets paid by the foundation then why is he asking for support?

 

Also, I'm not a fan of how dismissive he is of the very real need for longer LTS support periods for Linux.



1 minute ago, AluminiumTech said:

I thought Greg gets paid by the foundation though.

 

If he gets paid by the foundation then why is he asking for support?

 

One person can't maintain the entire Linux Kernel all on their own. He's not asking for support to pay his own wages, he's asking for support to actually maintain the damn thing in the form of manpower and testing.



If anything, this will help with hardware support; the LTS kernel always lags behind and is selective about what gets backported, though that may be distro-specific.

I personally don't see why we need a kernel supported for more than 2 years. A stable release is pushed every 4 to 6 months, and nothing is supposed to be pushed that would break current compatibility.

Long term stability and compatibility should be unaffected, with the advantage of receiving better hardware support.


16 minutes ago, AluminiumTech said:

If he gets paid by the foundation then why is he asking for support?

It's not about money, it's about whether he has the time to tend to 5.10 AND the other 4-5 kernels he needs to organize maintenance for at any given time without sufficient help. Not that he's alone in doing this of course, but clearly he figured the people he could count on for maintaining this release weren't enough to warrant a promise in that sense.



2 minutes ago, Sauron said:

It's not about money, it's about whether he has the time to tend to 5.10 AND the other 4-5 kernels he needs to organize maintenance for at any given time without sufficient help. Not that he's alone in doing this of course, but clearly he figured the people he could count on for maintaining this release weren't enough to warrant a promise in that sense.

Reading over the source, though, it looks like this is a chronic problem and not a one-off. Greg seems to be saying that he doesn't have the manpower to support any LTS for 6 years unless companies contribute manpower.

 

Which begs the question: why doesn't he have the manpower?

15 minutes ago, tim0901 said:

 

One person can't maintain the entire Linux Kernel all on their own.

And he doesn't; there are a lot of testers etc. that help.

18 minutes ago, gabrielcarvfer said:

The foundation doesn't have the manpower to backport every single fix for everything that will eventually come out in the future and for every LTS release. 

They could make LTS releases less often and have them last longer. Or, if they have the money, they could hire an army of testers.



9 minutes ago, AluminiumTech said:

Reading over the source, though, it looks like this is a chronic problem and not a one-off. Greg seems to be saying that he doesn't have the manpower to support any LTS for 6 years unless companies contribute manpower.

Yeah, but so far he seems to have found that help. Maybe this time will be different.

 

Still has nothing to do with who is paying his wage 🤔

9 minutes ago, AluminiumTech said:

Which begs the question: why doesn't he have the manpower?

Not enough qualified people volunteer...? The foundation could hire someone, but if there isn't enough interest in maintaining a given kernel, why spend money that could go to more useful endeavors? Not to mention the required hires may be too many for the foundation to shoulder anyway.

13 minutes ago, AluminiumTech said:

They could make LTS releases less often and have them last longer.

That would take the exact same amount of effort to maintain... if you keep them around twice as long but release them half as often, the total number of kernels that need to be maintained at any given time is the same. For example, yearly LTS releases supported for 6 years and biennial LTS releases supported for 12 years both leave 6 kernels in maintenance at any moment. Plus you'd run into backporting issues even more than they do now if they had to deal with even older kernels.

16 minutes ago, AluminiumTech said:

Or, if they have the money, they could hire an army of testers.

Clearly they don't, or they prefer to spend it elsewhere...



Not sure why that's news; this happens to pretty much every LTS release. They start off with 2 years, see if companies pick it up (meaning more testing and effort put into maintaining it, otherwise there'd be no reason for it to be an LTS), and then add another ~3 years of support.

 

Since Android and Debian will use it, it's pretty likely that it'll also end up with 5 years of support.



5 hours ago, AluminiumTech said:

Reading over the source, though, it looks like this is a chronic problem and not a one-off. Greg seems to be saying that he doesn't have the manpower to support any LTS for 6 years unless companies contribute manpower.

 

Which begs the question: why doesn't he have the manpower?

And he doesn't; there are a lot of testers etc. that help.

They could make LTS releases less often and have them last longer. Or, if they have the money, they could hire an army of testers.

The entire problem with Linux, in a nutshell. That version inflation (which you see in a lot of web-facing software) just does not work for hardware-facing software.

 

There is no reason to build a kernel that isn't supported for 10 years. They're only being supported for 2 years because the OSes based on them are (e.g. Android). Nothing requires inflating the version number except adding or removing functionality, so maybe the hardware vendors should stop trying to make their hardware disposable, so their functionality doesn't have to keep being cycled in and out of the kernel. Meanwhile, perhaps Linux should stop trying to release a new point release of the kernel as frequently as possible, given that RHEL is way back on 4.18 and stuck around on the 2.6.x kernel for 15 years until 2018.

 

 


Oof, that doesn't sound good at all. I truly hope that companies step forward and contribute to it. It's rough when the people who have been maintaining things like this need help. It's like that one package that so-and-so used back in 2004, which is now a founding principle of the Internet, being maintained by Bill in Oklahoma.


19 hours ago, Kisai said:

Meanwhile perhaps Linux should stop trying to release a new point release of a kernel as frequently as possible,

I hope not. Linux's contributions keep getting bigger and more numerous. Waiting longer before releasing just means bigger releases with more regressions and more trouble identifying the commit causing a problem.

 

There are hundreds or thousands of bug fixes (that often turn out to be security fixes) that never get backported to LTS and SLTS releases. Staying on an old kernel is never a good idea.


3 hours ago, kvuj said:

I hope not. Linux's contributions keep getting bigger and more numerous. Waiting longer before releasing just means bigger releases with more regressions and more trouble identifying the commit causing a problem.

 

There are hundreds or thousands of bug fixes (that often turn out to be security fixes) that never get backported to LTS and SLTS releases. Staying on an old kernel is never a good idea.

Yet that's exactly how LTS releases work. Other OSes don't treat their kernel like a web browser version. There's rarely a reason to increment the major version.

 

The kernel, and most open source software, unofficially agreed on a versioning scheme like so:

MAJORVERSION.MINORVERSION.BUILD.PATCH

So 5.10.1.330, or something like that. For all intents and purposes, all 5.x kernels should work on any Linux OS running a 5.x kernel, but in reality that only ends up being true if you compile the kernel on the same device, which is something you had to do with 2.2 kernels if you wanted the kernel to fit on the floppy disks and USB drives of the time. If you want to keep the newest kernel with whatever OS you slapped together, compiling it yourself is the only path to upgrading the kernel on an otherwise unsupported distro.

 

Major versions are when you break backwards compatibility, and you still need a good reason to break that functionality, which many software packages don't have. Take PHP, for example: it has removed or broken functionality even in minor versions, often for no reason other than refactoring something that wasn't broken.

 

So it goes:

Major version - New features that replace old features, old feature removal

Minor version - New features that don't replace old features, old features deprecated but not removed, the compiler will warn about using the old API.

Build version - No new features, only bug fixes.

Patch version - Specific to this OS.
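
To make that concrete, here's a rough sketch of my own (not any official kernel tooling) that reads the running kernel's release string with uname() and splits it along those lines:

/* version_check.c - a hypothetical illustration, not part of any kernel tooling.
 * Reads the running kernel's release string (e.g. "5.10.0-8-amd64") via uname()
 * and pulls out the major/minor/patch numbers discussed above. */
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;
    if (uname(&u) != 0) {
        perror("uname");
        return 1;
    }

    int major = 0, minor = 0, patch = 0;
    /* Anything after the third number (the distro's build/patch suffix) is ignored here. */
    sscanf(u.release, "%d.%d.%d", &major, &minor, &patch);

    printf("running kernel: %s\n", u.release);
    printf("major=%d minor=%d patch=%d\n", major, minor, patch);

    /* Under the scheme above, only a major bump is supposed to break compatibility. */
    return 0;
}

Build it with something like cc version_check.c and it'll print whatever your distro happens to be running.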

 

Linux's additional problem is that software built on one OS using the Linux kernel isn't binary compatible with another OS using the same Linux kernel, unlike Windows (which will happily run Windows 95 and 98 software in many cases) or macOS (which will run anything with x86-64 binaries). FreeBSD, being a complete OS and not just a kernel, doesn't have this problem except with major version changes. OSes built using the Linux kernel are dependent on the distro to build everything, and should the distro no longer want to maintain it, sucks to be you. Or you get the RHEL/CentOS situation.

 

In a perfect scenario, there wouldn't be so many flavors and forks of Linux, and everyone would build off of one or two distros (e.g. one free/consumer and one commercial/enterprise) with a 5+ year support structure, and thus be binary compatible within that major kernel version. Yet the reality is that NIMBY/not-invented-here thinking often holds back open source software and results in a mixture of libraries and services where even statically compiled binaries that rely on kernel features can't be guaranteed to work.

 

 


1 hour ago, Kisai said:

Major versions are when you break backwards compatibility, and you still need a good reason to break that functionality, which many software packages don't have. Take PHP, for example: it has removed or broken functionality even in minor versions, often for no reason other than refactoring something that wasn't broken.

That's not the case for Linux. Linus bumps the major version whenever he sees fit, usually when the minor version number gets "too big".

 

2 hours ago, Kisai said:

Linux's additional problem is that software built on one OS using the Linux kernel isn't binary compatible with another OS using the same Linux kernel.

It is; it's just a matter of dynamically linked binaries vs statically linked ones. The same happens in every OS. A completely statically linked binary can run on pretty much any kernel version too, no matter the distro.
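
For anyone who wants to see the difference for themselves, here's a minimal sketch (assuming gcc/cc is installed, plus musl-gcc for the static build; any static-capable toolchain works):

/* hello.c - trivial program for comparing dynamic vs static linking.
 *
 * Dynamic build (depends on whatever libc the distro ships):
 *     cc hello.c -o hello-dynamic
 *     ldd ./hello-dynamic        -> lists libc.so.6 and the loader
 *
 * Static build (libc is baked into the binary, no runtime library deps):
 *     musl-gcc -static hello.c -o hello-static
 *     ldd ./hello-static         -> "not a dynamic executable"
 */
#include <stdio.h>

int main(void)
{
    puts("hello from either build");
    return 0;
}

The static binary is bigger, but you can copy it to pretty much any distro running a compatible kernel and it'll just run.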

 



1 hour ago, igormp said:

That's not the case for Linux. Linus bumps the major version whenever he sees fit, usually when the minor version number gets "too big".

At least he is "one person" in control of it, or things would be an even larger mess.

 

Quote

It is; it's just a matter of dynamically linked binaries vs statically linked ones. The same happens in every OS. A completely statically linked binary can run on pretty much any kernel version too, no matter the distro.

 

Not true. The nature of Linux, and of similar OSS operating systems (e.g. FreeBSD), is that they demand that you compile against the system library rather than install thousands of versions of shared libraries. So the consequence is that one OS has version 1.0.0 of OpenSSL and another has 1.0.1, and neither is compatible with the other. I've also seen this with LZMA. Core libraries aren't standardized, nor are they built into the kernel. If someone streamlined a kernel and omitted pieces from it, then there's no guarantee even a static binary will work if those functions aren't available.

 

A perfect static, portable binary for Linux requires some very careful compilation and optimization to avoid calling anything with a chain of library dependencies leading to kernel functions that might be missing. That typically means it has to be written in C only, and any libraries statically compiled to avoid shared/system libraries (license permitting) can't pull in additional library dependencies unless the entire source code of those libraries is compiled statically as well.

 

Which then creates the problem that shared libraries were intended to prevent, where a system library can be updated to fix bugs and security issues. However, even then, from a security point of view, you don't want the shared library to be a point of interception either.

 

 


1 hour ago, Kisai said:

Not true. The nature of Linux, and of similar OSS operating systems (e.g. FreeBSD), is that they demand that you compile against the system library rather than install thousands of versions of shared libraries. So the consequence is that one OS has version 1.0.0 of OpenSSL and another has 1.0.1, and neither is compatible with the other.

I'm not sure that's true. When my Fedora system downloads a libc update, I don't have to redownload every single package. The same goes for OpenSSL and every package using crypto.

 

In my /usr/lib64/ there is only one libc-2.32.so.

 

Linux (the kernel) has a stable userspace ABI. The libc & core libraries are often used as wrappers around it, and those tend to be super stable. But in the case where you want control over your binary and don't want to dynamically link it, we have also seen some cool stuff out of snaps/flatpaks. These bundle the libc/libraries by version, so the developers get to pick their versions. Thanks to OSTree, you can also have file deduplication; after having downloaded the GNOME 3.36 runtime, the 3.38 runtime used by another program will share the files that are identical.
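
As a small aside (my own example, glibc-specific): a dynamically linked program can ask at runtime which libc it actually resolved against, since there's only that one shared copy on the system:

/* libc_version.c - hypothetical example; prints the version of the shared
 * glibc this dynamically linked binary is running against. */
#include <stdio.h>
#include <gnu/libc-version.h>   /* glibc-specific header */

int main(void)
{
    printf("glibc at runtime: %s\n", gnu_get_libc_version());
    printf("glibc headers at build time: %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
    return 0;
}

Update glibc and rerun it: the first line changes, with no recompile needed.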

 

Problems can arise with out-of-date runtimes, but a big warning plus an easy way to upgrade them via a pull request (even as a non-coder) can help a lot.

 

There is the rare case of needing deep hooks into the kernel (like the NVIDIA driver) in which case it breaks every time the kernel devs decide to change them (like with 5.10), but that stability was never assured by them.


27 minutes ago, kvuj said:

There is the rare case of needing deep hooks into the kernel (like the NVIDIA driver) in which case it breaks every time the kernel devs decide to change them (like with 5.10), but that stability was never assured by them.

BUt yOu dOn"T unDErStANd, iT'S "t̶h̶e̶ ̶w̶a̶y̶ ̶i̶t̶'̶s̶ ̶m̶e̶a̶n̶t̶ ̶t̶o̶ ̶b̶e̶ ̶p̶l̶a̶y̶e̶d̶"

OK, it's not like AMD is problem-free either, but boy, that marketing though (*cough* HW Unboxed)...


3 hours ago, Kisai said:

The nature of Linux, and of similar OSS operating systems (e.g. FreeBSD), is that they demand

They don't, that's up to the dev.

 

3 hours ago, Kisai said:

So the consequence is that one OS has version 1.0.0 of OpenSSL and another has 1.0.1, and neither is compatible with the other

As I said, that's the case with dynamically linked binaries. Building a static one means that whichever OpenSSL version you used is "bundled" into your binary, and it will just ignore your system's copy.

 

3 hours ago, Kisai said:

If someone streamlined a kernel and omitted pieces from it, then there's no guarantee even a static binary will work if those functions aren't available.

That's why Linus tries to keep a somewhat stable userspace ABI, meaning that your binary won't simply stop working due to missing syscalls or symbols from one kernel version to another.
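
To illustrate what that stability buys you (a sketch of my own, Linux-specific): even a program that skips the libc wrappers and talks to the kernel directly keeps working across versions, because syscall numbers don't change once assigned:

/* raw_getpid.c - illustration of the stable kernel syscall ABI.
 * SYS_getpid keeps the same number on a given architecture, so this
 * runs unchanged on old and new kernels alike. */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    /* Invoke the syscall directly instead of going through getpid(). */
    long pid = syscall(SYS_getpid);
    printf("pid via raw syscall: %ld\n", pid);
    return 0;
}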

 

3 hours ago, Kisai said:

A perfect static, portable binary for Linux requires some very careful compilation and optimization to avoid calling anything with a chain of library dependencies leading to kernel functions that might be missing. That typically means it has to be written in C only, and any libraries statically compiled to avoid shared/system libraries (license permitting) can't pull in additional library dependencies unless the entire source code of those libraries is compiled statically as well.

I can send you a statically linked program in C or C++ if you want to try it out. Some languages like Go and Rust make it extremely easy too. 

Fun fact: C and C++ are actually somewhat hard to fully statically link due to the usual glibc dependency (you can get around that by using musl or any other sane libc implementation).

 

3 hours ago, Kisai said:

Which then creates the problem that shared libraries were intended to prevent, where a system library can be updated to fix bugs and security issues. However, even then, from a security point of view, you don't want the shared library to be a point of interception either.

Binary size is another problem. Well, it used to be a problem, but not really for desktop/server usage anymore.

 

 

Somewhat related to the topic but not to the argument: you might like to check out Oasis, a fully statically linked Linux system. Also, here you can see some dynamic linking "advantages" being somewhat debunked.



4 hours ago, igormp said:

Somewhat related to the topic but not to the argument: you might like to check out Oasis, a fully statically linked Linux system. Also, here you can see some dynamic linking "advantages" being somewhat debunked.

That actually doesn't surprise me at all. Aside from glibc/libc and openssl/openssh, most of the other stuff on a system seems to work fine statically compiled on open source systems. macOS and Windows, which don't normally use these, still have the same problem with their own libraries that provide the equivalent functionality.

 

Anyway, it doesn't change the argument. You'd probably be able to get more "LTS" out of an OS if all the shared library nonsense were stripped from it as much as possible, and some reverse monitoring of the software were used to prevent software that relies on library versions with defective functions from running unattended.

 


I do wish all software updated at the same time so nobody would need to maintain legacy compatibility.


