
My opinion on Linus and Luke's Linux challenge

Out of the box Linux Mint is best, but really I want to see something different. Let's see Redcore and Funtoo.

 


The problem is that enthusiasts are most of the time 'power users'. They know every nook and cranny of the computer and OS they are working on, which lets them do tasks on pure muscle memory. If I want to continue development on my indie game, I just mash the Windows key, press V, I, S, and Visual Studio Community pops up; after pressing Enter it's off to the races.

 

Now I also rock a netbook with Arch32/LXQt. Out of the box LXQt has a legacy start menu and the standard keyboard shortcuts are different. For instance, the Windows key does nothing by default; you have to use Alt+F1 instead, but that just opens the start menu, and typing does nothing. There is in fact similar and, IMHO, better functionality: press Alt+F2 and boom, type "ka" for Kate and I am happily programming away. (Sadly there is no Visual Studio Code for 32-bit architectures anymore.) Windows search breaks every other day; the LXQt variant just works.

 

So for me the experience on my netbook is actually better. But for someone new to this particular environment the experience is going to be awful. They will think the feature is missing, and even the simple task of launching a rich text editor will take them longer than they are used to.

 

It is like having to drive on the right-hand side of the road when you are used to driving on the left. During my first couple of days in South Africa I would constantly turn on the window wipers when I meant to use the turn signal! But after some time I got used to it, and signalling was again something I did from muscle memory. The point being: it took some effort to become as proficient a driver as I was in my home country. Now imagine this, but the controls are a lot more complex. That is the learning curve a new GNU/Linux user has to go through!

 

It might be useful to make a video just explaining the analogues for common 'Windows power features': shortcuts, commands, and recommended apps for tasks. Although this will most likely be a minefield, because for terminal emulation alone there are more than 20 options, most of them forks with slightly different feature sets. Which is really nice, because you can use the app that best suits your needs, but to be honest the terminal emulator bundled with your desktop environment is probably more than good enough for >99% of users.

 

But the sheer number of options can be quite overwhelming for new users.


@LinusTech Hey, feel free to ignore this, just a short thought. You pay for a Windows license that gives effectively zero real support, right? Have you considered testing the paid support provided by major distros such as Ubuntu? Their whole business model is providing high-quality support for businesses or users with uncommon use cases. I would be curious about the value on offer there, as I have never used it. However, it may or may not wind up a good idea, since "free as in freedom" is more of a selling point of Linux than just the free part.

 

 


On 10/1/2021 at 10:17 PM, mail929 said:

If this happens to catch your attention, hi @LinusTech and @Slick... @GabenJr will be a good filter for you.

[…]

Ranking

If I had to sort my recommendations:

  1. Ubuntu
  2. Pop
  3. Fedora
  4. Manjaro
  5. Elementary
  6. Debian
  7. Mint
  8. SUSE
  9. Arch
  10. Gentoo

 

There are many more Linux distros out there and I'm sure plenty of people will disagree with me, but that's the beauty of Linux, there is something for everybody.

 

I only just joined this forum because I heard about the challenge third-hand; I stopped using YT over a year ago and didn't care to follow any creator that wasn't elsewhere, like Odysee/LBRY or Rumble. Plus I'm exclusively a Linux user, only using Windows once a quarter for a couple of hours when I have to do some data work where I can't risk downloading the data to my machine for security reasons, so Windows content is generally of little to no relevance to me.

 

Anyway, I heard Gentoo somehow made the list and was LMAO thinking: Linus and co using Gentoo? One of their fans must be trolling them.

 

Seeing this post: I have used Linux since 2000 and have been through pretty much every distro back when I was still trying to find what worked best for me (I have played the distro-hopping, desktop-environment-hopping, feeling-cool-changing-boot-loaders, GRUB-screen-background, coverflow, and just-about-every-customisation-you-could-make game; I even created my own distro at one point). I'm much older now, not sure wiser, but definitely more about getting work done and not wasting time faffing around with things I don't need to. Over my time using Linux I have come to the conclusion that for most people there are really only 2 core distros to consider. @mail929 has pretty much covered them all here, though there are things I would dare to change.

 

Firstly, the 2 core distros I would recommend for the typical Windows or Mac user are Ubuntu and Arch, depending on how comfortable they are with OS upgrades. However, I would not use either of these 2 distros directly as a base unless, for example, you are on an AMD GPU and need the proprietary drivers from AMD themselves rather than the open-source Mesa drivers. In both cases I would go with distros based on the core two. So for me that is:

 

Ubuntu (Windows style OS version upgrades)

  1. Pop!_OS,
  2. Elementary,
  3. Mint

and 

 

Arch (rolling OS upgrades)

  1. Garuda (I would recommend the Dr460nized workstation or gaming edition, which uses KDE),
  2. Manjaro,
  3. EndeavourOS

 

I used elementary OS from 2017 and only recently switched to Garuda because I needed more up-to-date Mesa drivers for my Radeon VII, and the team behind elementary was taking too long to roll out the Ubuntu 20.04-based elementary OS 6, putting me at risk of the oibaf PPA ending support for Ubuntu 20.04 long before elementary OS 7 would be out. Added to that, I also got tired of the entire process of upgrading from one Ubuntu version to the next, so the rolling updates of Arch seemed to make more sense.

 

I still have a base Ubuntu 18.04 rig I use as a media-streaming, data-storage, and NextCloud server, and as a crypto-mining rig (where I need the official AMD OpenCL drivers).

 

I would stay well clear of Gentoo, even Gentoo-based distros like Redcore Linux or Sabayon Linux, for the obvious reason that @LinusTech and @Slick are both new to Linux.

 

Fedora is great and would be up there with Ubuntu, but unless you plan on doing enterprise-type software development or engineering, and a lot of the things Wendell @Level1Tech does, I would not consider Fedora appropriate for the average LTT follower.

 

Debian I simply wouldn't recommend, again not because it is bad (Ubuntu was built off Debian), but because it has a habit of being out of date due to its underlying design philosophy of long-term consistency. You can get around this by using backports for more bleeding-edge updates, but that may be too much work for the typical LTT follower, who I suspect is a typical Windows user without the attention span or interest to dig into repo settings. Essentially, Debian is great if you are running a server or a system you are not planning to change or keep current. It is very similar to Fedora in that sense of consistency and stability.

 

SUSE I have not used or gone anywhere near since 2007, because back then it was a mess and just a nightmare to work with. I am sure that has changed and millions of people use it, but it left a bad enough taste in my mouth that I have never gone back to even look at it.

 

Another thing to keep in mind when choosing a distro, from the perspective of PC gamers, is Valve's choice of base for SteamOS. Historically, Valve has favoured Ubuntu, and many hardware vendors providing Linux support have also started with Ubuntu. However, Valve has now moved to Arch for the Steam Deck, so it is very likely that for gaming you will want an Arch-based distro, or at least a distro on a similar software and driver update cycle to Arch.

 

In any case, Linux requires a different mindset than Windows when it comes to getting software and drivers. With Linux it is very rare for the user to go to manufacturer websites for drivers. There are distro and community-maintained repos (PPAs, the AUR, etc.) that have pretty much everything you need. In that sense Linux is more like macOS and its integration with the Apple App Store. The Nvidia vs AMD thing on the GPU side also gets flipped on its head, because AMD GPUs generally just play better with Linux, and that is all Nvidia's fault for refusing to play nice with desktop Linux and open source more broadly. When I was running GPU passthrough, I just landed on AMD = Linux, Nvidia = Windows for minimal headaches.


Just finished today's WAN Show and I thought I'd share a screenshot on the small chance Linus reads this thread...

 

desky.png.4ccb016a25580259943573b49031cd4c.png

Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600MHz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144Hz HDR FreeSync 2 | Ubuntu 20.04.2 LTS |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)


On 10/27/2021 at 4:43 AM, Switchboy said:

The problem is that enthusiasts are most of the time 'power users'. They know every nook and cranny of the computer and OS they are working on, which lets them do tasks on pure muscle memory. If I want to continue development on my indie game, I just mash the Windows key, press V, I, S, and Visual Studio Community pops up; after pressing Enter it's off to the races.

 

 

Thank you! I have had this debate over and over with other Linux users for as long as I can remember, and it is always the same thing: many Linux users forget that they are NOT the typical PC user. The typical PC user only uses their computer for Word, the internet, and streaming video, and is very much a mouse clicker. So when some Linux users make recommendations to people looking to get into Linux, they often end up making completely out-of-touch recommendations that turn off the person coming to them for advice.

 

It is a similar thing, I believe, when it comes to PC building or buying advice. You have a class of PC users running 1440p 144Hz monitors and buying RTX 3080 or RTX 3090 GPUs because they want the butter-smooth frame-rate experience in first-person shooters, while completely ignoring the fact that the vast majority of PC users, including PC gamers, are on 1080p 60Hz monitors and wouldn't necessarily notice any difference even if they bought a 1440p 144Hz monitor. And the typical PC user would not spend a lot of money on a monitor, so even if they got a 1440p panel it would be on the lower end. Personally, I prefer gaming at 4K, again getting as close to a consistent 60fps as possible, because I don't play competitive shooters and tend to play games that are high in visual quality like Tomb Raider, Resident Evil 2/3, Death Stranding, Star Citizen, and Project Cars, since I notice and pay attention to the finer details in shadows, reflections, image sharpness, text, and colour fade. However, I wouldn't recommend other people buy the same type of hardware I did if they don't need it for how they use their PCs.

 

This is why, when people come to me for advice on PC builds and OS choices, the first thing I do is get a sense of who they are and how they will use their PCs. It is also why the learning curve for most people I advise on Linux distros is not as high as most would think. Most people I have advised on Linux generally take to things like a duck to water, because I am not making insane recommendations like Gentoo for someone who only ever used Word and the internet on Windows.

 

As for the shortcuts, a lot of the Windows or macOS shortcuts translate like-for-like in Linux. And if you're using the exact same applications on Linux as you did on Windows or Mac, there is no difference: Eclipse, Blender, DaVinci Resolve, etc. all function in exactly the same way on Linux as they do on Windows. The only place you will find major differences is in the desktop environment or window manager itself (GNOME, KDE, Enlightenment, Xfce, Pantheon, i3, Sway, etc.), and this is where people get confused and tripped up. Desktop environments are customised views of how you interact with applications. Microsoft and Apple defined the Windows and macOS desktop environments a long time ago and stuck with those designs, though MS has changed things a bit with Windows 11, just as they did moving from Windows 95 to Vista and Windows 7. Linux offers a high degree of customisation and so offers a massive array of desktop environments, each with its own quirks and differences from the others. For example, in GNOME the rename shortcut is F2 while in KDE it is F10; Pantheon doesn't permit desktop icons or application shortcuts by default (something I like, given how cluttered people keep their desktops because they save everything there and don't do proper file management); Enlightenment uses right-click on the desktop as the start menu. But this is just design philosophy, and not really any different from going from Windows to macOS or from macOS to Windows.

 

So for a macOS user moving to Linux I would recommend Garuda Linux (KDE) or elementary OS (Pantheon), because both would make it easier to hit the ground running. For Windows users there are more options, but it is the same idea: pick a distro that is simple to install, and pick a desktop environment that feels similar to Windows even if it doesn't look similar; LXQt and GNOME are good choices here. I would never recommend tiling window managers like i3, bspwm, or Sway until the person is comfortable tinkering with Linux.


1 hour ago, Master Disaster said:

Just finished today's WAN Show and I thought I'd share a screenshot on the small chance Linus reads this thread...

 

desky.png.4ccb016a25580259943573b49031cd4c.png

This doesn't look like Dolphin. Linus said it works in Nautilus (or whatever GNOME ships) but doesn't in Dolphin.

 

It is a known issue, but it's a decade old, lol. Although the right way, through Polkit, is slowly getting there.


20 minutes ago, gudvinr said:

This doesn't look like Dolphin. Linus said it works in Nautilus (or whatever GNOME ships) but doesn't in Dolphin.

 

It is a known issue, but it's a decade old, lol. Although the right way, through Polkit, is slowly getting there.

Wait, what distro is Linus using? I use Dolphin on Garuda Linux and there is a "Root Actions" service menu in KDE. The attached screenshot is not from my system but from the KDE site, so I'm surprised it's even an issue.

48411-1.jpg


2 hours ago, thedarthtux said:

Wait, what distro is Linus using? I use Dolphin on Garuda Linux and there is a "Root Actions" service menu in KDE. The attached screenshot is not from my system but from the KDE site, so I'm surprised it's even an issue.

48411-1.jpg

It hasn't been possible to run Dolphin as root since 2017, when IIRC a vulnerability was found. It might be some feature in your distro.


21 minutes ago, gudvinr said:

It hasn't been possible to run Dolphin as root since 2017, when IIRC a vulnerability was found. It might be some feature in your distro.

Oh right! Yeah, completely forgot about that. For some reason I was thinking about opening a terminal as admin from the folder, probably reflex, since that's what I use the root actions for all the time now that I'm on Garuda and the AUR, and I'm no longer manually installing or copying things into /opt like I used to on elementary OS. I typically have another file manager like Thunar or Nautilus to open as root. Not sure why I kept Dolphin as my default file manager, now that I'm thinking about it.


  • 2 weeks later...
On 10/21/2021 at 8:33 PM, finest feck fips said:

Can you ELI5 why FreeBSD jails are supposed to be more secure than Docker?

Jails were developed as a way to contain the omnipotent root in Unix, which was a criticism of Unix at the time (only root can do stuff, but root can destroy the box). Essentially it's OS-level virtualization to run untrusted or insecure code (at the time, sendmail and ftpd) without developing fine-grained access controls. It's a slick solution to a hard security problem.

 

Because it was built as a security tool and developed in the late 90s, it is deeply baked into FreeBSD and is a primary construct.

The same is true for Solaris Zones (Sun's take on Jails, but better).

 

Docker took an entirely different road to being born. It took two concepts in the Linux kernel designed for different things (namespaces and cgroups) and built a framework for OS-level virtualization on top of them, aimed at rapid deployment and development. Security was an afterthought in its initial design and was added later (via various SELinux modules). None of these independent groups talk very well with each other, and Docker has had some major problems because the SELinux defaults are sometimes configured to allow any user in Docker to mount the host OS's drive (What?! I know, trust me, it's dumb). Containers are not a primary construct in the Linux kernel; nowhere in the code will you find something called a container or any reference to OS-level virtualization. It's all an upstack construct on top of the kernel. All the security is wrapped around it like an egg.
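You can see the "no container object, just namespaces and cgroups" point from any Linux shell; this is just an illustration using standard procfs paths, nothing Docker-specific:

```shell
# There is no "container" object in the kernel: every process simply belongs
# to a set of namespaces and cgroups. Inspect your own shell's memberships:
ls /proc/self/ns        # namespace handles, e.g. cgroup ipc mnt net pid user uts
cat /proc/self/cgroup   # which cgroup hierarchy limits this process
```

A "container" is nothing more than a process whose entries here differ from the host's.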

In FreeBSD and Solaris/Illumos that's not the case: every tool in the system is aware of them (if you do an ls or a top on FreeBSD you see the jail processes and their jail ID numbers, for instance). Really it's an artifact of how Linux is designed, by many groups all scratching their own itch, whereas FreeBSD and Illumos are each one group making one OS.


As it sits right now, most sysadmins and experts will not run Docker outside of a VM, and that defeats the purpose and the performance gains of OS-level virtualization. It's not really there yet, and the wonderful world of OS-level virtualization has not yet been realized by most people (unless they are using FreeBSD or Illumos).

ELI5: Jails and Zones were made to be secure; security was the problem they were trying to solve. Docker was not made to be secure from the start; rapid deployment was the problem it was focused on.

"Only proprietary software vendors want proprietary software." - Dexter's Law


31 minutes ago, jde3 said:
Spoiler

Jails were developed as a way to contain the omnipotent root in Unix, which was a criticism of Unix at the time (only root can do stuff, but root can destroy the box). Essentially it's OS-level virtualization to run untrusted or insecure code (at the time, sendmail and ftpd) without developing fine-grained access controls. It's a slick solution to a hard security problem.

 

Because it was built as a security tool and developed in the late 90s, it is deeply baked into FreeBSD and is a primary construct.

The same is true for Solaris Zones (Sun's take on Jails, but better).

 

Docker took an entirely different road to being born. It took two concepts in the Linux kernel designed for different things (namespaces and cgroups) and built a framework for OS-level virtualization on top of them, aimed at rapid deployment and development. Security was an afterthought in its initial design and was added later (via various SELinux modules). None of these independent groups talk very well with each other, and Docker has had some major problems because the SELinux defaults are sometimes configured to allow any user in Docker to mount the host OS's drive (What?! I know, trust me, it's dumb). Containers are not a primary construct in the Linux kernel; nowhere in the code will you find something called a container or any reference to OS-level virtualization. It's all an upstack construct on top of the kernel. All the security is wrapped around it like an egg.

In FreeBSD and Solaris/Illumos that's not the case: every tool in the system is aware of them (if you do an ls or a top on FreeBSD you see the jail processes and their jail ID numbers, for instance). Really it's an artifact of how Linux is designed, by many groups all scratching their own itch, whereas FreeBSD and Illumos are each one group making one OS.

As it sits right now, most sysadmins and experts will not run Docker outside of a VM, and that defeats the purpose and the performance gains of OS-level virtualization. It's not really there yet, and the wonderful world of OS-level virtualization has not yet been realized by most people (unless they are using FreeBSD or Illumos).

ELI5: Jails and Zones were made to be secure; security was the problem they were trying to solve. Docker was not made to be secure from the start; rapid deployment was the problem it was focused on.

Thanks! This is the explanation I was looking for.

Do Illumos and FreeBSD also use policy systems to manage and configure Jails and Zones, just with better defaults, or are they secure-by-default in a way that doesn't require any policy system at all?


21 minutes ago, finest feck fips said:

Thanks! This is the explanation I was looking for.

Do Illumos and FreeBSD also use policy systems to manage and configure Jails and Zones, just with better defaults, or are they secure-by-default in a way that doesn't require any policy system at all?

Yes-ish, not really a global policy. They are controlled through sysctl and the jail.conf file (Illumos has tools for this, e.g. zonecfg).

 

It's more that every part of the OS is aware of them; they aren't really a bolt-on. Jails are pretty secure by default. I can't think of anything they really missed, but the system security might need to be relaxed a little to do certain things, depending on the app you want to run.
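For illustration, a minimal jail.conf entry might look something like this (the jail name, path, and address are placeholders, not taken from the thread):

```
# /etc/jail.conf -- hypothetical minimal jail definition
www {
    host.hostname = "www.example.org";      # hostname seen inside the jail
    path = "/usr/local/jails/www";          # jail root filesystem
    ip4.addr = 192.0.2.10;                  # placeholder address (TEST-NET range)
    exec.start = "/bin/sh /etc/rc";         # normal FreeBSD startup inside the jail
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;                            # give the jail its own devfs
}
```

With a file like that, `service jail start www` brings the jail up; there is no separate policy engine involved.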

I'd say try Bastille on FreeBSD: https://bastillebsd.org/

It's a very easy manager and very Docker-esque. It was created by a SaltStack developer, so it's jails for rapid deployment; basically it's Docker for jails. We use it to spin up test systems very quickly or to deploy something we need for a temporary fix. Once it's all set up you can get a new system deployed in about 30 seconds. It's soo niiicce when you need a microservice right now.

"Only proprietary software vendors want proprietary software." - Dexter's Law


17 minutes ago, jde3 said:

Yes-ish, not really a global policy. They are controlled through sysctl and the jail.conf file (Illumos has tools for this, e.g. zonecfg).

That's kinda better. Comprehensive policy systems usually strike me as complex and verbose. Sysctl and jail.conf sound nice and simple.

 

18 minutes ago, jde3 said:

I'd say try Bastille on FreeBSD https://bastillebsd.org/

I'll give it a try! Sounds cool. I don't have a real use case atm but maybe I'll use a FreeBSD VM for development for some courses I'm starting next week, then mess with Bastille while I'm on there.

 

Embarrassing question; all the purists will hate it and warn me about how it'll break scripts I might otherwise be able to use from the community, etc., but: what's the easiest way to install GNU coreutils, find, sed, and grep on FreeBSD (just) for my login shell (without prefixes like gls, gfind, gsed, etc.)?


16 minutes ago, finest feck fips said:

That's kinda better. Comprehensive policy systems usually strike me as complex and verbose. Sysctl and jail.conf sound nice and simple.

A lot of FreeBSD is that way; usually it's just a file. Simple.

 

16 minutes ago, finest feck fips said:

Embarrassing question; all the purists will hate it and warn me about how it'll break scripts I might otherwise be able to use from the community, etc., but: what's the easiest way to install GNU coreutils, find, sed, and grep on FreeBSD (just) for my login shell (without prefixes like gls, gfind, gsed, etc.)?

Running jails in a VM sort of defeats the purpose of OS-level virtualization, but if you don't have a server and just want to play around with it, it's fine.

And no, FreeBSD uses a BSD userland (like macOS). Yes, you can install GNU coreutils, but the commands are prefixed with g as you mentioned. The only options I can think of are to rewrite the scripts for the BSD userland so they work without the extra flags GNU added, lol... OOORR... add aliases to your .bash_profile for them.

alias ls=gls

alias find=gfind
etc. (do not do this for root)

"Only proprietary software vendors want proprietary software." - Dexter's Law


11 minutes ago, jde3 said:

And no FreeBSD uses BSD userland (like MacOS).

I know; using macOS is how I know I want the GNU coreutils, hahaha! Aliases should be fine, since it's mostly for interactive use that I prefer them. I want my bloat, baybeeeeeeee!!


It's mostly for interactive use anyway, since that's kind of muscle memory at this point. For scripts, learning the differences won't be a frustration.

Next week I'll hit you up to ask why the system won't boot after I install systemd. 😉


6 hours ago, jde3 said:

Docker took an entirely different road to being birthed. It took two different concepts in the Linux kernel designed for different things (namespace and cgroups) and made a framework for OS level virtualization in the aim to provide rapid deployment and development. Security was an afterthought in it's initial design and added later (via various selinux modules) None of these independent groups talk very well with each other and docker has had some major problems because selinux defaults sometimes are configured to allow any user in docker to mount the hosts OS's drive. (What?! I know, trust me it's dumb) Docker or containers are not a primary construct in the Linux kernel and nowhere in the code will you find something called a container or any reference to OS level virtualization. It's all a upstack construct on top of the kernel. All the security is wrapped around it like an egg.

This isn't a very good summary of what Docker is/does, and it confuses SELinux changes made to add security features to Docker with Docker itself. Also, the default SELinux + Docker permissions do not allow you to mount / into your container. You're thinking of non-SELinux distros; it's there that the default Docker permissions allow sudo users and members of the "docker" group (if it exists) to mount / into any container they like.

 

That said, Docker is not built around the idea of running untrusted software. You should never run any binary inside a Docker container that you wouldn't run outside of it. The Docker daemon runs as root, so the security of Docker is roughly equivalent to that of any other process run as root.

 

Docker was and is used to let developers bundle and run software with all its dependencies in one place. This is why container orchestration has become such a big deal lately: it lets some of your dependencies be other containers. The idea is that nginx:latest runs *exactly* the same on my machine as it does in production. This is Docker's big feature, and what makes Docker/containers so popular.
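That "dependencies as other containers" workflow is usually captured in a compose file; a hypothetical minimal example (the image tags, port, and password are placeholders, not from this thread):

```yaml
# docker-compose.yml -- one app container depending on another container
services:
  web:
    image: nginx:latest        # identical image on a laptop and in production
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: mariadb:10.6
    environment:
      MARIADB_ROOT_PASSWORD: change-me   # placeholder secret
```

`docker compose up` then starts both containers with the same images and wiring on any machine, which is exactly the reproducibility point above.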

 

Nobody runs nginx, node, or mariadb in a Docker container because they don't trust the software enough to install it directly onto their machine from the repos. They run it in a Docker container for the orchestration, development, and testing benefits it brings.

 

FreeBSD jails and other virtualization tools just have different goals. Saying that Docker is bad or insecure because it doesn't offer the same level of process isolation as FreeBSD jails completely misses the point of what both tools are trying to do.


On 11/12/2021 at 4:42 PM, maplepants said:

This isn't a very good summary of what Docker is/does, and it confuses SELinux changes made to add security features to Docker with Docker itself. Also, the default SELinux + Docker permissions do not allow you to mount / into your container. You're thinking of non-SELinux distros; it's there that the default Docker permissions allow sudo users and members of the "docker" group (if it exists) to mount / into any container they like.

 

That said, Docker is not built around the idea of running untrusted software. You should never run any binary inside a Docker container that you wouldn't run outside of it. The Docker daemon runs as root, so the security of Docker is roughly equivalent to that of any other process run as root.

 

Docker was and is used to let developers bundle and run software with all its dependencies in one place. This is why container orchestration has become such a big deal lately: it lets some of your dependencies be other containers. The idea is that nginx:latest runs *exactly* the same on my machine as it does in production. This is Docker's big feature, and what makes Docker/containers so popular.

 

Nobody runs nginx, node, or mariadb in a Docker container because they don't trust the software enough to install it directly onto their machine from the repos. They run it in a Docker container for the orchestration, development, and testing benefits it brings.

 

FreeBSD jails and other virtualization tools just have different goals. Saying that Docker is bad or insecure because it doesn't offer the same level of process isolation as FreeBSD jails completely misses the point of what both tools are trying to do.

I stand by my comments. Both tools provide OS-level virtualization. Both can be used the same way, for the same reasons you want containers, but the difference is that nobody trusts Docker for sandboxing or security isolation, whereas jails were designed to do exactly that 20 years ago.

FreeBSD Jails can do 1:1 identical rapid deployments the same way as docker.
EX: bastille template TARGET bastillebsd-templates/nginx

 

In fact, since FreeBSD has a backwards-compatible API, you can jail old versions of FreeBSD on newer kernels. You can run a 20-year-old FreeBSD 4 jail on FreeBSD 13's kernel. So if you have that one old system your company can't live without, you can tar it up, move it to a new system, and run it in a secure layer.

 

The real problem is that the Linux kernel team has not done the hard work to fully bake it into the system and make this a reality, so as it sits there is no true container in Linux, just an upstack abstraction. That's why people run them in VMs. The VM isn't needed for Jails or Zones, because they did the work to make them part of the system.
 



 

"Only proprietary software vendors want proprietary software." - Dexter's Law


2 hours ago, jde3 said:

Both can be used the same way for the same reasons

No, they really can't. The purpose of FreeBSD jails is completely different from the purpose of Docker. For one, you can run software you don't trust in a FreeBSD jail, and Docker was never intended to do that. The reason that the nginx container has over 1 billion downloads is not that people don't trust the version in their distro's repos. It's because of the other features that Docker has.

2 hours ago, jde3 said:

FreeBSD Jails can do 1:1 identical rapid deployments the same way as docker.
EX: bastille template TARGET bastillebsd-templates/nginx

This is completely untrue. FreeBSD Jails don't offer anything remotely similar to docker-compose, docker swarm, Docker Hub, kubernetes, or the Docker registry.

 

Of course this isn't to say FreeBSD Jails are bad. It's just that the use case for FreeBSD Jails and for Docker are wildly different. 

2 hours ago, jde3 said:

The real problem is the Linux kernel team has not done the hard work to fully bake it into the system and make this a reality, so as it sits there is no true container in Linux.. just an upstack abstraction. 

Not to be a broken record, but this is also totally wrong. Not only are the primitives lxc builds on (namespaces and cgroups) part of the Linux kernel, lxc used to be how Docker containers were run. These days the most popular container runtime using lxc directly is lxd. However, there's technically nothing stopping you from using lxc directly. Personally, I prefer the interface that something like Docker or lxd provides, but if you just want to use what the kernel provides you can roll your own lxc container runtime without installing anything beyond the gcc tools you'd need to compile your code.
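On the "use what the kernel provides" point, here's a hedged taste of those primitives (this assumes util-linux's `unshare` is installed, as on most distros): no Docker, no lxd, just kernel namespaces.

```shell
# -U: new user namespace, -r: map the current user to root inside it,
# -u: new UTS (hostname) namespace.
# Inside the namespaces we can change the hostname without privileges.
unshare -Uru sh -c 'hostname sandbox; hostname'

# The change was confined to the namespace; the host is untouched.
hostname
```

Container runtimes are, at their core, tooling layered over exactly these system calls (plus mount/PID/network namespaces and cgroups).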

 

I'm trying not to be too negative here, because I don't want this to feel like a post that just focuses on the specific technical things you got wrong. The thing you're kind of advocating for here ("what if you had container-like orchestration, but full VM isolation and security?") does in fact exist. It's called multipass, and it leverages cloud-init to let you construct a complete VM solution that aims to rival the standard docker CLI. The forums have a good number of pre-defined images, plus some official appliances, and honestly the whole thing is cool as hell. It's much more flexible than something like firecracker, because it doesn't require you to compile your own kernel for each image.
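To make the cloud-init angle concrete, a hedged, hypothetical user-data sketch (the package and command are illustrative):

```yaml
#cloud-config
# Hypothetical multipass user-data: provision a full VM with nginx
# at first boot.
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
```

Launched with something like `multipass launch --name web --cloud-init user-data.yaml`, this gives container-style repeatable provisioning with full VM isolation.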

 

If you've held back on using containers because you want the isolation a VM provides, then you should check out the multipass project.

 


1 hour ago, maplepants said:

No, they really can't. The purpose of FreeBSD jails is completely different from the purpose of Docker. For one, you can run software you don't trust in a FreeBSD jail, and Docker was never intended to do that. The reason that the nginx container has over 1 billion downloads is not that people don't trust the version in their distro's repos. It's because of the other features that Docker has.


 

Just because fewer people are using jails, and as a result there are fewer templates and tools out there, does not mean the technology is different. It's not my fault the Linux world is stupid, not using better technology, not fixing the flaws in Docker / the Linux kernel, placing containers in VMs like cavemen, and accepting this as perfectly normal.

From the base level, jails can do everything docker can do and more. There is just a smaller community around them.

 

I personally use them to spin up on-demand microservices; how is that not like Docker? The use case is not different, only your perception of them is different.

 

1 hour ago, maplepants said:

If you've kind of held back on using containers because you want the isolation a vm provides then you should check out the multipass project

Multipass is a KVM frontend! I'm actually stunned you said that; it's clear you just don't get it. If you've held back on containers because Docker is broken and does not provide isolation, maybe look for a real solution like jails or Zones and get a complete OS-level virtualization stack, not just cgroups and namespaces with a bunch of userland duct tape on top.

 

The Linux community is very frustrating. Jails are 20 years old, and here you are in 2021 telling people to use a VM. Get out of your echo chamber once in a while; there is other stuff out there. FreeBSD (and Solaris) solved the container problem a long time ago.

"Only proprietary software vendors want proprietary software." - Dexter's Law

Link to comment
Share on other sites

Link to post
Share on other sites

1 hour ago, jde3 said:

It's not my fault the Linux world is stupid, not using better technology, not fixing the flaws in Docker / the Linux kernel, placing containers in VMs like cavemen, and accepting this as perfectly normal.

This is such a weird take. I get that you value isolation, and the ability to run untrusted software, but what makes you think Docker is inherently bad because those are not the primary problems it's trying to solve?

 

1 hour ago, jde3 said:

From the base level, jails can do everything docker can do and more. There is just a smaller community around them.

They manifestly can't, and it's odd that you're so dedicated to pretending they can. The big one that jails cannot do (because they're not designed to solve this problem) is orchestration.

 

The reason why nothing like kubernetes exists for FreeBSD isn't some kind of failing in the FreeBSD community. It's because that's just not the problem that FreeBSD jails are made to solve.

 

1 hour ago, jde3 said:

If you've held back on containers because docker is broken and does not provide isolation

Picking one particular feature of your favourite virtualization tool and then deciding Docker is broken because it's not there is bizarre. Docker also doesn't provide much GUI support, it doesn't do emulation; there are any number of things it doesn't do.

 

No one piece of software can be all things to all people.


21 hours ago, maplepants said:

This is such a weird take. I get that you value isolation, and the ability to run untrusted software, but what makes you think Docker is inherently bad because those are not the primary problems it's trying to solve?

 


Because Docker containers are extraordinarily inefficient to run in a VM. Watch the video I posted. I'm an Ops person; I have to live with this shit. It's a half-baked solution at the system level. The Linux kernel team does not want to implement real container infrastructure and doesn't value OS-level virtualization, yet everyone uses this garbage upstack stuff because that's what is popular. There is no reason the upstack stuff couldn't work on jails, which have the system-level support for true OS-level virtualization.

 

You can do orchestration with jails. They work fine with Ansible and Puppet. The deployment for us is managed by Bastille (though we use custom jail configurations; it's pretty much the same thing).

 

And actually, people have built large VPCs with FreeBSD, but it's an in-house thing at various companies; there is no Docker-like company pushing/selling this stuff. They are using bhyve here (most people want Linux, so bhyve), but it can also do on-the-metal containers with OS-level virtualization, and that is the only difference between this and EC2. Oh, and it's faster and way more efficient.


 

"Only proprietary software vendors want proprietary software." - Dexter's Law

Link to comment
Share on other sites

Link to post
Share on other sites

1 hour ago, jde3 said:

Because Docker containers are extraordinarily inefficient to run in a VM.

Yeah, Docker containers are not good VMs. That's true and I'm agreeing with you on that. The thing is though, they're not trying to be good VMs, so who cares? Running nginx in a Docker container isn't supposed to be an alternative to running it in a VM, it's an alternative to installing nginx directly from your distro's repos. 

 

1 hour ago, jde3 said:

I'm an Ops person; I have to live with this shit. It's a half-baked solution at the system level. The Linux kernel team does not want to implement real container infrastructure and doesn't value OS-level virtualization, yet everyone uses this garbage upstack stuff because that's what is popular. There is no reason the upstack stuff couldn't work on jails, which have the system-level support for true OS-level virtualization.

If you're an Ops person, you really should understand the containerization use case better. I encourage you to look into Docker but to drop your weird hangups about process isolation and how Docker not being a VM solution must make it useless. Try to understand Docker for what it is, and what goals they're actually attempting to solve. You don't have to come away thinking it's good software, but it's super weird for you to hate on Docker because it doesn't solve a problem that it wasn't even meant to solve. 


1 hour ago, maplepants said:

Yeah, Docker containers are not good VMs. That's true and I'm agreeing with you on that. The thing is though, they're not trying to be good VMs, so who cares? Running nginx in a Docker container isn't supposed to be an alternative to running it in a VM, it's an alternative to installing nginx directly from your distro's repos. 

 

If you're an Ops person, you really should understand the containerization use case better. I encourage you to look into Docker but to drop your weird hangups about process isolation and how Docker not being a VM solution must make it useless. Try to understand Docker for what it is, and what goals they're actually attempting to solve. You don't have to come away thinking it's good software, but it's super weird for you to hate on Docker because it doesn't solve a problem that it wasn't even meant to solve. 

I'm well aware. I don't say this because I don't understand them but because I understand them all too well.

 

When we use the term VM here, we are talking about hardware-level virtualization exclusively, whereas Docker is an (albeit flawed) form of OS-level virtualization. My entire point is that with Docker you have to use both, and that is terribly inefficient from an operations standpoint. What is faster and leaner on resources, one OS kernel or two?

 

I'm glad the developers can deploy software quickly, yay devs. But their choice in platform isn't good when alternatives exist that offer better performance and better multitenancy.

"Only proprietary software vendors want proprietary software." - Dexter's Law

Link to comment
Share on other sites

Link to post
Share on other sites

1 hour ago, jde3 said:

When we use the term VM here, we are talking about hardware-level virtualization exclusively, whereas Docker is an (albeit flawed) form of OS-level virtualization.

Docker is not a VM solution. Containerization is not the same as virtualization. From one operations guy to another: you really should look into learning about containerization.

 

Containerization platforms are not just failed virtualization technologies. They have completely different goals, and that's why the feature set doesn't line up one to one.

 

1 hour ago, jde3 said:

What is faster and leaner on resources, one OS kernel or two?

Docker Engine and all its containers share the host's kernel on Linux, and share a single Linux kernel with Docker Desktop on macOS and Windows.

I really encourage you to read about containerization, because you're getting absolutely basic stuff wrong here in addition to your complaints just fundamentally misunderstanding the problem Docker, lxc, lxd and other solutions are trying to solve.

 

1 hour ago, jde3 said:

I don't say this because I don't understand them but because I understand them all too well.

I don't mean this in a rude way, but based on your posts here you don't know or understand much about Docker & containerization at all.

 

  • You've complained that Docker doesn't make a good VM, which it isn't trying to be
  • You've said that Ansible and Puppet offer the same functionality as kubernetes and they just don't (check out the ansible role for kubernetes and compare the problems solved there with the problems solved by Ansible and Puppet)
  • You didn't know that lxc was built into the Linux kernel
  • You didn't know that containers share the host's kernel and hardware drivers (this is actually a huge benefit for containers like TensorFlow which rely on specific hardware drivers)
  • You think the main benefit of containers is fast deployments, when it really isn't

Obviously in your current role containerization doesn't play a large part, and your lack of knowledge there doesn't hold you back. But I'd encourage you to actually take the time to look into containerization more deeply. If you're on the job market as an Ops guy, a limited knowledge of containerization will really hold you back. You will not be able to convince shopify or some other tech company to completely ditch containers just because you don't have a deep knowledge of them.

