Everything posted by finest feck fips

  1. As someone who uses most of these applications on his daily driver for work, there are a lot of problems with many of these utilities:
     - PowerToys is quite buggy and slow
       - Run has enough lag that it's little better than the Start Menu
       - FancyZones seems related to an issue with the window manager where normal window dragging lags so much the system is unusable until DWM.exe is killed, requiring periodic restarts of that process or logging out and back in
       - global microphone mute sometimes doesn't work right, making it untrustworthy for work contexts
     - Windows Subsystem for Linux is riddled with bugs
       - accessing Windows files from the Linux guests is atrociously slow
       - no way to pass hardware to the Linux guests
       - resize/drag glitches, multi-monitor issues, and random crashes with GUI apps via WSLg
       - wsl --terminate and wsl --shutdown are not always safe to run (can result in filesystem corruption on the guests)
       - weird filename/case-sensitivity incompatibilities with git checkouts depending on whether they were originally made on the Linux side or the Windows side
       - huge memory leak that brings the whole system to a grinding halt when many Docker containers are running on a Linux guest
       - Docker Desktop's default behavior breaks Docker on some distros
       - the version of systemd used for systemd-enabled systems and WSLg is kinda broken (this is actually a systemd bug, not really Windows' fault)
     - Windows Terminal is pretty good, but
       - decent performance requires GPU acceleration
       - there are issues with accessing a single WSL instance from both privileged and unprivileged contexts at the same time
       - Quake Mode is subpar: you can't even control the size of the dropdown window... you're better off using a third-party hack like the Windows Terminal Quake app
     - Windows' SSH implementation is hugely deficient in ways that are undocumented except as GitHub issues
       - ssh-agent behavior is super weird and requires admin privileges to set up; it's not clear if it's even possible to have multiple agents
       - sharing an SSH agent with WSL guests is a huge pain in the ass and may require configuration of complicated third-party apps
       - multiplexing (ControlMaster) is completely unsupported, just missing (there's a small sketch of what that costs you at the end of this post)
       - remote forwarding to Unix sockets doesn't work right
       - the version that ships with Windows doesn't support modern encryption on the client, so you need to install a separate copy of Microsoft's port to use, e.g., OpenSSH's native 2FA-enabled key types
       - configuring authorized_keys and other things for sshd has to be done twice for admin accounts, plus various other quirks
     - Winget and Chocolatey aren't real package managers and have various deficiencies
       - both are just installer-wranglers
       - no granular packaging
       - little packaging of development libraries
       - both require administrator privileges way too much
       - neither can really ensure that installations are non-interactive
       - upgrades and uninstallations don't work reliably and can leave dangling loose ends like normal Windows uninstallers (because all they do is invoke the normal uninstallers)
     - PowerShell is basically unusable as a login shell
       - compared to bash, zsh, or fish, it is unbelievably slow, especially if you try to grow a reasonably comfortable or efficient-to-use config
       - oh-my-posh is anemic compared to a real shell framework; it's just a prompt, and it's slow as hell since it's implemented in PowerShell. Just use Starship instead!
       - PowerShell profile configuration is quite brittle as well as overcomplicated
       - background tasks and settings that work fine in interactive shells can completely b0rk your profile for non-interactive use
       - way too many places for profiles to be configured for a given user (like a dozen or more)
       - fucking Documents folder used for config files (???)
       - no fscking job control (WTF?)
     Having them around is better than nothing, but they definitely still suck. A Linux or Unix noob probably won't notice the differences, but longtime Linux workstation users and sysadmins are likely to stumble over a ton of missing features. As for what was the last good version of Windows: there's never been one, from the ones you used to launch from MS-DOS all the way to Windows 11. They all suck.
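     To make concrete what the missing ControlMaster support costs you: with a stock OpenSSH client (on Linux, or inside WSL), a few lines of ~/.ssh/config get you connection multiplexing, so repeated ssh/scp/rsync invocations to the same host reuse one authenticated connection. The host name and socket path below are just placeholders:

         Host myserver
             HostName myserver.example.com
             ControlMaster auto
             ControlPath ~/.ssh/cm-%r@%h:%p
             ControlPersist 10m

     With the Windows port you simply don't get this, so in practice you end up doing your serious SSH work from inside WSL anyway.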
  2. What does `diskutil list` show on the terminal of macOS recovery mode? I wonder if you just need to fsck your old filesystem, mark it bootable, or reset your nvram or something.
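     If you want to poke at it from the recovery-mode Terminal, the rough shape would be something like this. It's only a sketch: it assumes an Intel Mac and that the old system volume shows up mounted as "Macintosh HD"; substitute whatever diskutil actually lists on your machine.

         # see what disks and volumes the machine can actually find
         diskutil list

         # check the old system volume and repair it if needed
         diskutil verifyVolume "/Volumes/Macintosh HD"
         diskutil repairVolume "/Volumes/Macintosh HD"

         # mark it bootable again (Intel Macs)
         bless --mount "/Volumes/Macintosh HD" --setBoot

     Resetting the NVRAM doesn't need the Terminal at all on an Intel Mac: reboot and hold Cmd+Option+P+R until the machine restarts a second time.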
  3. They didn't, actually. What year their distribution is from is a totally different matter than the question of in what year the software they want to run came out. Now that I've had time to check myself, I've learned that qemu 4.2.0 came out in 2019, three years before Ubuntu 22.04 was released. Which helps make clear that the quoted claim is nonsense: qemu 4.2.0 has never, ever been in the repos for Ubuntu 22.04, because it was obsolete years before the release process for that version of Ubuntu even started. If you're still around: you probably can't just install those versions of mtools and qemu on Ubuntu 22.04 as native debs, since there will likely be versioning conflicts with other parts of your system. But the desired versions of qemu and mtools can both be installed on your current distro via Nix. If you're trying to use qemu-kvm, there may be compatibility issues between that version of qemu and the version of KVM on your host system, plus quirks related to library paths and whatnot. Feel free to post back if you have specific issues with installing those versions in that way.
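     For reference, the Nix route looks roughly like this. The nixpkgs revision is a placeholder: you'd want to hunt down an archived revision that actually carried qemu 4.2.0 and the mtools version you need (somewhere around the 20.03-era branches), rather than whatever is current.

         # with Nix installed on Ubuntu 22.04, open a shell containing
         # qemu and mtools from a pinned, older nixpkgs revision
         nix-shell \
           -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<revision-with-qemu-4.2.0>.tar.gz \
           -p qemu mtools

         # inside that shell, both are on PATH:
         qemu-system-x86_64 --version

     The point of pinning is that the old qemu and its libraries live under /nix/store and don't fight with the packages your distro's apt manages.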
  4. They do, and they have for many, many years. In Ubuntu's case, the packages which users are warned about when they attempt to uninstall them are those whose uninstallation would render the package manager inoperable. But there has been some debate going back to the first incident about whether such warnings are sufficient. And clearly, for some users, they are not effective. This is not accurate, neither with respect to what meta-packages are nor with respect to how pacman treats them. Meta-packages are just packages, themselves containing no files to install, whose sole function is to pull in other packages by declaring those others as dependencies. Whether something is a meta-package and whether or not it is somehow ‘protected’ from uninstallation are completely orthogonal. Moreover, comparing RPM to APT is a category error— each tool sits at a different layer in its respective package management stack. APT is comparable to other high-level package management tools, like Zypper, YUM, and DNF. The counterpart of the rpm program in the Debian-based world is dpkg, not APT.
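     You can see both points for yourself on any Ubuntu desktop install; ubuntu-desktop is the usual example of a meta-package:

         # a meta-package is (almost) all dependencies and (almost) no files
         apt-cache depends ubuntu-desktop   # long list of Depends/Recommends
         dpkg -L ubuntu-desktop             # little beyond /usr/share/doc entries

         # and the layering: dpkg is the low-level tool, APT drives it
         dpkg -l hello               # query the dpkg database directly
         sudo apt-get install hello  # high level: resolve deps, fetch, then call dpkg

     The rpm-vs-DNF split on Fedora works the same way.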
  5. Sure. The points that may be of interest here, reiterated from discussions of what happened to Linus during the Linux challenge, are these:
     1. This isn't really about a problem with Pop!_OS, per se.
     2. When you use the system package manager, you are operating on your entire system as an interconnected whole.
     3. The expectation with Unix and Linux tools is that when a user requests something, the system is expected to make it happen, whatever that takes. In the case of package managers, that means computing and then offering various changes that meet the requirements given by the user/administrator, and expecting the user/administrator to evaluate the tradeoffs involved in those solutions.
     It vindicates the decisions by distribution developers to explore new systems that steer end users away from the system package manager and toward containerized or bundle-based app deployment, when their goal is to accommodate 'power users' who are either new to the concept of systemwide package management or prefer a BSD-style separation between app installation and operating system deployment. Users who are accustomed to administering their systems via package management (people like you and me) often prefer to avoid containerized/bundled systems like that (Snap, AppImage, Flatpak) for perfectly good reasons (e.g., disk use, platform immaturity, load times, ease of OS customization, as-of-yet-unresolved sandboxing quirks, etc.). But Windows and macOS users unaccustomed to having to consider (2) and (3) will likely prefer those systems that use an immutable base system and lock things down in various other ways in order to eliminate footguns.
  6. The latest release of Ubuntu (incidentally an LTS release) is currently affected by exactly the same kind of dependency issue as Linus Sebastian encountered in Pop!_OS some months ago. On a freshly installed system which has not yet been updated, trying to install an unlucky package produces a dependency conflict that APT offers to solve for the user by uninstalling their desktop environment. (In this case, I think Ubuntu actually doesn't even make the user type the infamous warning message, since it doesn't mark the ubuntu-desktop meta-package as essential.)
  7. I'm not sure. AMD had to rewrite their entire driver stack around a new software architecture in order to open-source their drivers, and it took them many, many years to get it done. The way they have implemented their drivers is absolutely a factor, but they had to rewrite their whole driver stack to implement them that way in the first place. I'm not sure what the motivation was. I think of course it wasn't purely about doing a service for Linux users. But they definitely didn't do it out of sheer convenience.
  8. Posting this separately mainly because it's really long. Sorry for the triple-post, but this seems better than having people search through a long post for quotations.
     /home isn't generally used for installed programs, except for Steam assets (by default, but you can put them anywhere) and Homebrew (whose usage of /home is wrong). It's just for config files and per-user asset caches.
     Then... don't do that? Sounds like you're not competent to manage permissions yourself and are insisting on doing so anyway.
     Is this mainly for storage-constraint reasons? Like your root partition is a small SSD, so you want to install some programs on your big HDDs? You don't need to ‘track’ applications on Linux; your package manager does that systematically.
     I'm not sure what permissions issues you're running into with symbolic links, but if you want to map directories inside your home directory to directories outside your home directory in a way that's more transparent to applications, you can use bind mounts. You can mount your big disks outside of the traditional FHS in a way that's analogous to Windows drive letters, e.g., to
         /media/SomeBigDisk
         /media/AnotherBigDisk
     and then create per-user and shared subdirectories in each, like
         /media/SomeBigDisk/per-user/nord1ing
         /media/SomeBigDisk/shared
         /media/AnotherBigDisk/per-user/nord1ing
         /media/AnotherBigDisk/shared
     and give your user ownership of the appropriate per-user directories. Then you can use bind mounts to mount specific directories over directories in your home folder, e.g.,
         mount -o bind /media/SomeBigDisk/per-user/nord1ing/Documents /home/nord1ing/Documents
         mount -o bind /media/AnotherBigDisk/per-user/nord1ing/Simulations /home/nord1ing/Documents/Simulations
         mount -o bind /media/AnotherBigDisk/shared/Videos /home/nord1ing/Videos
     etc. Make them persistent with appropriate entries in /etc/fstab. You can set default permissions for new files and handle multiple groups with setfacl. (There's a sketch of what the fstab and setfacl parts might look like at the end of this post.)
     You mean from inside a running system? You can't take a disk image of a normal, running disk unless you have a CoW filesystem or you're using logical volumes. Does Paragon Backup and Recovery let you back up the partition of the C:\ drive while the system is running? What's the use case for block-level backups here? If this is your use case and you'd like to do it without virtualization, look into tools like Snapper, Timeshift, or basic ZFS snapshot and clone usage. Linux has much more advanced and mature filesystems than are available on non-enterprise editions of Windows, and they're generally preferable over dumb (filesystem-agnostic) whole-partition clones.
     Yes. Also yes. If you're having trouble understanding where things go and what permissions for different shared directories should be on Linux, the thing you need to learn about is the Filesystem Hierarchy Standard. The full documentation is available online, but you might start with a video like this one instead. (If you let me know what your native language is, we can look for docs in that as well.)
     There aren't really any cross-platform filesystems in terms of permissions, including NTFS. If the issue with FAT is metadata features, there are no alternatives. But if the issue is just large-file compatibility, exFAT is one. NTFS is second-class on Linux and it has some permissions issues. That's a tradeoff you can make for files that mainly get used on the Windows side. But you can also make the reverse tradeoff for files that you mostly use from Linux, by installing Windows drivers for Linux filesystems. The main ones are Ext2Fsd and its forks for ext2/3/4, WinBtrfs for Btrfs, and ZFSin for ZFS. (Probably none of them are as stable as ntfs-3g or the new Linux kernel driver for NTFS donated by Paragon, but WinBtrfs looks relatively good.)
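     Since the fstab and setfacl parts are the least obvious, here's a rough sketch of what I mean. Usernames, group names, and disk paths are just the placeholders from above; adjust them to your setup.

         # /etc/fstab -- make the bind mounts survive reboots
         /media/SomeBigDisk/per-user/nord1ing/Documents   /home/nord1ing/Documents   none   bind   0 0
         /media/AnotherBigDisk/shared/Videos              /home/nord1ing/Videos      none   bind   0 0

         # give a shared group (here called 'family') read/write on the shared tree,
         # and make newly created files inherit that via a default ACL
         sudo setfacl -R -m g:family:rwX /media/AnotherBigDisk/shared
         sudo setfacl -R -d -m g:family:rwX /media/AnotherBigDisk/shared

     The capital X in rwX grants execute only on directories (and on files that are already executable), which is usually what you want for shared folders.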
  9. I'm not sure. There are a substantial number who game exclusively on Linux. They're just not really embedded in ‘PC gamer culture’, so to speak. In other words: they game mostly offline, they play mostly indie games, or they play mainly emulated and retro games. These are people whose use cases are more similar to Linus' than to Luke's. If you're already on Linux and you're looking to add PC games to what you do on your computer, there are more excellent games available for the platform than you have time to play. This is mostly an interesting proposition if your game selection is fundamentally faddish, i.e., one of the top reasons for you wanting to play a game is the mere fact that lots of other people are playing it at a given time. This is probably a lot of people in LTT's audience, since its audience is mostly boys and men in their teens and twenties, and relatively fanatical about videogames. But it's not everyone! The selection issue is mostly for fanatics, who follow trends because gaming is among their single biggest passions in life, and competitive gamers, who are forced to pay attention to network effects because they need games that have a large enough playerbase to host a healthy competitive scene. But Linux gamers who don't fall into those categories really are gaming on Linux these days. Is MS Flight Simulator really for flight simulation enthusiasts? FlightGear is in actual FAA-certified training simulators, and so is X-Plane, which has FAA-certified distributions and a native Linux client. FlightGear was created because MS Flight Simulator was too videogame-y and not realistic enough in terms of the flight mechanics. I imagine that a flight simulation enthusiast would care about the realism of the simulation. That said, MS Flight Simulator 2020 is gorgeous and FlightGear is not, really.
  10. Does LMG have a fact-checking process? There were factual errors and misleading claims in each video in this series that LMG already had adequate in-house resources (internal expertise, or just the ability to Google a bit) to catch before the final cut. Seems like with as large a staff as LMG has, the high-tech equipment, the current expansion of the company, etc., LMG is overdue for bolstering its fact-checking.
  11. Huh. You've never used a networked file share at school or work? Or just didn't make the connection that that feature was available to you on Windows at home? That's interesting. I wonder why that is. I suspect that TrueNAS folks just have high expectations for reliability, and the real issue is just that USB HDDs tend to be very cheap and cheaply made, so they're not ‘safe’ to use without redundancy. I'd be surprised if ZFS had some special issues with USB drives. This is definitely some ZFS user perfectionism stuff. You don't need ECC RAM to use ZFS; with regular RAM you'll be no worse off than if you were using some other filesystem. ECC RAM gives you additional protection against certain kinds of corruption, and the kind of people who build fancy storage systems based on FreeBSD or Linux with ZFS tend to value that highly. But if you're just messing around (rather than spending thousands of dollars building a redundant storage system) there's no reason that lacking ECC RAM should make you steer away from ZFS. Definitely mess around with TrueNAS and some Linux distros then, just because there's a lot to explore. If you feel like trying to learn a new operating system all at once is overwhelming, I guess it's fine to start with a Windows server. But if you enjoy trying and learning new things, I'd recommend getting into free software operating systems for this anyway, just because they're more fun for that type of person. Check out the Linux and FreeBSD magazines at your local bookstore some time. They have articles with project ideas and tutorials as well as DVDs that come with multiple operating systems you can try, usually featured in reviews. It's a good way to find high-quality guides that are ready to try, and it's a lot of fun!
  12. No worries, dude. Taking your existing grab bag of hardware and hoping it'll work with an OS that the manufacturers may not adequately support is always a gamble and often a pain. Good luck with your projects!
  13. Does Windows still limit the number of open connections for SMB/CIFS to just a handful on editions other than Workstation and Server? Back in the day (Linus will love this), browsing Windows-based CIFS shares via KDE file managers (not sure if Dolphin was around yet back then, or if it was still Konqueror) used to render them unusable, because Dolphin/Konqueror opened multiple connections in order to speed up operations, and the (pointless, totally artificial) connection limit built into Windows was absurdly low.
  14. The strategy for Windows Server has never really centered on competing with Unix-likes on traditional server OS virtues like reliability, efficiency, security, simplicity, or ease of automation. The main driver (flowing from Microsoft's monopolies on desktop operating systems and office software) has always been highly integrated turnkey solutions for enterprise networks of desktop computers, especially directory services and email. The other big advantage for Microsoft customers is that point-and-click sysadmins are cheaper and easier to find than Unix sysadmins, because they don't have to know how to read or write code, and the expectation for problem solving is more troubleshooting (reboot or reinstall) than debugging (root cause analysis). This is an especially good fit for small businesses where manual processes are adequate. None of Windows' market strengths really make it a good server OS qua server OS, and most of them don't apply to the home server use case, either. If Windows Server were actually a good server OS, Unix-likes wouldn't comprise a supermajority of servers on Microsoft's own cloud services. Windows is not a good server OS. It's a passable server OS that gets carried forward by integrations with turnkey solutions for common office use cases. But it's always lagged on server functionality like filesystems, automation, virtualization, clustering, control over updates and software deployments, uptime, etc. Where Windows does offer something reasonably powerful in one of those areas, you're not gonna get access to it with Windows 10 Home anyway. (PowerShell is cool, though, and it's nice that Windows finally has an SSH server.)
  15. Yeah, it doesn't make much sense to use an old Windows box for redundant storage, either, because Windows doesn't support decent filesystems unless you pay big $$$$ for workstation or server licenses, and absent a filesystem with good software RAID, you can't get good RAID without an expensive hardware RAID controller. By the time you're buying $800 worth of additional drives in pairs for the sake of redundancy, it doesn't make sense to stay on Windows anymore.
  16. The GUI-centric, you-don't-have-to-learn-anything Windows focus for the server project kinda feels like an excuse to motivate the Pulseway dependency, since on a normal headless server, whether it's Linux or FreeBSD or Windows Server Core or anything else, you'd just set up SSH or PSRemoting yourself. In a home use case where you have such local remote access tools configured, Pulseway seems mostly redundant or unnecessary unless (1) accessing your server from outside the home is actually required, (2) the protocol you'd use to connect is too bandwidth-heavy or insecure for internet usage OOTB, and (3) configuring dynamic DNS and port forwarding is too hard for you to figure out (which doesn't really make sense for this kind of project). Who is this video for? People who don't know that Windows File Sharing exists? It seems like it doesn't have anything to say to its target audience that they don't already know (except that Pulseway exists and they should buy it), even though it assumes its target audience knows almost nothing at all.
  17. It definitely shouldn't be enabled by default. This is an Arch design issue, and to some extent an AUR wrapper design issue. More sophisticated build systems have every build run in a sandbox as an unprivileged user with no network access. Distros that want to have repositories like the AUR should use build systems with similar properties. There's no reason outside of makepkg's design that PKGBUILDs are allowed to run as root. The same is true of potential security issues with package installation. Package installation doesn't have to be carried out as root (see Linuxbrew, Flatpak, Nix, Guix), and packages don't need to run hooks and triggers, either (see Distri). The security issues you're describing only occur when a package manager does both of those things, as pacman does. This is what I mean about Arch users and developers ignoring or avoiding the fact that Arch's design and practices are factors in problems faced by users of the distro.
  18. It's not essential for me, because I am comfortable packaging software myself, and that's what I do when a distro I've chosen doesn't have some software I want in the repos. For most actual Arch users in the real world, it is essential, and that's the issue. This doesn't address the substance of my critique, which is that the meager offerings in the Arch repositories drive people to a situation in which they are ultimately managing a dual system. This remains true even if every package they install from the AUR is well-written, up-to-date, and works without issue, because compatibility with foreign packages is a consideration that pacman is not equipped to make when solving for dependencies. (And even if it could check that, not breaking them would require partial upgrades, which pacman lacks support for by design.) When the Arch wiki says that, it has nothing to do with the quality of the unofficial packages installed; the problem is that pacman will routinely break packages not currently in the Arch repos on install or update actions without so much as a warning. Because Arch is a rolling release, there's no guarantee of binary compatibility between library versions in the main repos, which means potentially anything that depends on them may need to be rebuilt after updates. This leads to kludges like the rebuild detector, and it's the reason EndeavourOS has a separate ‘AUR Update’ function in its little GUI toolbox. It's a brittle system, and the workarounds for coping with it are cumbersome hacks. The issue is not that I mind reading package definitions. I frequently read and write package definitions on the distros that I use. The issue is that Arch's de facto reliance on the AUR is both real and a problem. (And that said reliance is problematic due to design flaws in Arch and in the AUR.) I don't use Arch. The reason that I originally raised this point is that the unfortunate real function of the AUR in the communities of Arch Linux and Arch derivatives is one of the pain points Linus mentions in the video, and it makes him unsure about whether using an Arch-based Linux distro is a sound choice. By suggesting that users who find some of the packages they need missing from the Arch repositories but present in the AUR should simply choose not to use Arch in light of the inherent problems in supporting the AUR, you're just agreeing with Linus (and with me).
  19. This is a bit of a Linux ecosystem pet peeve of mine. Let's see how I can handle it.
     The AUR's eternally unofficial status would be fine if it weren't de facto required to get a decent user experience on Arch. The AUR is huge, but Arch itself provides packages for fewer than 10,000 software projects. For a sense of perspective: the biggest distro provides packages for over 58,000 projects, Debian over 25,000, and Fedora just under 20,000. In terms of projects packaged, Arch doesn't even crack the top ten (collapsing redundant distros from the same families above it into one, e.g., Ubuntu and Debian are together counted only once, under Debian, since Ubuntu is downstream of Debian). It's smaller than the ports systems for most major BSD distros as well (only OpenBSD's ports system is smaller than the Arch repos).
     At the same time, the AUR is widely hailed as one of the great benefits of Arch Linux; access to the AUR is supposed to be a reason to run Arch in the first place. The end result is that a huge number of Arch users (most of them, I would guess) will end up installing and keeping at least a handful of things from the AUR at some point. And when they do, they'll discover that, as ‘foreign’ packages, those packages are second-class citizens on their system when it comes to dependency resolution in pacman. Integration problems ensue.
     Package quality on the AUR is extremely inconsistent, to the point that installing some AUR packages will do things like overwrite your glibc and break your whole system. Arch maintainers assert that this isn't a problem essentially because the AUR is only for advanced users, you should always be reading any PKGBUILDs you get from it yourself, etc. But using the AUR is simultaneously so necessary and so cumbersome that there's a whole little ecosystem of ‘AUR helpers’ designed to paper over all of that and let users treat AUR packages as though they were natively part of the base system. Still, those are kept out of the Arch repositories, so users at least have to bootstrap their way into running them, right? Well, not really. All of the most popular Arch downstreams, including those that have been mentioned in LTT videos (Manjaro and EndeavourOS), include AUR helpers out of the box.
     I'd write more but I'm falling asleep at this point (perhaps, reader, so are you). The point is:
     - The AUR is poorly integrated, and packages installed with it are poorly integrated with pacman unless you run local repositories to host them (there's a sketch of that workaround at the end of this post). This causes breakage.
     - Arch users (and developers) pretend that the AUR is optional, while in fact, the distro depends on the AUR to be usable for many people, and most users include the AUR as part of their pitch for Arch.
     It's bad. It feels like using a distro whose core tooling is unfinished.
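     For the curious, the local-repository workaround looks roughly like this. The repo name, paths, and package name are arbitrary placeholders, and the SigLevel line assumes you're not signing your own packages:

         # build the AUR package without installing it
         makepkg --syncdeps

         # drop the result into a local repository
         mkdir -p /srv/localrepo
         cp *.pkg.tar.zst /srv/localrepo/
         repo-add /srv/localrepo/localrepo.db.tar.gz /srv/localrepo/*.pkg.tar.zst

         # then in /etc/pacman.conf:
         #   [localrepo]
         #   SigLevel = Optional TrustAll
         #   Server = file:///srv/localrepo

         # after that, pacman sees it like any other repo package
         sudo pacman -Sy localrepo/some-aur-package

     It works, but having to run your own repository just so the package manager will account for your own packages is exactly the kind of cumbersome hack I'm complaining about.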
  20. I tend to be highly suspicious of any talk of ‘intuitive’ user interfaces. In one of the previous official posts on an official episode, one user related the experience of baby duck syndrome, which I think is a very useful concept here. I recently encountered a very interesting old paper on the research and acceptance testing that went into the first Windows release to include a start menu. There's a lot in it to suggest that what may now be easy to think of as ‘intuitive’ is very much a matter of learned convention. Take for instance the ‘standard’ Windows behavior of single-clicking to select and double-clicking to launch. The paper describes the following as a ‘key finding’: Or consider that many users like Linus and Luke might describe an experience where right-clicking on the desktop offers options for changing the desktop background but not the screen resolution as ‘unintuitive’. But how intuitive are context menus? For a more potent example, consider the behavior of the ‘show desktop’ button on KDE Plasma, which temporarily minimizes all windows, but restores the position of all windows once you move away from the desktop and open any window. Is actually minimizing all windows an intuitive behavior for new users, given that So here the path actually forks: does KDE want a behavior that is ‘intuitive to new users’ or one that is familiar to experienced Windows users? I get the impression from bits like this in your previous comment that you already agree with that. But I want to stress that in the context of an audience who grew up with computers in general and Windows computers in particular, it can be very difficult to separate out what is intuitive from what is familiar.
  21. I agree with most of your post, but not quite this. I heard something similar yesterday about manuals. You only need to do intensive reading when you're still figuring out how to read in the specialized genre. Once you get up and going, even just for a few weeks or days, you will begin to develop relevant intuitions that eventually let you safely decide what to read, what to look up, and what to pass over. In cases like this, the sheer number of packages proposed for removal is your red flag. But users, like Linus in the first episode, who've literally only been using the OS for 30 minutes won't yet have the background of normal package management experiences against which that behavior can seem weird, because they can't. For them, unfortunately, reading is more important, because they need more context to interpret the same information. In this case, looking up 1 or 2 of the packages listed as ‘essential’ would likely have been enough. Can you (or could Linus, now having had that experience) avoid what happened to Linus with the existing documentation and tools as Linus encountered them? Yes, absolutely! But should the tools do a better job of highlighting really critical information and making it clear to users what's important vs. what's mostly noise? Yes, absolutely.
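     By ‘looking up’ I mean nothing fancier than something like this (pop-desktop is just the obvious example from Linus' case; any of the scarier names in the removal list would do):

         # what is this package, and what is it for?
         apt show pop-desktop
         apt show xorg

     Thirty seconds of reading the descriptions is enough to realize you're about to remove your desktop environment, even if you've never seen apt before.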
  22. Yes, absolutely. Linus succeeded in highlighting a number of ways that APT output could be improved, and I hope several of them go all the way out to end users. My point was just that it's still not fair to say that. Another improvement that afaik upstream isn't yet explicitly considering is changing APT's default behavior with orphaned packages. apt doesn't normally delete orphaned packages, so they pile up, and every operation starts with a long message listing the ‘packages which are no longer needed’; it spams you with that every single time. It's good that the Pop!_OS documentation advises users to read everything. It's bad that the APT affordances implicitly do the opposite and encourage users to zone out. The docs should encourage users to read, and the tools should reward them for reading with visual hints toward the most critical information, and spare them as much noise as possible. In 2021, it's probably safe to just automatically remove orphaned packages. That would get rid of a paragraph of unrelated text in this case.
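     If you want that behavior on your own machine today rather than waiting on upstream, it's a one-liner either way (the drop-in file name below is arbitrary):

         # one-off cleanup of orphaned packages
         sudo apt autoremove

         # or make automatic removal the default for every apt-get operation
         # by creating /etc/apt/apt.conf.d/99-autoremove containing:
         #   APT::Get::AutomaticRemove "true";

     Whether a distro should ship that as its default is exactly the question upstream would have to settle.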
  23. The KDE Frameworks are built on Qt, but they are not Qt. KDE applications are built on the KDE Frameworks.
     I'm not sure what you mean. You can install whatever KDE packages you want. If you want just the desktop environment, on Arch it's just the plasma-desktop package. If you want that plus a few basic utilities, including a system monitor, you can pull that in with plasma-meta. Then you can install whatever additional KDE apps you want. KDE is not responsible for what packages are in your distro, or for how they are bundled into metapackages or groups or whatever. That's on the distro. Arch is a distro that makes a point of eschewing convenient metapackages providing opinionated distributions of upstream desktop environments, which is what something like a kde-applications-minimal group would be. The nearest equivalent to what you're asking for is what you get with
         pacman -S plasma-meta dolphin kate khelpcenter konsole
     which is roughly equivalent to installing Plasma alongside the kde5-baseapps package from Void Linux, for example.
     No, the pipeline you're referring to is what's required to minimize a ‘bloated’ installation like you carried out, not what is required to ‘run a simple KDE install’. You're right that using a 4-command pipeline to do something like that is not very ergonomic. Complicated removals tend to go that way on Arch, especially when you use package groups with pacman -S (because it's the same as installing every package in the group manually). Even under better circumstances, dealing with orphaned packages is annoying because Arch's support for dealing with them is not very well-integrated.
     I know you're apologizing to someone else here, but I'm sorry I got so bitchy, too. Trying to optimize performance by choosing the right KDE-based Linux distro is nonsense in the same way as obsessively tuning the CFLAGS for your whole system. The obsession with ‘bloat’ and the whole discourse around it is dogmatic, confused, overstated, and fanboyish. (That is intended as a statement of a pet peeve of mine with how I see software discussed online, not as a criticism of you.) The choice to use Arch juxtaposed with complaints about not being offered an OOTB-type easy button is self-contradictory, and it suggested to me that your interest in Arch was more grounded in its meme status as an ‘advanced’ distro than in a particular need. That way of choosing Arch for bad reasons strikes the same nerve for me; it's part of the same super shallow culture and discourse you see surrounding Linux on /g/ and Reddit. Much of that is an expression of my frustration with subcultural trends I took you to be representative of because of language in the OP. That doesn't necessarily make my reaction to you a fair one, even if you hold some related opinions that I disagree with.
     I also don't mean to say that bloatware is not a thing, or that code bloat never matters. But here with the kde-applications group in Arch, we're talking about installing 170+ desktop applications that in total use less than 10% of the size of a blank Windows 11 install, with very few real applications on it. Trying to get a super lightweight KDE/Plasma installation doesn't make a ton of sense, and neither does the general obsession within the Linux community with ‘bloat’.
     Anyway, if you're happy with KDE on Arch now that you've removed the software you don't use, that's great.
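     (Postscript, since orphan handling came up: the usual cleanup recipe on Arch is the pipeline below, per the Arch wiki. It lists packages that were installed as dependencies and are no longer required by anything, then removes them along with their config files and now-unneeded dependencies.)

         # list orphaned packages
         pacman -Qtdq

         # remove them, their configs, and their no-longer-needed dependencies
         pacman -Qtdq | sudo pacman -Rns -

     If there are no orphans, the first command prints nothing and pacman just complains that there were no targets; nothing gets removed.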