finest feck fips

Member
  • Posts: 221
  • Joined

  • Last visited


  1. As someone who uses most of these applications on his daily driver for work, there are a lot of problems with many of these utilities:
     • PowerToys is quite buggy and slow
       • Run has enough lag that it's little better than the Start Menu
       • FancyZones seems related to an issue with the window manager where normal window dragging lags so much the system is unusable until DWM.exe is killed, requiring periodic restarts of that process or logging out and logging back in
       • global microphone mute sometimes doesn't work right, making it untrustworthy for work contexts
     • Windows Subsystem for Linux is riddled with bugs
       • accessing Windows files from the Linux guests is atrociously slow
       • no way to pass hardware to the Linux guests
       • resize/drag glitches, multi-monitor issues, and random crashes with GUI apps via WSLg
       • wsl --terminate and wsl --shutdown are not always safe to run (can result in filesystem corruption on the guests)
       • weird filename/case-sensitivity incompatibilities with Git checkouts, depending on whether they were originally made on the Linux side or the Windows side
       • a huge memory leak that brings the whole system to a grinding halt when many Docker containers are running on a Linux guest
       • Docker Desktop's default behavior breaks Docker on some distros
       • the version of systemd used for systemd-enabled systems and WSLg is kinda broken (this is actually a systemd bug, not really Windows' fault)
     • Windows Terminal is pretty good, but
       • decent performance requires GPU acceleration
       • there are issues with accessing a single WSL instance from both privileged and unprivileged contexts at the same time
       • Quake Mode is subpar: you can't even control the size of the dropdown window... you're better off using a third-party hack like the Windows Terminal Quake app
     • Windows' SSH implementation is hugely deficient in ways that are undocumented except as GitHub issues
       • ssh-agent behavior is super weird and requires admin privileges to set up; it's not clear whether it's even possible to have multiple agents
       • sharing an SSH agent with WSL guests is a huge pain in the ass and may require configuration of complicated third-party apps
       • multiplexing (ControlMaster) is completely unsupported, just missing (see the sketch at the end of this post)
       • remote forwarding to Unix sockets doesn't work right
       • the version that ships with Windows doesn't support modern encryption on the client, so you need to install a separate copy of Microsoft's port to use, e.g., OpenSSH's native 2FA-enabled key types
       • configuring authorized_keys and other things for sshd has to be done twice for admin accounts, plus various other quirks
     • Winget and Chocolatey aren't real package managers and have various deficiencies
       • both are just installer-wranglers
       • no granular packaging
       • little packaging of development libraries
       • both require administrator privileges way too much
       • neither can really ensure that installations are non-interactive
       • upgrades and uninstallations don't work reliably and can leave dangling loose ends, just like normal Windows uninstallers (because all they do is invoke the normal uninstallers)
     • PowerShell is basically unusable as a login shell
       • compared to bash, zsh, or fish, it is unbelievably slow, especially if you try to grow a reasonably comfortable or efficient-to-use config
       • oh-my-posh is anemic compared to a real shell framework; it's just a prompt, and it's slow as hell since it's implemented in PowerShell. Just use Starship instead!
       • PowerShell profile configuration is quite brittle as well as overcomplicated
       • background tasks and settings that work fine in interactive shells can completely b0rk your profile for non-interactive use
       • way too many places for profiles to be configured for a given user (like a dozen or more)
       • fucking Documents folder used for config files (???)
       • no fscking job control (WTF?)
     Having them around is better than nothing, but they definitely still suck. A Linux or Unix noob probably won't notice the differences, but longtime Linux workstation users and sysadmins are likely to stumble over a ton of missing features.
     As for what the last good version of Windows was: there's never been one, from the ones you used to launch from MS-DOS all the way to Windows 11. They all suck.
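     For anyone curious what the missing multiplexing support looks like, here's a minimal ~/.ssh/config sketch ("devbox" is a placeholder host name). Stock OpenSSH on Linux handles this fine; the Windows port just doesn't implement ControlMaster:

         # Reuse one authenticated connection for subsequent sessions to the host
         Host devbox
             ControlMaster auto
             ControlPath ~/.ssh/cm-%r@%h:%p
             ControlPersist 10m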
  2. What does `diskutil list` show in the Terminal in macOS Recovery? I wonder if you just need to fsck your old filesystem, mark it bootable, or reset your NVRAM or something.
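     If you're not sure where to start in Recovery's Terminal, something like this is safe to try first (the disk identifier below is hypothetical; substitute whatever `diskutil list` actually reports for your old system volume):

         # List all disks and volumes to find the old system volume
         diskutil list
         # Check and repair the filesystem on the suspect volume
         diskutil repairVolume disk0s2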
  3. They didn't, actually. What year their distribution is from is a totally different matter from the question of what year the software they want to run came out. Now that I've had time to check for myself, I've learned that qemu 4.2.0 came out in 2019, three years before Ubuntu 22.04 was released. Which helps make clear that this is nonsense: qemu 4.2.0 has never, ever been in the repos for Ubuntu 22.04, because it was obsolete years before the release process for that version of Ubuntu even started. If you're still around: you probably can't just install those versions of mtools and qemu on Ubuntu 22.04 as native debs, since there will likely be versioning conflicts with other parts of your system. But the desired versions of qemu and mtools can both be installed on your current distro via Nix. If you're trying to use qemu-kvm, there may be compatibility issues between that version of qemu and the version of KVM on your host system, plus quirks related to library paths and whatnot. Feel free to post back if you have specific issues with installing those versions in that way.
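     As a rough sketch of the Nix route (this assumes the nixos-20.03 branch of nixpkgs still carries qemu 4.2.x and a contemporaneous mtools; verify that before relying on it):

         # Drop into a shell with qemu and mtools from a 2020-era nixpkgs
         # snapshot, without touching the system's native debs
         nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/nixos-20.03.tar.gz \
                   -p qemu mtools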
  4. They do, and they have for many, many years. In Ubuntu's case, the packages which users are warned about when they attempt to uninstall them are those whose uninstallation would render the package manager inoperable. But there has been some debate going back to the first incident about whether such warnings are sufficient. And clearly, for some users, they are not effective. This is not accurate, either with respect to what meta-packages are or with respect to how pacman treats them. Meta-packages are just packages, themselves containing no files to install, whose sole function is to pull in other packages by declaring those others as dependencies. Whether something is a meta-package and whether or not it is somehow ‘protected’ from uninstallation are completely orthogonal. Moreover, comparing RPM to APT is a category error: each tool sits at a different layer in its respective package management stack. APT is comparable to other high-level package management tools, like Zypper, YUM, and DNF. The counterpart of the rpm program in the Debian-based world is dpkg, not APT.
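     You can see this for yourself on a stock Ubuntu desktop install: a meta-package like ubuntu-desktop contains next to no files of its own and exists only to declare dependencies:

         # Show what the meta-package pulls in
         apt-cache depends ubuntu-desktop
         # List the files it installs: little beyond changelog and copyright entries
         dpkg -L ubuntu-desktop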
  5. Sure. The points that may be of interest here, reiterated from discussions of what happened to Linus during the Linux challenge, are that:
     1. This isn't really about a problem with Pop!_OS, per se.
     2. When you use the system package manager, you are operating on your entire system as an interconnected whole.
     3. The expectation with Unix and Linux tools is that when a user requests something, the system is expected to make it happen, whatever that takes.
     In the case of package managers, that means computing and then offering various changes that meet the requirements given by the user/administrator, and expecting the user/administrator to evaluate the tradeoffs involved in those solutions. It vindicates the decisions by distribution developers to explore new systems that steer end users away from the system package manager and toward containerized or bundle-based app deployment, when their goal is to accommodate 'power users' who are either new to the concept of systemwide package management or prefer a BSD-style separation between app installation and operating system deployment. Users who are accustomed to administering their systems via package management (people like you and me) often prefer to avoid containerized/bundled systems like that (Snap, AppImage, Flatpak) for perfectly good reasons (e.g., disk use, platform immaturity, load times, ease of OS customization, as-yet-unresolved sandboxing quirks, etc.). But Windows and macOS users unaccustomed to having to consider (2) and (3) will likely prefer those systems that use an immutable base system and lock things down in various other ways in order to eliminate footguns.
  6. The latest release of Ubuntu (incidentally an LTS release) is currently affected by exactly the same kind of dependency issue as the one Linus Sebastian encountered in Pop!_OS some months ago. On a freshly installed system which has not yet been updated, trying to install an unlucky package produces a dependency conflict that APT offers to solve for the user by uninstalling their desktop environment. (In this case, I think Ubuntu actually doesn't even make the user type the infamous warning message, since it doesn't mark the ubuntu-desktop meta-package as essential.)
  7. I'm not sure. AMD had to rewrite their entire driver stack around a new software architecture in order to open-source their drivers, and it took them many, many years to get it done. The way they have implemented their drivers is absolutely a factor, but they had to rewrite their whole driver stack to implement them that way in the first place. I'm not sure what the motivation was; of course it wasn't purely about doing a service for Linux users. But they definitely didn't do it out of sheer convenience, either.
  8. Posting this separately mainly because it's really long. Sorry for the triple-post, but this seems better than having people search through a long post for quotations.
     /home isn't generally used for installed programs, except for Steam assets (by default, but you can put them anywhere) and Homebrew (whose usage of /home is wrong). It's just for config files and per-user asset caches.
     Then... don't do that? Sounds like you're not competent to manage permissions yourself and are insisting on doing so anyway.
     Is this mainly for storage-constraint reasons? Like your root partition is a small SSD, so you want to install some programs on your big HDDs? You don't need to ‘track’ applications on Linux; your package manager does that systematically. I'm not sure what permissions issues you're running into with symbolic links, but if you want to map directories inside your home directory to directories outside your home directory in a way that's more transparent to applications, you can use bind mounts.
     You can mount your big disks outside of the traditional FHS in a way that's analogous to Windows drive letters, e.g., to

         /media/SomeBigDisk
         /media/AnotherBigDisk

     and then create per-user and shared subdirectories in each, like

         /media/SomeBigDisk/per-user/nord1ing
         /media/SomeBigDisk/shared
         /media/AnotherBigDisk/per-user/nord1ing
         /media/AnotherBigDisk/shared

     and give your user ownership of the appropriate per-user directories. Then you can use bind mounts to mount specific directories over directories in your home folder, e.g.,

         mount -o bind /media/SomeBigDisk/per-user/nord1ing/Documents /home/nord1ing/Documents
         mount -o bind /media/AnotherBigDisk/per-user/nord1ing/Simulations /home/nord1ing/Documents/Simulations
         mount -o bind /media/AnotherBigDisk/shared/Videos /home/nord1ing/Videos

     etc. Make them persistent with appropriate entries in /etc/fstab (see the sketch at the end of this post). You can set default permissions for new files and handle multiple groups with setfacl.
     You mean from inside a running system? You can't take a disk image of a normal, running disk unless you have a CoW filesystem or you're using logical volumes. Does Paragon Backup and Recovery let you back up the partition of the C:\ drive while the system is running? What's the use case for block-level backups here? If this is your use case and you'd like to do it without virtualization, look into tools like Snapper, Timeshift, or basic ZFS snapshot-and-clone usage. Linux has much more advanced and mature filesystems than are available on non-enterprise editions of Windows, and they're generally preferable over dumb (filesystem-agnostic) whole-partition clones.
     Yes. Also yes. If you're having trouble understanding where things go and what permissions for different shared directories should be on Linux, the thing you need to learn about is the Filesystem Hierarchy Standard. The full documentation is available online, but you might start with a video like this one instead. (If you let me know what your native language is, we can look for docs in that as well.)
     There aren't really any cross-platform filesystems in terms of permissions, including NTFS. If the issue with FAT is metadata features, there are no alternatives. But if the issue is just large-file compatibility, exFAT is one. NTFS is second-class on Linux, and it has some permissions issues. That's a tradeoff you can make for files that mainly get used on the Windows side. But you can also make the reverse tradeoff for files that you mostly use from Linux, by installing Windows drivers for Linux filesystems.
     The main three are Ext2Fsd for ext2/3/4, WinBtrfs for Btrfs, and ZFSin for ZFS. (Probably none of them is as stable as ntfs-3g or the new Linux kernel driver for NTFS donated by Paragon, but WinBtrfs looks relatively good.)
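     To make the bind mounts from earlier in this post persistent, the corresponding /etc/fstab entries would look something like this (paths reuse the hypothetical ones above, and the 'family' group in the setfacl line is likewise made up):

         # /etc/fstab: bind mounts applied automatically at boot
         /media/SomeBigDisk/per-user/nord1ing/Documents  /home/nord1ing/Documents  none  bind  0  0
         /media/AnotherBigDisk/shared/Videos             /home/nord1ing/Videos     none  bind  0  0

         # Give a shared group read/write access, including to files created
         # later, via a default ACL ('family' is a hypothetical group)
         setfacl -R -m g:family:rwX -m d:g:family:rwX /media/SomeBigDisk/shared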
  9. I'm not sure. There are a substantial number who game exclusively on Linux. They're just not really embedded in ‘PC gamer culture’, so to speak. In other words: they game mostly offline, they play mostly indie games, or they play mainly emulated and retro games. These are people whose use cases are more similar to Linus' than to Luke's. If you're already on Linux and you're looking to add PC games to what you do on your computer, there are more excellent games available for the platform than you have time to play. This is mostly an interesting proposition if your game selection is fundamentally faddish, i.e., if one of the top reasons for you to play a game is the mere fact that lots of other people are playing it at a given time. This probably describes a lot of people in LTT's audience, since its audience is mostly boys and men in their teens and twenties, and relatively fanatical about videogames. But it's not everyone! The selection issue mostly affects fanatics, who follow trends because gaming is among their single biggest passions in life, and competitive gamers, who are forced to pay attention to network effects because they need games with a large enough playerbase to host a healthy competitive scene. But Linux gamers who don't fall into those categories really are gaming on Linux these days. Is MS Flight Simulator really for flight-simulation enthusiasts? FlightGear is used in actual FAA-certified training simulators, and so is X-Plane, which has FAA-certified distributions and a native Linux client. FlightGear was created because MS Flight Simulator was too videogame-y and not realistic enough in terms of flight mechanics. I imagine that a flight-simulation enthusiast would care about the realism of the simulation. That said, MS Flight Simulator 2020 is gorgeous, and FlightGear is not, really.
  10. Does LMG have a fact-checking process? There were factual errors and misleading claims in each video in this series that LMG already had adequate in-house resources (internal expertise, or just the ability to Google a bit) to catch before the final cut. With as large a staff as LMG has, its high-tech equipment, the company's current expansion, etc., it seems LMG is overdue to bolster its fact-checking.
  11. Huh. You've never used a networked file share at school or work? Or did you just not make the connection that that feature was available to you on Windows at home? That's interesting. I wonder why that is. I suspect that TrueNAS folks just have high expectations for reliability, and the real issue is that USB HDDs tend to be very cheap and cheaply made, so they're not ‘safe’ to use without redundancy. I'd be surprised if ZFS had some special issues with USB drives. This is definitely some ZFS-user perfectionism stuff. You don't need ECC RAM to use ZFS; with regular RAM you'll be no worse off than if you were using some other filesystem. ECC RAM gives you additional protection against certain kinds of corruption, and the kind of people who build fancy storage systems based on FreeBSD or Linux with ZFS tend to value that highly. But if you're just messing around (rather than spending thousands of dollars building a redundant storage system), there's no reason that lacking ECC RAM should steer you away from ZFS. Definitely mess around with TrueNAS and some Linux distros, then, just because there's a lot to explore. If trying to learn a new operating system all at once feels overwhelming, I guess it's fine to start with a Windows server. But if you enjoy trying and learning new things, I'd recommend getting into free software operating systems for this anyway, just because they're more fun for that type of person. Check out the Linux and FreeBSD magazines at your local bookstore some time. They have articles with project ideas and tutorials, as well as DVDs with multiple operating systems you can try, usually featured in reviews. It's a good way to find high-quality guides that are ready to try, and it's a lot of fun!
  12. No worries, dude. Taking your existing grab bag of hardware and hoping it'll work with an OS that the manufacturers may not adequately support is always a gamble and often a pain. Good luck with your projects!
  13. Does Windows still limit the number of open SMB/CIFS connections to just a handful on editions other than Workstation and Server? Back in the day (Linus will love this), browsing Windows-based CIFS shares via KDE file managers (not sure whether Dolphin was around yet back then or it was still Konqueror) used to render them unusable, because Dolphin/Konqueror opened multiple connections in order to speed up operations, and the (pointless, totally artificial) connection limit built into Windows was absurdly low.