Everything posted by Ralphred

  1. Since the ntfs3 driver is now built into the kernel (unlike the old FUSE-based ntfs-3g driver), NTFS is my filesystem of choice for "shared partitions". Before that I used ext4 and the Ext2Fsd driver in Windows, but the modern ntfs3 is better. That said, Ext2Fsd is fine if you only need to occasionally access a primarily Linux drive in Windows. exFAT was always a shitshow for meaningful usage.
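     A minimal fstab sketch for mounting such a shared partition with the in-kernel driver (the device path, mount point and uid/gid values are examples):

         # /etc/fstab - the 'ntfs3' fstype selects the in-kernel driver;
         # plain 'ntfs' falls back to FUSE ntfs-3g on many distros
         /dev/sda3  /mnt/shared  ntfs3  defaults,uid=1000,gid=1000  0 0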
  2. Similar experience; I was able to use an SD card to "put Android" on an ancient WinCE device. Bending embedded stuff to your will is not for the faint of heart, though.
  3. Just because I like to drop this info for dual booters: setting GRUB_DEFAULT=saved and GRUB_SAVEDEFAULT=true in /etc/default/grub (then regenerating grub.cfg) means GRUB will remember which OS you used last boot and treat that as the "default", so you don't have to hand-hold those "I'm going to reboot 12 times" Windows updates.
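     Concretely (the grub.cfg path varies by distro; /boot/grub/grub.cfg is typical):

         # /etc/default/grub
         GRUB_DEFAULT=saved
         GRUB_SAVEDEFAULT=true

         # then regenerate the config:
         grub-mkconfig -o /boot/grub/grub.cfg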
  4. This isn't strictly true; you can "pass through" GPU control between multiple clients, as long as one of them isn't a Windows guest that is on all the time. Even then, I'm pretty sure my inability to "disconnect it without summoning the BSOD demon" could be sidestepped by windoze people who know what they are doing.
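     For reference, shuffling a GPU between host and guests under libvirt looks roughly like this (the PCI address is an example, and this assumes VFIO passthrough is already configured):

         # detach the GPU from the host so a guest can claim it
         virsh nodedev-detach pci_0000_01_00_0
         # hand it back to the host once the guest is done with it
         virsh nodedev-reattach pci_0000_01_00_0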
  5. The further away you go from the "root distro", the more choices are going to be pre-made for you by the maintainers of the downstream distro. If the choices they have made fit your use case, then great; if they don't, move further upstream. The thing you need to be mindful of is knowing that these "pre-made" choices exist, and if/when they are/aren't suitable for your use case; unfortunately this knowledge comes only through experience. Some of these "pre-made" choices need a deeper understanding to "choose differently" too, like whether you are going to slow the boot process by using an initrd and disk-stored loadable drivers.
  6. I'm thinking: why remove MATE at all? Just leave it in the background, at least for 6 months or so until you are "happy" with your new DE.
  7. The whole recent "lzma library backdoor" requires systemd to work - anyone who didn't see this coming is either a liar or a fool; I don't care which. If you can't see how this is the case, then your opinion is obviously garbage, built upon a pile of nonsensical hot s**t, and subsequently worthless to me. The paradigm is "One job, do it well!", not "30 jobs, do them all with the lamentable mediocrity of a public sector worker one month from retirement and a comfy pension, if you're up for it..." - literally SMFH!
  8. I've always been wary of systemd for two reasons: "Do one job and do it well" as a paradigm has always served me exceptionally well, and zeroconf and pulseaudio were a fscking sh**show for years before you were able to bridle them and make them dance to your tune.

     At one point I got to the stage of "Well, you are building for a laptop, systemd is supposed to be quicker for boot and such". Soon after setting the system up I realised that because I "sleep, then hibernate after 60 minutes of sleep", the longest part of the boot process was copying an 8GiB image from SSD back into 8GiB of RAM, so "boot time" was actually pretty moot.

     After (genuinely) hours trying to reconcile systemd's self-contradictory documentation, I gave up trying to make it do what it was told and created my first "systemd-network-unf**ker.service". As the build evolved (read: added layers of needed software) I had to write two more systemd-[system component]-unf**ker.service files and associated fixing scripts. Some months later a systemd update became available, and after applying it the machine started "booting fresh" from hibernation. At that point I gave up and switched to OpenRC - it took less time than trying to reconcile systemd's docs, let alone writing three "unf**ker" services.

     TL;DR, my conclusion from this exercise in pointless ovine mediocrity: never again, ever. If your "distro of choice" makes it take longer to write an LACP network config than it does to switch from systemd to SysVinit, then you chose wrong, because I know I did when thinking "systemd might actually work!". Thanks for reading my blog. T. *nix user of 28 years and counting.
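     For context, the "sleep, then hibernate after 60 minutes of sleep" behaviour referred to above is normally configured under systemd something like this (a sketch; exact key support varies between systemd versions):

         # /etc/systemd/sleep.conf
         [Sleep]
         HibernateDelaySec=60min

         # then suspend with:
         systemctl suspend-then-hibernate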
  9. The beauty of a rolling release is you only have to do it once; the trick is to know what you want when you start. Yeah, there is the odd glitch: I updated my kernel* the other day and the battery monitor for my gamepad stopped working because the nomenclature surrounding the file that stores the battery level changed; five minutes later it was working again.

     *The amd-pstate driver for CPU scheduling is on another level; my CPU now runs faster and cooler during gaming than ever before, because the powersave policy updates with such low latency that I don't need to set it to performance to game anymore.
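     If you want to see what your own CPU is using, the standard sysfs paths are (cpu0 stands in for any core):

         # which frequency-scaling driver is active (e.g. amd-pstate, intel_pstate, acpi-cpufreq)
         cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
         # and the current governor/policy
         cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor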
  10. Its ability to mix and match stable with testing (and even git sources) means you can have the best of both worlds. Slotted packages mean you can keep multiple versions of Wine floating around for legacy games you "just don't want to let go". And because you have had absolute control over your OS from the very beginning, making sure that memory usage and CPU overhead aren't wasting bits and cycles on anything that isn't making pretty graphics is really easy. The only thing people really have a problem with is the steep learning curve. The pro of Gentoo: it does exactly what you tell it to, no more. The con of Gentoo: it does exactly what you tell it to, no more.
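      For the unfamiliar, the stable/testing mixing is a couple of lines of Portage config (the package and version here are just examples):

          # /etc/portage/package.accept_keywords/wine
          # pull only these packages from the ~amd64 (testing) branch, keep everything else stable
          app-emulation/wine-staging ~amd64

          # slotted packages let several versions coexist, e.g.:
          emerge --ask app-emulation/wine-staging:9.3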
  11. Nothing ever is, just another "layer of frustration"; just keep closing the doors as bad actors find them, and lock the ones we can predict they'll try to open. I previously posted "I'm sure smarter people than myself have more practicable solutions though."; I should have included "effective" in that too. You are being overly broad with the term "binary blob": in this case we are not talking about the output of an entire package build, which yes, would require significant controls to reproduce identically, but some binary test files which could be reproduced programmatically in a fairly simple controlled environment. The most important thing is, though, that if a couple of schmoes in a tech forum can have a productive discussion about ways of thwarting similar attempts moving forward, all hope is not lost.
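      As a sketch of "reproduced programmatically": a deliberately corrupt .xz test fixture can be generated by the test suite itself instead of being committed as an opaque blob (the filenames and byte offset are made up for illustration):

          # build a known-good archive, then flip one byte to create the "bad" fixture
          printf 'hello world\n' | xz > good.xz
          cp good.xz bad.xz
          printf '\xff' | dd of=bad.xz bs=1 seek=20 count=1 conv=notrunc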
  12. Binary blobs don't just appear, and you are seeing a narrow view of the term "stakeholder"; if you include package maintainers for affected distros in that cohort and poll them, I'm pretty sure they'd be happy to produce and compare their "blobs" between themselves. It's a reference to how the git repo itself was clean, but the release archives offered were different to the repo. Yes, there are reasons for this to happen, but everywhere in the "real world" security is always a usability trade-off, nearly always at the cost of "upstream" work to maintain as much "usability" as possible at the point of actual use; just think of any security measure you have ever interacted with and you'll be able to see the administrative or usability cost associated with its existence. Back onto FOSS software though: a mechanism to check that the offered archive was a facsimile of the repo would frustrate this malicious code obfuscation method.
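      Such a check could be as simple as regenerating an archive from the release tag and diffing it against what is offered for download (the tag and filenames are placeholders; real release tarballs often add generated files, which is exactly the gap this attack hid in):

          # inside a clone of the repo: rebuild a tarball at the release tag...
          git archive --format=tar --prefix=xz-5.6.1/ v5.6.1 | gzip -n > from-repo.tar.gz
          # ...then compare its contents against the published release archive
          tar -tzf from-repo.tar.gz | sort > repo.lst
          tar -tzf xz-5.6.1.tar.gz | sort > release.lst
          diff repo.lst release.lst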
  13. Well, to circumvent this one: binary blobs need to be verified and signed by multiple stakeholders, or outright excluded; and instead of pulling archives from git repos, pull via git at a specific point in the repo's history. If both these practices were followed, the two vectors used in this attack wouldn't have been available. I'm sure smarter people than me have more practicable solutions though.
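      Pinning to a point in history rather than trusting an archive is only a couple of commands (the URL, commit hash and tag are placeholders):

          # fetch the repo and check out the exact audited commit,
          # not whatever a tarball claims to contain
          git clone https://example.org/xz.git && cd xz
          git checkout 2d7d862e3ffa
          # if the project signs its tags, verify before building:
          git verify-tag v5.6.1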
  14. That reminds me: there was an update late last night showing the exploit was for RCE and not authentication bypass. Whilst this is moot from a "consequence" point of view, it does give a springboard for log investigation (read: connections to port 22 that don't attempt any authentication).
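      A starting point for that investigation, assuming stock sshd logging (the log file location and unit name vary by distro):

          # connections that dropped before attempting any authentication
          grep 'sshd.*\[preauth\]' /var/log/auth.log
          # or on journald-only systems:
          journalctl -u sshd | grep '\[preauth\]'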
  15. The thing he doesn't even brush on is the convergence of configuration choices and system set-up required for the malicious code even to be exploited, so let's list those here:

      1. An ssh daemon running on an open port of a public IP
      2. Said sshd using RSA priv/pub key auth
      3. Be using systemd
      4. Your systemd is built with lzma support
      5. Your openssh has been patched to link sshd to libsystemd
      6. You have an infected liblzma

      On point 1, as discussed above with @igormp, there are two types of people who do this: those who know what they are inviting and are ready to deal with it, and those who "get what they fscking deserve". Point 2 is pretty much everyone who's ever read a "how to ssh server" tutorial, and it's on by default*. Point 3 is probably most people, except those who actively avoid it. Point 4 is going to be distro/personal choice, but again, most are probably built with xz compression support. On point 5 you need to check if your distro does this; I know none I let near open public IP ports do, and neither does Arch. Point 6 is almost no one, as you'd have to be running "bleeding edge" software; most people who do this (who aren't distro testers) choose distros that allow them to pick and choose which parts should be bleeding edge and which should be stable, based on need.

      Don't get me wrong, this is serious and will have repercussions for FOSS moving forward, but not because "half of all linux servers are infected with malware and are security compromised", because they just aren't and, without further developments in this case, aren't going to be either.

      *In sshd_config from the openssh git repo.
  16. Agreed, devs don't run "bleeding edge" on the systems they build stable packages on; it's just inviting non-issues to present themselves. Did Fedora devs build and package on their own "infected systems"? I doubt it very much. If I build for other systems, I do it on the most mature and stable system available, all because no one wants a shitty binary or archive. Not really: the payload was only uploaded 5 weeks ago, so even if it did/does have "other functionality" (like injecting malicious code whilst de/compressing an archive), nothing produced before 23rd Feb has even the remotest chance of being compromised. If you are concerned because you have had sshd running on an open port exposed on a public IP with RSA priv/pub key authentication enabled and use systemd, then run ldd $(which sshd). If liblzma isn't in the list, you have nothing to worry about.
  17. How do you think they work without an open sshd port? Yeah, I can see this; no one in their right mind runs anything other than "stable" on outward-facing hardware. The issue right now is that it was pure luck this was found; "will we find the next one?" and "did we find the last one?" are scary questions.
  18. "The resulting malicious build interferes with authentication in sshd via systemd." SysVinit chads keep winning. On a serious note https://www.openwall.com/lists/oss-security/2024/03/29/4 is an interesting read, and for anyone wondering what to do right now I'll quote it The script mentioned is https://www.openwall.com/lists/oss-security/2024/03/29/4/3 but the use of set -e means that when the grep fails it exits without telling you "probably not vulnerable", so just look for 'liblzma' in the list when you run ldd $(which sshd), if it's there get and run the script for a deeper diagnosis... Meh, they should if there purpose is to allow remote access. That is what's pernicious about this backdoor, it interferes with RSA authentication, the default "hardened" approach to keep the script kiddies out. Indeed!
  19. Hence the *. If it isn't essential, I'm not interested in using, let alone supporting, that software. An alternative will nearly always exist; if it doesn't, make it yourself. The "EULA" cartoon by IllWillPress springs to mind...
  20. I've never used iOS, but any droid app I use asking for access to "contacts"* gets uninstalled - at that point you are sharing other people's data too. Google "social media shadow profiles". *Without good reason.
  21. Yes, if you think something may be superfluous, configure it as a module; then you can check to see if that module was loaded after booting and go from there (see the sketch after this post).

      Yeah, once you open the configuration tree you'll often find that the "module in use" is a sub-option of the "driver" that you have to enable to get to that option anyway. The only exception I've seen to this is when using vfio drivers, or when the "sub-option" is in a different branch of the config tree, like a disk controller driver being in one section and the SATA driver being elsewhere (this is just an example, don't go looking for it).

      Yeah; what we are relying on when searching is the normal practice of "the module will be called [some name]" in its docs, or [CONFIG_SYMBOL] being similar to its name. Quite a lot of stuff is enabled/disabled implicitly: if you pick a generic-sounding option and hit ? you'll get a rundown on that option, along with things it will automatically turn on (normally dependants) or things that will automatically disable it (needs to be mutually exclusive with).

      If you get your filesystems supported, and make sure your GPU driver is "y" and not "m", you're in a starting position to not be guessing about the rest, as long as you turn off any "quiet" options on your kernel command line (so you can see any hangs/errors as the kernel initialises). Anything you *need* (excepting proprietary nvidia drivers) that is set as "y" instead of "m" will cut down on the amount of work udev has to do at boot time, as long as you copy the requisite parts from modprobe.d/some_module.conf onto the kernel command line and include any firmware in the kernel - but you/we can revisit this once you get your custom kernel working at all.

      The default for Gentoo (for about 20 years) was to configure your own kernel; with this in mind I'd suggest you read https://wiki.gentoo.org/wiki/Handbook:AMD64/Installation/Kernel#Alternative:_Manual_configuration - it's not a huge doc and will cover things I'll forget. Just ignore the things that are specific to that guide (like chroot, Gentoo options etc.) and it'll give you a good gist.

      EDIT: Yeah, it's so OK I actually forgot to respond. It's worth noting that "generic" kernels supplied with distros nowadays need a lot of support, in the way of initrd and udev, so they can dynamically load all the drivers they need at boot time for any conceivable hardware combination. You are rejecting this paradigm to have your kernel include only what is needed, and include it by default - none of my custom kernels even have module loading support; it's not needed in my case and, reading your lspci, not needed in yours unless you expect to be plugging in "foreign" USB devices as the laptop travels around - but again, a topic to revisit when you have it "working".
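      The "configure it as a module, then check whether it actually loaded" workflow, concretely (the driver name grepped for is an example):

          # after a reboot, see which modules actually got loaded
          lsmod | sort
          # check which driver bound to each device
          lspci -k
          # with CONFIG_IKCONFIG_PROC=y you can also inspect the running kernel's config
          zcat /proc/config.gz | grep -i amdgpu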
  22. This is usually due to distro patches being applied to the vanilla source and some test cases falling through the net. I've never had vanilla sources fail when the issue wasn't somewhere outside of the kernel itself (normally PCI devices changing bus addresses between major versions, or modular/static kernel configs invalidating other system config files).
  23. Sounds like the GPU driver was missing. This is the only time you have to use -jX; it isn't really going to be of any benefit other times. The easiest way to make sure all the drivers you need are included is taking the module names in use from "Kernel driver in use: [name]" in lspci -k output. You can then use / to search for these names when in menuconfig - the result will direct you to the option you need to turn on. During a rebuild, unless something in your toolchain has changed, or some error, you only need to re-run make -j8 and install the kernel and modules; it will automatically not rebuild the stuff it doesn't have to because of your config changes, and subsequent builds after adding a single module/driver are much quicker.

      Other kernel building tips:

      • Enable CONFIG_IKCONFIG - it's useful to be able to copy a "working" .config on the fly.
      • Always run grub-mkconfig -o grub.cfg-[kernel name] BEFORE running grub-mkconfig -o grub.cfg; this will give you backups of known-working grub.cfg files later down the line.
      • Unless some specific thing has been added to the kernel that you want, make oldconfig (after copying the .config from /proc/config.gz to the new src tree) will nearly always suffice for a "new kernel" when one comes along.
      • When you copy bzImage, add a meaningful identifier; grub will still find it and you'll get "redundancy" options if you mess something up.
      • Back up your .config files to /usr/src/config-x.y.z-[name]; that way you can keep your "work" and still delete old kernel source trees.
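      Putting those tips together, a typical "new kernel version" cycle looks something like this (version numbers and paths are examples; assumes the running kernel has CONFIG_IKCONFIG_PROC=y):

          cd /usr/src/linux-6.8.2
          zcat /proc/config.gz > .config              # reuse the known-good config
          make oldconfig                              # answer only the newly-added options
          make -j8 && make modules_install
          cp arch/x86/boot/bzImage /boot/vmlinuz-6.8.2-custom
          cp .config /usr/src/config-6.8.2-custom     # keep the config, ditch the old tree later
          grub-mkconfig -o /boot/grub/grub.cfg-6.8.2-custom   # known-working backup first
          grub-mkconfig -o /boot/grub/grub.cfg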