
Ralphred

Member
  • Posts: 674
  • Joined
  • Last visited

4 Followers


  1. This isn't strictly true; you can "passthrough" GPU control between multiple clients, as long as the always-on one isn't Windows. Even then, I'm pretty sure my inability to "disconnect it without summoning the BSOD demon" could be sidestepped by windoze people who know what they are doing.
  2. The further away you go from the "root distro", the more choices are going to be pre-made for you by the maintainers of the downstream distro. If the choices they have made fit your use case then go for it; if they don't, move further upstream. The thing you need to be mindful of is knowing that these "pre-made" choices exist, and if/when they are/aren't suitable for your use case; unfortunately, this knowledge comes only through experience. Some of these "pre-made" choices need a deeper understanding to "choose differently" too, like whether you are going to slow the boot process by using an initrd and disk-stored loadable drivers.
  3. I'm thinking: why remove MATE at all? Just leave it in the background, at least for 6 months or so until you are "happy" with your new DE.
  4. The whole recent "lzma library backdoor" requires systemd to work - anyone who didn't see this coming is either a liar or a fool; I don't care which. If you can't see how this is the case, then your opinion is obviously garbage, built upon a pile of nonsensical hot s**t, and subsequently worthless to me. The paradigm is "One job, do it well!" not "30 jobs, do them all with the lamentable mediocrity of a public sector worker one month from retirement and a comfy pension, if you're up for it...", literally SMFH!
  5. I've always been wary of systemd for two reasons: "Do one job and do it well" as a paradigm has always served me exceptionally well, and zeroconf and pulseaudio were a fscking sh**show for years before you were able to bridle them and make them dance to your tune.
     At one point I got to the stage of "Well, you are building for a laptop, systemd is supposed to be quicker for boot and such". Soon after setting the system up, I realised that because I "sleep, then hibernate after 60 minutes of sleep", the longest part of the boot process was copying 8Gig of SSD disk into 8Gig of RAM, so "boot time" was actually pretty moot. After (genuinely) hours trying to reconcile systemd's self-contradictory documentation, I gave up trying to make it do what it was told, and created my first "systemd-network-unf**ker.service". As the build evolved (read: added layers of needed software) I had to write 2 more systemd-[system component]-unf**ker.service files and associated fixing scripts. Some months later a systemd update was available, and after applying it the machine started "booting fresh" from hibernation status. At this point I gave up and switched to OpenRC - it took less time than trying to reconcile systemd's docs, let alone writing 3 "unf**ker" services.
     Tl;dr, my conclusion from this exercise in pointless ovine mediocrity: never again, ever. If your "distro of choice" makes it take longer to write an LACP network config than it does to switch from systemd to SysVinit, then you chose wrong, because I know I did when thinking "systemd might actually work!". Thanks for reading my blog, T. *nix user of 28 years and counting.
  6. The beauty of a rolling release is you only have to do it once; the trick is to know what you want when you start. Yeah, there is the odd glitch: I updated my kernel* the other day and the battery monitor for my gamepad stopped working, because the nomenclature surrounding the file that stored the battery level changed - five minutes later it was working again.
     *The AMD-pstate driver for CPU scheduling is on another level; my CPU now runs faster and cooler during gaming than ever before, because the powersave policy updates with such low latency that I don't need to set it to performance to game anymore.
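If you want to see which cpufreq driver and governor your own box is actually running, a quick (hedged) sysfs peek works - these paths are standard on Linux, but may be absent in a VM or container:

```shell
# Check the active cpufreq scaling driver and governor via sysfs.
# Standard Linux paths, but they may not exist in a VM/container.
d=/sys/devices/system/cpu/cpu0/cpufreq
cat "$d/scaling_driver" 2>/dev/null || echo "no cpufreq sysfs here (VM/container?)"
cat "$d/scaling_governor" 2>/dev/null || true
```

On an AMD box using the pstate driver you'd expect to see `amd-pstate` or `amd-pstate-epp` in the first line.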
  7. Its ability to mix and match stable with testing (and even git sources) means you can have the best of both worlds. Slotted packages mean you can keep multiple versions of wine floating around for legacy games you "just don't want to let go". And because you have had absolute control over your OS from the very beginning, making sure that memory usage and CPU overhead aren't wasting bits and cycles not making pretty graphics is really easy. The only thing people really have a problem with is the steep learning curve. The pro of Gentoo: it does exactly what you tell it to, no more. The con of Gentoo: it does exactly what you tell it to, no more.
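As a hedged sketch of what that mix-and-match looks like in practice on Gentoo (package names here are illustrative, and the keyword depends on your arch):

```shell
# /etc/portage/package.accept_keywords/mixing
# Accept the ~amd64 (testing) keyword for just these packages, while the
# rest of the system stays on the stable keyword. Package names are
# illustrative - pick the ones you actually want on the bleeding edge.
app-emulation/wine-vanilla ~amd64
media-libs/mesa ~amd64

# Wine is slotted by version, so several can be installed side by side;
# list what's available/installed with e.g.:
#   eselect wine list
```

The point is that "testing" is opted into per-package, not system-wide, which is exactly the pick-and-choose control described above.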
  8. Nothing ever is, just another 'layer of frustration'; just keep closing the doors as bad actors find them, and lock the ones we can predict they'll try to open. I previously posted "I'm sure smarter people than myself have more practicable solutions though."; I should have included "effective" in that too. You are being overly broad with the term "binary blob". In this case we are not talking about the output of an entire package build, which yes would require significant controls to reproduce, but some binary test files which could be reproduced programmatically in a fairly simple controlled environment. The most important thing is, though, that if a couple of schmoes in a tech forum can have a productive discussion about ways of thwarting similar attempts moving forward, all hope is not lost.
  9. Binary blobs don't just appear, and you are seeing a narrow view of the term "stakeholder"; if you include package maintainers for affected distros in that cohort and poll them, I'm pretty sure they'd be happy to produce and compare their "blobs" between themselves. It's a reference to how the git repo itself was clean, but the release archives offered were different to the repo. Yes, there are reasons for this to happen, but everywhere in the "real world" security is always a usability trade-off, nearly always at the cost of "upstream" work to maintain as much "usability" as possible at the point of actual use; just think of any security measure you have ever interacted with and you'll be able to see the administrative or usability cost associated with its existence. Back onto FOSS software though: a mechanism to check that the offered archive was a facsimile of the repo would frustrate this malicious code obfuscation method.
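That "archive is a facsimile of the repo" check can be toy-sketched like this - the file names below are made up, and a real implementation would diff `git archive` output for the release tag against the shipped tarball's contents:

```shell
# Toy demonstration: list the files present in the git tree and in the
# shipped tarball, then diff. An archive-only file (the xz attack hid its
# trigger in a tarball-only build-to-host.m4) stands out immediately.
printf 'configure.ac\nsrc/main.c\n' | sort > repo-files.txt
printf 'configure.ac\nm4/build-to-host.m4\nsrc/main.c\n' | sort > tarball-files.txt
diff repo-files.txt tarball-files.txt || echo "archive differs from repo - investigate"
```

Content differences matter too, not just extra files, so a fuller check would hash each file on both sides.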
  10. Well, to circumvent this one, binary blobs need to be verified and signed by multiple stakeholders, or outright excluded; and instead of pulling archives from git repos, pull via git at a specific point in the repo's history. If both these practices were followed, the two vectors used in this attack wouldn't have been available. I'm sure smarter people than me have more practicable solutions though.
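A hedged sketch of the "pull via git at a specific point in history" idea, using a throwaway local repo so the commands run as-is - a real build would clone the upstream URL and pin a commit hash that has actually been reviewed:

```shell
# Build from a pinned, audited commit rather than whatever a release archive
# claims to contain. The repo here is a local toy; substitute the real
# upstream clone URL and a reviewed hash in practice.
git init -q pin-demo && cd pin-demo
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "audited release"
good=$(git rev-parse HEAD)      # record the reviewed commit hash
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "later, unreviewed change"
git checkout -q "$good"         # the build tree is now exactly the audited state
git log -1 --format='%s'        # -> "audited release"
```

Because the hash names the exact tree contents, nothing a tarball-generation step adds (like a doctored m4 file) can sneak in.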
  11. That reminds me: there was an update late last night showing the exploit was for RCE and not authentication bypass. Whilst this is moot from a "consequence" point of view, it does give a springboard for log investigation (read: connections to port 22 that don't attempt any authentication).
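A hedged starting point for that log investigation - log file locations differ by distro (Debian-likes use auth.log, RH-likes use secure), and journald-only systems want journalctl instead:

```shell
# Look for sshd connections that closed before any authentication attempt;
# they show up as "[preauth]" lines. Paths vary by distro, hence both.
grep -h 'sshd.*\[preauth\]' /var/log/auth.log /var/log/secure 2>/dev/null \
    | grep 'Connection closed' \
    || echo "no pre-auth-only connections in those logs (or logs live elsewhere)"
# On journald-only systems, something like: journalctl -t sshd | grep preauth
```

Plenty of benign scanners also disconnect pre-auth, so this surfaces candidates to correlate, not proof of exploitation.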
  12. The thing he doesn't even brush on is the convergence of configuration choices and system set-up required for the malicious code to even be exploited, so let's list those here:
      1. An ssh daemon running on an open port of a public IP
      2. Said sshd using RSA priv/pub key auth
      3. Be using systemd
      4. Your systemd is built with lzma support
      5. Your openssh has been patched to link sshd to libsystemd
      6. You have an infected liblzma
      On point 1, as discussed above with @igormp, there are two types of people who do this: those who know what they are inviting and are ready to deal with it, and those who "get what they fscking deserve". Point 2 is pretty much everyone who's ever read a "how to ssh server" tutorial, and sshd does it by default*. Point 3 is probably most people, except those who actively avoid it. Point 4 is going to be distro/personal choice, but again most are probably built with xz compression support. On point 5 you need to check if your distro does this; I know none I let near open public IP ports do, and neither does Arch. Point 6 is almost no one, as you'd have to be running "bleeding edge" software; most people who do this (who aren't distro testers) choose distros that allow them to pick and choose which parts should be bleeding edge and which should be stable, based on need.
      Don't get me wrong, this is serious and will have repercussions for FOSS moving forward, but not because "half of all linux servers are infected with malware and are security compromised" - because they just aren't and, without further developments in this case, aren't going to be either.
      *In sshd_config from the openssh git repo.
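A few of those points can be checked mechanically; a hedged sketch (binary paths vary by distro, and a container's PID 1 won't be your host's init):

```shell
# Point 3: is PID 1 systemd?
ps -p 1 -o comm= 2>/dev/null || echo "cannot read PID 1 (container?)"

# Points 4/6: which xz/liblzma is installed? 5.6.0 and 5.6.1 were the bad ones.
xz --version 2>/dev/null || echo "xz not installed"

# Point 5: does this distro's sshd link libsystemd at all?
ldd "$(command -v sshd || echo /usr/sbin/sshd)" 2>/dev/null | grep libsystemd \
    || echo "sshd not linked against libsystemd here"
```

If any one check comes back clean, the chain above is broken for that machine.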
  13. Agreed, devs don't run "bleeding edge" on the systems they build stable packages on; it's just inviting non-issues to present themselves. Did Fedora devs build and package on their own "infected systems"? I doubt it very much; if I build for other systems I do it on the most mature and stable system available, and all because no one wants a shitty binary or archive. Not really; the payload was only uploaded 5 weeks ago, so even if it did/does have "other functionality" (like injecting malicious code whilst de/compressing an archive), nothing produced before 23rd Feb has even the remotest chance of being compromised. If you are concerned because you have had sshd running on an open port exposed on a public IP, with RSA priv/pub key authentication enabled, and use systemd, then run ldd $(which sshd). If liblzma isn't in the list, you have nothing to worry about.
  14. How do you think they work without an open sshd port? Yeah, I can see this; no one in their right mind runs anything other than "stable" on outward-facing hardware. The issue right now is that it was pure luck this was found; "will we find the next one?" and "did we find the last one?" are scary questions.
  15. "The resulting malicious build interferes with authentication in sshd via systemd." SysVinit chads keep winning.
      On a serious note, https://www.openwall.com/lists/oss-security/2024/03/29/4 is an interesting read, and worth a look for anyone wondering what to do right now. The script mentioned is https://www.openwall.com/lists/oss-security/2024/03/29/4/3, but its use of set -e means that when the grep fails it exits without telling you "probably not vulnerable"; so just look for 'liblzma' in the list when you run ldd $(which sshd), and if it's there, get and run the script for a deeper diagnosis...
      Meh, they should if their purpose is to allow remote access. That is what's pernicious about this backdoor: it interferes with RSA authentication, the default "hardened" approach to keep the script kiddies out.
      Indeed!
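Since the linked script's set -e quirk can swallow the "probably not vulnerable" message, here is a hedged version of the same first-pass check that always reports something either way:

```shell
# First-pass check: is the sshd binary linked against liblzma at all?
# If not, the xz backdoor has no way into your sshd. Path varies by distro.
sshd_bin="$(command -v sshd || echo /usr/sbin/sshd)"
if ldd "$sshd_bin" 2>/dev/null | grep -q 'liblzma'; then
    echo "liblzma is linked - run the deeper detection script from the advisory"
else
    echo "no liblzma linkage found - probably not vulnerable"
fi
```

This only tells you whether the attack's entry point exists on your box; a linked liblzma still needs the infected 5.6.x versions to be dangerous.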