Ralphred
Member, 685 posts
Everything posted by Ralphred

  1. Yeah, pulse/pipewire on top of jack (on top of alsa), but it's not for the faint of heart. The reason this works is: ALSA has to work for anything else to work, it just sucks at "client side" stuff. Jack is a very good virtual mixer; it's very easy to make it "dance to your tune", you just have to tell it what to do, but client side support is "nebulous". pulseaudio and pipewire suck at most things apart from being "an API for devs to throw sounds at", so offering them at the top level just makes sense. People will moan about "over engineered" or "too many layers to be real-time", but, yeah, I'll take the 12ms latency hit and make it do as it's told, thanks. Anything that wants to use native alsa can be redirected through p/pw at the top of the stack, and anything that *wants* to use jack should be redirectable too. What you get is a seamless desktop experience where everything "just werks" from volume controls to output switching, but jack gives you a key to the "in between" door so you can redirect things to new (specific) outputs/inputs for recording, monitoring, filtering etc. I set this up on one system as it had a specific use case, but it works so well it's my go-to now for anything that needs more control than "balance and volume".
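     The "redirect native alsa" part is just ALSA client config; a minimal sketch, assuming the pipewire-alsa plugin is installed (package name varies by distro):

         # ~/.asoundrc - point default ALSA clients at PipeWire instead of raw hardware
         pcm.!default {
             type pipewire
             playback_node "-1"   # -1 = let the session manager choose the sink
             capture_node  "-1"
         }
         ctl.!default {
             type pipewire
         }

     With that in place, even alsa-only apps land in the graph where jack-style patching can grab them.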
  2. So, the reason systemd was lauded as "fast" is that all the dependency calculation and service start-up is done as a function of compiled code and not scripts. What this ignores is that systems like runit and S6 have a "static dependency tree" pre-configured, essentially sidestepping a lot of "spent time" at boot. One of the limitations of systemd is that you are tied to the in-built functionality, so anything that goes slightly off piste means building two services: one that does the "checking you would do naturally" within sysvinit scripts, then one that depends on it to run the actual service (see the runit sketch after this post for the static equivalent). I think there are some legs in not using udev too, but I would have to read up on it. Whilst @Gat Pelsinger is trying to make a system boot fast, there are many layers of optimisation that can be gone through, but each requires a new level of understanding of the underlying system, and steps to remove "dynamic" solutions* from his known static system.

     *A small list would include:
       - No bootloader
       - No initrd
       - Choice of compiler and glibc
       - Module-less kernel (plus other kernel tweaks)
       - Pre-configured init system
       - Back-grounding of non-essential services

     @Gat Pelsinger, mate, short of going full basement dweller and installing LFS, just swallow the time cost and install Gentoo. It's "not really a distro", just a set of tools for "building your own OS"; it will give you the luxury of a working system whilst you work to understand the tweaks you can make to speed things up (a lot of which are OoTB options). You've got a machine capable of the heavy lifting; in the long term you can run up a binhost or "compiler helper" in a VM to simplify updates etc.
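     That runit sketch: the dependency check lives in the service's own run script, so nothing is calculated at boot - runit just re-runs the script until the check passes (service names here are hypothetical):

         #!/bin/sh
         # /etc/sv/myapp/run - wait for a dependency, then exec the real daemon
         sv check postgresql >/dev/null || exit 1   # a non-zero exit makes runit retry shortly
         exec myapp --foreground                    # exec so runit supervises myapp itself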
  3. From a purely "intuitive" point of view, S6 or runit should be quickest as there is no "dynamic" scripting going on - you do those "calculations" once and store them as a static boot process. But you can do (almost) the same with the others by careful dependency management in config files and parallel start-up, forcing "the optimal solution" to be simple to find.
  4. Might struggle with this, need to check out openrgb, the rest will be fine, 550 support has existed for a very long time.
  5. Yep, and if the kernel isn't using it that makes it free for other programs to use.
  6. You can config a custom kernel to boot faster, exclude drivers you explicitly don't need, and save memory (some options are boolean and not available as tri-state modules). Now, booting faster means nothing to me: my desktop uptimes are measured in weeks, and my laptop restores from sleep/hibernate, so the longest part of booting is "copying to memory from disk". As far as quantifying memory saved, I saw a guy showing the amount he saved using a custom kernel in Ubuntu - I don't remember how much it was because "I do that already and I see no reason to change", but he was impressed, so YMMV... In 20 years of building static non-modular kernels I've only built one that uses an initrd and dynamically loads drivers from disk; yes it's slower, but the difference is still an order of magnitude less than the time the BIOS takes to POST.
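     If you fancy trying it, the usual starting point looks something like this (a sketch; paths assume an installed kernel source tree at /usr/src/linux):

         cd /usr/src/linux
         make localmodconfig                 # seed the config from modules currently loaded
         make menuconfig                     # flip wanted drivers from =m (module) to =y (built-in)
         make -j"$(nproc)" && make install   # add "make modules_install" if any =m survive

     localmodconfig only knows about hardware present when you run it, so plug in any docks/USB devices you care about first.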
  7. Since the ntfs3 driver is now built into the kernel (as opposed to the old FUSE-based ntfs-3g driver), NTFS is my filesystem of choice for "shared partitions". Before that I used ext4 and the Ext2Fsd driver in windows, but modern ntfs3 is better. That said, Ext2Fsd is fine if you just need to occasionally access a primarily linux drive in windows. exFAT was always a shitshow for meaningful usage.
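     Worth checking you're actually getting the in-kernel driver and not a FUSE fallback; a quick sketch (device and mountpoint are examples):

         mount -t ntfs3 /dev/sda3 /mnt/shared   # fails if CONFIG_NTFS3_FS isn't in your kernel
         findmnt /mnt/shared                    # FSTYPE "ntfs3" = kernel driver, "fuseblk" = ntfs-3g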
  8. Similar experience, was able to use an SDcard to "put android" on an ancient WinCE device. Bending embedded stuff to your will is not for the faint of heart though.
  9. Just because I like to drop this info for dual booters: setting GRUB_DEFAULT=saved and GRUB_SAVEDEFAULT=true in /etc/default/grub (not the generated grub.cfg) and regenerating your config means grub will remember which OS you used last boot and treat that as the "default". It means you don't have to hand-hold those "I'm going to reboot 12 times" windows updates.
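     Concretely (the regeneration command varies by distro; Debian-likes wrap it as update-grub):

         # /etc/default/grub
         GRUB_DEFAULT=saved
         GRUB_SAVEDEFAULT=true

         # then regenerate:
         grub-mkconfig -o /boot/grub/grub.cfg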
  10. This isn't strictly true; you can "pass through" GPU control between multiple clients, as long as one of them isn't a windows guest that is on all the time. Even then, I'm pretty sure my inability to "disconnect it without summoning the BSOD demon" could be sidestepped by windoze people who know what they are doing.
  11. The further away you go from the "root distro", the more choices are going to be pre-made for you by the maintainers of the downstream distro. If the choices they have made fit your use case then go with it; if they don't, move further upstream. The thing you need to be mindful of is knowing that these "pre-made" choices exist, and if/when they are/aren't suitable for your use case; unfortunately this knowledge comes only through experience. Some of these "pre-made" choices need a deeper understanding to "choose differently" too, like whether you are going to slow the boot process by using an initrd and disk-stored loadable drivers.
  12. I'm thinking: why remove MATE at all? Just leave it in the background, at least for 6 months or so until you are "happy" with your new DE.
  13. The whole recent "liblzma backdoor" requires systemd to work - anyone who didn't see this coming is either a liar or a fool; I don't care which, because if you can't see how this is the case then your opinion is obviously garbage, built upon a pile of nonsensical hot s**t, and subsequently worthless to me. The paradigm is "One job, do it well!", not "30 jobs, do them all with the lamentable mediocrity of a public sector worker one month from retirement and a comfy pension, if you're up for it..." - literally SMFH!
  14. I've always been wary of systemd for two reasons:
       - "Do one job and do it well" as a paradigm has always served me exceptionally well.
       - zeroconf and pulseaudio were a fscking sh**show for years before you were able to bridle them and make them dance to your tune.

     At one point I got to the stage of "Well, you are building for a laptop, systemd is supposed to be quicker for boot and such". Soon after setting the system up I realised that because I "sleep, then hibernate after 60 minutes of sleep", the longest part of the boot process was copying 8 gig of SSD into 8 gig of RAM, so "boot time" was actually pretty moot. After (genuinely) hours trying to reconcile systemd's self-contradictory documentation, I gave up trying to make it do what it was told and created my first "systemd-network-unf**ker.service". As the build evolved (read: added layers of needed software) I had to write 2 more systemd-[system component]-unf**ker.service files and associated fixing scripts. Some months later a systemd update was available, and after applying it the machine started "booting fresh" from hibernation. At that point I gave up and switched to OpenRC - it took less time than reconciling systemd's docs, let alone writing 3 "unf**ker" services.

     Tl;dr, my conclusion from this exercise in pointless ovine mediocrity: never again, ever. If your "distro of choice" makes it take longer to write an LACP network config than it does to switch from systemd to SysVinit, then you chose wrong, because I know I did when thinking "systemd might actually work!". Thanks for reading my blog, T. *nix user of 28 years and counting.
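     For scale: the LACP config in question is about four lines under OpenRC/netifrc; a sketch (interface names and address are examples):

         # /etc/conf.d/net  (plus: ln -s net.lo /etc/init.d/net.bond0)
         slaves_bond0="enp3s0 enp4s0"
         mode_bond0="802.3ad"             # 802.3ad = LACP
         config_bond0="192.168.1.10/24"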
  15. The beauty of a rolling release is you only have to do it once; the trick is to know what you want when you start. Yeah, there is the odd glitch: I updated my kernel* the other day and the battery monitor for my gamepad stopped working because the nomenclature of the file that stores the battery level changed; five minutes later it was working again. *The amd-pstate driver for CPU frequency scaling is on another level; my CPU now runs faster and cooler during gaming than ever before, because the powersave policy updates with such low latency that I don't need to set it to performance to game anymore.
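     If you want to see what your own box is using, the standard cpufreq sysfs nodes tell you:

         cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver    # e.g. amd-pstate-epp
         cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor  # e.g. powersave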
  16. Its ability to mix and match stable with testing (and even git sources) means you can have the best of both worlds. Slotted packages mean you can keep multiple versions of wine floating around for legacy games you "just don't want to let go". And because you have had absolute control over your OS from the very beginning, making sure that memory usage and CPU overhead aren't wasting bits and cycles not making pretty graphics is really easy. The only thing people really have a problem with is the steep learning curve. The pro of Gentoo: it does exactly what you tell it to, no more. The con of Gentoo: it does exactly what you tell it to, no more.
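     The mix-and-match bit is one line per choice; a sketch (the atom and slot number are examples):

         # /etc/portage/package.accept_keywords/wine
         app-emulation/wine-vanilla ~amd64       # just this package from the testing branch

         # keep an old slotted wine installed alongside the new one:
         emerge --noreplace app-emulation/wine-vanilla:9.0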
  17. Nothing ever is, just another "layer of frustration"; just keep closing the doors as bad actors find them, and lock the ones we can predict they'll try to open. I previously posted "I'm sure smarter people than myself have more practicable solutions though."; I should have included "effective" in that too. You are being overly broad with the term "binary blob". In this case we are not talking about the output of an entire package build, which yes would require significant controls to reproduce byte-for-byte, but some binary test files which could be reproduced programmatically in a fairly simple controlled environment. The most important thing is, though: if a couple of schmoes in a tech forum can have a productive discussion about ways of thwarting similar attempts moving forward, all hope is not lost.
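     To make "reproduced programmatically" concrete, a hypothetical sketch (file names are made up): the fixtures come from a recipe in the repo instead of committed blobs, so anyone can diff what the recipe produces against what ships:

         # tests/gen-files.sh - regenerate test fixtures instead of committing them
         head -c 1024 /dev/zero | xz -9 > tests/files/good-1k.xz
         cp tests/files/good-1k.xz tests/files/bad-truncated.xz
         truncate -s 512 tests/files/bad-truncated.xz   # deliberately broken input for the decoder test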
  18. Binary blobs don't just appear, and you are seeing a narrow view of the term stakeholder; if you include package maintainers of affected distros in that cohort and poll them, I'm pretty sure they'd be happy to produce and compare their "blobs" between themselves. It's a reference to how the git repo itself was clean, but the release archives offered were different to the repo. Yes, there are reasons for this to happen, but everywhere in the "real world" security is always a usability trade-off, nearly always at the cost of "upstream" work to maintain as much "usability" as possible at the point of actual use; just think of any security measure you have ever interacted with and you'll be able to see the administrative or usability cost associated with its existence. Back onto FOSS software though: a mechanism to check that the offered archive was a facsimile of the repo would frustrate this malicious code obfuscation method.
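     Most of that mechanism already exists as plumbing; a sketch (URL, project name and version are examples):

         mkdir -p /tmp/from-git /tmp/from-release
         git clone https://example.org/project.git && cd project
         git archive --prefix=project-1.0/ v1.0 | tar -x -C /tmp/from-git
         tar -xf ../project-1.0.tar.gz -C /tmp/from-release
         diff -r /tmp/from-git /tmp/from-release

     Generated autotools files will legitimately differ between tag and tarball - which is exactly the gap the xz payload hid in - so the point of the exercise is making that delta small enough to eyeball.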
  19. Well, to circumvent this one, binary blobs need to be verified and signed by multiple stakeholders, or outright excluded, and instead of pulling archives from git repos, pull via git at a specific point in the repo's history. If both of these practices were followed, the two vectors used in this attack wouldn't have been available. I'm sure smarter people than me have more practicable solutions though.
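     In packaging terms the second practice is just pinning; a sketch (URL, tag and hash are placeholders):

         git clone https://example.org/project.git
         git -C project verify-tag v1.0          # only works where upstream signs tags
         git -C project checkout <commit-hash>   # exact hash recorded in the build recipe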
  20. That reminds me: there was an update late last night showing the exploit was for RCE (remote code execution) and not authentication bypass. Whilst this is moot from a "consequence" point of view, it does give a springboard for log investigation (read: connections to port 22 that don't attempt any authentication).
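     Something along these lines, depending on your logging setup (log path and unit name vary by distro):

         grep -E 'sshd.*Connection (closed|reset) by .* \[preauth\]' /var/log/auth.log
         # or on a journald system:
         journalctl -u sshd --grep 'preauth'

     Pre-auth disconnects are normal background noise on any public port 22, so you're looking for volume and source patterns, not single hits.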
  21. The thing he doesn't even brush on is the convergence of configuration choices and system set-up required for the malicious code to even be exploited, so let's list those here:
       1. An ssh daemon running on an open port of a public IP
       2. Said sshd using RSA priv/pub key auth
       3. Be using systemd
       4. Your systemd is built with lzma support
       5. Your openssh has been patched to link sshd to libsystemd
       6. You have an infected liblzma

     On point 1, as discussed above with @igormp, there are two types of people who do this: those who know what they are inviting and are ready to deal with it, and those who "get what they fscking deserve". Point 2 is pretty much everyone who's ever read a "how to ssh server" tutorial, and sshd does it by default*. Point 3 is probably most people, except those who actively avoid it. Point 4 is going to be distro/personal choice, but again most are probably built with xz compression support. On point 5 you need to check if your distro does this; I know none I let near open public IP ports do, and neither does Arch. Point 6 is almost no one, as you'd have to be running "bleeding edge" software, and most people who do that (who aren't distro testers) choose distros that let them pick and choose which parts should be bleeding edge and which should be stable, based on need. You can sanity-check most of these in one go, as sketched below.

     Don't get me wrong, this is serious and will have repercussions for FOSS moving forward, but not because "half of all linux servers are infected with malware and are security compromised", because they just aren't and, without further developments in this case, aren't going to be either.

     *In sshd_config from the openssh git repo.
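     The sketch (unit names and paths vary; the infected releases were xz 5.6.0 and 5.6.1):

         systemctl is-system-running >/dev/null 2>&1 && echo "systemd is in play"   # points 3/4
         ldd "$(command -v sshd)" | grep -E 'libsystemd|liblzma'                    # point 5: empty output = not linked
         xz --version                                                               # point 6: 5.6.0 / 5.6.1 were the bad ones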
  22. Agreed, devs don't run "bleeding edge" on the systems they build stable packages on; it's just inviting non-issues to present themselves. Did Fedora devs build and package on their own "infected systems"? I doubt it very much; if I build for other systems I do it on the most mature and stable system available, all because no one wants a shitty binary or archive. Not really: the payload was only uploaded 5 weeks ago, so even if it did/does have "other functionality" (like injecting malicious code whilst de/compressing an archive), nothing produced before 23rd Feb has even the remotest chance of being compromised. If you are concerned because you have had sshd running on an open port exposed on a public IP with RSA priv/pub key authentication enabled, and you use systemd, then run ldd $(which sshd). If liblzma isn't in the list you have nothing to worry about.
  23. How do you think they work without an open sshd port? Yeah, I can see this; no one in their right mind runs anything other than "stable" on outward-facing hardware. The issue right now is that it was pure luck this was found; "will we find the next one?" and "did we find the last one?" are scary questions.