Everything posted by Ralphred

  1. Right, I can't write the solution to this here, but I can tell you where to look. If your laptop is configured correctly, this detection will show up in udev, so `udevadm monitor` run in a terminal will show what udev knows about "seeing the pen above the screen" or not. At this point I'd start scripting /usr/bin/xinput to find, identify, and disable the touchpad, but you are using Wayland, so you'll need someone familiar with Wayland to help with that. Once you have a script (or scripts) to switch the touchpad on and off, you just need to tie it to the events udev generates when you move the pen in/out of range, by adding custom rules in /etc/udev/rules.d/. This isn't newbie stuff, but it's totally doable (assuming Wayland is capable of disabling/re-enabling the touchpad), and a worthy endeavour.
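     As a minimal sketch of the udev end, assuming the pen shows up as add/remove events on an input device, and with hypothetical script paths and a hypothetical name match (take the real values from what `udevadm monitor` shows on your hardware):

        # /etc/udev/rules.d/99-pen-touchpad.rules (hypothetical names/values)
        ACTION=="add",    SUBSYSTEM=="input", ATTRS{name}=="*Pen*", RUN+="/usr/local/bin/touchpad-off.sh"
        ACTION=="remove", SUBSYSTEM=="input", ATTRS{name}=="*Pen*", RUN+="/usr/local/bin/touchpad-on.sh"

     Reload with `udevadm control --reload` and test by moving the pen in and out of range.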
  2. That article is out of date: ntfs-3g is the FUSE userspace driver (old and slow, without full functionality); the modern driver was added around kernel 5.15 and is called ntfs3.

        mount -t ntfs3 /dev/mapper/ldm_vol_AYAANMAGS-SIDEK-Dg0_Volume1 /mnt/Ddrive

     should do it. A comparable fstab line would read:

        /dev/mapper/ldm_vol_AYAANMAGS-SIDEK-Dg0_Volume1 /mnt/Ddrive ntfs3 rw,noatime,iocharset=utf8 0 2

     You can replace "/dev/mapper/ldm_vol_AYAANMAGS-SIDEK-Dg0_Volume1" with the UUID from `blkid` (a 16 character hex string for NTFS) if you like.
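     For reference, the UUID variant would look something like this (the UUID below is a made-up placeholder; use whatever blkid actually reports):

        blkid /dev/mapper/ldm_vol_AYAANMAGS-SIDEK-Dg0_Volume1
        # then, in fstab:
        UUID=0123456789ABCDEF /mnt/Ddrive ntfs3 rw,noatime,iocharset=utf8 0 2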
  3. Fixing it should be as simple as issuing a grub-install command from your Ubuntu install media. The only part that Windows messes with is the actual loader in the boot sector, so mount your /boot over the top of the live OS's /boot and run:

        grub-install --skip-fs-probe --target=x86_64-efi --force /dev/sda

     replacing /dev/sda with your drive's device file (maybe /dev/nvme0n1, /dev/sdc etc). You should look at grub's GRUB_SAVEDEFAULT and GRUB_DEFAULT config settings too; they would be useful in this use case.
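     A minimal sketch of that mounting step, assuming a hypothetical layout where /dev/sda1 is the boot/ESP partition and the live session is booted in EFI mode (check yours with `lsblk`):

        # as root, from the live session
        mount /dev/sda1 /boot
        grub-install --skip-fs-probe --target=x86_64-efi --force /dev/sda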
  4. You should have a bunch of autostarted programmes in either/both /etc/xdg/autostart and/or ~/.config/autostart. These two directories should be read by all desktop environments, and the some_program.desktop files inside them acted on. Try copying one of these files and tailoring it to start your pre-conky script. Obviously in ~/.config it will only work for that user; /etc should work for all users.
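     A minimal sketch of what the tailored file could look like (the file name, Name and Exec path are hypothetical; point Exec at wherever your script actually lives):

        # ~/.config/autostart/pre-conky.desktop
        [Desktop Entry]
        Type=Application
        Name=Pre-Conky
        Exec=/home/you/bin/pre-conky.sh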
  5. You are approaching this from the wrong perspective: OP doesn't need a distro that does what he wants, he needs one that allows him to *make* it do what he wants. The guy was wise enough not to install AUR packages on a Manjaro system, which means he's been to more than one source of information before proceeding and is capable of reading; it's time for @GCandy77 to take the training wheels off. Ubuntu, Pop, Zorin & Mint are normie tier distros; Fedora and Suse are RC testing areas for EL distros; any of these just moves him from one walled garden to another, in which, sooner or later, he'll find one of the walls too constraining and be faced with distro hopping again, or becoming a package dev. Whilst the logical recommendation would be Debian, I believe he'll be capable of "duplicating" his current system on Arch, and that is what a virtual machine will help with re: checking you can install a distro before you *need* to install it. If Arch is too arduous, then try Debian.
  6. Just turn off hardware acceleration. Whilst I admit my mesa is complete, I never use hardware decoding, because I'm apt to watch podcasts etc whilst gaming and don't need the GPU getting any hotter; everything works fine. The only time I'd bother messing around to make vaapi work "properly" is if I was streaming/recording CPU intensive tasks. If you are getting to the point that Manjaro is starting to constrain your attempts to make your PC do what you want it to, then it's probably time to move upstream to Arch.
  7. Try this, it should be available from one of your distro repos though.
  8. Sometimes. From the last few lines of "man xorg" you can see there is a man page for xorg.conf (/etc/X11/xorg.conf), which is one of the most complete man pages ever, considering the possible complexity of xorg.conf.
  9. Your passthrough has failed; it should show as:

        Kernel driver in use: vfio-pci
        Kernel modules: amdgpu

     I have a very similar set-up to the one you are trying to achieve, but my kernel is static. That's what I use (and I don't use amd_iommu, but because `static kernel` it's implied). That said, it's not working after an upgrade to kernel 6.x (still works fine on the 5.x series), but that's probably a consequence of my hazy `make oldconfig` attention span; I'll post here when I find what the problem was...

     Now, because you run your cards with different kernel modules (both mine are amdgpu, so I probably would have taken the *easy* way out if it was an option), I'm going to recommend you watch this guide**, or, more importantly, check out the scripts he uses to safely "disassemble" the running linux machine and make the GPU available for his windows VM.

     **The reason I recommend this particular guide*** is that passthrough on a single GPU is clearly the most complicated set-up of the available options, and although it isn't your end game, moving the passthrough "initialisation" into userspace allows you to test the "backend" (read: kernel capability, bios & iommu grouping) without having to tinker with grub etc and reboot just to test a different setting, or deal with (possible) udev nonsense.

     ***For seasoned linux users some of his mistakes are a bit cringe, but he owns them and gets the job done, whilst showing that the amount of research and understanding he needed to achieve the end goal is not trivial, and worthy of respect.

     I'm off to check my linux-6.config; I hope the resources help you move forward...

     EDIT: I found and fixed the 6.x problem: I'd managed to include some "guest VM drivers", so the kernel seemed to be passing the "spare" GPU through to itself; odd state of affairs...
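     For reference, the userspace "initialisation" boils down to unbinding the card from its host driver and handing it to vfio-pci through sysfs; a minimal sketch, with a made-up PCI address (get yours from `lspci -D`, and repeat for the card's HDMI audio function):

        #!/bin/sh
        DEV=0000:0a:00.0                  # hypothetical GPU address
        modprobe vfio-pci
        echo "$DEV" > "/sys/bus/pci/devices/$DEV/driver/unbind"
        echo vfio-pci > "/sys/bus/pci/devices/$DEV/driver_override"
        echo "$DEV" > /sys/bus/pci/drivers_probe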
  10. Erm, you shouldn't be storing your kernel config in /. Personally, extracting tarballs in /usr/src/ and running operations from /usr/src/linux[whatever] as root is fine; you are configuring the ostensibly lowest level part of your OS, so doing it as root is fine. (Also, some 3rd party modules expect to find the current kernel's source in /usr/src/linux, so using uname at boot to keep /usr/src/linux as a symlink pointing at the running kernel's source can be a good (re: applied laziness) idea.)

        I don't understand what u mean by that

     So, this is a bit abstract from kernel specific compilation, but "make" as a command expects to find a file in its invoking directory, literally named "Makefile". This file tells the make command how to handle specific "targets" (nothing also being a "target"), targets being parts of the software you are trying to configure/compile, and said Makefile is normally in the root directory of the source/submodule you are trying to work with. When compiling a kernel, the targets "[something]config" all deal with the (possibly complex) process of configuring the kernel before compiling it. For make to complain that a "target" is missing means you have misspelled something in your command, or you are not in the correct "source root" directory to be running make on that target.

     The problem with the guide you are following is that it tells you what to do, but not why you are doing it, all the way down to packaging your kernel as a .deb when a simple "make install && make modules_install && grub-mkconfig >/boot/grub/grub.cfg && reboot" does everything you need. Also, copying /boot/config-[version] to /usr/src/linux/.config is possibly flawed, when zcat /proc/config.gz >/usr/src/linux/.config will always copy the latest running config and isn't prone to human error (useful after it becomes necessary to make mrproper).

     Long and short: make sure you are actually in the kernel source root before running make [anything].
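     Pulling that together, a typical from-source run looks something like this (assuming the /usr/src/linux convention above, and a kernel built with /proc/config.gz support):

        cd /usr/src/linux
        zcat /proc/config.gz > .config    # start from the running kernel's config
        make oldconfig                    # answer prompts for any new options
        make -j$(nproc)
        make modules_install install
        grub-mkconfig -o /boot/grub/grub.cfg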
  11. Post outputs of:

        lspci -kk | grep -A6 VGA
        dmesg | grep amdgpu
        cat /var/log/Xorg.0.log   # careful with this: start X, stop it, then post, or just post the initial stages
        find /etc/X11/

     please...
  12. Nah, just tell the kernel not to load drivers for it and leave it for passthrough; Linux can cope with headless. It only takes an hour, from scratch, for config and build, and it's worth it for the security benefits alone.
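     A hedged sketch of "not loading drivers for it", assuming vfio-pci is available and using made-up vendor:device IDs (get the real pair from `lspci -nn`):

        # /etc/default/grub
        GRUB_CMDLINE_LINUX="vfio-pci.ids=1002:73bf,1002:ab28"

     or, more bluntly, keep the host driver away from it altogether with a `blacklist amdgpu` line in /etc/modprobe.d/ (only viable when the host isn't using the same driver for another card).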
  13. This isn't strictly true; though I wouldn't recommend someone try to configure and implement GPU passthrough for the first time on a CLI, there is no reason to have X permanently installed. If I were doing this:

     - Pick any distro (but use your "endgame" kernel*)
     - Pass through anything the host doesn't need (audio, keyboard**, mouse**)
     - Build your VM (qemu/virt-manager)
     - Back up the config files (the virtual drive should be a whole drive/partition)
     - Install your final host OS
     - Run a script from inittab to read /proc/cmdline, and start the VM when passthrough is active (see the sketch below)
     - Re-implement the VM, set up ssh for host administration
     - Set up grub with an administration entry (no passthrough, for host updates etc)
     - Optionally set up a bootable USB with a GUI for SHTF recovery/administration

     Considering that libvirtd and VMM come out of the RedHat house, CentOS seems a sensible build OS, but systemd is bloat for this use case; S6 would be the ideal choice.

     *Any generic kernel is going to be the epitome of bloat for this use case; config/build your own, with the extra security layer of no capability for module loading. Using an initrd would also be a complete waste of time.

     **The reason I say these is you want the best end experience, and if you are allowing windows to load its native drivers for things like MM keys and touchpads etc, IME it'll work better, and cut down on translation layer overhead.
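     A minimal sketch of that inittab step, with hypothetical names throughout (the `passthrough` cmdline token, the VM name, the script path) and assuming libvirt, hence `virsh`:

        # /etc/inittab entry (sysvinit-style, hypothetical)
        # pt:3:once:/usr/local/sbin/start-passthrough-vm.sh

        #!/bin/sh
        # /usr/local/sbin/start-passthrough-vm.sh
        # only start the VM when booted via the passthrough grub entry
        grep -qw passthrough /proc/cmdline && virsh start windows-vm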
  14. Aye, "if" is "input file", "of" is "output file". The downside of dd is it will copy the empty space on a drive/partition too. This is obviously fine when you have optimised a source, from which you are going to create an image, to have no/minimal "empty" space, but for the purposes of backing things up on a regular basis rsync or tar are better alternatives. If your "new SSD" is orders of magnitude larger than the original, then storing images as files is a better idea:

        #only once
        mkfs.ext4 /dev/sda
        mkdir -p /mnt/snapshots
        #every time
        mount /dev/sda /mnt/snapshots
        #make a snapshot
        dd if=/dev/nvme0n1 of=/mnt/snapshots/2022-11-30
        #or to restore
        dd if=/mnt/snapshots/2022-11-30 of=/dev/nvme0n1

        #or to use compression (for the "empty" space)
        #make a snapshot
        dd if=/dev/nvme0n1 | xz -zc >/mnt/snapshots/2022-11-30.xz
        #restore a snapshot
        xzcat /mnt/snapshots/2022-11-30.xz | dd of=/dev/nvme0n1
  15. Make sure you aren't running a schedutil or ondemand (well, anything that isn't `performance`, really) CPU frequency scaling profile:

        cat /sys/devices/system/cpu/cpufreq/policy*/scaling_governor

     will show you what profile is currently running.
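     If it isn't `performance`, a quick way to switch it (as root; this won't persist across reboots, and assumes your cpufreq driver exposes that governor):

        for p in /sys/devices/system/cpu/cpufreq/policy*/scaling_governor; do
            echo performance > "$p"
        done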
  16. If you want to do it unmanaged as a cron job, maybe, but you may as well do it daily in that case. This is not 2012 and we are not using paludis; aside from GLSAs and packages I want to update, I only update when portage starts bitching, because it's the path of least resistance to fix one issue at a time. This leads to me updating 4-6 times a year. If you're not confident in your ability to bend portage to your will, well, that's one of the things crossdev and binhosts are for. You should rephrase that to "a server I can't update", but feel free to wrap it up as a stage 4 and send it to me, then I can update it for you and send it back...
  17. This especially is not true; I've updated Gentoo systems with over a year between syncs, and that was before you could do some "git-fu" and go all "wayback machine" on your Portage tree. Worst case scenario is you do a selective stage 3 over the top and trick portage into rebuilding everything, which is very hands-off from a "time spent hand-holding updates" point of view.
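     The "git-fu" in question is just pinning the tree to a date, something like the following, assuming your ::gentoo repo is synced via git (the path and date are examples):

        cd /var/db/repos/gentoo
        git checkout "$(git rev-list -n 1 --before='2023-06-01' HEAD)"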
  18. I've been doing it for over a decade, and the advances in:

     - AMD kernel drivers
     - Wine
     - Vulkan and translation layers*
     - The work done by Valve (re: Proton)

     have pushed gaming on linux from a few users, able to tweak and script, being able to run some popular games, to an almost blanket coverage where games that don't work are rarer than games that do. There has been a downtick in games that run native, but linux is a dynamic environment, constantly evolving, and thus a moving target for game devs; yeah, there are some LTS distro releases that become popular, but 90% of game playing users are going to switch to the latest LTS version only 2 years into its predecessor's 7 year release cycle. Steam has provided, with its packaged runtime and Proton, a much more stable (and easier to hit) target; whilst I might have philosophical objections to lazy packaging with things like Flatpak, when the end result is more games available to linux users at the click of a button, it's hard not to put those objections aside in light of the gained benefits.

     *Also, massive props to Philip Rebohle, whose work is so ubiquitous you probably don't even know his name...
  19. Debian has certain qualities to it that I gravitate towards too; I find "derivative" distros constraining in one way or another, to the point they are never worth it, and thus end up defaulting to the `root` distro they are derived from. As any RPM based distro's root would technically be Redhat (or Fedora), that's out of the question; if you fancy Arch then use Arch, but not some 'easy to use version'. I'm not saying Debian is hard to use, it just requires more hands-on knowledge than most of the derived offerings, and as such "man [problem software]" is already in your toolkit. I'm not convinced Arch offers the toolkit to mix and match "bleeding edge" and "stable" as well as Gentoo does, if for example you needed specific newer versions of things like mesa/vulkan to keep eking out the best from your new system. But this is kind of moot, as in 6 months it'll be in the Debian testing branch (which is more stable than most distros' stable branch, let's be honest about this). So what it boils down to is what you buy, and seeing as Debian is binary anyway, if you buy a new hard drive you can mess around with whatever you like on it, and just swap your current drive into the new machine for daily driving, allowing you to run something as stable as a 4 year old on crack for the other stuff without any real consequences.
  20. Privoxy does that sort of thing. https://www.privoxy.org/faq/configuration.html#WHITELISTS
  21. And it stayed on after the fall? The way to think about it is "what could have happened inside, due to that kind of force?". The most obvious things are anything with a direct PCB based interface (RAM, NVMe, wireless card etc); the next would be the flat connectors, where the shock could have "released" the clip that keeps them in place. My advice would be to take it apart as much as possible, check everything for physical damage, clean everything*, and reassemble, leaving out optional components (battery, wifi, etc).

     *How often have you read "I re-seated the card and it works fine now"?

     The yellow circle looks like the plug may be unseated. The red circle looks like a loose washer? (Both may just be artefacts of the angle though.)
  22. That just disables memory checking at boot time, more of a "user preference" than anything to worry about. Just have fun poking around in Mint, it's quite nice IMHO.
  23. Two potential issues:

     - Your virtualisation is switched off. Don't worry about changing it, just be aware it's off if you ever want to use VMM, and switch it on then (VMM works OTB in Mint).
     - S5 sleep state is off. It's kind of a corner use case in which people NEED it, but I find it useful.
  24. Just as a point of interest, when connecting two devices (as in your 5G router to the ER-X) where no other devices will ever be introduced, I use /30 network addresses to do so. In your example I'd assign 192.168.AAA.0/30 between these two, so:

        192.168.AAA.0 is the network address
        192.168.AAA.1 is the Modem address
        192.168.AAA.2 is the ER-X address
        192.168.AAA.3 is the broadcast address

     If you wanted to be at the other end of the /24 address range:

        192.168.AAA.252 is the network address
        192.168.AAA.253 is the Modem address
        192.168.AAA.254 is the ER-X address
        192.168.AAA.255 is the broadcast address
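     On a generic linux box the assignment would look like this, with AAA made concrete as a made-up 100 (router UIs will want the same thing expressed as netmask 255.255.255.252):

        ip addr add 192.168.100.1/30 dev eth0   # modem side
        ip addr add 192.168.100.2/30 dev eth0   # ER-X side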
  25. Like I said, it can be moot anyway. The only difference would be when you are masquerading/overloading: the router is "creating" DNAT/SNAT rules on the fly and remembering where each one "points to". For example (in a standard single router set-up), your phone sends an http request to google.com from ip 10.0.0.2:[some port]. This ends up on your router's LAN interface and it forwards the request via its WAN interface; this is when the SNAT rule is applied, so google gets a request from an address it can actually reach (re: not a private class a|b|c). When it comes back from google the DNAT rule starts playing a part: first the router looks up who the request is for in its "masquerade table", and DNAT is applied to the reply so it gets back to your phone. This needs to be "dynamic" because, if your PC was to make a request to google.com at the same time, we don't want to be sending the packets back to the wrong device. In your case the ER-X can't "send them back" to the wrong device, as it's always sending back to the gryphon router, which does the actual connection tracking/masquerading for the LAN network address(es).

     So if you consider that with a single static DNAT/SNAT pair of rules you can create a situation where every possible dynamic DNAT/SNAT pair is replicated without the overhead of looking up the right rule, you **might** experience lower latency << But this depends on how the UI manifests the rules you have asked to be created within the routing software (I would *expect* Ubiquiti to do it "properly", especially in the case of the ER-X, but they are (IME) one of these borderline mfgrs that sometimes leave me thinking "G*d d*mn Ubi, I thought you'd do that properly!"). If the ER-X UI made it difficult for *me* to enter the SNAT rule I wanted to enter, and I wasn't having latency issues, I'd probably be "F*ck you then!" and leave it masquerading (or look into its "DMZ" capabilities, 'cos I'm a stubborn b*stard). Similar principles apply to the NAT rules on the 5G modem, as it's always sending packets on the LAN side to the same address, the ER-X.

     Once your packets leave from your WAN address, all sorts of routing happens within your ISP's own network space before they reach the big bad internet space (where even more goes on), but as none of this is stateful/involves NAT contracking, the added latency is negligible. Ideally this is the kind of routing you would be doing within your own network, but so many consumer grade routers assume they are "the only router in the village" that you can't turn off NAT, as you have experienced.
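     For illustration only, a static 1:1 pair like that looks something like this in raw iptables (addresses and interface are made up; the ER-X UI has its own way of expressing the same thing):

        # everything from the gryphon router leaves as the ER-X WAN address
        iptables -t nat -A POSTROUTING -o eth0 -s 192.168.100.2 -j SNAT --to-source 203.0.113.5
        # everything arriving at the WAN address goes straight back to the gryphon router
        iptables -t nat -A PREROUTING -i eth0 -d 203.0.113.5 -j DNAT --to-destination 192.168.100.2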