Ralphred


Posts posted by Ralphred

  1. 47 minutes ago, Gat Pelsinger said:

    Do I install all of them?

    Yes - if you think something may be superfluous, configure it as a module; then you can check whether that module actually got loaded after booting and go from there (a quick check is sketched below).
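
    A minimal way to check, using a hypothetical module name (swap in the one you're unsure about):

      lsmod | grep -i some_module        # was it loaded at all?
      dmesg | grep -i some_module        # did it initialise, or complain?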

    47 minutes ago, Gat Pelsinger said:

    And for some, there are multiple drivers

    Yeah; once you open the configuration tree you'll often find that the "module in use" is a sub-option of the "driver" you'd have to enable to reach that option anyway.

    The only exception I've seen to this is when using vfio drivers, or when the "sub-option" is in a different branch of the config tree, like a disk controller driver being in one section and the SATA driver being elsewhere (this is just an example, don't go looking for it 🙂 )

    47 minutes ago, Gat Pelsinger said:

    I could not find some items like the "skl_uncore" which is listed as the first driver in use.

    Yeah; what we are relying on when searching is the normal practice of "the module will be called [some name]" appearing in its docs, or the [CONFIG_SYMBOL] being similar to its name.

    Quite a lot of stuff is enabled/disabled implicitly: if you pick a generic-sounding option and hit ? you'll get a rundown on that option, along with the things it will automatically turn on (normally dependencies it selects) and the things that will automatically disable it (options it has to be mutually exclusive with).

     

    If you get your file systems supported and make sure your GPU driver is "y" and not "m", you're in a position to stop guessing about the rest, as long as you turn off any "quiet" options on your kernel command line so you can see any hangs/errors as the kernel initialises (example below).
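
    For example, assuming GRUB (file paths vary slightly between distros):

      # /etc/default/grub - drop "quiet" (and "splash") so early errors stay visible
      GRUB_CMDLINE_LINUX_DEFAULT=""
      # then regenerate the config
      grub-mkconfig -o /boot/grub/grub.cfg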

     

    Anything you *need* (excepting proprietary NVIDIA drivers) being set to "y" instead of "m" will cut down on the amount of work udev has to do at boot time, as long as you copy the requisite options from modprobe.d/some_module.conf onto the kernel command line and build any firmware into the kernel (sketched below) - but you/we can revisit this once you get your custom kernel working at all.
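
    Roughly how that translates - the module, option and firmware names here are only placeholders:

      # a modprobe.d line like:
      #   options some_module some_option=1
      # becomes this on the kernel command line once the driver is built in ("y"):
      #   some_module.some_option=1
      # firmware can be built into the kernel with:
      #   CONFIG_EXTRA_FIRMWARE="vendor/some_firmware.bin"
      #   CONFIG_EXTRA_FIRMWARE_DIR="/lib/firmware"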

     

    The default for Gentoo (for about 20 years) was to configure your own kernel. With this in mind I'd suggest you read https://wiki.gentoo.org/wiki/Handbook:AMD64/Installation/Kernel#Alternative:_Manual_configuration - it's not a huge doc and will cover things I'll forget; just ignore the parts that are specific to that guide (like chroot, Gentoo options etc.) and it'll give you a good gist.

     

    EDIT: 

    47 minutes ago, Gat Pelsinger said:

    Is that still okay?

    Yeah, it's so OK I actually forgot to respond 😛

     

    It's worth noting that the "generic" kernels supplied with distros nowadays need a lot of support, in the way of an initrd and udev, so they can dynamically load all the drivers they need at boot time for any conceivable hardware combination. You are rejecting this paradigm so that your kernel only includes what is needed and includes it by default - none of my custom kernels even have module loading support; it's not needed in my case and, reading your lspci, not needed in yours unless you expect to be plugging in "foreign" USB devices as the laptop travels around - but again, a topic to revisit when you have it "working".

  2. 27 minutes ago, wasab said:

    the newest one unsurprisingly wouldnt boot and i had to go into recovery mode to revert the kernel back

    This is usually due to distro patches being applied to the vanilla source and some test cases falling through the net. I've never had vanilla sources fail when the issue wasn't somewhere outside of the kernel itself (normally PCI devices changing bus addresses between major versions or modular/static kernel configs invalidating other system config files).

  3. 5 hours ago, Gat Pelsinger said:

    but when I selected my custom kernel, it only outputted the loading text and then the cursor kept blinking (perhaps not even blinking and froze, I don't remember). I waited a while but the boot process didn't advance.

    Sounds like the GPU driver was missing.

    5 hours ago, Gat Pelsinger said:

    make -j8

    This is the only time you have to use -jX; it isn't really going to be of any benefit at other times.

     

    The easiest way to make sure all the drivers you need are included is to take the module names from the "Kernel driver in use: [name]" lines in lspci -k output. You can then use / to search for those names in menuconfig - the results will point you to the option you need to turn on (see the sketch below).
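
    Roughly, with i915 standing in as an example for whatever driver names your own lspci -k shows:

      lspci -k | grep "Kernel driver in use"    # list the driver names you actually need
      # then inside menuconfig:
      #   press /  and type a name, e.g. i915
      #   each result shows the CONFIG_ symbol plus its location in the config tree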

    When rebuilding, unless something in your toolchain has changed or there was an error, you only need to re-run make -j8 and install the kernel and modules; it automatically skips whatever your config changes didn't touch, so subsequent builds after adding a single module/driver are much quicker.
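
    In command form that's roughly this, run from the kernel source tree (paths assume x86):

      make -j8                # only rebuilds what the .config change touched
      make modules_install    # for anything still built as "m"
      make install            # or copy arch/x86/boot/bzImage to /boot yourself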

     

    Other kernel building tips (example commands after the list):

    • Enable "CONFIG_IKCONFIG" (and CONFIG_IKCONFIG_PROC) - it's useful to be able to copy a "working" .config on the fly from /proc/config.gz.
    • Always run grub-mkconfig -o grub.cfg-[kernel name] BEFORE running grub-mkconfig -o grub.cfg; this will give you backups of known-working grub.cfg files later down the line.
    • Unless some specific thing has been added to the kernel that you want, make oldconfig (after copying the .config from /proc/config.gz into the new source tree) will nearly always suffice for a "new kernel" when one comes along.
    • When you copy bzImage, add a meaningful identifier; grub will still find it and you'll have "redundancy" options if you mess something up.
    • Back up your .config files to /usr/src/config-x.y.z-[name]; that way you can keep your "work" and still delete old kernel source trees.
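
    As a rough worked example of the above (the version and "custom" name are placeholders, paths assume a typical x86/GRUB setup):

      zcat /proc/config.gz > /usr/src/linux-x.y.z/.config    # needs IKCONFIG_PROC in the running kernel
      cd /usr/src/linux-x.y.z && make oldconfig              # only asks about new options
      make -j8
      cp arch/x86/boot/bzImage /boot/vmlinuz-x.y.z-custom    # meaningful identifier
      grub-mkconfig -o /boot/grub/grub.cfg-x.y.z-custom      # backup copy first...
      grub-mkconfig -o /boot/grub/grub.cfg                   # ...then the live one
      cp .config /usr/src/config-x.y.z-custom                # keep your "work"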
  4. A quick look at the script and associated docs leads me to believe you are trying to perform the equivalent of "running nitrous through an engine because I can".

    12 minutes ago, Gat Pelsinger said:

    The deep firmware will not let anything bypass it.

    What are the conditions you are creating to force the firmware to behave in such a way that it would "utilise the maximum amount of latitude regarding consumption available"?

    Do you monitor any per-core frequency or power data?

    You understand that a 10210U is 10th gen and not of the Alder Lake+ "permanent PL2" family?

    You understand that pre-Alder Lake CPUs allow the use of the PL2 state (per-core) without reporting it?

    Are you able to observe the PL3/PL4 states actually being reached?
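
    One way to answer the per-core frequency/power questions above is turbostat (shipped with the kernel tools); a minimal invocation, with the caveat that column names can vary between versions:

      turbostat --interval 1 --quiet    # run as root; Bzy_MHz = per-core frequency, PkgWatt = package power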

     

    The distro (or OS) you use to test this is arbitrary; what's important is the kernel. If you've let someone else configure it then you have nothing to complain about; if you've configured and instructed it yourself (a Linux kernel) and the observed behaviour doesn't match the documentation, then you should be filing bug reports with Intel - they wrote the driver, after all.

  5. On 3/14/2024 at 12:30 PM, CosmicEmotion said:

    To me, the concept of "fighting with your system" has been absolutely reversed in the time I was gone.

    Personally, I've used *nix derivatives for nearly 30 years, Windows for almost as long. The worst thing to happen to Windows (IMHO) was the loss of the ability to close it between Win3.x and Win98. Since that time the expectation for an OS to do 'the sort of thing I expect you to do out of the box' has increased beyond Moore's law to the level of a 90's cellphone (read: work without crashing, rebooting or user input of any kind, ever*, and on all possible hardware combinations. *Including at the time of install.). This obvious nonsense is pushed by the tech illiterate and tinkerers with self-aggrandisement issues.

     

    Well, you know what 'they' say: "Make something idiot-proof, and they'll just breed a new class of idiot!" - this has been the limiting force in all of Linux's "technological development" for far too long, and they keep finding new classes of idiot (re: people who still use GNOME after 2009). If you aren't an idiot then your raft of choices is becoming smaller, not larger - this is antithetical to both freedom of choice and technological progression, but midwits are gonna midwit and *nix became their target. One of the biggest failings of *buntu was not containing them, because I honestly fail to see any other reason for it to exist than as a containment distro for people not smart enough to use Debian effectively....

  6. 8 hours ago, Gat Pelsinger said:

    I am on Debian and I am looking for a good hardware monitor. Hwinfo is only command line for some reason and I want a nice GUI with many features. I want to be able to monitor power usage of all the components, temps, clock speeds, etc in realtime.

    Conky is for displaying whatever you want as a desktop overlay widget kind of thing.

    Other than that, KDE has systemmonitor - similar to the Windows Task Manager - and GNOME has something similar too, IIRC.

     

    Anything you can get in a terminal you can make conky show, if you are just doing diagnostics (a few terminal examples below).
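
    For instance, much of the raw data you mentioned is already there to be scraped (the battery path is a typical example and varies per machine):

      sensors                                        # temperatures, fan speeds (lm-sensors)
      grep "MHz" /proc/cpuinfo                       # per-core clock speeds
      cat /sys/class/power_supply/BAT0/power_now     # power draw in microwatts on many laptops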

  7. 13 hours ago, Kriz said:

    Damn if only Microsoft stuck to making the same operating system they were making in the 90's I wouldn't think about alternatives but things such as ads in OS, telemetry, bloat and poor updates not to mention changes to desktop environment you can't op out of are getting worse and worse.

    Look up the "Windows AME project". It used to be against the rules to discuss it here because they sidestepped activating Windows by supplying an ISO, but they just provide scripts to turn Windows into "Windows light" now. I can run it in a VM (with GPU passthrough) and the overhead of Linux plus WinAME is lower than native Windows, and it performs quicker. I don't know what the full experience is like, but it runs my Windows-only games (the very few I have now) fine.

  8. 2 hours ago, Vecna said:

    How do you feel about Nobara?

    A quick read of their page makes it sound like "matured Fedora". One reference that stuck out was the EAC/glibc configuration issue; to my mind it means they have their finger at least close to the pulse of the gaming community, but whether the Red Hat devs (ultimately responsible for their source) keep cutting their work out for them is another question.

    I saw nothing on their page that would make me say "kill it with fire", but from the point of view of an OS "doing what it's told to and nothing more", my guess is that its footprint **might** be larger than necessary.

    My advice: try it - if it doesn't tick all your boxes, kick it to the curb and move on to something else.

     

    Generally I'd say find your 3~4 top picks, try each of them, and assess them "out of the box". If nothing fits the bill, install Arch or Gentoo and force it to do so...

  9. 32 minutes ago, wasab said:

    I feel it is a pretty sad state of affairs that games meant to run on Linux run worse or not run at all vs games developed for windows running under a compatibility layer.

    I 'feel' the same, but we can't have our Linux OSes bending to our every whim AND expect studios to support them the same as a pseudo-static target like Windows.

    Just step back and take the wider view: the Steam Deck and Valve have propelled Wine development to the point where it's mostly seamless, and there are more gamers using Linux now - for those of us who have been doing this a long time, the money from those 'new Linux gamers' is feeding development and making our lives much, MUCH easier.

  10. 47 minutes ago, Vecna said:

    why do people censor ubuntu?

    Ubuntu used to ship (less so today) many different versions depending on which desktop it supported out of the box. The point of the * is not censorship but to glob, so it means {ubuntu, kubuntu, xubuntu, lubuntu} - games don't care what DE you use; they are more interested in your core libraries and graphics stack.

    It's to avoid the "People said to use Ubuntu, but I want KDE/XFCE?" confusion.

  11. 21 minutes ago, wasab said:

    Feral Interactive

    I have some of their stuff, and it worked fine at the time of purchase - all credit to them. To be fair, you can't expect them to support every distro dev's patch du jour across all time; there needs to be a static target.

    The ones that no longer work: I CBA diagnosing that; ticking the box next to an "era-appropriate" Proton version is much easier than working out what changed in my OS since 2015...

  12. On 2/26/2024 at 7:02 AM, wasab said:

    everything less popular than Ubuntu is out

    When you consider that the steam-runtime is based on Ubuntu libraries, *buntu isn't such a bad choice.

     

    On 2/26/2024 at 8:48 AM, Vecna said:

    arch is out, gentoo is out

    As far as I know these are the only two distros that let you mix and match stable, testing and alpha-quality packages out of the box, but that requirement only seems necessary for the first 6 months of "newborn hardware"; after that, stable drivers are fine.

  13. This is what the "database server" use case is designed for; whilst I understand your hesitancy, it's only really warranted for Access/MYSQLite style databases, and any "auditing" you do needs to recognise that you are working on a snapshot, so "auditing adjustments" should be relative and not absolute.

  14. 5 minutes ago, RONOTHAN## said:

    What they meant was if you bought a kit as a pair and went to RMA it to the manufacturer, you would only do one stick

    No, they are "matched pairs"; you have to RMA the whole part (i.e. the pair), not just half of it: there is no way to match one stick to its "other half" without the other being present to read its tolerances - this is how they can sell devices that are "meant to be overclocked".

    If you look for a "slower" part number, it came from the same source, on the same line; it just didn't test as well, so it gets sold cheaper as a slower version. Mass production of solid-state electronics has always been like this: a 1N4006 is just a 1N4007 that failed 1000 V testing but passed 800 V testing, etc.
