Everything posted by Wild Penquin

  1. I agree with others in this thread that in OP's case, the best thing to do is to take it to a specialist. The data is too valuable to be lost, and it is probably still there. It might not even cost that much, since the recovery might be trivial for a specialist. However, for others in the same boat (who might wander into this thread), YMMV. If the data is not valuable enough to take to a specialist, one might attempt recovery oneself. There are already good tips in this thread. I'd go about it like this:

Preparation: Calm down. Don't do what OP did (panic, yank out drives etc.). Remember: no data will be lost as long as you don't do anything. Don't rush. Keep this mindset throughout the recovery. Read the documentation of the tools you plan on using beforehand. There are many alternatives; I will describe one way, the one I would use (since I am familiar with these tools).

Actual recovery (with any Linux system and GNU tools):

1. Get another drive with more free space than the total size of the disk you need to recover.
2. Set up a Linux system so that you can write to this new spare drive. Make sure all utilities mentioned in the steps below are available.
3. Make sure you know what you are doing before proceeding (up until this point you have not touched the drive to be recovered!).
4. Connect the drive to be recovered to the system.
5. Make an image of the drive (make sure you will not write to it while doing this!). In this case you can use dd. If there is physical damage (bad sectors) you might want to use ddrescue instead. Be prepared to say goodbye to the data in case of bad sectors; there is only so much ddrescue can do, and attempting reads might degrade the drive further (*). For a drive without physical damage, reading it is safe - messing up the parameters of dd is not! (See the command sketch at the end of this post.)
6. Do the data recovery on the image. Start by recovering the partitions. GNU parted has a rescue command for finding lost partitions, which is helpful if you have even a vague idea where the partition(s) used to be.
7. Mount the partitions from the image read-only. See if your data is in there. If you can find it, copy it somewhere safe. If you cannot mount the partitions, parted has misdetected them - or (worse) someone or something has written over the data. Try again from step 6 with different values.
8. If you cannot recover the data - remember, you still have the original drive untouched (since the mishap occurred) - you can now change your mind and take it to a specialist!
9. Once you have your data copied, the recovery is complete! In the best case, you now have the partition values which you can recreate on the original drive, too (since you don't need it anymore, as the data is safe!). Best-case scenario, you can just boot off it like nothing happened!

The main point was already said in this thread: don't write to the drive to be recovered! Instead, make an image and work on the image. This can be achieved with many tools.

Backups: I might sound like a preacher here, but really, really do make backups. I know, it is boring. But strong anecdotal evidence tells me OP is not the only one who has had a similar loss of data (or, hopefully for OP, near loss). Too many people have lost all their family photos/videos taken over many decades, and people have even lost the only copy of the thesis (or similar) they have been working on for a few years (hope it is not CS, that would be too embarrassing)! Cheers!

*) p.s. A small (but important) side note: some people recommend that for disks with known bad sectors, or known to be physically failing, making an image is not the best approach. What is often recommended instead is to mount the filesystem read-only (on a non-invasive OS which does not "read the disk behind your back"; a minimal live Linux will do), copy over the most important data first and then, little by little, the less important data - and after that, perhaps as a last step, try reading off an image. Reading a whole image stresses a failing drive, and most of the data on the disk is usually not that important (free space, the operating system etc.); the most important data might in the best case be only a few megabytes, such as the thesis you've been working on.
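For reference, a minimal command sketch of steps 5-7 above. This is only a sketch under assumptions: the device name /dev/sdX, the spare drive mount point /mnt/spare and the partition start sector 2048 are placeholders you must replace with your own values, and you should double-check the if=/of= direction of dd before running anything:

    # 5. image the drive to be recovered onto the spare drive
    dd if=/dev/sdX of=/mnt/spare/disk.img bs=4M conv=noerror,sync status=progress
    # ...or, if there are bad sectors, ddrescue with a mapfile so it can resume:
    ddrescue -d /dev/sdX /mnt/spare/disk.img /mnt/spare/disk.map

    # 6. look for lost partitions on the image (parted's rescue wants a rough start and end)
    parted /mnt/spare/disk.img unit s print free
    parted /mnt/spare/disk.img rescue 2048s 976773167s

    # 7. mount a found partition read-only (offset = start sector * sector size)
    mkdir -p /mnt/recovered
    mount -o ro,loop,offset=$((2048*512)) /mnt/spare/disk.img /mnt/recovered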
  2. How do you know this (added emphasis to make my point clear)? There is also a lot of other information, but I have no idea what you have based it on, since there are no logs or other information. Post the output of `lspci` and - most importantly - the Xorg.0.log, and perhaps even the kernel log from a boot (`journalctl -kb`) (use pastebin or something for very long logs). Also, `ls -l /dev/dri` can be used to determine which GPUs the kernel has found. My guess would be that X.org does not handle several graphics cards too well by default. This might even be by design, since a GPU could be used for many things (computing; separate X.org sessions), and unless the user tells it otherwise, it will choose only one to use. Most (all?) GUI monitor settings in DEs do not configure X.org itself that much, only the outputs (of the GPUs the X.org session has in use). The information is probably buried somewhere in the X.org documentation, but I'd guess there is little information online, since running a single GUI with many graphics cards (in a non-hybrid-graphics session with separate monitors on each) is a bit niche. The Arch wiki has some examples of what to configure for multi-GPU setups (X.org is mostly uniform across distributions, so they should work in any, but check your distribution's configuration examples); see the sketch below for the kind of configuration I mean. Anyway, I'd do the above steps first to determine what is going on. Xinerama is an old, legacy way of handling multi-monitor setups and does not in any way help with several GPUs. AMDGPU PRO only has advantages for computing (OpenCL) users; for normal GUI users the open driver is better (but I could be wrong - doesn't hurt to try).
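As a sketch only (not a drop-in config): the classic "one X screen per GPU" style of configuration the Arch wiki describes looks roughly like this. The file name, identifiers, driver names and BusIDs below are made-up examples; take the real BusIDs from `lspci`, and the right driver depends on your hardware:

    # /etc/X11/xorg.conf.d/10-multigpu.conf  (hypothetical file name)
    Section "Device"
        Identifier "GPU0"
        Driver     "amdgpu"
        BusID      "PCI:1:0:0"    # from: lspci | grep -i vga
    EndSection

    Section "Device"
        Identifier "GPU1"
        Driver     "amdgpu"
        BusID      "PCI:2:0:0"
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device     "GPU0"
    EndSection

    Section "Screen"
        Identifier "Screen1"
        Device     "GPU1"
    EndSection

    Section "ServerLayout"
        Identifier "Layout0"
        Screen 0 "Screen0"
        Screen 1 "Screen1" RightOf "Screen0"
    EndSection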
  3. 1. The output of the fsck command would be useful (but obviously it seems you didn't save it). 2. Also, check the journal for any errors before / around / after the failure. Otherwise, it is guesswork. I would not use a USB adapter for any permanent use, though - it is too likely a point of failure, and usually an unnecessary one.
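For the journal check, something along these lines should do (assuming the systemd journal is in use; the second command only works if persistent journaling is enabled, and the grep pattern is just a rough example):

    journalctl -k -p err -b                              # kernel messages of error priority or worse, current boot
    journalctl -b -1 | grep -iE 'ata|usb|error|fail'     # previous boot, roughly filtered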
  4. The main target of most (all?) game publishers targeting Linux is still Ubuntu. It makes little sense to install any other kind of distribution if the main goal is gaming. I must admit the bleeding edge of Arch Linux (or a derivative) is enticing. However, bleeding edge comes with a cost; it might sometimes break or be unstable because not that much testing has taken place (yet). I don't mind, since I know my way around a Linux environment. I would not recommend an Arch-based distribution for a beginner in general (however, if one is prepared for breakage, knows what that might entail, and is not afraid to read documentation - then go ahead with it). Manjaro is a nice idea, but since it is a much smaller undertaking than an Ubuntu-based distribution (or Arch), it has less QC, which means some more rough edges; or at least that was my experience the last time I tried. If one wants more stability than pure Arch, then I'd say Manjaro is not a good alternative to some Ubuntu flavor. Things might have changed, though - there was nothing majorly broken which couldn't be fixed, but their customization and differences from Arch brought more problems (= time needed to fix things) than benefits for me. Manjaro deviates too much from Arch to "feel like home", and is in an "uncanny valley" between stability and bleeding edge, not good at either (sadly, before you ask - I don't remember specifically what was broken in Manjaro the last time I tried, so take this all with a grain of salt!). Another reason I'd like to use an Arch-based distribution is the AUR; however, Manjaro is not in sync with Arch, which means packages in the AUR will sometimes not work without intervention. But YMMV - you might not care about the AUR, and/or you might be willing to adjust the PKGBUILDs when needed or make your own! (Typing this, I'm starting to think about giving Manjaro another shot!) For a workhorse / gaming / any Linux installation which "should just work" (i.e. I'm not interested in tinkering with it), I'd choose a non-rolling, mainstream release distribution (Ubuntu).
  5. Most (all?) Linux distributions can resize NTFS partitions. This means your data will not be gone, even if installing on the same disk (contrary to what some people claim in this thread). However, do make backups of all the important data on your Windows drive before installation. Any partition resize operation is dangerous in principle (for example, a power outage at the wrong time means the file system being operated on will be corrupted, possibly beyond repair). Installing alongside Windows can be confusing for someone not accustomed to partitioning - and, frankly, I've seen some quite confusing user interface choices in some distribution installers, especially in the partitioning phase. So one could erase their Windows partition during installation by mistake (even though that is strictly unnecessary and the installer offers another choice).
  6. I didn't notice OP hasn't built their system yet. I'd indeed take an AMD GPU over an NVidia one given the choice; however, the NVidia drivers are not necessarily bad either. They are more likely to interfere with things like hibernation, so with a laptop it is doubly important to try to steer towards an AMD GPU. Things used to be the other way around (going back to the fglrx days), but things have changed! Still, the NVidia drivers are usable. Also, this may seem nit-picky, but that statement is untrue (emphasis mine). Both AMD and NVidia drivers are "built into the OS" on exactly the same level. The difference is that the NVidia drivers are closed source and the AMD drivers are open source (the desktop-oriented version; there's also the CUDA/enterprise/computing-oriented version which is partly closed source). In practice, this means two things. First, the users are at the mercy of NVidia regarding how they develop their drivers and what they prioritize, and their co-operation with the Linux kernel hasn't been that good AFAIK... The AMD drivers clearly seem to have a better development model, and they've benefited from the fact that their driver is open sourced. Get a deal-breaker bug with NVidia? Tough luck - you can try to post on their forum and **hope**! Got a deal-breaker bug with AMD? Provided you know how to make a proper bug report, you can file one with your distribution or on the upstream bugzilla, and it will gain attention unless the bug is really, really rare and obscure! If it is a common one, someone else will report the bug! EDIT: Got some other bug in the kernel, and using NVidia? The first thing they will tell you is to disable the out-of-tree NVidia driver, since it "taints" the kernel, and then reproduce the bug! The second thing, from the user's point of view, is that the NVidia driver needs to be downloaded / enabled separately (on most, but not all, distributions), but the core reason for that is the licensing. From a software point of view, it is a driver (or several components of the stack) exactly like the AMD driver is.
  7. It really boils down to what you were not satisfied with. What are you going to do with it? Any distribution can be thinned down (shut down unnecessary services), but it really doesn't matter that much. While 8GB of RAM might be a bit little these days, the background services don't consume that many resources or CPU cycles after all. Arch is good in that it doesn't install anything extra, only the stuff you want. This also means it will be more difficult; it assumes the user needs nothing, and you will need to install and configure a lot of stuff which might be there by default on some other distribution. As such, Arch will also require you to be able to read its documentation (having a general feel for how a Linux system works, with GNU tools and the command line, helps). It's really nothing too special in terms of resource usage. I assume you had problems with the DE? For a desktop user, the choice of DE matters more than the distribution. If it was lack of RAM, then the mentioned LXQt and LXDE are not nearly as lightweight, relative to mainstream DEs, as they used to be; for example, KDE Plasma has had some major reworking in recent versions to reduce RAM use. If you need something really lightweight, try a tiling window manager (instead of a full-blown DE) such as i3 - it won't get much lighter than that (short of running bare X.org), but be prepared to have a keyboard cheat sheet ready for its shortcuts, to hand-edit the configuration files, and to shift your mindset to a tiling window manager.
  8. You didn't tell us why you want / need to change to Linux - and as such @Dat Guy posed an important question! For your priorities, it seems the best option is to stick with Windows. However, that doesn't mean you don't have a reason you didn't tell us (it's your choice after all, you don't need to tell us =) ). If your mind is set, be aware there might be things you will need to give up. Any distro will do, but as you are new to Linux distributions, choose a mainstream one, like some flavor of Ubuntu. As long as you don't choose a server- or stability-oriented distribution (which means lagging behind in gaming, multimedia and desktop-oriented updates in favor of server-grade stability), your gaming experience will be as good as it gets. Indeed, anti-cheat will cause problems, along with some other incompatibilities with Windows games. Sometimes you still need to choose to play games which work on Linux, and have a good reason to use Linux - if you cannot change the games you play, stick to Windows. PopOS! is often recommended here, and as such I suppose it is good (I haven't tried it myself). I'm not sure why PopOS is considered the easiest - from a quick glance, it seems the only thing separating it from other desktop-oriented distributions is that it enables the proprietary NVidia drivers by default (which is almost mandatory for gaming). But installing the proprietary drivers is a matter of a few mouse clicks via the GUI in most mainstream distributions, and with an AMD GPU this is not an issue (the drivers - for gaming/desktop users - are already in the kernel). For the general desktop experience, the most important thing is the choice of DE (not the distribution per se!). Many distributions use Gnome, which I really dislike (it is too basic and dumbed-down, if you will, for my taste), but the choice of DE really boils down to user preference. I encourage you to try out several! You can install several desktop environments in practically any distribution out there and change the one you use quite easily. I'm really set on using KDE Plasma (more suitable for a power user IMO, more Windows-like than Gnome), but I have tried several DEs over the years. I also like i3 for its simplicity, but remember you will really need to change your orientation towards the UI, as there will be some paradigm shifts if you go to some less-mainstream DEs. As for video editing: I don't do any video editing, but the last time I checked, video editing applications are far fewer on Linux than on Windows. Some proprietary editors do have a Linux version IIRC, while some don't; Sauron claimed your editors don't work in Linux. As a result, there are very few people doing any serious video editing on Linux AFAIK (it's a chicken-and-egg problem - there will be no good editors as long as there are no users). Some FOSS solutions exist, but they are often buggy (community-driven, with development often stalled) and / or have a sub-optimal user experience because of a badly designed UI. The FOSS solutions might not be up to professional work (but OK for occasional, say a few times a year, home video editing etc.). Blender is often cited as the best FOSS editor, despite being a 3D computer graphics software suite! In general, it absolutely makes sense to check before switching whether the applications you absolutely need work in Linux, and if they don't, whether you can find an alternative which does. The Wine AppDB is handy if you want to check Wine compatibility, and for games, add ProtonDB to the list.
If it looks bad, then you might consider not switching, or dual-booting instead.
  9. It's been a few decades since I used Windows, but typically for a shared printer you don't install drivers on the client. CUPS has the drivers and handles the printer for the client, and shares the printer (as a generic printer with advertised features, such as color/BW, duplex, paper sizes etc.). But as someone who has abandoned Windows, I really can't help on the Windows side, nor can I vouch that your CUPS configuration is correct - though from a quick glance it seems your CUPS is configured correctly. What I would do is remove all printers from Windows, and try to re-add the printer as a network printer. Do not choose a printer model to add (that would assume a local printer, or a non-managed printer connected straight to the LAN, without a server such as CUPS in between).
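As a rough sketch of what I mean, on the CUPS server side (the queue name MyPrinter is just a placeholder):

    lpadmin -p MyPrinter -o printer-is-shared=true    # make sure the queue is shared
    cupsctl --share-printers --remote-any             # allow access from the LAN
    lpstat -p -d                                      # sanity check: the queue should be listed and enabled

On the Windows side you would then add a network printer by its URL, something like http://server-address:631/printers/MyPrinter, rather than picking a model from the list.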
  10. Well, why not just try it out and see if it works? Note: the keyword here is not "internal" or "external", but "removable" vs. "non-removable". Some BIOSes might not allow booting from all (USB) drives, though. It really depends on the UEFI implementation of the laptop's firmware, on how the bootloader was configured by the OS you've installed, and on what the default EFI binary on the external drive's EFI partition is. Typically, internal drives are configured slightly differently than external (removable) ones - however, most bootloaders on a typical Linux distribution will install themselves into the default bootloader path (too) if nothing exists there yet, and should work once the drive starts being handled as a removable one. For external (= removable) drives, it makes no sense to make a UEFI entry in the NVRAM (obviously, since the drive is removable and could be used in any computer, and OTOH there is no guarantee it will be present on the next boot), and this is really the difference from non-removable ones. So the only EFI loader usable on removable drives is the default one (it could be a boot manager which can start other EFI binaries). If your OS bootloader has installed itself as the default, it should work from an external drive. If it hasn't, then what typically needs to be done is to boot somehow into a Linux distribution and 1) make the NVRAM entries with efibootmgr (or similar) or 2) re-install the whole bootloader. If you do plan to actually remove the drive, you should install the bootloader as the default EFI partition entry. If you are booting Windows, you are probably better off using its tools. I'm not sure if Windows can be installed on a removable drive easily, but I guess it should be possible. But based on the information you have given, it is impossible to give any definite answer for your case specifically. I can't actually make out which OS you are trying to boot from the external drive. I.e. is it the original OS (Windows?) your laptop shipped with, some other Linux installation, or the Pop OS installation you are trying to boot?
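If it turns out the bootloader is not in the default (removable) path, a hedged example of the two fixes I mean, run from a Linux shell on that installation (device names, labels and paths here are examples, not your actual values):

    # install GRUB into the default/removable EFI path (\EFI\BOOT\BOOTX64.EFI) as well
    grub-install --target=x86_64-efi --efi-directory=/boot/efi --removable
    # ...or add an NVRAM boot entry instead (only useful if the drive stays in this machine)
    efibootmgr -c -d /dev/sdX -p 1 -L "MyLinux" -l '\EFI\mylinux\grubx64.efi'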
  11. The answer is the same as for the swap question: there is no one-size-fits-all answer. It depends on which applications you run, and how many of them at the same time. If you use regular office applications, a web browser and games, then no (as a rule of thumb). If you do video editing or CAD, run IDEs or have a home server in addition to all of the above... then the answer is maybe. Look at your RAM usage in the worst-case scenario you find likely. That should be enough to give an idea of whether adding more RAM is worth it.
  12. Good comments here, especially the one by @LloydLynx. As a rule of thumb, don't worry too much about swap (whether you have it or not). It doesn't make a lot of difference these days for most users. It never hurts to have swap, though. If you often leave idle processes running, you will get more data cached in RAM by having swap. There is no one-size-fits-all answer, but it could be roughly the size of RAM used by idle processes on your system, or, if you need suspend-to-disk, the size needed for suspending. As for wearing down SSDs: this is really not a problem and hasn't been for a while (a decade?). Only the very first generations of SSDs could be worn out in a typical home/office use scenario; these days, even if one were to thrash the disk with constant small writes, it will probably become obsolete for other reasons before it starts to fail because of too many writes. The cells are more durable and wear leveling is more intelligent than it used to be.
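If you want to check where you stand, a quick look with the standard tools is enough (nothing to adapt here):

    free -h           # RAM and swap totals and current usage
    swapon --show     # active swap devices/files and their sizes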
  13. It was (and is) very easy to invent Windows 95 product keys, as the formula (checksum bits) is very, very weak. I don't actually remember the details, but I've seen it in a YT video or something. Later Windows versions (up to but not including WinXP) had somewhat stronger registration numbers, but it was still not too difficult to invent "bogus" ones. I suppose they only functioned to deter piracy by average users (during a time when not everything was online), but those hacky enough were going to pirate their Windows anyway if they wanted to. Even a strong product key has limited effect if there is no online activation, and conversely, it is difficult to make a functional (non-hackable) scheme which does not have some online component.
  14. Yes, there is. Many good suggestions already. You most probably already know you can do anything you can do from the command line over SSH. Screen or tmux (or some alternative) are immensely useful for leaving stuff running without needing to be connected all the time. But good old X11 forwarding was not mentioned yet. It's basically the same thing as RDP AFAIK, but for the X11 protocol / X.Org (I've never used RDP, and only briefly tested X11 forwarding "just because" - I didn't really need it, but it does indeed work).
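A minimal sketch of what that looks like in practice (user@host and the session name are placeholders; X11 forwarding also requires X11Forwarding to be enabled in the server's sshd_config):

    ssh -X user@host          # forward X11; GUI programs started in this session display locally
    tmux new -s mysession     # on the server: keep long-running jobs alive after you disconnect
    # later: reattach with  tmux attach -t mysession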
  15. Which features specifically do you need to adjust? Pardon my ignorance, but as someone who abandoned Windows years ago, I have no idea what AMD Radeon Software does in Windows. In GNU+Linux, things are done a bit differently than you might expect. The general philosophy is that any piece of software should do one thing and do it well. "Bloat" in software is discouraged. This philosophy is strongly favored in the core components every system might need (in applications, such as games, office applications or media players etc., more bloat might be tolerated). As a result, any GPU driver does just that - it's a GPU driver. Stuff which is (or should be) common to all display adapters or a GUI is done "on top" of the driver, in the GUI stack. For example, display-related settings are handled by the desktop environment (or the frontend, if one exists, is in the desktop environment). The state of these frontends may vary, as there is no central authority, APIs are in constant flux, there might be several standards (X.org and Wayland) etc. (there is a contrast here, stemming from the development model, compared to Windows or MacOS). Most things are done (or could be done) by simple command line tools "under the hood" (these tools still talk to the APIs, and many frontends use these command line tools, even if talking to the API directly would be more "correct"). As a result, you often need to resort to the command line or a third-party frontend, which might not be as polished as you might be accustomed to on Windows. If it is an overclocking frontend you are looking for, check out TuxClocker or CoreCtrl.
  16. If I needed to do this without any kind of boot device besides the one in the computer, I would look into how to install some boot manager or loader from within Windows - assuming OP is running Windows (10?). OP never answered the really relevant questions posed by @cretsiah previously. If I could manage to install, say, grub2 from Windows, then I could probably boot most mainstream distributions' ISOs straight from the HDD. Grub2 might not (or might!) be able to load them from NTFS, but resize the EFI partition to be large enough to hold an ISO and you are good to go. (EDIT: actually, it would be the Linux kernel which needs to be able to read the ISO from the NTFS, but the same thing applies; it is more likely to understand FAT32 than NTFS.) Alternatively, I would look into whether the Windows boot menu can boot an ISO somehow. It is probably more worthwhile to go through the grub2 route, since it is a good exercise and grub2 is so commonly in use (so it is not a futile exercise, either). Also, I would be surprised if Windows can load an ISO of another bootable OS (which means: efforts to achieve that are probably a dead end). But really, it is easiest to get a thumb drive. They're cheap, they have many uses, and unless the computer is ancient, any computer has USB these days. Or even a burnable DVD would do, and it's not like they cost a fortune (though, after installation it is a coaster, unless it is a re-writable one). But, say, if I was somewhere far, far away from any shop, and I needed to install it today, then... EDIT: Be careful when installing grub, since if you don't have a valid Windows boot entry in it, your system will be non-bootable, unless you are using UEFI (most modern computers are). But even then it is better to be safe than sorry and have some recovery way to boot Windows, i.e. another boot medium... be careful when messing with the boot setup! Overall, what OP is asking is not a task for someone who is a beginner / newbie with boot setups, and in any case it is only something which can be helpful in a pinch / corner case.
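For the record, the "boot an ISO straight from the disk with grub2" trick looks roughly like this. It's only a sketch: the ISO filename is a placeholder, and the kernel/initrd paths and boot parameters below are the commonly documented Ubuntu/casper variant - other distributions use different paths and options:

    # a custom menuentry, e.g. appended to /etc/grub.d/40_custom on an installed grub2
    menuentry "Ubuntu live ISO" {
        set isofile="/boot/ubuntu-22.04-desktop-amd64.iso"
        loopback loop $isofile
        linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile quiet splash
        initrd (loop)/casper/initrd
    }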
  17. Short answer: no. Longer answer: if developers were to put similar effort into developing and optimizing native games for Linux, then yes, such a game might outperform a Windows port with a similar investment of development resources. However, I believe the difference will always be negligible. Linux might traditionally outperform on some data-/server-oriented workloads, but not on the desktop. For desktop usage and optimization, Windows has had a couple of decades of head start in development. However, the development model of the core OS components might enable better performance for Linux, well, in principle - this has already happened in non-desktop usage, and might happen on the desktop, too. But TBH, modern Windows is already quite well optimized (IMHO the problems I personally see with Windows are the unwanted telemetry, the forced bundling of stuff such as a store etc., not so much the optimization of the OS). And for any single game, it becomes a lot more complicated than just the optimization of the OS and the GUI stack. In any case, Linux is not a magic wand which will make computers perform better, if that is what OP has in mind. It can be more easily tailored for specific (non-desktop) tasks than Windows. Linux + GNU tools and X.org is not, and never was meant to be, a gamer-oriented OS to begin with. Gamer-oriented setups are only now starting to become feasible, as a kind of side effort / by-product.
  18. There are many ways to run stuff at startup in Linux distributions. The already-mentioned crontab and systemd work fine. A third option is to run stuff after the user has logged in, via the DE. You didn't mention your distribution or your DE, so we cannot know what this means: i.e. "startup programs" might mean many things, depending on the DE! Which one to choose depends on at what stage you want the command to run (in this case: when to switch on the LEDs). If you want to turn them on whenever the RPi is powered, use a system-wide systemd unit, or root's crontab with the @reboot keyword (see the sketch below). If you want to tie it to a user logging in, use a user-specific systemd unit file (systemctl --user enable ...). If you want to run it when a user has logged in to the GUI, then use the GUI's facilities. For this last approach, if there is a ~/.config/autostart-scripts folder, you might try putting your scripts in there. It is a bit non-standard, so many DEs might not look into that folder; this really depends on your DE and how you start it. Even .xinitrc might work.
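As a concrete sketch of the first two options - the script path /usr/local/bin/leds-on.sh is a placeholder for whatever command actually switches your LEDs on:

    # option A: root's crontab (edit with: crontab -e, as root)
    @reboot /usr/local/bin/leds-on.sh

    # option B: a system-wide systemd unit, e.g. /etc/systemd/system/leds.service
    [Unit]
    Description=Switch on the LEDs at boot

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/leds-on.sh

    [Install]
    WantedBy=multi-user.target

    # then: systemctl daemon-reload && systemctl enable --now leds.service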
  19. Hi, here's just my 2 cents, inline: That's plenty of H/W for any file sharing server. The distribution doesn't really matter; just disable the services you don't need. This also applies to a GUI, as it is also a service - just don't enable it if you are not using it (or don't log in; any login manager will take very little in the way of resources). The only exception is HDD space; if you are really low on it, then the choice of GUI might actually matter, but since you're setting up a NAS I would assume you have plenty of storage space. Also remember: Linux DEs don't replace a server-oriented Windows, as the whole philosophy of the OS is different. As a result, the GUI tools to set up any kind of server just (generally) don't exist. There may be exceptions, and if there are GUI tools, they are probably third-party front-ends for configuration, so don't expect them to be as polished - i.e. you might still need to fall back to the dreaded command line, in case some third-party front-end does not do its job properly. It is likely that by having a GUI the only thing you achieve is the possibility of having many terminal windows open (as an alternative to screen / tmux / VTs) and a web browser on the same computer, but that's about it - and you can have the same functionality by SSHing into the box, but with your familiar Windows/whatever desktop you currently have (and as such, this approach might actually be even more convenient than doing maintenance locally, but YMMV...). The suggested Webmin does look quite cool, though, and most probably works well for established and developed services. But if the service backend is in heavy development, well, you can expect things to break. If some Webmin module has bugs in it, or is not in sync with the backend, or if some module is limited in its options, you might still need to use the command line and hand-editing. Personally, I wouldn't use Webmin, since it just looks like an additional layer on top of stuff I can already do, but YMMV - and on the other hand (provided the modules are well written), using it and hand-editing are not mutually exclusive. When do you have slowdowns? When does it crash? These are important questions! What you're describing sounds like you might even have a H/W issue, and if that is the case, starting from scratch will not solve anything. Generally, in Linux it makes much more sense to troubleshoot an existing installation than to re-install (compared to pre-Vista Windows... hopefully more modern Windows versions are also more clever in this regard, or so I've been told). Linux does nothing "behind your back" as some other OSes *ahem*Windows*ahem* might do. If you are certain you have screwed up the configuration so badly that you cannot revert the changes you've made, then (and only then, really) does it make sense to re-install. Learn your package manager and how to restore stock configuration files with it, as that will take care of screwed-up configuration most of the time. If the server is just for Samba, I would choose some flavor of Debian, but only because I'm already familiar with Debian. The already-suggested Ubuntu Server is probably an excellent choice, too (and might not differ that much from Debian; it's been a few years since I've used either one, though). But the choice of distribution doesn't really matter that much - all distributions work just fine for home server use. Just don't choose something which is on the bleeding edge (Arch / any rolling release), prefer LTS releases, and you're good to go.
Bleeding edge distros will also work, but might break because of bleeding-edge features or bugs creeping into the distro... that's why they're called bleeding edge! The choice of tools for file sharing depends more on the client, i.e. what OS/software you need to access the files with. Again, the distro doesn't matter, since all GNU and other FOSS tools are available for all distributions; you will have a hard time finding exceptions! If you need the Plex server, your choices are already a bit limited, as that is a closed-source piece of software. But overall, Plex seems to support quite a good selection of mainstream distributions, i.e. most Debian- and Fedora-based ones. If you need to access the files from within Windows, so that all software can see the files transparently - and you need write access - then Samba is probably the way to go (see the sketch below). If the other computers are running Linux, then choose NFS. If you only need read access (and don't need the traffic to be encrypted, say, because it is in your private LAN and the files are not private / sensitive), then even FTP might work for you. How you configure something like Samba will not really differ between distributions - the configuration will be identical for practical purposes (there might be minor differences in configuration file locations etc.).
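For the Samba case, a minimal share definition as a sketch only - the share name, path, user and service name below are placeholders, and the stock smb.conf ships with plenty of commented examples of its own:

    # /etc/samba/smb.conf
    [global]
       workgroup = WORKGROUP
       server role = standalone server

    [media]
       path = /srv/media
       read only = no
       valid users = youruser

    # then set a Samba password for the user and restart the service:
    #   smbpasswd -a youruser
    #   systemctl restart smbd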
  20. There are ways to do what the OP wants (contrary to what some replies here claim). I've done this before, way back when I had a computer with no internet connection (actually, the computer had a slow modem connection, but it was not used for upgrading, since that would not have been feasible) and wanted to upgrade its Debian-based distribution. Though, I must admit I'm not 100% sure what the OP wants and whether this is the right tool for the job, but if the goal is to be able to upgrade a Debian-based distribution without an internet connection (or with an unusable one, for example too slow), it can be done. Anyway, an internet connection is not required for upgrading (or for installing additional packages), but of course some other way to transfer files from another Debian-based computer is (and as for security, for an offline computer this should not be an issue). What is needed is another computer with a fast (enough) internet connection, and of course something to transfer the files to the other computer. One needs to confirm that the computer used to fetch the packages uses exactly the same sources.list configuration (obviously, if they use different configurations this will not work). Then fetch the packages and metadata. The packages need to be fetched so that apt assumes no installed packages (there are switches in apt or other tools to do this). This will handle dependencies, and can also be used to fetch packages for new software (not installed on the target). After this has been done, move the metadata + packages to the computer with no internet connection (USB drive, HDD...), configure an offline repository on the target, and apt update and upgrade (or install) as usual. My memory is hazy, but IIRC I used a method similar to this, which is a bit different from the Ubuntu help page linked by @TorC. There may be other ways. Probably nothing which can be done with a few mouse clicks via a GUI (as this is a bit too special a use case), but something which requires a little bit of searching and reading documentation. A rough sketch follows below.
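From memory, the rough shape of the procedure - package names and paths are examples, this sketch does not handle the "assume nothing is installed" switches mentioned above, and you should read the apt-get and dpkg-scanpackages man pages before trusting any of it:

    # on the online machine (same release and sources.list as the target):
    apt-get update
    apt-get install --download-only some-package       # .debs land in /var/cache/apt/archives
    cp /var/cache/apt/archives/*.deb /media/usb/repo/

    # build repository metadata (dpkg-scanpackages comes from the dpkg-dev package):
    cd /media/usb/repo && dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz

    # on the offline machine, point apt at the drive in sources.list:
    #   deb [trusted=yes] file:/media/usb/repo ./
    apt-get update && apt-get install some-package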
  21. Well, I agree with mahyar and others in this thread - it really depends on what you are doing on the computer, your priorities, and also heavily on which games you play. Depending on the titles and your luck, experiences may vary =). As someone who has been gaming on Linux since the early 2000s (!), I can say things have improved a lot since then. I'm also someone who can easily do, and doesn't mind doing, a few tweaks here and there. However, some games work just fine out of the box without any tweaks (with Proton and/or Wine), and some games have native ports; those numbers have been steadily, albeit slowly, increasing over the years - though, I must admit, some ports are poorly done / optimized, while some are excellent. What I would suggest, since I've gathered you've determined you want to install some Linux distribution in any case: just try it! If you're lucky (with the titles you want to play), you won't need to dual boot, but if some game doesn't work because of incompatibility issues, boot into Windows. You can do both - they're not mutually exclusive.
  22. First, post the complete output of the apt install command. Copy and paste it here (in code tags), including the command you've typed. One possibility is that you have not synced the apt database (with apt update) before running apt install. Another alternative is that your apt mirror sources are misconfigured for some reason. A third possible reason is a wrongly configured network (or some network error anywhere between your computer and the mirror). Or, possibly, packages with those names just don't exist (I didn't check). Apt should give reasonably sensible error messages which should hint at what is wrong, but there is no way for anyone here to know unless we can see the output. For an introduction to apt, see: https://help.ubuntu.com/community/AptGet/Howto. You need to run apt update and apt upgrade periodically (see "Maintenance commands" on the page I've linked above). If you don't, apt could 1) try to download old packages which no longer exist on the mirrors, if your metadata (updated by running apt update) is not in sync, or 2) fail because of unresolved dependencies, if an upgrade hasn't been done recently enough. As for PPAs, look here for a quick explanation: https://help.ubuntu.com/community/PPA
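In short, the normal flow looks like this (the package and PPA names here are only examples):

    sudo apt update                      # refresh the package metadata first
    sudo apt install some-package        # then install what you need
    sudo apt upgrade                     # keep the installed packages up to date

    # adding a PPA (Ubuntu and derivatives only):
    sudo add-apt-repository ppa:someuser/some-ppa
    sudo apt update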
  23. Ok, that tells us the VGA cable is at least working. There should be some kind of boot manager installed (grub or something) but depending on how it is configured, the delay for it automatically selecting Linux could be so short you can never see it (especially if the Linux installation is the only choice). What about the VTs?
  24. VGA input is intelligent enough to communicate with the GPU (and OS) so that a resolution / refresh rate cannot be set outside what the display can handle. Unless someone has specifically told / forced X.org to use that high a refresh rate, it should not set it that high (well, at least with the more modern revisions of the standard - and you have a flat panel, so...). There is the possibility that your VGA cable is very old or broken and cannot communicate with the display (if the data pin is broken, the OS cannot "sense" the display and can do whatever it wants with the signal... actually, old CRT VGA monitors had no protection whatsoever, and the user needed to set the vsync and hsync limits manually). The important details are: how did you (or someone) install the distribution, which display manager is in use, and whether there are any special X.org configurations in place. I don't know that distribution, but most probably you need to look at /etc/X11/xorg.conf.d/ and see what configuration files are in place there, and whether any sync rates, modelines or resolutions are forced in there. Also, important questions are: at what point of bootup do you get the error (from the display OSD, I presume, about the sync rate being too high or similar)? Can you see the BIOS screen? The boot manager? The bootup logo / messages? Do you get any display if you change to a different VT (CTRL+ALT+F[1...7], with 1 or 7 usually being the GUI)?
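If something does turn out to force the rates, the kind of snippet to look for (or, as a last resort, to add with values taken from the monitor's manual) looks roughly like this - the file name, identifier and ranges below are made-up examples, not values for your monitor:

    # /etc/X11/xorg.conf.d/10-monitor.conf  (hypothetical file name)
    Section "Monitor"
        Identifier   "VGA-1"
        HorizSync    30.0 - 70.0
        VertRefresh  50.0 - 75.0
    EndSection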