Wild Penquin

Member
  • Posts

    389
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Profile Information

  • Gender
    Male
  • Location
    Finland

System

  • CPU
    i4790k
  • Motherboard
    Asus Maximus VII Gene
  • RAM
    32GB
  • GPU
    Radeon RX Vega 64
  • Case
    Lian Li DX-04
  • Storage
    Loads
  • PSU
    Works and is quiet
  • Display(s)
    Samsung LC34F791
  • Cooling
    Air
  • Keyboard
    Corsair K95RGB Platinum
  • Mouse
    Logitech G703
  • Sound
    Jazz, Progressive Rock, electronic music!
  • Operating System
    Arch Linux

  1. I agree with others in this thread that in OP's case, the best thing to do is to take it to a specialist. The data is too valuable to be lost, and it is probably still there. It might not even cost that much, since the recovery might be trivial for a specialist.

However, for others in the same boat (who might wander into this thread), YMMV. If the data is not valuable enough to take it to a specialist, one might attempt the recovery oneself. There are already good tips in this thread; I'd go about it like this:

Preparation: Calm down. Don't do stuff like the OP did (panic and yank out drives etc.). Remember: no data will be lost if you don't do anything. Don't rush, and keep that mindset throughout the recovery. Read the documentation of the tools you plan on using beforehand. There are many alternatives; I will give one way, the one I would use (since I am familiar with these tools).

Actual recovery (with any Linux system and GNU tools; a rough command sketch follows at the end of this post):

     1. Get another drive with more free space than the total size of the disk you need to recover.
     2. Set up a Linux system so that you can write to this new spare drive.
     3. Make sure all utilities mentioned in the steps below are available, and make sure you know what you are doing before proceeding (up until this point you have not touched the drive to be recovered!).
     4. Connect the drive to be recovered to the system.
     5. Make an image of the drive (make sure you do not write to it while doing this!). You can use dd for this. If there is physical damage (bad sectors) you might want to use ddrescue instead. Be prepared to say goodbye to the data in the case of bad sectors: there is only so much ddrescue can do, and attempting reads might degrade the drive further (*). For a drive without physical damage, reading it is safe; messing up the parameters of dd is not!
     6. Do the data recovery on the image. Start by recovering the partitions. GNU parted has a partition rescue option, which is helpful if you have even a vague idea of where the partition(s) used to be.
     7. Mount the partitions from the image read-only and see if your data is in there. If you can find it, copy it somewhere safe.
     8. If you cannot mount the partitions, parted has misdetected them - or (worse) someone/something has written over the data. Try again from step 6 with different values for the recovery.
     9. If you cannot recover the data - remember, you still have the original drive untouched (since the mishap occurred) - you can now change your mind and take it to a specialist!
     10. Once you have your data copied, the recovery is complete! In the best case, you now also have the partition values, which you can re-create on the original drive too (you don't strictly need it anymore, as the data is safe). Best case scenario, you can just boot off it like nothing happened!

The main point was already said in this thread: don't write on the drive to be recovered! Instead, make an image and work on the image. This can be achieved with many tools.

Backups: I might sound like a preacher here, but really, really do make backups. I know, it is boring. But strong anecdotal evidence tells me the OP is not the only one who has had a similar loss of data (or, hopefully for the OP, a near loss). Too many people have lost all their family photos/videos taken over many decades, and people have even lost the only copy of the thesis (or similar) they had been working on for a few years (hope it is not CS, that would be too embarrassing)!

Cheers!

*) p.s.
A small (but important) side note: some people recommend that for disks with known bad sectors, or known to be physically failing, making an image is not the best approach. Instead, what is often recommended is to mount the filesystem read-only (on a non-invasive OS which does not "read the disk behind your back"; this can be done from a minimal live Linux), copy over the most important data first and the less important data little by little - and after that, perhaps as a last step, try reading off an image. Reading a whole image stresses a failing drive, and most of the data on the disk is usually not that important (free space, the operating system etc.); the most important data might in the best case be only a few megabytes, such as the thesis you've been working on.
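To make the steps above concrete, here is a rough sketch of the commands I have in mind. The device names, image paths and mount points are placeholders - identify your own drive with lsblk first, and double-check everything before running it, since mixing up if= and of= in dd is exactly the kind of mistake that destroys data:

    # Assume /dev/sdX is the drive to recover and /mnt/spare is the big spare drive.
    # Healthy drive: plain dd is enough
    dd if=/dev/sdX of=/mnt/spare/recovery.img bs=4M status=progress

    # Drive with bad sectors: ddrescue instead (the map file lets you resume later)
    ddrescue /dev/sdX /mnt/spare/recovery.img /mnt/spare/recovery.map

    # Work on the image: expose its partitions as loop devices
    losetup --find --show --partscan /mnt/spare/recovery.img   # prints e.g. /dev/loop0

    # Try mounting a found partition read-only and copy the data out
    mkdir -p /mnt/recovered
    mount -o ro /dev/loop0p1 /mnt/recovered

If the partition table itself is gone, you can open the image with parted and use its rescue command to search a rough start/end range for a lost partition.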
  2. How do you know this (added emphasis to make my point clear)? There is also a lot of other information, but I have no idea what you have based it on, since there are no logs or other details. Post the output of `lspci` and - most importantly - the Xorg.0.log, and perhaps even the kernel log from a boot (`journalctl -kb`); use pastebin or something similar for very long logs. Also, `ls -l /dev/dri` can be used to determine which GPUs the kernel has found.

My guess would be that X.org does not handle several graphics cards too well by default. This might even be by design, since a GPU could be used for many things (computing; separate X.org sessions), and unless the user tells it otherwise, it will choose only one to use. Most (all?) GUI monitor settings in DEs do not really configure X.org itself, only the outputs (of the GPUs the X.org session has in use). There is probably information buried somewhere in the X.org documentation, but I'd guess little information online, since running a single GUI with many graphics cards (in a non-hybrid-graphics setup with separate monitors on each) is a bit niche. The Arch wiki has some examples of what to configure for multi-GPU setups (X.org is mostly uniform across distributions, so they should work in any of them, but check your distribution's own configuration examples). Anyway, I'd do the above steps first to determine what is going on.

Xinerama is an old, legacy way of handling multi-monitor setups and does not in any way help with several GPUs. AMDGPU PRO only offers advantages for computing (OpenCL) users; for normal GUI users the open driver is better (but I could be wrong - it doesn't hurt to try).
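If the logs show that X.org only picks up one of the cards, the usual fix is to give it explicit Device sections. A minimal sketch of what such an xorg.conf fragment might look like (the driver names and BusID values below are made up as an example; take the real bus addresses from lspci, and note that lspci prints them in hex while xorg.conf expects decimal):

    Section "Device"
        Identifier "GPU0"
        Driver     "amdgpu"       # or "modesetting"/"nouveau"/"nvidia", depending on the card
        BusID      "PCI:1:0:0"    # placeholder; derive from lspci output
    EndSection

    Section "Device"
        Identifier "GPU1"
        Driver     "amdgpu"
        BusID      "PCI:2:0:0"
    EndSection

The Screen and ServerLayout sections that tie the monitors to these devices would follow, along the lines of the Arch wiki multi-GPU examples.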
  3. 1. The output of the fsck command would be useful (but it seems you didn't save it). 2. Also, check the journal for any errors before / around / after the failure. Otherwise, it is guesswork. I would not use any USB adapter for permanent use, though - it is simply too likely a point of failure, and usually an unnecessary one.
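For point 2, a couple of journalctl invocations should cover it (adjust the boot offset to whichever boot the failure happened in):

    journalctl -p 3 -b       # errors and worse from the current boot
    journalctl -p 3 -b -1    # the same for the previous boot
    journalctl -k -b -1      # full kernel log from the previous boot, for context around the drive messages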
  4. The main target of most (all?) game publishers targeting Linux is still Ubuntu. It makes little sense to install any other kind of distribution if the main goal is gaming.

I must admit the bleeding edge of Arch Linux (or a derivative) is enticing. However, the bleeding edge comes at a cost; it might sometimes break or be unstable, because not that much testing has taken place (yet). I don't mind, since I know my way around a Linux environment. I would not recommend any Arch-based distribution for a beginner in general (however, if one is prepared for breakage, knows what that might entail, and is not afraid to read documentation - then go ahead with it).

Manjaro is a nice idea, but since it is a much smaller undertaking than an Ubuntu-based distribution (or Arch), it has less QC, which means some more rough edges; or at least that was my experience the last time I tried it. If one wants more stability than pure Arch, then I'd say Manjaro is not an alternative to some Ubuntu flavor. Things might have changed, though - there was nothing majorly broken which couldn't be fixed, but their customization and differences from Arch brought more problems (= time needed to fix things) than benefits for me. Manjaro deviates too much from Arch to "feel like home", and sits in an "uncanny valley" between stability and bleeding edge without being good at either (sadly, before you ask - I don't remember specifically what was broken in Manjaro the last time I tried, so take all of this with a grain of salt!). Another reason I'd like to use an Arch-based distribution is the AUR; however, Manjaro is not in sync with Arch, which means packages from the AUR will sometimes not work without intervention. But YMMV - you might not care about the AUR, and/or you might be willing to adjust the PKGBUILDs when needed or make your own! (Typing this, I'm starting to think about giving Manjaro another shot!)

For a workhorse / gaming machine, or any Linux installation which "should just work" (i.e. I'm not interested in tinkering with it), I'd choose a non-rolling, mainstream release distribution (Ubuntu).
  5. Most (all?) Linux distributions can resize NTFS partitions. This means your data will not be gone, even if you install on the same disk (contrary to what some people claim in this thread). However, do make backups of all the important data on your Windows drive before installation. Any partition resize operation is dangerous in principle (for example, a power outage at the wrong time means the file system being operated on will be corrupted, possibly beyond repair). Installing alongside Windows can also be confusing for someone not accustomed to partitioning - and, frankly, I've seen some quite confusing user interface choices in some distribution installers, especially in the partitioning phase. So one could erase their Windows partition during installation by mistake (even though that is not strictly necessary and the installer offers another choice).
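If you want to see beforehand whether (and by how much) the Windows partition can be shrunk, ntfsresize from the ntfs-3g tools can do a read-only check. A sketch, assuming the NTFS partition is /dev/sdX2 (a placeholder - check yours with lsblk):

    # Read-only: reports the smallest size the filesystem could be shrunk to
    ntfsresize --info /dev/sdX2

    # Dry run of a shrink to 100 GB, without touching the disk
    ntfsresize --no-action --size 100G /dev/sdX2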
  6. I didn't notice the OP hasn't built their system yet. I'd indeed take an AMD GPU over an NVidia one given the choice; however, the NVidia drivers are not necessarily bad either. They are more likely to interfere with things like hibernation, so with a laptop it is doubly important to try to steer towards an AMD GPU. Things used to be the other way around (going back to the fglrx days), but they have changed! The NVidia drivers are still usable, though.

Also, this may seem nit-picky, but that statement is untrue (emphasis mine). Both the AMD and NVidia drivers are "built into the OS" on exactly the same level. The difference is that the NVidia drivers are closed source and the AMD drivers are open source (the desktop-oriented version; there is also the CUDA/enterprise/computing-oriented version, which is partly closed source). In practice, this means two things.

First, users are at the mercy of NVidia as to how they develop their drivers and what they prioritize, and their co-operation with the Linux kernel hasn't been that good AFAIK. The AMD drivers clearly seem to have a better development model, and they have benefited from the fact that the driver is open source. Got a deal-breaker bug with NVidia? Tough luck - you can try to post on their forum and **hope**! Got a deal-breaker bug with AMD? Provided you know how to make a proper bug report, you can file one with your distribution or on the upstream bugzilla, and it will get attention unless the bug is really, really rare and obscure. If it is a common one, someone else will report it! EDIT: Got some other bug in the kernel while using NVidia? The first thing they will tell you is to disable the out-of-tree NVidia driver, since it "taints" the kernel, and then reproduce the bug!

The second thing, from the user's point of view, is that the NVidia driver needs to be downloaded / enabled separately (on most, but not all, distributions), but the core reason for that is the licensing. From a software point of view, it is a driver (or several components of the chain) in exactly the same way the AMD driver is.
  7. It really boils down to what you were not satisfied with. What are you going to do with it? Any distribution can be thinned down (unnecessary services shut down), but it really doesn't matter that much. While 8GB of RAM might be a bit on the low side these days, the background services don't consume that many resources or CPU cycles after all.

Arch is good in that it doesn't install anything extra, only the stuff you want. This also means it will be more difficult; it assumes the user needs nothing, and you will need to install and configure a lot of stuff which might be there by default on some other distribution. As such, Arch will also require you to be able to read its documentation (having a general feel for how a Linux system works, with GNU tools and the command line, helps). It's really nothing special in terms of resource usage.

Assuming you had problems with the DE? For a desktop user, the choice of DE matters more than the distribution. If it was a lack of RAM, then the mentioned LXQt and LXDE are, relatively speaking, not nearly as lightweight compared to mainstream DEs as they used to be; for example, KDE Plasma had some major reworking in recent versions to reduce RAM use. If you need something really lightweight, try a tiling window manager (instead of a full-blown DE) such as i3 - it won't get much lighter than that (even by running bare X.org), but be prepared to have a keyboard cheat sheet ready for its shortcuts, to hand-edit the configuration files, and to shift your mindset to a tiling window manager.
  8. You didn't tell us why you want / need to change to Linux - and as such, @Dat Guy posed an important question! For your priorities, it seems like the best option is to stick with Windows. That doesn't mean you don't have a reason you didn't tell us (it's your choice after all, you don't need to tell us =) ). If your mind is set, be aware there might be things you will need to give up.

Any distro will do, but as you are new to Linux distributions, choose a mainstream one, like some flavor of Ubuntu. As long as you don't choose a server- or stability-oriented distribution (which means lagging behind in gaming, multimedia and desktop-oriented updates in favor of server-grade stability), your gaming experience will be as good as it gets. Indeed, anti-cheat will cause problems, along with some other incompatibilities with Windows games. Sometimes you still need to choose to play games which work on Linux, and have a good reason to use Linux - if you cannot change the games you play, stick to Windows. Pop!_OS is often recommended here, and as such I suppose it is good (I haven't tried it myself). I'm not sure why Pop!_OS is considered the easiest - from a quick glance, it seems the only thing separating it from other desktop-oriented distributions is that it enables the proprietary NVidia drivers by default (which is almost mandatory for gaming). But installing the proprietary drivers is a matter of a few mouse clicks via the GUI in most mainstream distributions, and with an AMD GPU this is not an issue (the drivers - for gaming/desktop users - are already in the kernel).

For the general desktop experience, the most important thing is the choice of DE (not the distribution per se!). Many distributions use Gnome, which I really dislike (it is too basic and dumbed-down, if you will, for my taste), but the choice of DE really boils down to user preference. I encourage you to try out several! You can install several desktop environments in practically any distribution out there and change the one you use quite easily. I'm really set on using KDE Plasma (more suitable for a power user IMO, more Windows-like than Gnome), but I have tried several DEs over the years. I also like i3 for its simplicity, but remember you will really need to change your orientation towards the UI, as there will be some paradigm shifts if you go to some less-mainstream DEs.

As for video editing: I don't do any video editing, but the last time I checked, there were far fewer video editing applications on Linux than on Windows. Some proprietary editors do have a Linux version IIRC, while some don't; Sauron claimed your editors don't work in Linux. As a result, there are very few people doing any serious video editing on Linux AFAIK (it's a chicken-and-egg problem - there will be no good editors as long as there are no users). Some FOSS solutions exist, but they are often buggy (community-driven, where development has often stalled) and/or have a sub-optimal user experience because of a badly designed UI. The FOSS solutions might not be up to professional work (but OK for occasional home video editing, say a few times a year). Blender is often cited as the best FOSS editor, despite being a 3D computer graphics software suite!

In general, it absolutely makes sense to check before switching whether the applications you absolutely need work in Linux, and if they don't, whether you can find an alternative which does. The Wine AppDB is handy if you want to check Wine compatibility, and for games, add ProtonDB to the list.
If it seems bad, then you might consider not switching or dual-booting.
  9. It's been a few decades since I last used Windows, but typically for a shared printer you don't install drivers on the client. CUPS has the drivers and handles the printer for the client, and shares the printer (as a generic printer with advertised features, such as color/BW, duplex, paper sizes etc.). As someone who has abandoned Windows, I really can't help on the Windows side, nor can I vouch that your CUPS configuration is correct - although from a quick glance it seems your CUPS is configured correctly. What I would do is remove all printers from Windows, and try to re-add the printer as a network printer. Do not choose a printer model to add (that would assume a local printer, or a non-managed printer connected straight to the LAN without a server such as CUPS in between).
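On the CUPS side, a few quick checks on the server can confirm the printer really is shared (the queue name below is a placeholder):

    lpstat -t                                        # list queues and whether they accept jobs
    cupsctl | grep -i share                          # is sharing enabled at the server level?
    cupsctl --share-printers                         # enable sharing if it is not
    lpadmin -p MyPrinter -o printer-is-shared=true   # share one specific queue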
  10. Well, why not just try it out and see if it works? Note: the keyword here is not "internal" or "external", but "removable" vs. "non-removable". Some BIOSes might not allow booting from all (USB) drives, though.

It really depends on the UEFI implementation of the laptop's BIOS, how the bootloader was configured on the OS you've installed, and what the default EFI binary on the external drive's EFI partition is. Typically, internal drives are configured slightly differently than external (removable) ones - however, most bootloaders on a typical Linux distribution will install themselves as the default bootloader (too) if none exists there yet, and should work once the drive is handled as a removable one. For external (= removable) drives, it makes no sense to make a UEFI entry in the NVRAM (obviously, since the drive is removable and could be used in any computer, and OTOH there is no guarantee it will be present on the next boot), and this is really the difference from non-removable ones. So the only EFI loader usable on removable drives is the default one (it could be a boot manager which can start other EFI binaries). If your OS bootloader has installed itself as the default, it should work from an external drive. If it hasn't, then what typically needs to be done is to boot somehow into a Linux distribution and 1) make the NVRAM entries with efibootmgr (or similar), or 2) re-install the whole bootloader; a sketch of both options follows at the end of this post. If you do plan to actually remove the drive, you should install the bootloader as the default EFI partition entry. If you are booting Windows, you are probably better off using its tools. I'm not sure whether Windows can easily be installed on a removable drive, but I guess it should be possible.

But based on the information you have given, it is impossible to give a definite answer for your case specifically. I can't actually make out which OS you are trying to boot from the external drive. I.e., is it the original OS (Windows?) your laptop shipped with, some other Linux installation, or the Pop!_OS installation you are trying to boot?
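For the Linux side, roughly what the two options look like, assuming GRUB; the device, partition number, label and loader path below are only examples and depend on your installation and where the ESP is mounted:

    # See which boot entries the firmware currently knows about
    efibootmgr -v

    # Option 1: add an NVRAM entry pointing at the bootloader on the external drive
    efibootmgr --create --disk /dev/sdX --part 1 --label "External Linux" --loader '\EFI\grub\grubx64.efi'

    # Option 2 (better for a drive that moves between machines): install GRUB to the
    # fallback path \EFI\BOOT\BOOTX64.EFI so any UEFI firmware finds it without an NVRAM entry
    grub-install --target=x86_64-efi --efi-directory=/boot/efi --removable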
  11. The answer is the same as for the swap question: there is no one-size-fits-all answer. It depends on what applications you run and how many of them at the same time. If it is regular office applications, a web browser and gaming, then no (as a rule of thumb). If you do video editing or CAD, run IDEs, or have a home server in addition to all of the above... then the answer is maybe. Look at your RAM usage in the worst-case scenario you find likely; that should be enough to give you an idea of whether adding more RAM is worth it.
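Checking is simple enough: with everything you would normally run open at the same time, something like

    free -h    # the "available" column is the one to look at, not "free"

gives the picture (memory used for caches is reclaimed when applications need it, which is why "free" alone looks misleadingly small).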
  12. Good comments here, especially the one by @LloydLynx. As a rule of thumb, don't worry too much about swap (whether you have it or not); it doesn't make a lot of difference these days for most users. It never hurts to have swap, though: if you often leave idle processes running, you will get more data cached in RAM by having swap. There is no one-size-fits-all answer, but it could be roughly the size of RAM used by idle processes on your system, or, if you need suspend-to-disk, the size needed for suspending.

As for wearing down SSDs: this is really not a problem and hasn't been for a while (a decade?). Only the very first generations of SSDs could be worn out in a typical home/office use scenario. These days, even if one were to thrash away at the disk with constant small writes, it would probably become obsolete for other reasons before it started to fail from too many writes. The cells are more durable and the wear leveling is more intelligent than it used to be.
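If you decide you do want some swap but didn't create a partition for it, a swap file is easy to add afterwards. A sketch (the size and path are just examples, and on Btrfs the file needs extra handling):

    fallocate -l 8G /swapfile
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    # make it permanent across reboots:
    echo '/swapfile none swap defaults 0 0' >> /etc/fstab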
  13. It was (and is) very easy to invent Windows 95 product keys, as the formula (checksum bits) is very, very weak. I don't actually remember the details, but I've seen it in a YT video or something. Later Windows versions (up to, but not including, WinXP) had somewhat stronger product keys, but it was still really not too difficult to invent "bogus" ones. I suppose they only functioned to deter piracy by average users (during a time when not everything was online); those hacky enough were going to pirate their Windows anyway if they wanted to. Even a strong product key has limited effect if there is no online activation, and conversely, it is difficult to make a functional (non-hackable) one which does not have some online component.
  14. Yes, there is. Many good suggestions already. You most probably already know you can do anything you could do from the command line over SSH. Screen or tmux (or some alternative) is immensely useful for leaving stuff running without needing to stay connected all the time. But the good old X11 forwarding was not mentioned yet: it's basically the same thing as RDP AFAIK, but for the X11 protocol / X.Org (I've never used RDP, and only briefly tested X11 forwarding "just because" - didn't really need it, but it does indeed work).
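For completeness, the corresponding invocations (host, user and program names are placeholders):

    # X11 forwarding: the remote GUI program opens its window on your local display
    ssh -X user@remotehost firefox

    # tmux: start a named session, detach with Ctrl-b d, and pick it up again later
    tmux new -s work
    tmux attach -t work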
  15. Which features specifically do you need to adjust? Pardon my ignorance, but as someone who abandoned Windows years ago, I have no idea what AMD Radeon Software does on Windows.

In GNU+Linux, things are done a bit differently than you might expect. The general philosophy is that any piece of software should do one thing and do it well; "bloat" in software is discouraged. This philosophy is strongly favored in core components every system might need (in applications, such as games, office applications or media players etc., more bloat might be tolerated). As a result, a GPU driver does just that - it's a GPU driver. Stuff which is (or should be) common to all display adapters or a GUI is done "on top" of the driver, in the GUI stack. For example, display-related settings are handled by the desktop environment (or the frontend, where one exists, is in the desktop environment). The state of these frontends may vary, as there is no central authority, APIs are in constant flux, there might be several standards (X.org and Wayland) etc. (there is a contrast here, stemming from the development model, compared to Windows or macOS). Most things are done (or could be done) by simple command line tools "under the hood" (these tools still talk to the APIs; and many frontends use these command line tools, even if talking to the API directly would be more "correct"). As a result, you often need to resort to the command line or a third-party frontend, which might not be as polished as what you are accustomed to on Windows. If it is an overclocking frontend you are looking for, check out TuxClocker or CoreCtrl.
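As an example of the "simple command line tools" part: on X.org, the display settings a Radeon control panel would handle on Windows are just xrandr calls underneath. The output names and modes below are placeholders - run plain xrandr to see your own:

    xrandr                                            # list outputs and available modes
    xrandr --output DP-1 --mode 2560x1440 --rate 144  # set resolution and refresh rate
    xrandr --output HDMI-1 --right-of DP-1            # arrange a second monitor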