
Everything posted by PianoPlayer88Key

  1. I disconnect & reconnect the power & data cables on SATA hard drives and SSDs on my desktop all the time; it supports hot swap anyway, and I haven't had issues with it killing drives or the system or anything like that. (I just have to be careful to make sure the drive isn't being used, at least not by anything that legitimately has a reason to use it. For example, if I'm in the middle of copying files to or from it, or it's my boot drive, or I have a file on it open in some application, or some other things like that, I try not to unplug it. But if I know I'm not doing anything with it, and I know it's not set to use a pagefile, and yet it says it can't eject right now when I click the icon in the system tray to unplug it, I'll probably unplug it anyway.)
  2. Okay so I ran some tests yesterday ...
Original drive to be backed up: 240GB Crucial BX300 2.5" SATA SSD, that I've been using in my dad's old Dell D830 laptop
Drive to host the backed-up image: 12TB Toshiba MG07ACA12TE 3.5" SATA HDD
Drive to restore the image to: 1.05TB Crucial MX300 2.5" SATA SSD
Desktop boot drives (the desktop is the one I'm doing most of the work on):
Windows (10 Pro 21H2): 1TB Silicon Power P34A80 M.2-2280 NVMe SSD
Linux (Ubuntu Studio, forget if it's 20.04 LTS or something else): 250GB Crucial MX200 M.2-2260 SATA SSD (this one had been Linux in my Clevo laptop with an i7-6700K, but it works fine in my Ryzen 9 5950X desktop without needing a reinstall - Windows, can you do that? although it did need an update cause some things (software) were a year or two old.)

TL;DR:
worked fine: straight-up image from BX300 to file on MG07, as well as reading files in the image (with 7-Zip, but mounting as a disk image didn't work), and restoring the image to the MX300.
worked, partially: piping "dd" to "7z" to compress the image.
what did NOT work: trying to read the contents of the compressed image. (It insisted on trying to decompress the entire 240GB image to RAM, and I only have 128GB plus another 70GB or so of swap on the 250GB Linux boot SSD in the laptop.)
not yet tested: restoring the compressed image to the MX300 SSD and booting from it in the D830.

Main question now: What do I need to do to "dd" an image to a compressed image file whose contents can be browsed in both Windows and Linux (like with 7-Zip), WITHOUT needing to decompress the ENTIRE thing first? Also in a google search I did, someone mentioned a way to automatically put the current date and time in the image filename while doing the image.

Second question: I would like to be able to automatically insert several fields into the image filename:
current date (YYYY-MM-DD, with punctuation)
current time (HH;MM;SS, with punctuation, substituting semicolon in place of colon cause Windows doesn't like colons in filenames, also using 24-hour)
drive capacity in GB (x1000, not x1024) with appropriate SI prefix (for example 240GB, 1.05TB, etc)
drive manufacturer
drive model number
drive serial number (if possible)
drive type, interface, form factor, maybe a few basic SMART stats
Also this probably would be difficult to make "automatic", but I'd also like to have a friendly name for what the purpose of the imaged drive is (for example boot drive for XYZ computer using UVW OS, or data storage for OPQ type of files, or whatever). (Also I might change the order up a bit from what I listed above, but I should probably decide on something for consistency.)

I first prepared the 12TB HDD by formatting an NTFS partition taking up the entire drive. (This is so I can access files & images from within Windows.) Then, I did a straight-up image/clone of the 240GB BX300 to a file on the 12TB MG07. This was done in Linux at the terminal with a command something like
dd if=/dev/sdb of=/media/stephen/SSD_Backups/test01/test1_CT240BX300SSD1_20211129_0039.iso conv=noerror,sync bs=16M
I may have forgotten one or two things in there; also those aren't the exact names of the media or folder, but are somewhat close, and a somewhat similar pattern. Also I should mention that the BX300 has a few partitions on it - a main Windows NTFS partition, a couple smaller Windows / boot-related partitions, and a Linux ext4 partition.
(I don't think it has a separate Linux swap partition, but I'm not sitting at that PC right now, and it's booted into Windows so I can't see Linux partitions anyway, other than the fact that there's a partition there.) This resulted in a 240GB (not GiB) image file, the size of the entire SSD. No compression was done, and even the unused space was "taken up" in the image. (The SSD was maybe half full or a little less.)

Then I went into Windows to test reading the image. Double-clicking the image (which I gave a .iso extension) didn't work, just gave a "disk image corrupted" type of error. However, opening it with 7-Zip did work, and allowed me to browse the file structure, including on the Linux partitions. (Each partition was its own separate #.type in the root folder, for example 2.ntfs, 4.ext4 or something like that.)

To test restoration ... back into Linux I go, this time with the 1.05TB MX300 plugged in, which had been completely emptied and re-initialized, this time with an MBR partition scheme, the same as the BX300. (Previously it'd been GPT, although since I was doing an entire drive clone/image, I wonder if MBR vs GPT preparation wouldn't have mattered anyway.) Now, the command is something like:
dd if=/media/stephen/SSD_Backups/test01/test1_CT240BX300SSD1_20211129_0039.iso of=/dev/sdg bs=16M
(can't remember if I did conv=noerror,sync though)
This results in the image being put onto the 1.05TB MX300, with about 800GB or so of unallocated space at the end. (We'll address that later.)

Then ... plug the 1.05TB MX300 into the Dell D830 laptop, power it on ... and after an initial hiccup (says no drive detected, happens a lot probably cause I don't quite plug it in all the way, reseating it usually fixes it), it boots to the Windows OS, which is the default. Then I test booting to Linux, and it takes a while but it eventually does get there. (I've had trouble with it before, not sure if it's the same install or not, but somewhere I have a video where I show some of the things it's struggling with ... if I find that I may add it later.)

So I know that the straight-up image works ... but there's a lot of wasted space on the now bigger SSD. Let's fix that. Back to Linux we go for this part of the exercise. Now, I open GParted, and start by moving the ext4 partition to the end of the drive, and growing it from about 38GB to about 80GB or so. I also moved a Windows recovery partition right before the ext4 partition (that was right after the main 200GB NTFS partition; idk why the Windows installer/setup/whatever doesn't put it at the beginning of the drive to make it easier to extend the main partition when you clone a drive), but for the time being left the main 200GB NTFS partition alone.

Interesting thing, though ... GParted hung up (at least the user interface) a few minutes into the process, and I let it sit for a while, probably a couple hours (which was several times longer than the 15 or 20 minutes or so that the initial "dd" clone / restore took) ... then when I was unable to kill the process or shut down the PC, I resorted to hitting the reset button on my case to force-restart. I thought something might have gotten messed up on the clone due to forcing the PC to restart before it said it was done, but I was hopeful because I had allowed it plenty of time ("maybe Linux is being Linux and not showing any output, or something bugged with the reporting ....") ...
and it turned out my fears were quelled, and my hope was justified -- the partitions showed up properly on the clone, including in their new sizes & positions. (At this point, I forget if I did a quick test boot in Linux before resizing the NTFS partition, but if I did, it worked as well as it did before.) Now, I go into Windows on the desktop, in Disk Management, and use that to extend the 200GB partition to fill the rest of the available space, which turned out to be 964 GB (or about 898 GiB, I think). Then, pop that 1.05TB MX300 SSD back into the D830 laptop (and have to reseat it yet again), boot Windows, and it's fine.

Okay, so the clone works with a straight-up dd to an image ... but I'd really like to save space where possible. So, after a bit of googling, I come upon a post on Stack Exchange (come on, when I set my preferences to required cookies only, I want it to apply across ALL Stack Exchange sites, not just the specific sub-category, and better yet I'd like my browser / network to automatically default to that anyway without asking, and ONLY ask me if something is required for functionality I'm trying to do!) where someone suggested piping the dd command into 7z (the terminal version of 7-Zip). So ... I go with
dd (my input from before, but not specifying of= ) | 7z a -si /media/stephen/SSD_Backups/test01/test1_CT240BX300SSD1_(some code he mentioned in his post that automatically inserts the current date and time, but without punctuation, resulting in something like 20211129_134638).tar.7z
That puts the image file in the same place on the NTFS partition on the 12TB hard drive, which this time only takes about 56 GiB or so.

Open the 7z file with a GUI utility (whichever one ships with Ubuntu Studio, don't think it was 7-Zip). There's the tar file. Open the tar file ... and it proceeds to try to decompress the entire image into RAM / cache before it would even let me see what's in it. (BTW this Ryzen 9 5950X system I was working with has 128GB RAM.) I see it's taking a while (and am a bit concerned about it eventually running out of swap as well), so I go away for a while. When I return maybe an hour or so later, the zip GUI has crashed, although I forget if it was an out-of-memory error or something more generic.

Okay, so I wonder if the Windows 7-Zip utility is a bit better behaved ... so back there we go, open the .7z file with 7-Zip, it sees the TAR. Open the tar, and it proceeds to try to decompress the entire thing. This time I abort it, "we're not gonna try that again". So maybe wrapping it in a tar wasn't the right thing to do, maybe it added an extra step? So this time, back to Linux, redo the dd | 7z again, but this time ending the image in just .7z, not .tar.7z. Open the image again with the Linux viewer, it opens up to another image file ... opening that again, it tries to decompress. Again. Back to Windows ... same thing ... opens from the 7z into the big image file, then trying to open that, it wants to decompress the entire image.

What can I do here? I want to be able to have the image be compressed, and be able to see the file / folder structure within it without having to decompress the entire thing. With normal zip / 7z files (created by adding files & folders in Windows, for example, to a 7z archive), I'm able to do that, but not with these disk images. Also I'd like to be able, when creating the image, to insert several fields directly into the filename, like date & time with punctuation, drive model & serial number, etc. Any ideas on how to do that?
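For reference, this is the kind of thing I'm thinking of trying for the filename part - just an untested sketch, with the device path and destination folder as placeholders (it assumes lsblk from util-linux, and optionally smartmontools, are installed):

      #!/bin/bash
      # Untested sketch: build the image filename from date/time plus drive metadata, then image with dd.
      SRC=/dev/sdb                                   # placeholder for the drive being imaged
      DEST_DIR=/media/stephen/SSD_Backups/test01     # placeholder for the folder on the backup HDD

      STAMP=$(date +'%Y-%m-%d_%H;%M;%S')             # semicolons instead of colons so Windows accepts the name
      MODEL=$(lsblk -dno MODEL "$SRC" | tr ' ' '_')  # e.g. CT240BX300SSD1
      SERIAL=$(lsblk -dno SERIAL "$SRC")
      SIZE_GB=$(( $(lsblk -dbno SIZE "$SRC") / 1000000000 ))   # decimal GB (x1000), not GiB
      # smartctl -i "$SRC" (smartmontools) could be parsed the same way for form factor, interface, SMART stats.

      OUT="$DEST_DIR/${MODEL}_${SERIAL}_${SIZE_GB}GB_${STAMP}.img"
      sudo dd if="$SRC" of="$OUT" bs=16M conv=noerror,sync status=progress

For the browse-without-decompressing-everything part, from what I've read a solid .7z of one big raw image pretty much has to be decompressed to get at anything inside it, so a format with random access might be the better direction - for example a compressed qcow2 made with "qemu-img convert -c -O qcow2" (mountable on Linux via qemu-nbd), or a dynamic VHD/VHDX that Windows can attach natively (not compressed, but from what I understand it doesn't store blocks that are all zeros) - though I haven't tested either of those in this workflow yet.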
  3. … (I'm working on getting some screenshots of file explorer from my desktop PC with some HDDs plugged in, showing some folder structure of several drives copied to their own subfolders in one big partition, plus a clonezilla images screenshot or few maybe .... it's not ready yet (and I'm taking a break right now, been working on it a while) but basically something like that would be how I would want it to appear to the end user once it's all set up, but better organized than in the screenshots.) Okay I got the screenshots, will put them in a spoiler. Okay so apparently I forgot to enable the option to show some hidden files (protected OS files specifically, on this newer PC setup / Windows install) so there might be some things that don't show in the screenshots. (I'm not going to redo them. Found out cause I was getting ready to finish cleaning off an SSD in preparation for doing something else with it, happened to boot into Linux and mounted it, and saw folders there that didn't appear in Windows.) Ahh, I guess I don't understand how booting a VM is the same as booting bare metal or testing it. Basically what I've done in the past is clone the drive, then shut down the system, physically unplug the source drive (technically SATA supports hot plug but I don't think you should hot-unplug the drive you're booting from), make sure the copy is plugged in, power on, and if it doesn't automatically boot from it, select it in a boot menu (usually I do have to do that). Also I don't remember for sure, but I think either I have had, or have read about other people who have had ... issues with similar situations, where something worked fine in a VM but didn't work on bare metal, or the other way around. I know about being able to make virtual disks, just haven't learned how to move them between VMs yet. (I should learn how, there are some VMs I created on another PC or install of a host OS (Windows most likely, or maybe Linux) that I want to migrate.)
  4. LOL no For buying just a case, I'd like to keep it under about $90-100 or so, preferably under $55-65 or so. A while ago I had seen some cases that would have come close to meeting my needs & budget, but I wasn't really ready to buy then. Some cases I've seen recently that would come close to (or maybe actually) meeting my needs in my budget, except for being way too big overall, include the Rosewill Challenger, Cooler Master N400 or Antec VSK4000E U3, although prices do fluctuate on an almost daily basis, it seems. If I was getting a smaller form factor motherboard (like Micro ATX or Mini ITX), I'd also maybe consider ones like the Apex TX-606-U3, Fractal Design Node 304, Silverstone PS08/09B, Fractal Design Core 500 for example. If I did a combo like that I'd want to keep it under about $120-150 or so, problem is I've been having a hard time finding inexpensive Mini ITX motherboards with 6 or more SATA ports, or Micro ATX motherboards with 8 or more SATA ports. (I know of one Mini ITX motherboard compatible with my i3-6100 that supports ECC and has 8 SATA ports, but can't find any available right now and they're usually upwards of $200 or so.)
  5. Ah ... well, I guess some of my HDDs are kind-of small... I'd actually prefer a lot smaller, and I don't want the unused space by the expansion slots, on either side of the CPU cooler, etc, to go to waste. I won't be using a GPU or watercooling, and I already have a motherboard (ATX - ASRock Z97 Extreme6) and would prefer not to get another one, unless I was building an entire backup server from scratch. If I was doing that, then $300 or so (maybe a bit less) would be my budget for the entire build, not including the storage drives; also ECC RAM would probably be a requirement. (If I went with UnRAID, the license would be included in the budget.) Also if I was rolling a backup server, I already have an i3-6100 laying around, as well as, I think, an orphaned 8GB stick of SO-DIMM DDR4-2133 non-ECC RAM.

Hmm ... those are as big as or bigger than my Define R5 though ... I was hoping to find something that was quite a bit smaller in cubic volume, like maybe 1/3 to 1/2 the size. Also I don't think either the Define 7 or Meshify 2 support mounting HDDs over the motherboard (instead of, say, a GPU or other things), even in storage mode. Interesting, I didn't know that thing existed ... but I'd prefer to not order from overseas for now. (USA preferred, Canada would probably also be okay, idk about Mexico even though I'm like maybe a 30 minute drive from the border.) (Another option I had considered was a moddiy 5-in-3 cage - fits 5x 3.5" drives in the space of 3x 5.25" bays - although I might use it externally.)

Ah, I'm not too familiar with those options ... and I don't have access to that kind of equipment. (Also my attempt to google 80/20 failed...) Yeah, I think it's a standard 120mm fan, so I could probably swap it. (Or maybe I could underclock enough to just run passive, or put the stock heatsink on, although I *really* don't want to remove the cooler, it was a huge pain to install.) I'd like as high a density of components as possible, without going rackmount. (I have no place to put a rack.)
  6. Ahh. Anyway I was just looking for some screenshots of occasions when I had HDDs plugged in that had lots of partitions (copied from other drives, mostly). There's several, so I'll put them in a spoiler. I'm basically trying to avoid situations like those. (I'm working on getting some screenshots of file explorer from my desktop PC with some HDDs plugged in, showing some folder structure of several drives copied to their own subfolders in one big partition, plus a clonezilla images screenshot or few maybe .... it's not ready yet (and I'm taking a break right now, been working on it a while) but basically something like that would be how I would want it to appear to the end user once it's all set up, but better organized than in the screenshots.) Ah, I've looked a little at ddrescue but mostly have used dd. Yeah, a VM might be an idea as well ... but idk how you would test "dd" in a VM when you're copying from an actual physical block device, you want to test reading the copy from outside the VM, you want to test booting from a copy on bare-metal hardware.
  7. Ah okay, so I guess I don't need to worry about it. Heh ... I did buy a key for my newest PC (5950X, etc). (I wish I could buy one maybe $150-200 key or something, then use it on as many computers as I own for life.) Also I prefer to spend the vast majority of my computer / tech budget on physical hardware, although I do buy software and games now and then. (I used to pirate a few things a couple / few decades ago, but I haven't done that since the mid-late 2000s or so.)

compared to some of the upgrades I or my family have done in the past. For example…
Feb 2008 (or Apr 2012 - complicated) to Jan 2015:
Athlon 64 X2 4000+ (or Core 2 Duo T7250 - AM2 board died) to i7-4790K
2 GB RAM (3GB after Apr 2009, then 2GB from Apr 2012) to 32GB RAM
Jan 1989 to ~Oct 1995:
286-10 (Intel) to AMD (486) DX4-120
640 KB RAM to (probably) 8 MB RAM (but may have been 4 MB)
40 MB MFM/ST-506 HDD to 540 MB or 1.6 GB IDE/PATA HDD
Also this upgrade, at least on the CPU/mobo side, was about 1/3 cheaper for the new hardware vs the old. I'm not sure what the direct comparison between the 286-10 vs 486DX4-120 CPU performance was (I did look up their MIPS on Wikipedia, extrapolating from CPUs actually listed, but I'm not sure if there's maybe a better way), but I'm guessing there was an average of 2-2.5x uplift per year in performance/price. Also several years ago, a friend from church retired a PC he had been using in his machining shop, and transferred to a newer PC. Basically he went from a 386 ... straight to either a Core 2 Duo/Quad, or maybe a Sandy Bridge office PC. (It was around 2009 to 2011, maybe 2012 or so, when I, my bro & dad helped him with that.)

Ahh ... it was, though. (I think I showed This PC open in the timelapse video, but yeah, the quality wasn't very good, mostly cause I exposed the camera for the screen when it was powered off, and then when I powered it on it blew things out. Also is there a better way I should have posted the video? I'd prefer not to have to upload to somewhere else - I don't have a vimeo account for example, and I'm pretty sure it's more than 20 MB... also sometime later I might want to post a short video clip doing a rapid scroll through my google photos feed, and ask if anyone has suggestions on how to go about better organizing things on there, as well as some media files on my own PC ... I'm not ready to compose & post that yet though.)

The C:\ right now has .... 610 GB free. I noticed that system restore wasn't working properly or something, and investigated (this was probably a couple months ago now) and noticed that there was some phantom (missing) entry, which I think somehow got there during the cloning process when I cloned from the 250GB to the 1TB SSD.

Ahh ... I hadn't really looked into imaging much, although someone else and I are in a brief discussion on imaging on another thread. (Basically my question there was how to back up a bunch of smaller SSDs onto a single large HDD, without having a bunch of partitions cluttering up the HDD, being able to boot from the images to test them, and being able to mount the images to read their files in both Windows and Linux.) I used the Linux "dd" command to do the clone / image ... maybe I should have done something else? I've had trouble with other methods though (Clonezilla being one, I think, although I thought I had gotten it to work once several years ago...), as in not being able to boot from the copy.
Also right after I copy the drive, if I plug in both at the same time, the OS will complain about a signature collision, so I have to resolve that. (usually in Windows it's right clicking on the drive in disk management and changing the signature, or UUID or something, I forget now.)
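(If I remember right, the diskpart version of that fix goes something like the below - the disk number and ID value here are just examples; the first "uniqueid disk" shows the current signature, and the second assigns a new one (GPT disks take a GUID instead of the 8-digit hex ID). Just bringing the offline disk back online in Disk Management is supposed to re-sign it automatically too, I believe.)

      diskpart
      list disk
      select disk 2
      uniqueid disk
      uniqueid disk id=5F1B2C3D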
  8. Ahh, that's good to know. (Yeah somehow I've mounted ISOs in Windows, did it once fairly recently and it had a drive letter, forgot how I did it but I guess double-clicking is how it's done.) Also when I made a few clonezilla images some time ago, it put them in several smaller zip files (or 7z I forget now), each being about 2 or 4 GB or so. Also how does that mounting images work if the image contains multiple partitions from the drive that was imaged? Yeah if I can get it to work right, looks like image files would be the way to go ... so I guess I do something like dd if=/dev/sdb (or whatever drive I'm cloning) of=(mount point of /dev/sda3)/drive_backups/model_serial_date_time.ISO or .IMG, plus whatever options like blocksize, noerror, sync, status=progress, etc? I'm thinking maybe I should test it on a smaller source drive (like maybe an 8.4GB or 80GB PATA drive although it's empty, or a 240GB SSD with a Windows install), put an image file on a HDD (maybe a 12TB or 14TB that's currently empty - I want to make sure I won't overwrite an entire partition accidentally), then see if I can mount that and read the files.
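(So, spelled out, something like the below is what I have in mind - untested as written, and the device names and paths are placeholders. The losetup -P part is how I understand a multi-partition image gets exposed as separate partition devices you can mount individually:)

      # make the image (source drive and destination path are placeholders)
      sudo dd if=/dev/sdb of=/mnt/backup_hdd/drive_backups/model_serial_date_time.img bs=16M conv=noerror,sync status=progress

      # later: expose the partitions inside the image, then mount whichever one you want to browse
      sudo losetup -fP --show /mnt/backup_hdd/drive_backups/model_serial_date_time.img   # prints e.g. /dev/loop0
      lsblk /dev/loop0                          # should list loop0p1, loop0p2, ... one per partition in the image
      sudo mount /dev/loop0p2 /mnt/img_part
      # when done: sudo umount /mnt/img_part && sudo losetup -d /dev/loop0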
  9. How does it work for having a bootable image on the HDD? Are you able to boot directly from that image on the HDD (and if you have multiple images on one drive, select which one to boot from in a boot menu on startup), or do you have to restore from the image onto the same type of drive that the image was made from? (For example, restore a GPT NVMe image to a GPT NVMe drive) Ah, I hadn't really thought of doing it as images like that. I'm guessing it would be fairly easy / trivial to mount the images in Linux ... but what if I want to mount them in Windows? I'd like to have that option as well - at least for NTFS & FAT - windows-compatible partitions; ext* mounting in Windows would be nice but I could probably live without that. (Also I'd like the option of mounting everything at once, instead of having to mount each one separately, if I want to do that.)
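(One idea I've come across but haven't tried yet: convert the raw dd image to a VHD with qemu-img, since Windows can attach VHDs natively from Disk Management (Action -> Attach VHD) and assign drive letters to any NTFS/FAT partitions inside; ext4 would still need Linux. Filenames below are placeholders:)

      qemu-img convert -O vpc drive_backup.img drive_backup.vhd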
  10. What do you mean by "tetresing"? (Or are you referring to Tetris?) And it would be used to back up the HDDs; I'm not sure if I'd want to use it as a NAS. (Sure, it might be nice to be able to access my desktop files from my laptop or my phone, but I get better performance reading & writing from HDDs that are directly connected to the motherboard, and I'm a bit wary of possible security pitfalls of people getting into the network from outside.)
  11. So who else has a current CPU with a bigger cache size than your family's first hard drive? 😂

    (screenshot attached)

    (Would have liked to post this as a topic in storage or CPUs, but the mods probably would have locked it & said it's better as a status update ... but who ever replies to these if you're not Linus?  (was gonna paste an image that looked like I was tagging him, but apparently it highlights in a different color in day vs night theme))

  12. Ahh, I'd most likely be using Linux for the most part. There are some partitions that Windows cannot see or interact with; also, while there is a "dd" tool for Windows, it's nothing like the one in Linux from what I can tell. Although, for the NTFS (and if still applicable, FAT variant) partitions, I could still use Windows file explorer to copy the files over. Even so, I'll probably still use Linux anyway, because I've sometimes had files be inaccessible even though I was logged in as admin. (I think this could be due to files that were originally on a different / older PC, with different owner, admin, etc.)

Sometimes though, even Linux won't see some partitions on the drive, or at least not see the file structure, with normal tools. In the past I've used the free version of Parted Magic, but that hasn't been updated since like 2013 (yes I know there's a newer paid version available, but I try to avoid that whenever possible), so I haven't really been using it much recently. I've actually had the TestDisk utility see some partitions and files that either only showed as an unknown partition in, say, GParted, or sometimes didn't even show up at all. (These include, for example, 3 MB or 16 MB BIOS or Boot partitions, or some like that.) Basically I want to back up *everything*, including, if possible, the firmware itself.

Ah hmm.... I would like it to be bootable, but I would also like to be able to explore and read (and sometimes write, but should probably have some safeguards) the file structure using normal OS tools like file explorers / managers, apps that can open the files, etc. I don't think I've ever used rsync ... any good resource to read up on it, and learn how it works, what it's used for, etc? Also I'm not sure if I could make a backup from an NVMe SSD bootable directly off a SATA HDD. (I wasn't able to boot a SATA install cloned to an NVMe SSD, so I imagine it also doesn't work the other way.) I usually like to test my clones by booting from them, and I don't have any extra NVMe SSDs to use as test subjects.

A rough example of how I might want the backup to look, seen from the OS's file manager: (Windows) || (Linux)
R:\drive1\part1\... (folders & files) || /dev/sdg1/drive1/part1/...
R:\drive1\part2\... || /dev/sdg1/drive1/part2/...
R:\drive1\part3\... || /dev/sdg1/drive1/part3/...
R:\drive2\part1\... || /dev/sdg1/drive2/part1/...
R:\drive3\part1\... || /dev/sdg1/drive3/part1/...
...
R:\drive10\part4\... || /dev/sdg1/drive10/part4/...
R:\drive11\part1\... || /dev/sdg1/drive11/part1/...
R:\drive11\part2\... || /dev/sdg1/drive11/part2/...
R:\drive12\part1\... || /dev/sdg1/drive12/part1/...
Although, I would probably name the "drive" subfolders based on the model & serial of the drive being backed up, and the "part" subfolders based on the partition names and types. Also if I had to, I could probably consider using 2 partitions, one for Windows-compatible partitions, the other for ones that need Linux. (But that could reduce my flexibility of resizing in case I need more for one than I had originally allocated, and was using less of one that had more space allocated to it.) Another idea that popped into my cabeza was to just use generic "drive" and "part" names for subfolders, and have something like a drives_&_partitions.txt (or .ods) at the root of R:\ (or /dev/sdg1) for reference ... but then that would make searching for files a bit of a chore, and there might be other issues I haven't thought of yet as well.
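(From the little reading I've done on rsync so far, a typical call for that kind of file-level copy looks something like the below - flags as I understand them from the man page, not something I've actually run for this yet:)

      # -a keeps permissions/timestamps/symlinks, -H hard links, -A ACLs, -X extended attributes
      # the trailing slash on the source means "the contents of", not the folder itself
      rsync -aHAX --info=progress2 /mnt/source_part1/ /mnt/backup_hdd/drive1/part1/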
  13. I have 10 Pro, and have losesleep update set to notify before downloading ... I wasn't aware that I could even get enterprise LTSC, I would have liked to do that. (I'm not one that likes doing major upgrades frequently that require shutting down the system, I'd really like to have uptime measured in multiple years or even decades, or "six or seven nines" of uptime.....)
  14. Which system / which drive? Some have Windows 10, some have Linux. Also the HDD won't be plugged in all the time; I'm basically thinking just plug it in to do the backup, then put it on a shelf, plugging it in as needed to update the backup or restore. (Although I probably should come up with some solution to do real-time on-the-fly backups ... I did some backups a year or so ago with a couple 8 and 10 TB HDDs, and haven't updated them since....)
  15. So I recently upgraded my desktop platform, and would like to put my older platform in a case. (I had been thinking of just running it open air, like on a motherboard box or on a wood surface, but considering the klutz I am and my crowded space, that might not be the best idea.) My existing case is the Fractal Design Define R5, which now houses an ASRock B550 Taichi, Ryzen 9 5950X, Arctic Liquid Freezer II 360, 128GB RAM (XMP is 3600 but won't POST there, so I'm running at 3200), as well as a reused Corsair AX760, EVGA SC GTX 1060 3GB, 1TB SPCC P34A80 SSD and a few HDDs. Anyway, my ASRock Z97 Extreme6, i7-4790K, Cooler Master Hyper 212 Evo, and 32GB DDR3-1600 are currently homeless.

That PC will likely be relegated to relatively light-duty work - for example, some kind of occasionally-powered-on backup server was an idea, except a gripe I have is that it doesn't support (or have) ECC RAM. I'd just be using the iGPU, and probably not add any expansion cards (unless I were to add an extra SATA/SAS HBA or two), so I'd be fine with using that space by the expansion slots (and around the CPU cooler) for mounting the PSU and some HDDs. Here's a pic of an example of what I envision by a "compact" setup. Now I need a case to fit them like that. (I'm thinking maybe something around 12.5" x 10" x 6.75" or so, or about 320mm x 255mm x 170mm, or a bit under 14 liters.) BTW, I could probably take a couple HDDs out, to allow for a bit more airflow, and maybe a bit easier cable management. I haven't found any cases that are that compact like what I'm looking for, though.

Also speaking of airflow ... I might need to replace the fan on that Hyper 212 Evo .... unless there's a way to resurrect it? Or maybe I should put the stock heatsink on - the 4790K came with one - or just run it passive and underclocked? (But I was thinking of using the fan, as well as the PSU fan, for airflow....) https://photos.app.goo.gl/azh2JCMH1gN8ArXn8 (I wonder if I killed it though while trying to blow dust out...)
  16. I would like to back up these: onto this: and I'm trying to figure out a good way to do it, if anyone has any ideas. Some of the SSDs are primarily data storage and things like that, so I'm thinking of just making folders in an NTFS partition on the HDD and copying the files over. But a few of the SSDs have OS installs on them, and just copying those files doesn't really work in case I need to restore the installs.

In the past, when I've backed up a bunch of smaller drives, some with multiple partitions, onto a larger drive, I had at times 20 or 25 partitions on a 5TB or 8TB drive, which got to be a mess with drive letters, trying to keep track of things, and so on. Also, when I've backed up OS drives, pretty much the only way that has worked for me is to boot Linux and do something like "dd if=/dev/sda (source drive) of=/dev/sdb (destination)", and clone to another dedicated drive. (I've also done a couple images with Clonezilla, but haven't had as much success with that.) Also, a while ago I tried cloning a Windows install on a SATA SSD to an NVMe SSD and that didn't work, so I had to clone to a SATA SSD. (This indicates to me there could be issues with trying to back up an OS install from an NVMe SSD to a SATA HDD.....)

This time though, I'd like to put that all onto one drive, with a strong preference of it being all in one partition so it looks "cleaner" in Windows. (Also as far as the OS backups, I'd like to be able to navigate the folder / file structure with normal file manager / explorer tools, something I can't do with the aforementioned Clonezilla images.) Is there a good way to do this? (A rough sketch of the layout I'm picturing is at the end of this post.)

As for what's currently on those SSDs, and where they're used....
240GB 2.5" SATA Crucial BX300 = Windows 10 boot drive, in older Dell D830 laptop (T7250, 2GB RAM, etc)
250GB M.2-2260 SATA Crucial MX200 = Linux boot drive, temporarily in my new 5950X desktop platform. (Previously was Windows 10 in my i7-6700K laptop, was going to be a Linux boot drive for the laptop but ... it's complicated as to why it's not.)
256GB 2.5" SATA Crucial M550 = Linux boot drive, most recently used with my 4790K desktop
500GB 2.5" SATA Team Vulcan #1 = backup of older Windows install from 4790K, MBR
500GB 2.5" SATA Team Vulcan #2 = backup of older Windows install from 4790K, GPT
1TB 2.5" SATA WD Blue 3D #1 = current Windows 10 install in 6700K laptop (cloned from 250GB MX200)
1TB 2.5" SATA WD Blue 3D #2 = several VirtualBox VMs (mostly Linux)
1TB M.2-2280 NVMe Samsung 970 Evo = Windows 10 install on 4790K desktop (currently without a case)
1TB M.2-2280 NVMe Silicon Power P34A80 = Windows 10 install on 5950X desktop (in Fractal Design Define R5, which previously housed the 4790K setup)
1.05TB 2.5" SATA Crucial MX300 = was a data storage SSD in the 6700K laptop; been emptying this one off after finding that one of my desktop HDDs already has an almost complete duplicate of what was on this SSD
2TB 2.5" SATA Seagate Barracuda 120 = data storage in 6700K laptop
2TB M.2-2280 NVMe Silicon Power P34A80 = data storage & pagefile in 6700K laptop

So there's a total of 10.796 TB of SSD capacity there, not taking into account space not being used. I'm pretty sure the 12TB HDD would have room enough to spare for everything on those SSDs. Also I'm wondering if I should look into getting an external HDD dock? I don't want the toaster kind cause I'm afraid of kicking it and knocking it over.
Several years ago I used to have a Rosewill one (forget the model number) that worked alright (except when I killed a hard drive by running it upside down, was doing that cause of where the cooling fan on that dock was mounted, would have been choking the airflow if I ran it with the drive right side up), but it only supported drives up to 2TB.
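(Roughly the kind of single-partition layout I'm picturing - one subfolder per source drive, image files for the OS drives, plain file copies for the data-only ones; the drive letter and folder names here are just examples, not something I've actually created yet:)

      G:\SSD_Backups\
          BX300_240GB_<serial>\        (OS drive: one dd image file + a notes.txt)
          MX200_250GB_<serial>\        (OS drive: image file)
          WD_Blue_3D_1TB_2_<serial>\   (data-only drive: straight copy of its folders)
          ...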
  17. Hey so I recently did a minor incremental platform upgrade on my desktop PC, resulting in it now being compatible with Windows 11, at least if I were to enable TPM in UEFI. But, I'm in no hurry to upgrade to Windows 11, I'm happy enough (not quite "perfectly happy" but it "works") with Windows 10. (I might want to skip 11 entirely and wait for 12 or 13, or eventually switch to Linux. Previously I upgraded almost directly from XP to 10 - "almost directly" because I briefly had 7 for several months in the first half of 2015.) If I leave TPM disabled in BIOS, should that prevent M$ from trying to force me to upgrade to 11? Or, is there a good reason to enable it, and do I not need to worry about it? (The minor upgrade was i7-4790K -> Ryzen 9 5950X, 32GB DDR3-1600 -> 128GB DDR4-3600 but currently running at 3200 cause 3600 XMP won't POST, ASRock Z97 Extreme6 -> B550 Taichi, Cooler Master Hyper 212 Evo -> Arctic Liquid Freezer II 360. The other parts were reused for the most part.) Also my laptop, with an i7-6700K, won't update past Windows 10 1909, the upgrader thinks there's not enough disk space, yet there's upwards of 600-630 GB free on the 1TB SSD it's booting from. (I did previously clone it from a 250GB SSD that was running out of space, although idk why that would cause an issue if my SSD now has plenty of space.) My older desktop with a 4790K, as well as my dad's old laptop with a Core 2 Duo T7250, have both updated to 21H1, or maybe 21H2 I forget now. Any ideas why my 6700K laptop won't update? (In case it helps diagnosing, a timelapse video is at <link removed by mod> , although I probably need to learn to better focus / expose the camera, it's challenging when I'm starting with a PC that's turned off and afaik can't change the exposure once the time lapse has started shooting...or maybe there's a way to do that on the Panasonic FZ1000...)
  18. yeah ... but when I've had it turned on I've had the issue where....
plug in external drive...
copy several 10s (or more recently a couple hundred) GB of files to internal storage
progress dialog reaches completion, closes
unplug drive
"uh, oops, data corruption, we weren't done...."
If I turn off caching I don't have that specific problem. The dialog doesn't disappear until it's actually done with the operation. (Basically, if the dialog is gone, that should mean it's done with its operation. I always like to set my drives, etc, as optimize for safe removal.) But even with it off, it will still sometimes hang at the very end for a little bit, I'm guessing cause it's emptying the cache onto the actual storage. Interesting thing... even my internal HDDs will show up in the "remove hardware" option ... probably cause I have hot swap enabled on the SATA ports. (Also I usually flip the HDDs the other way around, and leave the side panel off my Define R5, cause I'm frequently swapping them.)
  19. Ah. (Also I wonder if the fact that I bought mine in November 2018 makes any difference ... I don't think the Evo Plus existed then but I'm not sure, and idk about the SN750.)
  20. I usually turn caching off on my PC, but for some reason I've still seen clues that some form of caching is active. I might notice the speed of a file transfer start off at ridiculously high speeds (like 1 or 2 GB/s transferring to/from a HARD DRIVE! for a couple seconds or so) before it slows down to more normal / expected speeds, and with task manager open, I'll see my RAM usage increase over the course of several seconds by several GB or so, and remain elevated until the transfer is done. I wonder if there's some other setting that I'm missing to make sure there's no caching going on? (Bonus points for a setting that works across platforms, OS's, etc.).
  21. Hmm does that mean my 970 "just" Evo is a bad drive? (I haven't had any problems with it so far...currently using it as a boot drive on a secondary / tertiary PC (4790K, etc))
  22. Ahh, that's good to know. Looks like I should try copying the program files and the related appdata folders to another place, then uninstall them (through apps/features, or maybe a few have their own uninstallers), reinstall them in the place I want to put them, then copy the backed-up folders over them? (Or is that not the right way to do things?) Yeah, hopefully that won't mess things up. Ahh ... I was wanting the flexibility of installing to another location in case I run out of space on my main drive.

Also ... while I don't plan on setting this up right now, I think I would eventually like to be able to set things up so that various things are isolated / containerized / sandboxed from each other (or whatever it would be called) ... For example, each of:
main host OS (I'm considering multiple OS's, researching hypervisor vs OS options, VMs, etc)
OS settings
installed apps
app settings
other app things, like browser history, cache, etc
installed games
game settings
saved games, other game user data
general user data (photos, videos, documents, music, etc)
(and whatever else I forgot to think of)
would each be on their own dedicated storage, whether it's multiple partitions on one or a couple drives, or maybe later (a few years down the road, if I can eventually afford a Threadripper / Epyc setup), their own separate HDDs / SSDs. A big reason I want to do that is ... if I need to do a reinstall (whether the OS, or programs), or an update, I don't want to lose whatever I have accumulated as far as settings, history, etc.

Also I touched on researching VMs ... A little preliminary research tells me that some VMs can be set up to be portable - as in, move to different hardware. (There's a rough export/import sketch at the end of this post.) (I've also seen a little bit about maybe being able to dynamically allocate CPU and RAM, but I'm still not sure if I'd be able to set that up...) I'm also trying to figure out what type of hypervisor I'd want to use, doing some research on that. So far I've all but ruled out ones that use a web interface, as I will be interfacing with the VMs on a monitor, keyboard & mouse directly plugged into the motherboard, not through a VM; also using a web browser takes up resources that.... well, I think I still have quite a bit to learn. I've been using VirtualBox on Windows, but might also consider KVM on Linux; I would need to figure out a good lightweight distro.

Also I'd want to figure out how to use one GeForce GPU with multiple VMs. I don't plan on gaming on multiple VMs simultaneously, just on one, but I would like to be able to game on a VM instead of the host OS. (I'm most likely planning to install Linux as the host OS, or maybe a type 1 hypervisor but not sure about that, and some of the older / less popular games I play may not support Linux.) While I'd only be gaming with 1 VM, I likely will have several VMs running simultaneously for other things. (Right now I have 5 up and running on this laptop.) I may even eventually possibly have more VMs running than my CPU has threads.... (Edit: crap, forgot to put this in the pic... and I can't snap a shot of the SSD I'll be using yet, since it's in my laptop right now, it's what I'm working on cleaning off, hence this thread.) I already do it with a GTX 970M in my laptop, in which that's the only GPU (technically the i7-6700K has an iGPU, but it's functionally disabled / not hooked up in this laptop), but I really can't game in the VMs.
(3DMark doesn't run, I don't think, and I get maybe 3 or 5 fps in Half-Life 1 / Team Fortress Classic, at very low resolution and settings, like 640x480 or lower.) I really don't want to have Windows as my host OS, but I'd consider running it in a VM for things that require it. I really don't want to buy any extra GPUs .... if I did, it would need to be single-slot (so I don't cover up other expansion slots), not worse efficiency than Maxwell, not more than about $200 or so, performance between a current-gen APU and a 980 / 1070, and any more than 1 extra would have to plug into a PCIe x1 slot. (My board has 3 PCIe x16 slots, but I want to leave one of the x16 slots available for a possible future 8+-port SATA/SAS HBA or something.) (Should some of the other things be split off into a separate thread?)
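(For the moving-VMs-between-machines part, the route I've seen mentioned for VirtualBox is exporting the VM to an appliance file and importing it on the other machine - something like the below, with the VM name and filename as placeholders; I haven't actually tried it myself yet:)

      VBoxManage export "My Ubuntu VM" -o my_ubuntu_vm.ova
      (copy the .ova to the other machine, then)
      VBoxManage import my_ubuntu_vm.ova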
  23. I've used the DBAN utility (Darik's Boot and Nuke) to totally wipe hard drives. To be safe, I make sure that ONLY the drive(s) I want to wipe are physically connected to the PC, I unplug everything else. (I'll usually boot DBAN off a USB/DVD/CD, and that shouldn't show up as an option for a device to wipe; also I think DBAN is not recommended for wiping SSDs.)
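(For SSDs specifically, from what I've read the usual route is the drive's built-in secure erase, or just discarding every block, rather than DBAN-style overwrite passes - on Linux something like the line below, with the device name as a placeholder and the same "only the target drive plugged in" precaution. I haven't needed to do it yet, so treat it as a note to self:)

      sudo blkdiscard /dev/sdX    # TRIMs/discards the whole device; everything on it is gone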
  24. Okay I'll try to be brief... I need to move a few programs that are installed on one SSD to another drive entirely. The list includes:
Fraps (3.5.99, build 15623)
JDownloader 2 (build from 2016-12-22)
MuseScorePortable (2.0.3 - I suspect this one should be easier to move cause it's "portable")
obs-studio (26.0.2)
PCMark 10 Advanced (1.0.1413)
Python 3.7.2
SketchUp Make 2017 (17.2.2255)
TurboTax Deluxe 2018
All of them are in a custom install folder ("D:\Programs"). I suspect MuseScore will be one of the easiest to move cause it's a "portable" version. I'm quite concerned about OBS Studio though ... I really don't want to lose / have to reconfigure settings and other things. Also PCMark - I don't think I can redownload it. Some of the others I don't use much anymore, but want to still have them available for a while just in case I need them.

I really don't want to have to reconfigure things (or whatever I'd have to do after a fresh clean install of the programs). If it weren't for that (and the fact that I can't find some of the installers), I'd be okay with reinstalling the programs. BTW, how forgiving is Linux, vs Windows, for moving programs to a different drive / location after the fact? A brief google search told me that using Apps & Features won't work, cause they weren't installed from the Windows Store ... Also using an app like Steam Mover (was mentioned several times in Google search) won't work either, because, from what I can tell, it makes symbolic links (or junctions) to point to the new locations.

Anyway ... I plan to remove that SSD from the laptop entirely, cause I'm getting ready to set up a partially upgraded PC (new motherboard, CPU, RAM, cooler, retaining some things but will need to do a new OS install). I'm planning to use that SSD as my OS drive in the new setup. I've already gotten almost everything moved off of it (except some pics I have on it temporarily while working on them, a Windows10Upgrade folder, a couple dump files and maybe a few misc tiny other things). (For some reason a couple things in Steam games got broken in the process, though - for example in Team Fortress Classic, it messed with my custom grenade timer and removed some custom death sprites I had ... btw that move was done a month or two ago I think.)

To emphasize ... using sym links, or junctions, WILL NOT WORK ... because when I'm done with this process ... as far as this current computer is concerned, that physical drive will no longer even EXIST. What can I do to have those programs still work without any changes being apparent to the end user? (Okay, that's me, but I hope you know what I mean.) Ordering another SSD is out of the question for now. BTW the SSD in question is a 1TB Silicon Power P34A80. (I also have a pretty-much empty Crucial MX300 1.05TB 2.5" SATA drive ... I'm thinking about using that as an alternate OS install for like dual boot, but I haven't decided yet. In my experience though it's easier to put Windows and Linux on separate physical drives, rather than try to put both on the same one with separate partitions.) (Also I have some questions about multi boot, VMs, etc, for example having a very lightweight (for example needing only a few MB RAM and few 10s of MB disk space by itself, with CPU usage similar to what CPUs had when that RAM & storage were common) hypervisor as my base OS, then running other OS's on top of that so it's easier to make snapshots, etc ...
also possible issues with running VMs, including games, when I only have one GPU (GTX 1060 3GB, not planning to upgrade anytime soon) .... but that's probably better for another thread.)
  25. Okay I'll admit my data in memory (gray matter) is at least a few years old, but I thought even the free version of Resolve supported GPU acceleration? (I think it was with only 1 GPU though, to use multiple GPUs you'd need the paid "Studio" version afaik....) Or has that changed? Also I thought the free version supported 2160p (or at least 3840x2160, not sure about 4096x2160)....