
Elbow

Member
  • Posts

    53
  • Joined

  • Last visited

Awards

This user doesn't have any awards

1 Follower

Profile Information

  • Location
    Anonymous proxy

Recent Profile Visitors

459 profile views
  1. Perhaps it is, though I doubt it. Clearly something is very broken with my current OS, so I'll try reinstalling LTSC through a different installer, and if I'm still getting issues I'll go for one of the other Windows 10 editions instead. Normally I could just reset my OS, but even that functionality is missing lol. I wonder though, seeing as I've made a RAID 10 through my MOBO with 4x6TB disks, will I be able to keep the data on it, or will it be wiped as well if I reinstall my OS? I imagine it should be fine since the OS is on a separate 1TB SSD, right?
  2. So I've been using LTSC on my new computer for a couple of months now, and while it didn't have perfect functionality from the start, intentionally or unintentionally, like a bunch of settings options missing, this is all I get now. But in the last few weeks the problems I've been having with it have kicked into overdrive. Currently, whenever I do anything processor-heavy, some random process (can be from Windows, Nvidia, Firefox etc.) pushes the CPU to 100% and either crashes or freezes my PC. I am aware that regular Windows 10 users have had problems with a patch doing similar things, but that shouldn't even be a factor, since LTSC isn't supposed to be making updates to your computer of its own volition. Some of the things I've tried to fix it:
- Q-flashing/updating the MOBO BIOS
- Updating every driver I could get my paws on
- Remounting the CPU + cooler
- Running MemTest86 from a flash drive
- Trying different overclock settings (which worked fine before)
- Setting power settings back to "Balanced"
- Running SFC and DISM scans (the exact commands are in the sketch below)
- Troubleshooting as many of the functionalities as I could (though a lot of things can't be troubleshot, intentionally or unintentionally, due to LTSC)
So far I've had zero progress, and I'm about ready to uninstall the OS since I'm only having problems with it. Is there anyone here who has used LTSC for a longer period of time without any issues, or is it fundamentally broken, so that instead of reinstalling it I'm better off going for one of the regular versions of Windows 10?
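For reference, the two repair scans from the list boil down to roughly this. The sfc and DISM commands are the standard Windows ones; the Python wrapper around them is just my own hypothetical convenience script, not anything Windows or LTSC ships with, and it needs an elevated (administrator) session:

```python
# Rough sketch: run the system-file and component-store repair scans in order.
# sfc /scannow and DISM /Online /Cleanup-Image /RestoreHealth are the real
# Windows tools; this wrapper is hypothetical and only sequences them.
# Must be run from an elevated (administrator) prompt.

import subprocess

def run_repair_scans() -> None:
    """Run SFC, then DISM, letting each print its own progress output."""
    for cmd in (
        ["sfc", "/scannow"],                                      # system file checker
        ["DISM", "/Online", "/Cleanup-Image", "/RestoreHealth"],  # component store repair
    ):
        print(f"Running: {' '.join(cmd)}")
        subprocess.run(cmd, check=False)  # the scans report problems in their own output

run_repair_scans()
```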
  3. I've had similar issues, but with CPU usage, on Windows 10 LTSC. It started a couple of weeks ago: some random process (from Windows, Nvidia, a game etc.) would randomly push CPU usage to 100% and crash my computer. I read about other Windows 10 users having this issue as well due to a recent update, but as far as I understand, LTSC shouldn't be installing updates on its own in the first place (I only get error 0x80080005 if I try updating manually), so no idea why I'm getting it too. I've tried half a dozen fixes but nothing seems to be working. Honestly stumped on how to deal with this.
  4. Yeah, just checked my MOBO manual and saw the same thing; got all 4 of them working now! Thanks a lot for the help, mate! Time to attempt making a RAID 10 in BIOS and come back crying again in 5 minutes haha.
  5. So, as the title says, I've installed 4 x 6TB WD Reds, yet somehow two of them aren't showing up. I could understand it if it was only one, since that might be a production error or shipping damage, but with 2 it's probable that the issue is on my end. I tried switching SATA cables with a spare, but unless I have 3 broken ones at random, that shouldn't be the problem either, and I can tell from the vibrations that they're running, just not being found by the system. I've also made sure that the drivers are up to date, and I've run Windows Memory Diagnostics, but as said, the drives still don't show up in Disk Management, Device Manager or the BIOS. Anyone knowledgeable about these kinds of issues who could hazard a guess as to what the problem might be? I've been googling for an hour now, and I swear, if I read one more article where the solution is "hurr just enable them in Disk Management silly" I'm going to start smashing my head against the wall. Using Windows 10, an Aorus Z390 Master (Gigabyte BIOS), and the drives are WD60EFAX.
  6. Yeah, I think I'll try it tomorrow then. I'm using a Z390 Aorus Master, but IIRC it shares the same BIOS design as other Gigabyte boards, so I should be able to find a guide for it, or maybe it's in my manual somewhere. I wonder if it can do RAID 10, though, or whether I can do it by applying RAID 1 and then RAID 0, or if doing it through the MOBO only allows one configuration of your drives? I don't have any spare internal drives lying around, but it should be a good idea to mark them at least. Since they're all the same model, I wonder how the system names them; by which cable port I'm using, maybe?
  7. I know that with the PG279Q, at least, a lot of units have problems with backlight bleed (upwards of half of them?), so in this case it might be smart to do what I did and get a demo screen. At least the retailers I use are obliged to tell you if something is wrong with it, so you get one of the good ones for cheaper, rather than buying a new one and having to go through the hassle of purchasing/returning until you get one without defects. Otherwise it's a pretty damn good screen.
  8. So I've now got 4 x 6TB drives that I was thinking of turning into a 12TB RAID 10, but I'm not entirely sure about the best way to go about it. I've heard that some people use Windows Storage Spaces to set up theirs, but I've also heard criticism that the software is sub-par and hurts performance compared to other solutions. So what would be the best way to go about it, in your opinion? I know I can set up a RAID through my MOBO, but I'm unsure whether that works for RAID 10; plus it'd make me completely dependent on that particular model if I want to move or restore the data. Any information on this is highly appreciated!
  9. Not your fault for reading it incorrectly, since the sticker covers part of it, but it says R/N, not M/N, which I would assume stands for registry number or something like that.
  10. The R/N is the same, and there isn't anything called "M/N" on it. If you mean S/N, then yeah, they're different. The dates are also 3 days apart, but I'm not sure how long a production cycle lasts over there and whether that guarantees them being different? Yeah, they're a fair bit cheaper where I live, but I imagine it being a pain making a RAID out of them rather than just backing stuff up. The externals have a tendency to use random drives from a selection of the cheapest ones available, and you shouldn't use different drives to create a RAID, not to mention the issue of cracking open the casing if you want to put them in your chassis, which means zero warranty. As said, I've tried to avoid it by buying from different retailers, but in the worst case, if I got a couple of similar ones, I can just put them in different RAID 1s, so even if I lose them at the same time my data is still likely to be fine. And yeah, I do get a 3-year warranty for them, which is pretty nice. RAID 5 was already getting outdated a decade ago, wasn't it? Seeing as almost all drives have a URE rate of 1 in 10^14 bits read, or roughly once every 12TB, if I make a RAID 5, have a drive fail, and have to restore the data from 3 x 6TB drives, probability says I'm very unlikely to succeed (see the sketch after this list for the rough math). That's why RAID 6 has been a thing: it uses two drives for parity, so even if you have one drive fail you can still afford one URE showing up. Why not just use RAID 6 then, you might ask? Partly because of a trait it shares with RAID 5, namely that the recovery process takes a hell of a lot longer with parity drives than with mirror drives, and the longer the rebuild takes (1+ days isn't unlikely for large drives), the larger the chance of something going wrong during the process and you being unable to complete it. Basically, the larger the drives are, the worse they are to use for parity; personally I would stop considering them an option once you start using drives over 4TB or have a very large total array size (which is where technology is heading, which makes RAID 6 increasingly irrelevant). So here's at least my understanding of which RAID you should use depending on your priorities; feel free to skip this part, since it's just me rambling about RAIDs:
RAID 0 - For those who don't care about keeping their data intact and only want speed and the comfort of having a single drive in the system for storage.
RAID 1 - For those who only use two drives and would rather ensure the safety of their data than have twice the storage.
RAID 5 - For those who have only 3 drives of 4TB or less and don't want to get more, an array of many smaller drives, or who want extra storage compared to RAID 6 and don't care that much about data safety or performance but would rather do something than nothing at all. It was a good option 10-15 years ago when file sizes were still fairly small, but with how large they've gotten today, the likelihood of hitting at least 1 URE during a rebuild and thus losing all your data makes it hard to recommend.
RAID 6 - For those with 5 or more medium-sized drives, or many smaller ones, who want more protection than RAID 5 but don't want to lose as much storage as with RAID 10 for the extra level of safety and performance; a middle ground between them. As with RAID 5, it's still not a great idea with large drives, due to the recovery process, or with a large total array size, due to the increased likelihood of hitting more than 1 URE.
RAID 10 - Basically the business standard unless they're on a tight budget. Used with 4 or a larger even number of drives of any size; poor in terms of storage efficiency (half of it goes to mirroring the drives) but much better in terms of safety and everyday performance than RAID 5 and 6. One thing that's worse compared to RAID 6: with RAID 6 you can lose any two drives in the array to disk failure/URE and still be fine, but with RAID 10 you can get extremely unlucky and have multiple failures/UREs land on the same RAID 1 pair. Still, the recovery process is so much safer, since you're just re-syncing a mirror rather than restoring data from parity, and being able to potentially lose 50% of the drives in the system and still be fine more than makes up for it; so not completely perfect, but better. The main reason people use RAID 10 over 6 is the performance, though, which is the best among the options after RAID 0, so it depends on what you want to prioritize.
Note that there are special drives you can get with a URE rate of 1 in 10^15 rather than 10^14, which would make RAID 5 and 6 look a lot better if you're using a large number of smaller drives, but seeing as the VAST majority of drives out there (even among new ones) are rated at 10^14, this is more an exception to the rule than a factor in RAID decision making. I believe WD Gold is rated at 10^15, but that's like 1.5-2x more expensive than my Reds, so yeah, not really a great option for common people building RAIDs at home. This is how I understand it anyway, after reading up on it online a lot, but I would hardly call myself particularly knowledgeable on the subject, and there are even more RAID types; this is at least my general take on the different kinds and who they're intended for.
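To put a rough number on those rebuild odds, here's a back-of-the-envelope sketch. It assumes every bit read during a rebuild independently carries a 1-in-10^14 (or 10^15) URE chance, which is a simplification, since real failures cluster, so treat the outputs as ballpark figures rather than exact odds:

```python
# Back-of-the-envelope rebuild odds, assuming UREs strike independently per
# bit at the drive's rated rate -- a simplification of real failure behaviour.

import math

def rebuild_success_probability(surviving_drives: int, drive_tb: float,
                                ure_rate: float = 1e14) -> float:
    """Chance of reading all surviving drives in full without a single URE."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # TB -> bytes -> bits
    # (1 - 1/ure_rate) ** bits_read, computed in log space to avoid underflow
    return math.exp(bits_read * math.log1p(-1.0 / ure_rate))

# RAID 5 of 4 x 6TB: a rebuild has to read the 3 surviving drives in full.
print(f"RAID 5 rebuild, 10^14 drives: {rebuild_success_probability(3, 6):.0%}")        # ~24%
print(f"RAID 5 rebuild, 10^15 drives: {rebuild_success_probability(3, 6, 1e15):.0%}")  # ~87%
# RAID 10: re-syncing a mirror only reads the one 6TB partner drive.
print(f"RAID 10 re-sync, 10^14 drives: {rebuild_success_probability(1, 6):.0%}")       # ~62%
```

So with ordinary 10^14 drives the RAID 5 rebuild only completes cleanly about a quarter of the time, which is where my "very unlikely to succeed" comes from, while 10^15 drives or a RAID 10 mirror re-sync fare much better.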
  11. So my plan is making a RAID 10 consisting of 4 x 6TB Reds, for 12TB of storage space. The thing is, I've heard (and had it explained) that it's good to buy drives from different production batches, since otherwise there's a higher likelihood of several of your drives failing at once, thus kind of invalidating the whole purpose of investing in safeguarding your data against failure. I've made sure to order mine from different dealers to lower the risk, but I'd like to cross-check to make sure, and if I have a couple from the same batch I can spread them out to different RAID 1s before combining them into RAID 0. So, as said in the title, my question is: which part of the text here shows which batch it's from? I tried using evil censorship company Google, but as usual it misdirects me into miscellaneous stuff with no relevance to my original query. Providing a picture so you guys can see the information I'm talking about.
  12. So, peaking at 1.344V then, with 1.31V in BIOS? I guess I've still got some way left to go then, at least voltage-wise; 1.37V in BIOS should be fine as a max. Am I right to assume that, since going from 5.1 GHz to 5.2 GHz is a ~2% increase, I should increase the voltage by 2% in BIOS, from 1.31 to ~1.34? Obviously I'll try my way forward aiming for the minimum, but I'll use it as a starting point. I could also order some Thermal Grizzly Kryonaut and a cleaning kit to reapply the thermal paste; it probably won't make a huge difference, but hey, a few degrees is a few degrees. I googled "mounting problems Kraken X62" and saw that some people put some extra space between the MOBO and the back screws to increase the maximum screw-in length, thus increasing mounting pressure. I think I have some rubber disc things from spare MOBO screws that I can try applying as well when reapplying the paste. It will probably take a week to get the goods, but that's fine, since I'm still waiting on my third WD Red 6TB to set up a RAID 5, so I still have some tinkering left to do. I'll be back once I've done it to post results!
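Just to show the arithmetic behind that 2% guess: scaling vcore linearly with clock speed is only a rule of thumb for picking a starting point, since real voltage/frequency curves get steeper at the top end, so the final stable value will likely need hand-tuning from there:

```python
# Proportional vcore scaling -- a rough starting-point heuristic only,
# not how real V/f curves behave at the top end.

current_ghz, target_ghz = 5.1, 5.2
current_vcore = 1.31   # current BIOS-set vcore in volts
vcore_max = 1.37       # self-imposed ceiling from the post

# Scale vcore with the clock bump: 5.2 / 5.1 is a ~2% increase.
scaled_vcore = current_vcore * (target_ghz / current_ghz)
print(f"Starting-point vcore: {scaled_vcore:.3f} V")              # ~1.336 V
print(f"Headroom to the max:  {vcore_max - scaled_vcore:.3f} V")  # ~0.034 V
```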
  13. Also, here's the fan curve of the CPU cooler; it might be easier to read than just the numbers from the OP. As said, I think this is already very aggressive, so I don't think there's much improvement to be had here.