
Gerowen

Member
  • Posts

    83
  • Joined

  • Last visited

Awards

Contact Methods

  • Steam
    gerowen
  • Twitter
    gerowen2

Profile Information

  • Gender
    Male
  • Location
    Kentucky, United States

System

  • Operating System
    Debian GNU/Linux


  1. Just filed for a return through Amazon and boxed the Seagates back up. Edit: When transferring a test file between the internal boot SSD and the RAID array, where the network wouldn't be the bottleneck, I was getting read/write speeds in the several-hundred MB/s range instead of 40.
  2. Connecting them directly to the motherboard made no difference, so I put the WDs back in and they just picked up and worked. POST time was back down to about 10 seconds total. When the Seagates were connected directly to the motherboard, boot time skyrocketed for some reason. Two of the three took a solid 30+ seconds to appear in the device list during POST (the first one appeared instantly), and the machine still sat there for several minutes before continuing to the OS, with the HDD LED staying pretty well solid. When I swapped the WDs back in it worked just fine, and write and read speeds are back up where they're supposed to be.
  3. I've only ever done it once; I ordered a drive that was listed as new, but when I got it, SMART reported 20k power-on hours. They offered me either $100 off or a return/refund. It was a WD Gold and I wasn't desperate for the space at the time, so I just returned it. I'm starting to wish I had taken them up on the $100 discount and kept that one. The resilver finished this morning, and benchmarking the filesystem that spans all 3 drives reports acceptable write speeds, but read speeds of like 40MB/s. I logged into another system that is hardwired directly to the same router and tried transferring a file, and it checks out: the read speed would spike to ~50MB/s, then seem to hang for a second and drop back into the 30s before leveling off around 40ish. Here are the results of an iozone benchmark with a 512MB file. I understand there's going to be some performance loss given that it's software RAID 5 on an encrypted partition on a decade-old 6-core Phenom CPU, but my WDs were running in the exact same setup and they would just about saturate my gigabit Ethernet at 100ish MB/s; a little slower over SFTP since that adds another layer of encryption. I'm going to bring it down, take out the SATA add-in card, and connect the drives directly to the motherboard to see if that makes any difference. If it doesn't, I'm going to throw my WDs back in there and run the exact same benchmarks to see what they look like.
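     For anyone who wants to reproduce it, an iozone run along these lines should produce comparable numbers (the test file path is just a placeholder on the array):

        # Sequential write/rewrite (-i 0) and read/reread (-i 1) with a 512MB file and 1MB records
        iozone -s 512m -r 1m -i 0 -i 1 -f /mnt/array/iozone.tmp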
  4. They were listed as "new" on Amazon. As for SMART tests, I ran both short and conveyance tests. After the resilver finishes in a couple of hours I plan on running long ones. I installed Seagate's own "SeaTools" and connected one of the drives, and SeaTools reported the same crazy values, but when I ran smartctl with a flag to convert them from 48-bit hex, the values were normal. I'm very seriously considering just returning all 3, putting my old drives back in, ordering one more of them to upgrade with, and calling it a day. Those WD drives report proper values in SMART, they report their helium levels so you know if something has happened, they populate much faster so the whole system boots quicker, and their read speeds don't tank during a resilver. Even on startup, 2 of the 3 Seagate drives take an unusually long time to spin up and start communicating. Whether it's my server or my laptop, when I connect one it's a good 30 seconds or more of the light flashing before the drive actually pops up and is ready to be accessed. This means the server sits in the BIOS that whole time waiting on those drives to become responsive during POST.
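     For anyone wanting to check similar drives, the commands were along these lines (device name is a placeholder, and I'm going from memory on the exact attribute flags):

        # Kick off the built-in self-tests
        smartctl -t short /dev/sdX
        smartctl -t conveyance /dev/sdX
        smartctl -t long /dev/sdX       # planned for after the resilver

        # Re-display attributes 1 and 7 with Seagate's packed raw values split into readable parts
        smartctl -A -v 1,raw24/raw32 -v 7,raw24/raw32 /dev/sdX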
  5. I'm gonna let the resilver finish, but if their performance doesn't drastically improve afterwards I'll just return them, and I guess I'll start looking for non-essential data to remove from the old drives to tide me over until I find a good deal on some WDs. I haven't wiped my old drives yet and kept backup copies of the encryption keys, so I could literally go back to those in about two minutes.
  6. So for years my home server has been running Western Digital Gold 12TB drives in software RAID 5 (mdadm, Linux) and it has been great. But they were getting full, so I decided to pick up some 18TB Seagate Exos drives to upgrade to. I updated my backup, replaced the drives, created the array, and started restoring data from my backup to the new drives. The ones I got came from a third-party seller on Amazon selling some old-stock white-label drives, but they were indeed new and had 0 power-on hours when I got them. Honestly I kinda wish I had noticed the reviews mentioning this fact before I bought them, but I was on my phone and didn't go scrolling around the page much beyond verifying they were the product I wanted and were listed as "new". I have a couple of odd observations though.
     1) The array immediately entered a "degraded" state and appears to be resilvering one of the drives. This "might" be normal, but I don't recall having this issue with the Western Digitals.
     2) Some of the SMART attributes on these drives are just wacky. Certain attributes like "Raw_Read_Error_Rate" and "Seek_Error_Rate" have crazy high values that my WD drives didn't have, but they seem to be passing SMART tests. I pulled one of the drives and installed Seagate's "SeaTools" utility on my laptop thinking it might make more sense of those, but even that utility just displayed the same raw output. The drives say "Passed" when I run SMART tests, and a few things I've found online state that Seagate stores these values differently than other manufacturers (as packed hex or something), so a high value isn't necessarily indicative of a problem, but I would think SeaTools at least would show the correct, human-readable value, and it doesn't. Also, they don't report their helium levels at all, unlike the WDs.
     3) The array is still resilvering that one drive, but the read speeds are incredibly slow, like 1MB/s or less. Write speeds seem fine. I'm not sure what the reason is. I'm still restoring data to the array from my external backup drives, plus it's resilvering that one drive, so I'm not expecting full speed, but it just seems odd. I've moved data onto and off my old drives while they were resilvering and never had this issue.
     I dunno, I'm gonna let the rebuild/resilver finish, but these drives just don't seem to be acting right. Even the resilvering on the WDs ran at over 100MB/s; this one runs at like 65MB/s even when I stop all other services that might be using the drives. I had much better performance and much plainer SMART data on my old WD drives. These even seem to take longer to spool up and become accessible; when I shut the server down and put one in an external enclosure to run the tests, it takes a solid 20-30 seconds for it to show up. If they don't start acting better after the resilver, I'm very seriously considering just sending them back for a refund and ponying up the extra cash for some WD Golds in 18-20TB capacities.
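     For anyone wanting to watch a rebuild like this, the state and progress come from the usual places (the md device name will vary):

        # Overall array state and rebuild progress
        cat /proc/mdstat
        # Per-member detail, including which drive is being rebuilt
        mdadm --detail /dev/md0
        # Kernel-imposed resync speed floor/ceiling, in KB/s
        cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max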
  7. So I just came across this on YouTube. It "kinda" sounds like him, but it's not as convincing as that other channel called "There I Ruined It". Anyway, just wanted to share.
  8. So the biggest advantage of changing my current setup to ZFS would be the ability to replace three tools (mdadm RAID, LUKS encryption and the filesystem) with a single one. Since Nextcloud does file versioning, I don't think I'll make much use of the snapshot feature; that seems like something more useful on a root filesystem, i.e. take a snapshot before an update so that if it borks something up you can just roll back. ChimeraOS on my gaming PC uses BTRFS on the root filesystem for this reason, I'm pretty sure. For a storage array, though, I could see it being useful to have a cron script take a daily snapshot and keep them for a couple of days before deleting them, especially if multiple people had access; that way, if your new hire accidentally nukes something important, you can just roll it back. For my personal use case, though, since I'm the only one with write access to the whole thing and I've got an entire second copy as a backup, I probably won't use that feature much.
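     A rough sketch of what that daily-snapshot cron job could look like (the tank/data dataset name and the 7-day retention are made up for illustration):

        #!/bin/sh
        # Take today's snapshot of the (hypothetical) tank/data dataset
        zfs snapshot tank/data@daily-$(date +%Y-%m-%d)
        # List daily snapshots oldest-first and destroy all but the newest 7
        zfs list -H -t snapshot -o name -s creation | grep '^tank/data@daily-' | head -n -7 | xargs -r -n 1 zfs destroy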
  9. One other random thought. What does the checksums feature of ZFS accomplish that isn't also accomplished by normal parity checks during a scrub of a regular mdadm array?
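     To put the question another way, these are the two operations I'm comparing (device and pool names are placeholders):

        # mdadm scrub: read every stripe and check that parity is consistent with the data
        echo check > /sys/block/md0/md/sync_action
        cat /sys/block/md0/md/mismatch_cnt
        # ZFS scrub: read every block and verify it against the checksum stored in its block pointer
        zpool scrub tank
        zpool status tank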
  10. Two main reasons: 1) When that drive died originally it was still a RAID 1 and I had the option to RMA it, but I really didn't want to send it off while copies of my tax returns, pictures of my kids, etc. still physically existed on those platters. I left it in a hard drive dock for about 24 hours until I got the return shipping label and for some reason it started working again so I was able to run shred on it, and it wiped about 95% of the data before it stopped working again. That incident kinda scared me though so I turned on LUKS encryption as a "just in case". 2) One or two of my friends actually use my Nextcloud service as a backup for their phone photos, so besides my own personal data it could potentially be somebody else's that's at risk if a drive dies and I ship it off somewhere.
  11. So I've got three new drives on the way to upgrade my server. The old ones are going to get repurposed or sold. My home server runs headless Debian that I've set up and configured the way I like, and what I've been doing is: mdadm (RAID 5) -> LUKS encrypted container -> EXT4 filesystem. This has worked great. I even converted it from RAID 1 to RAID 5 a few years back while the filesystem was still live and in use, even forgot and rebooted it during that operation, and it just picked back up where it left off. When I had a drive die, the process of degrading the array and replacing the dead drive was simple and went without a hitch. The server has two primary jobs, Plex and Nextcloud; the Nextcloud data directory and my Plex media folder both live on the array. It's only ever accessed by a handful of people at once and my home network is just gigabit, so performance isn't the be-all end-all, but I would like to retain the ability to saturate gigabit networking when transferring large files.
     However, I'm considering using ZFS when the new drives arrive, for the following reasons:
     - A lot of the features I'm getting through the use of multiple, layered solutions are available directly through ZFS itself. Instead of using mdadm for RAID, LUKS for encryption and then ext4 for the filesystem, ZFS would tick all those boxes by itself.
     - The one time I did have a drive die while using mdadm, the array was unresponsive until I physically removed the drive. I don't know if this was because of the nature of the failure, or because mdadm wasn't willing to automatically mark the drive as bad and keep going. The failure was of the arm that carries the read/write head, where you could hear it knocking and almost bouncing inside the drive. Once I removed the drive and marked the array as degraded, it worked fine on two drives until the replacement arrived in the mail, but I'm wondering if ZFS would have handled this more gracefully.
     I do have some concerns with using ZFS though:
     - I know the "1GB of RAM per TB of data" guidance is not a hard-and-fast rule, rather a rule of thumb for people who enable de-duplication. But I've got 24TB of data right now and will have 36TB of available space, while the system only has 16GB of RAM and can't be upgraded, as that's all the motherboard supports. It's an AM3 socket motherboard from Alvorix that's about 10 years old. Would this be a problem for a system that will be managing the storage AND hosting Plex and Nextcloud at the same time? It's working fine now, but I'm not sure if ZFS would cause issues.
     - How much of a performance hit is the compression? Can it be turned off when creating the zpool? The CPU is an old 6-core Phenom II and it works fine now with mdadm and LUKS, but I worry that adding compression on top of the RAID striping calculations and the encryption might incur a noticeable performance hit.
     I'm just totally new to ZFS. I've known about it for a while but have never implemented it myself, so I'm trying to decide whether to pull the trigger. Since I'll be creating an entirely new array and migrating the data, if I'm going to make the switch, now is the time. (A rough sketch of the kind of layout I'm picturing is below.)
     Also, what about BTRFS? Would it be a better solution? I know it supports snapshots, checksums and such, but it doesn't support encryption (yet), which I want, so if I went with it I'd have to layer it on top of LUKS like I'm doing now with EXT4. Would that have any effect on its ability to do checksums or snapshots?
     I'm basically just looking for some knowledge and advice. I appreciate anything y'all can educate me on.
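     For concreteness, this is roughly the ZFS layout I have in mind (pool/dataset names and device paths are placeholders, not a finished plan):

        # One raidz1 vdev across the three 18TB drives (the rough ZFS counterpart to RAID 5)
        zpool create -o ashift=12 tank raidz1 /dev/disk/by-id/DRIVE1 /dev/disk/by-id/DRIVE2 /dev/disk/by-id/DRIVE3
        # Native encryption on the dataset, prompting for a passphrase instead of layering LUKS underneath
        zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/data
        # Compression is a per-dataset property and can simply be turned off
        zfs set compression=off tank/data
        # Cap the ARC (here at 4GiB) so Plex and Nextcloud keep most of the 16GB of RAM
        echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max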
  12. Hate to resurrect a dead(ish) topic, but I just discovered another reason to upgrade, though I can't afford to right now because my income is spoken for until I get back to work next week. I played through the Final Fantasy 7 Intergrade remake on my Steam Deck and had a mostly positive experience. Visuals were a little blurry when blown up on the big screen while docked, but the reason I chose not to use my desktop PC is that, even though the game ran fine, there were random audio stutters. Music from the little music shops, background conversations, or sometimes the actual voiced lines during in-engine cutscenes would stutter, and once it started it wouldn't stop. Apparently, while not conclusive, it seems to be generally agreed that the issue is caused by FX CPUs not being able to properly handle the audio engine in that game, especially in busy areas where there might be several separate channels of audio. The game ran great on the desktop, 60ish fps at 1080p high, which is why I'm not totally convinced the CPU is the source of the issue; otherwise it seems like I'd see a lower frame rate. Though I guess the lack of a particular instruction set or something "could" negatively affect certain aspects without affecting the CPU's ability to get frame data to the GPU. This reminds me of playing the original PC release of Mass Effect. The game runs fine, but there's a whole section where, if your CPU is missing the "3DNow" extension, your character, and only your character, turns into a black, pixellated blob. There are community fixes and workarounds, but it was another scenario where it wasn't really a lack of ability to send frame data to the GPU; it was a different technical limitation. The Phenom chips that preceded FX had 3DNow, but it was removed from the FX and Ryzen lines of CPUs. With this game, however, the only "fix" seems to be either to disable certain audio channels altogether, or to just not play it with an FX CPU.
  13. I checked again a few minutes later and everything appears to be back, so maybe they were having issues with their site? Maybe a big update? It still doesn't explain why entire sections of the site were removed, though. Weird, because it was several hours ago that I first found the site in its broken state, so who knows what they were up to.
  14. Apologies for spelling or grammar, I'm on my phone. So earlier today I fired up one of my VMs in VirtualBox. I tried to forward a USB device, which I've done before without issue, and Windows couldn't recognize it. I had the thought that maybe it's because I had upgraded the host from Debian 11 to 12, and maybe I needed to check for updates and update the extension pack that facilitates things like USB forwarding. The extension pack has always been a separate download on their site, so I headed over to VirtualBox(.)org, and it seems kind of barren. The big old download button takes you to a page with links to the source code and that's about it. The download page used to be filled with links to the extension pack, binary installers for various platforms, repo instructions for various Linux distributions, etc., and literally none of that was there. The only thing that remotely looked like a link to actually download it was for "older builds" like 6.1, but when I clicked that it just took me to a page with a couple of lines explaining that the extension pack was licensed a certain way, with a link to the license, but no links to actually download anything. The only way I can see to download it from their website now is to grab the source code and build it yourself. The Debian repo I already have set up appears to be working and contains a copy of 7.0, but if I want to update the extension pack, or even set it up on a different system, I'm out of luck. Adding a Debian repo requires importing and trusting a signing key for that repository, none of which I can find now.
     I can only really think of two explanations: 1) VirtualBox is a huge net loss for Oracle and they're cutting it loose. 2) Tons of companies were using the free, personal version of it for business purposes, so they're trying to corral those people into buying a support license and are just removing all the free downloads from their website.
     Update: I just tried to visit their site on my phone to double-check and make sure I'm not just blind before posting this, and the whole site is down now.
     I don't mind using QEMU on Linux, it works fine. I just kept VirtualBox around because I only occasionally use VMs for a few very specific tasks for my personal use, it was easier to get set up with, and when reading from a passed-through optical drive it seemed to perform a little better, at least the last time I tried comparing. I had noticed though that, the last time I tried starting a Windows 10 VM in VirtualBox, if you install the guest additions, which include a virtual display driver, it breaks the aero/transparencies in the Windows UI so that weird things happen, like the background of the start menu being completely transparent. And it has been that way for a while. Maybe they are just abandoning it altogether.
  15. It just occurred to me that with straightforward password-based symmetric encryption, you wouldn't need to store the key itself with the data. Run the password through the KDF and that "is" the key; you don't have to save it because it'll just get regenerated when the user enters the correct password.
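     Something along these lines is what I mean; the key is derived from the passphrase on both ends, so only the ciphertext (with its embedded salt) ever gets stored (file names are placeholders):

        # Encrypt: the AES key is derived from the passphrase with PBKDF2 and never written anywhere
        openssl enc -aes-256-cbc -pbkdf2 -iter 200000 -salt -in secrets.tar -out secrets.tar.enc
        # Decrypt: entering the same passphrase regenerates the same key
        openssl enc -d -aes-256-cbc -pbkdf2 -iter 200000 -in secrets.tar.enc -out secrets.tar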