Gerowen

Everything posted by Gerowen

  1. I haven't tried each one in every individual slot, but I have tried each one in one slot from each channel. Swapping power supplies didn't fix it, so I'll try the same thing with the RAM next.
  2. I've swapped the RAM sticks around and re-seated the graphics card; I just dug out a spare power supply and am going to wire it up and see if that fixes it.
  3. It's a double-wide trailer that we bought new, so I can't speak for the wiring inside the walls, but I did all of the wiring outside the house for the service box, grounding rods, etc. myself, so I know that everything from the breaker box in the house out to the service box is good. Past that into the rest of the house, though, I have no idea; I've never had a reason to inspect it after the initial inspection when everything was set up. I do, however, have the machine connected to a UPS. That wouldn't help if the outlet the UPS is plugged into isn't grounded either, but the UPS isn't complaining about a missing ground.
  4. I meant I wasn't sure if it was the case itself, or maybe an exposed metal component of the USB port, 3.5mm audio jack, etc.
  5. I'm not certain I've killed it completely; I don't see how grounding myself through the case could kill something else. I don't recall specifically which component I touched to ground myself today, but if it was a USB port, could that have killed the whole thing? The fact that it comes on and immediately shuts right back off makes me think it might be salvageable. Could I have nuked a RAM stick? Video link:
  6. Just filed for a return through Amazon and boxed the Seagates back up. Edit: When transferring a test file between the internal boot SSD and the RAID array, where the network wouldn't be the bottleneck, I was getting read/write speeds of several hundred MB/s, instead of 40.
  7. Connecting them to the motherboard made no difference, so I put the WDs back in and they just picked up and worked; POST time was back down to about 10 seconds total. When I connected the Seagates directly to the motherboard, boot time skyrocketed for some reason. Two of the three Seagates took a solid 30-plus seconds to appear in the device list during POST (the first one appeared instantly), and the system still sat there for several minutes before continuing to the OS, with the HDD LED staying pretty much solid. When I swapped the WDs back in it worked just fine, and write and read speeds are back up where they're supposed to be.
  8. I've only ever done it once; I ordered a drive that was listed as new, but when I got it, SMART reported 20k "power on hours". They offered me either $100 off or a return/refund. It was a WD Gold and I wasn't desperate for the space at the time, so I just returned it. I'm starting to wish I had taken them up on the $100 discount and just kept that one. The resilver finished this morning, and benchmarking the filesystem that spans all 3 drives reports acceptable write speeds, but read speeds of around 40 MB/s. I logged into another system that is hardwired directly to the same router and tried transferring a file, and it matched the benchmark: read speeds would spike to ~50 MB/s, then seem to hang for a second and drop back down into the 30s before leveling off around 40. Here are the results of an iozone benchmark with a 512 MB file. I understand there's going to be some loss in performance given that it's software RAID 5 plus an encrypted partition on a decade-old 6-core Phenom CPU, but so were the WDs I was using, and they would just about saturate my gigabit Ethernet at around 100 MB/s; a little slower over SFTP since that adds another layer of encryption. I'm going to bring the server down, take out the SATA add-in card, and connect the drives directly to the motherboard to see if that makes any difference. If it doesn't, I'm going to throw my WDs back in there, run the exact same benchmarks, and see what they look like.
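(For reference, the kind of iozone run described above can be reproduced with something along these lines. This is only a sketch: the test-file path is a placeholder and should point somewhere on the array being measured.)

    # Rough sketch of a 512 MB iozone run (write/rewrite and read/reread only).
    # -a varies the record size automatically, -s sets the test file size,
    # -i 0 / -i 1 select the write and read tests, -f is the test file location.
    iozone -a -s 512m -i 0 -i 1 -f /mnt/array/iozone.tmp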
  9. They were listed as "new" on Amazon. As far as SMART tests go, I ran both the short and conveyance tests. After the resilver is finished in a couple of hours I plan on running long ones. I installed Seagate's own "Seatools" and connected one of the drives, and Seatools reported the same crazy values, but when I ran smartctl with a flag to reinterpret the raw 48-bit values, the numbers were normal. I'm very seriously considering just returning all 3, putting my old drives back in, ordering one more of them to upgrade with, and calling it a day. Those WD drives report sensible values in SMART, they report their helium levels so you know if something has happened, they spin up and enumerate much faster so the whole system boots quicker, and their read speeds don't tank during a resilver. Even on startup, 2 of the 3 Seagate drives take an unusually long time to spin up and start communicating. Whether it's my server or my laptop, when I connect one it's a good 30 seconds or more of the light flashing before the drive actually pops up and is ready to be accessed. This means the server sits in the BIOS that whole time waiting on those drives to become responsive during POST.
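(For anyone who wants to try the same thing, one common way to make smartctl reinterpret those Seagate error-rate attributes is shown below. This is a sketch and not necessarily the exact flags used above; /dev/sdX is a placeholder, and attributes 1 and 7 are Raw_Read_Error_Rate and Seek_Error_Rate.)

    # Split the 48-bit raw values so the error count and the operation count
    # are reported separately instead of as one huge combined number.
    smartctl -A -v 1,raw24/raw32 -v 7,raw24/raw32 /dev/sdX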
  10. I'm gonna let the resilver finish, but if their performance doesn't drastically improve afterwards I'll just return them, and I guess I'll start looking for non-essential data to remove from the old drives to tide me over until I find a good deal on my WDs. I haven't wiped my old drives yet and kept backup copies of the encryption keys, so I could literally go back to those in about two minutes.
  11. So for years my home server has been running Western Digital Gold 12TB drives in software RAID 5 (mdadm, Linux) and it has been great. But they were getting full, so I decided to pick up some 18 TB Seagate Exos drives to upgrade to. I updated my backup, replaced the drives, created the array, and started restoring data from my backup to the new drives. The ones I got came from a third-party seller on Amazon selling some old-stock white label drives, but they were indeed new and had 0 power-on hours when I got them. Honestly, I kinda wish I had noticed the reviews mentioning this fact before I bought them, but I was on my phone and didn't go scrolling around the page much beyond verifying they were the product I wanted and were listed as "new". I have a couple of odd observations though. 1) The array immediately entered a "degraded" state and appears to be resilvering one of the drives. This "might" be normal, but I don't recall having this issue before with the Western Digitals. 2) Some of the SMART attributes on these drives are just wacky. Certain attributes like "Raw_Read_Error_Rate" and "Seek_Error_Rate" have crazy high values that my WD drives didn't have, but they seem to be passing SMART tests. I pulled one of the drives and installed Seagate's "Seatools" utility on my laptop thinking it might make more sense of those, but even that utility just displayed the same raw output. The drives say "Passed" when I run SMART tests, and a few things I've found online state that Seagate stores these values differently than other manufacturers (packed into the raw value as hex or something), so a high value isn't necessarily indicative of a problem, but I would think that Seatools at least would show the correct, human-readable value, and it doesn't. Also, they don't report their helium levels at all, unlike the WDs. 3) The array is still resilvering that one drive, but the read speeds are incredibly slow, like 1 MB/s or less. Write speeds seem fine. I'm not sure what the reason is. I'm still restoring data to the array from my external backup drives, plus it's resilvering that one drive, so I'm not expecting full speed, but it just seems odd. I've moved data onto and off my old drives while they were resilvering and not had this issue. I dunno, I'm gonna let the rebuild/resilver finish, but these drives just don't seem to be acting right. Even the resilvering is slow: on the WDs it ran at over 100 MB/s, while this one runs at about 65 MB/s even when I stop all other services that might be using the drives. I had much better performance and much plainer SMART data on my old WD drives. These even seem to take longer to spool up and become accessible; when I shut the server down and put one in an external enclosure to run the tests on it, it takes a solid 20-30 seconds for it to show up. If they don't start acting better after the resilver, I'm very seriously considering just sending them back for a refund and ponying up the extra cash for some WD Golds in 18-20 TB capacities.
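(A quick way to watch an mdadm rebuild and check the kernel's rebuild speed caps is sketched below; the example value is an assumption and should be adjusted to taste.)

    # Show rebuild progress and the current rebuild speed for all md arrays
    cat /proc/mdstat

    # The kernel caps rebuild throughput with these sysctls (values in KB/s);
    # raising the minimum can speed a rebuild up at the cost of foreground I/O.
    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
    sysctl -w dev.raid.speed_limit_min=100000   # example value only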
  12. So I just came across this on YouTube. It "kinda" sounds like him, but it's not as convincing as that other channel called "There I Ruined It". Anyway, just wanted to share.
  13. So the biggest advantage of changing my current setup to ZFS would be the ability to replace three tools (mdadm RAID, LUKS encryption, and the filesystem) with a single tool. Since Nextcloud does file versioning I don't think I'll make much use of the snapshots feature; that seems like something that would be more useful on a root filesystem: take a snapshot before an update so that if it borks something up you can just roll back. ChimeraOS on my gaming PC uses BTRFS on the root filesystem for this reason, I'm pretty sure. For a storage array, though, I could see it being useful to have a cron script take a daily snapshot and keep it for a couple of days before deleting it, especially if multiple people had access; that way, if your new hire accidentally nukes something important, you can just roll it back. For my personal use case, though, since I'm the only one with write access to the whole thing and I've got an entire second copy as a backup, I probably won't use that feature much.
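(A daily snapshot rotation like the one described above could be done with a small cron script roughly like the following. This is only a sketch: the dataset name tank/data and the three-day retention are assumptions, and tools such as zfs-auto-snapshot or sanoid handle this more robustly.)

    #!/bin/sh
    # Take a dated daily snapshot and keep only the newest KEEP copies.
    # Run from cron, e.g.: 0 3 * * * root /usr/local/sbin/zfs-daily-snap.sh
    DATASET="tank/data"
    KEEP=3

    zfs snapshot "${DATASET}@daily-$(date +%Y-%m-%d)"

    # List this dataset's daily snapshots oldest-first and destroy all but the newest KEEP.
    zfs list -H -t snapshot -o name -s creation \
      | grep "^${DATASET}@daily-" \
      | head -n -"${KEEP}" \
      | while read -r snap; do
          zfs destroy "$snap"
        done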
  14. One other random thought. What does the checksums feature of ZFS accomplish that isn't also accomplished by normal parity checks during a scrub of a regular mdadm array?
  15. Two main reasons: 1) When that drive died originally it was still a RAID 1 and I had the option to RMA it, but I really didn't want to send it off while copies of my tax returns, pictures of my kids, etc. still physically existed on those platters. I left it in a hard drive dock for about 24 hours until I got the return shipping label and for some reason it started working again so I was able to run shred on it, and it wiped about 95% of the data before it stopped working again. That incident kinda scared me though so I turned on LUKS encryption as a "just in case". 2) One or two of my friends actually use my Nextcloud service as a backup for their phone photos, so besides my own personal data it could potentially be somebody else's that's at risk if a drive dies and I ship it off somewhere.
  16. So I've got three new drives on the way to upgrade my server. The old ones are going to get repurposed or sold. My home server is running headless Debian that I've set up and configured the way I like, and what I've been doing is: mdadm (RAID 5) -> LUKS encrypted container -> ext4 filesystem. This has worked great. I even converted it from RAID 1 to RAID 5 a few years back while the filesystem was still live and in use, and even forgot and rebooted it during that operation, and it just picked back up where it left off. When I had a drive die, the process of degrading the array and replacing the dead drive was simple and went without a hitch. The server has two primary jobs, Plex and Nextcloud. The Nextcloud data directory and my Plex media folder both live on the array. It's only ever accessed by a handful of people at once and my home network is just gigabit, so performance isn't the be-all end-all, but I would like to retain the ability to saturate gigabit networking when transferring large files. However, I'm considering using ZFS when the new drives arrive, for the following reasons. - A lot of the features I'm getting through the use of multiple, layered solutions are available directly through ZFS itself. Instead of using mdadm for RAID, LUKS for encryption, and then ext4 for the filesystem, ZFS would tick all those boxes by itself. - The one time I did have a drive die while using mdadm, the array was unresponsive until I physically removed the drive. I don't know if this was because of the nature of the failure, or because mdadm wasn't willing to automatically mark the drive as bad and keep going. The failure was of the arm that carries the read/write head, where you could hear it knocking and almost bouncing inside the drive. Once I removed the drive and marked the array as degraded it worked fine on two drives until the replacement arrived in the mail, but I'm wondering if ZFS would have handled this more gracefully. I do have some concerns, though, about using ZFS. - I know the "1GB of RAM per TB of data" guideline is not a hard and fast rule, rather a rule of thumb mostly aimed at people who enable de-duplication. But I've got 24TB of data right now and will have 36TB of available space, and the system only has 16GB of RAM and can't be upgraded, as that's all the motherboard supports. It's an old AM3 socket motherboard from Alvorix that's about 10 years old. Would this be a problem for a system that will be managing the storage AND hosting Plex and Nextcloud at the same time? It's working fine now, but I'm not sure if ZFS would cause issues. - How much of a performance hit is the compression? Can it be turned off when creating the zpool? The CPU is an old 6-core Phenom II and it works fine now with mdadm and LUKS, but I worry that adding compression to the RAID striping calculations and the encryption might incur a noticeable performance hit. I'm just totally new to ZFS. I've known about it for a while but have never implemented it myself, so I'm trying to decide whether to pull the trigger. Since I'll be creating an entirely new array and migrating the data, if I'm going to make the switch, now is the time. Also, what about BTRFS? Would it be a better solution? I know it supports snapshots, checksums and such, but it doesn't support encryption (yet), which I want, so if I went with it I'd have to layer it on top of LUKS like I'm doing now with ext4. Would that have any effect on its ability to do checksums or snapshots? I'm basically just looking for some knowledge and advice.
I'd appreciate anything y'all can educate me on.
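(To make the "single tool" idea concrete, a three-drive raidz1 pool with ZFS native encryption looks roughly like the sketch below. Pool name, dataset name, and device paths are placeholders; compression is a per-dataset property, so it can be set to lz4, which is generally very cheap on the CPU, or simply turned off.)

    # Sketch only: names and device paths are placeholders.
    # raidz1 across three drives is roughly the ZFS equivalent of RAID 5.
    zpool create -o ashift=12 tank raidz1 \
      /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 /dev/disk/by-id/ata-DRIVE3

    # One encrypted dataset for the data; encryption and compression are dataset properties.
    zfs create -o encryption=aes-256-gcm -o keyformat=passphrase \
      -o compression=lz4 tank/data

    # After a reboot the passphrase has to be supplied before the dataset can be mounted.
    zfs load-key tank/data && zfs mount tank/data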
  17. Hate to resurrect a dead(ish) topic, but I just discovered another reason to upgrade, though I can't afford to right now because my income is spoken for until I get back to work next week. I played through Final Fantasy VII Remake Intergrade on my Steam Deck and had a mostly positive experience. Visuals were a little blurry when blown up on the big screen while docked, but the reason I chose not to use my desktop PC is that, even though the game ran fine, there were random audio stutters. Music from the little music shops, background conversations, or sometimes the actual voiced lines during in-engine cutscenes would stutter, and once it started it wouldn't stop. Apparently, while not conclusive, it seems to be generally agreed that the issue is caused by FX CPUs not being able to properly handle the audio engine in that game, especially in busy areas where there might be several separate channels of audio. The game ran great on the desktop, 60ish fps at 1080p high, which is why I'm not totally convinced the CPU is the source of the issue; it seems like I'd see a lower frame rate otherwise. Though I guess the lack of a particular instruction set "could" negatively affect certain aspects without affecting the CPU's ability to get frame data to the GPU. This reminds me of playing the original PC release of Mass Effect. The game will run fine, but there's a whole section of the game where, if your CPU is missing the "3DNow!" extension, your character, and only your character, turns into a black, pixelated blob. There are community fixes and workarounds, but it was another scenario where it wasn't really a lack of ability to send frame data to the GPU; it was a different technical limitation. The Phenom chips that preceded FX had 3DNow!, but it was removed from the FX and Ryzen lines of CPUs. With this game, however, the only "fix" seems to be either to disable certain audio channels altogether, or to just not play it with an FX CPU.
  18. I checked again a few minutes later and everything appears to be back, so maybe they were having issues with their site? Maybe a big update? It still doesn't explain why entire sections of the site were removed, though. It's weird, because it was several hours ago that I first found the site in its broken state, so who knows what they were up to.
  19. Apologies for spelling or grammar, I'm on my phone. So earlier today I fired up one of my VMs in VirtualBox. I tried to forward a USB device, which I've done before without issue, and Windows couldn't recognize it. I had the thought that maybe it's because I had upgraded the host from Debian 11 to 12, and maybe I needed to check for updates and update the extension pack that facilitates things like USB forwarding. The extension pack has always been a separate download on their site, so I headed over to VirtualBox(.)org, and it seems kind of barren. The big old download button takes you to a page with links to the source code and that's about it. The download page used to be filled with links to the extension pack, binary installers for various platforms, repo instructions for various Linux distributions, etc., and literally none of that was there. The only thing that remotely looked like a link to actually download it was for "older builds" like 6.1, but when I clicked that it just took me to a page with a couple of lines explaining that the extension pack was licensed a certain way, with a link to the license, but no links to actually download anything. The only way I can see to download it from their website now is to download the source code and build it yourself. The Debian repo I already have set up appears to be working and contains a copy of 7.0, but if I want to update the extension pack, or even set it up on a different system, I'm just out of luck. Adding a Debian repo requires importing and trusting a signing key for that repository, none of which I can find now. I can only really think of two explanations: 1) VirtualBox is a huge net loss for Oracle and they're cutting it loose. 2) Tons of companies were using the free, personal version of it for business purposes, so they're trying to corral those people into buying a support license and are just removing all the free downloads from their website. Update: I just tried to visit their site on my phone to double check and make sure I'm not just blind before posting this, and the whole site is down now. I don't mind using QEMU on Linux, it works fine. I just kept VirtualBox around because I only occasionally use VMs for a few very specific tasks for my personal use, it was easier to get set up, and when reading from a passed-through optical drive it seemed to perform a little better, at least the last time I tried comparing. I had noticed, though, that the last time I tried starting a Windows 10 VM in VirtualBox, installing the guest additions, which include a virtual display driver, breaks the Aero/transparency effects in the Windows UI so that weird things happen, like the background of the Start menu being completely transparent, and it has been that way for a while. Maybe they are just abandoning it altogether.
  20. Just occurred to me that with straightforward password-based symmetric encryption, you wouldn't need to store the key itself with the data. Run the password through the KDF and that "is" the key; you don't have to save it because it'll just get regenerated when the user enters the correct password.
  21. So, as somebody who isn't a cryptography expert, let me see if I understand something right. I hope y'all don't mind me trying to educate myself, and I'd appreciate some high-level/layman-type feedback so that I can at least have a big-picture idea of how the encryption is working, and of what is or isn't safe against quantum computers. First off, I know (or am pretty sure) that, if all you have is an encrypted message, without its plaintext original as a reference, then symmetric encryption algorithms like AES, Twofish, ChaCha20, etc. are quantum safe. I know that asymmetric algorithms like RSA, ElGamal, or even elliptic-curve algorithms like X25519 are NOT quantum safe, and that variations of Shor's algorithm could theoretically break them, given a quantum computer with enough qubits. If you encrypt something with just a password using age or PGP, it uses symmetric encryption and puts that password through a KDF to turn it into a key. The encrypted copy of the data key is stored with the data itself and can only be unwrapped when the correct password is put through the same KDF. So, this is quantum safe, correct? If you use asymmetric encryption, it generates a one-time symmetric key for something like AES or ChaCha20, encrypts the data with that, then encrypts that symmetric key with the public half of the asymmetric key pair and stores it with the data. This is NOT quantum safe, because even though symmetric encryption is used on the data, the symmetric key is protected by an asymmetric algorithm, so attackers don't need to break AES or ChaCha20; they just need to break the RSA/ElGamal/ECC that's protecting the key. If you use a password-protected asymmetric key pair, does it use a KDF and a symmetric algorithm to protect your asymmetric key pair? Does entering the passphrase then decrypt and unlock that pair? So in effect, if you encrypt data with a password-protected asymmetric key pair, it could essentially work like: SYM → ASYM → SYM. Is it essentially going through 3 layers of encryption in that scenario? One to unlock the asymmetric pair, then another to unlock the symmetric key, then another to unlock the data itself. So when it comes to quantum computers, would asymmetric cryptography be safe if it's password protected, or not safe because the public key is still available and can be used to break it, regardless of the password protection on your private key?
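(As a concrete illustration of the two modes described above, this is roughly what they look like with age. A sketch only: filenames are placeholders and the age1... recipient string stands in for a real public key.)

    # Passphrase mode: the file key is wrapped with a key derived from the
    # passphrase via scrypt, so nothing but the encrypted file has to be kept.
    age -p -o backup.tar.age backup.tar

    # Recipient (asymmetric) mode: a one-time file key encrypts the data and is
    # then wrapped with the X25519 public key - the part a large enough quantum
    # computer could attack, regardless of any passphrase on the private key.
    age-keygen -o key.txt                     # writes the identity, prints the age1... public key
    age -r age1examplepublickey -o backup.tar.age backup.tar
    age -d -i key.txt backup.tar.age > backup.tar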
  22. I've had my Steam Deck and dock for probably a year or more now, and this is the first and only time this has ever happened. Usually, yes, I shut it down or put it to sleep before disconnecting the dock. I'll probably make sure to do that in the future just to be safe.
  23. Actually, it was probably my Linksys switch, though somebody replied to a post I made on Mastodon and said they had seen other reports of this happening, so I'm not sure who to blame at this point. So anyway, I have the official Steam Deck dock from Valve and use it quite a lot. It is hard-wired to a Linksys SE3005 unmanaged switch that is mounted behind the television. This switch provides hard-wiring for the TV, desktop PC, Steam Deck dock, and my Nintendo Switch dock. Last night I was using my Deck docked in desktop mode with a wireless keyboard and mouse. I had Firefox open and was using it to watch a YouTube video, but I had left the video paused for a while while I did something else. The display had gone to sleep, but the Deck itself was still on and active. Eventually I decided it was time to go to bed, so I walked over, unplugged the little docking cable from the Steam Deck, then used the hand-held controls to shut it down. About 10 minutes after going to bed my daughter came in and said that Plex on her Kodi box wouldn't play the next episode. I told her it was probably just the Plex app being finicky and to exit Plex and try again. I didn't hear from her again and I went to sleep. At some point later, about 4:30ish in the morning, I woke up to alerts from my phone; e-mails from UptimeRobot informing me that Plex, Nextcloud, etc. were down. At this point I thought maybe it was just an automatic reboot, since I have the server set to automatically install system updates and, if a reboot is required, perform that reboot at 4-something in the morning. But it usually reboots so fast that UptimeRobot doesn't have time to notice, so I did think it was a bit odd. I waited for 5 minutes or so and then tried to ping my server; nothing. So I got up and walked in to where the server and router and such are to check on things. The server itself was fine, but the router was not. All of the link lights were going crazy, and I noticed that my phone had disconnected from the WiFi and, even though the networks were visible, kept trying and failing to connect. The server itself was unable to ping the router, despite being hard-wired directly into it. Pings mostly failed, or, if they succeeded, had several seconds' worth of latency (not ms, full seconds). So I power cycled the router. When it first came up everything was great, for about 5 seconds, then it crashed again. That's when I brought my laptop in and hard-wired it to the router so I would have a GUI and a web browser to use for troubleshooting. Next I thought that maybe somebody was trying to DDoS my server, so I powered off my ISP's modem. Still no good. Next I thought maybe it was some kind of fluke or configuration error with OpenWRT. I flashed the thing with OpenWRT years ago, and have upgraded it in place once while keeping my old configuration, but got a warning about configs being incompatible between releases and had to do some fiddling with settings to get things working again with the new firmware. So I hit the factory reset button to clear OpenWRT and reset it back to its default state (still OpenWRT, just not configured at all). That still didn't fix it. It was good for about 20-30 seconds, then it went back down. Then I had the thought: maybe it was one of my devices causing issues. So I unplugged everything except the laptop I was using to troubleshoot and power cycled the router, and everything was fine, and stayed that way.
Reconnected the internet, waited, then the PiHole, waited some more, then the server, waited some more. Then I connected the cable that runs to the living room. Boom, dead. Whatever the problem was, it was something to do with the living room, so I unplugged that cable again and went into the living room, and one of the link lights on that switch was going wild. The weird thing is that the TV, Switch, and desktop PC were all powered off and the Steam Deck was physically off its dock and in its case, but the link light that was blinking was the one for the Steam Deck's dock. I unplugged the cable from the back of the dock, waited 5 seconds, then plugged it back in, and the link light came on and stayed solid, no longer blinking. Went back to the router, plugged that cable back in, all good. All I can figure is that for some reason there was data in transit at the exact moment I pulled that cable from my Steam Deck, and instead of just letting those packets die out and dropping them, either the dock (how much logic is really in the official dock from Valve?) or that switch, or some combination of the two, entered some kind of infinite loop and effectively launched a denial-of-service attack against the router. I also power cycled the switch, just to be safe, and since the last backup I took of the router's OpenWRT configuration was from a previous version of OpenWRT, instead of restoring that config and then trying to fix it again, I just decided to spend the next half hour manually reconfiguring everything from scratch on the router, and then made a fresh backup. I also enabled a few more options like "SYN flood protection" and configured the firewall to drop all "invalid" packets, so that hopefully, if this happens again, maybe it will stay contained to that switch and not take down the whole network. Everything has been great since. It's not totally clear to me exactly what happened, but I just thought I would put this out there, perhaps as a cautionary tale. I uploaded a short YouTube video describing the whole thing:
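(For anyone who wants to flip the same switches, those two options live in the defaults section of OpenWRT's firewall config and can be set from the router's shell roughly like this; it's a sketch of one way to do it, and the same settings are exposed in LuCI as well.)

    # Enable SYN flood protection and drop packets in an invalid connection state.
    uci set firewall.@defaults[0].syn_flood='1'
    uci set firewall.@defaults[0].drop_invalid='1'
    uci commit firewall
    /etc/init.d/firewall restart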
  24. I've been kind of attached to the old girl just because it's worked reasonably well for so long. I tried to go beyond just locking it at its 4.3 GHz turbo and I've had it as high as 4.7 GHz, but even if it seems stable, it always eventually crashes. I'm debating whether I lost the silicon lottery, or, after watching LTT's recent video on power supplies, whether my Thermaltake SMART 650W power supply is letting the voltage drop on the CPU power line when I try to push the overclock. My current home server/NAS is running an old AMD Phenom II X6 1045T, so what I may do is just migrate the FX chip into being my home server system for hosting Plex, Nextcloud, and Minecraft, maybe give it a better power supply from Corsair or something, and leverage that GPU for Plex encoding if I can. That Phenom does OK, but when there are 4 or 5 of us on the Minecraft server and somebody forces a chunk load, you can tell it struggles a little for a few seconds, and I feel like the FX would give us a bit more overhead in that kind of scenario. I could even leave the Blu-ray drive in it and just have the server do all of its own ripping and such.
  25. So about 7 years ago I set out to build myself the best AMD gaming rig I could with a budget of about a grand (I ended up spending about $1,200). This was a year-ish pre-Ryzen. I ended up settling on the FX-8370 CPU, 16GB of DDR3 RAM at 1866 MHz, and an MSI RX 480 8GB graphics card. It did surprisingly well and I've enjoyed many great games on it. Even today, I can fire up just about anything I want at 1080p with high settings and expect to get 60ish fps, or more, depending on the game. There were some exceptions either way; DOOM (2016) gave me 100+ pretty much constantly at 1080p max settings, but Fallout 4 never wants to hit 60 fps downtown even when I turn down the graphics settings. I knew that it wasn't the best CPU available even at the time, but it was 8 cores at a high clock speed for a reasonable price, and I wanted to do an all-AMD build. It was the first gaming PC I'd had in a very long time. My last serious one before that had an Nvidia GeForce FX 5500 GPU in it and I had been a console gamer since then, but when Sony ditched backwards compatibility with the PS4 and wouldn't even bring over all the digital stuff I bought on the PS Store through my PS3, they pushed me back to PC gaming. So over the years, I never noticed any major issues when it came to gaming on the rig. Battlefield 1 would hitch occasionally, but for the most part it stayed at or above 60 fps and everything was great. And some of that hitching stopped when I moved from Windows to Debian Linux, so that Windows Update or the disk indexer weren't trying to fire up randomly on their own while I was gaming. Anyway, fast forward a few years and I grabbed myself a Steam Deck. I hadn't gamed seriously in some time because of work and life, and I thought that having the option to pick it up and take it with me would be nice. Plus, I'm a Linux enthusiast, and I thought it was kind of awesome to see a mainstream device of this nature from a well-known company shipping with a Linux distro. Since getting it, about 99% of my gaming has been done on the Deck. Despite the lower GPU grunt compared to my desktop, it's nice to be able to sit in the bedroom or wherever and play my games, and even when I dock it, with some decent AA and sitting 8 or so feet from a 55-inch screen, I don't mind the 720p so much, and some games, like Halo MCC, can even hit 1080p with 100+ fps on the little thing. Plus, the thing draws like 20-25 watts under load while docked, while my desktop will draw over 400 watts to play the same game. But I did make one observation. While the resolution was lower, the framerate seemed more consistent. Battlefield 1, for example, hitched even less when there were a lot of explosions and such happening; it seemed more stable on the Deck. So that got me wondering: was the CPU in my Deck holding up better than the 125-watt FX-8370 in my desktop? So I downloaded Geekbench on all of my devices, made sure no extraneous services like Steam were running in the background, made sure everything's fans were working and that the CPUs were idling, and let them rip. The results were startling, to say the least. I expected the Steam Deck to win in terms of CPU power; I had pretty much surmised as much based on my gaming experience. What I didn't expect, however, is that my Pixel 6a PHONE also thoroughly trounced the old FX-8370. And this is with a minor overclock that put my FX-8370 markedly higher than the average score for an FX-9590, a 220-watt TDP chip that shipped with a water cooler.
So my cell phone, running on a couple of watts of battery power, the thing I use mostly just to send text messages and take photos, has more CPU grunt than a 125-watt desktop CPU from just a few years ago. I knew the poor thing was struggling, but man, that's bad. Even my Blu-ray rips for my Plex server have slowed to a crawl. One of the desktop's only remaining tasks since getting the Deck has been to rip and transcode Blu-ray discs for storage on my Plex server, and when I recently switched from x264 to x265, the encode speed dropped tremendously, to an average of 15ish fps in Handbrake. These benchmark results make it pretty plain that the only reason it has done as well as it has at gaming is that RX 480 GPU. I think it's time to retire the old thing, but honestly, since I'm doing all my gaming on the Deck, I'll probably just build something in a smaller case with an APU of some kind. All I really need it for is that Blu-ray drive, so I don't need a big chonker sitting behind the TV with a big graphics card in it. All I need is something that can turn a Blu-ray disc into an ISO and then convert that into a playable file with Handbrake in a reasonable amount of time, and not suck down several hundred watts doing it. So anyway, here are the results from Geekbench. The FX-8370 was the slowest thing I tested.

FX-8370 OC'd to 4.3 GHz - air cooled with the stock AMD "Wraith" cooler
Single Core: 595 / Multi-Core: 2549
https://browser.geekbench.com/v6/cpu/1945133

Lenovo IdeaPad 330S w/ Ryzen 5 2500U - hooked up to a laptop chill pad for extra airflow
Single Core: 1094 / Multi-Core: 3020
https://browser.geekbench.com/v6/cpu/1945328

Google Pixel 6a w/ CalyxOS - case removed and sitting over an air vent to make sure it didn't thermal throttle
Single Core: 1460 / Multi-Core: 3490
https://browser.geekbench.com/v6/cpu/1945378

Steam Deck - docked with the official dock
Single Core: 1253 / Multi-Core: 4445
https://browser.geekbench.com/v6/cpu/1945217
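(For the rip-and-transcode workflow mentioned above, the Handbrake step can be run headlessly with something like the sketch below. The input ISO, output name, and quality value are placeholders, and it assumes the ISO was already decrypted during ripping.)

    # Convert a ripped Blu-ray ISO into an x265 MKV with HandBrakeCLI.
    # -e x265 selects the software HEVC encoder; -q is constant quality (lower = better).
    HandBrakeCLI -i movie.iso -o movie.mkv \
      -e x265 -q 20 --all-audio --all-subtitles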