
LaPlume

Member
  • Posts: 92
  1. Hi, I'm dealing with a peculiar problem. Supposedly, since AGESA 1.0.7.2 (circa 2017), we get the beloved "Disable AMD PSP" option on AM4 boards. My board, an ASRock X570S Riptide PG (link), shipped with BIOS version 1.10, aka AGESA 1.2.0.3b, and I tried both 2.20 (AGESA 1.2.0.6b) and L5.01, a beta (AGESA 1.2.0.8). These BIOSes seem extremely... lacking. Especially for an X570(S) chipset, and especially when I compare them to the Asus Pro WS X570-ACE that lives in one of my other machines. Less fine control in general, but most importantly, they lack the very option I care about most: "Disable AMD PSP". I couldn't, for the life of me, find it. Not in AMD PBS, not anywhere. Have I just missed it while keyboard-navigating these 3 BIOS versions for hours on end?

Yes, I googled the issue. Sadly, every time I mentioned "AMD PSP", it would pull up everything under the sun, nearly nothing about ASRock, and nothing useful; and every time I didn't mention it, nothing about the option showed up in ASRock-related results. r/ASRock barely has any topics even mentioning AMD PSP (just a few), and nothing that would help me there either. I sent an email ASRock support's way, but no answer yet. So I'm wondering: anyone here with some pointers/info on this?

EDIT: After combing in even more depth, not on the internet, but by opening some BIOSes from the fateful 2017 era of disabling the PSP, plus some I had from another board, I realized that the option simply isn't exposed anymore in that other board's more recent BIOS versions!? Basically, AMD (or, suspiciously, all the manufacturers at once) did a double-back and middle finger once the storm passed and went back to not offering the option.
I also discovered that I somehow effed myself over on one of my boards, since I recently updated its BIOS (which obviously came with the caveat that I can't revert to an older one afterward), and that BIOS version is post-PSP-option-deletion. My options from there are to either try to mod the BIOSes I have on hand with UEFI-Editor to unhide the option, or play it safe and just use setup_var.efi to "flip the switch" without modding. Overall, my feathers and jimmies are ruffled.
  2. Does this topic take anti-suggestions? A Canadian computer company basically made me lose a job, my savings and more. There is a company calling themselves "Canadian computer 'developers'" who basically just sell Clevo designs, and nowadays make you pay extra for unlocked BIOSes. I won't name them until I get the greenlight that it's okay to post anti-suggestions, but here it is, spoilered to avoid "visual spam" in case anti-suggestions aren't okay: Edit: TL;DR: A Canada-based computer company sold me a non-working computer for ~6,000 USD total on paper, which I tried to RMA at my own expense despite them trying to ghost me during the warranty period, which still didn't work, and they have been intentionally ghosting me ever since.
  3. A first possible solution could be a simple Unraid server (which is user-accessible and isn't too tricky to work with) or another Linux distro, plus any kind of VPN tunneling to bring the server's shares to users. For the exact VPN piece of software, WireGuard, OpenVPN, OpenConnect... options are many, and all of them are available as a plugin, a Docker container, or both. This would allow remote access to company data without having to go through too much security hardening. Without going proprietary and all, you just have to ask people to install a VPN client and enter a user key and server address (or, for some software, import a file), then add a network drive; it's not the most difficult process. But I may be overestimating corporate drones' tech abilities on that one!

An alternative to a VPN, and in my opinion "more streamlined", could be a private cloud. Again, an Unraid box for example, with ownCloud or Nextcloud as the software. It allows user logins, calendar shares, discussions and so on. Browser access. Optional client software to live-sync a folder between a client and the server, plus folder/file sharing options with or without a password or auth. It would be the better solution in the DIY category. HTTPS end-to-end would be handled by Let's Encrypt (tested working over No-IP dynamic DNS if needed). Beyond the hardware and the Unraid license, no cost (unless some paid plugins/features of Nextcloud or ownCloud are really needed, but both suites are already plenty functional as is).

Side note: this ordeal doesn't seem like it's your job / a you problem. Tbh, if I were in an office job and my boss asked me to do a server deployment, I'd make him a quote for the whole thing at close-to-IT-market prices, because I ain't gonna be building and deploying a server if that kind of headache is to be paid at a coffee-carrier's wage. Because if you're a bit tech-savvy, it shouldn't be hard with ownCloud/Nextcloud...
But even if it's more streamlined than a whole VPN contraption, both for you and for users, you'll still need to get every corporate drone to understand how to connect to something, and they may not be willing to put in the effort to listen or cooperate, knowing that if the thing fails, it won't be "theeeeir fault", but yours, because you were in charge of it.
  4. External storage wouldn't be great: slower transfer and seek times, and less reliable on average because it's exposed. The best thing to do is just to have an SSD + HDD inside: SSD for the OS and software, HDD for some user folders like My Documents, Downloads and Videos.
  5. ^ agreed. That motherboard is basically electronic waste even brand new. Though unlike what Jumballi said, you shouldn't really have issues upgrading it; it's all standard components in there, no OEM proprietary stuff. But PCPartPicker's homepage already automatically advises a superior config, the "Modest AMD Gaming Build", for just a bit more money, so it would be a bit sad to go for this prebuilt.

PCPartPicker Part List: https://pcpartpicker.com/list/HxWp3t
CPU: AMD Ryzen 5 2600 3.4 GHz 6-Core Processor ($119.99 @ Newegg)
Motherboard: *Gigabyte B450M DS3H Micro ATX AM4 Motherboard ($72.98 @ Amazon)
Memory: *Team T-FORCE VULCAN Z 16 GB (2 x 8 GB) DDR4-3000 Memory ($67.98 @ Amazon)
Storage: *PNY CS900 500 GB 2.5" Solid State Drive ($59.99 @ Amazon)
Storage: *Seagate Barracuda Compute 2 TB 3.5" 7200RPM Internal Hard Drive ($54.98 @ Newegg)
Video Card: *Gigabyte GeForce GTX 1660 Ti 6 GB OC Video Card ($274.99 @ Newegg)
Case: Fractal Design Focus G Mini MicroATX Mini Tower Case ($63.98 @ Newegg)
Power Supply: *SeaSonic FOCUS Gold 550 W 80+ Gold Certified Semi-modular ATX Power Supply ($94.00 @ Amazon)
Total: $808.89
Prices include shipping, taxes, and discounts when available
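As a quick sanity check on the quoted list, the line-item prices do add up to the stated total. A minimal sketch, with the prices copied verbatim from the part list above:

```python
# Sanity-check the PCPartPicker total against the individual line items.
# Prices are taken verbatim from the quoted part list.
prices = {
    "CPU (Ryzen 5 2600)":           119.99,
    "Motherboard (B450M DS3H)":      72.98,
    "Memory (16 GB DDR4-3000)":      67.98,
    "SSD (PNY CS900 500 GB)":        59.99,
    "HDD (Barracuda 2 TB)":          54.98,
    "GPU (GTX 1660 Ti)":            274.99,
    "Case (Focus G Mini)":           63.98,
    "PSU (FOCUS Gold 550 W)":        94.00,
}

# Round to cents to avoid binary floating-point noise in the sum.
total = round(sum(prices.values()), 2)
print(total)  # 808.89, matching the list's stated total
```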
  6. My call would be to wait until you have a bit more budget and get on a more modern platform. Anno, Assassin's Creed etc., even though they're well optimized for multi-threading, still aren't video renders, and for maybe another $100 you could transition to a Zen 2 motherboard for a Ryzen 3xxx or next-gen 4xxx CPU, with more throughput per core, the same number of threads, and the same amount of RAM. Imo a good investment, since you've already almost maxed out your current platform. Keep in mind also that Xeon CPUs, motherboards and compatible RAM still retain some interest and a bit of value on the used market, so it wouldn't be a net $500 expense either. Or yes, a new GPU, but it might be somewhat bottlenecked depending on the game.
  7. So, you do manage to get to POST with either the GPU out or a single stick of RAM. When you started with both sticks and no GPU, it managed to boot the OS before crashing. When you started with one stick removed and no GPU, you say you were able to access the BIOS, but that's it. No drive showing? For now, it's hard to start building up a theory...

Knowing that a machine-check exception is usually linked to an uncorrectable hardware error, it sounds like the RAM might be the one causing the issue. Since integrated graphics are a thing on the 9700K, and the iGPU uses RAM as VRAM, that could also point toward RAM; maybe with the GPU in, it fails to juggle the two graphics adapters? For now, there's no way yet to tell whether it's the RAM, the GPU or the motherboard.

SO, HERE'S THE PLAN: take your old motherboard and CPU, plop them on top of your motherboard box, plug power into them, memtest your RAM and 3DMark your GPU in that "old config", checking for errors and stability issues. If these components show no sign of a defect, it would indeed most likely be your new motherboard's fault. (DOA CPUs are extremely rare.)
  8. Yeah, first step is to check connections. EPS connector, GPU power, etc. Verify that the 24-pin is deeply shoved in at both ends and you should be set. If that's all checked... well, it sounds like either a PSU issue, a motherboard issue, or an unlikely braindead CPU. The first "swaparoony" test I'd advise would be to jury-rig another PSU (aka: disconnect every cable from the RM850x, take another PSU, jank-plug all the cables in, not aiming to do it tidily, just to check whether you POST when feeding your rig from another PSU). Typically, if it were a RAM or CPU issue, even if it doesn't boot, it at least should "react": a light blink or a fan jerk, even for half a second. It really sounds like a dead PSU or motherboard. ... I'll have you know I hate Corsair PSUs, so that's my prime suspect. But I also hate a lot of MSI products, especially some motherboard and GPU series they made. And if it's not the PSU, it does sound an awful lot like a DOA motherboard. But that's so much more of a pain to swap, so you should defo check-swap the PSU first.
  9. Well, it does sound like your RAM is flipping you the bird. Also, I don't think performance is that important compared to sheer stability. See if the issue is still there at a lower RAM speed with a MemTest64 run, but trying another RAM kit to check whether the BSOD issue persists would be the better way to go... EVEN if you have to try it with a low-speed DDR4 kit. I'd defo trade a few fps for just not crashing in the middle of something, and I hope you realize you would too.
  10. Monolith, da stealth box.

Compute part:
CPU: Ryzen 3900X
MB: Asus Pro WS X570-ACE
RAM: 4x 16 GB 3600 MHz CAS 16 G.SKILL Neo
GPU: XFX 5700 XT THICC III
PSU: Seasonic FOCUS PX-750 80PLUS, with a CableMod set for GPU, ATX and CPU power, a 24-pin right-angle connector and 2x U-turn 8-pins

Storage:
- 1 TB Corsair MP600 NVMe M.2 PCIe gen 4*
- 2 TB Seagate Compute SATA 2.5" SSD*
*(internal SSDs are only used for software and renders)
- 10 Gbps SFP+ link to my Unraid array (with SSD cache) through SolarFlare SFN5122F cards

Cooling:
3x NF-R8 redux
2x NF-A12x25
1x cannibalized Eisbaer 120mm AIO
1x Alphacool NexXxoS UT60
A bunch of 45° and 90° Alphacool fittings
Flow pattern: CPU to back exhaust rad, then front intake rad, so as to spit the very hot air directly out the back and take in fresh or (worst-case scenario) tepid air.

Project: I do plan on printing a nylon or PETG shroud to separate the GPU side from the CPU side. As the GPU spits warmth from its side, I would prefer the intake fan to push all of that to the back grill instead of having it drift over to the CPU side, where it tends to somehow stagnate a bit (these R8 redux fans are not great at pulling air through that thick rad). I might also print a TPU gasket to make a seal between the fans and the rad.

Front IO, drive mounting and additional intake:
1x Evercool HD-AR-ARMOR (replaced fan)
1x Icy Dock MB082SP (in the HD-AR-ARMOR, to mount the 2 TB SSD)
1x SilverStone SST-FP510 for front USB 3.1 + hot-swap 2.5" bay + hot-swap 3.5" bay

Case: 4U 4088-S
Side note: notice on this link how the support plate is supposed to bend inward for PCI support, and where the 3.5" drive cages are. No, it wouldn't have worked with my GPU or cooling solution. Yes, a Dremel was involved. Some gentle hammering of the dual 80mm rad too. No, I have no shame. Don't ever tell me it won't fit.
  11. For the redundancy:
- RAID 1 is the "simplest" kind, but with such large-capacity drives, rebuild time would be a real hassle if one of the two mirrored drives went down. You would need 2 drives of the same capacity. For RAID 10, you would need 4, ending up with the capacity of 2.
- RAID 6 could be an interesting alternative: smaller-capacity drives (thus shorter rebuild times), but in higher count, while keeping fault tolerance. There is also a performance advantage to it. For counts and capacity, RAID 6 asks for a minimum of 4 disks, but it's not interesting at such a low number, because you end up with the capacity of just 2 disks. 6x 4 TB disks in RAID 6 would give you 2-drive fault tolerance with a total capacity of 16 TB, 4x the read speed and the same write speed.
- RAID 5 would be the middle solution: 1-drive fault tolerance; 5x 4 TB disks would give you 16 TB of storage at roughly 4x the read speed. But keep in mind that rebuilding a failed drive puts a lot of stress on the whole array, increasing the risk of another drive failing, and then you're foobar.

My personal pick in that case would be RAID 6, but that has always been a fiery debate around the coffee machines next to IT rooms. No matter which you choose, though, you'd probably prefer a real RAID card to handle the matter, not the BIOS (software RAID I would only advise when going ZFS or Unraid-like arrays). Because if you let the BIOS handle it and your motherboard dies, you might not be able to import the array on a different board due to some fluke, and you might have to source the exact same board just to get your data back. If your RAID card dies, importing the array into another card of the same model usually isn't much hassle, and they aren't too expensive, so you can keep a spare just in case. Plus, if you're diagnosing suspected hardware-related RAID issues, it's always easier to swap a PCIe card than a motherboard.
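The capacity and fault-tolerance arithmetic above can be sketched like this (a rough model of the standard RAID levels, ignoring filesystem and controller overhead):

```python
# Usable capacity for n identical disks of `size_tb` TB at each RAID level.
# RAID 1 mirrors everything; RAID 10 stripes mirrored pairs;
# RAID 5 spends one disk's worth on parity, RAID 6 spends two.
def raid_usable_tb(level: str, n: int, size_tb: float) -> float:
    if level == "raid1":
        return size_tb            # capacity of a single disk
    if level == "raid10":
        return (n // 2) * size_tb # half the raw capacity
    if level == "raid5":
        return (n - 1) * size_tb  # one disk of parity
    if level == "raid6":
        return (n - 2) * size_tb  # two disks of parity
    raise ValueError(f"unknown level: {level}")

print(raid_usable_tb("raid6", 6, 4))  # 16 TB from 6x 4 TB, survives 2 failures
print(raid_usable_tb("raid5", 5, 4))  # 16 TB from 5x 4 TB, survives 1 failure
```

Both layouts land on 16 TB here, so the trade is really disk count and rebuild risk versus the second parity disk.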
Flashing a RAID card's firmware, or even customizing it, isn't as fiddly or risky as trying to wedge a custom BIOS or patch into a motherboard. Some RAID cards can even be flashed with the firmware of other RAID cards to unlock options or get around issues tied to a particular brand or product series. It's just all-around more serviceable.

For the RAM:
- You may or may not need ECC. My take until now would have been to say you should go ECC whenever you can, but that's because I'm used to refurbished Intel-based servers where RAM speed isn't that crucial. Here it's trickier, since slower RAM would be a notable performance hit.
- If you decide to go for ECC though, make sure to stick to 2933 MHz (which is the fastest I could find) and only Samsung B-die ICs, which have better compatibility with AMD Zen 2.

Cooling-wise, I support the proposition of a Dark Rock Pro TR4. A Cooler Master Wraith Ripper would do the job too.
  12. Supplementary information: they're 2-pin connectors. The UPS doesn't care about the RPM nor about managing it; I guess the voltages it sends are all hardcoded for the stock fans. They are 12 V DC, so it shouldn't be too much of an issue to use regular PC fans, outside of having to mangle the extra 1 or 2 leads out of the connector. So the question really is: aside from NF-A8s, what other fans should I consider, with silence in mind, while still needing them to not blow up under load?

EDIT: You can basically ignore this post if you come across it. I ended up doing it, and it worked nicely; it brought the UPS down to under 30 dB, my place's noise floor.
  13. My view on this is that you could cheap out a bit on the keyboard and maybe the mouse, you can find a better NVMe SSD than your SATA III one for the same price, and you can defo buy a Windows license key for 20 bucks on Amazon. With the money spared here and there, it would then be worth getting a 3rd-gen Ryzen: better single-thread performance and more threads, both better for editing, streaming and gaming. Also, if you plan to edit/game seriously, 75 Hz with meh colors is not the best for gaming or editing. At least it's IPS, so better colors than TN for editing I assume, but worse for gaming because of pixel response time.
  14. That's not the kind of build I usually feel comfortable giving advice on (I'm more into refurb servers for virtualization, and gaming PCs / editing-CAD-3D workstations). THOUGH, even if storage isn't your priority at all, I would give it some thought:
- Redundancy. Even if storage isn't your priority, if a drive has an issue, ESPECIALLY since you said >REMOTE< troubleshooting, you don't want your stuff going down (ever, really). Go for RAID 1, or any kind of software mirroring with failover, that stays bootable and usable if one of the two drives goes down.
Also, IPMI or KVM-over-IP could be a worthwhile investment for anything remotely remote. Though I wouldn't advise making them face the world wide web; just a Pi running an OpenVPN server next to them could be secure enough.
PSU-wise... over the past 5 years, I have had my fair share of friends going with Corsair PSUs that flat out crapped on their configs, be it stability under load or full-on magic smoke. The RAM is really slow for Zen 2, tbh. Cooling depends on the load, but seems light too. If it were me, I'd stick to Seasonic and Cooler Master.
  15. Side note: no, it's not that oversized for my use. To be plugged into it:
- an R720 with 2x 10-core 115 W TDP Xeons, 6 drives, a 150 W GPU and around as much again in other PCIe cards, including a 10 Gbps SFP+ card
- an R420 with 2x 6-core CPUs, a GPU and a 10 Gbps SFP+ card
- an R210ii with a 10 Gbps SFP+ card
- a Ryzen 9 3900X + RX 5700 XT + 10 Gbps SFP+ card
- a Z600 with 2x X5500 Xeons and 4 add-on cards
- a 21:9 29-inch screen, 32 W
- a 4:3 20-inch Acer L2012
- the fiber router and a crappy temporary switch
Side note²: the 3D printer will eventually get plugged in too, if the UPS can handle it.

Why? Because I'm done with waking up to all of my boxes shut down because the power went out for 1 minute 4 hours ago, and everything just got crash-shut-down. Also, the power in my place is dirty af; the light bulbs change shade as soon as a bit of wind blows.
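To back up the "not oversized" claim, the combined draw can be roughed out per device. Every wattage below is my own guess based on nameplate/TDP figures, not a measurement from the post, so treat the total as a ballpark only:

```python
# Back-of-the-envelope load estimate for everything hanging off the UPS.
# All wattages are assumed worst-case figures, NOT measured values.
loads_w = {
    "R720 (2x 115 W Xeons, 6 drives, 150 W GPU, PCIe cards)": 550,
    "R420 (2x 6-core Xeons, GPU, SFP+ card)":                 300,
    "R210ii (+ SFP+ card)":                                   120,
    "Ryzen 9 3900X + RX 5700 XT desktop":                     450,
    "Z600 (2x Xeon X5500, 4 add-on cards)":                   350,
    "21:9 29-inch monitor":                                    32,
    "4:3 20-inch Acer L2012":                                  40,
    "fiber router + temporary switch":                         30,
}

total_w = sum(loads_w.values())
print(total_w)  # 1872 -> close to 1.9 kW of assumed worst-case draw
```

Under these assumptions the combined load approaches 2 kW, which is well into large-UPS territory even before adding the 3D printer.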