LaPlume

Member
  • Content Count: 85

About LaPlume

  • Title: Member
  1. Check connections: EPS connector, GPU power, etc. Verify that the 24-pin is fully seated at both ends. If all that checks out... well, it sounds like either a PSU issue, a motherboard issue, or (unlikely) a braindead CPU. The first "swaparoony" test I'd advise is to jerry-rig another PSU: disconnect every cable from the RM850x, take another PSU, and jank-plug all the cables in, not aiming to make it tidy, just to check whether you POST when feeding your rig from another PSU. ... I'll have you know I hate Corsair PSUs, so that's my prime suspect. But I also hate a lot of MSI products, especially some motherboard and GPU series they've made. And if it's not the PSU, it does sound an awful lot like a DOA motherboard. But that's much more of a pain to swap, so you should defo check-swap the PSU first.
  2. Well, it does sound like your RAM is flipping you the bird. Also, I don't think performance matters that much compared to sheer stability. See if the issue is still there at a lower RAM speed with MemTest64, but trying another RAM kit to check whether the BSOD issue persists would be the better way to go... EVEN if you have to try it with a low-speed DDR4 kit. I'd defo trade a few fps for just not crashing in the middle of something, and I hope you realize you would too.
  3. Monolith, da stealth box.

     Compute part:
     CPU: Ryzen 3900X
     MB: Asus X570 ACE pro
     RAM: 4x 16GB 3600MHz CAS 16 G.Skill Neo
     GPU: XFX 5700XT THICC III
     PSU: Seasonic FOCUS PX-750 80PLUS
     CableMod set for GPU, ATX and CPU power, with a 24-pin right-angle connector and 2x U-turn 8-pins.

     Storage:
     - 1TB Corsair MP600 NVMe M.2 PCIe gen 4*
     - 2TB Seagate Compute SATA 2.5" SSD*
     *(internal SSDs are only used for software and renders)
     - 10Gbps SFP+ link to my unraid array (with SSD cache) through a SolarFlare SFN5122F card

     Cooling:
     3x NF-R8 redux
     2x NF-A12x25
     1x cannibalized Eisbaer 120mm AIO
     1x Alphacool NexXxoS UT60
     A bunch of 45° and 90° Alphacool fittings
     Flow pattern: CPU block to the back exhaust rad, then the front intake rad, so as to spit the very hot air directly out the back and take in fresh or (worst-case scenario) tepid air.

     Project: I plan on printing a nylon or PETG partition to separate the GPU side from the CPU side. The GPU spits warmth out of its side, and I would prefer the intake fan to push all that to the back grill instead of letting it drift over to the CPU side, where it tends to stagnate a bit (these R8 redux fans are not great at pulling air through that thick rad). I might also print a TPU gasket to seal between the fans and the rad.

     Front IO, drive mounting and additional intake:
     1x Evercool HD-AR-ARMOR (replaced fan)
     1x Icy Dock MB082SP (in the HD-AR-ARMOR, to mount the 2TB SSD)
     1x SilverStone SST-FP510 for front USB 3.1 + hot-swap 2.5" bay + hot-swap 3.5" bay

     Case: 4U 4088-S

     Side note: notice on this link how the support plate is supposed to bend inward for PCI support, and where the 3.5" drive cages are. No, it wouldn't have worked with my GPU or cooling solution. Yes, a Dremel was involved. Some gentle hammering of the dual 80mm rad too. No, I have no shame. Don't ever tell me it won't fit.
  4. For the redundancy:
     - RAID 1 is the "simplest" kind, but with such large-capacity drives, rebuild time would be a real hassle if one of the two mirrored drives goes down. You would need 2 drives of the same capacity. For RAID 10, you would need 4, to end up with the capacity of 2.
     - RAID 6 could be an interesting alternative: smaller-capacity drives (thus shorter rebuild times) in a higher count, while keeping fault tolerance. There is also a performance advantage to it. For counts and capacity, RAID 6 asks for a minimum of 4 disks, but it's not interesting at such a low number because you end up with the capacity of just 2 disks. 6x 4TB disks in RAID 6 would give you 2-drive fault tolerance with a total capacity of 16TB, 4x read speed, and the same write speed.
     - RAID 5 would be the middle solution: 1-drive fault tolerance. 5x 4TB disks would give you 16TB of storage at 4x the read speed, but keep in mind that rebuilding a failed drive puts a lot of stress on the whole array, increasing the risk of another drive failing, and then you're foobar. (The capacity math is worked out in the sketch below.)
     My personal pick in that case would be RAID 6, but that has always been a fiery debate around the coffee machines next to IT rooms.
     No matter which you choose, you'd probably prefer a real RAID card to handle the matter, not the BIOS (software RAID I would only advise when going ZFS or unraid-like arrays). If you let the BIOS handle it and your motherboard dies, you might not be able to import the array on a different board because of some fluke, and you might have to source the same board you had just to get your data back. If your RAID card dies, importing the array into another card of the same model usually isn't much hassle, and they aren't too expensive to keep a spare just in case. Plus, if you're diagnosing suspected hardware-related RAID issues, it's always easier to swap a PCIe card than a motherboard. Flashing the firmware, or even customizing it, isn't as fiddly or risky as trying to wedge a custom BIOS or patch into a motherboard. Some RAID cards can even be flashed with firmware from other RAID cards to unlock options or work around issues tied to a particular brand or series. It's just all-around more serviceable.
     For the RAM:
     - You may or may not need ECC. My usual take would be to go ECC whenever you can, but that's because I'm used to refurbished Intel-based servers where RAM speed isn't that crucial. Here it's trickier, since slower RAM would be a notable performance hit.
     - If you do decide to go ECC, stick to 2933MHz (the fastest I could find) and only Samsung B-die ICs, which have better compatibility with AMD Zen 2.
     Cooling-wise, I support the proposition of a Dark Rock Pro TR4. A Cooler Master Wraith Ripper would do the job too.
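A quick back-of-the-envelope sketch of that capacity math, in Python. The function name and the crude read-speed model (reads assumed to stripe over the data disks) are my own simplifications for illustration, not how any particular controller behaves:

```python
# Rough RAID math from the post above. The speed model is deliberately
# crude: reads are assumed to stripe across the data disks.

def raid_usable(level: str, disks: int, size_tb: float) -> dict:
    """Usable capacity, worst-case fault tolerance, rough read scaling."""
    redundancy = {"raid1": disks - 1, "raid10": disks // 2,
                  "raid5": 1, "raid6": 2}[level]
    data_disks = disks - redundancy
    return {
        "usable_tb": data_disks * size_tb,
        "failures_survived": 1 if level in ("raid1", "raid10") else redundancy,
        "read_factor": data_disks,  # ~Nx reads over the data disks
    }

# The two examples from the post:
print(raid_usable("raid6", 6, 4))  # 16 TB usable, survives 2, ~4x reads
print(raid_usable("raid5", 5, 4))  # 16 TB usable, survives 1, ~4x reads
```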
  5. Supplementary information: they're 2-pin connectors. The UPS doesn't care about RPM nor about managing it; I guess the voltages it sends are hardcoded for the stock fans. They are 12V DC, so it shouldn't be much of an issue to use regular PC fans, apart from having to strip the extra 1 or 2 leads out of the connector (a 3- or 4-pin fan only needs its +12V and ground leads; the tach and PWM wires can be left unconnected). So the question really is: aside from NF-A8s, what other fans should I consider with silence in mind, while still not blowing up under load?
  6. My view on this: you could cheap out a bit on the keyboard, and possibly the mouse; you can find a better NVMe SSD than your SATA III one for the same price; and you can defo buy a Windows license key for 20 bucks on Amazon. With the money spared here and there, it would then be worth getting a 3rd-gen Ryzen: better single-thread performance and more threads, both better for editing, streaming and gaming. Also, if you plan to edit/game seriously, 75Hz with meh colors is not the best for gaming nor editing. At least it's IPS, so better colors than TN for editing I assume, but worse for gaming because of pixel response time.
  7. That's not the kind of build I usually feel comfortable giving advice on (I'm more into refurb servers for virtualization, and gaming PCs / editing-CAD-3D workstations). THOUGH, even if storage isn't your priority at all, I would give it some thought:
     - Redundancy. If a drive has an issue, ESPECIALLY since you said >REMOTE< troubleshooting, you don't want your stuff going down (ever, really). Go for RAID 1, or any kind of software mirroring with failover that stays bootable and usable if one of the two drives goes down. Also, IPMI or a KVM-over-IP could be a worthwhile investment for anything remotely remote. I wouldn't advise making them face the world wide web, though; just a Pi running an OpenVPN server next to them could be secure enough.
     - PSU-wise... during the past 5 years, I've had my fair share of friends going with Corsair PSUs that flat out crapped out their configs, be it stability under load or full-on magic smoke.
     - RAM is really slow for Zen 2, tbh.
     - Cooling depends on the load, but seems light too.
     If it were me, I'd stick to Seasonic and Cooler Master.
  8. Side note: no, it's not that oversized for my use. To be plugged into it:
     - an R720 with 2x 10-core 115W TDP Xeons, 6 drives, a 150W GPU, and around as much again in other PCIe cards, including a 10Gbps SFP+ card
     - Ryzen 9 3900X + RX 5700XT + 10Gbps SFP+ card
     - a Z600 with 2 X5500-series Xeons and 4 add-on cards
     - a 21:9 29-inch screen, 32W
     - a 4:3 20-inch Acer L2012
     - fiber router and a crappy temporary switch
     (A rough load estimate is sketched below.)
     Side note²: the 3D printer will eventually get plugged in too, if the UPS can handle it. Why? Because I'm done with waking up to every one of my boxes shut down because power went out for 1 minute 4 hours ago and everything crashed its way down. Also, power in my place is dirty af: light bulbs change shade as soon as a bit of wind blows.
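For scale, this is the kind of rough load-vs-capacity check that sizing rests on, as a Python sketch. Every wattage except the screen's stated 32W is my own ballpark guess, and the 0.9 power factor is an assumption to check against the PS2200RT2 datasheet:

```python
# Ballpark load check against a 2200 VA UPS. All wattages below are
# guesses except the 32 W screen figure from the post.

loads_w = {
    "R720 (2x 115W Xeons, 6 drives, 150W GPU, cards)": 550,  # guess
    "3900X + 5700XT + SFP+ card": 450,                       # guess
    "Z600 (2x X5500 Xeons, 4 add-on cards)": 350,            # guess
    "29in 21:9 screen": 32,                                  # from the post
    "20in Acer L2012": 20,                                   # guess
    "fiber router + switch": 30,                             # guess
}

UPS_VA = 2200
POWER_FACTOR = 0.9  # assumed; check the datasheet

total_w = sum(loads_w.values())
usable_w = UPS_VA * POWER_FACTOR
print(f"~{total_w} W of ~{usable_w:.0f} W usable "
      f"({total_w / usable_w:.0%} of capacity)")
```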
  9. Hi everyone. I used to make overly long intro posts; I'll keep it simple this time. PS2200RT2-230E, a 2U rackmounted 2200VA UPS, 50-55dB at idle in a room with a 40dB noise floor (with other stuff running, but without a server I'm willing to silence later too). Basically, I need to halve the noise it emits. To come: a 3D printer is on its way, and I'm working on a TPU mesh gcode for the front of my rack. Understand my desperation in 25m², with that thing next to my bed-sofa. I have a 12-month warranty on the thing, so I would have preferred not to open it until I knew where I was going... eff it, I opened it. The less noisy fan is at the back (an FFB0812SHE), and the two screamers are at the front (AUB0812VH). What I would like advice on:
     - What other fans could I mount in there with comparable (or just sufficient) CFM that aren't such screamers? Soldering is fine with me. Dremelling, I half-suck at it, but can do.
     - Soft mounting?
     - Some knowledgeable advice on these UPS jet engines?
     - Would a pair of NF-A8s be a good replacement for the 2 AUB0812VH? (See the comparison sketch below.)
     Thanks ahead!
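To frame that last question, here's the minimal comparison I'd run, as a Python sketch. The spec fields are placeholders to fill in from the Delta and Noctua datasheets, and the 80% minimum-airflow rule of thumb is my own assumption, not a vendor figure:

```python
# Replacement-fan sanity check: does the candidate move enough air, and
# how much quieter is it? CFM / dB(A) values are PLACEHOLDERS - fill them
# in from the actual datasheets before trusting the verdict.

stock = {"model": "AUB0812VH", "cfm": 0.0, "dba": 0.0}   # datasheet values here
candidate = {"model": "NF-A8", "cfm": 0.0, "dba": 0.0}   # datasheet values here

def compare(stock: dict, candidate: dict, min_ratio: float = 0.8) -> None:
    """Warn if the candidate moves notably less air than the stock fan."""
    if stock["cfm"] <= 0:
        print("Fill in the datasheet CFM values first.")
        return
    ratio = candidate["cfm"] / stock["cfm"]
    verdict = "probably fine" if ratio >= min_ratio else "risky under load"
    print(f"{candidate['model']} vs {stock['model']}: {ratio:.0%} airflow, "
          f"{stock['dba'] - candidate['dba']:+.1f} dB(A) delta -> {verdict}")

compare(stock, candidate)
```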
  10. End of story: it couldn't wait anymore, my old refurbed kitchen furniture wasn't holding the weight anymore. And I ended up going for a 4-post 15U open frame that, at least, I'm sure wasn't advertised wrongly about the depth. Found my sweet spot at the 26-inch setting (it can go from 24 to 31 inches in depth), going for a flush R720 butt so I can push the thing against the wall. And that poor thing found its place in the jank den, with around 90kg (~200lb) of load inside the rack and a good 20kg (~44lb) more on top. I was just done building you, but this is your life now.
  11. No... ...but I guess you could make use of that advice of yours too.
  12. Still, I thank you for your time! Every few weeks for the last 3 years I've been looking around without finding anything that would fit; it just all became urgent all of a sudden, when my garbagio setup started to really threaten my gear, keep me from sleeping, and make me shy of slamming the room's door.
  13. The closest I got to something affordable and usable was that. I would prefer a closed cabinet, but it seems like every (re)brand and marketplace seller around is out to scam you on actual mounting-hardware depth and other shenanigans, and going for an open rack might be the only way to avoid last-minute deathwishes. Killer issue though: it has threaded holes, not square holes. Because obviously, it seems we can't have nice things!
  14. That's actually a base I see rebranded really often, and so far, from the sellers I asked who actually took the time to answer: it's 600mm deep front panel to back panel, but the front rails are only roughly 450mm apart from the back rails. And as you can see in some of the pictures, the back isn't removable, which wouldn't even allow "cheating" a bit with the depth.
  15. And here I thought I'd found my luck with a 150-buck 15U rack that can take 600mm servers in... but reading the description more closely: rail-to-rail spacing of 394mm... MATE, REALLY? Y u no advertise better? Y u build stoopid? ლ(ಠ益ಠლ) That's exactly why I pushed this purchase back for years.