Lapsio

Member
  • Posts: 160
  • Joined
  • Last visited

Awards

This user doesn't have any awards


  1. He could stress that it was AUCTIONED, not SOLD, which obviously and clearly makes a world of difference XD
  2. Uh oh, those are... quite old tho. I'm using storage encryption and on-the-fly LZO compression on this btrfs RAID, and it currently imposes around 40% CPU load on an i7-2600K with an HDD array - which deals with significantly lower throughput and I/O than SSD-based storage. I hoped for something based on a 7950X or at least a 5950X, since according to some articles I found while benchmarking my array, LUKS encryption wipes the floor with an EPYC 7402P when paired with NVMe storage. https://scs.community/2023/02/24/impact-of-disk-encryption/ While a 7950X would probably still get quite hammered by even 8x NVMe Gen 4 SSDs with encryption and compression, it could probably at least somewhat keep up with the task. X99? uh... xD X299 sounds more reasonable. That said, I'll have to confirm the performance impact on my current WRX80-based workstation to estimate the cost of software encryption with NVMe on a modern AMD platform (rough back-of-envelope sketch below). Tbh it also makes me kinda realize how much of an overkill an NVMe-based RAID is for a NAS, on so many levels...
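A minimal Python sketch of that back-of-envelope estimate. The per-core AES-XTS and LZO throughput figures are placeholder assumptions, not measured values - real numbers would come from `cryptsetup benchmark` and fio runs on the actual hardware:

```python
# Rough CPU-headroom estimate for encryption + compression on a storage box.
# ALL throughput figures are placeholder assumptions - substitute numbers
# from `cryptsetup benchmark` and your own fio runs.

AES_XTS_GBPS_PER_CORE = 2.5   # assumed per-core AES-XTS throughput (GB/s)
LZO_GBPS_PER_CORE = 0.8       # assumed per-core LZO compression throughput
CORES = 8                     # logical cores available for storage work

def cpu_load(stream_gbps: float) -> float:
    """Fraction of total CPU needed to encrypt and compress one stream."""
    crypto_cores = stream_gbps / AES_XTS_GBPS_PER_CORE
    lzo_cores = stream_gbps / LZO_GBPS_PER_CORE
    return (crypto_cores + lzo_cores) / CORES

# HDD array vs SATA SSD array vs NVMe array, very roughly:
for gbps in (0.5, 2.0, 8.0):
    print(f"{gbps:4.1f} GB/s stream -> ~{cpu_load(gbps):.0%} of {CORES} cores")
```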
  3. Recently I transformed my old workstation into a RAID6 NAS. It's using 14x 2TB HDDs (24TB of usable space, and it's almost full). Back in the day I used it only for backups (because I had SSDs for commonly used data in my workstation) and it worked quite fine in that scenario, but since it got transformed into a generic NAS I started using it to access my data from other machines too - like my laptop. And I noticed that... it's slow. I mean, it's not unusable-level slow since I have a 10G LAN and sequential operations are quite fine, but stuff like thumbnail generation, which produces random I/O - it feels very slow. I performed some benchmarks on this machine (bare metal, not even accounting for NFS communication overhead) and the random I/O results were quite... not very modern, let's put it that way. So I decided to look at SATA SSDs, since they dropped in price quite a lot recently, but then I noticed that tbh... they cost about the same as their NVMe counterparts, which are obviously significantly faster. Like, by a LOT. So at that point I started wondering - how viable are NVMe drives for large RAID storage on consumer hardware? NVMe drives are quite problematic due to the PCIe requirement, and unfortunately there's not all that much I/O on consumer motherboards. Sure, a 14-bay NVMe RAID NAS sounds fairly out of reach outside of Threadripper-based workstations, but even something smaller like 8x 8TB NVMe doesn't seem all that easy to build - especially if you consider that a 25G or 40G NIC would probably be the optimal match for such a NAS, and that requires PCIe lanes as well (I don't know of any consumer mobo with onboard 40G). I'd probably need two M.2 riser cards with an active PCIe switch plus the NIC, which would require at the very least 2x PCIe x8 + PCIe x4 to perform within expectations (rough lane-budget sketch below). Not many consumer motherboards feature such a PCIe configuration, I believe. Threadripper-based platforms are obviously waaaaay beyond a reasonable budget for a NAS machine lol, so I'm kinda struggling to find a viable solution for NVMe-based NAS arrays, even though the drives themselves seem quite affordable. Are there some consumer (or server, but possibly affordable) motherboards that I don't know of, which would make building such an NVMe-oriented machine viable?
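For context, a rough lane-budget sketch in Python. The per-device lane counts are standard PCIe math; the platform lane totals are assumptions about typical boards, not the specs of any particular model:

```python
# Back-of-envelope PCIe lane budget for a hypothetical 8x NVMe + 25/40G NAS.
# Per-device lane counts are standard; the platform totals are assumptions
# reflecting typical consumer vs workstation boards.

devices = {
    "8x NVMe Gen4 SSDs (x4 each)": 8 * 4,
    "25G/40G NIC (x8)": 8,
}
needed = sum(devices.values())
print(f"lanes needed at full speed: {needed}")  # 40

platforms = {
    "typical consumer board (usable CPU lanes, slots + M.2)": 20,
    "Threadripper PRO (WRX80)": 128,
}
for name, lanes in platforms.items():
    verdict = "fits" if lanes >= needed else "needs PCIe switches or lane sharing"
    print(f"{name}: {lanes} lanes -> {verdict}")
```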
  4. Imho no. I also assumed Samsung was a safe bet and regretted it immediately after getting my hands on new 980 PROs... their customer support is absolute trash. As a matter of fact, ASRock support (since I have an ASRock motherboard, I decided it was worth a shot) helped me update the firmware on their damn SSD, because Samsung support just kept wasting my time with generic, irrelevant "answers" and kept closing my ticket without a solution. Samsung Magician couldn't even detect my 980 drives. Also, their SSDs are not all that special anymore. They used to make high-quality MLC drives with pretty hardcore TBW ratings back in the day, but that era is long gone too, so... I just went with a FireCuda 530 because it has mad sustained write performance and durability...
  5. So some time ago I was posting a few topics regarding my main workstation build with a Threadripper and some relatively decent hardware in general (e.g. the one below), and imho it turned out pretty fine at the end of the day. However, I've always had two PCs in Fractal cases - one main workstation and one old trash PC (which happens to be the first PC I got from my parents back in the day). It has some sentimental value to me. I never really paid much attention to its looks or performance though. As long as it was sitting in a Fractal case and didn't look out of place in my room, it was fine by me. After all, my previous main workstation wasn't anything spectacular on the inside either, so there was no reason for this old junk to be any different. However, since I was building a TR workstation which had cost me an arm and a leg anyway, and I had two Fractal Define R6 cases (I had to buy a second one from eBay just to get the USB-C front panel for the main workstation), I decided that, well... how about I match those two machines to some degree not only on the outside but also inside... at least visually. Aaaaaand I probably took it waaaaaaay too far considering it doesn't even have a glass panel, but oh well. It was somewhat funny to build, at least XD This old junk PC has an MSI P4M900M2 motherboard and some old Pentium Dual-Core running at 1.6 GHz. Since I didn't want to reinstall the OS (it still contains my original Windows XP that I used in elementary school) and I also wanted to keep it at least somewhat related to the original PC - the CPU and mobo had to stay. And that pretty much dictated the color scheme of the whole build, because well - this motherboard is all red. Back in the day nobody cared about the color of the PCB, so... that's what I got. The GPU was utter junk tho (a 7300 GT), so I swapped it for an old GTX 580 3GB I had lying around (well, I also had a 3080 lying around, but I believe there are no 32-bit Windows XP drivers for that one, so it was a no-go). It was still a bit tricky to find the last drivers released for Windows XP SP2 32-bit, but it worked, and worked relatively painlessly. At first I didn't know I'd be doing some crazy, ridiculous build, so I just put it all together using stuff I already had, just like I always did before, and it turned out like this: Nothing fancy, but it kinda struck me how ridiculous it is to put such an old trash mobo in a PC case that implements all those modern fancy trends like a PSU shroud, HDD shroud etc. It was just so off that it stole my heart somehow. Up to the point where I decided, damn... I gotta make it as sleek as I can. And then it all started:
- black PATA ribbons from eBay
- Noctua Chromax fans
- some old black WD PATA HDD just to add a bit more storage
And I got something like this as some form of checkpoint: At that point I already knew I had fallen into this rabbit hole, and I started getting the most ridiculous stuff I could. I had a Noctua NH-D15 from my old workstation lying around, so I contacted Noctua support to ask whether they could send me a mounting kit for LGA775. And they did. Then some CableMod cabling to match the red theme of the machine, some heatsink shrouds, more red cables, a black floppy drive and... it got pretty much out of hand. Unfortunately, due to the absolutely terrible ATX power connector placement on this mobo, standard CableMod cables were too short and I had to order some extension cables on top.
However, for some reason (I find it an absolutely dumb idea, but whatever) those custom CableMod extension cables make some assumptions about your motherboard: the cables are shorter on one side, determining which way the cable should go. And unfortunately, in this particular case that meant they should go up - over the cooler. Which was a no-go, because that Noctua has like zero clearance at the top (I even had to remove the top fans, since there's not enough cooler clearance for them due to how high the CPU socket is placed on this mobo). So I ended up with a slightly ugly cable bulge under the GPU. But well, if you don't look at it too much, it's not all that bad. I didn't want to waste another $100 on a perfect ATX cable considering I'd already spent a ridiculous amount of money on this CableMod junk, so it had to stay. Anyhow... I think at this point it looks really ridiculous for something that struggles to run Oblivion due to CPU bottlenecks, but as a troll build - I think it does a pretty good job. If you think it's a terrible waste of money and resources, it offends you, and you feel internal pain just by looking at it - you're welcome, I'm glad I could be your cancer today xD
  6. So I have a TR workstation with a Quadro RTX 4000 and two 3090s for CUDA. Since my professional use cases don't need NVLink and can just utilize two separate GPUs with nearly 100% performance scaling, I never really bothered with an NVLink bridge. I always assumed it's some ML shenanigans I don't need. However, today I saw der8auer's video about that fancy Apple AMD dual GPU and noticed it was detected by Time Spy as a single GPU. So it kinda made me wonder - are there any games that can use two NVLinked 3090s? I mean, well, since I have two of those anyway... maybe there's some way to use it?
  7. ASRock support helped me create a bootable USB, and I managed to upgrade the firmware using that standalone bootable ISO. Samsung support is f*cking pathetic. Thank god there's chad ASRock support who always solves all my problems...
  8. I purchased two 990 PRO 2TB drives very early during the preorder, and now I'm having some issues with the firmware upgrade. In fact, I'm having issues with Samsung Magician in general. After firing up the software, it claims it has a problem with downloading drive information: I tried installing the Samsung NVMe drivers, but they said there are no Samsung devices connected. Magician itself shows them as Samsung 990 PRO 2TB, but there's not much info other than that. I tried to download the standalone ISO updater, but for whatever reason my motherboard doesn't recognize the ISO as a bootable system (while my laptop does, for example). The drives are connected via an ASUS Hyper M.2 (4x M.2) card using 4x4 PCIe bifurcation. I'm kind of puzzled...
  9. One really important concept with NAS drives is that they won't necessarily be more "reliable" than regular HDDs - they're designed to keep the RAID array alive and consistent. As such, they may fail more often, but they'll fail more gently to avoid data loss/corruption, because they assume they work in an array, so "the storage will probably keep working without me". I've been using 14x 2TB IronWolf and IronWolf Pro drives in my storage server 24/7 for many years (when I say non-Pro IronWolf I actually mean Seagate NAS, because some of them are so old I obtained them before the rebranding). And they're OK. Only one failed on me, and it did so really gently (it turned itself into read-only mode, so I could actually dump all its data to a new drive; I checked all the checksums - no data got corrupted). I'd say the ability to fail reasonably is just as important as reliability itself, because at the end of the day - a disk is just a disk - you'll replace it. It's always better to replace hardware than lose data. The worst thing that can happen with NAS drives is when a drive doesn't realize it's damaged and starts returning corrupted data, which will eventually propagate to other RAID members during parity calculations, and your data will get trashed for real (toy demonstration below). Definitely solid drives that behave the way they should.
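A toy Python demonstration of that failure mode, assuming a simplified single-parity XOR stripe rather than anything a real RAID implementation does:

```python
# Toy demonstration of how a drive that silently returns bad data can poison
# RAID parity. Single-parity XOR (RAID5-style) for brevity; the same failure
# mode applies to RAID6. Purely illustrative, not real md/btrfs code.

def xor(*blocks: bytes) -> bytes:
    out = bytearray(blocks[0])
    for block in blocks[1:]:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor(d0, d1, d2)          # stripe starts out consistent

d1_bad = b"BXBB"                  # drive 1 silently corrupts its block

# A full-stripe parity recalculation trusts whatever the drives return,
# so the corruption gets baked into parity:
parity = xor(d0, d1_bad, d2)

# Drive 1 is later replaced and its block is rebuilt from parity:
d1_rebuilt = xor(parity, d0, d2)
print(d1_rebuilt)                 # b'BXBB' - the corruption, not the data
print(d1_rebuilt == d1)           # False: the last good copy is gone
```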
  10. Yeah, that would also explain why swapping a 3080 for a 3090, which in theory is +50W, resulted in something like +150W from the wall - with load closer to 100%, efficiency additionally drops, so it probably just doesn't scale linearly.
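A quick illustrative calculation of that effect - every number here is made up to show the shape of it, not measured from any real 3080/3090:

```python
# Illustrative numbers only: why a modest nominal increase can look much
# bigger at the wall. Efficiency values are assumed (80 PLUS curves peak
# mid-load and sag toward full load); the DC loads are made up too.

def wall(dc_watts: float, efficiency: float) -> float:
    return dc_watts / efficiency

before = wall(900, 0.92)   # assumed system DC load with the 3080
after = wall(1000, 0.88)   # 3090: real DC delta can exceed the nominal
                           # +50 W spec gap, and efficiency sags near full load
print(f"before: {before:.0f} W at the wall")         # ~978 W
print(f"after:  {after:.0f} W at the wall")          # ~1136 W
print(f"delta at the wall: {after - before:.0f} W")  # ~158 W
```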
  11. Oh right... somehow I didn't think about that. I assumed the power rating referred to draw from the wall.
  12. So I've got an old trusty Seasonic Prime 1300W Platinum (SSR-1300PD) and I'm currently using it in my new workstation. It's got quite a lot of junk in it (2x 3090 Turbo, a Quadro RTX 4000, a Threadripper PRO 5965WX, an Intel X710-DA4 4x10G network card, a Mellanox 2x10G card, a 4x NVMe card with 2x FireCuda 530 4TB and 2x 990 PRO 2TB, two OS NVMe SSDs, 8x 32GB ECC DIMMs, 7x 7W fans and some other junk). My wall power meter shows that under 100% synthetic load on everything, this machine draws around 1350-1450W tops, which maybe doesn't sound like a lot over spec, but it's still fairly over spec for this PSU. I didn't try to forcibly trip it by enabling PBO on the TR (I don't really want to trigger an abrupt poweroff, since it could be unhealthy for the SSDs), but it makes me a little worried whether OCP works properly in this PSU. It's not like I'm going to actually use this workstation under such load, since I'm using the Quadro as the display GPU, so realistically it's going to operate at 1250-1300W tops normally, but it's still a bit curious. Especially if you consider how people here demonize Seasonic Prime PSUs for tripping on transient spikes.
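For reference, the wall-vs-rating arithmetic as a quick sketch, assuming roughly Platinum-level efficiency at high load:

```python
# The 1300 W rating is DC output; the wall meter sees DC output / efficiency.
# The efficiency value is an assumption (~92% for 80 PLUS Platinum at high
# load), not a measurement of this particular unit.

PSU_RATING_W = 1300
EFFICIENCY = 0.92  # assumed

for wall_w in (1350, 1450):
    dc = wall_w * EFFICIENCY
    print(f"{wall_w} W at the wall -> ~{dc:.0f} W DC "
          f"({dc / PSU_RATING_W:.0%} of rating)")
# 1350 W wall -> ~1242 W DC (about 96% of rating)
# 1450 W wall -> ~1334 W DC (about 103% of rating, marginally over spec)
```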
  13. Holy shit, I didn't realize how bad the support for them is. So, when you poke the USB cable it sometimes gets a little loose and renegotiates to USB 2.0 speed, but then just a second later it should go back to full speed once the connection is stable again. But this dongle doesn't. In fact, it fails so badly that the system actually crashes when the connection becomes unstable. I'm using it on Linux, and when I fiddle with the cable a bit too much, so that it loses a stable connection, I get a kernel panic and the laptop performs a watchdog reset. Other than that, I guess it works xD I mean, in my case it's workable since I have it connected to a docking station anyway, so I can secure the connection enough to avoid unexpected crashes, but I honestly can't imagine anyone using it as a portable Ethernet dongle the same way I use my gigabit dongles carried in a backpack.
  14. A long time ago Aquantia released a USB 3.0 5GbE adapter. Since then USB has evolved quite a bit, and now we have USB 3.2 Gen 2 and even USB 3.2 Gen 2x2. I'd expect there to be 10G and 2x10G Ethernet adapters for USB-C, but I can't really seem to find such a thing. What's even more interesting is the fact that I can't even find a USB 3.2 Gen 2 5GbE adapter (which I'd expect to have fewer bottlenecks due to more available bandwidth - I'm sure USB itself introduces some overhead, so those "5GbE" USB 3.0 dongles are surely not full-blown 5G interfaces in practice; rough numbers below). I'd like to get my AMD laptop onto 10G Ethernet, but while getting two USB 3.0 5GbE dongles and setting up bonding is some form of solution, it feels quite silly really. Especially since, due to USB hub limitations, I'd probably need to hook up those two USB 3.0 dongles to two separate USB 3.2 Gen 2 controllers so they don't share 3.0 bandwidth, and that sounds super dumb ngl... So are there any recent 10G USB-C dongles that use USB 3.2 Gen 2? Or at least 5G dongles using USB 3.2 Gen 2?...
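Rough numbers on why those dongles can't hit line rate, assuming the standard USB line codes and a guessed protocol-overhead factor:

```python
# Why a "5GbE" dongle on plain USB 3.0 can't actually move 5 Gbit/s.
# Line-code efficiencies come from the USB specs; the extra protocol
# overhead factor is a rough assumption.

links = {
    "USB 3.0 / 3.2 Gen 1": (5.0, 8 / 10),    # 8b/10b line code
    "USB 3.2 Gen 2": (10.0, 128 / 132),      # 128b/132b line code
}
PROTOCOL_OVERHEAD = 0.90  # assumed bulk-transfer framing + NIC overhead

for name, (raw_gbps, code_eff) in links.items():
    usable = raw_gbps * code_eff * PROTOCOL_OVERHEAD
    print(f"{name}: ~{usable:.1f} Gbit/s usable")

# Gen 1: ~3.6 Gbit/s -> below 5GbE line rate
# Gen 2: ~8.7 Gbit/s -> plenty for 5GbE, still short of full 10GbE
```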
  15. I decided to install washers, since I realized a loose screw could fall out at any moment from vibrations, and that could be quite fatal to the laptop. So, using the opportunity, I took some photos for anyone interested in this drive for their ultra-thin notebook. Installed drive in a single-sided-only M.2 slot: As you can see, there's barely any bend, and unlike the FireCuda it lies almost flat even when unscrewed. And the bottom side of the SN850X in the 4TB variant: Tags for Google: sn850x 4tb single-sided double-sided m.2 nvme ssd.