looney

Retired Staff
  • Posts

    1,884
  • Joined

  • Last visited

Awards

About looney

  • Birthday Oct 05, 1993

Profile Information

  • Gender
    Male
  • Location
    the underworld submarine the the the
  • Interests
    Storage servers, enterprise grade and industrial hardware, virtualization, industrial automation.
  • Biography
    schnitzel
  • Member title
    Wise Counsel Lion Non-Noble Freeman from the Sands

Recent Profile Visitors

41,332 profile views
  1. Synchronous writes are indeed something you get with databases: the database wants to know that the data has been saved to non-volatile storage before it sends its next bit of data. You can force ZFS to treat all writes as synchronous, but synchronous is always slower, as the sender has to know that the data is stored on a non-volatile medium.

     With ZFS + SLOG, all synchronous data goes to both the vdevs (through RAM and cache) and the SLOG at the same time, in parallel. As the SLOG should be non-volatile, ZFS can tell, for example, the database that its data is stored securely and that it can send the next bit of data. At the same time the data might also still exist in a volatile state in RAM or HDD cache. When ZFS knows this data is no longer in RAM or cache but actually on the disk itself, it will remove the mirrored data from the SLOG. So the SLOG, in normal operation, is only written to and purged; it is only in case of a failure that it gets read from. Say ZFS saved the data to the SLOG but the copy that is meant to go to the drive is still in RAM or cache when the power cuts: after boot, ZFS can still ask the SLOG for that data. This is why you don't use consumer SSDs for a SLOG.

     For synchronous loads there can be an improvement where, in case of small bursts, the SLOG can function kind of like a buffer. But the moment you are writing more than one drive can handle (in a 1-vdev system) for more than, say, 5 seconds, you will still hit that drive as a bottleneck, even if the SLOG is just chilling. ZFS does not want to use it as a cache: if the vdev cache is full, it will stop new data coming in until it can write to both SLOG and vdev again. So a small burst can fill the cache and SLOG at the same time and then get written "slowly" to the drives. This would be slower without a SLOG, as it would have to commit to disk straight away, but this only applies to, I think, about one drive's worth of cache.
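     To make the sender-side difference concrete, here is a minimal Python sketch (hypothetical paths and payload size, standard library only): a database-style synchronous write only returns after fsync() confirms the data is on non-volatile storage, while an async write returns as soon as the data lands in the volatile page cache.

        import os
        import time

        PAYLOAD = os.urandom(64 * 1024)  # an arbitrary 64 KiB "bit of data"

        def write_sync(path):
            """Database-style write: don't return until the data is non-volatile."""
            start = time.perf_counter()
            fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
            try:
                os.write(fd, PAYLOAD)
                os.fsync(fd)  # blocks until stable storage acknowledges the write
            finally:
                os.close(fd)
            return time.perf_counter() - start

        def write_async(path):
            """Fire-and-forget write: returns once the data is in the (volatile) cache."""
            start = time.perf_counter()
            with open(path, "wb") as f:
                f.write(PAYLOAD)  # no fsync: a power cut here can lose the data
            return time.perf_counter() - start

        print(f"sync : {write_sync('/tmp/sync.bin') * 1000:.2f} ms")
        print(f"async: {write_async('/tmp/async.bin') * 1000:.2f} ms")

     On spinning disks the sync version is typically orders of magnitude slower per write, which is exactly the latency a fast non-volatile SLOG absorbs.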
  2. Fun video, but the ZIL/SLOG part is flawed and might misinform people. First of all, 2TB is insanely overkill: the ZIL gets flushed roughly every 5 seconds, therefore it can never hold more than 5 seconds' worth of ingress data (I highly doubt you are getting 400 gigaBYTES per second of ingress). On top of that, it is only used for true synchronous writes, which are not common in most workloads.

     All the SLOG does is store the synchronous writes for a few seconds so ZFS can immediately say with certainty to the sender "hey, I have received your file and I have stored it (non-volatile), please send me the next bit of data". At the same time the data is also being written to the actual storage in parallel; the SLOG is just there so that, in case of a power cut, the data that is still in the HDD cache is also in the SLOG. In normal operation, the SLOG only gets written to. The only time it gets read from is after, for example, a power cut, at which point ZFS will call on the SLOG to get the data that was still in cache/RAM onto the actual non-volatile array. Without a SLOG, in case of synchronous writes, ZFS has to save the file directly to disk, bypassing cache, as it has to assure the sender that it has stored the data in a non-volatile state. With async data the sender will just go BRRRRRR and won't care if the receiver has saved it volatile (HDD cache) or non-volatile (actually on disk). This is faster but also riskier, as the sender won't know whether it's stored volatile or non-volatile.

     And this is also where the second fault in the SLOG config lies: the NVMe drive used still has VOLATILE storage on it in the form of high-speed cache. So you could still lose/corrupt data in case of a power cut or something comparable, defeating the sole purpose of a SLOG. You are telling the sender that the file is safe while it might not be, defeating the entire point of sync data transfer. For a proper SLOG (which 99% of the time you don't need, as you won't be writing sync) you still need proper DC-grade SSDs; those have built-in supercaps to make sure that no data should ever be lost, no matter what. And yes, SSDs plural: you would want two in RAID 1 just to be sure.

     A SLOG is 100% an anti-dataloss thing. It will never give you a speed boost; the drives still have to handle the same ingress regardless of having a SLOG. The only way to speed up writes in ZFS is with more vdevs that can be written to in parallel. In your case the write IOPS will be limited to two drives, as it can only write to two vdevs at the same time.
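     To put the sizing point in numbers, a back-of-the-envelope Python sketch (the flush interval, headroom factor and example link speeds are illustrative assumptions, not tuned values):

        GIB = 2**30

        def slog_bytes_needed(ingress_bytes_per_sec, flush_interval_s=5.0, headroom=2.0):
            """Worst case the SLOG ever holds: one flush interval of sync
            ingress, padded with a safety factor."""
            return ingress_bytes_per_sec * flush_interval_s * headroom

        # Example sync ingress rates (bytes/s), purely illustrative:
        for label, rate in [("1 GbE", 125e6), ("10 GbE", 1.25e9), ("100 GbE", 12.5e9)]:
            print(f"{label:>7}: ~{slog_bytes_needed(rate) / GIB:6.1f} GiB of SLOG is plenty")

        # And the reverse: the sustained sync ingress needed to fill 2 TB
        # within one 5-second flush interval -- about 400 GB/s.
        print(f"{2e12 / 5 / 1e9:.0f} GB/s to fill a 2 TB SLOG in 5 s")

     Even a saturated 100 GbE link of pure sync writes only ever has on the order of a hundred GiB in flight per flush interval, which is why a 2TB SLOG can never be used.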
  3. I think you are just unlucky; my original batch of 45 4TB Seagate consumer drives is only just now starting to slowly fail on me, one drive every one or two months. To me this seems reasonable, as they have a power-on time of 51xxx hours (5.8y) and I have not treated them kindly in that time, with a reasonable amount of IO. EDIT: apparently I still have 3 4TB drives running that I bought before those other 45 drives; they are at 6.3y and still going strong.
  4. I see your 6500 chassis and I raise you 3 more. Okay okay, let's call it 2.5 more.
  5. Correct, it's an Eaton / Delta PLC. And to answer Brian's other question: yes, the ghosting is atrocious on that laptop when playing Tetris.
  6. It's an Arista DCS-7124SX-SSD-F; never really looked into ONIE / SONiC compatibility. I'm also looking to replace it in the near future, as the availability of firmware updates is bugging me, even though it's one hell of a switch. The top switch is just a Cisco WS-C3750X-48P-S; I needed something for IPMI and access points / patch points throughout the house.
  7. I actually use this at LAN parties: when people are still setting up their rigs, I'm already playing Tetris. Plus, it's fun to mess with the crew by asking them where the coax is so I can install my 10BASE5 tap/transceiver.
  8. Hey guys, long time no see. I can't be bothered to fill in the template, way too much info; whoever made that template is truly a massive idiot. But here is a picture of what my home rack(s) look like now:
  9. https://github.com/steamcache/monolithic/issues/28 Steam switched their entire CDN over to HTTPS a few days ago, breaking all Steam caching solutions. I hope they will provide an alternative dedicated HTTP server for LAN organisers....
  10. Whoops, typo indeed: it's an NP4.2-6H, but it's no longer produced. Hence why I'm looking for a replacement, though I don't believe that size standard still exists. The NP4-6 is too big. I have checked CSB, pbq, B.B., Universal Power and Yuasa, but they don't seem to have them. PS: APC uses CSB and pbq batteries, so no point in asking them.
  11. It's a Sharp PC-4641 and as stated before it uses a Yuasa HP4.2-6H 6V 4.2Ah battery.
  12. I'm looking for a replacement battery for my laptop but I can't find one. It's a Yuasa NP4.2-6H 6V, 4.2Ah battery, but I can't find it in shops. Can you help me find a replacement? Thanks!
  13. Guess I should update: currently on 292TB raw (45x4TB + 14x8TB).