looney

Retired Staff
  • Posts: 1,884

Everything posted by looney

  1. Synchronous writes are indeed something you get with databases: the database wants to know that the data has been saved to non-volatile storage before it sends its next bit of data. You can force ZFS to treat all writes as synchronous, but synchronous is always slower, as the sender has to know that the data is stored on a non-volatile medium. With ZFS + SLOG, all synchronous data goes to both the vdevs (through RAM and cache) and the SLOG at the same time, in parallel. As the SLOG should be non-volatile, ZFS can tell, for example, the database that its data is stored securely and that it can send the next bit of data, even while that same data may still exist in a volatile state in RAM or HDD cache. Once ZFS knows the data is no longer just in RAM or cache but actually on the disks themselves, it removes the mirrored copy from the SLOG. So in normal operation the SLOG is only written to and purged; it is only read from in case of a failure. Say ZFS saved the data to the SLOG but the copy that is meant to go to the drives is still in RAM or cache when the power cuts: after boot, ZFS can still recover that data from the SLOG. This is why you don't use consumer SSDs for a SLOG. For synchronous loads there can be an improvement where, for small bursts, the SLOG can function a bit like a buffer, but the moment you are writing more than one drive can handle (in a single-vdev system) for more than, say, 5 seconds, you will still hit that drive as a bottleneck, even if the SLOG is just chilling; ZFS does not want to use it as a cache. If the vdev cache is full, it will stop accepting new data until it can write to both SLOG and vdev again. So a small burst can fill the cache and SLOG at the same time and then get written "slowly" to the drives. This would be slower without a SLOG, as the data would have to be committed to disk straight away, but this only applies to, I think, about one drive's worth of cache or something.
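     As a rough sketch of the knob involved (my own illustration; the dataset names are made up): ZFS exposes this per dataset through the sync property, so you can force every write to be treated as synchronous, or let the application decide.

         # Honour only writes the application explicitly flags as synchronous (the default)
         zfs set sync=standard tank/db
         # Force ZFS to treat every write to this dataset as synchronous
         zfs set sync=always tank/db
         # Never wait for stable storage (fast, but in-flight data can be lost on a power cut)
         zfs set sync=disabled tank/scratch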
  2. Fun video, but the ZIL/SLOG part is flawed and might misinform people. First of all, 2TB is insanely overkill: the ZIL gets flushed roughly every 5 seconds, so it can never hold more than about 5 seconds' worth of ingress data (I highly doubt you are getting 400 gigaBYTES per second of ingress). On top of that, it is only used for true synchronous writes, which are not common in most workloads. All the SLOG does is store the synchronous writes for a few seconds so ZFS can immediately say with certainty to the sender "hey, I have received your data and I have stored it (non-volatile), please send me the next bit of data". At the same time the data is also being written to the actual storage in parallel; the SLOG is just there so that, in case of a power cut, the data that is still in the HDD cache is also in the SLOG. In normal operation the SLOG only gets written to; the only time it gets read from is after, for example, a power cut, when ZFS will call on the SLOG to get the data that was still in cache/RAM onto the actual non-volatile array. Without a SLOG, in case of synchronous writes, ZFS has to save the data directly to disk, bypassing cache, as it has to assure the sender that the data is stored in a non-volatile state. With async data the sender will just go BRRRRRR and won't care whether the receiver has stored it volatile (HDD cache) or non-volatile (actually on disk). This is faster but carries more risk, as the sender won't know whether it's stored volatile or non-volatile. And this is also where the second fault in the SLOG config lies: the NVMe drive used still has VOLATILE storage on it in the form of high-speed cache, so you could still lose/corrupt data in case of a power cut or something comparable, defeating the sole purpose of a SLOG. You are telling the sender that the data is safe while it might not be, defeating the entire point of sync data transfer. For a proper SLOG (which 99% of the time you don't need, as you won't be writing sync) you still need proper DC-grade SSDs; those have built-in supercaps to make sure that no data should ever be lost no matter what. And yes, SSDs, plural: you would want two in RAID 1 (a mirror) just to be sure. A SLOG is 100% an anti-data-loss thing; it will never give you a speed boost, as the drives still have to handle the same ingress regardless of having a SLOG. The only way to speed up writes in ZFS is with more vdevs that can be written to in parallel. In your case the write IOPS will be limited to two drives, as it can only write to two vdevs at the same time.
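     To make the mirrored-SLOG point concrete, here is a minimal sketch (the pool name and device paths are hypothetical, not from the video): the log vdev is added as a mirror of two power-loss-protected SSDs and then shows up separately from the data vdevs in the pool layout.

         # Add a mirrored log (SLOG) vdev built from two DC-grade SSDs with supercaps
         zpool add tank log mirror /dev/ada4 /dev/ada5
         # Verify the log mirror appears alongside the data vdevs
         zpool status tank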
  3. I think you were just unlucky; my original batch of 45 4TB Seagate consumer drives is only just now starting to slowly fail on me, one drive every one or two months. To me this seems reasonable, as they have a power-on time of 51xxx hours (5.8 years) and I have not treated them kindly in that time, with a reasonable amount of IO. EDIT: apparently I still have 3 4TB drives running that I bought before those other 45 drives; they are at 6.3 years and still going strong.
  4. I see your 6500 chassis and I raise you 3 more. Okay okay, let's call it 2.5 more.
  5. Correct, it's an Eaton / Delta PLC. And to answer Brian's other questions: yes, the ghosting is atrocious on that laptop when playing Tetris.
  6. It's an Arista DCS-7124SX-SSD-F; I never really looked into ONIE / SONiC compatibility. I'm also looking to replace it in the near future, as the limited availability of firmware updates is bugging me, even though it's one hell of a switch. The top switch is just a Cisco WS-C3750X-48P-S; I needed something for IPMI and the access points / patch points throughout the house.
  7. I actually use this at LAN parties: while people are still setting up their rigs, I'm already playing Tetris. Plus it's fun to mess with the crew by asking them where the coax is so I can install my 10BASE5 tap/transceiver.
  8. Hey guys, long time no see. I can't be bothered to fill in the template, way too much info; whoever made that template is truly a massive idiot. But here is a picture of what my home rack(s) look like now:
  9. https://github.com/steamcache/monolithic/issues/28 Steam switched their entire CDN over to HTTPS a few days ago, breaking all Steam caching solutions. I hope they will provide an alternative dedicated HTTP server for LAN organisers....
  10. Whoops, typo indeed; it is indeed an NP4.2-6H, but it's no longer produced, which is why I'm looking for a replacement. I don't believe that size standard still exists, though. The NP4-6 is too big. I have checked CSB, pbq, B.B, Universal Power and Yuasa, but they don't seem to have them. PS: APC uses CSB and pbq batteries, so no point in asking them.
  11. It's a Sharp PC-4641 and as stated before it uses a Yuasa HP4.2-6H 6V 4.2Ah battery.
  12. I'm looking for a replacement battery for my laptop but I can't find one. It's a Yuasa NP4.2-6H 6V, 4.2Ah battery, but I can't find it in shops. Can you help me find a replacement? Thanks!
  13. Guess I should update: currently at 292TB raw (45x4TB + 14x8TB).
  14. With proper QoS and some caching you can easily run a 1,000-participant LAN event on a 500/80 line, so 4 participants on a 15/5 line should be no problem at all.
  15. Price per port is not so good on those, but if you are limited in PCIe slots then it's a good alternative. I needed low profile, so it was not an option for me.
  16. Sorry for the late response. No, they are not loud.
  17. I just use NVS510 cards if I want to drive a lot of monitors; they have 4 mini-DP 1.2 ports each, so up to 16 monitors per card using MST, and they are very cheap on eBay. Here is a little system I built yesterday that supports up to 112 monitors:
  18. I couldn't say. It isn't quiet, but the fan speed is controlled by temperature, so I personally don't find it very loud. As far as temps go, I haven't populated mine yet.
  19. Argh, I really need to update this topic; can't right now, not at home and on crappy hotel wifi. Anyway, I bought this to future-proof my setup: https://www.hgst.com/sites/default/files/resources/4U60-Storage-Platform-DS.pdf
  20. So I have updated my storage array once again:
     Hardware:
     CASE: Supermicro 6017R-N3RF4+
     JBOD: Supermicro SC847 E16-RJBOD1
     MB: Supermicro X9DRW-3LN4F+
     CPU: 2x Intel Xeon E5-2650
     HS: Stock Supermicro heatsink
     RAM: 24x 8GB Micron ECC DDR3 12800 (192GB)
     HBA: LSI SAS 9200-8e
     ZIL SSD: 2x Intel DC S3610 100GB
     HDD 1: 8x 4TB Seagate ST4000NM0033
     HDD 2: 37x 4TB Seagate ST4000DM000
     Software and Configuration: My updated server is now running ZFS under FreeNAS, using the following configuration:
     config:
       NAME                                            STATE   READ WRITE CKSUM
       FloppyD                                         ONLINE     0     0     0
         raidz2-0                                      ONLINE     0     0     0
           gptid/93108990-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/97d47490-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/9a98d29c-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/94808bdc-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/9c2e2cf7-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/936061b7-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/9f4a75ab-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/a6c61557-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/a1d94dd6-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
         raidz2-1                                      ONLINE     0     0     0
           gptid/91e97661-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/a39d4540-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/908760af-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/a11c8a81-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/9ee2ec62-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/a618af0c-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/a3b56e8f-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/a5bca058-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/a72cba93-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
         raidz2-2                                      ONLINE     0     0     0
           gptid/a517fa6f-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/a63ff419-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/95c7c648-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/9e76827b-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/9fa4abb1-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/a2031236-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/a4c14ba4-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/a7c82eac-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
           gptid/a58aa746-e48a-11e6-b1e3-0025907e82ce  ONLINE     0     0     0
         raidz2-3                                      ONLINE     0     0     0
           gptid/eb182455-e802-11e6-839a-0025907e82ce  ONLINE     0     0     0
           gptid/ec8d32b0-e802-11e6-839a-0025907e82ce  ONLINE     0     0     0
           gptid/edc1526f-e802-11e6-839a-0025907e82ce  ONLINE     0     0     0
           gptid/eef20607-e802-11e6-839a-0025907e82ce  ONLINE     0     0     0
           gptid/f002450f-e802-11e6-839a-0025907e82ce  ONLINE     0     0     0
           gptid/f123b618-e802-11e6-839a-0025907e82ce  ONLINE     0     0     0
           gptid/f270ac82-e802-11e6-839a-0025907e82ce  ONLINE     0     0     0
           gptid/f3a227a9-e802-11e6-839a-0025907e82ce  ONLINE     0     0     0
           gptid/f4ba3d44-e802-11e6-839a-0025907e82ce  ONLINE     0     0     0
         raidz2-5                                      ONLINE     0     0     0
           gptid/6aabafbe-f056-11e6-a043-0025907e82ce  ONLINE     0     0     0
           gptid/6b784ec3-f056-11e6-a043-0025907e82ce  ONLINE     0     0     0
           gptid/6c4eda10-f056-11e6-a043-0025907e82ce  ONLINE     0     0     0
           gptid/6d1e48c6-f056-11e6-a043-0025907e82ce  ONLINE     0     0     0
           gptid/6de63bf2-f056-11e6-a043-0025907e82ce  ONLINE     0     0     0
           gptid/6eb83a2d-f056-11e6-a043-0025907e82ce  ONLINE     0     0     0
           gptid/6f85f888-f056-11e6-a043-0025907e82ce  ONLINE     0     0     0
           gptid/705afb37-f056-11e6-a043-0025907e82ce  ONLINE     0     0     0
           gptid/71f29dae-f056-11e6-a043-0025907e82ce  ONLINE     0     0     0
       logs
         mirror-4                                      ONLINE     0     0     0
           gptid/ac395a9a-f04f-11e6-a043-0025907e82ce  ONLINE     0     0     0
           gptid/acd503c6-f04f-11e6-a043-0025907e82ce  ONLINE     0     0     0
     Usage: I use the storage for movies and series; I have media players around the house that access it. It's also used for backing up my computers and for general testing.
     Backup: The array is backed up to Yottacloud, an online cloud backup service (I have a 500/500 Mbps connection).
     Additional info: The server is nicknamed Abaddon, as its storage ranking score will be 666 AFAIK. Pictures are the same as in my original post, just with a full JBOD now.
     PS: I'll be updating this topic now.
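     As a side note (a hypothetical check against the pool above): you can watch per-vdev I/O to confirm that in normal operation the log mirror only ever sees writes, never reads.

         # Per-vdev IOPS/bandwidth, refreshed every 5 seconds; the mirror-4 log vdev
         # should show write activity only, unless ZFS is replaying after a crash
         zpool iostat -v FloppyD 5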
  21. Small update on the lack of updates: This is taking a while...
  22. Just ordered all the materials for the upgrade, so the new system will be running this hardware:
     Server: Supermicro 6017R-NRF4+
     CPU: 2x Intel Xeon E5-2650v1
     RAM: 24x 8GB 1600 DDR3 ECC RDIMM (192GB)
     SSD: 2x 512GB Samsung 850 Pro
     HBA: SAS 9200-8E HBA
     NIC: Dual port 10G SFP+ NIC
     JBOD: SC847 E16-RJBOD1
     HDD1: 37x 4TB ST4000DM000 Seagate Desktop drives
     HDD2: 8x 4TB ST4000NM0033 Seagate Constellation ES drives
     The new server will run ESXi off USB. The 2 SSDs will be set up in RAID 1 to serve as the ESXi datastore. I will be running a FreeNAS VM with 8 cores and 160GB RAM allocated to it, with the HBA in passthrough mode. There will be 2 pools, one with the desktop drives and one with the enterprise drives. The desktop-drive pool will be 3 vdevs of 12 drives each in RAID-Z2; the enterprise-drive pool will be 1 vdev of 8 drives in RAID-Z2. The remaining 4TB desktop drive will be a hot spare. I may still buy an extra small SSD for the ZIL. This will bring the ranking TBs to 176TB excluding the hot spare, with 44 drives, giving me 666 storage ranking points! Server of the Beast!
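     For illustration, creating that layout would look roughly like this (pool names and device paths are made up, not from the post; each raidz2 keyword starts a new vdev):

         # Desktop-drive pool: 3 RAID-Z2 vdevs of 12 drives each, plus 1 hot spare
         zpool create desktop \
           raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  da10 da11 \
           raidz2 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23 \
           raidz2 da24 da25 da26 da27 da28 da29 da30 da31 da32 da33 da34 da35 \
           spare da36
         # Enterprise-drive pool: a single RAID-Z2 vdev of 8 drives
         zpool create enterprise raidz2 da37 da38 da39 da40 da41 da42 da43 da44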