
looney

Retired Staff
  • Content Count: 1,884
  • Joined
  • Last visited

Reputation Activity

  1. Like
    looney got a reaction from Windows7ge in Here's How to Save $45,000   
    Synchronous writes are indeed something you get with databases: the database wants to know that the data has been saved to non-volatile storage before it sends its next bit of data. 
     
    You can force ZFS to treat all writes as synchronous (`zfs set sync=always`), but synchronous is always slower, as the sender has to wait until it knows the data is stored on a non-volatile medium. 
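    To make that cost concrete, here's a minimal sketch of what a synchronous sender actually does, in plain Python with nothing ZFS-specific; the file path and data sizes are made up for illustration. The sync version refuses to continue until the OS confirms each chunk is on non-volatile storage.

```python
import os
import time

CHUNKS = [os.urandom(4096) for _ in range(256)]  # 1 MiB of dummy data

def sync_writes(path):
    # Synchronous sender: wait for durability after every chunk,
    # like a database calling fsync() per transaction.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        for chunk in CHUNKS:
            os.write(fd, chunk)
            os.fsync(fd)  # block until this data is on non-volatile storage
    finally:
        os.close(fd)

def async_writes(path):
    # Asynchronous sender: hand data to the OS and move on; it may sit
    # in RAM or drive cache for a while before reaching the disk.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        for chunk in CHUNKS:
            os.write(fd, chunk)  # no fsync: not durable yet
    finally:
        os.close(fd)

for fn in (sync_writes, async_writes):
    start = time.perf_counter()
    fn("/tmp/sync_demo.bin")
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")
```

    On spinning disks the sync version is drastically slower; that per-write wait is the cost being discussed here.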
     
    With ZFS + SLOG, all synchronous data goes to both the vdevs (through RAM and cache) and the SLOG at the same time, in parallel. 
    Because the SLOG should be non-volatile, ZFS can tell the database, for example, that its data is stored securely and that it can send the next bit of data. 
     
    At the same time, that data might also still exist in a volatile state in RAM or the HDD cache. 
     
    When ZFS knows this data is no longer just in RAM or cache but actually on the disk itself, it removes the mirrored copy from the SLOG. 
     
    So the SLOG, in normal operation, is only written to and purged. 
     
    It is only in the case of a failure that it gets read from.
    Say ZFS saved the data to the SLOG, but the copy meant for the drives was still in RAM or cache when the power cut: after boot, ZFS can still ask the SLOG for that data. 
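    Here's a toy model of that write/purge/replay lifecycle, purely illustrative Python; real ZFS (ZIL records, transaction groups) is of course far more involved.

```python
# Purely illustrative toy model of the SLOG lifecycle described above;
# real ZFS uses ZIL log records and transaction groups, not dicts.
class ToySlog:
    def __init__(self):
        self.records = {}  # txg -> sync writes not yet committed to the vdevs

    def sync_write(self, txg, data, volatile_ram):
        self.records.setdefault(txg, []).append(data)  # durable copy on the SLOG
        volatile_ram.setdefault(txg, []).append(data)  # in-flight copy headed for the vdevs
        return "ack"  # safe to tell the sender "stored, send the next bit"

    def txg_committed(self, txg):
        # Data reached the actual disks: purge the SLOG copy without ever reading it.
        self.records.pop(txg, None)

    def replay_after_power_cut(self):
        # The only read path: recover whatever was still volatile at crash time.
        return [d for batch in self.records.values() for d in batch]

ram = {}
slog = ToySlog()
slog.sync_write(txg=1, data=b"row 1", volatile_ram=ram)
slog.txg_committed(1)                  # normal operation: written, then purged
slog.sync_write(txg=2, data=b"row 2", volatile_ram=ram)
ram.clear()                            # power cut: volatile copies are gone
print(slog.replay_after_power_cut())   # [b'row 2'] is recovered from the SLOG
```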
     
    That replay role is why you don't use consumer SSDs for a SLOG 🙂
     
    For synchronous loads there can be an improvement in that, for small bursts, the SLOG can act a bit like a buffer. But the moment you are writing more than one drive can handle (in a one-vdev system) for more than, say, 5 seconds, you will still hit that drive as a bottleneck, even if the SLOG is just sitting idle. ZFS does not want to use it as a cache: if the vdev cache is full, it stops accepting new data until it can write to both the SLOG and the vdev again.
     
    So a small burst can fill the cache and SLOG at the same time and then get written "slowly" to the drives. This would be slower without a SLOG, as the data would have to commit to disk straight away, but this only applies to, I think, about one drive's worth of cache or so. 
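    Rough numbers for that burst scenario; every figure below is a made-up assumption to illustrate the behaviour, not a measurement.

```python
# Made-up example numbers for the burst/throttle behaviour described above.
vdev_mb_s = 250     # assumed sustained write speed of the single vdev
burst_mb = 1_000    # a 1 GB burst of sync writes
burst_s = 2         # arriving over 2 seconds, i.e. 500 MB/s of ingress

# The burst is absorbed by RAM + SLOG as fast as it arrives...
print(f"burst absorbed in ~{burst_s} s")

# ...then drains to the vdev at the vdev's own pace:
print(f"drained to disk in ~{burst_mb / vdev_mb_s:.0f} s")

# Sustain 500 MB/s for longer than ZFS is willing to buffer, though, and
# incoming writes are throttled to ~250 MB/s: the vdev stays the bottleneck
# no matter how fast the SLOG is.
```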
  2. Like
    looney got a reaction from Electronics Wizardy in Here's How to Save $45,000   
    Fun video, but the ZIL/SLOG part is flawed and might misinform people.

    First of all, 2TB is insanely overkill: the ZIL gets flushed roughly every 5 seconds, so it can never hold more than about 5 seconds' worth of ingress data (I highly doubt you are getting 400 gigaBYTES per second of ingress 🙂 ).
    On top of that, it is only used for true synchronous writes, which are not common in most workloads.
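    The arithmetic behind that 400 GB/s jab, plus what actually gets used; the 10 GbE line rate below is an assumption for illustration.

```python
flush_interval_s = 5    # the ZIL is flushed roughly this often

# Ingress needed to fill a 2 TB SLOG between flushes:
slog_gb = 2_000
print(f"{slog_gb / flush_interval_s:.0f} GB/s")   # -> 400 GB/s

# Whereas even a saturated 10 GbE link is only ~1.25 GB/s, so the SLOG
# realistically never holds more than:
ten_gbe_gb_s = 10 / 8
print(f"~{ten_gbe_gb_s * flush_interval_s:.2f} GB")   # -> ~6.25 GB
```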
     
    All the SLOG does is store the synchronous writes for a few seconds so ZFS can immediately say with certainty to the sender: "hey, I have received your data and I have stored it (non-volatile), please send me the next bit of data".
    At the same time, the data is also being written to the actual storage in parallel; the SLOG is just there so that, in case of a power cut, the data still sitting in the HDD cache also exists in the SLOG.
     
    In normal operation, the SLOG only gets written to. The only time it gets read from is after, for example, a power cut; at that point ZFS calls on the SLOG to recover the data that was still in cache/RAM and commit it to the actual non-volatile array.
     
    So without a SLOG, in the case of synchronous writes, ZFS has to save the data directly to disk, bypassing cache, as it must assure the sender that the data is stored in a non-volatile state.
     
    With async data, the sender will just go BRRRRRR and won't care whether the receiver has saved it volatile (HDD cache) or non-volatile (actually on disk).
    This is faster but carries more risk, as the sender won't know whether its data is stored volatile or non-volatile.

    And this is also where the second fault in the SLOG config lies: the NVMe drive used still has VOLATILE storage on it, in the form of high-speed cache.
    So you could still lose or corrupt data in case of a power cut or something comparable, defeating the sole purpose of the SLOG.
    You are telling the sender that the data is safe while it might not be, defeating the entire point of sync data transfer.

    For a proper SLOG (which 99% of the time you don't need, as you won't be writing sync) you still need proper DC-grade SSDs; those have built-in supercaps to make sure that no data is ever lost, no matter what.
    And yes, SSDs, plural: you would want two in RAID 1 (something like `zpool add <pool> log mirror <ssd1> <ssd2>`) just to be sure.

    The SLOG is 100% an anti-data-loss thing. It will never give you a speed boost; the drives still have to handle the same ingress regardless of having a SLOG.

    The only way to speed up writes in ZFS is with more vdevs that can be written to in parallel.
    In your case the write IOPS will be limited to roughly two drives' worth, as the pool can only write to two vdevs at the same time.
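    Back-of-the-envelope for that last point; the per-drive IOPS figure is a typical assumption, not a benchmark of this build.

```python
# Small sync writes: each RAIDZ vdev performs roughly like a single drive.
hdd_write_iops = 150   # ballpark for one 7200 rpm HDD
vdevs = 2              # this pool: two vdevs written in parallel

print(vdevs * hdd_write_iops)  # ~300 write IOPS, regardless of drives per vdev

# Adding more drives *inside* a vdev barely moves this number;
# adding more *vdevs* scales it linearly.
```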
  3. Informative
    looney got a reaction from Sauron in Here's How to Save $45,000   
  4. Informative
    looney got a reaction from Levent in Here's How to Save $45,000   
  5. Like
    looney got a reaction from Velcade in Here's How to Save $45,000   
  6. Informative
    looney got a reaction from Lurick in Here's How to Save $45,000   
  7. Informative
    looney got a reaction from FakeKGB in Here's How to Save $45,000   
  8. Funny
    looney got a reaction from Sir Asvald in Network layout showoff   
    I see your 6500 chassis and I raise you 3 more


     
    Okay okay, let's call it 2.5 more 😜
  9. Funny
    looney got a reaction from Lurick in Network layout showoff   
  10. Informative
    looney got a reaction from gcs8 in LTT Storage Rankings   
    Hey guys, long time no see. I can't be bothered to fill in the template, way too much info; whoever made that template is truly a massive idiot.
     
    But here is a picture of what my home rack(s) look like now:
     


  11. Like
    looney got a reaction from scottyseng in LTT Storage Rankings   
  12. Like
    looney got a reaction from Mitko_DSV in Show off your old and retro computer parts   
    I actually use this at LAN parties; while people are still setting up their rigs, I'm already playing Tetris.
    Plus it's fun to mess with the crew by asking them where the coax is so I can install my 10BASE5 tap/transceiver. 
     

  13. Funny
    looney got a reaction from TehDwonz in Show off your old and retro computer parts   
  14. Funny
    looney got a reaction from Letgomyleghoe. in Show off your old and retro computer parts   
  15. Like
    looney got a reaction from BrianTheElectrician in Show off your old and retro computer parts   
  16. Funny
    looney got a reaction from Dabombinable in Show off your old and retro computer parts   
  17. Like
    looney got a reaction from brwainer in LTT Storage Rankings   
    It's an Arista DCS-7124SX-SSD-F; never really looked into ONIE/SONiC compatibility.
    I'm also looking to replace it in the near future; the lack of firmware updates is bugging me, even though it's one hell of a switch.

    The top switch is just a Cisco WS-C3750X-48P-S; I needed something for IPMI and the access points / patch points throughout the house.
  18. Like
    looney got a reaction from ProjectBox153 in Show off your old and retro computer parts   
  19. Funny
    looney got a reaction from Kilrah in Show off your old and retro computer parts   
  20. Like
    looney got a reaction from sub68 in Show off your old and retro computer parts   
  21. Like
    looney got a reaction from Valentyn in Show off your old and retro computer parts   
  22. Like
    looney got a reaction from Astac in How To: Choosing Your Storage Devices and Configuration   
    I give this topic looney's schnitzel of approval:

  23. Like
    looney got a reaction from Yaerox in How To: Choosing Your Storage Devices and Configuration   
  24. Like
    looney got a reaction from unijab in Planning: New 100TB+ storage server.   
    No sponsoring.
     
    Are you interested in being a sponsor? 
  25. Like
    looney got a reaction from unijab in LTT Storage Rankings   
    Planning to buy a new server:
    http://linustechtips.com/main/topic/257439-planning-new-100tb-storage-server/