
LDWoodworth

Member
  • Posts

    25
  • Joined

  • Last visited

Reputation Activity

  1. Informative
    LDWoodworth got a reaction from sub68 in Linus Tries the LTT Theme Song on Beat Saber!   
    I think this is his profile.
     
    Currently 15th globally on Expert.
    Laszlo - Supernova
    Mapped by: DownyCat  
    https://scoresaber.com/u/76561197962440661
  2. Informative
    LDWoodworth got a reaction from williamcll in Linus Tries the LTT Theme Song on Beat Saber!   
    I think this is his profile.
     
    Currently 15th globally on Expert.
    Laszlo - Supernova
    Mapped by: DownyCat  
    https://scoresaber.com/u/76561197962440661
  3. Funny
    LDWoodworth reacted to Princess Luna in Linus Tries the LTT Theme Song on Beat Saber!   
    When you gotta go creative for a VR advertisement hehe
  4. Like
    LDWoodworth got a reaction from Baumstyler in Gaming on the CLEAR TV Prototype!   
    Best use case I can see for this is streamers putting the camera literally behind the screen to get a better selfie view.
  5. Like
    LDWoodworth reacted to wpirobotbuilder in Reducing Single Points of Failure (SPoF) in Redundant Storage   
    Reducing Single Points of Failure in Redundant Storage
     
    In lots of the storage builds that exist here on the forum, the primary method of data protection is RAID, sometimes coupled with a backup solution (or the storage build is the backup solution). In the storage industry, there are loads of systems that use RAID to provide redundancy for customers' data. One key aspect of (good) storage solutions is being resistant not only to drive failures (which happen a LOT), but also to failures of other components. The goal is to have no single point of failure.
     
    First, let's ask:
     
    What is a Single Point of Failure?
     
    A single point of failure is exactly what it sounds like. Pick a component inside the storage system and imagine it has failed or been removed. Do you lose any data as a result? If so, that component is a single point of failure.
     
    By the way, from this point forward a single point of failure will be abbreviated as: SPoF
     
    Let's pick on looney again, using his system given here.
     
    Looney's build contains a FlexRAID software RAID array, made up of drives on two separate hardware RAID cards running as host bus adapters, along with a handful of iSCSI-targeted drives. We'll focus on the RAID arrays for now, since those seem to be where he would store the data he wants to protect. His two arrays are on two separate hardware RAID cards and provide redundancy against drive failures. As long as he replaces drives as they fail, his array is unlikely to go down.
     
    Now let's be mean and remove his IBM card, effectively removing 8 of his 14 drives. Since he only has two drives' worth of parity, is his system still online? No; we have exceeded the ability of his RAID array to recover from drive loss. If he only had this system, that would make his IBM card a SPoF, and his RocketRAID card as well.
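     
    As a quick sanity check, the failure math works out like this (a minimal Python sketch; the drive and parity counts are the ones described above, everything else is illustration only):
     
        # Losing the IBM card takes 8 drives offline at once, but the array only
        # has 2 drives' worth of parity, so it cannot absorb the loss.
        drives_behind_ibm_card = 8
        parity_drives = 2
        print(drives_behind_ibm_card <= parity_drives)  # False -> the array goes down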
     
    However, he has a cloud backup service, which is very reliable in terms of keeping data intact. In addition, being from the Kingdom of the Netherlands, he has fantastic 100/100 internet service, making the process of recovering from a total system loss much easier.
     
    See why RAID doesn't constitute backup? It doesn't protect you from a catastrophic event.
     
    In professional environments, a lot of storage runs over specialized networks, where multiple systems can replicate data to keep it safe in the event of a single system loss. In addition, systems may have multiple storage controllers (not the same thing as RAID controllers), which allow a single system to keep operating in the event of a controller failure. These systems also run RAID to protect against drive loss.
     
    In systems running the Z File System (ZFS), like FreeNAS or Ubuntu with ZFS installed, DIY users can eliminate SPoFs by using multiple storage controllers and planning their volumes to reduce the risk of data loss. Something similar can (I believe) be done with FlexRAID. This article aims to provide examples of theoretical configurations, along with some practical real-life examples. It will also outline the high (sometimes unreasonably high) cost of eliminating SPoFs in certain configurations, and aim to identify more efficient and practical ones.
     
    Please note: there is no hardware RAID control going on here; it is all software RAID. When 'controllers' are mentioned, I am referring to the Intel/3rd-party SATA chipsets on a motherboard, an add-in SATA controller (host bus adapter), or an add-in RAID card running without RAID configured. The controllers only provide the computer with more SATA ports; the software itself controls the RAID array.
     
    First, let's start with hypothetical situations. We have a user with some drives who wants to eliminate SPoFs in his system. Since we can't remove the risk of a catastrophic failure (such as a CPU, motherboard or RAM failure), we'll ignore those for now. We can, however, reduce the risk of downtime due to a controller failure. This might be a 3rd-party chipset, a RAID card (not configured for RAID) or another HBA which connects drives to the system.
     
    RAID 0 will not be considered, since there is no redundancy.
     
    Note: For clarification, RAID 5 represents single-parity RAID, or RAID Z1 (ZFS). RAID 6 represents dual-parity RAID, or RAID Z2. RAID 7 represents triple-parity RAID, or RAID Z3.
     
    Note: FlexRAID doesn't support nested RAID levels.
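     
    Note: As a rough illustration only (a minimal Python sketch; the layouts are hypothetical), the controller math used in the examples below boils down to this:
     
        import math

        def min_controllers(n_drives, parity):
            # Fewest controllers such that no single controller holds more drives
            # than the array's parity can absorb: ceil(n_drives / parity).
            return math.ceil(n_drives / parity)

        def survives_controller_loss(drives_per_controller, parity):
            # A parity array (RAID 5/6/7 -> parity 1/2/3) stays online after any
            # single controller failure only if no controller holds more drives
            # than the parity level.
            return max(drives_per_controller) <= parity

        print(min_controllers(4, 1), min_controllers(4, 2))  # 4 2
        print(survives_controller_loss([2, 2], 2))           # True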
     
    [spoiler=Our user has two drives.]
    Given this, the only viable configuration is RAID 1. In a typical situation, we might hook both drives up to the same controller and call it a day. But now that controller is a SPoF!
     
    To get around this, we'll use two controllers, and set up the configuration as shown:
     

     
    Now, if we remove a controller, there is still an active drive that keeps the data alive! This system has removed the controllers as a SPoF.
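     
    A quick check of that idea (a minimal Python sketch; the drive counts per controller are hypothetical):
     
        def mirror_survives(drives_per_controller):
            # A RAID 1 mirror stays online as long as at least one copy remains
            # after the worst single controller failure.
            return sum(drives_per_controller) - max(drives_per_controller) >= 1

        print(mirror_survives([1, 1]))  # True  -> one drive per controller, no SPoF
        print(mirror_survives([2]))     # False -> both drives on one controller is a SPoF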

    [spoiler=Our user has three drives.]
    With three drives, we can do either a 3-way RAID 1 mirror, or a RAID 5 configuration. Let's start with RAID 1:
     
    Remembering that we want to have at least 2 controllers, we can set up the RAID 1 in one of two ways, shown below:
     

     
    In this instance, we could lose any controller, and the array would still be alive. Now let's go to RAID 5:
     
    In RAID 5, a loss of more than 1 drive will kill the array. Therefore, there must be at least 3 controllers to prevent any one from becoming an SPoF, shown below:
     

     
    Notice that in this situation, we are using a lot of controllers given the number of drives we have. Note also that the more drives a RAID 5 contains, the more controllers we will need. We'll see this shortly.
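     
    Checking both three-drive options with the same kind of arithmetic (a minimal Python sketch; layouts are hypothetical):
     
        import math

        # 3-way RAID 1 across two controllers (2 + 1): at least one copy survives
        # the worst single controller failure.
        print(3 - max([2, 1]) >= 1)  # True

        # RAID 5 (single parity) with three drives: no controller may hold more than
        # one drive, so the minimum is ceil(3 / 1) = 3 controllers.
        print(math.ceil(3 / 1))      # 3
        print(max([1, 1, 1]) <= 1)   # True -> the 1 + 1 + 1 layout has no controller SPoF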

     
    [spoiler=Our user has four drives.]
    We'll stop using RAID 1 at this point, since it is very costly to keep building the array. This time, our options are RAID 5, RAID 6 and RAID 10. We'll start with RAID 5, for the last time.
     
    Remembering the insight we developed last time, we'll need 4 controllers for 4 drives:
     

     
    This really starts to get expensive, unless you are already using 4 controllers in your system (we'll talk about this during the practical examples later on). Now on to RAID 6:
     
    Since RAID 6 can sustain two drive losses, we can put two drives on each controller, so we need 2 controllers to meet our requirements:
     

     
    In this situation, the loss of a controller will down two drives, which the array can endure. Last is RAID 10:
     
    Using RAID 10 with four drives gives us this minimum configuration:
     

     
    Notice that for RAID 10, we can put one drive from each RAID 1 mirror on a single controller. As we'll see later on, this allows us to create massive RAID 10 arrays with a relatively small number of controllers. In addition, using RAID 10 gives us the same storage space as RAID 6, but with lower worst-case redundancy. Given four drives, the best choices look like RAID 6 and RAID 10, with the trade-off being redundancy (RAID 6 is better) versus speed (RAID 10 is better).
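     
    The same arithmetic for the four-drive options (a minimal Python sketch; layouts are hypothetical):
     
        import math

        # RAID 5 (parity 1): ceil(4 / 1) = 4 controllers -- expensive.
        print(math.ceil(4 / 1))  # 4

        # RAID 6 (parity 2): ceil(4 / 2) = 2 controllers; a 2 + 2 layout loses only
        # two drives when a controller dies, which RAID 6 tolerates.
        print(math.ceil(4 / 2), max([2, 2]) <= 2)  # 2 True

        # RAID 10: put one drive of each mirrored pair on each controller, so losing
        # a controller removes one copy from every pair, never both.
        pairs = {"A": ["ctrl1", "ctrl2"], "B": ["ctrl1", "ctrl2"]}  # pair -> controllers holding its copies
        print(all(len(set(ctrls)) == 2 for ctrls in pairs.values()))  # True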

     
    [spoiler=Our user has five drives.]
    For this case, we can't go with RAID 5, since it would require 5 controllers, and we can't do RAID 10 with an odd number of drives. However, we do have RAID 6 and RAID 7. We'll start with RAID 6:
     
    Here we need at least 3 controllers, but one controller is underutilized:
     

     
    For RAID 7, we get 3 drives' worth of redundancy, so we can put 3 drives on each controller:
     

     
    In this case, we need two controllers, with one being underutilized.
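     
    The five-drive checks (a minimal Python sketch; layouts are hypothetical):
     
        import math

        # RAID 6 (parity 2) with five drives: ceil(5 / 2) = 3 controllers; a 2 + 2 + 1
        # layout never loses more than two drives to a single controller failure.
        print(math.ceil(5 / 2), max([2, 2, 1]) <= 2)  # 3 True

        # RAID 7 / RAID Z3 (parity 3): ceil(5 / 3) = 2 controllers with a 3 + 2 layout.
        print(math.ceil(5 / 3), max([3, 2]) <= 3)     # 2 True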

    [spoiler=Our user has six drives.]
    We can now start doing some more advanced nested RAID levels. In this case, we can create RAID 10, RAID 6, RAID 7, and RAID 50 (striped RAID 5).
     
    RAID 10 follows the logical progression from the four drive configuration:
     

     
    RAID 6 becomes as efficient as possible, fully utilizing all controllers:
     

     
    RAID 7 also becomes as efficient as possible, fully utilizing both controllers:
     

     
    RAID 50 is possible by creating two RAID 5 volumes and striping them together as a RAID 0:
     

     
    Notice that we have reduced the number of controllers for a single-parity solution, since we can put one drive from each stripe onto a single controller. This progression will occur later as well, when we start looking at RAID 60 and RAID 70.
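     
    And the six-drive checks (a minimal Python sketch; layouts are hypothetical):
     
        import math

        # RAID 6 (parity 2): ceil(6 / 2) = 3 controllers, fully used with 2 + 2 + 2.
        print(math.ceil(6 / 2), max([2, 2, 2]) <= 2)  # 3 True

        # RAID 7 (parity 3): ceil(6 / 3) = 2 controllers, fully used with 3 + 3.
        print(math.ceil(6 / 3), max([3, 3]) <= 3)     # 2 True

        # RAID 50 (two 3-drive RAID 5 sets, striped): one drive from each set per
        # controller, so a controller failure costs each single-parity set only one drive.
        layout = {"ctrl1": (1, 1), "ctrl2": (1, 1), "ctrl3": (1, 1)}  # controller -> (drives from set A, set B)
        print(all(max(per_set) <= 1 for per_set in layout.values()))  # True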


  6. Agree
    LDWoodworth reacted to Monkey Dust in Elon taking Tesla private?   
    It will be interesting to see how the Saudis buying a large stake goes. I doubt a Chinese company would be allowed to take a large stake in a US company with the White House's current incumbent. Will the Saudis be looked on more favourably? Would the Saudis attempt to shaft Tesla from within if the value of petroleum dropped too low?
     
    Tesla might be the greatest corporate soap opera of all time...
  7. Informative
    LDWoodworth got a reaction from soldier_ph in WAN Show July 6 2018 - Wan show document   
    The microphone is gone from the middle of the table!!!
  8. Agree
    LDWoodworth reacted to minibois in Editing Workstation Upgrade   
    Who's the other dude? No link in this forum thread or video description.
  9. Agree
    LDWoodworth reacted to shadowbyte in Editing Workstation Upgrade   
    more Ivan pls
    he's so cheeki breeki
  10. Agree
    LDWoodworth reacted to Daniel644 in Editing Workstation Upgrade   
    wait, WHOLE ROOM WATER COOLING was the last upgrade? I thought all the editors rebuilt their machines right after the move into the new office.
  11. Informative
    LDWoodworth reacted to LinusTech in Editing Workstation Upgrade   
    Of course not  
     
     
    We didn't change the base hardware. Just new graphics cards and pulling out the water cooling stuff.
  12. Agree
    LDWoodworth reacted to Hoads2047 in Asus ROG Claymore brown switches.   
    Hey everyone,
    I am doing a ROG-themed build and I have everything in place. There is just one thing: the ASUS ROG keyboard that I want, the Claymore with brown switches (the one with the numpad), isn't out yet.
    So can somebody tell me what is going on? In my region (Australia) they released the keyboard with red and blue switches at the start of this year. That was 6 months ago. I am starting to lose my patience with this. So is it delayed? Cancelled? Is the date yet to be announced? Can somebody please help!!
     
    Thanks
     