Whonnock RAID Recovery Vlog

"Lessons relearned from this that are things people generally say do to but don't till something like this happens"

  1. Always have a scheduled or continuously running backup. At minimum a weekly backup, if a full backup cycle takes longer than your off-business hours.
  2. Have regular offline/cold backups, preferably somewhere not in the same building. (A fire will quickly teach you that one)
  3. RAID across RAID cards, let alone as a stripe... not a good idea.
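Lesson 1 is easy to automate. A minimal sketch of a staleness check, assuming a hypothetical monitoring script where you feed in the timestamp of the most recent completed backup:

```python
from datetime import datetime, timedelta

# Alert threshold for a weekly backup cycle (an assumed policy, tune to taste).
MAX_AGE = timedelta(days=7)

def backup_overdue(last_backup: datetime, now: datetime,
                   max_age: timedelta = MAX_AGE) -> bool:
    """Return True if the newest backup is older than the allowed cycle."""
    return now - last_backup > max_age

# A backup from 9 days ago is overdue on a weekly cycle.
print(backup_overdue(datetime(2024, 1, 1), datetime(2024, 1, 10)))  # True
```

Wire something like this into whatever alerting you already have; a backup that silently stopped running is as bad as no backup at all.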

 

Trying to do enterprise-level storage at way less than enterprise storage prices will often lead to this level of headache, or to complete disaster. Scaling in ways that seem like cheap shortcuts to higher performance and/or greater capacity — cheaper consumer drives on low-end enterprise hardware — only seems like a good idea till 11am the day it completely implodes due to a single failure somewhere.

Enterprise storage: Fast, large, not over $100k; you can only pick two.

 

My super simple guide on how to set your budget for RAID and backup solutions.

  • RAID/redundancy budget is equal to how much downtime is worth to you.
    • Estimate how much money you'd lose during the downtime it would take to source replacement hardware and restore from backup, plus the cost of redoing any work done since the previous backup (e.g. up to a week's worth if doing weekly backups).
  • Backup budget is equal to what it would cost to recreate the data you need.
    • This is usually a very large number and can vary wildly depending on the sort of business.
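The redundancy side of that rule of thumb boils down to simple arithmetic. A sketch with made-up example numbers (revenue, restore time, and labour rates are all placeholders):

```python
def redundancy_budget(revenue_per_hour: float, restore_hours: float,
                      rework_hours: float, labour_per_hour: float) -> float:
    """Downtime cost (revenue lost while sourcing hardware and restoring)
    plus the cost of redoing work created since the previous backup."""
    return revenue_per_hour * restore_hours + labour_per_hour * rework_hours

# e.g. $500/h of revenue, 48 h to source hardware and restore, and up to
# a week (40 working hours) of lost work at $60/h to redo.
print(redundancy_budget(500, 48, 40, 60))  # 26400.0
```

If your RAID setup costs meaningfully less than that number, the redundancy is paying for itself the first time it saves you an outage.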

Chances are that unless you are constantly creating new data at 100% throughput from 9-5, incremental daily backups that finish before 9am the next day, plus full weekly backups over the weekend, should be entirely possible nowadays at non-insane cost.

 

Personally, I do like the way LSI handles RAID configs, though I've had far too many issues with "import foreign config" in the past. One LSI card = 1 to 2 simple arrays with no additional layers. For anything large scale that must have hardware RAID, I'm gonna use an HP SmartArray: still a single adapter, but with expanders and RAID 60. Though in recent years hardware RAID is being supplanted by HBAs and software-defined RAID (usually ZFS) for large data stores spanning more than a few drives, with drive mirroring and parity for uptime resilience both getting supplemented by mirror nodes.
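For a sense of what RAID 60 costs you in capacity: it stripes across RAID 6 groups, and each group gives up two drives to parity (while tolerating two failures per group). A quick calculator with example numbers:

```python
def raid60_usable(drive_tb: float, drives_per_group: int, groups: int) -> float:
    """Usable capacity of a RAID 60 array: a stripe across RAID 6 groups,
    each group losing two drives' worth of capacity to parity."""
    return (drives_per_group - 2) * groups * drive_tb

# Example: two 8-drive RAID 6 groups of 10 TB drives behind one adapter.
print(raid60_usable(10, 8, 2))  # 120 (TB usable out of 160 TB raw)
```

The same math explains why small groups are wasteful: two 4-drive groups would give you only 40 TB usable from 80 TB raw.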
