
RealJerseyRob

Member
  • Posts: 4
  • Joined
  • Last visited

Awards

This user doesn't have any awards

RealJerseyRob's Achievements

  1. From what I can deduce, your NAS setup is the bottleneck. I assume your network has a 1 Gbit backbone? (There's a rough throughput sketch at the end of this post list.) Not sure if you've outlined this in another thread, but more detail on the setup would help everyone who reads your thread give useful answers:
     • NAS RAID type & member drives
     • NAS hardware
     • Installed RAM on client & NAS
  2. Oops! I transposed those. Indeed you are correct. The extra parity is why '6 has greater fault tolerance, though write performance suffers for the same reason (there's a toy parity example at the end of this post list). In truth, '5 and '6 both carry a lot of processing overhead. If either level fails, rebuilding can be so excruciatingly slow that you might wish you'd just restored a backup to a fresh RAID 10 array in the first place.
  3. Probably the best (IMO) site for visualizing and understanding many of the intricacies of RAID is here. Ultimately, I would almost always run RAID 10, so long as the enclosure and funds allow. You'll obviously have fewer effective bytes available for storage due to the architecture (see the capacity comparison at the end of this post list), but the performance is second only to RAID 0, with tolerance for multiple drive failures. Assuming the "drives" you're referring to are mechanical, you'll most certainly want to spread the I/O load over several rather than one. Another best practice is to favor RAID 5 over RAID 6 at all costs; the latter's rebuild times are atrocious, as is its computational overhead. Kinda just throwing this out there, but for my money, I would pick up an LSI RAID card (w/BBU if the data is critical). The 9271-8i is a great all-round SAS/SATA card that's available for less than $200 USD.
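Re post 1: a back-of-the-envelope sketch (Python) of the throughput ceiling a 1 Gbit link imposes. The ~7% protocol-overhead figure is an assumption for illustration only, not a measurement of any particular setup.

```python
# Rough throughput-ceiling estimate for a NAS behind a 1 Gbit/s link.
# The ~7% protocol overhead (Ethernet/IP/TCP/SMB framing) is an assumed
# figure for illustration, not measured from any specific network.

LINK_GBIT = 1.0          # nominal link speed in Gbit/s
OVERHEAD = 0.07          # assumed protocol overhead fraction

raw_mb_s = LINK_GBIT * 1000 / 8            # 1 Gbit/s = 125 MB/s raw
usable_mb_s = raw_mb_s * (1 - OVERHEAD)    # realistic best case over SMB/NFS

print(f"Raw line rate:    {raw_mb_s:.0f} MB/s")
print(f"Usable (approx.): {usable_mb_s:.0f} MB/s")
# A single modern 7200 RPM drive can sustain well over 125 MB/s sequentially,
# so even one disk can saturate gigabit; the network becomes the limit.
```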
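Re post 2: a toy sketch (Python) of why single parity lets a RAID 5-style stripe survive one lost drive, and why every write has to pay for updating it. The block contents here are made up for the example; real controllers work on much larger stripes, and RAID 6 adds a second, independent parity on top of this.

```python
# Toy illustration of single-parity (RAID 5-style) recovery: parity is the
# XOR of the data blocks, so any one missing block can be rebuilt from the
# survivors. RAID 6 adds a second, independent parity, which is why it
# tolerates two failures but pays more on every write.

from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]     # three data blocks in one stripe
parity = xor_blocks(data)              # written alongside the data

# Simulate losing one drive's block and rebuilding it from the rest.
lost_index = 1
survivors = [blk for i, blk in enumerate(data) if i != lost_index]
rebuilt = xor_blocks(survivors + [parity])

assert rebuilt == data[lost_index]
print("Rebuilt block:", rebuilt)
```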
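Re post 3: a quick sketch (Python) of the usable-capacity trade-off mentioned above, using the textbook formulas for an array of equal-sized drives; the 8 × 4 TB example is arbitrary.

```python
# Usable capacity and guaranteed (worst-case) fault tolerance for common
# RAID levels, assuming n equal-sized drives and textbook formulas only.

def raid_summary(n_drives, drive_tb):
    return {
        "RAID 0":  (n_drives * drive_tb,       0),
        "RAID 5":  ((n_drives - 1) * drive_tb, 1),
        "RAID 6":  ((n_drives - 2) * drive_tb, 2),
        # RAID 10 gives up half the raw space; it can survive one failure
        # per mirror pair, but the guaranteed worst-case tolerance is 1.
        "RAID 10": (n_drives // 2 * drive_tb,  1),
    }

for level, (usable, tolerance) in raid_summary(8, 4).items():
    print(f"{level:8s} usable: {usable:3d} TB   guaranteed failures survived: {tolerance}")
```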