
handruin

Member
  • Posts

    117
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Contact Methods

  • Twitter
    @handruin

Profile Information

  • Gender
    Male
  • Location
    MA
  • Occupation
    Software Engineering

Recent Profile Visitors

784 profile views
  1. I don't know if my NAS build is of interest to you, but it's similar in purpose, being a Plex server and local NAS. I added my original build details in this post and then here is my post with the updated information. Here's the summary of my build:
     • 1 x Supermicro MBD-X10SL7-F-O uATX server motherboard (I got this because of the built-in 8x SAS2 (6Gbps) ports via the LSI 2308 and a BMC for remote management)
     • 1 x Intel Xeon E3-1270V3 Haswell 3.5GHz LGA 1150 (motherboard and CPU were a Newegg package deal giving about $65 off; this Xeon is essentially a Core i7-4770 with ECC memory support)
     • 2 x Crucial 16GB kit (8GBx2) DDR3L 1600MT/s (PC3-12800) DR x8 ECC UDIMM 240-pin
     • 1 x Rosewill 1.0mm-thick 4U rackmount, black metal/steel, RSV-L4411
     • 12 x HGST Deskstar NAS H3IKNAS40003272SN (0S03664) 4TB 7200 RPM 64MB cache SATA 6.0Gb/s (I got all 8 at roughly the $20-off discount/coupon from the normal price)
     • 1 x Samsung 850 Pro MZ-7KE256BW 2.5" 256GB SATA III 3D V-NAND SSD (I plan to use this for the ZFS ZIL and L2ARC since it has high durability and a 10-year warranty; see the sketch after this list)
     • 1 x Samsung 850 Pro MZ-7KE128BW 2.5" 128GB SATA III 3D V-NAND SSD (this will be for the OS and possibly other experimental ZFS stuff)
     • 1 x SeaSonic SSR-650RM 650W ATX12V/EPS12V, SLI ready, CrossFire ready, 80 PLUS Gold certified, modular, active PFC
     • 3 x C2G 27397 14" internal power extension cables
     • 3 x Athena Power CABLE-YPHD 8" Molex Y-split power cables
     • 1 x Mellanox ConnectX-2 10Gb Ethernet + SFP+ DAC cable
     Software: Xubuntu 14.10 (soon to be rebuilt with Xubuntu 16.04), ZFS raidz2, Samba, CrashPlan backup location.
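     In case it's useful context, here is a minimal sketch of how I'd expect the 850 Pro to be split into SLOG and L2ARC once the pool exists; the pool name "tank" and the partition paths are placeholders for illustration, not my actual device IDs:

         # assumes the SSD already has a small partition for SLOG and a larger one for L2ARC
         sudo zpool add tank log /dev/disk/by-id/ata-Samsung_SSD_850_PRO_256GB-part1
         sudo zpool add tank cache /dev/disk/by-id/ata-Samsung_SSD_850_PRO_256GB-part2
         # confirm the log and cache devices show up under the pool
         zpool status tank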
  2. Hardware:
     • CASE: Supermicro SC846E16-R1200B chassis with BPN-SAS2-846EL SAS2 backplane
     • PSU: Supermicro dual PWS-1K21P-1R 1200 watt PSUs
     • MB: Supermicro X9DRi-LN4F+ dual socket LGA2011 motherboard
     • CPU: 2x Intel Xeon E5-2670 (SR0H8)
     • HS: 2x Supermicro SNK-P0048P CPU coolers
     • RAM: 192GB ECC memory (24x 8GB PC3-10600R DIMMs)
     • RAID CARD 1: Supermicro SAS2308 SAS2 HBA (flashed to IT mode, fw 20.00.04.00)
     • SSD: Samsung SV843 enterprise 960GB
     • HDD 1: 20x 6TB HGST NAS 7200 RPM
     • HDD 2: 4x 1.5TB Samsung EcoGreen 5400 RPM
     • NIC: Mellanox ConnectX-2 MNPA19-XTR HP 10Gb Ethernet (SFP+ DAC cable)
     Software and configuration: The OS is Ubuntu Server 16.04 with ZFS on Linux configured. I also have Samba installed for SMB/CIFS network file sharing, and I will be using rsync to keep files synchronized between my two NAS systems. Beyond those components there are a few other basics to lock down the system and keep it protected. Given the CPU and RAM in this system, it may end up becoming my primary Plex server, taking over from my other NAS. Right now I have no SLOG or L2ARC configured until I do more performance testing. I've partitioned a little less than half of the Samsung SV843 SSD to be used for either of those tasks. I realize a dedicated SSD should be used for this purpose, but I don't have many synchronous writes to take advantage of a small SLOG, and this system has so much memory available for ARC that I think configuring an L2ARC would only detract from performance. My primary zpool is configured as 2 vdevs in raidz2 with 10 disks in each. The secondary pool will be 1 vdev in raidz with 4 disks. Both zpools will be configured with compression=lz4, and only the primary pool will use ashift=12 since it's using 4K drives (see the sketch at the end of this post).
     Usage: Backups and media warehouse. Potentially a new and/or replacement Plex server. It will also be used for other background data processing.
     Backup: This is the backup target for my other NAS, as well as a backup for data on my desktop systems.
     Additional info: This is my second large NAS; the first one is listed earlier in this thread. The Mellanox 10Gb NICs will be direct-connected between the two NAS systems to allow for higher-bandwidth syncs.
     Photos:
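     For reference, a rough sketch of how a pool laid out like that could be created, plus the kind of rsync job I'll use between the two systems; the pool name, device names, and IP address are placeholders for illustration, not my exact configuration:

         # two 10-disk raidz2 vdevs, 4K alignment, lz4 compression
         # (placeholder sdX names; in practice /dev/disk/by-id paths are safer)
         sudo zpool create -o ashift=12 -O compression=lz4 tank \
           raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj \
           raidz2 sdk sdl sdm sdn sdo sdp sdq sdr sds sdt

         # one-way sync from this NAS to the other over the direct 10Gb link
         rsync -aHv --delete /tank/backups/ 10.10.10.2:/tank/backups/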
  3. Wow, first I've seen of this style of case. What are the HDD temperatures like in that setup? I'd be fairly concerned with the drives being sandwiched together like that with no direct airflow.
  4. I liked your post...but I really didn't want to. ;-) I got all my drives in. I had to ask friends and family to help with ordering since Newegg limits orders to 5 per customer in a 48-hour window. I'm still working on getting the rest of the parts, but it's looking like it will be the following:
     • Supermicro SC846E16-R1200B chassis, dual PWS-1K21P-1R PSUs, 24x hot-swap 3.5" drive carriers, 3x 80mm midplane fans, 2x 80mm rear fans
     • Supermicro X9DRi-LN4F+ dual socket LGA2011 motherboard
     • 2x E5-2670 (SR0H8)
     • 192GB memory (24x 8GB PC3-10600R DIMMs)
     • 1x Supermicro SAS2308 (PCIe 3.0) SAS2 HBA (flashed to IT mode, fw 20.00.04.00)
     • 18 x 6TB HGST NAS HDDs
     I'm still working out the SSDs I'll use for L2ARC and OS boot. This will put me at 156TB via 12 x 4TB plus 18 x 6TB, using 30 drives total. While I wait on the chassis, I'm testing my drives inside another system to make sure they're all healthy (roughly as sketched below).
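     For anyone curious, the drive testing is roughly the following; a sketch with a placeholder device name, and note that badblocks in write mode (-w) destroys any data on the drive:

         # run a SMART long self-test, then review the results once it finishes
         sudo smartctl -t long /dev/sdb
         sudo smartctl -a /dev/sdb
         # destructive write/read surface scan; only for drives with nothing on them
         sudo badblocks -wsv /dev/sdb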
  5. Even if I don't surpass him, it's fun trying. In true LTT fashion I made a sad HDD Christmas tree from the first delivery of my drives. :-)
  6. Once I figure out a couple more components, I may surpass looney for a short amount of time until he adds more storage. I just placed an order for 18 x 6TB drives today to go with my 12 x 4TB (156TB total)...I'll post the config once I figure out a few more parts. :-)
  7. Yes, this will work, but keep in mind that if you're the one building and planning this new storage server, you'll also be the one who maintains it when it has issues. I know that sounds obvious, but keep in mind that if you go with a virtualized solution, you're putting all your eggs in a single basket. Any time there is a problem, all VMs may suffer. Any time you need to do maintenance, you need to schedule it with all VMs on the same system. I'm a huge user and fan of virtualization, but it does take more planning and thought about all the collateral components and use cases that may be affected by this consolidation. Really, what you need to define are the goals you're trying to accomplish by virtualizing versus bare-metal deployments, and make sure they're all a good fit. Based on your description, the one system that may need more attention in the configuration is the one used for recording sound. Depending on how involved and serious that system is, you may need certain IO ports or hardware devices to properly record sound in the OS. Hypervisors can pass through certain devices, but you should absolutely understand the use case of this system before recommending it be done in a VM, to ensure it will work properly. You may also want to isolate this VM onto its own LUN/storage for IO-sensitive recordings, or consider using the storage as a raw device.
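     As a rough illustration of the raw-device idea (only a sketch, not a recommendation for any particular hypervisor), with QEMU/KVM a dedicated block device can be handed to the recording VM as a raw virtio disk; the image name and disk ID below are made up:

         # boot the VM from its normal image and attach a dedicated disk as a raw device
         qemu-system-x86_64 -enable-kvm -m 4096 \
           -drive file=audio-vm.qcow2,format=qcow2,if=virtio \
           -drive file=/dev/disk/by-id/ata-EXAMPLE_RECORDING_DISK,format=raw,cache=none,if=virtio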
  8. Amazon offers unlimited personal cloud storage for $60/year, with some limited sharing functionality. If you're a Prime member I think you get 5GB to play with. See if that works for you; if it does, the price for unlimited storage is actually really good and the performance is pretty decent.
  9. I'm running out of space everywhere...
  10. Killer setup. I've been keeping my eye on that Supermicro chassis for my next build. How bad is the noise from the chassis with the two 1400W PSUs? Can you go into more detail on why you chose 4 x 7-disk vdevs striped into a single pool versus one larger vdev? This feels like the functional equivalent of a RAID 60. Why no SSD-based L2ARC with pools that large, to hold metadata if nothing else? Love the cat pic! I've had several of those over the years as "helpers" sticking their fuzzy heads into my PC-building business. :-)
  11. handruin

    Safe RAID?

    I see a difference in that one doesn't need to complicate their storage environment with FreeNAS and can simply manage ZFS directly through Linux quite easily (versus BSD). When it comes to fixing a broken system, it's FreeNAS that offers the extra challenges versus just using ZFS on Linux directly, which offers a lot of simplicity. In what you described, I'd also agree that I'd find managing the hardware RAID card easier than messing with FreeNAS. I agree that ZFS is no magic pill, but it does solve some complexities that hardware RAID controllers bring to the table and offers additional features that they don't have (snapshots, checksums, thin provisioning, etc.); a small sketch of a couple of those follows below. If a HW RAID controller dies, you'd best find an exact replacement with very close to, if not the same, firmware on it, or else you can destroy your foreign import. Next, you're completely out of luck if a HW RAID controller is no longer manufactured and you're stuck trawling eBay for a used replacement. Then you have batteries to manage if you want the write-back cache benefit, unless you bought a hybrid non-volatile version of cache protection. As drive sizes grow, some HW RAID controllers won't recognize the larger drives; I got stuck with my Dell PERC 6 cards in this situation and can't add drives past 2TB. I agree again that understanding the requirements one is trying to solve for is an important factor and can make for a better design if you have certain needs to meet.
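    To give a concrete flavor of a couple of those features, a small sketch against a hypothetical pool named "tank":

        # point-in-time snapshot of a dataset (near-instant and space-efficient)
        sudo zfs snapshot tank/media@2016-06-01
        zfs list -t snapshot
        # thin-provisioned (sparse) 500G volume that only consumes space as it is written
        sudo zfs create -s -V 500G tank/vmstore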
  12. handruin

    Safe RAID?

    In the case of ZFS, if the RAID controller and/or the motherboard (which has the controller) fails, you can re-import your ZFS pool(s) on another system, provided all the drives are reconnected to the new controller and/or motherboard. You would just want to make sure you've installed at least the same version of ZFS, or newer, on the new system in the case of a complete OS/motherboard loss. As for your second argument, don't forget the non-recoverable error rates on even enterprise drives. Those numbers actually come within the range of reality when a parity rebuild occurs, which is why many consider RAID5 (raidz) dead at this point with 4+TB drives, and even RAID6 (raidz2) is getting close to the same issue. This is separate from ZFS using checksums to protect against bitrot. A non-recoverable error will absolutely cause corruption if there is no additional parity or mirror drive from which to look up the value, because no matter what grade of drive you buy, they will eventually die, and usually when you're least prepared for it.
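    A minimal sketch of that move, assuming a pool named "tank" and that all the drives are visible on the new box:

        # on the old system, if it still boots
        sudo zpool export tank
        # on the new system: scan for importable pools, then import by name
        sudo zpool import -d /dev/disk/by-id
        sudo zpool import tank
        # use -f if the pool was never cleanly exported (e.g. the old motherboard died)
        sudo zpool import -f tank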
  13. Depends on your application of the storage. The ATTO disk benchmark isn't really a great source for analyzing IO performance. If you're only doing sustained workloads at 256MB transfer sizes the numbers look great, but where things will likely break down is when you introduce random IO into the mix at smaller block sizes. Sure, Fusion-io cards are not cheap at all, but they excel at random workloads. I was under the impression the OP was looking for a single device that could achieve this, otherwise I would have suggested something else like a Pure Storage array.
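     If you want a better picture than ATTO gives, something like fio with a small-block random mix is closer to where these devices separate themselves; a sketch with placeholder file path and sizes:

         # 4K random read/write mix, deeper queue, direct IO to bypass the page cache
         fio --name=randrw --filename=/mnt/test/fio.dat --size=4G --direct=1 \
             --rw=randrw --bs=4k --ioengine=libaio --iodepth=32 --numjobs=4 \
             --runtime=60 --time_based --group_reporting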
  14. There is a licensing issue, and that issue just keeps the two from being distributed together in the same package. The Linux kernel is licensed under the "GNU General Public License Version 2 (GPLv2)" and ZFS under the "Common Development and Distribution License (CDDL)". Both are free, open-source licenses, but because of the differences between them they cannot be included or distributed together. There is more info here if you're curious. You're right that this port of ZFS (OpenZFS) is not exactly the same as the original platform, but it is feature-comparable and the one to consider for Linux use. ZFS on Linux is also production-ready in terms of strong data integrity, stability, and performance when configured properly. Might make for a nice video in the future on the pros/cons of different file systems. ;-)
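     On Ubuntu 16.04, for example, getting ZFS on Linux going is roughly this (a sketch; package name current as of 16.04):

         # userland tools; on 16.04 the kernel module is already provided by Ubuntu
         sudo apt install zfsutils-linux
         # confirm the module is available and the tools work
         modinfo zfs | head -n 3
         sudo zpool status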
  15. I looked for the video before replying but don't see it? Is this one on YouTube yet or Vessel?