
handruin

Member
  • Posts: 117
  • Joined
  • Last visited
Reputation Activity

  1. Funny
    handruin got a reaction from marcfusch in LTT Storage Rankings   
    For me, I built my NAS (see signature) as a learning device to get better with ZFS management (for personal and job reasons).  In addition, I use it to back up all my home PCs using the local (free) features of CrashPlan.  My fiancée's PC is a photo-editing workstation for her personal business and holds terabytes of Canon RAW files from her work.  Those get backed up to my NAS (5+ TB and growing) and also to the CrashPlan cloud as off-site protection.  The cloud backup takes considerably longer, so having a local option helps stage the backups while we wait for the cloud to sync.
     
    My personal workstation also has a couple of TB of Canon RAW files from my own photo hobby, along with other content.  I have a large collection of FLAC music that I've ripped from my personal CD collection over the years, and I'm working on ripping all my DVDs and Blu-rays into MP4 files to serve up to my TV via Plex.  My workstation probably qualifies for this thread in storage space, but since it's not a dedicated storage server I can't list it.  :-)  I also run multiple Linux VMs and plan to move those over to my NAS to reduce volatility on my workstation.  I host some basic gaming servers for friends and a Mumble chat server on one Linux VM via a domain name I registered.  Those will eventually get hosted on the same machine I use as a NAS, since it has decent CPU power.
     
    I'm working on revamping my old NAS (8 x 1.5TB | Dell PERC 6/i | Server 2012 | AMD 4600+) to ship off to my dad's house so I can replicate my data there and also give him a place to back up some data locally.  I haven't started that project yet and I'm not sure how I want to manage it.  One of the 1.5TB drives failed recently and I can't find a matching replacement, so I plan to convert the system to Linux with ZFS and make the remaining 7 drives into their own pool for slower storage.  I'll possibly buy a few 4-6TB drives to add to that system in a different ZFS pool.
  2. Like
    handruin got a reaction from Mattias Edeslatt in LTT Storage Rankings   
    I liked your post...but I really didn't want to.  ;-)
     
    I got all my drives in.  I had to ask friends and family to help with ordering since Newegg has a limit of 5 per customer in a 48-hour window.  I'm still working on getting the rest of the parts, but it's looking like it will be the following:
    Supermicro SC846E16-R1200B chassis: dual PWS-1K21P-1R PSUs, 24x hot-swap 3.5" drive carriers, 3x 80mm midplane fans, 2x 80mm rear fans
    Supermicro X9DRi-LN4F+ dual-socket LGA2011 motherboard
    2x E5-2670 (SR0H8)
    192GB memory (24x 8GB PC3-10600R DIMMs)
    1x Supermicro SAS2308 (PCIe 3.0) SAS2 HBA (flashed to IT mode, fw 20.00.04.00)
    18x 6TB HGST NAS HDDs
    I'm still working out which SSDs I'll use for L2ARC and OS boot.
     
    This will put me at 156TB via 12 x 4TB and 18 x 6TB using 30 drives total.
     
    While I wait on the chassis, I'm testing my drives inside another system to make sure they're all healthy.
     
     

  3. Like
    handruin got a reaction from LazySpeck in LTT Storage Rankings   
    Hardware
     
    CASE: Supermicro SC846E16-R1200B chassis with BPN-SAS2-846EL SAS 2 backplane
    PSU: Supermicro dual PWS-1K21P-1R 1200 watt PSUs
    MB: Supermicro X9DRi-LN4F+ dual socket LGA2011 motherboard
    CPU: 2x Intel Xeon E5-2670 (SR0H8)
    HS: 2x Supermicro SNK-P0048P CPU Coolers
    RAM: 192GB ECC Memory (24x 8GB PC3-10600R DIMM)
    RAID CARD 1: Supermicro SAS2308 SAS2 HBA (flashed to IT mode, fw 20.00.04.00)
    SSD: Samsung SV843 enterprise 960GB
    HDD 1: 20x 6TB HGST NAS 7200 RPM
    HDD 2: 4x 1.5TB Samsung Ecogreen 5400 RPM
    NIC: Mellanox ConnectX 2 - MNPA19-XTR HP 10Gb ethernet (SFP+ DAC cable)
     
    Software and Configuration:
    The OS is Ubuntu Server 16.04 with ZFS on Linux configured.  I also have Samba installed for SMB/CIFS network file sharing, along with NFS.  I will be using rsync to keep files synchronized between my two NAS systems.  Beyond those components there are a few other basics to lock the system down and keep it protected.  Given the CPU and RAM in this system, it may end up becoming my primary Plex server, taking over from my other NAS.
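    For anyone curious, the Samba side is just a basic share definition; a minimal sketch of mine looks something like this (the share path and user are placeholders, not my real config):
     
        # /etc/samba/smb.conf -- example share section (placeholder path/user)
        [tank]
            path = /tank/share
            valid users = handruin
            read only = no
            browseable = yes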
     
    Right now I have no SLOG or L2ARC configured until I do more performance testing.  I've partitioned a little less than half of the Samsung SV843 SSD to be used for either of those tasks.  I realize a dedicated SSD should be used for this purpose, but I don't have many synchronous writes to take advantage of a small SLOG, and my system has so much memory available for ARC that I think configuring an L2ARC would only detract from performance.
     
    My primary zpool is configured as 2 vdevs in raidz2 with 10 disks in each.
    The secondary pool will be 1 vdev in raidz with 4 disks.
    Both zpools will be configured with compression=lz4, and only the primary pool will use ashift=12 since it's built on 4K-sector drives.
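    For reference, building pools like that from the shell is a one-liner each; a rough sketch (disk names are placeholders for the real /dev/disk/by-id paths):
     
        # primary pool: two 10-disk raidz2 vdevs, 4K sectors, lz4 compression
        zpool create -o ashift=12 -O compression=lz4 tank \
          raidz2 d1 d2 d3 d4 d5 d6 d7 d8 d9 d10 \
          raidz2 d11 d12 d13 d14 d15 d16 d17 d18 d19 d20
     
        # secondary pool: one 4-disk raidz vdev
        zpool create -O compression=lz4 backup raidz d21 d22 d23 d24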
     
    Usage:
    Backups and media warehouse.  Potentially a new and/or replacement Plex server.  It will also be used for background data processing.
     
    Backup:
    This system is the backup for my other NAS, as well as a backup for data on my desktop systems.
     
    Additional info:
    This is my second large NAS.  The first one is listed earlier in this thread.
    The Mellanox 10Gb NICs will be directly connected between the two NAS systems to allow higher-bandwidth syncs.
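    With no switch in between, each card just gets a static address on its own tiny subnet; roughly this on Ubuntu 16.04 (interface name and addresses are examples):
     
        # /etc/network/interfaces on NAS 1 (NAS 2 mirrors this with .2)
        auto enp3s0
        iface enp3s0 inet static
            address 10.10.10.1
            netmask 255.255.255.0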
     
    Photos:

  4. Like
    handruin got a reaction from looney in LTT Storage Rankings   
  5. Like
    handruin got a reaction from Mikensan in LTT Storage Rankings   
    Even if I don't surpass him, it's fun trying.  In true LTT fashion I made a sad HDD Christmas tree from the first delivery of my drives.  :-)

  6. Like
    handruin reacted to looney in LTT Storage Rankings   
    Just ordered all the materials for the upgrade, so new system will be running this hardware:
     
    Server: Supermicro Server 6017R-NRF4+
    CPU: 2x Intel XEON E5-2650v1
    RAM: 24x 8GB 1600 DDR3 ECC RDIMM  (192GB)
    SSD: 2x 512GB Samsung 850 Pro
    HBA: SAS 9200-8E HBA
    NIC: Dual port 10G SFP+ NIC
    JBOD: SC847 E16-RJBOD1
    HDD1: 37x 4TB ST4000DM000 Seagate Desktop Drive
    HDD2: 8x 4TB ST4000NM0033 Seagate Constellation ES Drive
     
    The new server will run ESXi off a USB drive.
    The 2 SSDs will be set up in RAID 1 to serve as the ESXi datastore.
    I will be running a FreeNAS VM with 8 cores and 160GB RAM allocated to it with the HBA in passthrough mode.
    There will be 2 pools: one with the desktop drives and one with the enterprise drives.
    The desktop drive pool will be 3 vdevs of 12 drives each in RAID-Z2.
    The enterprise drive pool will be 1 vdev of 8 drives in RAID-Z2.
    The remaining 4TB desktop drive will be a hot spare.
     
    I may still buy an extra small SSD for the ZIL.
     
    This will bring my ranking total to 176TB (excluding the hot spare) across 44 drives, giving me 666 storage ranking points!
    Server of the Beast!
     
  7. Like
    handruin got a reaction from brwainer in LTT Storage Rankings   
    Once I figure out a couple more components I may surpass looney for a short time, until he adds more storage.  I just placed an order for 18 x 6TB drives today to go with my 12 x 4TB (156TB total)... I'll post the config once I figure out a few more parts. 
    :-)
  8. Like
    handruin got a reaction from scottyseng in LTT Storage Rankings   
  9. Agree
    handruin reacted to SageOfSpice in Most reliable HDD (Seagate vs WD vs Toshiba)   
    Hitachi > Toshiba > WD > Seagate
  10. Like
    handruin got a reaction from RangerLunis in Recommend a cloud storage?   
    Amazon offers unlimited personal cloud storage for $60/year with some limited sharing functionality.  If you're a Prime member, I think you get 5GB to play with.  See if that works for you; if it does, the price for unlimited storage is actually really good and the performance is pretty decent.
  11. Like
    handruin reacted to MyNameIsNicola in LTT Storage Rankings   
    Hardware
    CASE: Supermicro SuperChassis 847A-R1400LPB
    PSU: 2x 1400W high-efficiency (1+1) redundant power supply with PMBus
    MB: Supermicro X9DRH-IF
    CPU: 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz
    HS: 2x Active Supermicro SNK-P0048AP
    RAM: 80GB ECC DDR3 1333
    RAID CARD 1: LSI 9211-8i 6Gb/s SAS HBA w/ "IT" firmware
    RAID CARD 2: LSI 9211-8i 6Gb/s SAS HBA w/ "IT" firmware
    RAID CARD 3: LSI 9211-8i 6Gb/s SAS HBA w/ "IT" firmware
    RAID CARD 4: LSI 9211-8i 6Gb/s SAS HBA w/ "IT" firmware
    RAID CARD 5: LSI 9211-8i 6Gb/s SAS HBA w/ "IT" firmware
    SSD 1: 2x 120GB Kingston SV300S37A120G SSD (FreeNAS boot volume)
    SSD 2: 2x 500GB Samsung 850 EVO SSD (jails and virtual machines)
    HDD 1: 7x 8TB Seagate ST8000AS0002-1NA17Z
    HDD 2: 21x 3TB Hitachi HGST HDN724030ALE640
    HDD 3: 1x 4TB Hitachi HGST HDS724040ALE640
    HDD 4: 4x 4TB Hitachi HGST HDS724040ALE640
    HDD 5: 3x 3TB Western Digital Red WDC WD30EFRX-68AX9N0
    SWITCH: Intellinet 523554 24-Port Gigabit Ethernet Rackmount Managed Switch 
    UPS: 2x APC Power-Saving Back-UPS ES 8 Outlet 700VA 230V BS 1363
    TOTAL RAW STORAGE: 149.24 terabytes
    TOTAL USABLE STORAGE: 130.69 terabytes
     
    Software and Configuration:
    My server is running FreeNAS 9.3.
    My main array consists of 4x 7-disk RAIDZ2 VDEVs striped into a single ZFS pool. My second archival array (used to sync periodic snapshots of important parts of the main pool) is a RAIDZ1 VDEV of 7x 8TB disks.
    The two smaller 120GB SSDs are in a ZFS mirror for the FreeNAS boot volume, while the two larger 500GB SSDs are in a ZFS mirror for FreeBSD jails and VirtualBox VMs. The last remaining disk is used for scratch space and may be used to replace any failed disk in the other arrays as a "warm spare". I also have 2x 4TB disks available as cold spares should any disks fail.

    Usage:
    Most of the storage is for movies and TV series, with Plex Media Server running inside one of the BSD jails. The remainder is used for personal file storage / networked home directory, backups of my colo servers, and PXE netboot server installation source (one of my FreeBSD jails is my PXE netboot server).

    Backup:
    Important files from the primary ZFS pool are backed up to the slightly smaller archival ZFS pool via periodic snapshots. The archival pool is then physically removed and taken to my storage locker, while another set of 7x 8TB disks is swapped in for the next periodic snapshot.
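    The periodic sync itself is plain incremental ZFS replication, along these lines (snapshot names are examples, not my actual schedule):
     
        # snapshot the main pool, then send the increment since the last sync
        zfs snapshot -r pool1@2016-01-27
        zfs send -R -i pool1@2016-01-01 pool1@2016-01-27 | zfs recv -Fdu archive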

    Additional info:
    My whole setup is squeezed into a 12U rackmount flight case on wheels. This makes for very easy maneuvering of the whole rack in my small "server room"/box-room, as well as being highly convenient for the next time I move home. The internal rack is mounted to the external case via heavy-duty rubber shock absorbers.
    At present the small APC UPS is overloaded and only really acts as a power-smoothing device, so I plan to purchase a suitably rated 2U rackmount UPS in the near future.
     
    brienne# (for i in `seq 0 37`; do smartctl -a /dev/da$i | grep 'Device Model:' ; done) | sort | uniq -c
      21 Device Model:     HGST HDN724030ALE640
       1 Device Model:     HGST HDS724040ALE640
       4 Device Model:     Hitachi HDS724040ALE640
       2 Device Model:     KINGSTON SV300S37A120G
       7 Device Model:     ST8000AS0002-1NA17Z
       3 Device Model:     WDC WD30EFRX-68AX9N0
    brienne#
     
    Photos:
    http://imgur.com/a/oRcNe








    [root@brienne ~]# zpool iostat -v -T d
    Wed Jan 27 20:32:20 GMT 2016
                                              capacity     operations    bandwidth
    pool                                    alloc   free   read  write   read  write
    --------------------------------------  -----  -----  -----  -----  -----  -----
    archive                                 35.8T  14.7T      0      0  3.39K     10
      raidz1                                35.8T  14.7T      0      0  3.39K     10
        gptid/fa89b7e3-b2d7-11e5-9ebd-0cc47a546fcc      -      -      0      0  1.28K      4
        gptid/fbf7981a-b2d7-11e5-9ebd-0cc47a546fcc      -      -      0      0    693      4
        gptid/fd2ef450-b2d7-11e5-9ebd-0cc47a546fcc      -      -      0      0    864      4
        gptid/fe514fff-b2d7-11e5-9ebd-0cc47a546fcc      -      -      0      0    132      4
        gptid/ff865f8b-b2d7-11e5-9ebd-0cc47a546fcc      -      -      0      0    182      5
        gptid/00ca81fe-b2d8-11e5-9ebd-0cc47a546fcc      -      -      0      0    183      5
        gptid/01ebff25-b2d8-11e5-9ebd-0cc47a546fcc      -      -      0      0    132      4
    --------------------------------------  -----  -----  -----  -----  -----  -----
    freenas-boot                            1.90G   109G      2     89  10.5K   723K
      mirror                                1.90G   109G      2     89  10.5K   723K
        gptid/ef4efad1-8017-11e5-891c-0cc47a546fcc      -      -      1     13  6.50K   723K
        gptid/ef53fac1-8017-11e5-891c-0cc47a546fcc      -      -      0     13  4.20K   723K
    --------------------------------------  -----  -----  -----  -----  -----  -----
    jails                                    361G  98.6G     49     78  1.17M   740K
      mirror                                 361G  98.6G     49     78  1.17M   740K
        gptid/71ea795a-809a-11e5-884a-0cc47a546fcc      -      -     26     44  1.16M   744K
        gptid/7223d49f-809a-11e5-884a-0cc47a546fcc      -      -     25     44  1.16M   744K
    --------------------------------------  -----  -----  -----  -----  -----  -----
    pool1                                   49.6T  26.4T     60     16  7.30M  1.24M
      raidz2                                15.4T  3.62T     18      3  2.23M   210K
        gptid/8273abf3-889d-11e5-8ab5-0cc47a546fcc      -      -      9      0   334K  46.4K
        gptid/85641e2a-889d-11e5-8ab5-0cc47a546fcc      -      -      9      0   333K  46.4K
        gptid/882bf207-889d-11e5-8ab5-0cc47a546fcc      -      -      9      0   334K  46.4K
        gptid/8b112084-889d-11e5-8ab5-0cc47a546fcc      -      -      9      0   334K  46.4K
        gptid/8e20deb6-889d-11e5-8ab5-0cc47a546fcc      -      -      9      0   334K  46.4K
        gptid/91252e88-889d-11e5-8ab5-0cc47a546fcc      -      -      9      0   334K  46.4K
        gptid/940cb743-889d-11e5-8ab5-0cc47a546fcc      -      -      9      0   334K  46.4K
      raidz2                                15.4T  3.57T     18      3  2.30M   291K
        gptid/5134e3f9-88a2-11e5-8582-0cc47a546fcc      -      -      9      1   345K  64.0K
        gptid/5413e683-88a2-11e5-8582-0cc47a546fcc      -      -      9      1   345K  64.0K
        gptid/56fc5d0d-88a2-11e5-8582-0cc47a546fcc      -      -      9      1   345K  64.0K
        gptid/59dfecca-88a2-11e5-8582-0cc47a546fcc      -      -      9      1   345K  64.0K
        gptid/5cb31b99-88a2-11e5-8582-0cc47a546fcc      -      -      9      1   345K  64.0K
        gptid/5f9c7a47-88a2-11e5-8582-0cc47a546fcc      -      -      9      1   345K  64.0K
        gptid/62810541-88a2-11e5-8582-0cc47a546fcc      -      -      9      1   345K  64.0K
      raidz2                                17.0T  2.00T     18      3  2.28M   263K
        gptid/18451b24-8ee7-11e5-8fb5-0cc47a546fcc      -      -      9      1   342K  57.6K
        gptid/1b2f4e11-8ee7-11e5-8fb5-0cc47a546fcc      -      -      9      1   342K  57.6K
        gptid/1e285370-8ee7-11e5-8fb5-0cc47a546fcc      -      -      9      1   342K  57.6K
        gptid/210413c7-8ee7-11e5-8fb5-0cc47a546fcc      -      -      9      1   342K  57.6K
        gptid/23de6209-8ee7-11e5-8fb5-0cc47a546fcc      -      -      9      1   342K  57.6K
        gptid/26bd5abc-8ee7-11e5-8fb5-0cc47a546fcc      -      -      9      1   342K  57.6K
        gptid/29b1b41f-8ee7-11e5-8fb5-0cc47a546fcc      -      -      9      1   342K  57.6K
      raidz2                                1.77T  17.2T      4      5   502K   509K
        gptid/94bf7a70-b7b6-11e5-8976-0cc47a546fcc      -      -      2      1  73.2K   111K
        gptid/97f04c90-b7b6-11e5-8976-0cc47a546fcc      -      -      2      1  73.2K   111K
        gptid/9b0ad9ef-b7b6-11e5-8976-0cc47a546fcc      -      -      2      1  73.3K   111K
        gptid/9e2b3065-b7b6-11e5-8976-0cc47a546fcc      -      -      2      1  73.2K   111K
        gptid/9feb05f9-b7b6-11e5-8976-0cc47a546fcc      -      -      1      1  73.4K   111K
        gptid/a11775af-b7b6-11e5-8976-0cc47a546fcc      -      -      1      1  73.4K   111K
        gptid/a245e539-b7b6-11e5-8976-0cc47a546fcc      -      -      1      1  73.4K   111K
    --------------------------------------  -----  -----  -----  -----  -----  -----
    pool2                                    353G  3.28T     16     12  2.00M   651K
      gptid/9ccc277d-826c-11e5-8238-0cc47a546fcc   353G  3.28T     16     12  2.00M   651K
    --------------------------------------  -----  -----  -----  -----  -----  -----
     
    [root@brienne ~]# for dev in ada0 ada1 $(perl -e 'print "da$_ " for 0 .. 37'); do echo -n "/dev/$dev"; smartctl -a /dev/$dev | perl -ne 'if (/(?:Device Model|User Capacity):\s+(.+?)\s*$/) { print "\t$1"; }'; echo; done
    /dev/ada0   Samsung SSD 850 EVO 500GB   500,107,862,016 bytes [500 GB]
    /dev/ada1   Samsung SSD 850 EVO 500GB   500,107,862,016 bytes [500 GB]
    /dev/da0    HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da1    HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da2    HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da3    HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da4    HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da5    HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da6    HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da7    HGST HDS724040ALE640        4,000,787,030,016 bytes [4.00 TB]
    /dev/da8    HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da9    HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da10   HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da11   HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da12   HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da13   HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da14   HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da15   HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da16   KINGSTON SV300S37A120G      120,034,123,776 bytes [120 GB]
    /dev/da17   KINGSTON SV300S37A120G      120,034,123,776 bytes [120 GB]
    /dev/da18   ST8000AS0002-1NA17Z         8,001,563,222,016 bytes [8.00 TB]
    /dev/da19   ST8000AS0002-1NA17Z         8,001,563,222,016 bytes [8.00 TB]
    /dev/da20   ST8000AS0002-1NA17Z         8,001,563,222,016 bytes [8.00 TB]
    /dev/da21   ST8000AS0002-1NA17Z         8,001,563,222,016 bytes [8.00 TB]
    /dev/da22   ST8000AS0002-1NA17Z         8,001,563,222,016 bytes [8.00 TB]
    /dev/da23   WDC WD30EFRX-68AX9N0        3,000,592,982,016 bytes [3.00 TB]
    /dev/da24   WDC WD30EFRX-68AX9N0        3,000,592,982,016 bytes [3.00 TB]
    /dev/da25   WDC WD30EFRX-68AX9N0        3,000,592,982,016 bytes [3.00 TB]
    /dev/da26   ST8000AS0002-1NA17Z         8,001,563,222,016 bytes [8.00 TB]
    /dev/da27   ST8000AS0002-1NA17Z         8,001,563,222,016 bytes [8.00 TB]
    /dev/da28   Hitachi HDS724040ALE640     4,000,787,030,016 bytes [4.00 TB]
    /dev/da29   Hitachi HDS724040ALE640     4,000,787,030,016 bytes [4.00 TB]
    /dev/da30   HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da31   HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da32   HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da33   HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da34   HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da35   HGST HDN724030ALE640        3,000,592,982,016 bytes [3.00 TB]
    /dev/da36   Hitachi HDS724040ALE640     4,000,787,030,016 bytes [4.00 TB]
    /dev/da37   Hitachi HDS724040ALE640     4,000,787,030,016 bytes [4.00 TB]
    [root@brienne ~]#
  12. Like
    handruin got a reaction from alpenwasser in Safe RAID?   
    I see a difference in that you don't need to complicate your storage environment with FreeNAS; you can simply manage ZFS directly on Linux quite easily (versus BSD).  When it comes to fixing a broken system, it's FreeNAS that adds the extra challenges versus just using ZFS on Linux directly, which offers a lot of simplicity.  For what you described, I'd also agree that I'd find managing the hardware RAID card easier than messing with FreeNAS.
     
    I'm in agreement that ZFS is no magic pill, but it does solve some of the complexities that hardware RAID controllers bring to the table and offers additional features that aren't otherwise available (snapshots, checksums, thin provisioning, etc.).  If a HW RAID controller dies, you'd best find an exact replacement with very close if not the same firmware on it, or else you can destroy your foreign import.  You're completely out of luck if a HW RAID controller is no longer manufactured; then you're stuck trawling eBay for a used replacement.  Then you have batteries to manage if you want the write-back cache benefit, unless you bought a hybrid non-volatile version of cache protection.  As drive sizes grow, some HW RAID controllers won't recognize the larger drives.  I got stuck in this situation with my Dell PERC 6 cards: I can't add drives past 2TB. 
     
    I agree again that understanding the requirements one is trying to solve for is an important factor and can make for a better design if you have certain needs to meet.
  13. Like
    handruin got a reaction from leadeater in Safe RAID?   
  14. Like
    handruin got a reaction from leadeater in Today's 1GBps Storage Vault Video   
    There is a licensing issue, and that issue just keeps the two from being distributed together in the same package.  The Linux kernel is under the GNU General Public License version 2 (GPLv2) and ZFS is under the Common Development and Distribution License (CDDL).  Both are free open-source licenses, but because of the differences between them they cannot be included or distributed together.  There is more info here if you're curious.
     
    You're right that this port of ZFS (OpenZFS) is not exactly the same as the original platform, but it is feature-complete, comparable, and the one to consider for Linux use.  ZFS on Linux is also production-ready in terms of strong data integrity, stability, and performance when configured properly.  Might make for a nice video in the future on the pros/cons of different file systems.  ;-)
  15. Like
    handruin got a reaction from scottyseng in LTT Storage Rankings   
    I over-specced the CPU on my NAS also, and I'm glad I did.  I now run a virtual machine on my NAS along with a Plex server.  On top of that, it uses spare CPU cycles to run HandBrake to convert movies for me.
  16. Like
    handruin got a reaction from Ch1ng0n in LTT Storage Rankings   
    Here is my second NAS.  It's configured with 32TB raw space.
     
    Hardware:
    1 x Supermicro MBD-X10SL7-F-O uATX server motherboard (I got this because of the built-in 8x SAS2 (6Gb/s) ports via the LSI 2308 and a BMC for remote management)
    1 x Intel Xeon E3-1270V3 Haswell 3.5GHz LGA 1150 (motherboard and CPU were a Newegg package deal, giving some $65 off; this Xeon is essentially a Core i7-4770 with ECC memory support)
    2 x Crucial 16GB Kit (8GBx2) DDR3L 1600MT/s (PC3-12800) DR x8 ECC UDIMM 240-Pin
    1 x Rosewill 1.0 mm Thickness 4U Rackmount, Black Metal/Steel RSV-L4411
    8 x HGST Deskstar NAS H3IKNAS40003272SN (0S03664) 4TB 7200 RPM 64MB cache SATA 6.0Gb/s (I got all 8 at roughly $20 off the normal price via discount/coupon)
    1 x SAMSUNG 850 Pro Series MZ-7KE256BW 2.5" 256GB SATA III 3-D Vertical Internal Solid State Drive (SSD) (I plan to use this for ZFS ZIL and L2ARC since it has high durability and a 10-year warranty)
    1 x SAMSUNG 850 Pro Series MZ-7KE128BW 2.5" 128GB SATA III 3-D Vertical Internal Solid State Drive (SSD) (This will be for the OS and possible other experimental stuff for ZFS)
    1 x SeaSonic SSR-650RM 650W ATX12V / EPS12V SLI Ready CrossFire Ready 80 PLUS GOLD Certified Modular Active PFC
    3 x C2G 27397 14" Internal Power Extension Cable
    3 x Athena Power CABLE-YPHD 8" Molex Y Splitted Power Cable
    Software:
    OS: Xubuntu 14.04.1 LTS 64-bit
    ZFS
    Samba
    Usage:
    Backups, movie repository, music repository, learning.
    Backup:
    None at the moment, but possibly CrashPlan at some point.
    Additional Info:
     
    CPU/MB/RAM
    I decided to go with a full socket 1150 motherboard vs some of those Intel Atom setups that are popular. I found a combo deal on Newegg which combined the Supermicro X10SL7-F-O and an Intel Xeon E3-1270V3 (Haswell). I added 16GB (2 x 8GB) of ECC RAM and plan to increase that to 32GB shortly, which is why I listed 32GB in the build list above. I went with a bit more CPU power than I originally planned because I wanted to give ZFS enough and still have some extra for other processing work in the future. I'll be running Samba/NFS to move data to other systems in my house. I also plan to run some media server components once I've digitized movies onto this NAS.
    The Supermicro board sold this config for me. I did a lot of reading and research on popular configs, and this board won me over. It comes with a built-in LSI 2308 adapter, giving me 8 SAS 6Gb/s ports plus the 6 built-in ports on the motherboard. The LSI adapter works in IT mode, so I don't have to configure all the drives as 8 x RAID 0 to get the OS to see them; basically they're all just JBOD. All the drives were seen right away by the OS and it was painless. When I priced out other configurations they were more expensive and more quirky than this setup. The X10SL7-F-O also comes with a full IPMI 2.0 BMC for easy remote management (and it works awesome). There are 3 x 1Gb NICs out the back, one of which is for the BMC. I can eventually team the two non-BMC NICs with my layer 2 switch to play with higher amounts of concurrency.
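    If I do get around to teaming them, on Ubuntu that's an ifenslave bond, roughly like this (interface names are examples, and the switch side needs LACP configured):
     
        # /etc/network/interfaces -- LACP bond of the two non-BMC NICs
        auto bond0
        iface bond0 inet dhcp
            bond-slaves eth0 eth1
            bond-mode 802.3ad
            bond-miimon 100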
    Case
    After a long search for the right case for me, I chose the Rosewill L4411. It's an interesting mix of good space, a decent number of hot-swap bays, good cooling, and a relatively low price. The case comes with 3 x 120mm fans, 2 x 80mm fans, 12 x SATA cables, a front dust filter, and a metal locking front panel. It's very roomy inside and can even be rack-mounted if needed. With everything installed and running, the case is very quiet; I'd have no issue putting it on my desk next to me if I wanted. The drives seem to be cooled very well by the 3 x 120mm fans (I posted temps a bit further down). The one negative I've seen with this case is that the hot-swap bays won't recognize the Samsung 850 Pro SSDs. This isn't a huge issue because I wasn't originally planning to mount them in the bays, but it was a surprise nonetheless; all the info I read said the hot-swap bays were simple pass-through. The SSDs are free-floating at the moment, but I plan to mount them with sticky Velcro for simplicity.
    HDDs
    I chose to go with 8 x HGST 4TB NAS for this build. I've had good luck with these in other builds and I've seen other decent reviews of them. I may decide to max out the bays on my case and add 4 more to the config down the road. If I decide to grow this larger than 12 drives, I'll look into replacing the case. That will also mean adding another adapter into the x8 PCIe 3.0 slot which gives me further expansion if needed.
    SSDs
    I imagine several of you will question why I added two higher-end Samsung 850 Pro SSDs to a NAS device. I did this to experiment with various things. The 128GB SSD is being used as a boot drive for now, and I'll likely use it to stage other media-related work. The 256GB SSD is intended for experimenting with a ZFS SLOG and also an L2ARC configuration. It's way too big for a SLOG, but the L2ARC could take advantage of the size. Both are likely unnecessary in my home environment, but I'm using them to learn and experiment. I chose the 850 Pro because of its increased durability and 10-year warranty; given the nature of L2ARC and SLOG, it will possibly see more IO than normal, so I went with a more durable drive.
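    Attaching the SSD for those experiments is a one-liner each when I get there; a rough sketch (pool and partition names are placeholders):
     
        # add a SLOG (log vdev) and an L2ARC (cache vdev) to the pool
        zpool add tank log   /dev/disk/by-id/ata-Samsung_850_Pro-part1
        zpool add tank cache /dev/disk/by-id/ata-Samsung_850_Pro-part2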
    Power supply
    I went with Seasonic in a 650W 80 plus gold for this build. This will give me decent efficiency and some room to grow. It's a bit overkill; I know.
    OS
    I'm going to play around with the OS but for now I chose Xubuntu 14.04.1 LTS 64-bit since I'm familiar with it. It may not be the best option but I'd like to experiment and find out for myself before putting this into full production in my house.
     
    Pictures:
    Full album on imgur.  

  17. Like
    handruin got a reaction from alpenwasser in LTT Storage Rankings   
    I did an update to my existing NAS over the past week.  I added 4 more HGST 4TB NAS drives to max out the config for this case.  That increases the capacity from 32TB to 48TB of raw space (48TB + 384GB of internal SSD storage), and I rebuilt the ZFS pool.  It's configured as raidz2 and has compression enabled. 
     
    I've updated the OS to Xubuntu 14.10, which now runs on the 256GB Samsung 850 Pro, and the L2ARC is on the 128GB Samsung 850 Pro.  I secured the SSDs to the case tray with 3M Velcro tape and added the additional RAM, maxing it out at 32GB ECC.  I also tried to clean up all the SATA cables since I have 14 of them to manage.  I love the port density of this motherboard, but I wouldn't mind breakout cables to reduce some of the mess.  Fortunately it's laid out such that it doesn't obstruct airflow.
     
    I'm using the NAS as my own Plex server.  I also aggregate all the backups from the workstations in the house using CrashPlan.  It also hosts another Linux VM which runs a couple of chat servers and gaming servers.
     
    My original build post is here.  New config with pictures and proof attached.
  18. Like
    handruin got a reaction from MrBucket101 in LTT Storage Rankings   
  19. Like
    handruin reacted to Dzzope in Can you teach me these terms?   
    Best advice:
     
    Go listen to a bunch of headphones and pick the best-sounding (or best sound for the price) that YOU like.
     
    The terms are 2 things: tech specifications and descriptions of the sound. The thing is, tech specs don't tell you how they sound, and descriptions are subjective to the listener.
     
    Mids: the sounds in the middle of the range.
    Highs: the sounds at the top of the range (cymbals, snare and such).
    Lows: bass.
     
    That's about all I can be bothered explaining... Google them; there will be a wealth of information on all of it.
     
    Oh, and break your sentences apart some; keep relevant stuff together. It makes it much easier to read.
  20. Like
    handruin got a reaction from Oktyabr in Two channel stereo amp for speakers, advice needed   
    What about the Emotiva mini-X a-100?  You can also buy through Amazon if you don't want to go direct through Emotiva.  There are reviews all over the net.
     
  21. Like
    handruin got a reaction from ShearMe in Equipment Needed For AT2020 XLR Microphone?   
    I purchased a Behringer XENYX 302USB when I was working on a project that used a mic with XLR.  It's simple, with a limited number of channels (5), but it connects as a USB device to your computer, which makes it function like a sound card.  There are much better mixers out there, but for the price this one is fairly capable and has an XLR connection for a microphone.
     
    I know it's a little above your quoted budget, but maybe something like a Rode NT1 kit condenser microphone?  The reviews I've watched/listened to left me wanting this mic.  A little closer to your budget could be the Rode NT1-A kit.
  22. Like
    handruin reacted to MrBucket101 in LTT Storage Rankings   
    CLICK HERE FOR ORIGINAL POST
    CLICK HERE FOR UPDATE 1
    CLICK HERE FOR UPDATE 3
    CLICK HERE FOR UPDATE 4
     
    UPDATE #2:
     
    If you don't care to read my horror story skip down to the second bolded sentence.
     
    Things weren't looking good for me. I nearly lost ALL my data, and the root cause of it all was my decision to use cheap consumer drives in an always-on RAID environment. Heed my warning: STAY AWAY FROM THE SEAGATE ST3000DM001.
     
    A little background story: I originally bought 8 x 3TB ST3000DM001 and was running them in RAID 6. Late November/early December I ran into a massive string of failures, losing 3 drives in 3 weeks. I had a spare available, and thankfully all of them were covered under warranty, so I got them replaced for free.
     
    2 weeks ago I started to run out of space on my array; I had 1TB left out of 16.3TB. So I got rid of my dedicated hot spare and added it to the array. This is where shit hit the fan. The expansion should have taken only 3 days, as it had in the past, but it took a grand total of 9 days, which definitely had me worried. But I wasn't going to disturb anything. According to the logs, 10 minutes before the expansion finished, another drive died on me (that makes 4 of the original 8 within 2 years). I didn't think much of it at the time; the expansion finished and MegaRAID reported the new size as 19TB, just degraded.  So this time I bought a nice HGST 3TB NAS drive and put that in to replace the failed drive. But when the rebuild finished, things took a turn for the worse. The OS could only see the array as 16.3TB, as if the expansion hadn't actually worked. I know that when you expand you also have to move the GPT data to the end of the disk and then expand the partition, but even so the OS reported the drive was only 16.3TB.
     
    On top of this, the alarm on the card would not stop going off. I scoured the logs and saw no problems that had not been taken care of, yet something was clearly wrong. The alarm would not turn off unless I silenced it manually, and the OS could not see the expansion.
     
    At this point I had no effing idea what was going on with my array. But the OS could still run and I could still access my data, so I left the server alone for a couple of days while I did some research. Unfortunately, by this point I was getting non-stop kernel panics. The OS reported that the device had taken longer than 120s to respond, so the kernel halted waiting for the array to respond. The entire OS locked up and I was forced to hard reboot. 
     
    This kept up for the next 2 days, and the random kernel panics and hard resets corrupted my filesystem... then the panic set in.
     
    I was able to repair the damage and verify that my data was intact. But I just couldn't take this any longer; my stomach was in knots from nearly losing all of my data from the past 8 years.
     
    I snapped and bought new drives. I don't like to do this, but I had to throw in the towel. The system had beaten me and I could not stand to lose my data.
     
    SOOOOO on to the juicy stuff.
     
    I bought 8 x HGST 4TB NAS drives. I picked these over the WD Red for a number of reasons. These drives are comparable to the WD Red Pro line but don't have the price tag to match. They're 7200rpm and come with a rotational vibration sensor, something the WD Reds don't have. On top of that, the drive supports TLER, is rated for 1 million power-on hours, and comes with a 3-year warranty, all identical to the WD Red. And all this doesn't come at much of a price premium: on Newegg, the 4TB WD Red costs $166, where these drives are around $175. I managed to get 5 of my 4TB drives on sale for $160, then had to pay $175 for the last 3 since the coupon had a limit of 5 drives.
     
    Since I was buying new drives, I decided to buy another SSD so I could run my SSD cache in RAID 1 and safely enable write caching. I just got a Crucial MX100; my other SSD is an OCZ Vertex 4, so I wasn't too worried about matching performance. Basically, the cache drives are used to speed up read/write speeds on my array. Data read from the array once is moved onto the SSD cache for faster access times. When I'm writing data, the controller writes it to the SSD, reports back to the OS that the transfer is done, and then moves the data from the SSD to the disks at a more convenient time. This feature REALLY speeds things up. In some rough testing I was getting around 1.2GB/s write speed and 1.7GB/s read speed. I didn't do much to validate the results, but the numbers were enough to convince me it was working. Previously I was getting around 600MB/s write and 800MB/s read. Fun fact: according to the manual, my RAID card tops out at 1.8GB/s.
     
    I put the new drives in my system, set them up in RAID 6, and used rsync to transfer the content from the bad array to the new one. rsync is nice in that it will checksum the transferred file against the original to make sure there was no issue with the transfer.
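    The transfer itself was basically one command plus a verification pass, roughly this (mount points are examples):
     
        # -a preserves permissions/times, -H keeps hard links
        rsync -aH --progress /mnt/old-array/ /mnt/new-array/
        # verification re-run: --checksum compares full file hashes, -n just lists mismatches
        rsync -aHn --checksum /mnt/old-array/ /mnt/new-array/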
     
    Once I had migrated my data off the bad array, I booted back into the card's BIOS and deleted the array. As soon as I did that, the alarm finally stopped going off. Whatever it was that had gone wrong had now been fixed... with money, lol.
     
    My OS drive got corrupted by all the kernel panics as well, and I didn't care to rescue my data from it. I had an image from mid-January 2015 that I just copied back onto the system.
     
    I plan on taking the crappy 3TB drives and individually filling them up with my data; then I'm going to put them back in the box my new hard drives came in and store them in my basement. This will give me a dated offline backup in case of any future catastrophe.
     
    I also took the 3TB HGST NAS drive and replaced the previous 2TB drive I was using for downloads.
     
    To make things easier for those who haven't been keeping up with everything, here is my current configuration.
     
    Hardware:
    SSD 1: 2 x 120GB SAMSUNG 840 EVO (RAID 1 for the OS)
    SSD 2: 1 x 128GB OCZ Vertex 4
    SSD 3: 1 x 128GB Crucial MX100 (RAID 1 w/ SSD 2 for SSD cache)
    HDD 1: 1 x 320GB WD Black 2.5" (for storing VM disks)
    HDD 2: 1 x 250GB Hitachi Travelstar 5k500 2.5" (for backing up my /home/ folder and some application config files)
    HDD 3: 1 x 3TB HGST Deskstar NAS
    HDD 4: 8 x 4TB HGST Deskstar NAS (RAID 6)
     
    (36.066TB, ~36TB)
     
    And now for the pr0n!!!
     
    The graveyard
  23. Like
    handruin got a reaction from ictdude1 in POST YOUR CRYSTALDISKINFO RESULTS HERE   
    My Samsung 1TB drives are some of the oldest I have that are still in service in my desktop.  Looks like both have over 40K hours on them.  I leave my system on 24x7, which you could probably tell from the number of power-on hours versus the number of power cycles during that time.  See my signature for all the drive details.
     
  24. Like
    handruin got a reaction from babadoctor in Copying/Moving which is faster?   
    I haven't tried this, but this was the first return from a Google search.  This was a second return from Google.
  25. Like
    handruin got a reaction from Lewellyn in Copying/Moving which is faster?   
    When you move a file within the same physical drive and the same formatted partition (assuming NTFS for this example), you're just updating a reference in the Master File Table (MFT), which the filesystem uses to track every file and directory, and this is very fast. 
     
    When you copy and paste a file or directory, you are creating a clone of the data, which takes a lot more time because you're duplicating the bits onto your physical media as well as updating the filesystem to store the new information.  The OS, filesystem, and drive have to read and write all the data.
     
    If you move a file or directory from one physical drive to another, it behaves like a copy followed by a delete of the original source, because the data has to traverse to different physical media with a different MFT.
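    You can see the difference yourself with a big file: a same-volume move returns almost instantly, while a copy scales with file size.  Paths here are examples, shown with Unix commands, but the principle is the same on NTFS:
     
        # same-volume move: only filesystem metadata (the MFT entry on NTFS) changes
        time mv /data/big-video.mkv /data/archive/big-video.mkv    # near-instant
     
        # copy: every byte is read and rewritten
        time cp /data/archive/big-video.mkv /backup/big-video.mkv  # time grows with file size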