
MyNameIsNicola

Member
  • Posts

    21
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Profile Information

  • Gender
    Female
  • Location
    Cambridge, United Kingdom
  • Occupation
    "Computers"

Recent Profile Visitors

1,194 profile views

MyNameIsNicola's Achievements

  1. https://arstechnica.com/information-technology/2018/04/nasty-malware-campaign-using-thousands-of-hacked-sites-hid-for-months/
     https://blog.malwarebytes.com/threat-analysis/2018/04/fakeupdates-campaign-leverages-multiple-website-platforms/
  2. Mini Update: I needed to expand my archive pool to be able to properly back up my primary data pool1, so I swapped out the single 4TB pool2 disk for an additional 8TB disk to extend archive (which nicely tips me over 150TB of raw storage - woot). ;-) -- I've also now got a better solution for UPS, with a 2U rack mount APC Smart-UPS 1000 watt / 1500 VA, which is currently sitting at 55% load. Feels much better now. Phew.
     2x 120 GB + 2x 500 GB + 24x 3.00 TB + 4x 4.00 TB + 8x 8.00 TB = TOTAL RAW STORAGE: 153.24 terabytes
     57 TB + 111 GB + 460 GB + 76 TB = TOTAL USABLE STORAGE: 133.57 terabytes
  3. The network is dual-bonded 10 gig and is running as close to theoretical maximum as I would expect when I test it by just spewing garbage over UDP. I'm interested in synthetic benchmarks of sequential read, sequential write, mixed sequential read/write, random I/O, small files, large files, many large files in a directory, many small files in a directory, a few files in many directories, locking performance, filesystem traversal performance, with and without client filesystem caching etc etc.
  4. To be clear, ZFS without ECC memory is still less likely to result in data corruption than most other filesystems without ECC, because it's a copy-on-write filesystem with additional checksum validation on all data. The oft-quoted 1GB of RAM per TB of disk space isn't a strict requirement, but it's always good to have more RAM. The Linux implementation of ZFS isn't quite as mature as the one in BSD at the moment, which is a shame. It is perfectly serviceable though.
  5. FreeNAS will be just fine and dandy with 24GB of RAM. It'll use as much RAM as it can as a primary disk cache. I can't see that SATA II would be a problem for you with Plex as your main workload, unless you're running lots of independent HD streams at once. If you can, bond your two gigabit ports together with LACP on your switch (see the sketch after this list).
  6. I've Googled around and found a number of different network filesystem throughput benchmarking tools out there, but I just wanted to get some feedback about personal favourites and why. (I want to get some better stats on different types of synthetic workloads beyond what I observe from actual real-life day-to-day usage; see the sketch after this list for the kind of runs I mean.) Preach to me, people! :-) k thnx bye x
  7. I would trust the FreeBSD (used by FreeNAS) implementation of ZFS over BTRFS. Contrary to popular belief, you do not need large gobs of RAM or ECC memory for ZFS; but just like any other server, it never hurts to have more. I'm using my FreeNAS host as my primary storage array for my Windows and Mac desktops now. It delivers faster read and write performance over the network than my local Samsung PCIe SSDs do in my desktop. It also runs about a dozen virtual machines and jails now, including real-time transcoding of video streams for Plex. FreeNAS every time if you ask me.
  8. I think I've fixed the script. I don't have an identical setup to test against, but I think it'll parse the output for those scenarios now. https://nicolaw.uk/freenas_disk_info.pl It also now outputs the adaptor, bus, target and lun that the disk is connected to. Especially useful for me given how many different HBAs I have. I've tidied up the code a little more to run with full taint checking etc. to be a little safer (not that there should be any risk anyway). Let me know if you get any warnings in the output.
  9. Mmm, interesting. Would you mind messaging me the output you get for that line and the line above and below it? (Anonymise anything as you wish, of course.) I'll see if I can fix the script, as it shouldn't be doing that. I've obviously made a silly assumption or missed an edge case in my parsing logic.
  10. I've just updated the script on my homepage/wiki so that the first row printed gives you the column names: https://nicolaw.uk/freenas_disk_info.pl The first column that you mention is a WWN (see https://en.wikipedia.org/wiki/World_Wide_Name). The second column is the FreeBSD GEOM GPTID of the partitions on that device that are assigned to your ZFS VDEVs (see the output of the glabel list and zpool status -v commands, or the sketch after this list). I find this information handy to have a copy of in an emergency, because it means you'll have all the information about any given disk to hand in case you need to replace one. (I've seen some disks that annoyingly only have a barcode of the WWN on the exterior of the drive and not the serial, or perhaps the other way around.) I just like belt-and-braces record keeping when you're dealing with a hoard of precious data pr0n. I hope this helps!
  11. Yup. Automatic snapshots are just so convenient and simple once you get them set up. I really love the features that come with ZFS out of the box. It really is the filesystem that the world should have had 20 years ago!
      I've reused a number of components that I already had (mostly disks and a few other bits here and there), so I'm not really sure of the total cost with recycled old servers. At a guess, I'd have to say around £3,000. Ideally I would have built two identical machines and performed complete snapshot replication between the two, but money doesn't really permit.
      The case is a little bit of a squeeze to connect all 9x SAS SFF-8087 cables up to the backplanes; cable routing isn't especially easy with that many cables! I think Supermicro envisioned most people using the backplanes with a single SFF-8087 with contention, but I didn't want to compromise as I'd already sourced the HBAs for a really good, cheap price.
      I can see that I may want to put a 10gig network card in later on down the line, so I'm kicking myself for not buying the slightly more expensive server motherboard that had dual 10gig NICs rather than the dual 1gig NICs. Still, I was trying to be frugal with the initial components, and I always have the option to spend the money later on when I decide I really need to.
      I'm very happy with it so far. It's leaps and bounds better than the hotch-potch setup I had before (a QNAP TS-859 with 8x 2TB disks in RAID6, and two 4x 4TB-disk RAID5 DASes connected via eSATA <-> USB3 to a TranquilPC NFS head).
  12. The noise is perfectly acceptable when the door is closed. I have Zabbix monitoring of every temperature sensor I can get my grubby little fingers on just in case keeping the door shut makes it a bit too warm in there. So far so good, but I've not had a summer in this house with that setup yet so it remains to be seen if it will be feasible to keep the door closed over the summer or not.
      Yes, the primary ZFS pool is analogous to a RAID60 configuration. I wasn't comfortable with the redundancy of a 28 disk RAIDZ3 VDEV, and I wanted better capacity efficiency than a set of striped mirrors, so striping some smaller RAIDZ2 VDEVs seemed like a reasonable trade off for capacity efficiency, redundancy and performance.
      Zabbix has indicated a 2 gigabytes/second read from the array when I was poking around with some data on a local BSD jail, but given that I only have two un-bonded gigabit NICs on the machine, I'm unlikely to ever stress the disks. I'll do some performance testing at some point and see what peak read and write throughput I can get on each ZFS pool and post them later in the week.
      Given the relatively low load on the pools and the large L1ARC in the 80GB of RAM, I didn't feel that an SSD L2ARC would have added much. In any case, I've filled all the internal and external disk bays so there's not currently anywhere to put an additional set of SSDs for an L2ARC anyway.
      Kitty cats power teh Interwebz, so it seems rude not to pay out kitty taxes when posting. ;-) heh
  13. Hardware
      CASE: Supermicro SuperChassis 847A-R1400LPB
      PSU: 2x 1400W high-efficiency (1+1) redundant power supply with PMBus
      MB: Supermicro X9DRH-IF
      CPU: 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz
      HS: 2x Active Supermicro SNK-P0048AP
      RAM: 80GB ECC DDR3 1333
      RAID CARD 1: LSI 9211-8i 6Gb/s SAS HBA w/ "IT" firmware
      RAID CARD 2: LSI 9211-8i 6Gb/s SAS HBA w/ "IT" firmware
      RAID CARD 3: LSI 9211-8i 6Gb/s SAS HBA w/ "IT" firmware
      RAID CARD 4: LSI 9211-8i 6Gb/s SAS HBA w/ "IT" firmware
      RAID CARD 5: LSI 9211-8i 6Gb/s SAS HBA w/ "IT" firmware
      SSD 1: 2x 120GB Kingston SV300S37A120G SSD (FreeNAS boot volume)
      SSD 2: 2x 500GB Samsung 850 EVO SSD (jails and virtual machines)
      HDD 1: 7x 8TB Seagate ST8000AS0002-1NA17Z
      HDD 2: 21x 3TB Hitachi HGST HDN724030ALE640
      HDD 3: 1x 4TB Hitachi HGST HDS724040ALE640
      HDD 4: 4x 4TB Hitachi HGST HDS724040ALE640
      HDD 5: 3x 3TB Western Digital Red WDC WD30EFRX-68AX9N0
      SWITCH: Intellinet 523554 24-Port Gigabit Ethernet Rackmount Managed Switch
      UPS: 2x APC Power-Saving Back-UPS ES 8 Outlet 700VA 230V BS 1363
      TOTAL RAW STORAGE: 149.24 terabytes
      TOTAL USABLE STORAGE: 130.69 terabytes

      Software and Configuration: My server is running FreeNAS 9.3. My main array consists of 4x 7-disk RAIDZ2 VDEVs striped into a single ZFS pool. My second archival array (which is used to sync periodic snapshots of important parts of the main pool) is a RAIDZ1 VDEV of 7x 8TB disks. The two smaller 120GB SSDs are in a ZFS mirror for the FreeNAS boot volume, while the two larger 500GB SSDs are in a ZFS mirror for FreeBSD jails and VirtualBox VMs. The last remaining disk is used for scratch space and may be used to replace any failed disks in the other arrays as a "warm spare". I have 2x 4TB disks available as cold spares should any disks fail.

      Usage: Most of the storage is for movies and TV series, with Plex Media Server running inside one of the BSD jails. The remainder is used for personal file storage / networked home directories, backups of my colo servers, and a PXE netboot installation source (one of my FreeBSD jails is my PXE netboot server).

      Backup: Important files from the primary ZFS pool are backed up to the slightly smaller archival ZFS pool via periodic snapshots (see the sketch after this list). The archival pool is physically removed and swapped out to be taken to my storage locker, while another set of 7x 8TB disks is swapped in for the next periodic snapshot to be updated.

      Additional info: My whole setup is squeezed into a 12U rackmount flight case on wheels. This makes for very easy maneuvering of the whole rack in my small "server room" / box-room, as well as being highly convenient for the next time I move home. The internal rack is mounted onto the external case via heavy-duty rubber shock-absorbers. At present the small APC UPS is overloaded and is only really acting as a power-smoothing device, so I plan to purchase a suitably rated 2U rack mount UPS in the near future.
brienne# (for i in `seq 0 37`; do smartctl -a /dev/da$i | grep 'Device Model:' ; done) | sort | uniq -c
  21 Device Model: HGST HDN724030ALE640
   1 Device Model: HGST HDS724040ALE640
   4 Device Model: Hitachi HDS724040ALE640
   2 Device Model: KINGSTON SV300S37A120G
   7 Device Model: ST8000AS0002-1NA17Z
   3 Device Model: WDC WD30EFRX-68AX9N0
brienne#

Photos: http://imgur.com/a/oRcNe

[root@brienne ~]# zpool iostat -v -T d
Wed Jan 27 20:32:20 GMT 2016
                                        capacity     operations    bandwidth
pool                                    alloc  free   read  write  read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
archive  35.8T  14.7T  0  0  3.39K  10
  raidz1  35.8T  14.7T  0  0  3.39K  10
    gptid/fa89b7e3-b2d7-11e5-9ebd-0cc47a546fcc  -  -  0  0  1.28K  4
    gptid/fbf7981a-b2d7-11e5-9ebd-0cc47a546fcc  -  -  0  0  693  4
    gptid/fd2ef450-b2d7-11e5-9ebd-0cc47a546fcc  -  -  0  0  864  4
    gptid/fe514fff-b2d7-11e5-9ebd-0cc47a546fcc  -  -  0  0  132  4
    gptid/ff865f8b-b2d7-11e5-9ebd-0cc47a546fcc  -  -  0  0  182  5
    gptid/00ca81fe-b2d8-11e5-9ebd-0cc47a546fcc  -  -  0  0  183  5
    gptid/01ebff25-b2d8-11e5-9ebd-0cc47a546fcc  -  -  0  0  132  4
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot  1.90G  109G  2  89  10.5K  723K
  mirror  1.90G  109G  2  89  10.5K  723K
    gptid/ef4efad1-8017-11e5-891c-0cc47a546fcc  -  -  1  13  6.50K  723K
    gptid/ef53fac1-8017-11e5-891c-0cc47a546fcc  -  -  0  13  4.20K  723K
--------------------------------------  -----  -----  -----  -----  -----  -----
jails  361G  98.6G  49  78  1.17M  740K
  mirror  361G  98.6G  49  78  1.17M  740K
    gptid/71ea795a-809a-11e5-884a-0cc47a546fcc  -  -  26  44  1.16M  744K
    gptid/7223d49f-809a-11e5-884a-0cc47a546fcc  -  -  25  44  1.16M  744K
--------------------------------------  -----  -----  -----  -----  -----  -----
pool1  49.6T  26.4T  60  16  7.30M  1.24M
  raidz2  15.4T  3.62T  18  3  2.23M  210K
    gptid/8273abf3-889d-11e5-8ab5-0cc47a546fcc  -  -  9  0  334K  46.4K
    gptid/85641e2a-889d-11e5-8ab5-0cc47a546fcc  -  -  9  0  333K  46.4K
    gptid/882bf207-889d-11e5-8ab5-0cc47a546fcc  -  -  9  0  334K  46.4K
    gptid/8b112084-889d-11e5-8ab5-0cc47a546fcc  -  -  9  0  334K  46.4K
    gptid/8e20deb6-889d-11e5-8ab5-0cc47a546fcc  -  -  9  0  334K  46.4K
    gptid/91252e88-889d-11e5-8ab5-0cc47a546fcc  -  -  9  0  334K  46.4K
    gptid/940cb743-889d-11e5-8ab5-0cc47a546fcc  -  -  9  0  334K  46.4K
  raidz2  15.4T  3.57T  18  3  2.30M  291K
    gptid/5134e3f9-88a2-11e5-8582-0cc47a546fcc  -  -  9  1  345K  64.0K
    gptid/5413e683-88a2-11e5-8582-0cc47a546fcc  -  -  9  1  345K  64.0K
    gptid/56fc5d0d-88a2-11e5-8582-0cc47a546fcc  -  -  9  1  345K  64.0K
    gptid/59dfecca-88a2-11e5-8582-0cc47a546fcc  -  -  9  1  345K  64.0K
    gptid/5cb31b99-88a2-11e5-8582-0cc47a546fcc  -  -  9  1  345K  64.0K
    gptid/5f9c7a47-88a2-11e5-8582-0cc47a546fcc  -  -  9  1  345K  64.0K
    gptid/62810541-88a2-11e5-8582-0cc47a546fcc  -  -  9  1  345K  64.0K
  raidz2  17.0T  2.00T  18  3  2.28M  263K
    gptid/18451b24-8ee7-11e5-8fb5-0cc47a546fcc  -  -  9  1  342K  57.6K
    gptid/1b2f4e11-8ee7-11e5-8fb5-0cc47a546fcc  -  -  9  1  342K  57.6K
    gptid/1e285370-8ee7-11e5-8fb5-0cc47a546fcc  -  -  9  1  342K  57.6K
    gptid/210413c7-8ee7-11e5-8fb5-0cc47a546fcc  -  -  9  1  342K  57.6K
    gptid/23de6209-8ee7-11e5-8fb5-0cc47a546fcc  -  -  9  1  342K  57.6K
    gptid/26bd5abc-8ee7-11e5-8fb5-0cc47a546fcc  -  -  9  1  342K  57.6K
    gptid/29b1b41f-8ee7-11e5-8fb5-0cc47a546fcc  -  -  9  1  342K  57.6K
  raidz2  1.77T  17.2T  4  5  502K  509K
    gptid/94bf7a70-b7b6-11e5-8976-0cc47a546fcc  -  -  2  1  73.2K  111K
    gptid/97f04c90-b7b6-11e5-8976-0cc47a546fcc  -  -  2  1  73.2K  111K
    gptid/9b0ad9ef-b7b6-11e5-8976-0cc47a546fcc  -  -  2  1  73.3K  111K
    gptid/9e2b3065-b7b6-11e5-8976-0cc47a546fcc  -  -  2  1  73.2K  111K
    gptid/9feb05f9-b7b6-11e5-8976-0cc47a546fcc  -  -  1  1  73.4K  111K
    gptid/a11775af-b7b6-11e5-8976-0cc47a546fcc  -  -  1  1  73.4K  111K
    gptid/a245e539-b7b6-11e5-8976-0cc47a546fcc  -  -  1  1  73.4K  111K
--------------------------------------  -----  -----  -----  -----  -----  -----
pool2  353G  3.28T  16  12  2.00M  651K
  gptid/9ccc277d-826c-11e5-8238-0cc47a546fcc  353G  3.28T  16  12  2.00M  651K
--------------------------------------  -----  -----  -----  -----  -----  -----

[root@brienne ~]# for dev in ada0 ada1 $(perl -e 'print "da$_ " for 0 .. 37'); do echo -n "/dev/$dev"; smartctl -a /dev/$dev | perl -ne 'if (/(?:Device Model|User Capacity):\s+(.+?)\s*$/) { print "\t$1"; }'; echo; done
/dev/ada0  Samsung SSD 850 EVO 500GB  500,107,862,016 bytes [500 GB]
/dev/ada1  Samsung SSD 850 EVO 500GB  500,107,862,016 bytes [500 GB]
/dev/da0   HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da1   HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da2   HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da3   HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da4   HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da5   HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da6   HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da7   HGST HDS724040ALE640  4,000,787,030,016 bytes [4.00 TB]
/dev/da8   HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da9   HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da10  HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da11  HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da12  HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da13  HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da14  HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da15  HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da16  KINGSTON SV300S37A120G  120,034,123,776 bytes [120 GB]
/dev/da17  KINGSTON SV300S37A120G  120,034,123,776 bytes [120 GB]
/dev/da18  ST8000AS0002-1NA17Z  8,001,563,222,016 bytes [8.00 TB]
/dev/da19  ST8000AS0002-1NA17Z  8,001,563,222,016 bytes [8.00 TB]
/dev/da20  ST8000AS0002-1NA17Z  8,001,563,222,016 bytes [8.00 TB]
/dev/da21  ST8000AS0002-1NA17Z  8,001,563,222,016 bytes [8.00 TB]
/dev/da22  ST8000AS0002-1NA17Z  8,001,563,222,016 bytes [8.00 TB]
/dev/da23  WDC WD30EFRX-68AX9N0  3,000,592,982,016 bytes [3.00 TB]
/dev/da24  WDC WD30EFRX-68AX9N0  3,000,592,982,016 bytes [3.00 TB]
/dev/da25  WDC WD30EFRX-68AX9N0  3,000,592,982,016 bytes [3.00 TB]
/dev/da26  ST8000AS0002-1NA17Z  8,001,563,222,016 bytes [8.00 TB]
/dev/da27  ST8000AS0002-1NA17Z  8,001,563,222,016 bytes [8.00 TB]
/dev/da28  Hitachi HDS724040ALE640  4,000,787,030,016 bytes [4.00 TB]
/dev/da29  Hitachi HDS724040ALE640  4,000,787,030,016 bytes [4.00 TB]
/dev/da30  HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da31  HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da32  HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da33  HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da34  HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da35  HGST HDN724030ALE640  3,000,592,982,016 bytes [3.00 TB]
/dev/da36  Hitachi HDS724040ALE640  4,000,787,030,016 bytes [4.00 TB]
/dev/da37  Hitachi HDS724040ALE640  4,000,787,030,016 bytes [4.00 TB]
[root@brienne ~]#
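For the synthetic benchmarking discussed in posts 3 and 6, here is a minimal sketch of the kind of runs meant there, assuming iperf3 and fio are installed on the client, that nas.example.lan is the NAS, and that the share is mounted at /mnt/nas (all three names are placeholders):

# Raw network throughput first, to take the wire out of the equation.
iperf3 -s                                  # run on the NAS
iperf3 -c nas.example.lan                  # TCP test from the client
iperf3 -c nas.example.lan -u -b 0          # UDP at unlimited rate ("spewing garbage over UDP")

# Then file-level tests against the mounted share with fio.
fio --name=seqwrite --directory=/mnt/nas --rw=write  --bs=1M --size=4G --numjobs=1
fio --name=seqread  --directory=/mnt/nas --rw=read   --bs=1M --size=4G --numjobs=1
fio --name=randrw   --directory=/mnt/nas --rw=randrw --bs=4k --size=1G --numjobs=4 --iodepth=16 --ioengine=posixaio

Running the same fio jobs with and without --direct=1 (where the client filesystem supports O_DIRECT) gives the with/without client caching comparison mentioned in post 3.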
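For the LACP suggestion in post 5: FreeNAS normally sets this up from its web UI (Network, Link Aggregations), but at the FreeBSD level the lagg looks roughly like the sketch below. The igb0/igb1 interface names and the address are assumptions, and the two switch ports must be configured as an LACP group as well.

# One-off, by hand (FreeNAS persists the equivalent via its own configuration database).
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport igb0 laggport igb1
ifconfig lagg0 inet 192.168.1.10 netmask 255.255.255.0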
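The per-disk information described in post 10 (model, serial, WWN and the gptid labels used in the VDEVs) can be approximated with stock FreeBSD tools. This is only a rough sketch of what freenas_disk_info.pl gathers, not the script itself, and it doesn't attempt the adaptor/bus/target/lun reporting mentioned in post 8.

# Walk every disk the kernel knows about and print identity details plus gptid labels.
for dev in $(sysctl -n kern.disks); do
  echo "== $dev =="
  smartctl -i /dev/$dev | egrep 'Device Model|Serial Number|LU WWN'
  glabel status | grep "${dev}p"   # gptid labels of this disk's partitions
done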
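The ZFS layout and backup scheme in posts 11 and 13 are driven from the FreeNAS GUI, but the underlying commands look roughly like this minimal sketch. The device names, dataset names and snapshot names are all placeholders, and FreeNAS actually builds VDEVs on gptid-labelled GPT partitions rather than raw disks.

# Four 7-disk RAIDZ2 VDEVs striped into one pool (the main array's shape).
zpool create pool1 \
    raidz2 da0  da1  da2  da3  da4  da5  da6  \
    raidz2 da7  da8  da9  da10 da11 da12 da13 \
    raidz2 da14 da15 da16 da17 da18 da19 da20 \
    raidz2 da21 da22 da23 da24 da25 da26 da27

# Periodic snapshot replication from the primary pool to the archive pool.
zfs snapshot -r pool1/important@weekly-1
zfs send -R pool1/important@weekly-1 | zfs receive -Fu archive/important

# Later runs only need to send what changed since the previous snapshot.
zfs snapshot -r pool1/important@weekly-2
zfs send -R -i @weekly-1 pool1/important@weekly-2 | zfs receive -Fu archive/important

On FreeNAS 9.3 the scheduled version of this roughly corresponds to the Periodic Snapshot Tasks and Replication Tasks sections of the GUI.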