It took a while, but I think I finally finished setting up the new GNU/Linux server. Doing everything CLI-only is only a PITA until you learn the commands; after that it's so much faster and easier to automate than a GUI. CLI 10/10, do recommend.
Got my ZFS pool online:
# Output of: zpool status
  pool: storage
 state: ONLINE
  scan: scrub repaired 0B in 0 days 01:37:26 with 0 errors on Sun Sep 13 02:01:27 2020
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
            sdh     ONLINE       0     0     0
            sdi     ONLINE       0     0     0
            sdj     ONLINE       0     0     0
            sdk     ONLINE       0     0     0
          raidz2-1  ONLINE       0     0     0
            sdl     ONLINE       0     0     0
            sdm     ONLINE       0     0     0
            sdn     ONLINE       0     0     0
            sdo     ONLINE       0     0     0
            sdp     ONLINE       0     0     0
            sdq     ONLINE       0     0     0
            sdr     ONLINE       0     0     0
            sds     ONLINE       0     0     0
            sdt     ONLINE       0     0     0
            sdu     ONLINE       0     0     0

errors: No known data errors
A twenty-drive 960GB SSD RAID60 (two ten-drive raidz2 vdevs), with physical room for up to 20 more SSDs.
Total capacity:
# Output of: zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
storage  17.4T  12.7T  4.72T        -         -     0%    72%  1.00x  ONLINE  -
Note that SIZE here is the raw pool capacity; it doesn't count what I lose to resiliency.
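The parity math is simple enough to sanity-check, though: each raidz2 vdev gives up two drives to parity, so 16 of the 20 drives hold data. A quick back-of-the-envelope check (assuming each drive is exactly 960 GB):

```shell
# Rough raw-vs-usable capacity: 20 drives total, 4 lost to raidz2 parity.
awk 'BEGIN {
    raw  = 20 * 960e9                      # all 20 drives, in bytes
    data = 16 * 960e9                      # after 2 parity drives per vdev
    printf "raw:    %.1f TiB\n", raw  / 1024^4
    printf "usable: %.1f TiB\n", data / 1024^4
}'
```

That works out to roughly 17.5 TiB raw and 14.0 TiB usable, which lines up with the 17.4T that zpool list reports (some overhead gets shaved off) and the usable figure further down.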
All of my Datasets:
# Output of: zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
storage/3D Printer       2.87G  3.18T  2.87G  /storage/3D Printer
storage/Custom Software  34.2M  3.18T  33.1M  /storage/Custom Software
storage/Music            3.15G  3.18T  3.15G  /storage/Music
storage/Pictures         1.60G  3.18T  1.60G  /storage/Pictures
storage/Proxmox           275G  3.18T   275G  /storage/Proxmox
storage/Software          303G  3.18T   303G  /storage/Software
storage/Videos           9.12T  3.18T  9.12T  /storage/Videos
storage/users             446M  3.18T   446M  /storage/users
# Output of: df -H
Filesystem               Size  Used  Avail  Use%  Mounted on
storage/3D Printer       3.5T  3.1G   3.5T    1%  /storage/3D Printer
storage/users            3.5T  468M   3.5T    1%  /storage/users
storage/Music            3.5T  3.4G   3.5T    1%  /storage/Music
storage/Custom Software  3.5T   35M   3.5T    1%  /storage/Custom Software
storage/Videos            14T   11T   3.5T   75%  /storage/Videos
storage/Software         3.9T  326G   3.5T    9%  /storage/Software
storage/Pictures         3.5T  1.8G   3.5T    1%  /storage/Pictures
storage/Proxmox          3.8T  296G   3.5T    8%  /storage/Proxmox
# Output of: ls -l /storage/
drwxr-xr-x 11 username username 10 Sep  8 10:42 '3D Printer'
drwxr-xr-x  5 username username  4 Sep  8 10:42 'Custom Software'
drwxr-xr-x 12 username username 11 Sep  8 10:43  Music
drwxr-xr-x 14 username username 20 Sep  8 10:44  Pictures
drwxr-xr-x  8 username username  7 Sep 13 23:58  Proxmox
drwxr-xr-x 47 username username 51 Sep  8 12:08  Software
drwxr-xr-x  4 username username  3 Sep  8 12:08  users
drwxr-xr-x  9 username username 10 Sep  8 12:42  Videos
Size/available/used capacity looks all sorts of messed up at first glance. That's because I didn't set quotas on the datasets, so they all report the pool's shared free space as their own. storage/Videos is big enough that its line does show the true usable total: with four of the twenty drives going to parity, I have about 14TiB of usable SSD storage. That's a lot of SSD.
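If the shared free-space numbers ever get annoying, per-dataset quotas would make each share report its own limit. A quick sketch, with made-up sizes rather than anything I've actually set:

```shell
# Hypothetical quotas; the sizes are placeholders, not my real limits.
zfs set quota=500G storage/Music        # hard cap on the dataset and its children
zfs set reservation=50G storage/users   # guaranteed space even if the pool fills up
zfs get quota,reservation storage/Music storage/users
```

With a quota set, df and zfs list report that dataset's own cap instead of the pool-wide free space.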
Setting up periodic snapshots was easy enough. What was a pain was looking up a way of deleting them after a certain period:
# Output of: zfs list -t snapshot
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
storage/3D Printer@09-14-2020        238K      -  2.87G  -
storage/3D Printer@09-15-2020        238K      -  2.87G  -
storage/3D Printer@09-16-2020        238K      -  2.87G  -
storage/Custom Software@09-14-2020   548K      -  33.1M  -
storage/Custom Software@09-15-2020   548K      -  33.1M  -
storage/Custom Software@09-16-2020   548K      -  33.1M  -
storage/Music@09-14-2020             146K      -  3.15G  -
storage/Music@09-15-2020             146K      -  3.15G  -
storage/Music@09-16-2020             146K      -  3.15G  -
storage/Pictures@09-14-2020          219K      -  1.60G  -
storage/Pictures@09-15-2020          219K      -  1.60G  -
storage/Pictures@09-16-2020          219K      -  1.60G  -
storage/Software@09-14-2020         8.27M      -   303G  -
storage/Software@09-15-2020         8.27M      -   303G  -
storage/Software@09-16-2020          347K      -   303G  -
storage/Videos@09-14-2020           1.52M      -  9.03T  -
storage/Videos@09-15-2020           1.36M      -  9.10T  -
storage/Videos@09-16-2020              0B      -  9.12T  -
storage/users@09-14-2020             201K      -   446M  -
storage/users@09-15-2020             183K      -   446M  -
storage/users@09-16-2020               0B      -   446M  -
# Entry in: crontab -e
# Runs the snapshot task at 11 PM every night for several ZFS Datasets.
0 23 * * * /home/username/snapshot.script
# Output of: cat snapshot.script
#!/bin/sh
zfs=/sbin/zfs
date=/bin/date
grep=/usr/bin/grep
sort=/usr/bin/sort
sed=/usr/bin/sed
xargs=/usr/bin/xargs

# Take today's snapshot of each dataset.
$zfs snapshot -r storage/'3D Printer'@`$date +"%m-%d-%Y"`
$zfs snapshot -r storage/'Custom Software'@`$date +"%m-%d-%Y"`
$zfs snapshot -r storage/Music@`$date +"%m-%d-%Y"`
$zfs snapshot -r storage/Pictures@`$date +"%m-%d-%Y"`
$zfs snapshot -r storage/Software@`$date +"%m-%d-%Y"`
$zfs snapshot -r storage/users@`$date +"%m-%d-%Y"`
$zfs snapshot -r storage/Videos@`$date +"%m-%d-%Y"`

# Keep the 30 newest snapshots of each dataset and destroy the rest.
$zfs list -t snapshot -o name | $grep "storage/3D Printer@" | $sort -r | $sed 1,30d | $xargs -n 1 $zfs destroy -r
$zfs list -t snapshot -o name | $grep "storage/Custom Software@" | $sort -r | $sed 1,30d | $xargs -n 1 $zfs destroy -r
$zfs list -t snapshot -o name | $grep "storage/Music@" | $sort -r | $sed 1,30d | $xargs -n 1 $zfs destroy -r
$zfs list -t snapshot -o name | $grep "storage/Pictures@" | $sort -r | $sed 1,30d | $xargs -n 1 $zfs destroy -r
$zfs list -t snapshot -o name | $grep "storage/Software@" | $sort -r | $sed 1,30d | $xargs -n 1 $zfs destroy -r
$zfs list -t snapshot -o name | $grep "storage/users@" | $sort -r | $sed 1,30d | $xargs -n 1 $zfs destroy -r
$zfs list -t snapshot -o name | $grep "storage/Videos@" | $sort -r | $sed 1,30d | $xargs -n 1 $zfs destroy -r
Now, I know I should have used variables here, and I could have gotten it done with a for loop or two, but I don't know the scripting syntax all that well, so I'm doing it the wrong way for now. Might fix it later if I find the time.
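For future reference, here's roughly what that script could collapse into. This is an untested sketch, and it changes the snapshot name format to %Y-%m-%d so that a plain reverse sort stays chronological across month and year boundaries, which my mm-dd-yyyy names don't guarantee:

```shell
#!/bin/sh
# Untested loop-based sketch of snapshot.script.
zfs=/sbin/zfs
keep=30
stamp=`/bin/date +"%Y-%m-%d"`   # lexical sort order == chronological order

for ds in '3D Printer' 'Custom Software' Music Pictures Software users Videos
do
    $zfs snapshot -r "storage/$ds@$stamp"
    # List this dataset's snapshots newest-first, skip the $keep newest,
    # and destroy whatever is left.
    $zfs list -H -t snapshot -o name -r "storage/$ds" \
        | sort -r \
        | sed "1,${keep}d" \
        | xargs -r -n 1 $zfs destroy -r
done
```

The -r flag on xargs (a GNU extension) keeps it from running zfs destroy with no arguments on days when there's nothing to prune.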
Setting up Samba (SMB/CIFS) was really easy:
# Output of: testparm

[3D_Printer]
    guest ok = Yes
    path = /storage/3D Printer/
    read only = No
    valid users = username

[Custom_Software]
    guest ok = Yes
    path = /storage/Custom Software/
    read only = No
    valid users = username

[Music]
    guest ok = Yes
    path = /storage/Music
    read only = No
    valid users = username

[Pictures]
    guest ok = Yes
    path = /storage/Pictures
    read only = No
    valid users = username

[Software]
    guest ok = Yes
    path = /storage/Software
    read only = No
    valid users = username

[Users]
    guest ok = Yes
    path = /storage/users
    read only = No
    valid users = username

[Videos]
    guest ok = Yes
    path = /storage/Videos
    read only = No
    valid users = username

[Proxmox]
    guest ok = Yes
    path = /storage/Proxmox
    read only = No
    valid users = username
Each of these show up as a separate network share that I can mount however I see fit.
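On the client side, mounting one of these from another Linux box looks something like this (the server IP and mount point are placeholders, and it needs the cifs-utils package installed):

```shell
# Hypothetical client-side mount; 10.0.0.2 stands in for this server's IP.
sudo mkdir -p /mnt/music
sudo mount -t cifs //10.0.0.2/Music /mnt/music \
    -o username=username,uid=$(id -u),gid=$(id -g)
```

The uid/gid options map the mounted files to the local user so permissions behave sensibly.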
Lastly, setting up the backup tasks also took some time:
# Entries in: crontab -e
# Runs RSYNC over SSH storage backup task for Onsite Backup
0 23 * * * /home/username/backup-onsite.script
# Runs RSYNC over SSH storage backup task for Offsite Backup
0 23 * * * /home/username/backup-offsite.script
For backups I use rsync over SSH with password-less public/private key authentication, so the jobs can log in securely without my intervention.
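The one-time key setup goes roughly like this (the key name and host match my onsite box above, but treat them as examples):

```shell
# Generate a passphrase-less key (cron can't type a passphrase),
# push the public half to the backup box, then confirm it works.
ssh-keygen -t ed25519 -f ~/.ssh/onsite -N ''
ssh-copy-id -i ~/.ssh/onsite.pub username@10.0.0.8
ssh -i ~/.ssh/onsite username@10.0.0.8 true
```

If that last command returns without asking for a password, cron can use the key too.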
# Output of: cat backup-onsite.script
#!/bin/bash
# This script backs up the primary storage pool to the Onsite Backup Storage Server via RSYNC & SSH.
rsync -avzhP --exclude-from=/home/username/exclusion_list.txt \
    -e 'ssh -p 22 -i /home/username/.ssh/onsite' \
    --log-file=/home/username/onsite-logs/backup-`date +"%F-%I%p"`.log \
    /storage/ username@10.0.0.8:/mnt/onsite-backup/
# Output of: cat backup-offsite.script
#!/bin/bash
# This script backs up the primary storage pool to the Offsite Backup Storage Server via RSYNC & SSH.
rsync -avzhP --exclude-from=/home/username/exclusion_list.txt \
    -e 'ssh -p 22 -i /home/username/.ssh/offsite' \
    --log-file=/home/username/offsite-logs/backup-`date +"%F-%I%p"`.log \
    /storage/ username@10.0.0.4:/offsite-backup/
For the time being the offsite backup server is located on the LAN, but eventually I'll swap out its private IP for either a Dynamic DNS address or the public IP of an offsite location where I can keep everything safe in the event of a catastrophe.
Aside from writing myself some helpful scripts for future automation, I think we're back up & running with the new OS.