Windows7ge

Member
  • Content Count

    10,138
  • Joined

  • Last visited

Awards


About Windows7ge

  • Title
    Writer of Guides & Tutorials
  • Birthday 1994-09-08

Profile Information

  • Gender
    Male
  • Interests
    Computer networking
    Server construction/management
    Virtualization
    Writing guides & tutorials

System

  • CPU
    AMD Threadripper 1950X
  • Motherboard
    ASUS PRIME X399-A
  • RAM
    64GB G.Skill Ripjaws4 2400MHz
  • GPU
    2x Sapphire Tri-X R9 290X CFX
  • Case
    LIAN LI PC-T60B (modified - just a little bit)
  • Storage
    Samsung 960PRO 1TB & 13TB SSD Server w/ 20Gbit Fiber Optic NIC
  • PSU
    Corsair AX1200i
  • Cooling
    XSPC Waterblocks on CPU & both GPUs, 2x 480mm radiators with Noctua NF-F12 Fans
  • Operating System
    Ubuntu 20.04.1 LTS

Recent Profile Visitors

27,909 profile views
  1. Yes. And it usually (not always, but usually) voids your warranty with the manufacturer. They don't build the card with the expectation that you'll dismantle it. If they intended for you to immediately throw your own cooler on it, they'd sell you the card without one, just as high-end processors ship without coolers because you're expected to supply your own. The manufacturer isn't going to side with you if you crack the GPU die. The inclusion of an IHS on CPUs, for the intended use case of installing the chip yourself, makes them more robust and less prone to breaking during installation.
  2. You would actually be moving backwards. Many years ago desktop processors used to not come with an IHS. One of the many reasons one was introduced was because the bare silicon die is fragile: if you aren't careful in how you mount the cooler you can crack the die, immediately killing the CPU with zero hope of recovery. The IHS made CPUs much more robust. GPUs, on the other hand, aren't purpose-built for the consumer to dismantle and tinker with. When you do so it's at your own risk, like with delidding a CPU, and it often breaks warranties.
  3. It took a while but I think I finally finished setting up the new GNU/Linux server. Doing everything CLI-only is only a PITA until you learn the commands. After that it is so much faster and easier to automate vs a GUI. CLI 10/10 do recommend.

     

    Got my ZFS pool online:

    # Output of: zpool status
    
    pool: storage
     state: ONLINE
      scan: scrub repaired 0B in 0 days 01:37:26 with 0 errors on Sun Sep 13 02:01:27 2020
    config:
    
    	NAME        STATE     READ WRITE CKSUM
    	storage     ONLINE       0     0     0
    	  raidz2-0  ONLINE       0     0     0
    	    sda     ONLINE       0     0     0
    	    sdb     ONLINE       0     0     0
    	    sdc     ONLINE       0     0     0
    	    sdd     ONLINE       0     0     0
    	    sdf     ONLINE       0     0     0
    	    sdg     ONLINE       0     0     0
    	    sdh     ONLINE       0     0     0
    	    sdi     ONLINE       0     0     0
    	    sdj     ONLINE       0     0     0
    	    sdk     ONLINE       0     0     0
    	  raidz2-1  ONLINE       0     0     0
    	    sdl     ONLINE       0     0     0
    	    sdm     ONLINE       0     0     0
    	    sdn     ONLINE       0     0     0
    	    sdo     ONLINE       0     0     0
    	    sdp     ONLINE       0     0     0
    	    sdq     ONLINE       0     0     0
    	    sdr     ONLINE       0     0     0
    	    sds     ONLINE       0     0     0
    	    sdt     ONLINE       0     0     0
    	    sdu     ONLINE       0     0     0
    
    errors: No known data errors
    

    A twenty-drive 960GB SSD RAID60 with physical room for up to 20 more SSDs. 😎

     

    Total capacity:

    # Output of: zpool list
    
    NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    storage  17.4T  12.7T  4.72T        -         -     0%    72%  1.00x    ONLINE  -
    

    Now this doesn't count what I lose to resiliency.
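    Quick math on the resiliency cost: raidz2 gives up 2 drives per 10-drive vdev, so 4 of the 20 drives go to parity. That leaves 16 x 960GB ≈ 15.4TB, or roughly 14TiB usable, which lines up with what the datasets report further down.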

     

    All of my Datasets:

    # Output of: zfs list
    
    NAME                      USED  AVAIL     REFER  MOUNTPOINT
    storage/3D Printer       2.87G  3.18T     2.87G  /storage/3D Printer
    storage/Custom Software  34.2M  3.18T     33.1M  /storage/Custom Software
    storage/Music            3.15G  3.18T     3.15G  /storage/Music
    storage/Pictures         1.60G  3.18T     1.60G  /storage/Pictures
    storage/Proxmox           275G  3.18T      275G  /storage/Proxmox
    storage/Software          303G  3.18T      303G  /storage/Software
    storage/Videos           9.12T  3.18T     9.12T  /storage/Videos
    storage/users             446M  3.18T      446M  /storage/users
    # Output of: df -H
    
    Filesystem               Size  Used Avail Use% Mounted on
    storage/3D Printer       3.5T  3.1G  3.5T   1% /storage/3D Printer
    storage/users            3.5T  468M  3.5T   1% /storage/users
    storage/Music            3.5T  3.4G  3.5T   1% /storage/Music
    storage/Custom Software  3.5T   35M  3.5T   1% /storage/Custom Software
    storage/Videos            14T   11T  3.5T  75% /storage/Videos
    storage/Software         3.9T  326G  3.5T   9% /storage/Software
    storage/Pictures         3.5T  1.8G  3.5T   1% /storage/Pictures
    storage/Proxmox          3.8T  296G  3.5T   8% /storage/Proxmox
    # Output of: ls -l /storage/
    
    drwxr-xr-x 11 username username 10 Sep  8 10:42 '3D Printer'
    drwxr-xr-x  5 username username  4 Sep  8 10:42 'Custom Software'
    drwxr-xr-x 12 username username 11 Sep  8 10:43  Music
    drwxr-xr-x 14 username username 20 Sep  8 10:44  Pictures
    drwxr-xr-x  8 username username  7 Sep 13 23:58  Proxmox
    drwxr-xr-x 47 username username 51 Sep  8 12:08  Software
    drwxr-xr-x  4 username username  3 Sep  8 12:08  users
    drwxr-xr-x  9 username username 10 Sep  8 12:42  Videos

    Size/available/used capacity reporting is all sorts of messed up. I think that's because I didn't set storage limits on the Datasets, so they all report the pool's remaining free space the same way. storage/Videos does reflect the true usable storage correctly though: I have 14TiB of usable SSD storage. That's a lot of SSD :D.
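    If I ever want each dataset to report its own ceiling, setting a quota per dataset should do it. A rough sketch (the sizes here are placeholders, not what I'd actually assign):

    # Example only: cap a dataset and check what it reports afterwards
    
    sudo zfs set quota=4T storage/Music
    sudo zfs set reservation=100G storage/Music
    zfs get quota,reservation,used,available storage/Music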

     

    Setting up periodic snapshots was easy enough. What was a pain was looking up a way of deleting them after a certain period:

    # Output of: zfs list -t snapshot
    
    NAME                                 USED  AVAIL     REFER  MOUNTPOINT
    storage/3D Printer@09-14-2020        238K      -     2.87G  -
    storage/3D Printer@09-15-2020        238K      -     2.87G  -
    storage/3D Printer@09-16-2020        238K      -     2.87G  -
    storage/Custom Software@09-14-2020   548K      -     33.1M  -
    storage/Custom Software@09-15-2020   548K      -     33.1M  -
    storage/Custom Software@09-16-2020   548K      -     33.1M  -
    storage/Music@09-14-2020             146K      -     3.15G  -
    storage/Music@09-15-2020             146K      -     3.15G  -
    storage/Music@09-16-2020             146K      -     3.15G  -
    storage/Pictures@09-14-2020          219K      -     1.60G  -
    storage/Pictures@09-15-2020          219K      -     1.60G  -
    storage/Pictures@09-16-2020          219K      -     1.60G  -
    storage/Software@09-14-2020         8.27M      -      303G  -
    storage/Software@09-15-2020         8.27M      -      303G  -
    storage/Software@09-16-2020          347K      -      303G  -
    storage/Videos@09-14-2020           1.52M      -     9.03T  -
    storage/Videos@09-15-2020           1.36M      -     9.10T  -
    storage/Videos@09-16-2020              0B      -     9.12T  -
    storage/users@09-14-2020             201K      -      446M  -
    storage/users@09-15-2020             183K      -      446M  -
    storage/users@09-16-2020               0B      -      446M  -
    # Entry in: crontab -e
    
    # Runs Snapshot task at 11 PM every night for several ZFS Datasets.
    0 23 * * * /home/username/snapshot.script
    # Output of: cat snapshot.script
    
    #!/bin/sh
    
    zfs=/sbin/zfs
    date=/bin/date
    grep=/usr/bin/grep
    sort=/usr/bin/sort
    sed=/usr/bin/sed
    xargs=/usr/bin/xargs
    
    $zfs snapshot -r storage/'3D Printer'@`date +"%m-%d-%Y"`
    $zfs snapshot -r storage/'Custom Software'@`date +"%m-%d-%Y"`
    $zfs snapshot -r storage/Music@`date +"%m-%d-%Y"`
    $zfs snapshot -r storage/Pictures@`date +"%m-%d-%Y"`
    $zfs snapshot -r storage/Software@`date +"%m-%d-%Y"`
    $zfs snapshot -r storage/users@`date +"%m-%d-%Y"`
    $zfs snapshot -r storage/Videos@`date +"%m-%d-%Y"`
    
    $zfs list -t snapshot -o name | $grep "storage/3D Printer@*" | $sort -r | $sed 1,30d | $xargs -d '\n' -n 1 $zfs destroy -r
    $zfs list -t snapshot -o name | $grep "storage/Custom Software@*" | $sort -r | $sed 1,30d | $xargs -d '\n' -n 1 $zfs destroy -r
    $zfs list -t snapshot -o name | $grep "storage/Music@*" | $sort -r | $sed 1,30d | $xargs -d '\n' -n 1 $zfs destroy -r
    $zfs list -t snapshot -o name | $grep "storage/Pictures@*" | $sort -r | $sed 1,30d | $xargs -d '\n' -n 1 $zfs destroy -r
    $zfs list -t snapshot -o name | $grep "storage/Software@*" | $sort -r | $sed 1,30d | $xargs -d '\n' -n 1 $zfs destroy -r
    $zfs list -t snapshot -o name | $grep "storage/users@*" | $sort -r | $sed 1,30d | $xargs -d '\n' -n 1 $zfs destroy -r
    $zfs list -t snapshot -o name | $grep "storage/Videos@*" | $sort -r | $sed 1,30d | $xargs -d '\n' -n 1 $zfs destroy -r
    

    Now, I know I should have used variables here and could have gotten it done with a pair of for loops, but I don't know the scripting syntax all that well, so I'm doing it the wrong way for now. Might fix it later if I find the time.
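    For whenever I get around to it, a loop-based version might look something like this. It's an untested sketch: it switches the date stamp to YYYY-MM-DD so names sort correctly across year boundaries, sorts by the snapshot creation property instead of the name, and tells xargs to split on newlines so the dataset names containing spaces survive intact.

    # Untested sketch of the same job as a loop
    
    #!/bin/sh
    
    zfs=/sbin/zfs
    stamp=$(/bin/date +"%Y-%m-%d")   # sortable date stamp
    keep=30                          # number of daily snapshots to retain per dataset
    
    for ds in '3D Printer' 'Custom Software' Music Pictures Software users Videos
    do
    	$zfs snapshot -r "storage/$ds@$stamp"
    
    	# List this dataset's snapshots newest-first, skip the $keep most recent, destroy the rest.
    	$zfs list -H -d 1 -t snapshot -o name -S creation "storage/$ds" | \
    		sed "1,${keep}d" | xargs -r -d '\n' -n 1 $zfs destroy -r
    done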

     

    Setting up SAMBA (SMB/CIFS) was really easy:

    # Output of: testparm
    
    [3D_Printer]
    	guest ok = Yes
    	path = /storage/3D Printer/
    	read only = No
    	valid users = username
    
    
    [Custom_Software]
    	guest ok = Yes
    	path = /storage/Custom Software/
    	read only = No
    	valid users = username
    
    
    [Music]
    	guest ok = Yes
    	path = /storage/Music
    	read only = No
    	valid users = username
    
    
    [Pictures]
    	guest ok = Yes
    	path = /storage/Pictures
    	read only = No
    	valid users = username
    
    
    [Software]
    	guest ok = Yes
    	path = /storage/Software
    	read only = No
    	valid users = username
    
    
    [Users]
    	guest ok = Yes
    	path = /storage/users
    	read only = No
    	valid users = username
    
    
    [Videos]
    	guest ok = Yes
    	path = /storage/Videos
    	read only = No
    	valid users = username
    
    
    [Proxmox]
    	guest ok = Yes
    	path = /storage/Proxmox
    	read only = No
    	valid users = username
    

    Each of these shows up as a separate network share that I can mount however I see fit.
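    One step that isn't visible in the testparm output is giving the Samba user a password and restarting the service. Roughly (assuming Ubuntu's smbd service name):

    # Example: add the user to Samba's password database and restart the daemon
    
    sudo smbpasswd -a username
    sudo systemctl restart smbd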

     

    Lastly setting up backup tasks also took some time:

    # Entries in: crontab -e
    
    # Runs RSYNC over SSH storage backup task for Onsite Backup
    0 23 * * * /home/username/backup-onsite.script
    
    # Runs RSYNC over SSH storage backup task for Offsite Backup
    0 23 * * * /home/username/backup-offsite.script

    For backup I've used RSYNC over SSH using password-less public/private key authentication so it can remote in securely without my intervention.
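    Roughly how that gets prepared, for future reference (a sketch, not the exact commands I ran; the key names match the scripts below):

    # Sketch: generate a dedicated key pair and push the public key to the backup box
    
    ssh-keygen -t ed25519 -f /home/username/.ssh/onsite -N ""
    ssh-copy-id -i /home/username/.ssh/onsite.pub username@10.0.0.8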

    # Output of: cat backup-onsite.script
    
    #!/bin/bash
    
    #This script backs up the primary storage pool to the Onsite Backup Storage Server via RSYNC & SSH.
    rsync -avzhP --exclude-from=/home/username/exclusion_list.txt -e 'ssh -p 22 -i .ssh/onsite' --log-file=/home/username/onsite-logs/backup-`date +"%F-%I%p"`.log /storage/ username@10.0.0.8:/mnt/onsite-backup/
    # Output of: cat backup-offsite.script
    
    #!/bin/bash
    
    #This script backs up the primary storage pool to the Offsite Backup Storage Server via RSYNC & SSH.
    rsync -avzhP --exclude-from=/home/username/exclusion_list.txt -e 'ssh -p 22 -i .ssh/offsite' --log-file=/home/username/offsite-logs/backup-`date +"%F-%I%p"`.log /storage/ username@10.0.0.4:/offsite-backup/

    For the time being the offsite backup server is located on the LAN, but eventually I'll swap out its private IP for either a Dynamic DNS address or the public IP of an offsite location where I can keep everything safe in the event of catastrophe.

     

    Aside from writing myself some helpful scripts for future automation I think we're back up & running with the new OS. :P

  4. When GNU/Linux loads first, do you not see a GRUB menu giving you the option to load the Windows boot loader or continue to Ubuntu?
  5. Google Chrome: *Uses a lot of RAM*

     

    ZFS: "Hold my beer..."

     

    [Screenshot from 2020-09-12 20:46]

    1. leadeater

      Hope that is a dual purpose server and not ZFS murdering those CPU cores

    2. Windows7ge

      Oh dear god, if ZFS were that CPU-heavy nothing could handle it. 20 disks don't put that much of a load on the CPU. What you're looking at there is 56 threads of BOINC, more specifically WCG/OpenPandemics. I've left 8 threads free so the system can still serve the purposes I need it to. The server is running on what I call summer hours, so it's only under load 12 out of 24 hours a day.

       

      Here's a more accurate screenshot:

       

      [Screenshot from 2020-09-13 12:39]

       

      ZFS IS using all that RAM though. To be honest if you're familiar with ARC you'll know that you don't need this much RAM to use ZFS but it is VERY nice to have.
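      For anyone curious, the ARC's footprint on Linux is easy to check, and it can be capped if the RAM is needed elsewhere. Rough example (the 32GiB figure is arbitrary, not what I use):

      # Current ARC size in bytes
      grep '^size' /proc/spl/kstat/zfs/arcstats
      
      # Example only: cap the ARC at 32GiB (takes effect when the module reloads; Ubuntu may also need update-initramfs -u)
      echo "options zfs zfs_arc_max=34359738368" | sudo tee /etc/modprobe.d/zfs.conf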

  6. The only issue you might run into with compatibility is SMB versions. Sometimes you'll hit a hiccup where it throws an error without opening the share because the two sides don't properly negotiate which SMB protocol to use; pinning the protocol range usually clears it (rough example below). Other than that it works without real issue. As others are suggesting, you can use an OS that comes with a convenient WebUI for ease of use, but if you would really like to explore GNU/Linux and what the server distros are all about, I have a tutorial that could get you started with a file server.
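    An example of what that pinning might look like in the [global] section of /etc/samba/smb.conf (versions here are just an example):

    # Example smb.conf settings to pin the negotiated protocol range
    server min protocol = SMB2
    client min protocol = SMB2
    client max protocol = SMB3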
  7. Specs check out but the price is a little sketch. The 2698v3 is still going for over 2 times that price, so if you trust the seller it's a really good deal. Also, it requires DDR4, not DDR3, to operate. Socket LGA2011-v3.
  8. Well, the time has finally come. So long Windows Server 2016!

     

    [Image: Windows Server 2016]

     

    Hello Ubuntu Server 20.04.1 LTS :D

    [Screenshot: htop]

     

    Obligatory neofetch.

     

    [Screenshot: neofetch]

     

    Now to set up one mother of a zpool with 20 SSDs.
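    The rough shape of it, based on the layout I'm going for: two 10-disk raidz2 vdevs in one pool. Sketch only; a real build should reference /dev/disk/by-id paths rather than sdX letters:

    # Sketch: create the pool as two 10-disk raidz2 vdevs, then check it
    
    sudo zpool create storage \
    	raidz2 sda sdb sdc sdd sdf sdg sdh sdi sdj sdk \
    	raidz2 sdl sdm sdn sdo sdp sdq sdr sds sdt sdu
    sudo zpool status storage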

    1. Show previous comments  7 more
    2. Windows7ge

      That exact issue was a legitimate concern I also needed to address. :D

       

      Thank you.

    3. Windows7ge

      Good god, mounting an SMB share via CLI is a right PITA. The command itself wasn't bad, but either I'm bad at Google or there's just no good documentation on how to do it.

       

      For future reference:

      sudo mkdir -p /local/mount/path
      sudo apt install cifs-utils
      sudo mount -t cifs -o username=username //IP/share /local/mount/path
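      And if the share should survive a reboot, an /etc/fstab entry plus a credentials file does it. Example only; the IP, share, mount path, and uid/gid are placeholders:

      # /etc/fstab entry (single line)
      //IP/share  /local/mount/path  cifs  credentials=/home/username/.smbcredentials,uid=1000,gid=1000  0  0
      
      # /home/username/.smbcredentials (chmod 600)
      username=username
      password=yourpassword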

       

    4. 2FA

      It's the Arch Wiki but it's useful for other distros as well. https://wiki.archlinux.org/index.php/Samba#Manual_mounting

       

      Usually it's my go-to for finding things. Plus I use DuckDuckGo, so I can just type "!aw search term" to automatically search the wiki.

  9. Is your PSU modular? From here I'd disconnect all internal peripherals outside the main 24-pin, GPU 6/8-pin, & CPU 8-pin. See if the system powers on. Might have twisted a SATA/Molex or fan cable the wrong direction.
  10. happy birbday! :D

    1. Eschew

      🎂 and 🎁 for you.

  11. As it turns out, about 200~250MB/s is the max I'm going to get from this mechanical array. The good news is I can append two additional 3-drive raidz1 vdevs. This will result in three 3-drive RAID5s in RAID0, which, if this scales, should bring me into the 600~750MB/s range and provide a grand total of 90TB of RAW network storage. I'm currently dumping my storage server over to this backup server and the Mellanox ConnectX-2 isn't getting hot at all. My little prototype is working quite well. I'm glad to see that.
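    Appending those vdevs later should just be one zpool add per 3-drive group, something along these lines (pool and device names are placeholders):

    # Sketch: grow the pool by two more raidz1 vdevs, then verify
    sudo zpool add backuppool raidz1 sdb sdc sdd
    sudo zpool add backuppool raidz1 sde sdf sdg
    sudo zpool status backuppool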
  12. Alright. I got SAMBA installed & set up. I still have to create Datasets and assign owners, but for the time being I just chmod 757 offsite. I'll switch it back to 755 when I'm done backing up the primary server, and I'll start from scratch with datasets. The primary reason I want to just dump 9.22TB of data onto it right now is so I can switch out my primary server's OS. Performance actually isn't bad; I'm seeing sustained writes of ~250MB/s over the 10Gig network. I need to enable jumbo frames, which is a little bit of a pain, but it should give me a little boost over what I'm getting now. Power consumption is great. Under load I'm pulling 75W, which for this UPS equates to almost 3 hours of run time. Idling it pulls 50~60W, which is closer to 4 hours of run time. I'll work on getting jumbo frames working and see how high I can get the speeds to go.
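    For the jumbo frames piece it's just an MTU bump on both ends (and the switch). Roughly, on Ubuntu Server; the interface name and netplan file are placeholders:

    # Quick, non-persistent test
    sudo ip link set dev enp1s0 mtu 9000
    ping -M do -s 8972 10.0.0.8
    
    # Persistent via netplan (e.g. /etc/netplan/00-installer-config.yaml), then: sudo netplan apply
    network:
      version: 2
      ethernets:
        enp1s0:
          mtu: 9000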
  13. God, finally some progress. Gotta love old hardware. I installed the OS to just one USB, updated the BMC/IPMI v0.30.0 -> v0.35.0, and updated the BIOS v3.00 -> v3.20. The system is still behaving strangely (kind of slowly), but it's not as bad as it was before. I have yet to test network performance; that won't happen until tomorrow. I did get remote console working. It wasn't working partly because of a mis-config in the jviewer on my desktop. With SSH there's really no reason to use the remote console except for when I want control higher than the OS, like in the example below (a screen grab on my desktop). I can also remotely shut down, start, or reboot the system in the event of a lockup, which is really convenient. More to come tomorrow. I'm afraid I'm going to have network performance issues, so I want to get that tested ASAP.
  14. So we've hit a snag. Not only did it take over 2 hours to install, which is well beyond normal, but after restarting the system the OS was beyond slow, borderline completely unresponsive. I think this has to do with my attempt at making an mdadm RAID1 with the USBs. So now we're re-installing, and this time it's just a plain install, no RAID. I'll have to research whether I can do anything with something like RSYNC to back up the boot drive to the other. We'll see. So far the plain install is going much, much quicker. With any luck it'll work this time.
  15. We've struck a couple of problems, and part of it is from the fact the motherboard isn't new. I can't access the remote console via the IPMI; downloading the jviewer file fails each time. Still troubleshooting what that's all about. Might have to get on the horn with ASRock Rack. I can also try updating both the BIOS & BMC. It would appear one of the USB ports on the front of the box isn't usable. This is problematic because I need 4 ports at the minimum and there are only 2 USB Type-A ports and one dual USB 2.0 header. I'm going to have to buy a USB PCIe adapter, the type that connects to USB headers. The CPU is getting toasty despite the two front fans at 100%, so for the time being I've haphazardly tossed a baby server fan on there. It works. It would also seem some of the RAM was bunk; I have to deal with 16GB until I can get the proper stuff. For now we're installing the OS. This server is going to be running Ubuntu Server 20.04.1 LTS. Installation is taking a while longer than I'm used to. It looks like it's behaving, but still...