
bANONYMOUS

Member
  • Posts: 118
  • Joined
  • Last visited

Reputation Activity

  1. Informative
    bANONYMOUS got a reaction from madmaxheadroom in [GUIDE] Install Pop_OS! 20.04 on RAID   
    Before we get into this, I just want to give credit where credit is due. First off, I would like to thank Wendell from Level1Techs; without his guide and video, I wouldn't have even known this was possible, and I would have spent over $600 CAD on new NVMe drives for no reason if I hadn't been able to get this working. (I guess it pays to do your research before buying hardware.)
    Next I would like to thank the people on /r/linuxquestions and /r/pop_os who helped me through this entire process from start to finish.
    We can't forget about our very own @jdfthetech who also just happened to be up all night and was able to give me some tips that got me back on track.
    And finally, I want to thank everyone who contributes to the ArchWiki; without that place, I would have been absolutely lost and would have switched back to Windows.
     
    Now, onto the Guide.

    First, you want to make a Pop_OS! 20.04 USB Installer
    Intel/AMD
    NVIDIA
    Once that's done, configure your BIOS for a Linux install, if you haven't already.
     
    Change the OS option, if your board has one, to "Other OS"
    Make sure SATA Mode is set to AHCI (it will not work otherwise)
    Disable Fast Boot
    Save and Restart
    Boot to your Pop_OS! 20.04 USB
     
    If you don't want to follow along with this guide, I made a tutorial video HERE
    (video is not finished yet, this is just a place holder)
     
    Once booted into the Pop_OS! USB:
    Set up language and keyboard, and do not continue any further once it asks how to install.
    Open a terminal and let's elevate to root
    sudo -i  
    Now check your devices to see which drives you're using.
    lsblk  
    Write down the device names that you want for your RAID (you will need these a lot)
    sudo gdisk /dev/DEVICE_NAME  
    First Drive
    You want this one to hold your EFI partition, with the rest of its capacity going to the RAID.
    So for the EFI partition, enter the following as it asks (gdisk takes relative sizes with a leading +):
    n enter enter +1024M ef00  
    Next create the EXT4 Partition that will be used for the RAID
    n enter enter enter 8300  
    Write the changes
    w y  
    Check for EFI Partition
    lsblk  
    Format the EFI partition you just created as FAT32
    mkfs.fat -F32 /dev/YOUR_DEVICE_PARTITION  
    Continue partitioning every other drive as follows
    sudo gdisk /dev/EVERY_OTHER_DEVICE  
    Make the Boot Partition on every drive
    n enter enter +1024M 8300  
    Make the rest of the drive capacity the EXT4 partition for your RAID
    n enter enter enter 8300  
    Write the changes
    w y  
    Continue until all drives you want in your RAID are partitioned.
     
    Now we can make the RAID (0/1/5/6/10)
    X = RAID level
    Y = Number of drives total for the RAID
    mdadm --create /dev/md0 --verbose --level=X --raid-devices=Y /dev/YOUR_DEVICE_1_EXT4_PARTITION_2 /dev/YOUR_DEVICE_2_EXT4_PARTITION_2  
    Note that you are adding the SECOND PARTITION of each drive to the RAID, not the device itself. Append one partition per drive to the end of that command until every drive you're using in your RAID is listed.
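    Before committing to a level, it can help to sanity-check how much usable space each level leaves. A throwaway arithmetic sketch (the drive count and size here are made up, not from this build):

```shell
# Hypothetical example: N drives of S GB each (not my actual hardware)
N=4
S=1000
echo "RAID0:  $(( N * S )) GB"         # striping, no redundancy
echo "RAID1:  $S GB"                   # every drive is a full mirror
echo "RAID5:  $(( (N - 1) * S )) GB"   # one drive's worth of parity
echo "RAID6:  $(( (N - 2) * S )) GB"   # two drives' worth of parity
echo "RAID10: $(( N * S / 2 )) GB"     # striped mirrors
```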
     
    Check RAID status with
    cat /proc/mdstat  
    If you selected anything other than RAID0, it will take a while to build the volume.
    Keep checking with "cat /proc/mdstat" until it's done.
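    While it builds, /proc/mdstat prints a recovery line with a percentage. A sketch of pulling that number out, using a made-up status line rather than real output from my array:

```shell
# Made-up example of the recovery line /proc/mdstat shows during a build
line='[=>...................]  recovery =  5.1% (104857600/2097152000) finish=184.3min speed=180355K/sec'
pct=$(printf '%s\n' "$line" | grep -oE '[0-9.]+%')
echo "build progress: $pct"
```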
    Once Completed, we can now create the partitions needed on the RAID
    sudo gdisk /dev/md0  
    We want a swap partition as the first partition on the new RAID volume. Keep in mind these sizes don't need to be exact, but it's good practice to follow the rule of thumb matching swap capacity to how much RAM you have.
    I'm going to start this list at 8GB of RAM, because if you have less than 8GB, you should probably be upgrading your RAM and not making a RAID boot setup lol
     
    The first number is how much RAM you HAVE; the second is how much CAPACITY the swap should be
        8GB - 3GB
      12GB - 3GB
      16GB - 4GB
      24GB - 5GB
      32GB - 6GB
      64GB - 8GB
    128GB - 11GB
     
    For me, with 32GB of RAM, I need a 6GB swap. 1GB is 1024MB, and 1024 x 6 is 6144MB, so I'll be entering +6144M; change the value to match your specs.
    n enter enter +6144M 8200  
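    The GB-to-MB arithmetic is easy to script if you want to double-check the value for your own machine (the numbers here match my 32GB example; swap in your own from the table):

```shell
# 1 GB = 1024 MB, so the table's swap size in GB becomes the gdisk size in MB
SWAP_GB=6                       # from the table: 32GB RAM -> 6GB swap
SWAP_MB=$(( SWAP_GB * 1024 ))
echo "enter +${SWAP_MB}M in gdisk"
```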
    Now partition the rest of the RAID
    n enter enter enter 8300  
    Write the changes
    w y  
    Now we can finally move to the Pop_OS! installer and configure the partitions.
     
    Select Custom Install
    Select the EFI partition on the FIRST drive by device number; drives don't always appear in order
    Select /boot/efi
    Make sure the format is fat32
     
    Select the BOOT partition that you made on the SECOND drive by device number
    (drives won't always be listed in order, so check the device number)
    Select Custom
    Input /boot into the box
    Make sure the format is set to EXT4
     
    Select the SMALL partition on the RAID array
    Select SWAP
     
    Select the LARGE partition on the RAID Array
    Select / (for Root)
    Make sure the format is EXT4
     
    You may be able to use other formats; I have not tried them and cannot guarantee this process will work with them.
     
    Now you can finally select the orange button at the bottom right and install Pop_OS! (the installer sometimes reports a failure at the very end; just ignore this)
     
    Once it's completed (or failed), go back into the terminal; we need to mount the RAID
    sudo mount /dev/md0 /mnt  
    Mount the Boot Partition
    sudo mount /dev/YOUR_SECOND_DEVICE_BOOT_PARTITION /boot  
    Mount the EFI partition
    sudo mount /dev/YOUR_FIRST_DEVICE_EFI_PARTITION /boot/efi  
    Now we can install mdadm inside the new system (it may already be installed, but run it anyway). We're already root from the earlier sudo -i:
    cd /mnt
    mount --bind /dev dev
    mount --bind /proc proc
    mount --bind /sys sys
    chroot .
    apt install mdadm  
    Now check the mdadm configuration to make sure the RAID UUID is there
    cat /etc/mdadm/mdadm.conf  
    If it is not there, check the UUID manually
    mdadm --detail /dev/md0
    Copy the UUID and now we can edit the mdadm.conf
    nano /etc/mdadm/mdadm.conf  
    Under the line that says "# definitions of existing MD arrays", type the following, pasting in your UUID:
    ARRAY /dev/md/0 metadata=1.2 UUID=YOUR_RAID_UUID name=pop-os:0
    Press CTRL+X to save, then Y, then Enter.
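    If you'd rather not hand-copy the UUID, it can be pulled out of the mdadm --detail output and turned into the ARRAY line with sed. The UUID below is made up for illustration:

```shell
# Fake sample of the UUID line printed by `mdadm --detail /dev/md0`
detail='           UUID : 1a2b3c4d:5e6f7a8b:9c0d1e2f:3a4b5c6d'
uuid=$(printf '%s\n' "$detail" | sed -n 's/^ *UUID : //p')
echo "ARRAY /dev/md/0 metadata=1.2 UUID=$uuid name=pop-os:0"
```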
     
    Now we need to update your changes
    update-initramfs -u  
    Make sure the scanned array is recorded in the config (note the .conf extension)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf  
    Tell the system it needs to boot the RAID (change the X to the level of RAID you are using)
    echo raidX >> /etc/modules  
    Now just make sure the /boot and /boot/efi partitions are still mounted; mine unmounted at this point for some reason.
    lsblk  
    If you do not see anything mounted at /boot or /boot/efi, you need to remount them.
     
    Remount the Boot Partition
    sudo mount /dev/YOUR_SECOND_DEVICE_BOOT_PARTITION /boot  
    Remount the EFI partition
    sudo mount /dev/YOUR_FIRST_DEVICE_EFI_PARTITION /boot/efi  
    With /boot and /boot/efi remounted, verify that mdadm made it into the initramfs
    lsinitramfs /boot/initrd.img | grep mdadm  
    You need internet for this last step, so connect to Wi-Fi if you're not using Ethernet. You should also try to install GRUB in case it failed to install during the Pop_OS! install.
    So the last step here is to install GRUB and then update it; even if it shows issues, it should be fine, as the other files will be there from the Pop_OS! installer.
    apt install grub2-common -y  (you may get an error; ignore it)
    update-grub2  
    Exit chroot by typing in
    exit  
    Exit root by typing in
    exit  
    Now reboot the computer
    poweroff --reboot  
    Everything should go fine and you should now be booting into a clean install of Pop_OS! 20.04 on your new RAID Array operating just like a normal install.
     
    From this point on, just remember to never touch the drives separately. If you ever have to enter commands that point at your boot drive, that will always be /dev/md0 (or whatever you called your RAID array). Never use the device names we were using earlier to create the RAID; if anything on those changes, it could corrupt the entire array, resulting in full data loss.

    I hope this helps anyone who wants to set up RAID0 for a blazing fast boot drive to get the most FPS possible, a safe and secure RAID1 for your mission-critical files, or even RAID5/6/10 for a little of both.
    I started learning mdadm straight out of Windows with very little Linux knowledge at 6pm April 24th; it's now 6am April 26th. That's 36 hours straight, and I slept at my desk for 4 hours. No one has any excuse for not being able to learn something.

    It's time I get some much needed sleep.
  2. Agree
    bANONYMOUS got a reaction from Tomy Hoopa in Windows 11 Benchmarks, DESTROYS Windows 10!   
    I made a video doing a bunch of benchmarks comparing Windows 10 to Windows 11.
     
    In my tests, Windows 11 has an 18.75% faster boot time
    3DMark has a 9.74% better score, 2.05% better clock speed, however the test was 7.08% hotter on the CPU and 2.57% hotter on the GPU.
    CrystalDiskMark I got a 15.03% better read speed, and a 4.41% better write speed.
    Geekbench 5, it shows a 9.04% better single core score, and a 15.59% better multi core score, while the clock speed was also 2.05% better, and actually ran 4.13% cooler on the CPU.
     
    EDIT:
    Higher clock speeds and temps are a result of a "Turbo" performance profile in Windows 11 that Windows 10 doesn't have from factory.
  3. Informative
    bANONYMOUS got a reaction from goopey in [GUIDE] Install Pop_OS! 20.04 on RAID   
  4. Informative
    bANONYMOUS got a reaction from Ivan Granic 01 in [GUIDE] Install Pop_OS! 20.04 on RAID   
  5. Informative
    bANONYMOUS got a reaction from benitiv in [GUIDE] Install Pop_OS! 20.04 on RAID   
    Before we get into this, I just want to point credit where credit is due, so first off, I would like to thank Wendell from Level1Techs, without his Guide and Video, I wouldn't have even know this was possible, and I would have just spend over $600 CAD on new NVMe's for no reason if I wasn't able to get this working. (I guess it pays to do your research before buying hardware)
    Next I would like to thank the people on /r/linuxquestions and /r/pop_os who were helping me out though this entire process start to finish.
    We can't forget about our very own @jdfthetech who also just happened to be up all night and was able to give me some tips that got me back on track.
    And Finally, I want to thank everyone who contributes to the ArchWiki, without that place, I would have been absolutely lost and would have switched back to Windows
     
    Now, onto the Guide.

    First, you want to make a Pop_OS! 20.04 USB Installer
    Intel/AMD
    NVIDIA
    Once that's done, we want to configure your BIOS for a Linux install, if you haven't already.
     
    Change OS option if you have it to "other OS"
    Make sure SATA Mode is AHCI (it will not work otherwise)
    Disable Fastboot
    Save and Restart
    Boot to your POP_OS! 20.04 USB
     
    If you don't want to follow along with this guide, I made a tutorial video HERE
    (video is not finished yet, this is just a place holder)
     
    Once booted in the Pop_OS! USB
    Setup language, keyboard, and do not continue any further once it asks how to install.
    open terminal and let's elevate as root
    sudo -i  
    Now check your devices to see which drives you're using.
    lsblk  
    Write down the device names that you want for your RAID (you will need these a lot)
    sudo gdisk /dev/DEVICE_NAME  
    First Drive
    You want this one to be your EFI partition and your for the RAID capacity
    So for the EFI partition, enter as followed as it asks
    n enter enter 1024M ef00  
    Next create the EXT4 Partition that will be used for the RAID
    n enter enter enter 8300  
    Write the changes
    w y  
    Check for EFI Partition
    lsblk  
    Format the EFI you just created from FAT16 to FAT32
    mkfs.fat /dev/YOUR_DEVICE_PARTITION  
    Continue partitioning every other drive as follows
    sudo gdisk /dev/EVERY_OTHER_DEVICE  
    Make the Boot Partition on every drive
    n enter enter 1024M 8300  
    Make the rest of the drive capacity the EXT4 partition for your RAID
    n enter enter enter 8300  
    Write the changes
    w y  
    Continue until all drives you want in your RAID are partitioned.
     
    Now we can make the RAID (0/1/5/6/10)
    X = RAID level
    Y = Number of drives total for the RAID
    mdadm --create /dev/md0 --verbose --level=X --raid-devices=Y /dev/YOUR_DEVICE_1_EXT4_PARTITION_2 /dev/YOUR_DEVICE_2_EXT4_PARTITION_2  
    Make note that you are adding the SECOND PARTITION of each drive for the RAID, not the devices itself, make sure to add the partition at the end and then continue adding to the end of that command for total number of every drive you're using in your RAID
     
    Check RAID status with
    cat /proc/mdstat  
    If you selected anything other than RAID0, it will take awhile to build the volume.
    Keep checking with "cat /proc/mdstat" until it's done
    Once Completed, we can now create the partitions needed on the RAID
    sudo gdisk /dev/md0  
    We want to make a swap partition as our first partition on the new RAID volume, now keep in mind these don't need to be exact, but it's in good practice to stay with the rule of thumb for capacity needed to how much RAM you have
    I'm going to start this list off at 8GB of RAM, because if you have less than 8GB, you should probably be upgrading your RAM and not making a RAID boot setup lol
     
    First Number is how much RAM you HAVE, Second number is how much CAPACITY that the SWAP should be
        8GB - 3GB
      12GB - 3GB
      16GB - 4GB
      24GB - 5GB
      32GB - 6GB
      64GB - 8GB
    128GB - 11GB
     
    For me, I have 32GB of RAM. 1GB is 1024MB and I need a 6GB swap; 1024 x 6 is 6144MB, so I'm going to be entering +6144M. Change the value to meet your specs.
    n enter enter +6144M 8200  
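The GiB-to-MiB arithmetic is just multiplication by 1024; a one-liner if you don't trust mental math (set swap_gb to whatever the table above gives you):

```shell
# Convert the recommended swap size in GiB to the +SIZE string gdisk expects.
swap_gb=6                      # from the RAM table above (32GB RAM -> 6GB swap)
echo "+$((swap_gb * 1024))M"
```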
    Now partition the rest of the RAID
    n enter enter enter 8300  
    Write the changes
    w y  
    Now we can finally move to Pop_OS Installer and configure the partitions.
     
    Select Custom Install
    Select the EFI partition on FIRST drive by device number, they don't always appear in order
    Select /boot/efi
    Make sure the format is fat32
     
    Select the BOOT partition that you made on the SECOND drive by device number
    (drives won't always be listed in order, so check the device number)
    Select Custom
    input into the box /boot
    Make sure the format is set to EXT4
     
    Select the SMALL partition on the RAID array
    Select SWAP
     
    Select the LARGE partition on the RAID Array
    Select / (for Root)
    Make sure the format is EXT4
     
    You may be able to use other formats; I have not tried them and cannot guarantee they will work using this process.
     
    Now you can finally select the orange button at the bottom right and Install Pop_OS! (sometimes it will fail at the end, just ignore this)
     
    Once completed (or failed), go back into the terminal; we need to mount the root partition of the RAID (the large second partition we made on /dev/md0)
    sudo mount /dev/md0p2 /mnt  
    Mount the Boot Partition
    sudo mount /dev/YOUR_SECOND_DEVICE_BOOT_PARTITION /mnt/boot  
    Mount the EFI partition
    sudo mount /dev/YOUR_FIRST_DEVICE_EFI_PARTITION /mnt/boot/efi  
    Now we can install mdadm on the RAID (may already be installed, but try anyway)
    cd /mnt
    sudo mount --bind /dev dev
    sudo mount --bind /proc proc
    sudo mount --bind /sys sys
    sudo chroot .
    apt install mdadm  
    Now check mdadm configuration to make sure the RAID UUID is there
    cat /etc/mdadm/mdadm.conf  
    If it is not there, check the UUID manually
    mdadm --detail /dev/md0
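If you'd rather not copy the UUID by hand, awk can pull it out of the --detail output. This is shown against a sample of that output's format (the UUID here is a placeholder); on the live system, pipe `mdadm --detail /dev/md0` in instead of the here-doc:

```shell
# Extract just the UUID value from mdadm --detail style output.
awk -F' : ' '/UUID/ { gsub(/ /, "", $2); print $2 }' <<'EOF'
           Name : pop-os:0
           UUID : 6db283b5:93a51b29:8d2cf5ab:01a3c786
         Events : 17
EOF
```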
    Copy the UUID and now we can edit the mdadm.conf
    nano /etc/mdadm/mdadm.conf  
    Under where it says "# definitions of existing MD arrays"
    Type in and paste your UUID
    ARRAY /dev/md/0 metadata=1.2 UUID=YOUR_RAID_UUID name=pop-os:0
    CTRL+X to save
    Y
    Enter
     
    Now we need to update your changes
    update-initramfs -u  
    Append the scanned array definition so it persists
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf  
    Tell the system it needs to boot the RAID (change the X to the level of RAID you are using)
    echo raidX >> /etc/modules  
    Since we've now changed mdadm.conf, run "update-initramfs -u" once more so the change is baked into the initramfs.
    Now just make sure the /boot and /boot/efi partitions are still mounted; mine unmounted at this point for some reason.
    Run lsblk. If you do not see anything mounted at /boot or /boot/efi, you need to remount them.
     
    Remount the Boot Partition
    sudo mount /dev/YOUR_SECOND_DEVICE_BOOT_PARTITION /boot  
    Remount the EFI partition
    sudo mount /dev/YOUR_FIRST_DEVICE_EFI_PARTITION /boot/efi  
    With /boot and /boot/efi remounted, verify that mdadm made it into the initramfs (this should print mdadm-related paths)
    lsinitramfs /boot/initrd.img | grep mdadm  
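An empty grep result is easy to miss, so a small pass/fail wrapper makes it explicit. It's demonstrated on a sample file list here, since the initrd only exists on the installed system; swap the printf for `lsinitramfs /boot/initrd.img`:

```shell
# grep -q exits 0 only when a match exists, so the if-branch reports clearly.
printf 'usr/sbin/mdadm\nscripts/local-block/mdadm\n' |
  if grep -q mdadm; then
    echo "mdadm present in initramfs"
  else
    echo "mdadm MISSING - rerun update-initramfs -u"
  fi
```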
    You need internet for this last step, so connect to Wifi if you're not using Ethernet. Try to install grub2 in case it failed to install during the Pop_OS! install, then update grub. Even if it shows errors, it should be fine, as the other files will already be there from the Pop_OS! installer.
    apt install grub2-common -y (you may get an error, ignore this)
    update-grub2  
    Exit chroot by typing in
    exit  
    Exit root by typing in
    exit  
    Now reboot the computer
    poweroff --reboot  
    Everything should go fine and you should now be booting into a clean install of Pop_OS! 20.04 on your new RAID Array operating just like a normal install.
     
    From this point on, just remember to never touch the drives separately. If you ever have to enter commands directed at your boot drive, it will always be /dev/md0 (or whatever you called your RAID array). Never use the device names we used earlier to create the RAID; if anything on those changes, it could corrupt the entire RAID, resulting in full data loss.

    I hope this helps anyone who wants to set up RAID0 for a blazing fast boot drive to get the most FPS possible, or a safe and secure RAID1 for your mission critical files, or even a RAID5/6/10 for a little of both.
    I started learning mdadm straight out of Windows with very little Linux knowledge at 6pm April 24th; it's now 6am April 26th. That's 36 hours straight, and I slept at my desk for 4 hours. No one has any excuse for not being able to learn something.

    It's time I get some much needed sleep.
  6. Like
    bANONYMOUS got a reaction from benitiv in Help with NVMe RAID setup for Linux install   
    Yeah I saw that, that was how I finally figured it all out and got it working.
    I just finished writing a full n00b guide on how to do this. I wanted to make sure I could repeat the process, so I formatted everything and did it all over a second time to confirm my process works for the guide, and also made a video of the entire process that I need to edit now.
    I'm going to make a second thread for the guide so it's not cluttered with the posts about trying to figure stuff out.
  7. Informative
    bANONYMOUS reacted to benitiv in Help with NVMe RAID setup for Linux install   
    Was lurking on this thread. Interesting topic.
     
    (bANONYMOUS - Wendell took some of his precious time to reply to your post on the L1tech site. Maybe you want to inform him about your success as well? (sorry for being like I am 😭))
  8. Like
    bANONYMOUS got a reaction from BiG StroOnZ in The AorusElite - Linux Video Editing Workstation   
    Introducing my new build for 2020! The "AorusElite"
    I recycled a lot from my 2019 build
    However, we're going in a different direction this time, prioritizing video editing, and switching fully to Linux. Specifically, Pop_OS!.
     

    Specs:
    Case:
     - NZXT H510 Elite
    Motherboard:
     - Gigabyte Aorus Xtreme Z390
    CPU:
     - Intel i9-9900k (Max Load ~60c)
    CPU Cooling:
     - NZXT Kraken X62 (4x 140s)
     - Thermal-Grizzly Kryonaut
    Memory:
     - 4x Corsair Vengeance RGB Pro 2666MHz
    GPU:
     - EVGA 1080ti FTW3 (Max Load ~55c)
    GPU Cooling:
     - NZXT Kraken G12 (modded) (1x 120)
     - NZXT Kraken M22 (modded) (2x 120s)
     - Thermal-Grizzly Kryonaut
    Storage:
     - 3x Adata XPG Gammix S11 Pro 1TB NVMe
     - 3x Seagate Barracuda 8TB HDD
    PSU:
     - Corsair HX1200 PSU with CableMod Cables

    Photos:
    The stress started when these arrived.

     
    Modified the NZXT Kraken G12 and Kraken M22 to play nice together
    Hacked together a bracket to fit a 120mm fan instead of the factory 92mm (RGB is now an option)
    Painted grey to be uniform for either light or dark builds in the future.

     
    Final outcome


    The Story:
    During 2019, I was editing videos with Premiere Pro and doing some gaming, mostly Forza and Halo. Now that 2020 is here, I've finally given up on PC gaming and just play Forza and Halo on the Xbox in the living room, so my computer is used primarily for content creation: Youtube, cinematography, photography, and music. Since none of us can leave the house, and I finally got fed up with Premiere Pro crashing YET AGAIN, corrupting my autosave so I lost an entire project, I figured all this excessive free time made it the perfect moment to learn something new, and decided to try out Davinci Resolve. After finishing a few of their online courses, I cancelled my Adobe Creative Cloud subscription and I'm not looking back. I noticed when downloading Davinci Resolve that they have a Linux version. The only setback is that it was designed for CentOS, and I primarily use Debian-based distros (Kali or Pop_OS!), but I found a few guides online with enough info to piece together to make Davinci Resolve 16 work seamlessly on Pop_OS! The only other program I use is FL Studio 20, and it turns out it works perfectly straight out of the box with wine. So, my needs are met and I have no use for Windows anymore. I'm switching to Linux (I used Kali every day on my MSI Leopard Pro for work before I got laid off, and a year later I now daily Pop_OS! on my HP Spectre x360).
     
    NZXT's Lack Of Information:
    When I decided on the NZXT H510 Elite, my rad options were a 280mm on the front and a 120mm on the rear, so naturally I wanted to make the most of those options, knowing I'd be buying the Kraken G12 for my GPU. I looked into the supported coolers for the G12: it doesn't mention the Kraken M22 (which is NZXT's only 120mm Kraken cooler), but it does say it works with all X series Kraken coolers, and on NZXT's website, under Kraken X series, they list the M22. So I figured that since the G12 came out before the M22 was released, they probably just hadn't updated the G12's product compatibility page. No.. no, that wasn't the case at all. It's not compatible at all, not even close.
    Normally, on a decent gaming setup, you're going to overclock the hell out of the GPU, so running a 280mm rad on a 1080ti with the Kraken G12 makes sense, and maybe an overclocked i5 or a stock-clocked i7 would suffice on a 120mm AIO. But I have an i9 that hit ~90c rendering videos on my old setup with a 280mm rad and three 120mm fans on it, so no, a 120mm cooler will not work for my application. And since this is a video editing workstation, the GPU is only there for 3D applications; the primary usage of this computer is going to be CPU intensive, so I need a fully functioning 280mm rad for the CPU. Which means I need to make that Kraken G12 and M22 work, whether they like it or not.

    Kraken G12 & M22 Learn To Like Each Other:
    I ended up sizing the M22 mounting bracket up against the G12 and realized you can actually just drill out four holes with it perfectly centered in the Asetek mount hole, and mount the M22 bracket under the G12. This setup means you can run the M22 on the G12 brackets without the exterior G12 plate that holds the fan; you just need to cut down the standoffs on the G12 brackets that mount to the GPU. So I did that, drilled out the M22 bracket, sized it all up on the GPU.. and I should have taken apart the GPU first to make sure I was drilling out the correct holes, because I drilled out the AMD holes, and now the M22 bracket is scrap. Going through some old junk, I found a broken DVD player that was headed for recycling, ripped it apart, and salvaged the exterior panels as scrap sheet metal to make my own M22 mounting bracket, with the correct holes this time that line up with the G12 brackets. Bingo, bango, we're in business: the M22 now mounts to the GPU. However, the G12 bracket needed to be cut between the Asetek mounting hole and the fan hole to let the coolant lines come through, as a normal cooler is supposed to be mounted on top and my setup has it mounted below the G12. And since I was cutting it up anyway, why not use a little more of that old DVD player and make a 120mm fan bracket? So after another day of cutting, testing, fitting, mocking up, yelling, bleeding, metal slivers, more yelling, crying, admitting self-defeat, and pricing out new cases that fit a 280mm and 140mm AIO setup, I finally got it done, and it works fantastic: the GPU was hitting around 80-85c at full load before, and now it sits dead on around 60c at full load.
    Was it worth it? Absolutely. However, this leads us into the next issue: the lack of internal USB headers.

    Just.. One.. More:
    Now that I have the G12 modified and the M22 on a custom mounting plate, they're working in perfect harmony, with a delightful 120mm fan mod to set off a result I didn't think I'd be able to achieve. However, now there's a shortage of internal USB ports, because the Gigabyte Aorus Xtreme Z390, beast of a board, only has two. The H510 Elite RGB/fan controller uses one, and I need that working because it controls the fan curve for the GPU now that I'm not using the proprietary fan headers on the GPU PCB itself. Next would be either the M22 or the X62; since the GPU is the "show piece" of the case, I decided that one gets to light up. It's not a hard fix though, I just need to order the NZXT Internal USB Hub, and then I can plug in the X62 and have all the lights working.

    Finally done, right?:
    So now the new computer is built, and I'm ready to say goodbye to Windows. I reset the BIOS to default settings to remove the possibility of any compatibility issues, disabled Fastboot so there are no hiberfiles locking my drives from use, and even unplugged all of my other drives to make sure they're completely removed from the install, to protect data and make sure nothing goes wrong. It's time, time to finally install Pop_OS! 19.10. So, I wiped the NVMe and did a fresh install, set it all up, all good right? ...no audio

    Audio Issues:
    I can't get anything out of the rear ports on the motherboard, my USB DAC doesn't work, my USB headphones don't work. I tried to figure this out for hours, hit up the Pop_OS! Reddit, no solution. Wiped the NVMe again and tried Pop_OS! LTS 18.04 this time, exactly the same issues. Since I really had nothing to lose at this point, I wiped that NVMe again and tried the Pop_OS! 20.04 Beta, and it half works: I have audio over USB, so my DAC and headphones work, and for whatever reason the front headphone jack on the case works, but the rear ports don't at all. I can use PulseAudio Volume Control to route the front headphone output to the rear jack, but that had issues with rebooting; on startup, the audio ends up stuck on "headphones" (good, what I wanted), but the volume control would only try to control "Analog Output", so I couldn't turn my volume up or down even with headphones selected, until I manually selected a different output and switched back to headphones.
    With that not being a very good solution, considering every time I turn the computer on I'd need to manually deselect and reselect my output source just to have volume control, I bought a 3.5mm to 1/4" audio cable so I can use the 1/4" output on the Focusrite Scarlett 2i2 DAC as my default audio output to the Logitech speakers. So now it's USB audio out to the DAC, and from the DAC's 1/4" output through the 3.5mm end to the Logitech speakers. It's not graceful, but it works, so that issue has a workaround.
     
    Davinci Resolve Issues:
    Now, the whole reason I was fully switching was the Linux support from BlackMagic with Davinci Resolve, but there wasn't a lot of information about this; I had already spent days diagnosing the audio issues before I found out it's designed for CentOS. I was able to piece together enough info from a few guides online and found a tool by Daniel Tufvesson called MakeResolveDeb, which converts the BlackMagic-provided .run file into an easily installable .deb file for Debian-based Linux distros. It worked on the first try. It crashed opening the welcome screen on first boot, but I found a few things online saying this is entirely normal: just wait for it to crash, click "force close", and Davinci Resolve 16 will start opening, and since that welcome screen only comes up on first boot, you never have that issue again. So far I've knocked out a few videos and everything is working; my rendering time has also been CRAZY fast, but we'll get into that further during the NTFS issues. The only downfall I found with Davinci Resolve for Linux is that the free version doesn't support h.26x formats, and since my entire workflow has always been .mp4, it doesn't read any of my videos for editing. Which brings us to the next part.
     
    MP4/MOV Containers and h.26x Codec:
    Turns out Davinci Resolve doesn't like the h.26x codec in the free version on Linux, so naturally I checked what other formats I can shoot in. My Genuine Panaphonic GH4 does have the ability to shoot in .MOV; however, .MOV can also contain h.264, which isn't supported, so even though some test shots in .MOV were in a supported container, they only imported as an audio file, because the video codec was still h.264. So my entire workflow remains .MP4/h.264.
    I figured this was going to be a deal breaker if I couldn't find a very simple way to encode footage into a different format fast and in bulk. I later found out Davinci Resolve also doesn't support AAC audio, though I couldn't find much information on whether that applies entirely or only to the free version. I could buy the Studio version, which supports h.26x codecs, so I could use .MP4 containers again, but I couldn't find much saying whether AAC is supported in Studio either. For now, I found a fairly quick solution to continue using the free version without further issues.

    FFmpeg:
    FFmpeg is built into Pop_OS! by default, so I can just open a terminal, change the directory to the folder where the .MP4 files are located, and enter a command which encodes every .MP4 file in that directory to a Davinci Resolve compatible .MOV file. Bingo bango, all my footage is now compatible, and I didn't have to do anything more than a couple clicks. It's a super simple procedure, and I'm working on video tutorials for all of these solutions that I'll link under their headings as the videos get finished, to help out anyone else facing similar issues.
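For anyone wanting the shape of that bulk command, here's a sketch. The DNxHR video and PCM audio codecs are one Resolve-friendly combination I'm assuming here, not necessarily the exact settings used above, and the loop echoes each command so you can review it before removing the `echo`:

```shell
# Transcode every .MP4 in the current directory to a Resolve-friendly .MOV.
# Remove the leading `echo` to actually run ffmpeg (must be installed).
for f in *.MP4; do
  [ -e "$f" ] || continue              # no .MP4 files: skip the literal glob
  echo ffmpeg -i "$f" -c:v dnxhd -profile:v dnxhr_hq -pix_fmt yuv422p \
    -c:a pcm_s16le "${f%.MP4}.mov"
done
```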
    Now that I've figured out a super simple way to encode all of my footage from .MP4 to .MOV, that's a pretty nice transition into the next issue.
     
    NTFS locked to Read Only Access.
    Now, the huge response I got from a lot of people on Reddit was "you need ntfs-3g", and yes, they're absolutely correct, I do need ntfs-3g, which would probably solve this issue... if it weren't already installed out of the box with Pop_OS!. I even tried purging and reinstalling, to no avail, just in case it didn't install correctly. The issue persists. I tried adjusting permissions, different mounting options, disabling automount, mounting manually, mounting at root; it doesn't matter, the internal NTFS drives are always read-only. I tried an external NTFS drive: works fine, read and write access all day long. I also remembered that Linux kernel 5.4 has full exFAT support built in now (new since 5.3; before that you had to manually install packages to enable exFAT). So, being on kernel 5.4, I tried formatting my games SSD to exFAT, since I don't need any of that data on Linux anymore, and yeah, it works fine as exFAT; formatted back to NTFS just to see: read-only again. So my plan was to buy another 8TB drive, install it, format it to exFAT, and go one drive at a time: copy everything from one NTFS drive to the exFAT drive, and once I have an exFAT clone, format that NTFS drive to exFAT, and repeat until all drives are exFAT. It sucks, but it'll work. And seeing as this H510 Elite can fit three 3.5" drives, and I only had two because that's all the Corsair Spec-OMEGA could fit, why not throw in a third drive and really utilize the full potential of the H510 Elite.
    This means I'll now have the 512GB NVMe as my boot drive, the 960GB Corsair SSD for games, the 1TB Sandisk SSD for Youtube videos to edit off of at higher speeds, one 8TB drive for my allocated Windows data (pictures, videos, music, downloads, documents), and the second 8TB drive as an archive for files I don't use anymore but might need one day, mostly unused or unedited Youtube footage, abandoned projects, things of that nature.
    Now, with this third 8TB drive I bought, I'll have five SATA drives: two SSDs in ports 1 and 2, two HDDs in ports 3 and 4, and the new drive going into port 5 of 6 total.. which is where the next issue comes into play.
     
    NVMe, The Robin Hood Of SATA:
    Some of you might already know where this is going: it turns out that if you use an NVMe drive in the first or second M.2 port on the Gigabyte Aorus Xtreme Z390, it disables SATA ports 5 and 6 to reallocate those lanes. So I just bought an HDD I can't hook up. But what about the third M.2 port? Well, that one cuts the 4x PCIe slot to half speed, and I want to get a capture card that will be using that slot; I don't really want to capture at half bandwidth, especially if I end up doing something at a higher resolution down the road and need the bandwidth. The only way I'm giving up that functionality is for ridiculous storage speeds.
    So, what's my solution? Well, according to my Amazon orders, I have three 1TB Adata XPG Gammix S11's coming. I'm going to fill the three M.2 ports with the three 1TB NVMe drives, remove the SSDs entirely, and run the three 8TB HDDs on SATA 1/2/3, with port 4 available if I change cases to something that fits four HDDs in 2021 for my next build upgrade.
    Now some of you may be wondering: why did I buy the Gammix S11's instead of the standard SX8200? They're the same NVMe; the Gammix just has a heat spreader, and the Gigabyte Aorus Xtreme Z390 has M.2 heat spreaders built in, so why pay more for the Gammix when the SX8200 is cheaper and I can't physically see the drives anyway?
    Yeah, I'm right with you on that thought process, I was thinking the same thing. I was actually going to buy three SX8200's off Newegg, but when I was about to order, they only had two and I need three. So I went on Amazon, and it turns out they had the Gammix S11's on sale for $190 CAD with free shipping, so I actually saved $10 per drive by buying Gammix S11's instead of the standard SX8200's.
    Once these come in, I'll install all three and run a RAID5 array as my boot drive, which should give me roughly double the performance of a single drive, providing something up to 7000MBps read and 6000MBps write, while still having parity like RAID1 in case of a drive malfunction. This also lets me hook up that third 8TB drive and carry out the earlier plan: copy everything to the exFAT drive, format the NTFS drive to exFAT, and repeat for the last drive. The alternative for the latter is paying for some cloud storage to upload around 15TB of data and then creating another RAID5 from the three 8TB drives, which would be the ideal cleaner setup software-wise, with faster storage speeds; in Linux I'd just allocate the directories to the HDD RAID5 to save space on the NVMe RAID5.

    Raid? Are you sure?:
    This, to put it simply, is the simplest route now. 3x 1TB NVMe's in RAID5 will give me ~2TB of storage with parity for recovery if a drive fails, and with the HDDs I'll have ~16TB with parity. I should be able to just install Pop_OS! on the ~2TB NVMe RAID5 array, which essentially combines my 512GB NVMe, 960GB SSD, and 1TB SSD, so I'm losing roughly 500GB of storage with this setup. But considering my benchmark speeds on the SSDs are around 500MBps, and the insanely improved speeds of NVMe in RAID5, I'm completely content giving up around 500GB for something up to 14x the performance. As for rendering times: on average, a 10 minute 1080p video took somewhere in the neighborhood of 8-ish minutes, encoding from the 1TB Sandisk X400 which averages around 500MBps. When I edited my first video in Davinci Resolve on Linux (keeping in mind I only had the single 512GB NVMe set up, so the video was edited off NVMe, in a larger .MOV container), a 14 minute video (bad for comparison, I know) rendered in 2 minutes and 42 seconds, and that's just one NVMe. With the new setup I should be seeing something in the neighborhood of a minute to render a 10 minute 1080p video, which is why this new NVMe RAID5 array is going to be a dream to work with. As for the HDDs, there's no real reason I need RAID5; I just figured it would be a much easier setup for allocating the other directories if Linux sees it as "one drive". I can figure out how to do what I did in Windows: install the OS on the NVMe (here, Pop_OS! on the NVMe RAID5 array) and then allocate the default libraries to the HDD, so downloads, pictures, music, videos, and documents all go to the HDD by default. In Windows, if I downloaded something from Firefox, it went to the stock download location, which was allocated to the Downloads folder on the HDD to save space on the NVMe for programs, and I'm hoping to achieve the same with Pop_OS! as well. I just need to figure out how; I've never daily-driven Linux on a multi-drive setup, so I still need to figure this out. But if it works as it did in Windows, which I'm assuming it will, this is going to be the cleanest, most organized data storage setup I've done to date.

    NTFS, exFAT, or Ext4:
    My initial plan was to keep NTFS in case I ever needed to use Windows again. However, after my testing, an external NTFS drive works but internal NTFS does not for some reason. I formatted my "games" SSD to NTFS just to see if this was a Windows compatibility issue locking the privileges on my drives; it was not, as the freshly formatted NTFS drive is still read-only in Pop_OS! 20.04, so this must be a bug with the beta. However, seeing as I have no patience and want this resolved, I was just going to format all my drives to exFAT, despite the fact that exFAT has poor recovery if something goes wrong, while NTFS requires defragging to recover lost space over time. They both have their downfalls, so I figured it was pretty well a tossup, until I looked more into Ext4: it has the stability of NTFS with the cleanliness of exFAT, by not requiring defragging all the time. It may not be compatible with Windows if I ever need that in the future, but so far it's a win-win for Ext4. So now it's just a matter of: wait until 20.04 is released to see if NTFS works by then, or jump on Ext4 for all of my drives?
     
    NTFS vs Ext4:
    So we've already established that Ext4 has a cleaner table and doesn't require frequent defragmenting as opposed to NTFS, while still sharing NTFS's recovery ability if corruption does take place, so it's already looking like Ext4 is the winner. Well, I wanted to make sure the choice was clear, so I formatted the 960GB Corsair Force LE, created a single partition of the entire drive capacity formatted to NTFS, ran three benchmarks one after another, and the averaged report is as follows.
    NTFS: Read - 536.5MBps, Write - 437.1MBps, Latency - 0.20ms
    I then formatted the drive again, created a new partition of the entire capacity in Ext4, and the average of the three reports is as follows.
    Ext4: Read - 555.6MBps, Write - 439.5MBps, Latency - 0.03ms
    This works out to roughly a 3.6% improvement in read speed, a 0.5% improvement in write speed, and latency cut to 15% of the NTFS figure (an 85% reduction). So not only is Ext4 a win-win in terms of stability and cleanliness, it's also faster across all tests. If you were unsure about switching to Linux, Ext4 is a pretty damn solid reason, because both of my RAID5 arrays are going to be Ext4 after these tests.
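For anyone checking the math on those comparisons, the relative changes work out as follows from the averaged figures:

```shell
# Relative Ext4-vs-NTFS changes computed from the averaged benchmark numbers.
awk 'BEGIN {
  printf "read:    +%.1f%%\n", (555.6 - 536.5) / 536.5 * 100
  printf "write:   +%.1f%%\n", (439.5 - 437.1) / 437.1 * 100
  printf "latency: -%.0f%%\n", (0.20 - 0.03) / 0.20 * 100
}'
```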

    ...
     
    Post Under Construction
  9. Like
    bANONYMOUS got a reaction from BiG StroOnZ in Weird Flex but AorusOMEGA   
    Decided to kick off 2019 with a new money pit because reasons.
     
    I don't have many pictures of it but I'm planning on shooting a video of it tomorrow (which I'll post here when finished) so I'll snag some pictures during the shoot.
     
    Parts List:
    Corsair Spec-OMEGA RGB
    Intel i9-9900k
    Gigabyte Aorus Z390 Xtreme
    Corsair H115i RGB Platinum
    32GB (4x8GB) Corsair Vengeance RGB Pro
    4x Corsair HD120 RGB Fans
    MSI 8GB GTX 1070 Aero (my old GPU from the last computer, still waiting for the 1080 TI's to come in)
    Corsair HX1200 80+ Platinum
    Adata XPG 256GB NVMe M.2
    Corsair Force LE 960GB SSD
    Western Digital Enterprise 4TB HDD
     

     

     

  10. Like
    bANONYMOUS reacted to Sabbathian in Mother Russia Concept ;)   
    It will, don't worry  All the computer parts are ready, OS is installed, just waiting for a few more parts and off we go
     
     
  11. Funny
    bANONYMOUS got a reaction from Rune in The AorusElite - Linux Video Editing Workstation   
    Hey, thanks for the suggestion, but it was actually the opposite: the right angle connectors off the motherboard are what made this as clean as possible, because the cables come straight off, right into the grommets to the rear side, which in this case are perfectly spaced to the motherboard, instead of straight connectors making a giant U bend across the front fans off the rad. This specific case was also a dream to work with compared to the 2019 build using the Corsair Spec-OMEGA RGB. THAT thing.. was an absolute nightmare for right angle connectors lol
  12. Like
    bANONYMOUS got a reaction from NotABigGamer in Weird Flex but AorusOMEGA   
    Took some pictures with that sweet sweet f1.4 lens for all the bokeh
     
    Here's all of the trophies stacked up


    Eye candy as follows
     
    Sleep mode:

     

     

     

     
    Running:

     

     

     

     

  13. Like
    bANONYMOUS got a reaction from Belac F in The AorusElite - Linux Video Editing Workstation   
    Introducing my new build for 2020! The "AorusElite"
    I recycled a lot from my 2019 build
    However, we're going in a different direction this time, prioritizing video editing, and switching fully to Linux. Specifically, Pop_OS!.
     

    Specs:
    Case:
     - NZXT H510 Elite
    Motherboard:
     - Gigabyte AorusElite Xtreme Z390
    CPU:
     - Intel i9-9900k (Max Load ~60c)
    CPU Cooling:
     - NZXT Kraken X62 (4x 140s)
     - Thermal-Grizzly Kryonaut
    Memory:
     - 4x Corsair Vengeance RGB Pro 2666MHz
    GPU:
     - EVGA 1080ti FTW3 (Max Load ~55c)
    GPU Cooling:
     - NZXT Kraken G12 (modded) (1x 120)
     - NZXT Kraken M22 (modded) (2x 120s)
     - Thermal-Grizzly Kryonaut
    Storage:
     - 3x Adata XPG Gammix S11 Pro 1TB NVMe
     - 3x Seagate Barracuda 8TB HDD
    PSU:
     - Corsair HX1200 PSU with CableMod Cables

    Photos:
    The stress started when these arrived.

     
    Modified the NZXT Kraken G12 and Kraken M22 to play nice together
    Hacked together a bracket to fit a 120mm fan instead of the factory 92mm (RGB is now an option)
    Painted grey to be uniform for either light or dark builds in the future.

     
    Final outcome


    The Story:
    During 2019, I was editing videos with Premiere Pro, some gaming, mostly Forza and Halo, but now 2020 is around, I've finally given up on gaming and just play Forza and Halo on Xbox in the living room, and my computer ends up being used primarily for content creating, Youtube, cinematography, photography, and music, so, since none of us can leave the house, I finally got fed up with Premiere Pro crashing YET AGAIN, corrupting my Autosave and I lost the entire project, so, with so much excessive free time, I figured now would be the perfect time to start learning something new and decided to try out Davinci Resolve, and after finishing a few of their online courses, I cancelled my Adobe Creative Cloud subscription and I'm not looking back. I noticed when downloading Davinci Resolve that they have a Linux version, did some looking into it and the only setbacks is it was designed for CentOS and I primarily use distros off of Debian (Kali or Pop_OS!), looked into getting that working and found a few guides on how to piece together the info I needed to making Davinci Resolve 16 work seamlessly on Pop_OS! The only other program I use is FL Studio 20, and it turns out it works perfect straight out of the box with wine, so, my solutions are met, I have no use for Windows anymore. I'm switching to Linux (I used Kali every day on my MSI Leopard Pro for work before I got laid off, and a year later I now daily Pop_OS! on my HP Spectre x360).
     
    NZXT's Lack Of Information:
    When I decided on the NZXT H510 Elite, my rad options were a 280mm on the front and a 120mm on the rear, so naturally I wanted to make the most of both, knowing I'd be buying the Kraken G12 for my GPU. I looked into the supported coolers for the G12, and it doesn't mention the Kraken M22 (which is NZXT's only 120mm Kraken cooler), but it does say it works with all X series Kraken coolers, and on their website, under the Kraken X series, they have the M22 listed. So I figured that since the G12 came out before the M22 was released, they probably just didn't update the product compatibility page for the G12. No.. no, that wasn't the case at all. It's not compatible at all, not even close.
    Normally, on a decent gaming setup, you're going to overclock the hell out of the GPU, so running a 280mm rad on a 1080 Ti with the Kraken G12, with maybe an overclocked i5 or a stock-clocked i7 on a 120mm AIO, would suffice. But I have an i9 that hit ~90°C rendering videos on my old setup with a 280mm rad and three 120mm fans on it, so no, a 120mm cooler will not work for my application. And since this is a video editing workstation, the GPU is only there for 3D applications; the primary usage of this computer is going to be CPU intensive, so I need a fully functioning 280mm rad for the CPU. Which means I need to make that Kraken G12 and M22 work, whether they like it or not.

    Kraken G12 & M22 Learn To Like Each Other:
    I ended up sizing the M22 mounting bracket up against the G12 and realized you can actually just drill out four holes, perfectly centered in the Asetek mount hole, and mount the M22 bracket under the G12. This setup means you can run the M22 on the G12 brackets without the exterior G12 plate that holds the fan; you just need to cut down the standoffs on the G12 brackets that mount to the GPU. So I did that, drilled out the M22 bracket, sized it all up on the GPU.. and I should have taken apart the GPU first to make sure I was drilling out the correct holes, because I drilled out the AMD holes, and now the M22 bracket is scrap. Going through some old junk, I found a broken DVD player that was headed for recycling, ripped it apart, and salvaged the exterior panels as scrap sheet metal to make my own M22 mounting bracket, with the correct holes this time that line up with the G12 brackets. Bingo, bango, we're in business, the M22 now mounts to the GPU. However, the G12 bracket needed to be cut between the Asetek mounting hole and the fan hole to let the coolant lines come through, since a cooler is normally supposed to be mounted on top and my setup has it mounted below the G12. And since I was cutting it up anyway, why not utilize a little more of that old DVD player and make a 120mm fan bracket? So after another day of cutting, testing, fitting, mocking up, yelling, bleeding, metal slivers, more yelling, crying, admitting self-defeat, and pricing out new cases that fit a 280mm and 140mm AIO setup, I finally got it done, and it works fantastic. The GPU was hitting around 80-85°C at full load before, and now it hits around 60°C dead on at full load.
    Was it worth it? Absolutely, however, this is what leads us into the next issue, the lack of internal USB headers.

    Just.. One.. More:
    Now that I have the G12 modified and the M22 on a custom mounting plate, they're working in very perfect harmony, with a delightful 120mm fan mod to set off the perfection I didn't think I was going to be able to achieve. However, now there's a shortage of internal USB headers, because the Gigabyte Aorus Xtreme Z390, beast of a board that it is, only has two. The H510 Elite RGB/fan controller uses one, and I need that working because it controls the fan curve for the GPU now that I'm not using the proprietary fan headers on the GPU PCB itself. Next in line would be either the M22 or the X62, and since the GPU is the "show piece" of the case, I decided that one gets to light up. But it's not a hard fix; I just need to order the NZXT Internal USB Hub, and then I can plug in the X62 and I'll have all the lights working.

    Finally done, right?:
    So now the new computer is built, and I'm ready to say goodbye to Windows. I reset the BIOS to default settings to remove the possibility of any compatibility issues, disabled Fastboot so there are no hiberfiles locking my drives from use, and even unplugged all of my other drives to make sure they're completely removed from the install, to protect data and make sure nothing goes wrong. It's time, time to finally install Pop_OS! 19.10. So, I wiped the NVMe and did a fresh install, set it all up, all good right? ...no audio

    Audio Issues:
    I couldn't get anything out of the rear ports on the motherboard, my USB DAC didn't work, my USB headphones didn't work. I tried to figure this out for hours, hit up the Pop_OS! Reddit, no solution. I wiped the NVMe again and tried Pop_OS! LTS 18.04 this time, exact same issues. Since I really had nothing to lose at this point, I wiped that NVMe again and tried the Pop_OS! 20.04 Beta, and it half works: I have audio over USB, so my DAC and headphones work, and for whatever reason the front headphone jack on the case works, but the rear ones don't at all. I can use PulseAudio Volume Control to route the front headphone output to the rear jack, but that had issues with rebooting. On startup, the audio ends up stuck on "headphones" (good, what I wanted), but the volume control would only try to control "Analog Output", so I couldn't turn my volume up or down, even with headphones selected, until I manually selected a different output and then went back to headphones to fix it.
    With that not really being a very good solution, considering every time I turn the computer on I'd need to manually deselect my output source and reselect it just to have volume control, I bought a 3.5mm to 1/4" audio cable so I can use the 1/4" audio out on the Focusrite Scarlett 2i2 DAC as my default audio output to the Logitech speakers. So now it's USB audio out to the DAC, and from the DAC's 1/4" output through the 3.5mm end into the Logitech speakers. It's not graceful, but it works, so I found a workaround for that issue.
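    If anyone would rather script the re-selection than click through PulseAudio Volume Control after every boot, something like this in a login script should do it. This is a sketch: the sink and port names below are placeholders, so list your real ones with "pactl list sinks" and substitute.

```shell
#!/bin/sh
# Re-apply audio routing at login so volume control works immediately.
# The sink/port names are EXAMPLES -- find yours with "pactl list sinks".
SINK=alsa_output.pci-0000_00_1f.3.analog-stereo

pactl set-default-sink "$SINK"                       # make it the default
pactl set-sink-port "$SINK" analog-output-headphones # pin the headphone port
```

    Drop it in a startup application entry and the stuck "Analog Output" selection should get corrected before you ever touch the volume keys.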
     
    Davinci Resolve Issues:
    Now, the whole reason I was fully switching was the Linux support from Blackmagic with Davinci Resolve. There wasn't a lot of information about this, and I had already spent days diagnosing the audio issues before I found out it's designed for CentOS. I was able to piece together enough info from a few guides online and found a tool by Daniel Tufvesson called MakeResolveDeb, which takes the Blackmagic-provided .run file and turns it into an easily installable .deb file for Debian-based Linux distros. It worked on the first try. It crashed opening the welcome screen on first boot, but I found a few things online saying this is entirely normal: just wait for it to crash, click "force close", and Davinci Resolve 16 will just start opening, and since that welcome screen only comes up on first boot, you never have that issue again. So far I've knocked out a few videos and everything is working; my rendering time has also been CRAZY fast, but we'll get into that further in the NTFS issues. The only downfall I was able to find with Davinci Resolve for Linux is that the free version doesn't support h.26x formats, and since my entire workflow has always been .mp4, it doesn't read any of my videos for editing. But this brings us to our next part.
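    For anyone else going this route, the conversion is only a couple of commands. The file names below are hypothetical; match them to the MakeResolveDeb script and Resolve .run file you actually downloaded.

```shell
#!/bin/sh
# Hypothetical file names -- substitute the versions you downloaded.
RUN=DaVinci_Resolve_16.2_Linux.run

chmod +x makeresolvedeb_*.sh
./makeresolvedeb_*.sh "$RUN"        # builds a .deb next to the .run file
sudo dpkg -i davinci-resolve_*.deb  # install it like any other package
```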
     
    MP4/MOV Containers and h.26x Codec:
    Turns out the free version of Davinci Resolve on Linux doesn't like the h.26x codecs, so naturally I tried to see what other formats I can shoot in. My Genuine Panaphonic GH4 does have the ability to shoot in .MOV; however, it turns out .MOV can also contain h.264, which isn't supported. So even though some test shots in .MOV were in a supported container, they only imported as an audio file, because the video codec was still h.264. So my entire workflow just remains .MP4/h.264.
    Now, I figured this was going to be a deal breaker if I couldn't find a very simple way to encode footage into a different format, fast and in bulk. I later found out Davinci Resolve also doesn't support AAC either, but I couldn't find much information on whether it doesn't support AAC entirely or only in the free version. I could buy the Studio version, which supports the h.26x codecs so I could keep using .MP4 containers, but I couldn't find much saying whether AAC is supported in Studio either. For now, I found a fairly quick solution to continue using the free version without further issues.
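    A quick way to see what a clip actually contains (the container extension alone doesn't tell you) is ffprobe, which ships with FFmpeg. The file name here is a placeholder:

```shell
#!/bin/sh
# Print each stream's type and codec -- "clip.MOV" is a placeholder name.
# A .MOV container can still hold h264 video and aac audio, which is
# exactly the combination the free Linux build of Resolve rejects.
ffprobe -v error -show_entries stream=codec_type,codec_name \
        -of default=noprint_wrappers=1 clip.MOV
```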

    FFmpeg:
    FFmpeg is built into Pop_OS! by default, so I can just open a terminal, change directory to the folder where the .MP4 files are located, and enter a command which encodes every .MP4 file in that directory to a Davinci Resolve compatible .MOV file. Bingo bango, all my footage is now compatible, and I didn't have to do anything more than a couple of clicks. Super simple procedure, and I'm working on video tutorials for all of these solutions that I'll link under their designated headings as I get the videos finished, to help out anyone else facing similar issues.
    Now that I've figured out a super simple way to encode all of my footage from .MP4 to .MOV, that's a pretty nice transition into the next issue.
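    The command itself is just a loop. Here's a sketch of the kind of thing I mean; DNxHR video with PCM audio is one combination the free Linux Resolve accepts, and the dnxhr_hq profile is one reasonable choice, not the only one. Adjust the extension match to your camera's naming.

```shell
#!/bin/sh
# Transcode every .MP4 in the current directory into a Resolve-friendly
# .MOV: DNxHR video + PCM audio. Output keeps the original file name.
for f in *.MP4; do
  [ -e "$f" ] || continue      # no matches? skip the loop entirely
  out="${f%.MP4}.mov"          # same name, new extension
  ffmpeg -i "$f" \
    -c:v dnxhd -profile:v dnxhr_hq -pix_fmt yuv422p \
    -c:a pcm_s16le \
    "$out"
done
```

    DNxHR files are much larger than h.264, but they also scrub and render faster in the editor, which lines up with the rendering times further down.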
     
    NTFS Locked to Read-Only Access:
    Now, the huge response I got from a lot of people on Reddit was "you need ntfs-3g", and yes, they're absolutely correct, I do need ntfs-3g, which would probably solve this issue... if it wasn't already installed out of the box with Pop_OS!. I even tried purging and reinstalling it, to no avail, just to see if maybe it didn't install correctly, but the issue persists. I tried adjusting permissions, different mounting options, disabling automount, mounting manually, mounting at root; doesn't matter, the internal NTFS drives are always read-only. I tried an external NTFS drive: works fine, read and write access all day long. I also remembered that Linux kernel 5.4 now has full support for exFAT built in (new since 5.3; before that you had to manually install some packages to enable exFAT support). So, being on kernel 5.4, I tried formatting my games SSD to exFAT, since I don't need any of that data on Linux anymore, and yeah, it works fine now that it's exFAT. Formatted back to NTFS just to see: read-only again. So my plan now: buy another 8TB drive, install it, format it to exFAT, and go one drive at a time, copying everything from one NTFS drive to the exFAT drive; once I have an exFAT clone, format that NTFS drive to exFAT, and repeat until all drives are exFAT. It sucks, but it'll work. And seeing as how this H510 Elite can fit three 3.5" drives, and I only had two because that's all the Corsair Spec-OMEGA could fit, why not throw in a third drive and really utilize the full potential of this H510 Elite.
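    For what it's worth, a volume that's stuck read-only under ntfs-3g is often one that Windows left "dirty" via Fast Startup or hibernation, even with Fastboot disabled in the BIOS. If anyone hits this before reformatting everything, these are worth a try; /dev/sda1 and /mnt/data are placeholders, check your devices with lsblk first.

```shell
#!/bin/sh
# /dev/sda1 is a PLACEHOLDER -- identify your partition with lsblk first.
sudo ntfsfix /dev/sda1        # clear the NTFS "dirty" flag left by Windows

# If a Windows hibernation file is the culprit, ntfs-3g can discard it
# and mount read-write:
sudo mount -t ntfs-3g -o rw,remove_hiberfile /dev/sda1 /mnt/data
```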
    This means I'll now have the 512GB NVMe as my boot drive, the 960GB Corsair SSD for games, the 1TB Sandisk SSD for Youtube videos to edit off of at higher speeds, one 8TB drive for my allocated Windows data (pictures, videos, music, downloads, documents), and the second 8TB drive as an archive for files I don't use anymore but might need one day; mostly unused or unedited Youtube footage, abandoned projects, things of that nature.
    Now, with this third 8TB drive I bought, I'll have five SATA drives: two SSDs in ports 1 and 2, two HDDs in ports 3 and 4, and the new drive going into port 5 of 6 total.. which is where the next issue comes into play.
     
    NVMe, The Robin Hood Of SATA:
    Some of you might already know where this is going, but it turns out that if you use an NVMe drive in the first or second M.2 port on the Gigabyte Aorus Xtreme Z390, it disables SATA ports 5 and 6 to allocate those lanes. So now I've just bought an HDD I can't hook up. But what about the third M.2 port? Well, that one cuts the x4 PCIe slot to half speed, and I want to get a capture card that will be using that slot. I don't really want to try capturing at half bandwidth unless it's really worth my effort (which we'll get to later), especially if I end up doing something with higher resolution down the road and need the bandwidth for a capture card; the only way I'm giving up that functionality is for ridiculous storage speeds.
    So, what's my solution? Well, according to my Amazon orders, I have three 1TB Adata XPG Gammix S11s coming. I'm going to fill the three M.2 ports with three 1TB NVMe drives, remove the SSDs entirely, and run the three 8TB HDDs on SATA 1/2/3, with port 4 available if I change cases to something that fits four HDDs in 2021 for my next build upgrade.
    Now, some of you may be wondering, why did I buy the Gammix S11s instead of the standard SX8200? They're the same NVMe, the Gammix just has a heat spreader, and the Gigabyte Aorus Xtreme Z390 has M.2 heat spreaders built in, so why pay more for the Gammix when the SX8200s are cheaper and I can't physically see the drives anyway?
    Yeah, I'm right with you on that thought process, I was thinking the same thing. I was actually going to buy three SX8200s off of Newegg, but when I was about to order, they only had two and I need three. So I went on Amazon, and it turns out they had the Gammix S11s on sale for $190 CAD with free shipping, so I saved $10 per drive by buying Gammix S11s instead of the standard SX8200s.
    Once these come in, I'll be installing all three and running a RAID5 array as my boot drive. With three drives, reads get striped like a RAID0 across the data portion of the array, so it should give something up to 7000MBps read (writes will be lower because of the parity calculation) while still tolerating a single drive failure, similar to what RAID1's mirroring gives you. This also allows me to hook up that third 8TB drive and fulfill my previous plan of copying everything to the exFAT drive, formatting the NTFS drive to exFAT, and repeating for the last drive. The alternative for the latter is paying for some cloud storage to upload around 15TB of data and then creating another RAID5 from the three 8TB drives; that would be the ideal, cleaner setup software-wise, with faster storage speeds, and in Linux I'd just allocate the directories to the HDD RAID5 to save space on the NVMe RAID5.
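    For reference, the software-RAID half of that plan is a handful of mdadm commands. This is only a sketch with example device names (confirm yours with lsblk), and it glosses over the bootloader/EFI side of putting the OS itself on the array; the run() wrapper just prints each command while DRY_RUN=1 so you can sanity-check before letting it loose on real disks.

```shell
#!/bin/sh
# Sketch of building two RAID5 arrays with mdadm. Device names are
# EXAMPLES -- confirm yours with lsblk before running anything for real.
DRY_RUN=1
run() {
  if [ "$DRY_RUN" = 1 ]; then echo "$*"; else sudo "$@"; fi
}

# Three NVMe partitions -> /dev/md0
run mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1

# Three 8TB HDD partitions -> /dev/md1 (bulk storage)
run mdadm --create /dev/md1 --level=5 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1

# Format the bulk array and persist both so they assemble at boot
run mkfs.ext4 /dev/md1
run sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
run update-initramfs -u
```

    Set DRY_RUN=0 only once the printed commands match the drives you actually intend to wipe.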

    Raid? Are you sure?:
    This, to put it simply, is the simplest route now. 3x 1TB NVMes in RAID5 will give me ~2TB of storage with parity for recovery if a drive failure occurs, and with the HDDs I'll have ~16TB with parity. This way I should be able to just install Pop_OS! on the ~2TB NVMe RAID5 array, which essentially combines my 512GB NVMe, 960GB SSD, and 1TB SSD, so I'm losing roughly 500GB of capacity with this setup. But considering my benchmark speeds on the SSDs are around 500MBps, versus the insanely improved speeds of NVMe in RAID5, I'm completely content giving up around 500GB for something up to 14x the read performance. And consider my rendering times: on average, a 10 minute 1080p video takes somewhere in the neighborhood of 8-ish minutes, encoding from the 1TB Sandisk X400 which averages around 500MBps. When I edited my first video in Davinci Resolve on Linux (keeping in mind I only had the 512GB NVMe set up, so the video was edited off NVMe, and in a larger .MOV container), a 14 minute video (bad for comparison, I know) rendered in 2 minutes and 42 seconds, and that's just one NVMe. With the new setup, I should be seeing something in the neighborhood of a minute to render a 10 minute 1080p video, which is why this new NVMe RAID5 array is going to be a dream to work with. As for the HDDs, there's no real reason I need RAID5; I just figured it would be a much easier setup to allocate the other directories to it if it's just "one drive" as far as Linux is concerned. Hopefully I can do what I did with Windows: install the OS on the NVMe (here, the NVMe RAID5 array), then allocate the default libraries to the HDD, so downloads, pictures, music, videos, and documents all go to the HDD by default. If I download something from Firefox, it goes to the stock download location, which in Windows was redirected to the Downloads folder on the HDD to save space on the NVMe for programs, and I'm hoping to achieve the same with Pop_OS! as well. I just need to figure out how; I've never dailied Linux on a multi-drive setup, so I still need to work this out. But if it works like it did in Windows, which I'm assuming it will, this is going to end up being the cleanest, most organized data storage setup I've done to date.
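    As a starting point, the usual mechanism for this on Linux desktops is xdg-user-dirs: the default library locations live in ~/.config/user-dirs.dirs and can be pointed anywhere. The /mnt/data mount point below is an example, not a real path on my system.

```shell
# ~/.config/user-dirs.dirs -- point the default libraries at the HDD array.
# "/mnt/data" is an EXAMPLE mount point; use wherever the array is mounted.
XDG_DOWNLOAD_DIR="/mnt/data/Downloads"
XDG_PICTURES_DIR="/mnt/data/Pictures"
XDG_MUSIC_DIR="/mnt/data/Music"
XDG_VIDEOS_DIR="/mnt/data/Videos"
XDG_DOCUMENTS_DIR="/mnt/data/Documents"
```

    Most desktop apps, Firefox's stock download location included, pick these up after the next login, which should reproduce the Windows-style split between the fast boot array and the bulk storage.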

    NTFS, exFAT, or Ext4:
    My initial plan was to keep NTFS in case I ever needed to use Windows again. But after my testing (an external NTFS drive works, while internal NTFS does not for some reason), I formatted my "games" SSD back to NTFS just to see if this was a Windows compatibility issue locking the privileges on my drives. It was not: the freshly formatted NTFS drive still comes up read-only in Pop_OS! 20.04, so this must be a bug with the beta. However, seeing as how I have no patience and want this resolved, I was just going to format all my drives to exFAT, despite the fact that exFAT has poor recovery if something goes wrong, while NTFS requires defragging to recover lost space over time; they both have their downfalls, so I figured it was pretty well a tossup. Then I looked more into Ext4: it has the stability of NTFS with the cleanliness of exFAT, since it doesn't require defragging all the time. It may not be readable from Windows if I ever need that again, but so far it's a win-win for Ext4. So now it's just a matter of: wait until 20.04 is released to see if NTFS works then, or just jump on Ext4 for all of my drives?
     
    NTFS vs Ext4:
    So we've already established that Ext4 has a cleaner table and doesn't require frequent defragmenting as opposed to NTFS, while still sharing NTFS's recovery ability if corruption does take place, so it's already looking like Ext4 is the winner. Well, I wanted to make sure this choice was clear, so I formatted the 960GB Corsair Force LE, created a single partition of the entire drive capacity formatted as NTFS, and ran three benchmarks one after another; the averaged results are as follows.
    NTFS: Read - 536.5MBps, Write - 437.1MBps, Latency - 0.20ms
    I then formatted the drive again, created a new partition of the entire capacity in Ext4, and the average of all three runs is as follows.
    Ext4: Read - 555.6MBps, Write - 439.5MBps, Latency - 0.03ms
    This works out to Ext4 having a 3.5% improvement in read speed, a 0.5% improvement in write speed, and latency dropping to 15% of the NTFS figure (an 85% improvement). So not only is Ext4 a win-win in terms of stability and cleanliness, it's also faster across all tests. If you were unsure of whether or not you wanted to switch to Linux, Ext4 is a pretty damn solid reason, because both of my RAID5 arrays are going to be Ext4 after these tests.
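    If anyone wants to reproduce these numbers on their own drives, fio from the terminal gives comparable sequential figures to the GUI disk benchmarks. A rough sketch; run it from a directory on the drive under test, and note it creates a 1 GiB scratch file.

```shell
#!/bin/sh
# Rough sequential benchmark with fio. Run from the drive under test;
# creates (and removes) a 1 GiB scratch file named fio.tmp.
fio --name=seqread  --rw=read  --bs=1M --size=1g \
    --filename=fio.tmp --direct=1
fio --name=seqwrite --rw=write --bs=1M --size=1g \
    --filename=fio.tmp --direct=1
rm -f fio.tmp
```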

    ...
     
    Post Under Construction
  14. Like
    bANONYMOUS got a reaction from Sabbathian in Mother Russia Concept ;)   
    Oh ..my god
    I need to see this become a reality, it's really for the greater good of the universe that this becomes a real thing
  15. Like
    bANONYMOUS reacted to Sabbathian in Mother Russia Concept ;)   
    I got my hands on some extra parts, so decided to design something a bit different around them.
     
    All the 3D designing is over and detailed, now I am just waiting for some parts to arrive and it's building time!
     
    Answers:
     
    - its a terminal style PC with integrated 17" FullHD laptop Screen
    - switches control displays and fans
    - displays show CPU and case temperatures
    - cables connect integrated speakers and display with the graphics card (that way you can connect external ones - made just for looks)
    - it will run cool-retro-term terminal emulator just for the looks
    - it should be done in... a month? Depends on when all the parts will arrive...
     



  16. Informative
    bANONYMOUS reacted to Slayerking92 in Corsair iCue Software Died Again!!   
    My MB only has 2 USB2 headers,  but my HX850 and H115i both used those connectors as well, so I needed the extra headers for the commander pro and LNP.
     
    it looks more like a conflict of software. You could try disabling everything from startup, then start iCUE, then start the others one at a time and see if anything triggers it.
  17. Informative
    bANONYMOUS reacted to Slayerking92 in Corsair iCue Software Died Again!!   
    I've only had issues if any other RGB software was installed. The Asus software would give me the same issue with the RAM.
    It doesn't look like it's even detecting the USB connections other than the cooler.

    How do you have your Commander Pro or Lighting Node Pro connected? I assume you needed a USB2 hub as well?
    (I had just used a nzxt one https://www.nzxt.com/products/internal-usb-hub )
     
    Some good wiring diagrams can be found here:
    https://forum.corsair.com/v3/showthread.php?t=173880
     
    I just noticed from that post about your case:
    Spec Omega RGB
    Supplied Devices: 1 x Spec Omega edition Lighting Node Pro. This differs from a normal Lighting Node Pro in that its LED channel 1 is set to power the front chassis lighting's 30 LEDs; it appears in iCUE as the Spec Omega rather than a normal Lighting Node Pro.
    2 x HD 120 RGB Fans, 1 x RGB Fan LED Hub
     
  18. Funny
    bANONYMOUS reacted to LienusLateTips in My new channel   
    JayzTwoWit's Tech Hardware Nexus
  19. Like
    bANONYMOUS got a reaction from LaserLion in Weird Flex but AorusOMEGA   
    Video is finished if anyone cares for some chill music and eye candy
     
    EDIT:
    Reposted video in first post for lurkers who don't want to scroll
  20. Like
    bANONYMOUS got a reaction from LaserLion in Weird Flex but AorusOMEGA   
    Took some pictures with that sweet sweet f1.4 lens for all the bokeh
     
    Here's all of the trophies stacked up


    Eye candy as follows
     
    Sleep mode:

     

     

     

     
    Running:

     

     

     

     

  21. Like
    bANONYMOUS got a reaction from LaserLion in Weird Flex but AorusOMEGA   
    Decided to kick off 2019 with a new money pit because reasons.
     
    I don't have many pictures of it but I'm planning on shooting a video of it tomorrow (which I'll post here when finished) so I'll snag some pictures during the shoot.
     
    Parts List:
    Corsair Spec-OMEGA RGB
    Intel i9-9900k
    Gigabyte Aorus Z390 Xtreme
    Corsair H115i RGB Platinum
    32GB (4x8GB) Corsair Vengeance RGB Pro
    4x Corsair HD120 RGB Fans
    MSI 8GB GTX 1070 Aero (my old GPU from the last computer, still waiting for the 1080 TI's to come in)
    Corsair HX1200 80+ Platinum
    Adata XPG 256GB NVMe M.2
    Corsair Force LE 960GB SSD
    Western Digital Enterprise 4TB HDD
     

     

     

  22. Like
    bANONYMOUS reacted to Bonzay0 in Decided to go nuts - Lian-Li PC-O11DB   
    Ok finally done!
     

    Not my best cable management; I'll probably do better when the new cables arrive.
     

    At least it looks decent on the front.
     

    Those damn bulky cables!!
     

    Final setup! I moved my 24" monitor to a table stand on the left because the PC needed to go on the right.
     
    Here some close-ups:




     
    Now I just need to remember to turn off the lights and take some proper pictures.
     
    Took me like 3 hours to set up windows and I'm not even done!!
     
    After I'm done I'll get Aida, Cinebench and all those goodies and start overclocking this beast.
     
    Anyone used or using the temperature wires and have suggestions for where I should put them? (If at all)
     
    Edit:
    I just noticed I forgot to cable manage my desk cables!!! Shiiii will have to fix that tonight.
  23. Like
    bANONYMOUS reacted to Bonzay0 in Decided to go nuts - Lian-Li PC-O11DB   
    So I decided I would build my most insane PC yet. Something that is in between: "Holy shit my wallet has holes in it!" and "Maybe I should custom loop this... with gold".
     
    My build will have the following:
    * Lian-Li PC-O11 Dynamic Black
    * Core i7 9700K
    * T-Force Night Hawk RGB Black 32GB RAM 3200MHz CL16
    * Gigabyte Z390 Aorus Xtreme
    * EVGA RTX 2080 XC Ultra (couldn't get my hands on a 2080ti...)
    * WD Black M.2 NVMe SSD 1TB 
    * WD Black 3TB HDD
     
    (There is also an Intel optane 32gb but it got delayed so I'm not sure if it will be in this build)
     
    For cooling I wanted a custom loop but I have 0 experience with that, and those parts are super expensive.
     
    So I went with:
    * Corsair H115i Pro RGB
    * 6x Corsair ML120 - for the case
    * 2x Corsair ML140 - to replace the rad fans with some better looking fans.
     
    All the parts should arrive tomorrow, for now I have on me the cooler and fans.
     
    Will keep the topic updated with pics and other info as I progress through the build.
     
    (This is my second time building a PC from scratch if not counting all my previous upgrades I did for myself and family)

  24. Agree
    bANONYMOUS reacted to Fasauceome in HP Spectre X360 Cooling Mod with Results   
    Less heat = less resistance = more efficiency
  25. Like
    bANONYMOUS got a reaction from Cyberspirit in Microsoft Store Default Download Location?   
    Darn, you were just not quick enough on that one. I already wiped the computer, did another clean install, let Windows finish installing all of the bloat and Windows updates, then drivers, got everything ready, and then tried downloading again from the Microsoft Store, and this time it asked where I wanted to save the game. So it was 100% an issue with the bloat preinstalling while I was installing drivers; something glitched out when a driver said I needed to restart.
     
    So, it's fixed now; the solution was a clean install and waiting for all of the Microsoft garbage to finish before doing anything.

    Thanks for the help, there's a lot of useful info in here now in case anyone else experiences this issue