
Windows7ge

Member
  • Content Count

    11,420
  • Joined

  • Last visited

Awards

About Windows7ge

  • Title
    Writer of Guides & Tutorials
  • Birthday Sep 08, 1994

Profile Information

  • Gender
    Male
  • Interests
    Computer networking
    Server construction/management
    Virtualization
    Writing guides & tutorials

System

  • CPU
    AMD Threadripper 1950X
  • Motherboard
    ASUS PRIME X399-A
  • RAM
    64GB G.Skill Ripjaws4 2400MHz
  • GPU
    2x Sapphire Tri-X R9 290X CFX
  • Case
    LIAN LI PC-T60B (modified - just a little bit)
  • Storage
    Samsung 960PRO 1TB & 13.96TiB SSD Server w/ 20Gbit Fiber Optic NIC
  • PSU
    Corsair AX1200i
  • Cooling
    XSPC waterblocks on CPU & both GPUs, 2x 480mm radiators with Noctua NF-F12 fans
  • Operating System
    Ubuntu 20.04.1 LTS

Recent Profile Visitors

35,726 profile views
  1. It's looking like I'm going to have to abandon the WOL function, at least for now. Working with various network controllers on different motherboards, not all of them want to respond to the Magic Packet that etherwake sends: Ubuntu claims WOL is enabled and working on all three systems, yet only one responds. I investigated the MNPA19-XTR to see if I could WOL over the fiber network, but that's not looking promising either. One thing that clearly is supported on all three systems, however, is Wake on RTC (Real Time Clock).
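For context, the Magic Packet etherwake sends has a simple, well-documented layout: 6 bytes of 0xFF followed by the target MAC repeated 16 times, 102 bytes in total. A minimal sketch that builds the payload in hex and checks its size (the MAC address here is hypothetical):

```shell
# Build a Wake-on-LAN magic packet payload in hex.
# Layout: 6 bytes of 0xFF (sync stream) + target MAC repeated 16 times.
mac="aa:bb:cc:dd:ee:ff"            # hypothetical target MAC
machex=$(echo "$mac" | tr -d ':')  # strip separators -> 12 hex chars
payload="ffffffffffff"             # sync stream: 6 x 0xFF
i=0
while [ "$i" -lt 16 ]; do
  payload="$payload$machex"
  i=$((i+1))
done
# 12 + 16*12 = 204 hex chars, i.e. 102 bytes on the wire
echo "${#payload}"
```

When a NIC ignores the packet no matter what, `sudo ethtool <iface>` is worth checking: magic-packet wake only works when "Wake-on" reports `g` and the firmware agrees.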
  2. A little bit of extended testing shows very good stability over the course of a period of days. Right now the 2x2670v1 server has been up for over 4.5 days, running BOINC in 12hr cycles at 87.5% CPU utilization, and hasn't faulted with any errors. In addition, the FreeBSD console hasn't reported any iSCSI errors, which helps reassure me that the connection is stable over a period of days. Even longer testing is still going to be needed, but things are looking up. As far as WOL (Wake-on-LAN) goes, I've already looked up some tutorials that mention two Debian pac
  3. Took quite a while longer than I wanted it to but I got it and it looks like you, me, & @Gorgon are the only three who have it on the LinusTechTips_Team right now. :3

     

    [Screenshot from 2021-09-14]

     

    I think I'm gonna crack 300,000 results returned then show some other projects a little love again. I made my contribution to COVID-19. :old-grin:

    2. Windows7ge

      Windows7ge

      10 hours ago, Gorgon said:

      Congratulations! I wouldn’t have beat you to it without using GPUs. Kudos for doing it on just CPUs.

      I still would have been in the running had summer related circumstances not forced me to dramatically reduce my ability to contribute. Not much longer now. I'll be going full brrr again.

       

      4 hours ago, TopHatProductions115 said:

      Should I try to hop on next with the Titan Xp? Or is it too late?

      It's never too late...especially with how COVID keeps mutating. 🙄 

    3. Gorgon

      Gorgon

      4 hours ago, Windows7ge said:

      I still would have been in the running had summer related circumstances not forced me to dramatically reduce my ability to contribute. Not much longer now. I'll be going full brrr again.

       

      It's never too late...especially with how COVID keeps mutating. 🙄 

      Looking forward to seeing some Epyc results 😆

       

      @TopHatProductions115 I don't know if you'll get too much GPU work for a Titan Xp. The GPU WUs are few and far between. I've been running it anyway, but I have Einstein@Home set up on the systems as well to keep the GPUs busy in between OpenPandemics WUs. The nice side of using a GPU for BOINC is that the GPUs typically run at a much lower power level than Folding@home, which has been nice over the summer months. I still managed to kill the 35-year-old compressor in my A/C, but thankfully that was just last week, so it's not an urgent repair and I can take a couple of months to figure out whether it's time to replace the 25-year-old mid-efficiency furnace at the same time.

    4. Windows7ge

      Windows7ge

      3 hours ago, Gorgon said:

      Looking forward to seeing some Epyc results 😆

      Speaking of Epyc. The ASRock Rack EPYCD8 motherboard I was using during the Pentathlon gave up the ghost. Like, wth!? It ran fine for the Pent, hung out turned off for a long while after, then I went to put it in my primary SSD storage server and...bupkis...a POST code error I couldn't resolve. Swapped it for the Supermicro H11SSL-i (the one with the chipped corner) and it booted just fine...

       

      Have to get on ASRock Rack about that but in the meantime that CPU is currently in one of my production servers.

       

      DSCF0142.JPG

       

      So many PCIe lanes. 🤤 The motherboard supports the v2 series as well, so there's an upgrade path if the 64C/128T SKUs ever reach the realm of affordable.

  4. Back when I first validated this setup 9 months ago, the iSCSI performance was beyond abysmal, and I don't know what the culprit was. Nice to see I didn't have to troubleshoot it 9 months later. Something I need to test is whether giving the FreeBSD VM more CPU & RAM does anything. Right now it's only got 8 cores & 8GB of RAM, which sits at a constant 79~80% utilization. I wonder if there's some sort of RAM caching going on here...might be able to increase the multi-client performance.
  5. Many trials have been passed in the last couple of days, but we finally have three working machines (with some catches) booting off the network, with quite a bit of knowledge learned along the way. First and foremost: the process. In a previous post I outlined the two commands required to discover and log in to iSCSI volumes:

     sudo iscsiadm -m discovery -t sendtargets -p 10.1.0.2
     sudo iscsiadm --mode node --targetname iqn.2021-9.boinc.com:lun1 --portal 10.1.0.2 --login

     What you need to do is open a shell during the installation process of your preferred Debian distro and run these
  6. Oh shit, oh jeez! It's working lolol IT'S WORKING!!! XD This is not a directly connected disk (nor a physical one - but it can be if you want):

     boinc@boinc-node-4:~$ sudo fdisk -l /dev/sda
     Disk /dev/sda: 128GiB, 137438953472 bytes, 268435456 sectors
     Disk model: CTLDISK
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 524288 bytes
     Disklabel type: gpt
     Disk identifier: C0ADC91C-19B9-460F-B2AD-255EB8BB2561

     Device     Start   End  Sectors  Size Type
     /dev/sda1   2048  4095
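The fdisk numbers reported for the iSCSI-backed disk are internally consistent; a quick sanity check of the geometry:

```shell
# Verify: 268435456 sectors x 512 bytes/sector = 137438953472 bytes = 128 GiB
sectors=268435456
bytes=$((sectors * 512))
gib=$((bytes / 1024 / 1024 / 1024))
echo "$bytes bytes = $gib GiB"   # 137438953472 bytes = 128 GiB
```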
  7. The Ubiquiti USW-Pro-Aggregation

     

    [Image: Ubiquiti USW-Pro-Aggregation]

     

    Twenty-eight 10Gig SFP+ ports.

    Four 25Gig SFP28 ports.

     

    Oh the things I could do with 100Gig uplink. I want one. 🤤

     

    Spoiler

    Would anybody like to be my Secret Santa this year? 😆

     

    2. Schnoz

      Schnoz

      13 minutes ago, Windows7ge said:

      ...possibly? Leadeater can pitch in and correct me here but "cheap" switches like this are usually entirely ASIC based. Meaning it has no central processor (typically reserved for switches that will be deployed with the intention of routing traffic between networks - a high computational load.)

       

      The ASIC implementation is usually that black plastic-dipped package arrangement, but I have seen that in some high-end switches they use a BGA chip with a heat spreader & heatsink that you could in theory delid.

       

      If I ever get one I'll take pictures of the inside, assuming there's no Warranty Void If Removed sticker. 😉

      I hope there are things in there to delid! I think many network switches could actually significantly benefit from delidding and/or repasting, since it always seems like those small fans have to match a turbojet engine in terms of RPM and noise to cool them.

       

      Also, there's not much to worry about with "Warranty Void If Removed" stickers. For US residents at least, those stickers are illegal under the Magnuson-Moss Warranty Act. Though I would imagine that at least some vendors would still try to enforce them.

    3. Drama Lama

      Drama Lama

      14 hours ago, Windows7ge said:

      Would anybody like to be my Secret Santa this year? 😆

      Only if you’re somehow able to shift the comma on my income a few digits to the right.

    4. Windows7ge

      Windows7ge

      13 hours ago, Schnoz said:

      I hope there are things in there to delid! I think many network switches could actually significantly benefit from delidding and/or repasting, since it always seems like those small fans have to match a turbojet engine in terms of RPM and noise to cool them.

      LTT water-cooled a 10gig switch a while back. I can definitely say the transceivers run quite hot even when they're not actively transmitting user data. I'd be curious to see how much hotter SFP28s idle.

  8. One step forward. Only one though. Turns out one of the issues was staring me right in the face the whole time.

     Note: requested target "InitiatorName=iqn.2021-9.boinc.com:lun2" not found

     This error was caused by a mis-written grub file (my mistake):

     GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ip=dhcp ISCSI_INITIATOR=InitiatorName=iqn.2021-9.boinc.com:lun3 ISCSI_TARGET_NAME=InitiatorName=iqn.2021-9.boinc.com:lun3 ISCSI_TARGET_IP=10.3.0.2 ISCSI_TARGET_PORT=3260"

     Should have been written as:

     GRUB_CMDLINE_LINUX_DEFAULT="quiet splash
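The post is cut off before the full correction, but the error message suggests the literal `InitiatorName=` prefix was the problem. A hedged sketch of what the corrected line would presumably look like (an assumption, not the author's confirmed fix):

```
# Assumed correction: pass bare IQNs, without the literal "InitiatorName=" prefix
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ip=dhcp ISCSI_INITIATOR=iqn.2021-9.boinc.com:lun3 ISCSI_TARGET_NAME=iqn.2021-9.boinc.com:lun3 ISCSI_TARGET_IP=10.3.0.2 ISCSI_TARGET_PORT=3260"
```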
  9. My eyes were heavy while I was typing that so definitely yes. Still tinkering with it as we speak. I re-wrote the FreeBSD iSCSI file as such:

     portal-group pg0 {
         discovery-auth-group no-authentication
         listen 10.1.0.2
     }
     portal-group pg1 {
         discovery-auth-group no-authentication
         listen 10.2.0.2
     }
     portal-group pg2 {
         discovery-auth-group no-authentication
         listen 10.3.0.2
     }
     portal-group pg3 {
         discovery-auth-group no-authentication
         listen 10.4.0.2
     }

     target iqn.2021-9.boinc.com:lun1 {
         auth-group no-authenticat
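The target block above is cut off; for context, a complete ctld target stanza in /etc/ctl.conf generally pairs an auth-group and portal-group with one or more LUN definitions. A sketch under the assumption of a zvol-backed LUN (the zvol path is hypothetical):

```
target iqn.2021-9.boinc.com:lun1 {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /dev/zvol/tank/lun1    # hypothetical zvol backing the LUN
    }
}
```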
  10. *sigh* The taste of defeat...for now... I've been at this all day and I'm close, but it's getting late and I'm tired. I don't want to reveal what I'm doing to make it work if it doesn't work. There are still several variables I need to go over, both software- and hardware-wise, including some more Googling. This procedure isn't as simple as slapping a MNPA19-XTR in your rig and you're off to the races; no, I see different responses from different system hardware. I need more time. I think I'm going to go back to my roots and see if it's the latest version of Ubuntu
  11. Welcome to FreeBSD! A UNIX-like operating system. Compared to Debian-based solutions, the first annoying thing you learn about FreeBSD is that the package manager is not installed by default:

      root@boinc-iscsi:~ # pkg install nano
      The package management tool is not yet installed on your system.
      Do you want to fetch and install it now? [y/N]:

      If you're used to Debian, sudo also doesn't exist by default:

      iscsi@boinc-iscsi:~ $ sudo pkg install nano
      -sh: sudo: not found
      iscsi@boinc-iscsi:~ $

      To fix this, as root, after installing the package manager we first
  12. Hi @Windows7ge, thanks so much for helping me out in the past; I really appreciate it! 

     

    I have another question--I took a look at the sector map of my D drive, a 5TB WD5001F9YZ, and I noticed that the data is arranged in two large blocks:

     

    Spoiler

    [Image: sector map of the drive]

     

    I frequently edit off the drive and use other drive-intensive applications that live on it, and I want to move all the files to the beginning of the drive, where the higher velocity of the platters' outer diameter relative to the heads will increase performance. However, neither Windows 10's built-in defragmentation utility nor UltraDefrag 7.1.4 seems to do this, and Defraggler just crashes when trying to build a file list.

     

    Do you happen to know a way where I could move the files to the beginning of the disk? If you could consider helping me with this, that would be greatly appreciated. Thank you so much!

    1. Windows7ge

      Windows7ge

      Immediately I can say I don't know an easy solution, unfortunately. It's not something I've paid much attention to or explored.

       

      The only solution I can offer (and you're not going to like it) is to move all the data somewhere else, hard wipe the drive, then move all the data back. It should write it all in series starting from the outside of the platter.

       

      I have to assume you deleted a lot of data at one point, given the giant block of free space. Usually you see something like this when you have (or had) multiple partitions on a single drive. It physically segregates the data on the platter; how far apart depends on the size of each partition.

    2. Schnoz

      Schnoz

      21 hours ago, Windows7ge said:

       

      Thanks for letting me know. When I upgraded my HDD from a 3 TB to a 5 TB WD Se, I did have to mess around with whole-drive backups, so that could be the reason as to why the data is arranged like that. I'll make sure to do what you suggested when I have time. Thanks again!

    3. Schnoz

      Schnoz

      On 8/31/2021 at 4:58 PM, Windows7ge said:

       

      All right, I copied the drive's data onto a spare 4 TB WD Black I had laying around, formatted the drive as NTFS with the largest cluster size available, and copied over the backed-up data. The drive now has all its data at the outer diameter; thanks so much!

      [Image: sector map after the restore]

  13. This update is going to be short since I'm out of time again. To host our iSCSI server I explored the Debian tgt package extensively. No matter what I did, I could not get iPXE to boot to any iSCSI volume hosted on it. After asking the internet for help and getting no reply, I decided to take a shot in the dark on an OS based on something entirely different that I once used a long time ago for file storage: FreeNAS, or, as it's now known, TrueNAS CORE, I believe. Both are derived from an OS known as FreeBSD, a UNIX-like OS. Completely different from Debian. And guess what. It worked.
  14. So, where we left off last week, we got all of the hardware set up and ready to go. Now we're going to start with the software side of things. At the end of the last update I showed the Mellanox FlexBoot screen. This uses iPXE 1.0.0+. Now, iPXE itself has a website wherein it's possible to either update the flash on the NIC, boot an updated flash from USB, or perform what is known as chain-loading, wherein you have the DHCP server point the iPXE on the NIC to a TFTP server which holds an updated iPXE file. Of these options I've explored all of them, and although the TFTP serve
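Chain-loading over DHCP is usually done by tagging requests that already come from iPXE, since iPXE sets DHCP option 175. A sketch of what that looks like with dnsmasq, assuming dnsmasq is the DHCP/TFTP server and the filenames are hypothetical:

```
# /etc/dnsmasq.conf snippet (hypothetical filenames)
dhcp-match=set:ipxe,175              # request came from an iPXE client
dhcp-boot=tag:!ipxe,undionly.kpxe    # first pass: serve an updated iPXE binary
dhcp-boot=tag:ipxe,boot.ipxe         # second pass: serve the actual boot script
enable-tftp
tftp-root=/srv/tftp
```

Without the option-175 match, the NIC's built-in iPXE and the chain-loaded one would loop forever requesting the same file.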
  15. So, linking all of these NICs together, as mentioned when answering a previous query, is the Ubiquiti Unifi 16-XG-US. The Ubiquiti Unifi series of switches uses what's known as the Unifi Network Application, which is a client-side web browser app for managing Unifi products. Although it can integrate well for some users, I'm more of a proponent of the device just hosting its own WebUI (Ubiquiti's Edge series line-up). This comes at a premium though, because it's a more enterprise-like feature as opposed to pro-sumer. Setup with the Network Application and registering the 16-XG-US it com