
Windows7ge

Member
  • Posts

    12,148
  • Joined

  • Last visited

Awards

About Windows7ge

  • Birthday Sep 08, 1994

Profile Information

  • Gender
    Male
  • Interests
    Computer networking
    Server construction/management
    Virtualization
    Writing guides & tutorials
  • Member title
    Writer of Guides & Tutorials

System

  • CPU
    AMD Threadripper 1950X
  • Motherboard
    ASUS PRIME X399-A
  • RAM
    64GB G.Skill Ripjaws4 2400MHz
  • GPU
    2x Sapphire Tri-X R9 290X CFX
  • Case
    LIAN LI PC-T60B (modified - just a little bit)
  • Storage
    Samsung 960PRO 1TB & 13.96TiB SSD Server w/ 20Gbit Fiber Optic NIC
  • PSU
    Corsair AX1200i
  • Cooling
XSPC waterblocks on CPU and both GPUs, 2x 480mm radiators with Noctua NF-F12 fans
  • Operating System
    Ubuntu 20.04.1 LTS

Recent Profile Visitors

40,392 profile views
  1. Expensive and I'd have to wait quite a while...what I need is 3.5" sleds with 2.5" support. The chassis doesn't support 2.5" sleds. For the moment I've just pushed the SSDs onto the SAS backplane and inserted the sleds. I have a few ideas for how I might 3D print my own brackets.
  2. I/O shield arrived. That looks a lot better. I was just buttoning everything up and plugging everything in when I discovered an oversight. I didn't check if the sleds supported 2.5" drives. So I have three options here. I can look for adapters, but I know those get expensive. I can see if 2.5"-compatible sleds are available for this server. Or I can just use my fingers and push the drives into the SAS backplane, knowing they're not connected to the sleds. I'm gonna explore all three probably. Depending on prices and availability, option 3 is probably gonna be my go-to. Once I install these they're only coming out if a drive dies.
  3. 40mm fans arrived. The server only came with 3. I was able to source the same model of fan the server came with very cheaply, so they all match and I don't have to worry about airflow balance. I also acquired the x8 riser card and its bracket. Once I'm ready to screw everything together, this is about what it'll look like. Still waiting on the I/O shield, so I can't secure the motherboard in place just yet.
  4. Got the old motherboard removed. I also moved some standoffs and removed others for the "new" board. This is the motherboard I'll be replacing it with: the Supermicro X10SRL-F. I needed a board that would support the E5-2637v3, which it should. The CPU mount needed to support Narrow-ILM. I wanted this socket orientation so it can get airflow from the front of the chassis. And I needed enough SAS/SATA ports for the drive bays. Wiring up all the pins for the front panel indicators was annoying. Looking at the silkscreen, the front panel I/O is pretty straightforward. As it turns out, the cable supplied with the server was an adapter plugged into an adapter, and neither is needed when plugging a Supermicro board into a Supermicro chassis since every pin pair lines up properly. I can just drop the ribbon directly onto the board. According to the manual, polarity dictates GND is on top and pin 1 is lower left. This also matches the pin 1 indicator on the ribbon itself. Until the rear I/O shield arrives I can't screw in the board, so I'll stop here for tonight.
  5. I've never been one to do things the easy way. It's an HPE 331T. Don't know if HP spun their own silicon or if it's an Intel controller, but we'll find out if support is there out of the box. I've never built a driver from source, if that's even how you port a driver to an unsupported distro. Will find out (there's a rough sketch of how I'd check driver support after this list). The goal of the project is to do this as inexpensively as possible. This is what I got for free, so I want to avoid buying one unless I have to.
  6. I'm not OP, but out of curiosity, does Moonlight/Sunshine require an internet connection like Parsec? Does it support platforms other than Windows?
  7. Something with a WebUI would be nice just because I'm not going to remember commands and will probably neglect to write them down. I see OPNSense is actually a pfSense fork so it's also FreeBSD based. I'll try OPNSense first, see how it plays with the hardware. If I have issues I might try again but with VyOS.
  8. I was arguing with myself between VyOS and pfSense but forgot OPNSense was a thing. Is it Debian based? VyOS has no WebUI, CLI only. pfSense, being FreeBSD based, may not support the quad-NIC. I have to check the controller it uses. Also, all the controversy surrounding the developers makes supporting the project a turn-off. I will read into OPNSense; otherwise I'm tempted to give VyOS a shot. It's Debian based and should support the NIC. I'll have to brush up on the commands though (there's a rough example of the config syntax after this list).
  9. Pulled this guy out of the trash at work. Figured I'd turn it into a router. This is the Barracuda Networks Backup 490. It runs on an MSI motherboard with some custom firmware and their own Linux-based operating system. To turn this into a router I wanted to gut the existing hardware. Turns out the enclosure is a gently tweaked Supermicro CSE-813M which accepts standard ATX motherboards. I also salvaged all of these bits, used, for free from work:
     4x MX500 SSDs off a shelf
     4x8GB 2Rx8 DDR4 RDIMM ECC from a Dell PowerEdge
     HP quad port network card from our old work server
     Intel Xeon E5-2637 quad core LGA2011-v3 processor from a Dell PowerEdge
     1U Narrow-ILM CPU cooler from an E5-2670 box where it was never used
     Parts I need to order:
     40mm fans
     PCI_e riser card + bracket
     A motherboard
     Server rails
     CSE-813M compatible I/O shield (doesn't take normal ones)
     All parts will be sourced from second-hand market vendors.
  10. I've never tried pass-through on an XP VM. Are the error details any more descriptive than error code 10? Proxmox should tell you more in the log. Did you explore any of the parameters you can set? Maybe you can pass-through on i440FX but you have to make the VM think it's a PCI device not PCIe.
  11. I'm not familiar with i440FX supporting hardware pass-through. Tutorials I've followed myself have you switch to Q35 because of a virtual PCI_e feature that's included. I know Microsoft released a 64-bit version of XP, but I don't know if that would help any.
  12. Did you enable VT-d? And did you set the VM to use the Q35 chipset instead of i440FX? (A sketch of the usual host-side passthrough setup is after this list.)
  13. Did you check with lspci -vnn to verify both the video and audio devices are using the vfio-pci kernel driver? Mine looks like this (a small script for listing whole IOMMU groups is after this list):

      5e:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107GL [Quadro P1000] [10de:1cb1] (rev a1) (prog-if 00 [VGA controller])
              Subsystem: Hewlett-Packard Company GP107GL [Quadro P1000] [103c:11bc]
              Flags: bus master, fast devsel, latency 0, IRQ 993, NUMA node 0, IOMMU group 92
              Memory at c4000000 (32-bit, non-prefetchable) [size=16M]
              Memory at a0000000 (64-bit, prefetchable) [size=256M]
              Memory at b0000000 (64-bit, prefetchable) [size=32M]
              I/O ports at 9000 [size=128]
              Expansion ROM at c5000000 [disabled] [size=512K]
              Capabilities: [60] Power Management version 3
              Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
              Capabilities: [78] Express Legacy Endpoint, MSI 00
              Capabilities: [100] Virtual Channel
              Capabilities: [128] Power Budgeting <?>
              Capabilities: [420] Advanced Error Reporting
              Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
              Capabilities: [900] Secondary PCI Express
              Kernel driver in use: vfio-pci
              Kernel modules: nvidiafb, nouveau

      5e:00.1 Audio device [0403]: NVIDIA Corporation GP107GL High Definition Audio Controller [10de:0fb9] (rev a1)
              Subsystem: Hewlett-Packard Company GP107GL High Definition Audio Controller [103c:11bc]
              Flags: bus master, fast devsel, latency 0, IRQ 992, NUMA node 0, IOMMU group 92
              Memory at c5080000 (32-bit, non-prefetchable) [size=16K]
              Capabilities: [60] Power Management version 3
              Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
              Capabilities: [78] Express Endpoint, MSI 00
              Capabilities: [100] Advanced Error Reporting
              Kernel driver in use: vfio-pci
              Kernel modules: snd_hda_intel
  14. Haven't done anything on this website in a few months, but recently I pulled a cool toy out of the e-waste bin at work.

     

    This is the Barracuda Networks Backup Server 490.

     

    [photo attachment]

     

    This is actually a gently customized Supermicro server built to spec for Barracuda Networks, who sell it as a different product. The enclosure itself is actually a Supermicro CSE-813M.

     

    Not much to it but the internal layout is completely standardized meaning I can install anything I want.

     

    [photo attachment]

     

    Above is the stuff it came with but I'm gonna swap most of it out for the stuff below for the low low price of free.

     

    [photo attachment]

     

    4x500GB SSD

    4x8GB DDR4 RDIMM ECC 2133MHz

    4x1Gbit NIC

    Intel Xeon E5-2637v3 quad core.

    Narrow-ILM LGA2011 1U compatible heatsink

     

    I need a handful of parts to arrive before I start, but I might do a build log on the project. I know the build logs aren't very popular though, so it's going to be pretty low effort, for the few who would be interested in seeing me build this.

    The end goal is either a VyOS or pfSense based router build. Very overkill, but I don't care. 😆

     1. da na

        Welcome back :)
        Fun little piece of hardware!

  15. I don't know about this year due to circumstances but maybe next year. o7
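
Following up the driver question in post 5, here's a minimal sketch of how that check usually goes on a stock Ubuntu install. As far as I know the HPE 331T is a Broadcom BCM5719-based card covered by the in-tree tg3 driver, so it may well work out of the box; the interface name enp1s0f0 below is just a placeholder.

    # List network controllers with their vendor:device IDs and the bound kernel driver
    lspci -nnk | grep -iA3 ethernet

    # If the ports already show up, confirm which driver and firmware they report
    # (swap enp1s0f0 for the real interface name from `ip link`)
    ethtool -i enp1s0f0

    # If nothing bound automatically, try loading the Broadcom gigabit driver by hand
    sudo modprobe tg3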
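
For the VyOS option in post 8 (and the end goal in the build log), a rough example of the config syntax, assuming VyOS 1.3.x with eth0 as WAN, eth1 as LAN, and a 192.168.1.0/24 subnet. All of those are assumptions about how the router would be laid out, and newer VyOS releases have reworded some of the NAT commands.

    configure

    # WAN pulls an address from the upstream connection (assumed DHCP)
    set interfaces ethernet eth0 address dhcp
    set interfaces ethernet eth0 description 'WAN'

    # LAN side gets a static gateway address (assumed 192.168.1.1/24)
    set interfaces ethernet eth1 address '192.168.1.1/24'
    set interfaces ethernet eth1 description 'LAN'

    # Masquerade LAN traffic out the WAN interface
    set nat source rule 100 outbound-interface 'eth0'
    set nat source rule 100 source address '192.168.1.0/24'
    set nat source rule 100 translation address masquerade

    commit
    save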
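
For the Proxmox passthrough questions in posts 10-12, a sketch of the host-side setup most tutorials walk through, reusing the Quadro P1000 IDs from the lspci output in post 13. The VM ID 100, an Intel host booting through GRUB, and the exact paths are assumptions; check it against the Proxmox PCI passthrough documentation rather than copying it blindly.

    # 1. Turn on the IOMMU in the kernel command line (GRUB assumed), then apply and reboot
    #    In /etc/default/grub:
    #    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
    update-grub

    # 2. Reserve the GPU and its audio function for vfio-pci using the IDs shown in post 13
    echo "options vfio-pci ids=10de:1cb1,10de:0fb9" > /etc/modprobe.d/vfio.conf
    update-initramfs -u -k all

    # 3. After another reboot, attach the device to the VM (ID 100 assumed) on a Q35 machine type
    qm set 100 --machine q35
    qm set 100 --hostpci0 5e:00,pcie=1,x-vga=1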
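
And for checking complete IOMMU groups like group 92 in post 13, the commonly shared shell loop that prints every group and the devices inside it; nothing in it is specific to this build.

    #!/bin/bash
    # Print each IOMMU group followed by the PCI devices it contains
    shopt -s nullglob
    for group in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${group##*/}:"
        for device in "$group"/devices/*; do
            echo -e "\t$(lspci -nns "${device##*/}")"
        done
    done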