Windows7ge

Member
  • Content Count

    10,584
  • Joined

  • Last visited

Awards

About Windows7ge

Profile Information

  • Gender
    Male
  • Interests
    Computer networking
    Server construction/management
    Virtualization
    Writing guides & tutorials

System

  • CPU
    AMD Threadripper 1950X
  • Motherboard
    ASUS PRIME X399-A
  • RAM
    64GB G.Skill Ripjaws4 2400MHz
  • GPU
    2x Sapphire Tri-X R9 290X CFX
  • Case
    LIAN LI PC-T60B (modified - just a little bit)
  • Storage
    Samsung 960PRO 1TB & 13.96TiB SSD Server w/ 20Gbit Fiber Optic NIC
  • PSU
    Corsair AX1200i
  • Cooling
    XSPC waterblocks on CPU & both GPUs, 2x 480mm radiators with Noctua NF-F12 fans
  • Operating System
    Ubuntu 20.04.1 LTS

Recent Profile Visitors

31,650 profile views
  1. I wouldn't recommend mixing your desktop with a NAS. These are two appliances you generally want to keep separate from one another. If you like, you can set up a hypervisor and create your different servers on top of that, but keep your desktop separate.
  2. Might as well add a little safety PSA (or whatever it's called) while I'm here. If you can help it, only put one hand in the PSU at a time. Don't touch anything grounded with the other, and isolate your feet from the ground if you can. Do not use an anti-static wrist strap either. Why? Because that's how the high voltage/high current discharge from the capacitors finds a path across your heart. :3 Insulated screwdrivers are also a good recommendation. It's also worth mentioning you can discharge capacitors in a relatively safe manner by bridging th
  3. If you want an SSD in the NAS you'll need a 10Gig network to not bottleneck it. You can peer-to-peer this if you like; you just need to know how to set up the IPs manually (rough sketch below). If the NAS is physically a separate box from your desktop you do not need any virtualization whatsoever.
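     A minimal sketch of the manual IP setup on two Linux boxes, assuming made-up interface names and a private 10.0.0.0/30 point-to-point subnet (adjust for your own NICs, and use netplan or similar if you want it to survive a reboot):

        # On the desktop, assuming the 10Gig NIC shows up as enp5s0
        sudo ip addr add 10.0.0.1/30 dev enp5s0
        sudo ip link set enp5s0 up

        # On the NAS, assuming its 10Gig NIC is enp1s0
        sudo ip addr add 10.0.0.2/30 dev enp1s0
        sudo ip link set enp1s0 up

        # Test the direct link from the desktop
        ping 10.0.0.2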
  4. This makes sense. Under normal circumstances though you wouldn't do this with Proxmox. Most people keep their servers on a LAN behind a router. That way, when you create your Linux Bridges (vmbr0, 1, 2, 3) all the CT/VMs just have direct access to those respective LANs and a DHCP server (a typical bridge definition is sketched below). Then it'd be up to you to set up port forwarding in the router. I take it you didn't like the idea of self-hosting? Did you consider a web service like Squarespace to build and host your site/forum? It would have been easier. Don't know if it would have been cheaper though.
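     For reference, a typical vmbr0 entry in /etc/network/interfaces on a Proxmox host behind a router looks something like this (the NIC name eno1 and the 192.168.1.x addresses are placeholder examples, not values from this thread):

        auto vmbr0
        iface vmbr0 inet static
                address 192.168.1.10/24
                gateway 192.168.1.1
                bridge-ports eno1
                bridge-stp off
                bridge-fd 0

     Any CT/VM you attach to vmbr0 then behaves like another machine plugged into the router's LAN and can pull an address from its DHCP server.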
  5. It appears all the NICs are in the same IOMMU group... which is bad. However, they do say they support SR-IOV. As for how one goes about setting that up on Proxmox... I'm going to have to read the documentation.
  6. Given this is a rental server you may not be able to utilise SR-IOV. Also, the suggestion I have here would only work for VMs to my knowledge. If you go into the Proxmox terminal and run the command lspci -vnn and find the four network adapters in the list, do they all appear in different IOMMU groups? If so, you may be able to pass each NIC to a different VM (a quick way to check the grouping is sketched below).
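     A quick way to double-check the grouping from the Proxmox shell is to walk sysfs directly; this loop just lists every PCI device by IOMMU group and isn't Proxmox-specific:

        # Print each IOMMU group and the PCI devices inside it
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                lspci -nns "${d##*/}"
            done
        done

     If each of the four NICs lands in its own group, passing one to each VM should be on the table.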
  7. This would be a general rundown of how I might go about troubleshooting a PSU. Start with a visual inspection. Any shit-stains? Good place to start. Give it a sniff test. If something smells burnt, look for blown caps, cracked transistors, or anything that looks like it let out the magic smoke. Grab a multimeter and test for a short to ground on the output rails; that would trip over-current protection instantly. If it yields nothing, check for a short on the input side. If none of these show results then you might have to start probing the board with power connected
  8. You can try this if you like but I have no confidence it'll work. Create a Linux Bridge going to one of the NICs (preferably not the one you're using for web management). Assign that bridge to a CT/VM, then configure that CT/VM with the same static IP assigned to the physical interface (for a container, the pct one-liner below is roughly what I mean). This might work. This also might break things.
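     For a container that would look roughly like this; the VMID, bridge name, and addresses are placeholders I'm making up for the sketch, and you'd want to remove the IP from the physical interface first so the two don't conflict:

        # Attach the container's eth0 to vmbr1 and give it the static IP/gateway
        # the physical NIC was using (example values only)
        pct set 100 -net0 name=eth0,bridge=vmbr1,ip=203.0.113.10/24,gw=203.0.113.1
        pct reboot 100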
  9. The problem with a Linux Bridge here is that it passes network packets from a virtual interface assigned to the CT/VM to the physical NIC on what would normally be a LAN. So by setting the network adapter of a given CT/VM to the Linux Bridge, it's effectively connected to the open Internet. The issue is it will send out a DHCP request which will be quickly dropped, as there is no server to hand out an address on the WAN. You would need to assign the interface itself to the VM. That, or set up a DHCP/firewall server and run everything behind NAT (a masqueraded bridge along the lines of the sketch below). Assigning the har
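     If you'd rather go the NAT route, Proxmox's documentation describes a masqueraded bridge; a stripped-down version in /etc/network/interfaces (the eno1 uplink name and the 10.10.10.0/24 guest range here are assumptions) looks roughly like:

        auto vmbr1
        iface vmbr1 inet static
                address 10.10.10.1/24
                bridge-ports none
                bridge-stp off
                bridge-fd 0
                # NAT the guests' private range out of the host's public interface (eno1 here)
                post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
                post-up   iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eno1 -j MASQUERADE
                post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o eno1 -j MASQUERADE

     The CT/VMs then take static addresses in 10.10.10.0/24 with 10.10.10.1 as their gateway, or you run your own DHCP server on that bridge.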
  10. I still need to research their BGA series of EPYC processors. Pretty sure the cores are weaker than their SP3 counterparts.
  11. You should be able to, but I can't say that with 100% confidence as I don't own your board. I'm going to make the bold claim that software RAID would be a great step above motherboard RAID (a minimal example is below). If you're intent on hardware RAID, get a proper RAID controller. Desktop motherboard RAID is not good at all.
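     By software RAID I mean something like mdadm on Linux (or Storage Spaces on Windows). A minimal two-disk mirror with mdadm, assuming two blank disks at /dev/sdb and /dev/sdc, would go roughly like this:

        # Create a two-disk RAID 1 array (this wipes sdb and sdc)
        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

        # Put a filesystem on it and check the array status
        sudo mkfs.ext4 /dev/md0
        cat /proc/mdstat

        # Save the layout so it assembles on boot (on Ubuntu, follow with update-initramfs -u)
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf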
  12. Unless you have 10Gbit NICs you won't see any appreciable gain at all doing P2P as opposed to just routing the NAS through your home switch. Unless you're also looking to do this for security/access restriction reasons.
  13. Most/all appliances today support a feature known as Auto-MDIX. If a cross-over is necessary for your situation, the system will make that change in software for you; you can use a normal straight-through cable. From your Network Connections menu, if you go to the Properties > Sharing tab of your wireless NIC you can share its network connection out your Ethernet port, giving connected clients Internet access. Now unfortunately this works as a kind of 1/2 NAT (network address translation), which means not all services will work if you're trying to host a server to the broade
  14. Eh, I'm a server hardware enthusiast. Even if I'm not that into the premise of the video, I'm still in it to see the new hardware companies like ASRock Rack, AMD, and Micron are pushing out. I like the mini PCIe x8 connectors on that board; that way not all of those 128 PCIe lanes are going to waste on a single x16 slot. You could use something like this and put it into a 4U server with like 6 GPUs. It'd be an amazing compute server or multi-client workstation server. Isn't the whole Ryzen 4000 series mobile chips (laptop & such)? I could be mistaken about that. I didn't e
  15. ¯\_(ツ)_/¯ We will see in the coming days. If a week passes and it hasn't shipped I'll know something's up. The thought of buying pre-builts to get hardware I can't get off the shelf at this time has crossed my mind, but I know I have no means of flipping the rest of the equipment to make up the wasted cash, so I immediately abandoned that idea. Not gonna lie, this is not the first server "gaming PC" build LMG has done. This is probably the 7th or more they've done over the years. Obviously the boost clock of the CPU will be the bottleneck here and the other 56-ish cores will not