
maxtch

Member
  • Posts: 697
  • Joined
  • Last visited

Everything posted by maxtch

  1. Genshin Impact always starts a new installation and new game in full-screen mode and detects the resolution automatically. For my setup (PNY RTX 3060 12GB + Dell P2415Q) that means I get full-screen 4K with all the eye candy enabled for the initial sequence, with no way to lower the resolution or other graphics settings until I am done with the first tasks. That resulted in my game stream crashing, because my GPU became overloaded. Is it true that 4K resolution makes almost any game extremely demanding on the GPU?
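     Rough math on why: 4K has about four times the pixels of 1080p, so anything that is fill-rate or shading limited gets roughly that much heavier. A back-of-the-envelope sketch in Python (the actual load still depends heavily on the game and settings):

     ```python
     # Pixel-count comparison: 4K pushes ~4x the pixels of 1080p per frame,
     # which is why even mid-range GPUs struggle at 4K with everything maxed.
     resolutions = {
         "1080p": (1920, 1080),
         "1440p": (2560, 1440),
         "4K":    (3840, 2160),
     }

     base = resolutions["1080p"][0] * resolutions["1080p"][1]
     for name, (w, h) in resolutions.items():
         pixels = w * h
         print(f"{name}: {pixels:,} pixels per frame ({pixels / base:.2f}x of 1080p)")
     ```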
  2. Core. The server has suffered so much that it is dragging my other server down, so I don't think I will keep TrueNAS on it. I would likely build something custom on top of stock Ubuntu Server instead, with RDMA.
  3. So this is happening: my TrueNAS server has one CPU core pinned at 100% while all the other cores are idling. How can I load balance across the cores? It is ingesting data from another spinning-rust storage server over rsync on NFS on 10GbE. TrueNAS does not support RDMA, which exacerbates this problem; otherwise I would use NFS over RDMA for this connection, which would reduce CPU load greatly. System configuration:
     CPU: Intel Atom C3558 (4C/4T)
     Motherboard: ASRock Rack C3558D4I-4L
     RAM: 2x SK hynix 16GB DDR4-2133 ECC RDIMM = 32GB
     SSD: Kingston A400 120GB (boot SSD)
     HDD: 4x Hitachi 12TB (refurbished) in RAID-Z1
     NIC: Nvidia Mellanox ConnectX-3 MCX311A 10GbE
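     Since a single rsync stream over NFS is effectively single-threaded, one way to spread the load is to run several rsync processes in parallel, one per top-level directory. A minimal sketch, assuming the NFS export is already mounted; SRC, DST and the rsync flags are placeholders to adapt:

     ```python
     # Sketch: run one rsync per top-level directory so the copy work spreads
     # across several processes (and therefore several CPU cores).
     import subprocess
     from concurrent.futures import ThreadPoolExecutor
     from pathlib import Path

     SRC = Path("/mnt/nfs/source")    # NFS mount from the sending server (placeholder)
     DST = Path("/mnt/tank/ingest")   # local dataset (placeholder)
     WORKERS = 4                      # the Atom C3558 has 4 cores

     def copy_tree(subdir: Path) -> int:
         dest = DST / subdir.name
         dest.mkdir(parents=True, exist_ok=True)
         # -a: archive mode; --whole-file: skip the delta algorithm on a fast LAN,
         # which saves a fair amount of CPU per stream.
         return subprocess.call(["rsync", "-a", "--whole-file", f"{subdir}/", f"{dest}/"])

     if __name__ == "__main__":
         subdirs = [p for p in SRC.iterdir() if p.is_dir()]
         with ThreadPoolExecutor(max_workers=WORKERS) as pool:
             codes = list(pool.map(copy_tree, subdirs))
         print("rsync exit codes:", codes)
     ```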
  4. Budget (including currency): US$0. It is one RMA and some shifting of parts between builds only
     Country: USA
     Games, programs or workloads that it will be used for: Spare computer. No specific tasks.
     System configuration:
     CPU: Intel Core i7-10700K
     Motherboard: Asus ROG Strix Z490-G Gaming Wi-Fi
     RAM: T-Force Vulcan Z DDR4-3200 16GB x2
     CPU Cooler: Cooler Master MasterLiquid ML120L V2 RGB
     Graphics: Sapphire Nitro RX 580 8GB
     SSD: Samsung PM981 M.2 22110 NVMe 960GB
     NIC: Nvidia ConnectX-3 MCX311A 10GbE NIC + Brocade 10GBASE-SR transceiver
     PSU: Corsair CX750M
     Case: Corsair 220T RGB
     I went in to further investigate why things were giving me problems and generating an unholy amount of rejects, and it turned out to be two problems: I had mounted my CPU cooler in my 4U cases with insufficient mounting pressure and thermal paste, and the set of T-Force RAM I bought was faulty at 3200MHz. I fixed the first problem when moving that MSI platform back, and the second one went through an RMA process. The RMA came back, I moved the MSI board back into the rack case I initially had it in, and it has worked fine ever since. This means I now have a pile of no-longer-needed troubleshooting pieces, hence the title Wasted Efforts. This is still a pretty capable gaming and workstation build, but I just have no task assigned to it. Some things of note: the BIOS of this Asus board has trouble booting Ubuntu through UEFI, and its onboard peripherals are not supported in macOS (Hackintosh), which limits the usefulness of this machine for me, as I use a lot of UNIX. The front fans of the case are a good emulation of server cases if I spin them up. Too bad this motherboard lacks the PCIe slots for my spare Tesla K20X.
  5. Well, for Ethernet, if you have multiple wired devices with open PCIe slots (desktop PCs) you can upgrade the LAN connection to 10Gbps using some cheap used Mellanox 10Gbps Ethernet cards and a relatively cheap TP-Link 10Gbps Ethernet switch.
  6. It can be their ISP too. Also certain ISP pairs have exceptionally bad connections.
  7. Your GPU will run normally, but you will see a drop in performance due to the reduced bandwidth available to it. Unless you have something else that requires a lot of bandwidth (for example a high-end network card: a dual-port 100Gb Ethernet card can easily eat up a whole PCIe 3.0 x16 interface), you usually should not do this.
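     For a rough sense of the numbers (theoretical peaks, ignoring protocol overhead): PCIe 3.0 x16 tops out around 15.8 GB/s after 128b/130b encoding, while a dual-port 100GbE card can in principle push about 25 GB/s, so such a card really can saturate the whole slot:

     ```python
     # Back-of-the-envelope PCIe vs. NIC bandwidth (approximate figures only).
     PCIE3_PER_LANE_GBIT = 8 * (128 / 130)    # 8 GT/s with 128b/130b encoding
     pcie3_x16_gbytes = PCIE3_PER_LANE_GBIT * 16 / 8
     dual_100gbe_gbytes = 2 * 100 / 8         # two 100 Gbit/s ports, one direction

     print(f"PCIe 3.0 x16: ~{pcie3_x16_gbytes:.1f} GB/s")
     print(f"Dual-port 100GbE: ~{dual_100gbe_gbytes:.1f} GB/s (more than the slot can carry)")
     ```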
  8. These are surface mount capacitors.
  9. Textures can be coarse or fine. The finer the texture, the larger the VRAM usage.
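     As a rough illustration of how fast that grows, here is an estimate for an uncompressed RGBA8 texture with a full mip chain (real games use compressed formats such as BCn, so the absolute numbers are lower, but the quadratic scaling with resolution is the same):

     ```python
     # VRAM estimate for an uncompressed RGBA8 texture plus its mip chain.
     def texture_vram_bytes(width: int, height: int, bytes_per_pixel: int = 4,
                            mipmaps: bool = True) -> int:
         total, w, h = 0, width, height
         while True:
             total += w * h * bytes_per_pixel
             if not mipmaps or (w == 1 and h == 1):
                 break
             w, h = max(w // 2, 1), max(h // 2, 1)
         return total

     for size in (1024, 2048, 4096):
         mb = texture_vram_bytes(size, size) / (1024 ** 2)
         print(f"{size}x{size} RGBA8 + mips: ~{mb:.1f} MB")
     ```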
  10. Yup, it is the HDD to blame. Get an SSD, preferably an M.2 NVMe drive, and image your HDD onto it (if you can afford an SSD the same size as or larger than your HDD). You can leave the HDD in your machine, but use it only for storing infrequently accessed data.
  11. If you can find the clip somewhere in your case, you can slip it back in from the bottom. However, running your GPU without that clip should not matter much, as long as you have secured the PCI bracket properly. I have server motherboards that use exclusively PCIe slots without those clips, and GPUs have always run properly in them.
  12. The first thing I see is your memory speed. You are leaving 50% of your memory interface performance on the table. Enable XMP first. Memory bandwidth is critical for overall system performance, as almost everything else has to transit through main memory when being moved from one device to another.
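     For reference, the theoretical peak of a dual-channel DDR4 interface scales linearly with the transfer rate, so running a 3200 kit at its fallback speed leaves a big chunk of bandwidth unused (real-world throughput is lower than these peaks, but the ratio holds):

     ```python
     # Theoretical peak bandwidth: transfers/s * 8 bytes per 64-bit channel * channels.
     def ddr4_bandwidth_gbs(mt_per_s: int, channels: int = 2) -> float:
         return mt_per_s * 1e6 * 8 * channels / 1e9

     for speed in (2133, 2666, 3200):
         print(f"DDR4-{speed} dual channel: ~{ddr4_bandwidth_gbs(speed):.1f} GB/s peak")
     ```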
  13. That 120mm AIO was not intended for the current case at all. It, too, is a reject from the troubleshooting process. I have three 4U industrial rack-mount cases. Those cases are too small to work with traditional tower air coolers like the Hyper 212. The best hope of putting an 11900K (or 10700K, or 9700K) in there was a 120mm liquid cooler in the only spot where the case accepts a 120mm fan. In that mounting scheme the radiator sits directly at the cool air inlet, which makes things slightly more workable. That cooler was taken out of one of those cases once I gave up on the 11900K platform and put the dual-socket Xeon E5-2696 v2 system back in.
  14. Budget (including currency): $200. This build is mostly a collection of parts that came out of other builds in the process of troubleshooting.
     Country: USA
     Games, programs or workloads that it will be used for: Backup unit, no specific workload planned.
     I was having weird thermal and USB compatibility issues among my main workstations. In the process of troubleshooting, I generated enough removed parts to put together into a build. Since most parts are either rejects or bought used, I am calling this The Question Mark Build, as almost every important part here has a question mark over its reliability. This is actually my first ever build with either a transparent side panel or any significant amount of RGB. Parts list:
     CPU: Intel Core i9-11900K. This CPU may have a weak memory controller, as it seems to have reliability trouble with DDR4-3200 memory.
     MoBo: MSI Z590-A Pro. This motherboard seems to have signal integrity issues with its USB 3.2 ports.
     CPU Cooler: Cooler Master MasterLiquid ML120L V2 RGB. Bought used.
     RAM: Corsair Vengeance DDR4-3000 clocked at DDR4-2933. This RAM set is otherwise fine, but the motherboard doesn't want to run it at 3000MHz without a BCLK overclock.
     GPU: Sapphire RX 580 Nitro 8GB. This GPU physically conflicts with the reinforcement bar in my 4U rack cases. The case can live without that bar, but operating it like that is suboptimal.
     SSD: Samsung PM981A 960GB M.2 22110 NVMe drive. Bought used. This drive looks like a server-grade SSD, since it has a ridiculous number of capacitors on it.
     PSU: Corsair CX750M.
     NIC: Nvidia Mellanox ConnectX-3 EN CX311A. Bought used. (I do have an all-fiber 10Gbps home network and most of my computers have CX311A or CX312A cards. That can lead to interesting things, as those cards have RDMA.)
     NIC: Intel AX200 Wi-Fi + Bluetooth card.
     Case + Fans: Corsair iCUE 220T.
  15. Budget (including currency): US$2000
     Country: USA
     Games, programs or workloads that it will be used for: Cities Skylines, KiCad and Eclipse CDT
     Other details:
     CPU: Intel i9-11900K
     Cooler: EVGA 120mm AIO water cooler
     MoBo: MSI Z590-A Pro
     RAM: T-Force DDR4-3200 16GB x2 kit
     GPU: PNY Nvidia RTX 3060 LHR 12GB (was a Sapphire AMD RX 480 8GB when initially built)
     SSD: Samsung PM981 1.8TB NVMe M.2 22110 server SSD
     HDD: 6TB disk image on my storage server over iSCSI + iSER. The storage server uses a 16-drive RAID-60 volume formatted as btrfs, shared by 3 computers.
     Wi-Fi + BT: Intel AX200 on a PCIe adapter card
     Ethernet: Nvidia Mellanox ConnectX-3 EN 10Gbps Ethernet + RoCE PCIe card + Brocade 10GBASE-SR fiber transceiver
     PSU: EVGA 750W fully modular
     Case: Rosewill 4U rack-mount case
  16. I play some pretty heavily modded Cities Skylines. However, one day something tripped on my computer during a gaming session. Can someone tell me what might be the problem? Here are the specs of that computer:
     CPU: Core i9-11900K
     Motherboard: MSI Z590-A Pro
     Memory: T-Force Vulcan Z DDR4-3200
     GPU: Sapphire RX 480 8GB (this has been with me for ages)
     SSD: Samsung PM981 2TB NVMe M.2 22110 (server SSD with a ridiculous amount of capacitors on it)
     CPU Cooler: EVGA CLC 120mm AIO + Noctua NF-A12x25 PWM fan
     PSU: EVGA SuperNOVA GA 750
     Case: Rosewill 4U rack-mount case
     Ethernet: Nvidia ConnectX-3 EN 10GbE/RoCE PCIe card
     Wi-Fi: Intel AX200 PCIe card
  17. I am trying to upgrade my home network to 10Gig using retired datacenter gear. For whatever reason, the cheapest single- and dual-port 10Gig Ethernet cards with SFP+ ports are the ConnectX-3 CX311A and CX312A at ~$25 each, which also support RDMA over Ethernet (RoCE). Well, since that is the price tag, all my desktop computers and servers, except that Hackintosh with its X520, will receive a ConnectX-3 card. The NAS gets a dual-port CX312A, and everyone else gets a single-port CX311A. Now that I have somehow got myself an RDMA-ready home network, how can I make use of that feature? I know SMB Direct is a thing, and for my virtualized NAS I will obviously use the virtualization features on that CX312A to give the storage node two RDMA-capable network interfaces, one from each of the two ports. The problem is how to make Linux/Samba work with SMB Direct. Also, since that is a NAS, what other common protocols support RDMA? Other than dumping files over the network, what else can I do with RDMA? Can I, for example, pull in GPU power through RDMA? I did nab a Tesla K20X if such cards are necessary. Will it help with VM migration? p.s. The 10Gig switch I got is a used Brocade 8000, a combined 24-port 10GbE + 8-port 8G FC switch. What can the FC ports do?
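     Before touching Samba, it is worth confirming that Linux actually exposes the ConnectX-3 as an RDMA device. A small sketch that reads the usual sysfs paths (assumes rdma-core and the mlx4 drivers are loaded; for RoCE the link layer should report Ethernet):

     ```python
     # List RDMA-capable devices and their port states via sysfs.
     from pathlib import Path

     IB_SYSFS = Path("/sys/class/infiniband")

     if not IB_SYSFS.exists():
         print("No RDMA devices found (is mlx4_ib / ib_core loaded?)")
     else:
         for dev in sorted(IB_SYSFS.iterdir()):
             for port in sorted((dev / "ports").iterdir()):
                 state = (port / "state").read_text().strip()      # e.g. "4: ACTIVE"
                 link = (port / "link_layer").read_text().strip()  # "Ethernet" for RoCE
                 print(f"{dev.name} port {port.name}: {state}, link layer {link}")
     ```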
  18. It ended up costing about the same per gigabyte using 4GB and 8GB ECC UDIMMs, even considering eBay prices. For whatever reason, if I populate all the slots on this motherboard, the memory bus only runs at DDR3-1066 speeds. Also, I will be running some school projects with long compute times on this system, so having ECC memory helps with system stability and with not getting corrupted results. (I think this is also why the school initially ordered it with ECC memory.)
  19. I do need the ODDs. However, the idle power of the ODDs is negligible, so if I need the power budget for the GPU I just make sure the ODDs are left idling.
  20. Budget (including currency): US$270
     Country: Massachusetts, USA
     Games, programs or workloads that it will be used for: Microsoft Office, VMware Workstation, KiCad, Quartus, GCC on WSL Ubuntu
     Other details (existing parts lists, whether any peripherals are needed, what you're upgrading from, when you're going to buy, what resolution and refresh rate you want to play at, etc): There was a trash heap of discarded computer equipment from the academic department in front of an auditorium at my school, for students and faculty to dumpster dive. I decided to pay that trash heap a visit and ended up with a pretty awesome workstation after putting $270 of upgrade parts into it. From the same trash heap I also pulled two laptops, one of which I completely fixed and the other broken beyond economic repair and thus cannibalized. Also pulled were some spare parts. Specs:
     System: HP Z400 Workstation
     Motherboard: HP Z400 Revision 2 with 6 memory slots ($0 from the trash heap)
     CPU: Intel Xeon X5675 (Westmere 6C/12T @ 3.06GHz/3.46GHz) ($20 from eBay)
     RAM: 3x Crucial DDR3-1600 8GB ECC Unbuffered DIMM (24GB total, multi-bit ECC enabled, running at 1333MT/s) ($90 from Newegg)
     GPU: Galaxy GeForce GTX 1050 2GB ($0, as payment for repairing a friend's gaming rig)
     SSD: Crucial MX500 1TB ($90 from Amazon)
     HDD: Seagate Barracuda 2TB 7200rpm PMR ($0 from the trash heap)
     ODD: 2x DVD-RW drives ($0 from the trash heap)
     Case: HP Z400 case with minor cosmetic damage and a torn school asset label ($0 from the trash heap)
     PSU: HP 430W power supply ($0 from the trash heap)
     Monitor: HP ZR22w 21.5-inch 1080p IPS LCD monitor ($0 from the trash heap)
     Keyboard: HP USB keyboard ($0 from the trash heap)
     Mouse: Dell MOCZUL laser mouse ($0 from a different trash heap)
     Others: Intel AX200 Wi-Fi + Bluetooth module on an adapter card ($30 from eBay), and a USB 3.0 PCIe card with a front panel USB 3.0 card reader ($40 from Newegg)
  21. I think that current problem is why I am getting those OCP trips. It is an 80+ Gold unit; I think the "90+" might just be a mistake in the marketing material. That brand, on the other hand, is fairly reputable in China and it is a state-owned brand. I need to give her some power supply: either this 1000W one, or the 630W one I am currently using with my other PC, in which case I transfer the 1000W one over to that PC.
  22. How do I tell if two 12V outlets on the modular connector panel belong to the same 12V rail? I don't think the PSU's manual mentions that. I need to account for having both GPUs running full blast, since this is a full-blown server system and I may in the future Proxmox it and run two gaming-capable VMs using GPU passthrough. (I am upgrading to a Great Wall Dragon 1250W/1560W unit anyway, so I can give this 1000W one to my mom. She is getting into gaming.)
  23. It is two RX 480s and two 12-core Ivy Bridge Xeons. (The E5-2696 v2 is an off-roadmap part made for some hyperscaler; the E5-2697 v2 is the closest approximation.) It is running on an Asus Z9PE-D16 server motherboard, but I have specifically chosen the PCIe slot layout so that both cards each get a full x16 link to the same CPU. I have the PCH, both graphics cards and my NVMe SSD hooked to CPU 0, and the Wi-Fi card and USB 3.0 card hooked to CPU 1.
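     A quick way to verify which socket each card actually hangs off is to read the device's NUMA node from sysfs (a sketch; the PCI addresses are placeholders, substitute the ones reported by lspci):

     ```python
     # Print the NUMA node (i.e. the CPU socket on this 2P board) each PCIe
     # device is attached to. PCI addresses below are placeholders.
     from pathlib import Path

     DEVICES = {
         "GPU 0":   "0000:03:00.0",
         "GPU 1":   "0000:04:00.0",
         "NVMe":    "0000:05:00.0",
         "Wi-Fi":   "0000:82:00.0",
         "USB 3.0": "0000:83:00.0",
     }

     for name, addr in DEVICES.items():
         node_file = Path(f"/sys/bus/pci/devices/{addr}/numa_node")
         if node_file.exists():
             print(f"{name} ({addr}): NUMA node {node_file.read_text().strip()}")
         else:
             print(f"{name} ({addr}): device not found")
     ```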
  24. After I re-added the second RX 480 to my dual E5-2696 v2 machine, I am suddenly getting over-current protection trips on my Great Wall Dragon 1000W modular power supply when stressing both CPUs and both GPUs at the same time in some benchmarks. How much power do two E5-2696 v2 CPUs + two RX 480 8GB cards take? Cinebench R23 and 3DMark Time Spy both work fine, but Unigine Heaven trips OCP.
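     A rough TDP-based budget, with per-part wattages that are only approximate (GPU transient spikes go well above these averages, which is the usual reason one benchmark trips OCP on a 12V rail while others run fine):

     ```python
     # Very rough steady-state power budget; treat every figure as an estimate.
     parts = {
         "2x E5-2696 v2 (approx. 130 W TDP each)": 2 * 130,
         "2x RX 480 8GB (approx. 165 W board power each)": 2 * 165,
         "Motherboard, RAM, drives, fans, NICs (guess)": 100,
     }

     total = sum(parts.values())
     for name, watts in parts.items():
         print(f"{name}: ~{watts} W")
     print(f"Estimated sustained total: ~{total} W, before transient spikes")
     ```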
  25. Umm, how do I force AFR 1 on AMD CrossFire? Also, my CAD is mainly KiCad, which uses either local OpenGL rendering or local pure-CPU ray tracing. (That dual E5 does CPU ray tracing really well.)