suggestable

Member
  • Posts

    9
  • Joined

  • Last visited

About suggestable

  • Birthday Sep 02, 1985

Contact Methods

  • Discord
    suggestable#9624
  • Steam
    suggestable
  • Origin
    suggestable
  • Battle.net
    suggestable#2494
  • PlayStation Network
    suggestable
  • Xbox Live
    the suggestable
  • Twitter
    suggestable_uk

Profile Information

  • Gender
    Female
  • Location
    UK
  • Occupation
    Volunteer for an LGBT+ charity

System

  • CPU
    Ryzen 5 3600
  • Motherboard
    ASUS PRIME X370-Pro
  • RAM
    32GB DDR4-3000 CL16 Corsair
  • GPU
    ASUS GTX 1080Ti Turbo
  • Case
    Cheapest mesh-fronted ATX box I could find
  • Storage
    1TB Samsung 970 Evo, 5TB Seagate spinning rust
  • PSU
    Corsair HX750i
  • Display(s)
    HP 22w, LG 65" 4K IPS TV
  • Cooling
    Arctic Freezer 7 Pro
  • Keyboard
    White USB from an old Acer R3610
  • Mouse
    White USB from an old Acer R3610
  • Sound
    HDMI to 7.1 home cinema receiver
  • Operating System
    Windows 10 Pro x64


suggestable's Achievements

  1. I've also experienced the same issue. Unfortunately for me, my server boots from an array on one of my MegaRAID cards (there are two 9285CV-8e controllers in the system), so for now my server (and its 200TB of storage) is stuck on 1909, unless someone knows a better solution?

     I'm tempted to turn the current bare-metal install into a VM and run Hyper-V Server 2019 on the hardware (as it's based on an older build, this *should* run fine, I think?). An alternative might be to return the chassis to its original non-expander backplane and run SATA cables from four of the motherboard ports to it, so I could use Windows Dynamic Disks (urgh, software RAID!) to run the OS directly, update the drivers and so on with the MegaRAID cards disconnected, then put it all back how it was after the upgrade. That's an "if all else fails" option, as this is a "proper" server board with decent, modern IPMI, which I'd be wasting by switching to Hyper-V Server.

     In case it's relevant, the server in question is running the following kit:
       • ASRock Rack X470D4U
       • Ryzen 5 3600 (with Dynatron A24 2U cooler)
       • 32GB Corsair DDR4-3000 (2x16GB)
       • Supermicro SC826 chassis with 800W redundant PSUs
       • Windows 10 Pro
       • QLogic QLE8152 dual-port 10GbE CNA
       • (First) MegaRAID 9285CV-8e:
         • 2x Samsung 850 Evo 500GB - configured in RAID-1 for the base system OS
         • 2x Samsung 850 Pro 512GB - configured in RAID-1 for the VM that runs my Plex install
         • 24x 6TB SATA disks (mix of WD Red and Seagate Barracuda - mix of retail and shucked drives) - configured in RAID-60
       • (Second) MegaRAID 9285CV-8e:
         • 12x 10TB WD Red (mix of retail and shucked drives)

     The SSDs are in the front slots of the main chassis (using an SFF8087-SFF8088 adapter and a "loopback" cable), running via a Supermicro 6Gbps SAS expander I bought separately and fitted to this chassis. The other 36 drives are housed in Xyratex HB1235E enclosures.

     Incidentally, I've noticed a significantly higher failure rate on the 6TB WD Red drives than on the Seagate Barracudas, despite both being SMR (which I was unaware of at the time of purchase). I've also had two of the WD Red 10TB disks fail within two years of purchase.

     In case this helps @Nigel Johnstone: to successfully get the CacheVault (supercapacitor-flash backup) working properly after a firmware update, you also need to update the "gas gauge" firmware on the card(s). I've had to do this to several of my MegaRAID cards, including an M5016 that I had previously written off as faulty - updating the gas gauge firmware fixed it! (There's a quick CacheVault status-check sketch after this post list.)
  2. Decided to switch my folding efforts to the LTT team, and I've signed up to the folding month event. Wish me luck!

     Currently folding with my desktop:
       • 1080Ti (folding)
       • R5-3600 (not folding, as it's not worth the extra power)

     Previous folding rig was my Plex server box:
       • 2x Opteron 6366HE
       • No GPU

     PPD on the Plex box was just 10k; PPD on the desktop is ~1.28M.
  3. Why did nobody suggest the EVGA SR-2? It's quite simply the best dual LGA1366 board ever made, and it's unique in that it allows overclocking the CPUs, too! They're not particularly cheap, and rather hard to find, but if you want to push those Xeons to the limit, the SR-2 is the way to go. My last gaming PC build used one to good effect, running a pair of Xeon E5645 CPUs at ~3GHz. Your case would need to support the board, though, as it's larger than almost all boards out there and uses the HPTX form factor. I used to use the Xigmatek Elysium case, but moved over to a Corsair Obsidian 900D when they came out.
  4. Pretty sure isopropyl alcohol (lens cleaner contains IPA) would work better without leaving any residue, and it would likely be safer for your RAM. Long story short: don't touch the gold contacts on any circuit board. Not only do you risk contaminating them, but you also risk ESD damage.
  5. Hi LTT! I have a system running an ASUS Z10PE-D16 WS motherboard and 2x Xeon E5-2620 v3 CPUs, and I'd very much like to increase the clock speed as much as I can. Has anyone here already tried this combination?

     I'm running custom watercooling on the CPU loop (2x EK-Supreme LTX, Black Ice GTX 360, 6x Corsair SP120 HPE in push-pull, 1/2" ID hose, Laing DDC-1T, XSPC bay res, XSPC UV blue non-conductive coolant) and have plenty of cooling headroom in the system.

     This is my first time overclocking Xeons, as my previous build (EVGA SR-2 with 2x E5645 CPUs) was so unstable that it would crash randomly for no apparent reason, even though it was never overclocked. I'm happy to downcore the system if it would help, as my server (2x Opteron 6366HE) is my primary VM host and I only occasionally use all 12c/24t on this system when messing about with HandBrake.

     Thanks in advance!
     -suggestable
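
Following up on the gas-gauge note in post 1: below is a minimal Python sketch for checking CacheVault status, assuming Broadcom's storcli64 utility is installed and on the PATH, and that the two 9285CV-8e cards enumerate as /c0 and /c1 (both assumptions - adjust for your system). The gas-gauge firmware itself still has to be flashed with Broadcom/LSI's own files and instructions; this only dumps the CacheVault report so it can be compared before and after a firmware update.

    #!/usr/bin/env python3
    """Quick CacheVault health check for MegaRAID controllers via StorCLI.

    Minimal sketch: assumes storcli64 is on the PATH and the controllers
    enumerate as /c0 and /c1. It does not perform any firmware update; it
    only prints the CacheVault state reported by the card.
    """
    import subprocess

    CONTROLLERS = ["/c0", "/c1"]  # assumption: adjust to your controller IDs


    def cachevault_report(controller: str) -> str:
        """Return StorCLI's full CacheVault report for one controller."""
        result = subprocess.run(
            ["storcli64", f"{controller}/cv", "show", "all"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout


    if __name__ == "__main__":
        for ctrl in CONTROLLERS:
            print(f"=== CacheVault on {ctrl} ===")
            # Surface only the lines that matter for a quick pass/fail check.
            for line in cachevault_report(ctrl).splitlines():
                if any(key in line.lower() for key in ("state", "temperature", "capacitance")):
                    print(line.rstrip())

As described in post 1, a CacheVault that won't return to a healthy state after a controller firmware flash is the symptom the separate gas-gauge firmware update is meant to fix, so a before/after comparison of this output is a quick way to confirm the fix took.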