Everything posted by bimmerman

  1. I vaguely remember the BIOS having an option for essentially this under the memory settings; I had to change it to get my SR-2 to boot with 4 GPUs. I don't have mine attached to a monitor at the moment, so that's only vaguely what I remember. If you search for posts of mine in the X58 HEDT thread, I was reporting on virtualization shenanigans, and one required fix was to limit the PCI memory/DMA space to get around the godawful NF200 chip limitations. Off the top of my head, I don't recall the exact setting tho. Other posts to look for are anything by gordan79 (?) on the EVGA forum....search Google for the NF200 KVM terms and his posts will come up.
  2. I have an SR-2 and got 96 GB of ECC to work on mine, though I haven't had great success overclocking with the 96 GB. It also depends on your CPUs to some extent, especially the CPU stepping. My X5675s worked a treat. The key was to find 8GB 2Rx4 DIMMs-- dual rank, in other words. I have seen anecdotes of people getting 16GB modules to work, and I'm halfheartedly trying to source some, but the 12x 8GB 2Rx4 setup works great. It does take some time to POST, and won't POST at all with the 1600MHz strap. You also may need to install half the modules, POST, then set the command rate to 2T and hard-set every other memory setting (mine seemed ambivalent about this, others needed it). Quick sketch of the 2Rx4 math below.
  3. Somewhere in this thread are my BIOS screenshots for my X5675 on the Gigabyte UD3 board, at 4.5 GHz with 1.3-ish V. My chip would do 4.7 at 1.4, but I never tried more than that; diminishing returns kicked in hard. Re: NVMe boot drive, yes, use the 950 Pro or the OEM variant and run it in AHCI mode. Don't try to run multiple NVMe drives in a PCIe add-in card unless the card has a PLX switch, because the X58 chipset doesn't support 4x4 bifurcation. Hence the need for pricey cards to guarantee success. In other news, my SR2 hasn't sold, so until it does, more tinkering is afoot. Also getting some freebie socket 2011 Xeons for the Supermicro board, and setting about making a case for its weirdo aspect ratio. In other other news....TR Pro has caught my eye hardcore, esp that Asus board. Mmmmm.
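For anyone decoding the memory shorthand: "2Rx4" means two ranks of x4-wide DRAM chips per module. A minimal sketch of the arithmetic, assuming a standard 72-bit-wide ECC DIMM (64 data bits + 8 ECC bits):

```python
# Chips per ECC DIMM: 72 bits per rank divided by bits per chip,
# times the number of ranks. "2Rx4" = 2 ranks of x4 chips.

def ecc_dimm_chips(ranks: int, chip_width_bits: int) -> int:
    """Chip count for a 72-bit-wide (64 data + 8 ECC) registered DIMM."""
    return ranks * (72 // chip_width_bits)

print(ecc_dimm_chips(2, 4))   # 36 chips on a 2Rx4 module
print(12 * 8, "GB")           # 12 slots x 8 GB modules = 96 GB total
```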
  4. I only use wireless keyboards and mice (the ergo Logitech stuff, specifically)....but I'm also not trying to play esports games competitively. That said, even if I were, I'd probably use the same input devices. Not having wires and reducing my RSI is worth the imperceptible performance hit at my skill level anyway.
  5. That's...weird. Try plugging your drive into the black ports, letting it POST/boot, then powering off and trying a red port? I think the board in general can be pretty darn wonky.
  6. Whelp. I've been tinkering further with the SR-2. My board and my 96GB of RAM do not get along above 145 BCLK. It will POST with 48GB just fine, but 96 isn't happening no matter what I've tried, which is a bummer but not the end of the world. However, this kinda is: I wanted to get a baseline of 2 core + 4 thread performance with a GPU before I go much further with this platform. The SR-2 has physical jumpers that let you disable/enable CPU and memory banks, and in the BIOS you can tell it how many cores per socket to use. I also set my display GPU to a PCIe slot that was knocked down to PCIe 2.0 x8-- because of the USB 3.0 card and LSI HBA for SSDs, at least two of the VMs will be running their 1080ti at PCIe 2.0 x8 speed (same ballpark bandwidth as 3.0 x4; quick check below). Combining the 2c/4t CPU, at +500MHz over stock, with a 2.0 x8 GPU gives the at-best upper bound for the VM configuration.
And....performance is not good. I'm getting 38 fps in Shadow of the Tomb Raider at 1080p High. On a 1080ti. With all 12 cores and 24 threads, at STOCK speeds, with a full-bandwidth 1080ti, it was pulling down 90+. At 1080p Lowest settings, it poops out 42 fps. It's crazily CPU bottlenecked at 2c/4t; the benchmark tool says it is 0% GPU bound. With the same 2c/4t/x8 setup, Doom (2016) does ~55-70 fps consistently, which is a positive result.
So.....where does that leave the project? Realistically, it's game over. I can overclock a bit more and get another 500MHz out of it without worrying about my total system power budget, but 500MHz isn't going to magically double framerates. I also can't NOT run at least one GPU at x8 link speed, because there are not enough USB expansion ports otherwise. Long way of saying, the SR2 build is over-- my use case of 4x virtualized simultaneous gamers is just not feasible with this motherboard, and it's time to cut my losses and sell off the parts. With all 12c/24t it is a very entertaining platform, especially if you like overclocking and tweaking...I just can't justify keeping it when I have no real use for it.
So. Who wants an SR-2 + RAM + Noctua coolers + CPUs + chipset watercooling parts? And a case to fit it (900D)? Send PMs if interested. The board works, it overclocks, it's great....just not for the task I want to use it for. I'm likely going to sell off my other Supermicro 2P boards as well (X8DTG-QF bare, X9DRG-QF + 2x 8c Xeon + 128GB RAM + coolers) and just move to a Zen 3 Threadripper or X299 setup.
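That 2.0 x8 / 3.0 x4 equivalence checks out on paper. A minimal sketch using the standard per-lane rates and encoding overheads, nothing board-specific:

```python
# Usable one-direction PCIe bandwidth: gen2 runs 5 GT/s with 8b/10b
# encoding, gen3 runs 8 GT/s with 128b/130b encoding.

def pcie_gbps(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for a PCIe link."""
    rate, encoding = {2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}[gen]
    return rate * encoding * lanes / 8   # GT/s -> GB/s after encoding

print(f"2.0 x8: {pcie_gbps(2, 8):.2f} GB/s")   # ~4.00 GB/s
print(f"3.0 x4: {pcie_gbps(3, 4):.2f} GB/s")   # ~3.94 GB/s
```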
  7. I dunno, that's weird. I have had the SATA 3 controller derp out and then come back. Has it ever worked on your board? Is SATA set to AHCI mode?
  8. I also have an SR-2. What controller are you flashing? Have you overclocked or messed with any settings inside the BIOS?
  9. So, it's been a while since Project BadDecisions was noodled on, but I've made some progress. Also bought some shiny bits (GPUs), some more shiny bits (EK and Heatkiller parts), and even more (socket 2011 2P Supermicro 4-GPU board, CPUs, RAM, cooler...). Anyway. On to the update!
Since the SR2 uses a server chipset with some fuckery, it DOES support ECC. However, getting more than 4GB/slot to work has historically been an enormous pain in the ass. That said, most posts on the subject are from circa the 2010-2014 era, and things have changed. Why would I need more than 2 CPUs x 6 slots/CPU x 4GB/slot = 48 GB total? Well, for those who don't know/remember, the point of this project is to run 4 gaming VMs simultaneously, and the rule of thumb at present is 16GB per player, plus some overhead for the hypervisor. That math....doesn't work with only 48GB of total system memory (budget math sketched below). The best I had configured was 11 GB per player with 1 GB per VM allocated to the hypervisor. It technically works, especially for games like Left 4 Dead (which, realistically, is what this'll be used for primarily in my friend group). So, because the other part of this project is to really push the SR-2 to the limit, I figured why not try for the elusive 96GB.
Going over 48GB requires (according to the internet; I can't prove/disprove it) ECC memory, and supposedly specific kinds of it. I....yolo'd it and bought a Supermicro motherboard with CPUs and 128GB of 1333 DDR3 ECC RAM for less than the price of just 96 GB of ECC on ebay. Upside: I have another test bed if the SR2 just fails and I need more cores, since that SM board will tackle up to 12 cores/socket and has 4x PCIe 3.0 x16 slots at double spacing....so it's technically the better way to go. If y'all remember the Supermicro board I had bought before, this is the same thing, just for socket 2011 (X79-era stuff). Anyway, that's boring, and can't overclock, and was intended to be a way to get RAM. So, I popped out 12 sticks of the ECC goodness and got tinkering.
I can't speak to how much of a pain this would be with the earlier production stuff (think X5650s, 70s, etc); the internet says those are much more challenging, especially ES/QS versions. YMMV. For me, with X5675s, this was stupid easy. Step the first: put in 6 sticks, 3 per CPU, and then manually set the timings, speed, and voltage in the BIOS. The key here is to manually set the Command Rate to 2. Then power off, install the remaining sticks, and BOOM, 96 GB. The only critical thing is that you get dual-rank 8GB modules (2Rx4). Results are iffy on 16GB modules or other rank combinations, despite the CPUs being compatible with 16GB DIMMs and other ranks...in other motherboards. The SR-2 is a weirdo. It went shockingly smoothly, once I typed the correct timings into the correct spots. Clearing CMOS on this thing is a pain in the ass, especially since not all settings get saved in profiles. But hey, here's the proof! 96 GB, running nice and happy at 1333 MHz, at the timings the memory manufacturer's data sheet specified (Samsung, in this case).
Eagle-eyed readers will see the next part of the update. Previously I had been using rando GPUs that friends and I had lying around.
The plan all along has been to run 4 GPUs, but here comes SR-2 pain in the ass part the second-- due to the janky way the chipset on this era of stuff is laid out, the maximum native SATA speed is 3 Gb/s, and there is almost no USB 3.0 on the board....and what there is shares bandwidth with the singular SATA 3 (6 Gb/s) controller, all of it sitting behind a single PCIe 2.0 x1 link (5 GT/s, roughly 500 MB/s usable). In simple terms, intense IO routed through the chipset will bottleneck. Intense IO may be a stretch for four simultaneous gaming VMs, but maybe not-- 4+ SSDs plus USB connectivity. Simple math says that'll suck if I can't figure out a solution....which I did: run a PCIe USB card with 4x separate controllers so that I can pass each controller to a separate VM (enabling hot plug and downstream hubs and things, which is nice), plus an HBA PCIe card to bypass the chipset entirely for my SSDs (see the IOMMU sketch below). So, that's pretty sweet!
One problem...the SR2, despite being a 10-slot form factor, only has 7 slots, and GPUs are dual slot. Well shit. Oh, wait, reference-design 1080tis can be water blocked down to single slot. And I already have one.... Hmmmmmmmmm. Yep. Right when the internet was losing its mind over the 3080's performance, prior to realizing you can't actually buy them, I scooped up 3 additional 1080tis that also came with water blocks. Looking back, the 20x0 series may have been smarter to buy, as those include a USB controller on the damn thing, but that wouldn't have solved the HBA issue (if you can use chipset storage, that's the way to go IMO).
Anyway, while I'm waiting on radiators to arrive....and free time... I lent one of the 1080Tis to a buddy, borrowed his 1080 in exchange, and threw this together (my main rig is keeping its 1080Ti until it gets upgraded). Right now I have the following setup: 1080Ti in slots 1 and 2, HBA card in slot 3, 1080ti in slots 4 and 5, USB 3.0 controller card in slot 6, 1080 non-Ti in slots 7/8. Since I don't have the water cooling fully done yet, this spacing lets the cards breathe without requiring me to tear down my other rig, which is nice. Pr0nz:
Lastly, before calling it for the evening, I ran a set of Cinebench R15 and R20 runs for @CommanderAlex's comparison's sake (spoiler....the 10980XE cleans house). There's no way to spin this: the performance of stock X58-era stuff....is not great. I'll do some more thorough benchmarking in games (and VMs) before I really start tweaking knobs with overclocking, but....while my X5675 scaled Cinebench perf pretty much linearly with OC percent (~148 single core @ 4.5GHz in R15), that's still not great.
The next update will likely be a while, but the next step is to play with the OC as-is before ordering all the water cooling parts, and do a dry (ha!) run with friends to see whether the performance of this era of stuff is really going to work for gaming VMs where each player runs 2c/4t. The next update may be some performance figures and/or a for-sale post-- I don't think overclocking the system is going to make up for only having 2c/4t per player, plus all the latency from the PCIe multiplexer chips' janky-ass layout (from a virtualization perspective; it's fine in normal use). If that's the case, I'm heavily leaning towards selling it all as a unit to someone who really wants to tinker with the SR2 rather than trying to do some super-niche virtualization thing like I am....it's really just not the right setup for my use case. If interested, shoot me a PM haha.
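Passing each USB controller to its own VM only works if the controllers land in separate IOMMU groups, since a whole group gets handed to a VM together. A minimal sketch for checking that on Linux, assuming the IOMMU is enabled (e.g. intel_iommu=on on the kernel command line):

```python
# List every IOMMU group and the PCI devices inside it. Each of the
# four USB controllers should ideally show up in its own group.

from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"group {group.name}: {' '.join(devices)}")
```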
I know what my old X5675 would do at its 4.5 GHz limit, which was honestly really good performance in modern-ish titles at higher resolutions, so there's a chance this works. It's really just a question of whether it will work at 2c/4t (at...likely closer to 4.0-4.2 GHz) AND whether the unraid hypervisor will be able to run things with 2 cores per VM allocated (pinning sketch below). More details on getting the SR2 working with virtualization in a follow-up post, because it's super non-trivial due to the aforementioned NF200 PCIe multiplexer layout. Seriously, to the best of my internet sleuthing, only one guy has done this before and published how he did it, and it's not a small job to work around the multiplexers. But it CAN be done, and I've gotten it to work already in testing, so more details to come. Finally, as I'm writing this and running FAH, the 2x stock X5675s, 2x 1080ti, and 1x 1080 apparently pull enough juice to make my UPS scream bloody murder and trip its overload protection. Fcking awesome.
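On the 2c/4t carve-up: the usual approach is to pin each VM to physical cores together with their hyperthread siblings. A minimal sketch of building that map from sysfs on Linux; the two-cores-per-VM grouping is illustrative, not an unraid default:

```python
from pathlib import Path

# Each physical core plus its hyperthread sibling, read from sysfs.
# thread_siblings_list looks like "0,12" (or "0-1" on some kernels).
pairs = sorted({
    tuple(sorted(int(t) for t in p.read_text().replace("-", ",").split(",")))
    for p in Path("/sys/devices/system/cpu").glob(
        "cpu[0-9]*/topology/thread_siblings_list")
})

# Hand two core-pairs (2c/4t) to each VM, in order. Illustrative only.
for vm, cores in enumerate(pairs[i:i + 2] for i in range(0, len(pairs), 2)):
    threads = sorted(t for pair in cores for t in pair)
    print(f"VM {vm}: pin host threads {threads}")
```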
  10. Dang. How's it fare in R15? This weekend my hope is to spin up the SR2 again to decide whether to sell it off in favor of a 10980xe solution. I found a great motherboard for X299 that has 7 full pcie slots, supposedly with solid iommu groups, without any of the quirkiness the SR2 has.
  11. I have a 1080ti that I bought used two years ago. I don't really feel a need to upgrade, so whenever availability gets sorted I'll figure out then if an upgrade is warranted. Ray tracing would be nice but given how few games have RT support, and even fewer of those are ones I would want to play (woo WW2 shooter 2020 edition woo)....I'm not going to buy based on RT abilities. Now, if only the nvidia cards can drive an LG oled at 120Hz and variable refresh, that's a different story (and, I don't know whether AMD can/can't).
  12. Yup, totally agreed on the torque. Super looking forward to replacing my (awesome, just aging) v8 car with an overly powered EV when the time comes for that reason; I really like being able to mash the pedal and have the car MOVE.
  13. Heated and ventilated seats. Sooo nice for road trips. Also, lots of torque. I want to merge NOW.
  14. uhhhhhhhh so that looks interesting to me.... EK parts for the BadDecisions build have started arriving! Also...may be getting the X79 variant of my quad-GPU dual supermicro board here next week with ram and procs and coolers. Yassss
  15. I mean.....I did kinda buy 4x 1080tis and single-slot waterblocks. It'd be a shame to not spin it up at least once! Anybody here messed with adding PCIe slots via riser cables plugged into M.2 slots? If so....TR3 may be back on the menu. Still debating dual 2011-3 Xeon vs X299 (Asus Z10 / X299 Sage mobos, each with 7 x8-x16 PCIe slots) vs TRX40. The latter has plenty of lanes for GPUs, but unlike the Xeon/i9 options there are only 4 PCIe slots, so adding USB or 10G or other cards isn't possible without janky adapters (see: M.2 -> PCIe riser and slot). X299 and the 10980XE may end up winning-- fast cores, 18 of them so enough for 4 VMs and overhead, and plenty of PCIe. Badly wish there was a TRX40 equivalent of the X299 WS Sage mobo.
  16. Right??? It baffles me why there is such PCIe bandwidth but they focused on M.2 sockets instead of slots. Oh well. I did find a 10980XE in stock at B&H. Sooooo yea, kinda tempting. Would be super fun to mess around with! Good tip on the ESs. I'll look into that more. The micro SD card slot made me giggle a bit.
  17. Oof yea, that's just too much. As is the SR-3 and the 28 core unlocked nonsense.... I was just looking at Epyc haha. Price is good but used stuff isn't flooding ebay just yet. Honestly the easy button would be a 10980XE, X299, and just set up 4x 4c/8t VMs with 2/4 left over to manage the hypervisor. ~60% of a TR3 setup in cost, buuuuuuut not exactly purchasable.
  18. Holy balls! I was looking at the Z10 + E5 v4 CPUs as a package, but daaaang this is interesting. The board is the same cost, so the question is: have socket 3647 CPUs come down in price yet?
  19. Yup, saw that. It's not surprising! So, SR2 build update. The goal has been to build a 4-gamers-1-tower kind of build for local LAN parties once covid is over (so, never?). I've gotten virtualization and GPU passthrough working, but performance....is kinda dogshit due to the lack of cores and the stock-ish speeds of X58 Xeons. Each VM is being allocated 4-6 threads total, with the best performance coming from pinning 2-4 threads per CPU for the hypervisor. Essentially, each VM is being given a hyperthreaded dual-core virtual CPU, which isn't optimal and really shows the age of the platform. So, I'm rethinking my platform. Tempted to go dual E5 v3 Xeon (2011-3 / X99 era), but there's no overclocking and the high-core-count CPUs are slooooow. Ideally I'd be able to allocate somewhere between 12-16 threads per VM, and with Star Wars: Squadrons supporting 5-player teams....that means I need a truly ludicrous number of threads (~32-36 total cores or so; rough math below) for VR, hypervisor, and VMs to all be happy with good performance. Threadripper would make sense as well, but there are exactly zero TRX40 boards with 5+ CPU-connected PCIe slots for GPUs. 'Only' doing 4 players is definitely reasonable, but it's kind of a bummer to not be able to do 5. Le. Sigh. Gaming laptops are so much easier to deal with for group parties.
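The rough thread math behind that core count. The hypervisor overhead here is my assumption, not a measured figure:

```python
# 5 player VMs at 12-16 threads each, plus assumed hypervisor overhead.
players, hypervisor_threads = 5, 4   # overhead is a guess, not measured

for per_vm in (12, 14):
    total = players * per_vm + hypervisor_threads
    cores = (total + 1) // 2         # physical cores needed with SMT/HT
    print(f"{per_vm} threads/VM -> {total} threads -> {cores} cores")
# 12 threads/VM -> 64 threads -> 32 cores
# 14 threads/VM -> 74 threads -> 37 cores
```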
  20. That.....is quite weird! Maybe try upping PCIE volts a tick, or IOH / IOH PLL. Honestly that's bizarre. I wonder if your GPU itself is unstable and that (or power delivery to it) is the issue rather than mobo overclock.
  21. Whelp, end of an era. Giving away my X5675/mobo/RAM/AIO/R9 290 to a friend who wants to build a PC to play Overwatch. Downclocked it to 21x200 with 1600 RAM, leaving uncore, QPI, and voltages at their 4.52 GHz settings for stability. Here's hoping they like it! Keeping the case (HAF 932) for the Supermicro beast board. Updates forthcoming on the SR2 front as well.
  22. That's looking quite solid. If your cooler can handle it, try a 22 or 23 multiplier and play with CPU Vcore! My X5675 could do 23x200 without much issue at 1.35-1.4 V (I don't remember exactly). Much beyond that is the territory of a full water loop and massively diminished gains. Another thing you can try is raising the uncore and QPI multipliers a bit, as IME that results in a snappier system. Stability will suffer much beyond 3900 MHz uncore though, and you may have to raise the QPI/VTT voltage a bit. Ideally you peg uncore to 2x memory though, so....YMMV on whether this has a benefit. Some other configs I'd try, depending how much energy you want to spend chasing <5% gains: 1. higher BCLK, lower multiplier, to OC the RAM and uncore without adjusting their multipliers (e.g., 215-220 x21). My UD3R can do 220 BCLK pretty consistently. 2. lower BCLK, higher CPU multiplier, and 10x/20x/40x mem/uncore/QPI mults, targeting 1800-1900 RAM speed. May be doable, may barf. I lost a memory channel above 2000MHz RAM, but that was with all 6 slots populated, and it came back once I came back to reality. Quick clock calculator below.
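Since every clock here is just BCLK times a multiplier, a tiny sketch saves the mental math. Example numbers from config 1 above; the memory and uncore multipliers are illustrative:

```python
# Resulting clocks for a given BCLK and multiplier set (X58 convention:
# the memory multiplier yields the DDR3 transfer rate directly).

def clocks(bclk, cpu_mult, mem_mult, uncore_mult):
    return {
        "cpu_mhz": bclk * cpu_mult,
        "ram_mts": bclk * mem_mult,
        "uncore_mhz": bclk * uncore_mult,
    }

print(clocks(220, 21, 8, 16))
# {'cpu_mhz': 4620, 'ram_mts': 1760, 'uncore_mhz': 3520}
# uncore lands at exactly 2x the RAM figure, per the rule of thumb above
```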
  23. They also have B-stock reference 1080ti models available, including ones without the dang DVI port. EK is also firesale-ing their 1080ti blocks and backplates.....it would be ~$450 per card for a new, with-warranty, waterblocked, single-slot 1080ti....times 3. Ugh. Not the best price-to-performance in light of the 3080, buuuuuut dang, that solves the PCIe issue on the BadDecisions build. Need some free dollars though.
  24. Dang, those blocks and that board look awesome. Yup, I'm building a multi-gamer 1-tower box using the SR2. It arguably doesn't have enough cores to do AAA gaming with 4 seats (realistically, 4x virtualized "4-thread" CPUs plus 2 threads/seat of hypervisor overhead)....but OC might help, and I might be able to reduce the threads needed for overhead. Lots of tinkering and tweaking to go. The SR2 motherboard is a shiiiiiit choice for this though, due to the PCIe multiplexer chips and other funkiness from being enthusiast hardware. It's been a royal pain to get going, but it's finally working at the proof-of-concept stage. Lotta work to go though. I have a backup 4x GPU Supermicro board if this falls through, but without OC it's going to be fairly potato. Would save a lot on GPU costs tho. The ultimate plan is WC CPUs and mobo chipset/VRM (have that block from @Zando Bob), and 4x GPU. 3x of the GPUs pretty much need to be FE 1080tis, since they can be made single slot (to fit USB, HBA, and NVMe/10G PCIe cards in adjacent slots) and don't have driver issues w/r/t VM shutdowns.....so yea, I'm also eagerly anticipating firesales once 3xxx reviews drop.
  25. So, I did a thing today. Someone on reddit was selling a pair of Noctua coolers that would likely have better thermal performance than the baby Noctuas I currently had on the SR2. Since watercooling is a ways off, I wanted to be able to tinker with OCing beforehand-- if OCing doesn't help each VM perform adequately, there's not much point in WC with multi-GPU. Anyway, I got a good deal on two NH-U12DX i4 coolers with secondary fans, so I tossed them in just now. Before, my U9DX i4 coolers were hitting 58-61C under stress testing, at stock settings, on the X5675s......which clearly meant there was next to no thermal headroom for overclocking or the X5690s I want to buy. However, with these bois, I'm running at a frosty 47-50C under the same conditions in OCCT stress testing. So, hooray! OC headroom achieved! Pics of lil' cutie and biggish boii: Also.....I found this SUPER COOL utility for the board. EVGA's E-LEET software lets you tune each CPU's voltages, QPI, and VTT independently, from Windows! It lets you tinker without messing about in the BIOS to test stuff. No clue how stable this is, but y0l0. This makes me really want to find one of the unobtanium EVBots they used to sell.