bimmerman

Member · Posts: 829

  1. I vaguely remember a setting for essentially this under the memory options in the BIOS; I had to change it to get my SR-2 to boot with 4 GPUs. Mine isn't attached to a monitor at the moment, so this is from memory, and I don't recall the exact setting name off the top of my head. If you search for my posts in the X58 HEDT thread, I was reporting on virtualization shenanigans, one required fix for which was limiting the PCI memory/DMA space to get around the godawful NF200 chip limitations. Also look for anything by gordan79 (?) on the EVGA forums... search Google for NF200 KVM and his posts will come up. A quick way to see where the BIOS actually mapped things is sketched below.
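A minimal sketch of that check, assuming a Linux host: it just walks /proc/iomem (a standard kernel interface; read it as root or the addresses come back zeroed) and reports whether each PCI window landed below or above the 4 GB boundary that the "memory mapped I/O above 4G"-style BIOS option moves BARs across. Generic inspection, not the SR-2 BIOS option itself:

```python
#!/usr/bin/env python3
# Sketch: show where PCI memory windows landed in the physical address map.
# /proc/iomem is standard Linux; run as root so real addresses are visible.
FOUR_GIB = 1 << 32  # the boundary the "above 4G decoding" BIOS option crosses

with open("/proc/iomem") as f:
    for line in f:
        entry = line.strip()
        if "PCI" not in entry and "pci" not in entry:
            continue
        span, _, name = entry.partition(" : ")
        start, _, _end = span.partition("-")
        try:
            lo = int(start, 16)
        except ValueError:
            continue
        where = "above 4G" if lo >= FOUR_GIB else "below 4G"
        print(f"{name:40s} {span:>25s}  ({where})")
```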
  2. I have an SR-2 and got 96 GB of ECC to work on mine, though I haven't had great success overclocking with all 96 GB installed. It also depends on your CPUs to some extent, especially the CPU stepping; my X5675s worked a treat. The key was to find 8 GB 2Rx4 DIMMs, i.e. dual-rank. I've seen anecdotes of people getting 16 GB modules to work, and I'm halfheartedly trying to source some, but 12x 8 GB 2Rx4 works great. It does take some time to POST, and it won't POST at all with the 1600 MHz strap. You also may need to install half the modules, POST, then set the command rate to 2T and hard-set every other memory setting (mine seemed ambivalent to this; others needed it). A quick way to check what rank your DIMMs actually are is sketched below.
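If you want to verify the rank without pulling stickers, a minimal sketch assuming Linux and root; dmidecode is a standard tool and "Rank" is a stock field in its Type 17 (Memory Device) output:

```python
#!/usr/bin/env python3
# Sketch: report size and rank for each populated DIMM slot by parsing
# `dmidecode -t memory` (needs root). Dual-rank modules report "Rank: 2".
import subprocess

out = subprocess.run(["dmidecode", "-t", "memory"],
                     capture_output=True, text=True, check=True).stdout

slot, size = None, None
for line in out.splitlines():
    line = line.strip()
    if line.startswith("Size:"):
        size = line.split(":", 1)[1].strip()
    elif line.startswith("Locator:"):
        slot = line.split(":", 1)[1].strip()
    elif line.startswith("Rank:"):  # appears after Size and Locator
        rank = line.split(":", 1)[1].strip()
        print(f"{slot}: {size}, rank {rank}")
```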
  3. Somewhere in this thread are my BIOS screenshots for my X5675 on the Gigabyte UD3 board at 4.5 GHz with ~1.3 V. My chip would do 4.7 at 1.4, but I never tried more than that; diminishing returns kicked in hard. Re: NVMe boot drive, yes, use the 950 Pro or the OEM variant and run it in AHCI mode. Don't try to run multiple NVMe drives in a PCIe add-in card unless the card has a PLX switch or a bifurcation setup, as the X58 chipset doesn't support 4x4 bifurcation; hence the pricey cards if you want guaranteed success. (A quick way to confirm what link a drive actually trained at is sketched below.) In other news, my SR-2 hasn't sold, so until it does, more tinkering is afoot. Also getting some freebie socket 2011 Xeons for the Supermicro board, and setting about making a case for its weirdo aspect ratio. In other other news... TR Pro has caught my eye hardcore, especially that Asus board. Mmmmm.
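For checking what link a drive actually negotiated, a minimal sketch assuming a Linux box; these sysfs attributes are standard for any PCIe device:

```python
#!/usr/bin/env python3
# Sketch: print the trained vs. maximum PCIe link for each NVMe controller,
# using standard sysfs attributes under /sys/class/nvme/nvmeN/device.
import glob

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "?"

for dev in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
    pci = dev + "/device"  # symlink to the underlying PCI device
    name = dev.rsplit("/", 1)[-1]
    print(f"{name}: speed {read(pci + '/current_link_speed')} "
          f"(max {read(pci + '/max_link_speed')}), "
          f"width x{read(pci + '/current_link_width')} "
          f"(max x{read(pci + '/max_link_width')})")
```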
  4. I only use wireless keyboards and mice (the ergo Logitech stuff, specifically)... but I'm also not trying to play esports games competitively. That said, even if I were, I'd probably use the same input devices. Not having wires, and reducing my RSI, is worth the imperceptible performance hit at my skill level anyway.
  5. That's... weird. Try plugging your drive into the black ports, letting it POST/boot, then powering off and trying the red ports? I think the board in general can be pretty darn wonky.
  6. Whelp. I've been tinkering further with the SR-2. My board and my 96 GB of RAM do not get along above 145 BCLK. It will POST with 48 GB just fine, but 96 isn't happening no matter what I've tried, which is a bummer but not the end of the world. However, this kinda is: I wanted to get a baseline of 2-core/4-thread performance with a GPU before going much further with this platform. The SR-2 has physical jumpers that let you enable/disable CPUs and memory banks, and in the BIOS you can set how many cores per socket to use (if you'd rather do the split host-side with CPU pinning, see the sketch after this post). I also set my display GPU to a PCIe slot knocked down to PCIe 2.0 x8; because of the USB 3.0 card and the LSI HBA for SSDs, at least two of the VMs will be running their 1080 Ti at PCIe 2.0 x8 (equivalent bandwidth to 3.0 x4). Combining the 2c/4t CPU, at +500 MHz over stock, with a 2.0 x8 GPU gives the best-case upper bound for the VM configuration. And... performance is not good. I'm getting 38 fps in Shadow of the Tomb Raider at 1080p High. On a 1080 Ti. With all 12 cores and 24 threads at STOCK speeds, with a full-bandwidth 1080 Ti, it was pulling down 90+. At 1080p Lowest it poops out 42 fps. It's crazily CPU-bottlenecked at 2c/4t; the benchmark tool reports 0% GPU-bound. With the same 2c/4t/x8 setup, Doom (2016) does ~55-70 fps consistently, which is a positive result. So... where does that leave the project? Realistically, it's game over. I can overclock a bit more and get another 500 MHz without blowing my total system power budget, but 500 MHz isn't going to magically double framerates. I also can't avoid running at least one GPU at x8 link speed, because there aren't enough USB expansion ports otherwise. Long way of saying: the SR-2 build is over. My use case of four virtualized simultaneous gamers just isn't feasible on this motherboard, and it's time to cut my losses and sell off the parts. With all 12c/24t it is a very entertaining platform, especially if you like overclocking and tweaking... I just can't justify keeping it when I have no real use for it. So: who wants an SR-2 + RAM + Noctua coolers + CPUs + chipset watercooling parts? And a case to fit it (900D)? Send PMs if interested. The board works, it overclocks, it's great... just not for the task I want to use it for. I'm likely going to sell off my other Supermicro 2P boards as well (X8DTG-QF bare; X9DRG-QF + 2x 8-core Xeons + 128 GB RAM + coolers) and just move to a Zen 3 Threadripper or X299 setup.
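For anyone replicating the 2c/4t split host-side instead of via BIOS, a minimal sketch (standard Linux sysfs, nothing hypervisor- or SR-2-specific) that maps which thread IDs share a physical core, so a VM gets pinned to two whole cores rather than four scattered threads:

```python
#!/usr/bin/env python3
# Sketch: list each physical core's hyperthread sibling pair per socket,
# read from standard Linux CPU topology sysfs.
import glob

pairs = set()
for cpu in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"):
    try:
        with open(cpu + "/topology/physical_package_id") as f:
            socket = int(f.read().strip())
        with open(cpu + "/topology/thread_siblings_list") as f:
            siblings = f.read().strip()  # e.g. "0,12" on a 2P Westmere box
    except OSError:
        continue  # offline CPUs may not expose topology
    pairs.add((socket, siblings))

for socket, siblings in sorted(pairs):
    print(f"socket {socket}: thread pair {siblings}")
```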
  7. I dunno, that's weird. I have had the SATA 3 controller derp out and then come back. Has it ever worked on your board? Is SATA set to AHCI mode?
  8. I also have an SR-2. What controller are you flashing? Have you overclocked or messed with any settings in the BIOS?
  9. So, it's been a while since Project BadDecisions was noodled on, but I've made some progress. Also bought some shiny bits (GPUs), some more shiny bits (EK and Heatkiller parts), and even more (socket 2011 2P Supermicro 4-GPU board, CPUs, RAM, cooler...). Anyway, on to the update! Since the SR-2 uses a server chipset with some fuckery, it DOES support ECC. However, getting more than 4 GB/slot to work has historically been an enormous pain in the ass, and most posts on the subject are from the 2010-2014 era; things have changed since. Why would I need more than 2 CPUs x 6 slots/CPU x 4 GB/slot = 48 GB total? Well, for those who don't know/remember, the point of this project is to run four gaming VMs simultaneously, and the present rule of thumb is 16 GB per player, plus some overhead for the hypervisor. That math... doesn't work with only 48 GB of total system memory. The best I had configured was 11 GB per player, with the remaining 1 GB per VM going to hypervisor overhead. It technically works, especially for games like Left 4 Dead (which, realistically, is what this'll primarily be used for in my friend group). So, because the other part of this project is to really push the SR-2 to the limit, I figured why not try for the elusive 96 GB. Going over 48 GB requires (according to the internet; I can't prove or disprove it) ECC memory, and supposedly specific kinds of it. I... yolo'd it and bought a Supermicro motherboard with CPUs and 128 GB of 1333 DDR3 ECC RAM for less than the price of just 96 GB of ECC on eBay. Upside: I have another test bed if the SR-2 fails and I need more cores, since that SM board will take up to 12 cores/socket and has 4x PCIe 3.0 x16 slots at double spacing... so it's technically the better way to go. If y'all remember the Supermicro board I bought before, this is the same thing for socket 2011 (X79-era stuff). Anyway, that's boring, it can't overclock, and it was really just a way to get the RAM. So, I popped out 12 sticks of the ECC goodness and got tinkering. I can't speak to how much of a pain this would be with the earlier-production chips (think X5650s, '70s, etc.); the internet says those, especially ES/QS versions, are much more challenging to get working. YMMV. For me, with X5675s, this was stupid easy. Step the first: put in 6 sticks, 3 per CPU, then manually set the timings, speed, and voltage in the BIOS. The key is to manually set the command rate to 2T. Then power off, install the remaining sticks, and BOOM: 96 GB. The only critical thing is that you get dual-rank 8 GB modules (2Rx4). Results are iffy with 16 GB modules or other rank combinations, despite the CPUs being compatible with 16 GB DIMMs and other ranks... in other motherboards. The SR-2 is a weirdo. It went shockingly smoothly once I typed the correct timings into the correct spots. Clearing CMOS on this thing is a pain in the ass, especially since not all settings get saved in profiles. But hey, here's the proof! 96 GB, running nice and happy at 1333 MHz, at the timings the memory manufacturer's data sheet specified (Samsung, in this case). Eagle-eyed readers will see the next part of the update. Previously I had been using rando GPUs I and my friends had lying around.
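Quick aside before the GPU part: a minimal sketch, assuming a Linux host, to confirm each socket's memory controller actually sees its half of the RAM (should be ~48 GiB per NUMA node with 96 GB split evenly):

```python
#!/usr/bin/env python3
# Sketch: print per-NUMA-node memory totals from standard Linux sysfs,
# to check the DIMMs are detected and balanced across both sockets.
import glob
import re

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(node + "/meminfo") as f:
        m = re.search(r"MemTotal:\s+(\d+) kB", f.read())
    if m:
        gib = int(m.group(1)) / 1024**2
        print(f"{node.rsplit('/', 1)[-1]}: {gib:.1f} GiB")
```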
The plan all along has been to run 4 GPUs, but here comes SR-2 pain in the ass, part the second: due to the janky way the chipset on this era of hardware is laid out, the maximum native SATA speed is 3 Gb/s, and there is almost no USB 3 on the board... and what there is shares bandwidth with the single SATA 3 (6 Gb/s) controller, all of it sitting behind a single PCIe 2.0 lane (5 GT/s, roughly 500 MB/s). In simple terms, intense IO routed through the chipset will bottleneck. "Intense IO" may be a stretch for four simultaneous gaming VMs, but maybe not: 4+ SSDs plus USB connectivity. Simple math says that'll suck if I can't figure out a solution... which I did. I need a PCIe USB card with four separate controllers, so I can pass each controller to a separate VM (enabling hot plug and downstream hubs and things, which is nice), plus an HBA PCIe card to bypass the chipset entirely for my SSDs. So that's pretty sweet! One problem... the SR-2, despite being a 10-slot form factor, only has 7 slots, and GPUs are dual-slot. Well, shit. Oh wait, a reference-design 1080 Ti can be water-blocked down to single slot. And I already have one... Hmmmmmmmmm. Yep. Right when the internet was losing its mind over the 3080's performance, prior to realizing you can't actually buy them, I scooped up three additional 1080 Tis that came with water blocks. Looking back, a 20-series card may have been smarter to buy, as those include a USB controller on the damn thing, but that wouldn't have solved the HBA issue (if you can use chipset storage, that's the way to go IMO). Anyway, while I'm waiting on radiators to arrive... and free time... I lent one of the 1080 Tis to a buddy, borrowed his 1080 in exchange, and threw this together (my main rig is keeping its 1080 Ti until it gets upgraded). Right now the layout is: 1080 Ti in slots 1-2, HBA card in slot 3, 1080 Ti in slots 4-5, USB 3 controller card in slot 6, 1080 non-Ti in slots 7-8. Since water cooling isn't fully done yet, this spacing lets the cards breathe without requiring me to tear down my other rig, which is nice. (Whether those USB controllers actually land in separate IOMMU groups is easy to check; see the sketch below.) Pr0nz: Lastly, before calling it for the evening, I ran a set of Cinebench R15 and R20 for @CommanderAlex's comparison's sake (spoiler... the 10980XE cleans house). There's no way to spin this: the performance of stock X58-era hardware is not great. I'll do more thorough benchmarking in games (and VMs) before I really start tweaking overclocking knobs, but... while my X5675 scaled Cinebench performance pretty much linearly with overclock percentage (~148 single-core in R15 at 4.5 GHz), that's still not great. The next update will likely be a while, but the next step is to play with the overclock as-is before ordering all the water-cooling parts, and do a dry (ha!) run with friends to see whether this era of hardware will really work for gaming VMs where each player gets 2c/4t. The next update may be some performance figures and/or a for-sale post; I don't think overclocking will make up for only having 2c/4t per player plus the latency from the PCIe multiplexer chips in the janky-ass PCIe layout (from a virtualization perspective; it's fine in normal use). If that's the case, I'm heavily leaning towards selling it all as a unit to someone who really wants to tinker with the SR-2 rather than trying some super-niche virtualization thing like I am... it's really just not the right setup for my use case. If interested, shoot me a PM haha.
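The IOMMU grouping check mentioned above, as a minimal sketch (standard Linux sysfs layout; run on the host with the IOMMU enabled, and an empty /sys/kernel/iommu_groups means it isn't):

```python
#!/usr/bin/env python3
# Sketch: list IOMMU groups so the USB controllers, HBA, and GPUs can be
# checked for isolation before passthrough. Devices sharing a group must
# generally be passed through together.
import glob

groups = {}
for path in glob.glob("/sys/kernel/iommu_groups/*/devices/*"):
    parts = path.split("/")
    group, addr = int(parts[4]), parts[6]
    groups.setdefault(group, []).append(addr)

for group in sorted(groups):
    print(f"group {group:3d}: {', '.join(sorted(groups[group]))}")
```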
I know what my old X5675 would do at its 4.5 GHz limit, which was honestly really good performance in modern-ish titles at higher resolutions, so there's a chance this works. It's really just a question of whether it works at 2c/4t (at... likely closer to 4.0-4.2 GHz), AND whether the Unraid hypervisor can run things with only 2 cores per VM allocated. More details on getting the SR-2 working with virtualization in a follow-up post, because it's super non-trivial due to the aforementioned NF200 PCIe multiplexer layout. Seriously, to the best of my internet sleuthing, only one guy has done this before and published how he did it, and it's not a small job to work around the multiplexers. But it CAN be done, and I've already gotten it working in testing, so more details to come. (The boring generic half of any passthrough setup, the vfio-pci rebind, is sketched below.) Finally, as I'm writing this and running FAH, the two stock X5675s, two 1080 Tis, and one 1080 apparently pull enough juice to make my UPS scream bloody murder and trip its overload protection. Fcking awesome.
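For completeness, here's that generic rebind step as a minimal sketch. This is standard kernel plumbing (the driver_override mechanism, kernel 3.16+), NOT the NF200-specific workaround from the follow-up post, and the PCI address below is a placeholder; run as root with the vfio-pci module loaded:

```python
#!/usr/bin/env python3
# Sketch: rebind a PCI device to vfio-pci via the standard sysfs
# driver_override mechanism, so a hypervisor can pass it to a VM.
import os

DEV = "0000:05:00.0"  # placeholder: PCI address of the device to pass through
base = f"/sys/bus/pci/devices/{DEV}"

# Tell the kernel which driver this device should get on the next probe.
with open(f"{base}/driver_override", "w") as f:
    f.write("vfio-pci")

# Unbind from whatever driver currently owns it, if any.
if os.path.exists(f"{base}/driver"):
    with open(f"{base}/driver/unbind", "w") as f:
        f.write(DEV)

# Ask the kernel to re-probe the device, which now picks up vfio-pci.
with open("/sys/bus/pci/drivers_probe", "w") as f:
    f.write(DEV)
```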
  10. Dang. How's it fare in R15? This weekend my hope is to spin up the SR-2 again and decide whether to sell it off in favor of a 10980XE solution. I found a great X299 motherboard that has seven full PCIe slots, supposedly with solid IOMMU groups, and without any of the SR-2's quirkiness.
  11. I have a 1080 Ti that I bought used two years ago. I don't really feel a need to upgrade, so whenever availability gets sorted out I'll figure out whether an upgrade is warranted. Ray tracing would be nice, but given how few games have RT support, and how even fewer of those are ones I'd want to play (woo, WW2 shooter 2020 edition, woo)... I'm not going to buy based on RT abilities. Now, if the Nvidia cards can drive an LG OLED at 120 Hz with variable refresh, that's a different story (and I don't know whether AMD can or can't).
  12. Yup, totally agreed on the torque. Super looking forward to replacing my (awesome, just aging) V8 car with an overly powerful EV when the time comes, for that reason; I really like being able to mash the pedal and have the car MOVE.
  13. Heated and ventilated seats. Sooo nice for road trips. Also, lots of torque. I want to merge NOW.
  14. Uhhhhhhhh, so that looks interesting to me... EK parts for the BadDecisions build have started arriving! Also... I may be getting the X79 variant of my quad-GPU dual-socket Supermicro board next week, with RAM and procs and coolers. Yassss.
  15. I mean... I did kinda buy 4x 1080 Ti and single-slot water blocks. It'd be a shame not to spin it up at least once! Anybody here messed with adding PCIe slots via riser cables plugged into M.2 slots? If so... TR3 may be back on the menu. Still debating dual 2011-v3 Xeon vs. X299 (the Asus Z10 and X299 Sage mobos each have seven x8-x16 PCIe slots) vs. TRX40. The latter has plenty of lanes for GPUs, but unlike the Xeon/i9 options there are only four PCIe slots, so adding USB or 10G or other cards isn't possible without janky adapters (see: M.2-to-PCIe riser and slot). X299 and the 10980XE may end up winning: fast cores, 18 of them (enough for four VMs plus overhead), and plenty of PCIe. I badly wish there were a TRX40 equivalent of the X299 WS Sage mobo.