bimmerman

Member · 829 posts

Reputation Activity

  1. Like
    bimmerman got a reaction from SouleaterP in General Intel HEDT Xeon/i7 Discussion   
    Ok....here we go.
     
    Gigabyte UD3R, X5675 running on a 240mm AIO, 24GB of 1600-rated memory (G.Skill, maybe?). Not that it matters, but I'm running an NVMe boot drive and an R9 290 right now.
     
    Summary:

     
     
    Frequencies and Multipliers:

     
    Voltages:

     
     
    and likely who-cares, but Memory Mult:
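 
    For context on what those screenshots encode: on X58 every headline clock hangs off the base clock (BCLK), so the multipliers in those BIOS pages are just the N values below. This is general platform behavior, not my exact settings:

    \[
      f_{\text{core}} = f_{\text{BCLK}} \times N_{\text{CPU}},\qquad
      f_{\text{mem}} = f_{\text{BCLK}} \times N_{\text{mem}},\qquad
      f_{\text{uncore}} = f_{\text{BCLK}} \times N_{\text{uncore}},
      \qquad f_{\text{BCLK,stock}} = 133\ \text{MHz}
    \]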

  2. Like
    bimmerman got a reaction from Slayer3032, Pasi123, drevmcast and CommanderAlex in General Intel HEDT Xeon/i7 Discussion   
    Somewhere in this thread are my BIOS screenshots for my X5675 on the Gigabyte UD3 board, at 4.5 GHz with 1.3-ish V. My chip would do 4.7 at 1.4, but I never tried more than that. Diminishing returns kicked in hard.
     
    Re: NVMe boot drive, yes, use the 950 Pro or the OEM variant and run it in AHCI mode. Don't try to run multiple NVMe drives in a PCIe add-in card unless the card has a PLX switch or bifurcation setup, as the X58 chipset doesn't support 4x4 bifurcation. Hence the need for pricey cards to guarantee success.
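 
    To put rough numbers on the bifurcation point: here's a quick sketch of why a cheap passive quad-M.2 carrier doesn't cut it on this platform. The lane widths are the usual ones, and the "only the first drive shows up" behavior is the passive-carrier case as I understand it, so treat this as illustrative rather than gospel.

```python
# Why a passive quad-M.2 carrier needs bifurcation (or a PLX switch) on X58.
slot_lanes = 16        # one physical x16 slot
lanes_per_nvme = 4     # each NVMe drive wants its own x4 link

drives_that_fit = slot_lanes // lanes_per_nvme   # 4 drives fit electrically
x58_bifurcation = False                          # the chipset can't split a slot 4x4

if not x58_bifurcation:
    # The slot trains as a single link, so a passive carrier only exposes drive #1.
    # A PLX-switch card does the x16 -> 4x x4 fan-out on the card itself, which is
    # why the pricey cards are the ones that work here.
    usable_drives = 1
else:
    usable_drives = drives_that_fit

print(f"{usable_drives} of {drives_that_fit} drives usable without a switch card")
```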
     
    In other news, my SR2 hasn't sold, so until it does, more tinkering is afoot. Also getting some freebie socket 2011 Xeons for the Supermicro board, and setting about making a case for its weirdo aspect ratio.
 
    In other other news... TR Pro has caught my eye hardcore, especially that Asus board. Mmmmm.
  3. Like
    bimmerman got a reaction from the pudding, Slayer3032, Zando_, Pasi123 and CommanderAlex in General Intel HEDT Xeon/i7 Discussion   
    So, it's been a while since Project BadDecisions was noodled on, but I've made some progress. Also bought some shiny bits (GPUs), some more shiny bits (EK and Heatkiller parts), and even more (a socket 2011 2P Supermicro 4-GPU board, CPUs, RAM, cooler...).
     
    Anyway. On to the update!
     
    Since the SR2 uses a server chipset with some fuckery, it DOES support ECC. However, getting more than 4GB/slot to work has historically been an enormous pain in the ass. That said, most posts on the subject are from the 2010-2014 era, and things have changed. Why would I need more than 2 CPUs x 6 slots/CPU x 4GB/slot = 48 GB total? Well, for those who don't know/remember, the point of this project is to run 4 gaming VMs simultaneously, and the rule of thumb at present is 16GB per player, plus some overhead for the hypervisor. That math....doesn't work with only 48GB of total system memory. The best I had configured was 11 GB per player with 1 GB per VM allocated to the hypervisor. It technically works, especially for games like Left 4 Dead (which, realistically, is what this'll be used for primarily in my friend group).
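 
    Putting that memory math in one place (the 16 GB/player target and the 1 GB/VM hypervisor reserve are the numbers above, nothing new):

```python
# Memory budget for 4 gaming VMs on the SR-2, using the figures from this post.
cpus, slots_per_cpu = 2, 6

total_4gb = cpus * slots_per_cpu * 4    # 48 GB total with 4 GB DIMMs
total_8gb = cpus * slots_per_cpu * 8    # 96 GB total with 8 GB DIMMs

players = 4
target_per_vm = 16                      # rule-of-thumb RAM per gaming VM
hypervisor_per_vm = 1                   # what I reserved per VM in the 48 GB config

best_with_48 = total_4gb // players - hypervisor_per_vm            # 11 GB per player
needed_for_target = players * (target_per_vm + hypervisor_per_vm)  # 68 GB

print(best_with_48, needed_for_target, total_8gb)  # 11, 68, 96 -> 96 GB clears the bar
```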
     
    So, because the other part of this project is to really push the SR-2 to the limit, I figured why not try for the elusive 96GB. Going over 48GB requires (according to the internet; I can't prove or disprove it) ECC memory, and supposedly specific kinds of it.
     
    I....yolo'd it and bought a Supermicro motherboard with CPUs and 128GB of 1333 DDR3 ECC RAM for less than the price of just 96 GB of ECC on eBay. Upside: I have another test bed if the SR2 just fails and I need more cores, since that SM board will take up to 12 cores/socket and has 4x x16 PCIe 3.0 slots at double spacing....so it's technically the better way to go. If y'all remember the Supermicro board I had bought before, this is the same thing just for socket 2011 (X79-era stuff). Anyway, that's boring, and can't overclock, and was intended to be a way to get RAM.
     
    So, I popped out 12 sticks of the ECC goodness and got tinkering. I can't speak to how much of a pain this would be with the earlier production stuff (think X5650s, X5670s, etc.), but the internet says those are much more challenging, especially ES/QS versions, to get this working. YMMV. For me, with X5675s, this was stupid easy. Step the first: put in 6 sticks, 3 per CPU, and then manually set the timings, speed, and voltage in BIOS. The key here is to manually set the Command Rate to 2. Then power off, install the remaining sticks, and BOOM, 96 GB. The only critical thing is that you get dual-rank 8GB modules (2Rx4). Results are iffy with 16GB or other rank combinations, despite the CPUs' compatibility with 16GB DIMMs and other ranks...in other motherboards. The SR-2 is a weirdo.
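 
    For reference, here's the population that worked for me in sketch form. The Command Rate 2 rationale in the comments is the usual rule-of-thumb explanation I've seen, not something I've verified on a scope:

```python
# 96 GB config: dual-rank (2Rx4) 8 GB ECC sticks, fully populated.
dimm_gb, dimms_per_cpu, cpus = 8, 6, 2
channels_per_cpu = 3                                    # Westmere-EP is triple channel

total_gb = dimm_gb * dimms_per_cpu * cpus               # 96 GB
dimms_per_channel = dimms_per_cpu // channels_per_cpu   # 2 DIMMs per channel when full

# Two dual-rank DIMMs per channel loads the memory controller harder, which (as far
# as I can tell) is why forcing Command Rate 2T before adding the last six sticks is
# the step that makes this actually boot.
print(total_gb, "GB across", dimms_per_channel, "DIMMs per channel")
```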
     

     
    It went shockingly smoothly, once I typed in the correct timing in the correct spot. Clearing CMOS on this thing is a pain in the ass, especially since not all settings get saved in profiles. But hey, here's the proof! 96 GB, running nice and happy at 1333 MHz, at the timings the memory manufacturer's data sheet specified (Samsung, in this case).
     

     
    Eagle-eyed readers will spot the next part of the update.
     
    Previously I had been using random GPUs that friends and I had lying around. The plan all along has been to run 4 GPUs, but here comes SR-2 pain-in-the-ass part the second: due to the janky way the chipset on this era of stuff is laid out, the maximum SATA speed is 3 Gb/s, and there is almost no USB3 on the board....and what there is shares bandwidth with the single SATA III (6 Gb/s) controller, all of which sits behind a single PCIe 2.0 lane (5 GT/s, roughly 500 MB/s usable). In simple terms, for intense IO, routing through the chipset will bottleneck. Intense IO may be a stretch for four simultaneous gaming VMs, but maybe not-- 4+ SSDs plus USB connectivity. Simple math says that'll suck if I can't figure out a solution....which I did: I need to run a PCIe USB controller card with 4x controllers so that I can pass each controller to a separate VM (enabling hot plug and downstream hubs and things, which is nice), plus an HBA PCIe card to bypass the chipset entirely for my SSDs.
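 
    Rough numbers behind the "it will bottleneck" claim. The SSD and USB throughput figures are ballpark assumptions, not measurements:

```python
# Everything hung off the chipset shares one PCIe 2.0 x1 uplink.
uplink_mb_s = 500                  # ~500 MB/s usable on a PCIe 2.0 x1 link
sata_ssd_mb_s = 550                # a typical SATA SSD at full tilt (assumed)
usb3_mb_s = 300                    # a fast USB 3.0 device or hub (assumed)

demand = 4 * sata_ssd_mb_s + usb3_mb_s   # 4 VMs hitting their disks + USB traffic
print(f"{demand} MB/s of demand vs {uplink_mb_s} MB/s of uplink "
      f"({demand / uplink_mb_s:.1f}x oversubscribed)")
# Hence the plan: HBA + quad-controller USB card in CPU-connected PCIe slots,
# skipping the chipset path entirely.
```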
     
    So, that's pretty sweet! One problem...the SR2, despite being a 10-slot form factor, only has 7 slots, and GPUs are dual-slot. Well, shit.
 
    Oh, wait, a reference-design 1080 Ti can be water-blocked down to single-slot. And I already have one....
     
    Hmmmmmmmmm
     
    Yep. Right when the internet was losing its mind over the 3080's performance, prior to realizing you can't actually buy them, I scooped up 3 additional 1080 Tis that also came with water blocks. Looking back, the 20-series may have been smarter to buy, as those include a USB controller on the damn thing, but that wouldn't have solved the HBA issue (if you can use chipset storage, that's the way to go IMO). Anyway, while I'm waiting on radiators to arrive....and free time... I lent one of the 1080 Tis to a buddy, borrowed his 1080 in exchange, and threw this together (my main rig is keeping its 1080 Ti until it gets upgraded). Right now I have the following setup: 1080 Ti in slots 1 and 2, HBA card in slot 3, 1080 Ti in slots 4 and 5, USB3 controller card in slot 6, 1080 non-Ti in slots 7/8. Since I don't have water cooling fully done yet, this spacing lets the cards breathe without requiring me to tear down my other rig, which is nice.
     
    Pr0nz:

     

     
    Lastly, before calling it for the evening, I ran a set of Cinebench R15 and R20 for @CommanderAlex's comparison's sake (spoiler....the 10980XE cleans house).
     

     

     
    There's no way to spin this. The performance of stock X58-era stuff....is not great. I'll do some more thorough benchmarking in games (and VMs) before I really start tweaking knobs with overclocking, but....while my X5675 scaled Cinebench perf pretty much linearly with OC percentage (~148 single-core @ 4.5 GHz in R15), that's still not great.
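 
    For a sense of what "scaled linearly" means here, back-calculated against the X5675's 3.06 GHz base clock (rough, since a stock chip actually runs single-threaded work closer to its 3.46 GHz turbo):

    \[
      \frac{4.5\ \text{GHz}}{3.06\ \text{GHz}} \approx 1.47,
      \qquad \frac{148\ \text{pts}}{1.47} \approx 100\ \text{pts (implied stock R15 single-core)}
    \]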
     
    The next update will likely be a while out, but the next step is to play with the OC as-is before ordering all the water cooling parts, and do a dry (ha!) run with friends to see whether the performance of this era of stuff is really going to work for gaming VMs where each player gets 2c/4t. The next update may be some performance figures and/or a for-sale post-- I don't think overclocking the system is going to make up for only having 2c/4t per player and all the latency from the PCIe multiplexer chips due to the janky-ass PCIe layout (from a virtualization perspective; it's fine in normal use). If that's the case, I'm heavily leaning towards selling it all as a unit to someone who really wants to tinker with the SR2 rather than trying to do some super-niche virtualization thing like I am....it's really just not the right setup for my use case. If interested, shoot me a PM haha.
     
    I know what my old X5675 would do at its 4.5 GHz limit, which was honestly really good performance in modern-ish titles at higher resolutions, so there's a chance this works. It's really just a question of whether it will work at 2c/4t (at...likely closer to 4.0/4.2 GHz), AND whether the Unraid hypervisor will be able to run things with 2c/CPU allocated. More details on getting the SR2 working with virtualization in a follow-up post, because it's super non-trivial due to the aforementioned NF200 PCIe multiplexer layout. Seriously, to the best of my internet sleuthing, only one guy has done this before and published how he did it, and it's not a small job to work around the multiplexers. But it CAN be done, and I've gotten it to work already in testing, so more details to come.
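 
    As a teaser for that follow-up, here's roughly how the 2c/4t-per-player split pencils out on 2x X5675. This is a hypothetical layout sketch (the vm_pin_plan helper and the host thread numbering are mine, not Unraid's); the real pinning gets done in the Unraid VM settings against the actual CPU topology.

```python
# Hypothetical CPU-pinning layout: 2 sockets x 6 cores x 2 threads = 24 host threads.
THREADS_PER_CORE, CORES_PER_CPU, CPUS = 2, 6, 2

def vm_pin_plan(players=4, cores_per_vm=2, reserved_cores_per_cpu=2):
    """Give each VM whole cores from one socket; keep 2 cores/socket for the hypervisor."""
    plan = {}
    for vm in range(players):
        socket = vm % CPUS                                  # alternate VMs across sockets
        first = reserved_cores_per_cpu + (vm // CPUS) * cores_per_vm
        cores = [socket * CORES_PER_CPU + first + i for i in range(cores_per_vm)]
        # Each physical core contributes two host threads. This assumes sibling threads
        # are numbered consecutively -- check the real topology before pinning anything.
        plan[f"vm{vm}"] = [c * THREADS_PER_CORE + t for c in cores
                           for t in range(THREADS_PER_CORE)]
    return plan

print(vm_pin_plan())   # e.g. vm0 -> threads [4, 5, 6, 7] on socket 0, and so on
```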
     
    Finally, as I'm writing this and running FAH, the 2x stock X5675s, 2x 1080 Ti, and 1x 1080 apparently pull enough juice to make my UPS scream bloody murder and trip the overload protection. Fcking awesome.
  4. Like
    bimmerman got a reaction from Pasi123 and GrockleTD in General Intel HEDT Xeon/i7 Discussion   
    uhhhhhhhh so that looks interesting to me....
     
    EK parts for the BadDecisions build have started arriving! Also...may be getting the X79 variant of my quad-GPU dual-socket Supermicro board here next week, with RAM and procs and coolers. Yassss
  5. Like
    bimmerman got a reaction from PlayStation 2 in What feature in your car could you not live without?   
    Yup, totally agreed on the torque. Super looking forward to replacing my (awesome, just aging) v8 car with an overly powered EV when the time comes for that reason; I really like being able to mash the pedal and have the car MOVE.
  6. Agree
    bimmerman reacted to Velcade in What feature in your car could you not live without?   
    [Heated and ventilated seats] are amazing! I've got these in the truck and they're amazing. Use them all the time.
  7. Like
    bimmerman got a reaction from PlayStation 2 in What feature in your car could you not live without?   
    Heated and ventilated seats. Sooo nice for road trips.
     
    also, lots of torque. I want to merge NOW
  8. Like
    bimmerman got a reaction from drevmcast in General Intel HEDT Xeon/i7 Discussion   
    I mean.....I did kinda buy 4x 1080 Ti and single-slot water blocks. It'd be a shame to not spin it up at least once!
     
    Anybody here messed with adding PCIe slots via riser cables plugged into M.2 slots? If so....TR3 may be back on the menu.
     
    Still debating dual 2011-v3 Xeon vs X299 (the Asus Z10/X299 Sage mobos each have 7 x8-x16 PCIe slots) vs TRX40. The latter has plenty of lanes for GPUs, but unlike the Xeon/i9 options, there are only 4 PCIe slots, so adding USB or 10G or other cards isn't possible without janky adapters (see: M.2 -> PCIe riser and slot). X299 and the 10980XE may end up winning-- fast cores, 18 of them so enough for 4 VMs plus overhead, and plenty of PCIe. Badly wish there was a TRX40 equivalent of the X299 WS Sage mobo.
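 
    A little scorecard of the options as I'm weighing them. The slot counts are the ones quoted above (not a spec-sheet audit), and the "+2" is my guess at how many non-GPU cards I'd actually want:

```python
# Candidate platforms vs the slots this build actually needs.
candidates = {
    "dual 2011-v3 Xeon (Z10 Sage-class)": {"big_slots": 7, "can_oc": False},
    "X299 + 10980XE (X299 Sage)":         {"big_slots": 7, "can_oc": True},
    "TRX40":                              {"big_slots": 4, "can_oc": True},
}
needed = 4 + 2          # 4 GPUs plus USB controller / HBA / 10G-type cards
for name, c in candidates.items():
    verdict = "fits" if c["big_slots"] >= needed else "needs M.2 riser jank"
    print(f"{name}: {c['big_slots']} slots, OC={c['can_oc']} -> {verdict}")
```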
  9. Agree
    bimmerman reacted to drevmcast in General Intel HEDT Xeon/i7 Discussion   
    This is such a bummer for me, I really liked this idea. Hopefully you get it all sorted out. Let me know if you want to get rid of the SR2.
  10. Like
    bimmerman got a reaction from drevmcast and Zando_ in General Intel HEDT Xeon/i7 Discussion   
    Yup, saw that. It's not surprising!
     
    So, SR2 build update. The goal has been a 4-gamers-1-tower kind of build for local LAN parties once COVID is over (so, never?). I've gotten virtualization and GPU passthrough working, but performance....is kinda dogshit due to the lack of cores and the stock-ish speeds of X58 Xeons. Each VM is being allocated 4-6 threads total, with the best performance coming from pinning 2-4 threads per CPU for the hypervisor. Essentially, each VM is being given a hyperthreaded dual-core virtual CPU, which isn't optimal, and really shows the age of the platform.
     
    So, rethinking my platform. Tempted to go dual E5 v3 Xeon (2011-3 / X99 era), but there's no overclocking and the high-core-count CPUs are slooooow. Ideally I'd be able to allocate somewhere between 12-16 threads per VM, and with Star Wars Squadrons supporting 5-player teams....that means I need a truly ludicrous number of threads (~32-36 total cores or so) for VR, the hypervisor, and the VMs to all be happy and have good performance. Threadripper would make sense as well, but there are exactly zero TRX40 boards with 5+ CPU-connected PCIe slots for GPUs. 'Only' doing 4 players is definitely reasonable, but kind of a bummer to not be able to do 5.
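 
    The thread math behind that, in sketch form. The per-VM targets are the ones above; the few threads held back for the host are an assumption:

```python
# Thread budget for a 5-player Squadrons box.
players = 5
per_vm_threads = (12, 16)        # desired allocation per VM
hypervisor_threads = 4           # held back for the host (assumed)

for want in per_vm_threads:
    total_threads = players * want + hypervisor_threads
    print(f"{want} threads/VM -> {total_threads} threads ~ {total_threads // 2} cores with HT")
# -> roughly 32-42 cores, which is why nothing short of big Xeon/TR core counts works.
```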

    Le. Sigh. Gaming laptops are so much easier to deal with for group parties.
  11. Like
    bimmerman got a reaction from Zando_ in General Intel HEDT Xeon/i7 Discussion   
    Right??? It baffles me why there's so much PCIe bandwidth but they focused on M.2 sockets instead of slots. Oh well.
     
    I did find a 10980XE in stock at B&H. Sooooo yea, kinda tempting. Would be super fun to mess around with!
     
    Good tip on the ESs. I'll look into that more. The micro SD card slot made me giggle a bit.
  12. Like
    bimmerman reacted to Zando_ in General Intel HEDT Xeon/i7 Discussion   
    One of these absolute motherfuckers maybe? 

    https://www.asus.com/Commercial-Servers-Workstations/WS-C621E-SAGE/overview/
     
    https://www.anandtech.com/show/11960/asus-announces-ws-c621e-sage-workstation-motherboard-dual-xeon-overclocking

    Although a few caveats: I have no idea what they cost or what the CPUs compatible with them cost (they claim OC control as well, but can any Xeons other than the single-socket-only W-3175X actually OC these days?).

    But hey: dual socket, high-core-count options, and then 3x PCIe 3.0 x16, 2x PCIe 3.0 x16/x8, and 2x PCIe 2.0 x16/x8.
     
    Ah, Office Depot has them for $589, other places are closer to $700: https://www.officedepot.com/a/products/6753785/Asus-WS-C621E-SAGE-Workstation-Motherboard/?cm_mmc=PLA-_-Google-_-Computer_Cards_Components-_-6753785&utm_source=google&utm_medium=cpc&gclid=CjwKCAjwq_D7BRADEiwAVMDdHssx9UXaYypRnrSSPj-znh3iLI3u7I-4lJreO2KivTFA23NxX9d5HRoC5U0QAvD_BwE&gclsrc=aw.ds. But again, the CPUs are likely bonkers expensive as well so prooobably less practical than settling for 4 users and getting a TR setup. 
  13. Informative
    bimmerman got a reaction from m4shroom in General Intel HEDT Xeon/i7 Discussion   
    That.....is quite weird!
     
    Maybe try upping the PCIe voltage a tick, or IOH / IOH PLL. Honestly, that's bizarre. I wonder if your GPU itself is unstable and that (or power delivery to it) is the issue rather than the mobo overclock.
  14. Like
    bimmerman got a reaction from the pudding, drevmcast and YT_DomDaBomb20 in General Intel HEDT Xeon/i7 Discussion   
    Whelp, end of an era. Giving away my X5675/mobo/RAM/AIO/R9 290 to a friend who wants to build a PC to play Overwatch.
     
    Downclocked it to 21x200 with 1600 RAM, leaving uncore, QPI, and voltages at their 4.52 GHz settings for stability. Here's hoping they like it!
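 
    In clock terms that works out to the following (standard X58 math; the x8 memory multiplier is inferred from 1600 at a 200 MHz BCLK):

    \[
      f_{\text{core}} = 21 \times 200\ \text{MHz} = 4.2\ \text{GHz},
      \qquad f_{\text{mem}} = 200\ \text{MHz} \times 8 = 1600\ \text{MT/s}
    \]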
     
    Keeping the case (HAF 932) for the Supermicro beast board. Updates forthcoming on the SR2 front as well.
