Gorgon

Bifrost a 6 GPU Folding Machine


Posted · Original PosterOP

The Bifrost Folder - Maximizing Folding at Home Performance while Minimizing Cost

 

Conventional dedicated builds for Folding@Home (F@H) tend to fall into three categories: single GPU, dual GPU, and three or more GPUs. Economies of scale dictate that driving multiple GPUs from a single system will maximize F@H performance per dollar (PPD/$).

 

Background:

Spoiler

There are some constraints that need to be considered.

  1. NVidia GPUs require a dedicated CPU core or thread per GPU.
  2. Intel desktop CPUs provide only 16 PCIe Gen 3 lanes, and AMD desktop CPUs only 20.
  3. Experience has shown that higher-performing NVidia GPUs require at least 8 PCIe lanes under Windows and 4 PCIe lanes under Linux to prevent the PCIe link bandwidth from reducing or "bottlenecking" GPU performance.
  4. Some desktop motherboards are "SLI enabled", using PCIe switches to split the 16 PCIe Gen 3 lanes from the CPU across two PCIe 16-lane slots, each operating at x8 (x8/x8), allowing 2 GPUs to be fully utilized from a single CPU.
  5. Historically, running more than two GPUs off a single desktop motherboard required either using slots connected to the chipset, with a reduction in performance for those slots, or moving to more expensive High-End Desktop (HEDT) or Workstation-class CPUs (Threadripper/Xeon) and motherboards (x299/x399).

In HEDT systems the enclosure typically limits the number of GPUs that can be used due to space (number of PCIe expansion slots) and/or an inability to provide sufficient airflow to keep the GPUs operating efficiently. Closed-loop watercooling can be used to good effect in HEDT builds to provide superior cooling, but the additional cost of the closed-loop components makes these builds more expensive, amortized per GPU, than dual-GPU systems using Common Off-the-Shelf (COTS) components.

 

Two products have recently emerged that offer potential for lowering the per-GPU system cost: the M.2 PCIe interface and PCIe slot bifurcation.

 

M.2 interfaces came into use as a solution for allowing Solid State Drives (SSDs) to operate faster than the 6 Gbps SATA3 transfer rate. M.2 leverages the existing PCIe standards and provides PCIe lanes to devices in 2- or 4-lane configurations.

 

PCIe slot bifurcation is a feature of recent desktop motherboards that splits the 16 CPU PCIe lanes in a single x16 PCIe slot into x8/x8 or x4/x4/x4/x4 configurations within the same slot, allowing an expansion card to feed several devices that each require less than the slot's full bandwidth. This feature is typically used in storage servers to allow 4 M.2 devices to be installed on a single expansion card, each with access to 4 PCIe lanes.

 

These M.2 adapters are typically very inexpensive since, thanks to the bifurcation, minimal electronics are required to drive the M.2 slots.

 

The final piece of the puzzle is M.2 to PCIe slot adapters. These adapters extend the 4 PCIe lanes from the M.2 slot to the first 4 lanes of a PCIe x16 connector using low-noise cabling like that found in traditional PCIe x16 slot extenders.

 

To put all these pieces together it is proposed to use an open-air mining chassis, as these greatly simplify running the GPUs at lower temperatures with higher performance. An additional benefit of a mining chassis is that they are designed to use two or more power supplies, and the mining craze has made dual power-supply adapters commonplace. While many GPUs can be run from one large-capacity power supply, two medium-capacity power supplies of the same aggregate capacity are typically less expensive.

This build was inspired by the 7 GPU Octane Render build, which used HEDT components, namely an Asus workstation motherboard, a Xeon-W processor and PCIe x16 extenders. The proposed build will use desktop rather than HEDT parts, as the x4 PCIe bandwidth required for F@H under Linux is less than the x8/x16 bandwidth required for Octane Render under Windows. Replacing the PCIe x16 extenders with M.2 to PCIe x16 adapters will also allow the use of the AMD cooling fan and heatsink, as the smaller cables will not obstruct airflow over the motherboard as much.

 

An existing rig will be cannibalized for parts to be installed in the mining chassis so optimal cable lengths for the m.2 extenders can be measured.

Once the m.2 extenders are acquired, the rig will have a second power supply added and will be populated with a full complement of GPUs; testing will then be performed to verify that the folding slots all work as expected with no degradation in performance moving from x8 to x4 PCIe links.

 

Finally, the rig will be populated with Hybrid GPUs to assess differences in noise and heat levels.

 

This design can be extended to use HEDT components. Utilizing an x399 motherboard, a Threadripper CPU, multiple m.2 adapters and stacked mining rigs, a 16 GPU rig could be built. It would still have a higher cost per slot ($915 Cdn) than a rig built with desktop components ($885 Cdn), but might be an option if higher-end Threadripper processors are desired for BOINC CPU tasks.

 

Parts List:

Posted · Original PosterOP

The Frame, Power Supply adapter and Hyper M.2 card were ordered.

 

Once the mining frame was received and assembled, an old IDE ribbon cable was stripped, marked with 5cm lines, and used to measure the lengths so the m.2 to PCIe3 x16 adapters could be ordered.

MiningFrame.jpg.ada5cd9e1616fd02dc71df6ef01d7358.jpg

The stand-offs and images of the Gigabyte x570 Aorus Pro WiFi were used along with an old Radeon HD card to acquire the cable lengths for the m.2 Adapters.

 

As the x570 Pro WiFi is being used for the proof of concept, the cable lengths will differ for the Gaming X due to variations in the layout of the PCIe x16 and m.2 slots.

 

The following adapters were ordered from the manufacturer's website as their prices were lower and selection better than on AliExpress or Amazon:

  • R43MR 25cm
  • R43UR 10cm
  • R43UL 15cm
  • R43UL 25cm
  • R43UL 35cm
  • R43ML 50cm

Though the Asus Hyper M.2 and Thermaltake power supply adapter were ordered at the same time as the mining frame, and Amazon showed them in stock, for some reason they still have not shipped. Go figure ...

 

With the Folding Month done, time to tear apart the rigs and populate the chassis for a test fit:

ChassisPopulated.thumb.jpg.0a6c672d2bfc2437cc950658499a49fe.jpg

Notice how the USB2 ports on the I/O shield are blocked by the chassis. The purple & white cable is the included power switch. Nice touch.

 

Nov. 5th

The Amazon order arrived today with the Asus Hyper M.2 v2 card and the Thermaltake dual power supply adapter. I still have to wait for the m.2 to PCIe3 x16 adapters, but I got a notification that they have shipped from ADT-Link in Shanghai via DHL Express, which is great news: I was hoping the $35US shipping and handling fee did not mean China Post, which can take forever.

 

ADT-Link products are also available on eBay, Amazon and AliExpress (in that order from most to least expensive), but the manufacturer has the best selection and prices on their web store. The store uses PayPal, which gave me confidence to order directly from them, and so far I'm glad I did.

 

Knowing I would need some additional PCIe power cables for this project, I started looking around and found, buried on Corsair's Canadian web site, Type 4 sleeved dual PCIe cables for the AX and RM-series for $3.99 Cdn, so I ordered 14 to replace the ribbon-style cables on my other rigs with their variety of TX and CX power supplies. The description notes that these cables include capacitors, presumably to help with ripple suppression, and hopefully they're 750mm cables like the ones that ship with the AX and RM-series. One gripe: Corsair ships the cables from the US (Ontario, CA) and charges $30 S&H, which may be a PITA as no Canadian sales tax was collected at their storefront, so I'll likely be on the hook for Duty and HST.

 

Picked up the second Corsair RM750x from a local bricks-and-mortar store that had them on sale this week for $20 off MSRP and replaced the TX650 that was used for the test fit.

 

I’m using black Velcro ties from Home Depot to cable manage along the square aluminum bars.

 

I managed to plug all 5 of the Fractal 120mm fans into individual motherboard fan headers. They're all set to follow the CPU temperature for now, and the rig, with the 2080 Super and 2070 Super folding at 180 and 170W power limits, runs almost silently. Once I get all the cards running I'll likely connect a 10kOhm NTC thermistor to the motherboard, dangle it strategically, and use it to control the fans from the Easy Fan5 BIOS setup. Having external thermistor inputs and full fan control in the BIOS is one of the things that helped me decide on Gigabyte motherboards: as I'm running Linux, fan control is a lot more convoluted than in Windows.

On 11/2/2019 at 4:52 PM, Gorgon said:

Bifrost a 6 GPU Folding Machine

 

Image result for counting fingers gif



20 minutes ago, TVwazhere said:

-snip-

You gotta use imagination. It is an invisible quantum GPU!



Posted · Original PosterOP
2 hours ago, TVwazhere said:

Image result for counting fingers gif

Jayz2cents taught me allz the maths I need

 

I only have 4 2070 Super Hybrids, so I might pick up another to fill the system out; though there are mounting holes for 6 GPUs, there are only 5 120mm fan mounts that can be used for the radiators.

 

Initially I will be using:

  • EVGA RTX 2080 Super Black Gaming
  • EVGA RTX 2070 Super XC Gaming
  • EVGA RTX 2070 XC Gaming
  • EVGA RTX 2060 Super XC Gaming
  • EVGA RTX 2060 XC Ultra Gaming
  • EVGA GTX 1070 Ti SC Gaming
Posted · Original PosterOP

The Corsair PCIe power cables arrived yesterday. Like the ones that ship with HX and RM-series supplies, they are 600mm from the power supply to the 1st 6/8-pin PCIe power connector, with 150mm between the 1st and 2nd 6/8-pin.

 

They cleared customs OK so I'm a little happier about the $30 S&H fee as they arrived fairly quickly.

 

Notification was received that the m.2 to PCIe3 x16 adapters have shipped via DHL; they have reached Dubai en route and are expected here Tuesday, so the $35US for shipping and handling seems well worth the cost compared to the speed at which China Post operates.

 

I'm running the 2080 Super and 2070 Super in the system currently with default power limits and no overclocks to establish a performance baseline. The only non-standard settings are the 2080 Super's fans at 75% as it is the outside (lower) card and the 2070 Super's fans at 85% as it is the inside (upper) card; even in the open chassis with all 5 of the Fractal 120mm fans at 100%, it was hitting 82C with the stock fan curves.

 

I'm still running the system off the primary CPU m.2 NVMe disk, so I'll clone that this weekend to the 128GB SATA SSD and move over to it so I have both m.2 slots available for when the adapters arrive.

 

I may install Windows 10 in dual-boot on the SSD to see just how much performance loss is seen under Windows compared to Linux.

 

I still have to determine the final placement of the GPUs and balance the power across the two power supplies. The primary supply will have the CPU on it and the three lower power cards (1070Ti, 2060 & 2060 Super) and the secondary power supply will have the three higher power cards (2080 Super, 2070 Super & 2070).

 

The m.2 to PCIe3 x16 adapters are powered and come with SATA to 4-pin "Mini Molex" (Berg LP4 "Floppy" connector) adapters, about which there has been great debate in mining threads regarding overheating and melting, mainly, it would seem, due to deficiencies in the molded SATA power connectors. The SATA power connector's 12V pins are rated at 54W in total, whereas the PCIe specification allows cards to draw up to 75W from the PCIe slot.

 

The SATA to LP4 adapters will be replaced with the 4-pin Molex to LP4 adapters that are included with Corsair power supplies. These use 22AWG wire, which has an 11A "fusing" capacity, and so should be fine for the maximum 6.25A at 12V that a 75W load would draw. Though the LP4 cables in the ATX specification are only rated for 4A on the 12V rail, the connectors themselves have a higher current rating.

 

The adapters will be supplied by the same power supply that supplies the motherboard, to avoid any possible ground current issues, and one Molex power-supply cable will be used for a maximum of two GPUs to limit the load on these 18AWG cables to a 12.5A maximum.

Posted · Original PosterOP

Removed the NVMe SSD and clipped the excess connectors off the SATA power cable to reduce clutter.

SSD.thumb.jpg.5f33dc24ceb299aa36989782db1fb760.jpg

Installed a 10k thermistor to monitor the heat off the top of the 2080 Super. Finally reversed the plug for the 12V RGB on the 120mm DeepCool Gammaxx L120 AIO, so I now have more RGB bling.

RGB.thumb.jpg.b63c132ffadf79d2533e8d0763aaaa23.jpg

Did a fresh install of Windows 10 Home 64-bit (1903) on the 128GB SATA SSD, then shrunk the partition by 32GB and did a minimal install of Ubuntu 18.04.3 Desktop into the freed-up space.

 

Fixed up lm-sensors, Net-SNMP and Zabbix for monitoring, then installed and configured the BOINC and Folding@Home clients.

 

The System is back up and running with 2 GPUs on F@H and 14 threads running on World Community Grid.

 

The m.2 adapters have cleared Customs at Quebec City and should be delivered on Tuesday.

Posted · Original PosterOP

The adapters arrived today. The cover/Heatsink was removed from the Asus Hyper M.2 v2 card:

HyperM2.thumb.jpg.6dad6802bb40217c24ae2f270101c054.jpg

The support posts were moved to the 2280 position to accommodate the m.2 adapters and a drop of Loctite applied to their threads. The 10cm R43UR adapter was installed for the second GPU:

FirstAdapter.thumb.jpg.c4c8d54602c79cf63b274eb52cc8a2e2.jpg

Then the 15, 25 & 35cm R43UL adapters for the third, fourth and fifth GPUs:

AllAdapter.thumb.jpg.e404d97299d396fb49af1c37e257f467.jpg

Here are the 25cm R43MR and 50cm R43ML adapters for the first and sixth GPUs:

m2Adapters.thumb.jpg.3f2ca6a3122109f7cf8ded9d13a35d46.jpg

The adapters and GPUs were installed in the chassis for a test fit:

TestFitAdapters.thumb.jpg.7bc1bebdbf0beaf5dbd21aa66ea14844.jpg

All but the top two adapters were removed, the RTX 2080 Super and RTX 2070 Super were installed on them, and the system was booted. The PCIe Slot Bifurcation Control was changed from "Auto" to 4/4 in the BIOS.

Both cards were recognized and folding was started to verify the GPUs are working. The RTX 2070 Super resumed the WU it had been working on, and the RTX 2080 Super, which had finished its WU, started a new 14196 WU; it is now 22% through and showing a frame time of 1:33, slightly longer than the recently observed frame times of 1:27 to 1:30 for this WU. (edit: this eventually finished with a TPF of 1:30)

Further testing will be done to measure any performance differences.

Posted · Original PosterOP

AllFolding.thumb.jpg.2eb66eb6754b54729bc98417acf1c19a.jpgIt's alive!

6SlotsFolding.thumb.jpg.f58f54522929528d6566359cf4a5fe12.jpg

Finished the WUs on the RTX 2080 Super and RTX 2070 Super.

Installed all cards. Booted the system up with "pause-on-start" set to "true" in the folding client. With all the slots populated the system enumerated the GPUs thus:

  • GPU0 - Upper m.2 slot - GTX 1070 Ti - PS0 for PCIe
  • GPU1 - Lower m.2 slot - RTX 2080 Super - PS1 for PCIe
  • GPU2 - Upper Hyper M.2 slot - RTX 2060 - PS0 for PCIe
  • GPU3 - 2nd from top Hyper M.2 slot - RTX 2060 Super - PS1 for PCIe
  • GPU4 - 3rd from top Hyper M.2 slot - RTX 2070 - PS1 for PCIe
  • GPU5 - Bottom Hyper M.2 slot - RTX 2070 Super - PS1 for PCIe

Configured GPUs to minimum Power Limits and added Slots in F@H Advanced Control.

Started each slot folding and watched the load on the Power Supplies to ensure neither supply was overloaded:

nvidia-smi -i 0 -pm 1; nvidia-smi -i 0 -pl 90
nvidia-smi -i 1 -pm 1; nvidia-smi -i 1 -pl 125
nvidia-smi -i 2 -pm 1; nvidia-smi -i 2 -pl 125
nvidia-smi -i 3 -pm 1; nvidia-smi -i 3 -pl 125
nvidia-smi -i 4 -pm 1; nvidia-smi -i 4 -pl 105
nvidia-smi -i 5 -pm 1; nvidia-smi -i 5 -pl 125

PS0: 536W; PS1: 390W

Raised power limits to defaults one slot at a time recording the load on the Power Supplies:

nvidia-smi -i 0 -pm 1; nvidia-smi -i 0 -pl 180    PS0: 610W    PS1: 390W
nvidia-smi -i 1 -pm 1; nvidia-smi -i 1 -pl 250    PS0: 630W    PS1: 470W
nvidia-smi -i 2 -pm 1; nvidia-smi -i 2 -pl 190    PS0: 631W    PS1: 470W
nvidia-smi -i 3 -pm 1; nvidia-smi -i 3 -pl 175    PS0: 650W    PS1: 520W
nvidia-smi -i 4 -pm 1; nvidia-smi -i 4 -pl 185    PS0: 660W    PS1: 565W
nvidia-smi -i 5 -pm 1; nvidia-smi -i 5 -pl 215    PS0: 700W    PS1: 660W

PS0: 700W; PS1: 660W
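The repeated nvidia-smi lines above can be generated from a single loop; a sketch (the limits mirror the minimum values used above — review the output, then pipe it to sh to apply):

```shell
#!/bin/sh
# Emit the persistence-mode and power-limit commands for each GPU index.
LIMITS="90 125 125 125 105 125"   # watts, in GPU enumeration order
i=0
for pl in $LIMITS; do
    echo "nvidia-smi -i $i -pm 1; nvidia-smi -i $i -pl $pl"
    i=$((i + 1))
done
```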

 

Added +150 and +60MHz GPU shader overclocks to GPU0 and GPU4 respectively to bring their clocks more in line with the other GPUs at 1920-1950MHz.

 

Several hours later I realized that I was still running 14 threads of World Community Grid (BOINC) and the per-core CPU load was at 1.3. Reduced WCG to 10 threads to leave a dedicated thread per GPU, which dropped the CPU load average to 1.06.

 

The RTX 2060 Super is running at 78C, a little hotter than the other cards, which are operating at 68-74C.

 




Wow, it's amazing. I wish to have a folding rig like yours someday. I ordered a gaming PC with dual GPUs for folding only, but there is no room left for more GPUs in my case. I'm planning to build my own soon.

Posted · Original PosterOP

Reversed the direction of the cooling fans on the chassis from exhaust to intake. GPU temperatures after 15 minutes are 67-71C, so a few degrees cooler.

 

After 24 hours, an analysis of WUs processed on the 2080 Super and 2070 Super shows that the performance is well within one standard deviation of the performance observed for the same WUs when the GPUs were installed in the PCIe3 x16 slots at x8/x8. So far there is no appreciable difference in performance moving from x8 to x4, and the rest of the GPUs are producing at similar averages to those observed previously:

 

GTX 1070 Ti - 895.92kPPD (+150MHz o/c)

RTX 2060 - 1.04MPPD

RTX 2060 S - 1.26MPPD

RTX 2070 - 1.34MPPD (+60MHz o/c)

RTX 2070 S - 1.51MPPD

RTX 2080 S - 1.84MPPD
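The "within one standard deviation" comparison boils down to a mean/σ computation over the per-WU PPD log; a minimal awk sketch (the sample values are made up for illustration, not the actual logged data):

```shell
#!/bin/sh
# Mean and standard deviation of per-WU PPD samples (MPPD).
# The five values below are illustrative only.
printf '%s\n' 1.82 1.86 1.84 1.85 1.83 | awk '
    { sum += $1; sumsq += $1 * $1; n++ }
    END {
        mean = sum / n
        sd = sqrt(sumsq / n - mean * mean)
        printf "mean=%.3f sd=%.3f\n", mean, sd
    }'
```

A new average falling within mean ± sd of the old one is then treated as "no appreciable difference".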

 

Hindsight being 20/20, 850W power supplies would likely have been a better choice for this mix of cards.

 

PS0: Motherboard, All Risers, PCIe connectors on GTX 1070Ti, RTX 2060

PS1: PCIe connectors on the rest of the GPUs.

On 11/11/2019 at 2:57 PM, Gorgon said:

-snip-

The way you are exploiting that ASUS NVMe riser is genius.



Posted · Original PosterOP

Pulled the Ryzen 7 2700X and the 120mm AiO out of the rig and installed a Ryzen 3 1200 and its Wraith Stealth cooler. Bit of a brain fart: I forgot it was only a 4c/4t processor and not a 4c/8t processor. Figured that out after looking at its PPD production and wondering why it seemed about 20% lower than it should have been, and why the GPUs, which had been running at 95-100% utilization, were a little lower and sometimes hitting the mid 70s.

 

Moved the AiO to a system with a Ryzen 2700, replacing its stock cooler to allow it a bit more headroom for BOINC tasks. Installed the 2700X in another system for BOINC with its stock Wraith Prism cooler.

 

Removed the 1200 and installed a 2700 and its Wraith Spire cooler, and we're back to typical production values.

 

What I was attempting to show was that, as long as you have one thread per GPU, even an older processor will work fine in the x570 motherboard, so a Ryzen 5 1400 would have been a better choice, but I don't have one of those at hand.

 

I should comment, however, that if you're considering building a 6 GPU rig I do recommend the x570 over an x470 or earlier, as the x570 chipset provides a PCIe3 x2/x4 m.2 slot whereas the Promontory chipset found in x470 and earlier only provides a PCIe3 x2 m.2 slot, which may work for a 1660 or 1060 but will, I suspect, result in performance degradation for 2060s or higher GPUs.

 

I'm going to wait a couple more days, by which point I'll have about a week's worth of folding done, and then take a look at the yields of the cards to ensure I'm still not seeing a loss in performance.

 

Next I’ll try running the system for a few days under Windows 10 to see what, if any, performance impact is seen.

Posted · Original PosterOP

Moved system to Windows 10, installed the Folding@Home Client and configured the slots.

 

Also installed EVGA Precision X1, which had the side benefit of upgrading the vBIOS on some of the cards. Went with Precision X1 rather than MSI Afterburner as I wanted to see what options were available for controlling the RGB LEDs on the GPUs.

 

Installed the Aorus RGB Fusion software so I could change the motherboard's and CPU cooler's LEDs to blue to match the GPUs. It appears that Gigabyte has removed basic RGB control from the BIOS in the x570 line of products, which is a PITA for Linux.

 

Had an issue where 8192MB of RAM was displayed in the BIOS but the OS only detected 4GB. Had to install both sticks on one channel to get it all recognized. Further troubleshooting will be required to determine whether this is an issue with the memory controller on the CPU, a bad slot on the motherboard, or a problem with the memory timings/terminations/voltages. It was working well under Linux and appears to have become an issue when the CPU was swapped.

 

Configured all slots to fold. Naturally the order in which Windows enumerates the slots differs from the order under Linux, which will make analysis slightly more complicated.

 

PCIe Bifurcation had to be forced to x4/x4 mode in the BIOS for the 4 cards connected to the Asus Hyper M.2 card to be recognized by the OS. Once this was done, however, the cards were recognized and GPU-Z reports all the cards having PCIe3 x4 available to them.

 

Slot0 - GTX 1070Ti

Slot1 - RTX 2060S

Slot2 - RTX 2070S

Slot3 - RTX 2080S

Slot4 - RTX 2060

Slot5 - RTX 2070

 

The 1070 Ti had a +139MHz and the 2070 a +55MHz overclock applied to bump both GPUs into the mid-1900MHz graphics clock range, as was required under Linux.

 

Initial results show 5-5.6MPPD yields under Windows compared to 7-7.7MPPD under Linux, with the higher-end cards suffering disproportionately more, which agrees with results observed by others on the Folding@Home support forums.

 

As under Linux, the cards are operating in the mid 60s to low 70s degrees Celsius.

 

The cards will be run for 4 or 5 days to acquire a broad enough range of results in HfM for analysis.

Posted · Original PosterOP
On 11/12/2019 at 9:25 PM, Gorgon said:

-snip-

Update with observed results after 6 days.

 

Both the 2080 Super and 2070 Super show values within 1 Standard Deviation of previous values comparing folding on the m.2 adapters at PCIe3 x4 compared to PCIe3 x8:

RTX 2080S 1.84MPPD; previously 1.89MPPD

RTX 2070S 1.47MPPD; previously 1.47MPPD

 

The rest of the cards produced values consistent with previous use in PCIe3 x8 under Linux.

RTX 2070 1.31MPPD

RTX 2060S 1.24MPPD

RTX 2060 1.03MPPD

GTX 1070Ti 877kPPD

 

The cards completed between 32 and 42 Work Units each during the test, and the data obtained will be used for comparison once a similar sample size is collected under Windows 10.

 

An EVGA RTX 2070/2080 Hybrid Kit has been ordered and will be used to modify the RTX 2070 Super which will be used along with 4 existing RTX 2070 Super XC Hybrid Cards to test their operation in this application.

AllFolding_BW.thumb.jpg.c4fc529bc39377af7fd3b27a6c2038b1.jpg

Posted · Original PosterOP

Windows performance is still poor, but I'm going to continue until the weekend so I can get a good selection of WU data for comparison. On Tuesday the download bug reappeared and messed up those WUs, with reboots and having to delete and re-add slots.

 

The EVGA Hybrid kit showed up today, so I shut the rig down for an hour or so and installed the kit on the RTX 2080 Super XC. I was going to install it on the 2070 Super XC, but it should make more of an improvement on the 2080 Super.

 

The installation went smoothly. I used the strips covering the thermal pads in the kit to cover the thermal pads on the mid-plate of the removed cooler, then bagged up the cooler and all the extra parts in case I want to reinstall it in the future.

 

The 2080 Super XC was running at 64C on the GPU with stock clocks, and with the hybrid kit I'm seeing 57C, so a little warmer than the 2070 Super Hybrids I'm running, but that's to be expected as it's the same cooling design with more CUDA cores.

Posted · Original PosterOP

1 x RTX 2080 Super XC Hybrid; 4 x RTX 2070 Super XC Hybrids; 1 x RTX 2070 Super XC:

HybridFolding.thumb.jpg.876505900eef18f02f327504e9c38259.jpg

Radiators

Radiators.thumb.jpg.233e71cee80937215cc079b9bc61ea68.jpg

I was always that kid in family photos who was sticking out his tongue or rolling his eyes. Let's call this view Karma:

ThatKid.thumb.jpg.8690ac060da6d273ab4f2e50dd116209.jpg

So I'll have to boot into Windows and load Precision X1 to change the LEDs from green to blue.

Posted · Original PosterOP

The results are in.

 

Going from x8 to x4 under Linux, the 2080 Super showed a 2.48% decrease in Production (PPD) while the 2070 Super showed a 0.34% increase, with both values being within a standard deviation.

We can say with some confidence:

Under Linux a higher-end NVidia GPU would see no significant decrease in Production moving from a PCIe3 x8 to a PCIe3 x4 slot.

 

Comparing Windows to Linux Production on a PCIe3 x4 link we observed:

-29.3%  RTX 2080 Super

-21.6%  RTX 2070 Super

-18.1%  RTX 2070

-23.0%  RTX 2060 Super

-21.1%  RTX 2060

-20.0%  GTX 1070 Ti

 

The RTX 2080 Super's Production appears to suffer the most under Windows on an x4 link, but this may have been compounded by it being connected to the x570 motherboard slot that is wired to the chipset, and hence further limited by the shared PCIe3 x4 link between the Ryzen 7 2700 CPU and the chipset.

 

The general observation is:

Under Windows, a higher-end NVidia GPU on a PCIe3 x4 link will see roughly a 20% or greater decrease in Folding@Home Production in Points per Day (PPD) compared to the same card under Linux.

On 11/22/2019 at 6:57 PM, Gorgon said:

So I'll have to boot into windows and load Precision X1 to change the LEDs from Green to Blue

You know, with all the gaming support that has been happening for Linux lately, I was really hoping that manufacturers would start to catch on and make it so we can have LED control and other supporting software in Linux too. Especially since on graphics cards it's clearly firmware-implemented, given you can change it and it stays that way across the board.

 

Peripherals are a completely different rant that I could go on about too.


 

Posted · Original PosterOP
9 hours ago, palespartan said:

-snip-

Even in Windows, Precision X1 only seems to be able to control the LEDs on the first 2 GPUs installed, and the 2080 Super is the 6th, so I'll either have to shut down the system, rearrange the cards, boot into Windows, change the LEDs and then reverse it all, or just put up with it.

 

I'll likely just leave it, as the Hybrid cards are not really that much quieter than dual-fan cards in the mining frame. With about 4" of space between cards they're not choked for airflow at all, and unlike in a regular case they don't need lots of fans to get adequate heat transfer.

 

The Hybrid cards make more sense in regular cases where you can run the case fans slower with them installed. Granted, you get a 7-10C temperature drop with them, which should equate to 2 or 3 extra bins of frequency, but that's still only a 2-3% performance increase, so the benefit is purely better removal of heat.

Posted · Original PosterOP

Swapped out the Hybrid GPUs for the mix of standard dual-fan GPUs, as the lower running temps of the Hybrid GPUs will be of more benefit in standard cases, and the standard GPUs will run cooler in the mining frame than doubled up in standard cases, so there should be better overall performance in this configuration.

 

So, final conclusions after a few months.

 

Standard dual-fan GPUs would be my preference. While the Hybrid GPUs will natively clock slightly higher, the incremental cost isn't worth it in an open-frame chassis, and the funds saved could be better put towards faster GPUs.

 

In retrospect, 850W or 1000W power supplies might have been a better choice. Dual 750W units did work with 5 x RTX 2070 Super and 1 x RTX 2080 Super, but it was tight juggling the load, and the power supplies were operating closer to 100% rather than in the 40-60% sweet spot of the efficiency curve; the higher % load is likely to wear the units out faster.

 

For my current card mix (RTX 2070A, RTX 2060 Super, RTX 2060, GTX 1070 Ti, GTX 1660 Ti & GTX 1060 6GB) the dual 750W power supplies are better suited, but I'd be happier with 850W units, and for 2080s or above 1000W or greater would be advised, though the price per Watt jumps significantly once you move past the 850W size.

 

All-in-all, the chassis takes up about half the volume of the 3 large cases you'd likely want to run these GPUs in as dual-GPU systems.

 

This approach could be used to gradually build up a system: acquire the chassis, motherboard, RAM, CPU and a single power supply initially, then add GPUs as time and funds permit, eventually adding the second power supply, the NVMe card, and NVMe to PCIe x4 adapters as needed.

 

Managing one system is easier in most cases, but when errors occur, such as the download bug, the system needs to be rebooted more often than 3 separate dual-GPU systems would, with a resultant decrease in production due to the unaffected slots having to roll back to their last checkpoints.

